Thank you @Karen Spinner for the restack! Really appreciate you doing that 🙏🏻
You articulated this really well: The people who collaborate most effectively with AI are not necessarily the most technically sophisticated. They are the ones who already know how to think and explain themselves clearly. (Btw, your "golden retriever" analogy is perfect!)
Your post also made me think about how this connects to the parenting conversation happening right now around AI. Parents are being asked to help children use AI as a thinking aid rather than a thinking replacement, but your article reveals a challenge: effective AI use requires skills that take years to develop. We are perhaps asking children to be good "AI managers" before they have developed the underlying cognitive muscles (problem decomposition, clear articulation, iterative thinking) that make someone good at it.
For some reason I can't edit, but I wanted to send you over to my *much shorter* Risky Rise of the One-Person Team post because I think there might be some food for thought there for you. https://thinkermaker.substack.com/p/the-risky-rise-of-the-one-person
I think even beyond children's skillsets, to your point about "AI managers," team consolidation risks making a lot of the job market inaccessible to people. We need to acknowledge that AI collaboration begins to look a lot like the skillset of someone who leads and manages groups of people through complexity, which, in our current employment paradigm, describes a minority of the workforce.
Thanks, Peter. Let me read it!
Thank you thank you! The golden retriever analogy is one of those beauties that only pop into your head while writing :)
There is a definite "clarity of inputs begets clarity of outputs" dynamic here. And that's such a great point — kids don't have that capacity for clarity yet. I could see AI used deliberately to accelerate building those skills, but behaviorally that approach simply has more friction, so it is less likely to happen.
It leaves me wondering how we might help kids build those skills in a way where AI becomes a rapid feedback loop and testing ground, helping them get there faster.
My dad asked me a related question after reading this — where would you tell someone who is, for example, in high school to start, given that they might not yet have the expertise to ask the right questions or structure their thinking process? For at least that age, I would suggest probing about approaches... a "here is the context, here is what I am trying to do, what are the ways I could approach this and why?"
For kids it is probably similar, just more rudimentary, and maybe at a more atomic level, with things such as "how should I ask good questions?" Definitely needs further thinking, and is a big issue to consider!
https://substack.com/@elliotai/note/c-164487113?r=6jttqk
Context please!
https://open.substack.com/pub/elliotai/p/you-are-not-as-smart-as-you-might?utm_source=share&utm_medium=android&r=6jttqk
Thanks for the context. I’m trying to figure out… are you a person working on the one manifesto project or a bot account that’s an output of it?
Very approachable, especially for non-technical audiences.
That is wonderful to hear! Thank you!! Appreciate you taking the time to read it.
Love the post, man!
I think you really nailed the reality of prompting. It's not a magic bullet. In fact, it requires a lot of back and forth, and setting up the intent upfront, to get the results we want.
And I agree about the future of prompting: it will become conversational rather than engineered.
These days, I find myself optimizing my prompting less and less because I already have a strong context system built into my project.
So I can just ask the AI simple questions and it will return high-accuracy outputs.
Looking forward to seeing the 3rd part!
Thanks so much Wyndo!! Really appreciate that 🙏🏻 Enjoyed pulling it together and had a lot of realizations while writing. So happy it resonated with you. Very cool to hear you’ve experienced prompts turning more into conversation as you have more context behind it.
So much back and forth to get the intent right. And I think approaching it like that makes it so much more widely accessible for people applying their thinking skills.
Looking forward to sharing the next one with you!