Automation vs Upskilling (in AI)
A common worry is that delegating to AI will make us forget how to think. I believe the opposite, and here's why.
Published: 2026-03-01 by Luca Dellanna
A common worry I hear about AI is that people using it too often will forget how to do their job. “Delegate your thinking to AI, and you will stop thinking,” they say.
I believe otherwise.
This belief stems from my experience, over ten years ago, with “delegating” spelling and grammar to Microsoft autocorrect. It was 2013, I was working for a multinational, and my boss asked me to oversee some projects in Spain, knowing I spoke Spanish. The problem was that I had learned the language by living in Spain: I spoke it fluently, but I had never learned to write it well. My spelling was poor. Thankfully, Microsoft Outlook’s autocorrect kept flagging my mistakes and correcting them. Over time, that taught me how to spell Spanish correctly. Autocorrect didn’t make my Spanish worse by “automating spelling.” It made me better, because it corrected me every day, at the exact moment I made a mistake, until I internalized the right form.
Of course, this doesn’t always work (more on this later), but it led me to believe two things: one useful for individuals, and one useful for companies.
First, I believe that AI has enormous potential to lead to better thinking, especially when the user has skin in the game. A student who wants to get homework done in a subject they consider useless will resist any learning the AI can provide. But when someone asks an AI a question whose answer genuinely matters to them, and the AI replies not just with a one-line conclusion but also with reasoning and context, the person will often learn a lot. I know because, over the last 12 months, AI has taught me a lot by answering my questions: not on topics in which I’m already a top expert, but on everything else.
Of course, this doesn’t happen automatically. The AI tool must be set up to provide feedback instead of just doing the job (or, at the very least, in addition to doing it).
How to do this? I could speak for hours on the subject, but here are some quick tips:
- Use the AI not (just) to get things done, but to check the quality of what gets done. What standards should your output meet? Does it meet them? And if not, why, and what can be done about it?
- If you or your employees use Claude Code, enable the “Explanatory” output style. If you use other tools, add an instruction to the “custom instructions” setting such as: “While helping me complete tasks, provide educational insights about why you’re answering or doing things a certain way, or about things I may have missed.”
- Add a custom instruction telling the AI to also provide feedback on your questions whenever it notices they reveal a blind spot or an imperfect framing of the root problem, and to always accompany its output with an assessment of that output and of what it could have done better.
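For readers who use Claude Code, the output style can also be set once per project instead of per session. Below is a minimal sketch of what that might look like in a project settings file such as `.claude/settings.json`; the exact file location and key name are my assumptions based on Claude Code’s settings conventions, so verify them against the current documentation for your version:

```json
{
  "outputStyle": "Explanatory"
}
```

Alternatively, the style can usually be switched interactively from within a session via the `/output-style` command.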