The dominant anxiety about AI is that it makes human thinking less valuable. The evidence points the other way. AI has raised the floor for almost everyone, but the ceiling rises only for the people who engage actively with it. What determines whether you are one of them has nothing to do with technical skill.
This is an argument about habit, not capability, and it is, on balance, an optimistic one.
The floor and the ceiling
AI has made competent output universally accessible. Tasks that previously required skill, time, or specialist knowledge, including first drafts, summaries, analysis, and code, can now be completed to an acceptable standard with minimal effort. The gap between someone who can do something and someone who cannot has narrowed significantly, and in some areas it has effectively closed.
The ceiling is a different matter. The quality of work that can be produced with AI has risen too, but only for the people doing the engaging. The gap between active and passive use is widening as the tools become more capable. Most commentary focuses on the floor and the anxiety it produces. The more interesting question is what determines the ceiling.
What active engagement actually looks like
There is a tendency to frame this as a technical skill: learning the right prompting techniques, understanding how models work, knowing which tool to use for which task. That framing misses the point.
Active engagement with AI looks almost identical to active engagement with any other thinking process. It means arriving with a genuine question rather than a vague instruction, reading the output critically, noticing where it has gone generic or missed the point, and pushing back when something sounds right but does not quite fit. It means bringing your own knowledge and experience into the conversation rather than waiting to receive something complete.
The people who get the most from AI are the ones who treat it as a thinking partner. The distinction is behavioural, and it is available to anyone willing to develop the habit.
The complacency risk
The output AI produces by default is confident, coherent, and often quite good. That is precisely what makes passive use so easy to fall into. The first draft reads well. The analysis sounds plausible. The summary covers the main points. There is no obvious signal that something is missing, because what is missing is not an error. It is the judgement, the specificity, the accumulated knowledge that would have made the output genuinely yours.
AI complacency is not laziness in the conventional sense. It is the absence of critical engagement with output that does not obviously demand it. The work looks done while something important is absent.
The test is straightforward: could you defend every decision in the output, does it reflect how you actually think about the problem, and would someone who knows your work recognise it as yours? If the honest answer is no, the tool has been doing the thinking rather than supporting it.
There is a compounding factor worth naming. AI models are trained to produce fluent, confident output, and they do so consistently regardless of whether the underlying answer is complete, current, or correct. The absence of hedging is not a signal of accuracy. It is a feature of how the output is generated. An engaged reader brings scepticism to confident-sounding claims and tests them. A passive reader accepts them, and may not discover the gap until it matters.
The navigation problem
GPS did not make good navigators more valuable. It made poor ones invisible, right up until the signal dropped or the route led somewhere unexpected. The skill atrophied quietly because it was never needed. Until it was.
AI is doing something similar at a considerably higher level of abstraction. The person who never develops the habit of critical engagement, who accepts the first output and delegates rather than directs, is building a dependency that will not be visible in normal conditions. When a situation that actually matters arrives, whether a high-stakes decision, a nuanced client problem, or a moment that requires genuine judgement, the difference between a tool that supports your thinking and one that has replaced it becomes apparent.
The skill that atrophies is not technical. It is the habit of thinking hard about a problem before asking for help with it, the ability to evaluate whether an answer is actually good rather than merely coherent, and the confidence to push back when something does not feel right. These are thinking skills, and they compound in value as AI handles more of the execution.
The critical thinking gap
There is a structural issue underneath all of this that organisations are not yet taking seriously enough.
Critical thinking is a skill that develops through practice: through being wrong, through having your reasoning challenged, through working out why something that sounds plausible is actually incomplete. It requires the habit of asking whether something is true, not just whether it sounds right. That habit is built over time, through environments that reward it.
The generation entering the workforce now has grown up in an environment optimised for the opposite. Content designed for passive consumption, algorithms that do the filtering, attention measured in seconds. None of that is a character flaw. It is an environmental outcome. It means that the critical thinking muscle, for a significant proportion of the workforce, is less developed than in previous generations, at precisely the moment when AI makes that muscle more important.
AI models produce output that reflects the aggregated thinking of their training data. When people accept that output uncritically and it feeds back into the next generation of training, the loop tightens. Thinking converges. Output becomes more uniform, more averaged, more competent in a generic sense and less distinctive in any specific one. Mediocrity does not just persist in that scenario. It compounds.
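The shape of that loop can be sketched with a toy simulation. This is purely illustrative, not a model of any real training pipeline: the function name, the sample sizes, and the assumed mild pull toward typical output each round are all invented for the sketch. Under those assumptions, the spread of what gets produced shrinks generation by generation.

```python
# Toy illustration of the feedback loop described above (not a real
# training pipeline). Each "generation" is fitted to unfiltered samples
# of the previous generation's output; uncritical acceptance means no
# filtering. The "pull" factor is an assumption standing in for the
# tendency to regress toward typical output each round.
import random
import statistics

def next_generation(outputs, sample_size=200, pull=0.9):
    """Fit a new 'model' (mean, spread) to samples of the previous
    generation's output, then generate from it."""
    sample = [random.choice(outputs) for _ in range(sample_size)]
    mu = statistics.mean(sample)
    sigma = statistics.stdev(sample)
    return [random.gauss(mu, sigma * pull) for _ in range(sample_size)]

# Start from a diverse baseline of "human" output.
outputs = [random.gauss(0.0, 1.0) for _ in range(200)]
for gen in range(1, 6):
    outputs = next_generation(outputs)
    print(f"generation {gen}: spread = {statistics.stdev(outputs):.3f}")
# The spread falls each generation: output grows more uniform,
# more averaged, less distinctive.
```

The toy makes the point compactly: nothing in the loop is an error, yet the diversity of the output still collapses unless someone is filtering critically at each step.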
Organisations that invest in critical thinking capacity now are building something more durable than an AI strategy. They are building the human infrastructure that determines whether their AI investment produces distinctive output or averaged output. That means hiring for it, training for it, and creating environments where people are expected to challenge, question, and push back rather than accept and move on. The tools are widely available. The capacity to use them well is not, and the gap is widening.
What this means for your organisation
The 'so what' differs depending on where you sit, and it is worth being specific.
For executives: competitive advantage is no longer access to information or even access to AI. It is the quality of judgement brought to AI-assisted decisions. The organisations that pull ahead will be the ones where senior people are actively engaged with AI output rather than simply receiving it, and that starts with the behaviour leaders model.
For managers: the teams that perform best with AI will not be the ones with the most tools. They will be the ones with the clearest standards for what good output looks like and the discipline to apply those standards consistently. Building that culture is a management challenge, and the conversation about standards needs to happen before the tools are deployed rather than after.
For individual contributors: AI raises the floor for everyone, which means the things that differentiate your work are increasingly the things AI cannot replicate. Your specific knowledge, your relationships, your judgement about what actually matters in a given situation. Those are worth developing deliberately.
For organisations deploying AI at scale: most implementation challenges are engagement failures rather than technology failures. Organisations that deploy AI without building the habits and standards that make active engagement possible across a team will see lower returns than the tools are capable of delivering. The return on AI investment is directly proportional to the quality of human engagement with it, which makes this a people and culture question before it is a technology one.
The thread running through all four is the same. Active engagement is what keeps work distinctively human, and it is a habit anyone can build regardless of technical background or seniority.
What this looks like in practice
Treat AI output as a starting point. Bring what you know into the conversation rather than waiting to see what comes back. When something reads well but does not quite fit, say so and say why. When the output is generic, ask for something more specific. When the answer is technically correct but misses the point, push until it does not.
The quality of what you get back is a function of what you put in, and what you put in is not just the prompt. It is the knowledge, the standards, and the judgement you bring to the exchange: the things that cannot be prompted, and the things that determine whether AI makes you generically faster or specifically better.
The executives, managers, and teams building that habit now are the ones whose work will compound over time. In an AI world, thinking harder is the most productive investment available.