AI and cognitive delegation creep
What happens when a helpful tool becomes a habit and then becomes a default? Is it benign? A recommender system picks an article, your calendar proposes a time, your phone suggests a reply… and now your AI chatbot drafts the email, reviews a manuscript or provides relationship advice. Sounds wonderful. Fewer micro-decisions. Less mental clutter.
AI can free up cognitive bandwidth for deeper work, reduce burnout, catch errors, speed up synthesis and make complex systems more legible. Used deliberately, it can arguably extend human cognition rather than replace it.
But brains are bargain hunters. If there’s a cheaper way to do something, they’ll take it. That’s why we love developing heuristics.
All of a sudden – as in, within the last couple of years – the ‘something’ has become more than just admin. It’s now judgment. It’s priority-setting. It’s the act of choosing what happens next, often repeated hundreds of times a week. When those neural press-ups get outsourced, you may not feel weaker… immediately, at least. But you may notice you reach for help sooner. That’s a kind of creep – a cognitive delegation creep.
Psychologically, the short-term win is real. After all, decision fatigue is a well-documented phenomenon.
Offloading can be smart. Humans routinely use external tools as aids for memory and thinking, and research on cognitive offloading shows that we strategically shift effort onto the environment when it’s efficient.
But there’s a trade. If we never practise holding information in mind, planning, strategising, navigating, composing and evaluating, there’s a real concern that our confidence in those abilities (and the abilities themselves) may soften. We’re not necessarily losing intelligence overnight… Hopefully. But our sense of agency starts to become conditional… ‘I can do this… but only if the tool is with me’.
Will we then see a loop? Less deciding → less confidence in our judgment → more delegation → even less deciding.
There’s a well-known cousin of this in human factors research. It’s called automation bias. When systems make suggestions, people tend to overtrust them in subtle ways, accepting recommendations, overlooking errors and deferring when they shouldn’t. The systems optimise for engagement and convenience rather than our long-term autonomy. And humans – being the wonderfully efficient primates we are – often confuse ease with good.
There’s also a tension in how people relate to algorithms. We sometimes resist them (algorithm aversion), sometimes prefer them (algorithm appreciation), depending on context and perceived stakes.
In other words, our relationship with AI advice is malleable. Which is exactly why it can drift without us noticing.
So, what does ‘cognitive delegation creep’ look like in daily life? I suspect it’s rather mundane… We stop choosing what to read; we read what appears. We stop deciding what matters today; we do what gets surfaced. We stop planning; we accept suggestions. We stop wrestling with uncertainty; we ask for ‘the best answer’ and move on. We may feel calmer, briefly. After a while, we may (or may not) feel flatter. But we have less ownership. Less authorship. Less ‘I did this’.
And isn’t there a deeper risk here – that AI makes us passive?
When micro-decisions disappear, so do micro-moments of self-direction. Indeed, one could argue that a lifetime of agency is a gazillion tiny votes for ‘I choose’. If enough of those votes are removed, we may have oodles of efficient people, but ones who are oddly absent from their own days – from their decisions, judgments, relationships and sense of self.
I’ve started to sense notes of this in academia and other areas. And AI is projected to become more advanced, more embedded and more ambient.
What’s the antidote?
We shouldn’t romanticise struggle, but perhaps we should reintroduce friction with intent. Treat judgment like fitness. We don’t need to carry a fridge up a mountain to stay strong, but we do need to use the muscles. Perhaps the time will come, in the not-too-distant future, when we need to pick a few domains where we stay in ‘manual mode’ more often. Choose what you read for X minutes a day; make a weekly plan without prompts; decide a priority before opening your feed…
Or might we eventually need embodied intelligence/deep agency/resilience-enhancing retreats, shaped by the rise of AI? In other words, intentional spaces and activities where we safeguard the human capacities that matter most – relationships, creativity and critical thinking – before they atrophy through disuse?
Perhaps when you do next delegate, ask one quick question… ‘Am I outsourcing labour… or am I outsourcing agency?’
For more on this interconnected thinking, check out my books: www.jakemrobinson.com/books