Monday Morning Moan - stop saying 'autonomy' before things really get out of control
Summary: When you describe AI as autonomous, you're making a dangerous leap of faith that doesn't reflect the reality of how AI agency actually works.
We're constantly being told that AI is going to change the world — and the sheer pace of improvement suggests it's coming, ready or not.
Job scares. Economic scares. Even democracy scares.
And models are now so powerful, allegedly, that their creators are terrified to even release them.
So it seems like kind of a big deal.
But hang on a minute. The ultimate outcome of deploying these new technologies — whether abundant utopia or dystopian hell-hole — is largely a function of the way in which we understand their purpose and capabilities. That understanding will in turn shape our intent, and therefore the degree to which we decide to re-shape our organizations and societies to accommodate them. So the future in large part depends on the decisions that organizations and governments make about how the technology should be deployed.
At the same time, the dominant spirit behind the Silicon Valley companies at the forefront of this drive is one of ‘move fast and break things’, which can be a useful antidote to inertia and sclerotic practice — but only if we have clarity of purpose and an understanding of the implications of breaking things.
Because otherwise we’re just breaking things.
And right now this feels important because the thing AI companies are proposing to break is society — work, enterprise, social contracts, taxation, government, purpose.
Which means it’s especially important to speak calmly and with precision at this moment in history, because if 'loose lips sink ships' — as the WWII slogan asserted — then we should be pretty careful about keeping those lips tight when the ships in question are the fundamental pillars of democratic societies.
As Wittgenstein (sort of) said — our words shape the boundaries of our world, effectively constraining our understanding of what is possible or even desirable. And those boundaries in turn narrow our intent — and ultimately the possible decisions and actions we feel are open to us.
And there’s one crucial word we’ve really lost control of in the AI debate — ‘autonomy’.
Autonomy vs agency
At the root of this loss of control is the fact that too many people are throwing the word autonomy around when they really mean agency. And honestly, I've done it myself, because autonomy sounds cooler — carrying connotations of purpose, self-direction and exploration.
But that, right there, is the problem.
Because if you strip the terms back to their philosophical core, the difference between agency and autonomy is not a matter of degree — it is a fundamentally different state of existence.
Agency is about working. Autonomy is about wanting.
And while that distinction might sound trite, it cuts very deep.
An agent can take action — but only within a frame defined by something else. So even when an agent appears sophisticated — adapting, planning, even ‘choosing’ between options — it is still operating within a delegated space of action. It has agency within the tight bounds of its context — but no ability to choose that context.
Conversely, an autonomous entity is one which defines its own goals. It determines what matters and generates its own reasons for acting. Philosophically, autonomy is the capacity to decide whether and why it makes sense to act at all. Which is why autonomy is historically tied to living beings with needs — survival, reproduction, homeostasis — that create intrinsic stakes.
Autonomy, in other words, is inseparable from having something to lose.
And, however you frame it, LLMs — and the agents they power — want nothing and have nothing to lose.
Which means that they can only ever be agents.
Autonomy as memetic shortcut
The issue with this philosophical mis-categorization isn’t just that AI can never be autonomous — or that people are at risk of some minor practical misunderstanding of its capabilities.
The risk is more systemic — that autonomy has become a kind of memetic shortcut in the AI debate. One that makes people think of perfect machines free of the need for messy human oversight. A vibe that offers a mental shortcut to capabilities AI will never be able to offer — and in doing so shapes inappropriate strategies.
Because the very word ‘autonomy’ encourages people to jump straight to an end point based purely on ‘feels’.
Instead of starting from a realistic model of AI as bounded agency, talk of autonomy encourages a feeling that the future — whether we want it or not — is autonomous AI. Collapsing multiple ideas — independence, self-direction, no humans — into a single vibe. One with a strong implication:
‘We must remove people to remain competitive.’
Because while agency suggests borrowed legitimacy and keeps humans in the frame, autonomy suggests independence.
And in doing so, it smuggles conclusions into our lizard brains before we've even had a chance to reason about them — because if AI is autonomous then it can replace workers. The result is an unconscious gravitational pull towards fewer humans, less oversight, and fully automated operation.
Even though AI can never fulfil that promise — and shaping the future as if it can will break things. Badly.
Agency > autonomy
This is why it is so critical to be precise in our language.
To be careful. To avoid framings that lead to potentially catastrophic outcomes through shortcuts and lazy mental models. To challenge people who drift into using the word autonomy.
Because loose language doesn't just distort understanding — it pre-packages conclusions that shape and narrow intent, ultimately leading to decisions that affect real companies, real people, and real societies.
Decisions about how we structure work, how we design systems, where accountability lies, how far we go in removing humans — none of these are just efficiency details. They are the foundation of how we innovate, control and govern.
And so without addressing these points we risk ‘breaking things’ at the most fundamental levels — not because the technology requires it, but because our understanding of it has been distorted from the outset by loose — and sometimes deliberately misleading — terminology.
It’s true that today’s AI systems exhibit increasingly high agency — they plan, act and coordinate. But they do not possess autonomy in the philosophical sense, because they do not generate their own ends or have intrinsic stakes. They are therefore powerful tools — but tools that need to operate inside human-defined frames.
Because while an agent can answer the question ‘What should I do next?’, only a human — an autonomous entity — can answer the question ‘What is worth doing at all?’ Purpose can never be the domain of an agent.
Purpose cannot move to the machine. Judgement cannot move. Accountability cannot move.
Which means the backbone of our systems will always be people deciding what is worth doing — and we shouldn’t pretend otherwise by accepting vendor-driven narratives about autonomy that push us into bad decisions.
Especially from the ones who want to move fast and break things.