What the history of money - and its rise - could tell us about AI's future
Summary: AI, like money before it, is fundamentally a coordination technology that is reorganizing society. But unlike money, which evolved gradually with institutional safeguards built through centuries of struggle, AI is being deployed rapidly by a handful of private companies before democratic oversight can catch up.
I've been listening to a podcast this week featuring economist and historian David McWilliams on the history of money, and it got me thinking about the parallels we could draw between the rise of money and the potential impact of AI. Bear with me. We often don’t think of it this way, but McWilliams argues that money is fundamentally a technology - a clever piece of tech developed by humans that facilitates knowledge sharing, cooperation, and innovation. It's a mutual agreement between people that has arguably done a lot to replace war and conflict as the means of capturing value.
McWilliams argues that money reorganized human society by creating a shared system of trust that let strangers cooperate without violence. Before, if you wanted someone’s land or stuff, you had to take it by force. Money changed that. It separated work from family, enabled markets to function across distance, and ultimately made possible everything from nation-states to modern democracy.
Listening to the podcast, I couldn’t help but think that there’s a similarity here with AI. Why? AI is also a technology that facilitates transactions. I started to wonder, given that AI is about automating the allocation of resources, whether we are at the beginning of another ‘coordination revolution’. If my hunch is right, I’m not sure that most of us - including our governments - have fully grasped what's at stake.
We often think of AI as just a productivity tool or simply an automation technology. But it’s arguable that AI could become a fundamental transaction layer through which humans coordinate with each other across huge swathes of ‘functioning society’.
It’s worth noting that money evolved organically over millennia, with institutions adapting slowly through crisis and political struggle. AI, by contrast, is being built by a handful of companies within a few years and deployed at global scale before democratic institutions and societies can respond. We have seen regulatory interventions, such as the EU AI Act, but I’m not convinced that those regulations will keep up with AI development. And unlike money, which no single entity controlled, AI coordination infrastructure is being built and controlled by private companies before we've even agreed on what public oversight should look like.
Whilst companies grapple with how to govern AI internally, I think there’s a bigger governance crisis at play - a societal and democratic one. Governments, in particular, should look to history to understand what’s potentially unfolding before us.
How money changed everything
As I’ve already noted, and as I learned during the podcast, before money became widespread, coordination happened through family, hierarchy, and direct exchange. You worked the land you were born to. You owed obligations to your lord or your clan. Trust was personal and local. Economic activity was limited by the number of people you could know and track obligations with.
Money changed this by creating a new way that we could exchange goods and services somewhat separate from our personal relationships (although, of course, personal relationships still play a role). Suddenly you could trade with strangers, accumulate value across time, and compare and place value on things that weren’t like-for-like. Money - a seemingly simple innovation - changed almost every area of human life.
Work became wage labor. Suddenly our time was valuable. We sold our time rather than being bound to land or kin. The employment relationship as we know it, with its contractual obligations between employer and worker, is a product of monetary exchange. It separated "work" from "life".
Governments also shifted from feudal obligations to centralized nation-states. Why? Because money created something you could tax. You need money to pay taxes, so you need wages, so you tolerate state authority. This led to armies and welfare states. The modern state is fundamentally a result of monetary taxation.
Money also impacted family structures. Pre-money societies relied on families for economic security - you cared for elderly parents because that's what children did, backed by social obligation and a lack of alternatives. Money, by contrast, enabled the nuclear family. You could earn wages, buy services, and your obligations became financial.
Arguably, and as McWilliams lays out, money even democratized political power. Money could, in theory at least, be accumulated by anyone, unlike hereditary titles or land ownership. The middle class emerged. This resulted in ‘markets’ - and markets created new forms of social mobility. Workers who could withhold their labor had bargaining power. Unions were created. Democratic politics emerged alongside market economies partly because workers were necessary, and therefore politically significant. We, as the people, had some power.
However, as we know now, capital accumulated and wealth gaps emerged. Social relationships became more isolated, and many people now feel that ‘the meaning of life’ is about work and money. Most importantly, it took centuries of conflict - revolutions, labor movements, world wars, economic collapses - to build the institutions that made money-based coordination work reasonably well: contract law, central banking, labor protections, consumer safeguards.
We didn't get those institutions overnight, nor did we understand that we needed them before money was introduced. We got them through multiple crises.
AI as the new coordination layer
I think it’s worth considering how AI could follow a similar pattern, albeit compressed into years rather than centuries or millennia. Although we are impressed by the generative capabilities of ChatGPT and its peers, at its core AI is a coordination technology. It allows complex resource allocation without human deliberation. It can match buyers to sellers, workers to tasks, and information to needs, at a scale that would be impossible through human coordination.
What I’m wondering, and what concerns me, is whether AI isn't just working alongside money as an economic tool but becoming the layer beneath money - the layer that determines who can access financial services, who gets hired, who receives benefits, whose content gets seen, which businesses get found, and whose reputation enables transactions.
Money still exists, but increasingly it flows through channels that AI controls and according to rules that AI enforces. Google search results and credit scores are the prime examples (not to mention dating profiles!).
We are already seeing platforms use AI to match tasks to workers in real-time, to monitor performance, to set dynamic pricing for labor. We already have Uber, Upwork, delivery apps, content moderation farms. The attention economy on social media is a direct result of algorithmic resource allocation. And the behavior that follows is a direct reaction to what algorithms favor. Work is becoming less about standardized employment and more about algorithmically-mediated tasks. Money changes hands, but AI determines the terms.
Governments are also starting to rely on AI for core functions that used to be decided by humans within the money-based system - benefits eligibility, fraud detection, criminal justice risk assessment, border control. Authoritarian states, like China, have already built algorithmic governance into the core of their societies. In democracies it's messier, but we are heading the same way.
Political power is shifting toward whoever controls the coordination infrastructure (AI companies). If a handful of companies control compute, cloud platforms, foundational models, search, social connection, and marketplaces, they have power that's starting to rival nation-states. Not through violence or land ownership, but through control of the means of coordination itself (as nation states did with money).
Are we heading for digital feudalism?
If governments continue treating AI as just another technology to be lightly regulated through existing frameworks, I think we could end up with something that looks like digital feudalism. This feudalism would be mediated by algorithmic systems rather than hereditary lines and aristocracy. Pre-money systems were status, land and family-based. Money disrupted that by enabling mobility, contracts with strangers, and merit-based exchange - at least in theory.
It probably sounds a bit hyperbolic, but AI could, I think, recreate feudal dynamics in new forms. Instead of being bound to land, you're bound to platforms. Instead of owing obligations to lords, you're dependent on algorithmic systems for access to work, credit, services, and social connection. Instead of hereditary status, you have algorithmic scores and reputations that determine your access to opportunities - scores you can't see, can't contest, and can't escape.
The "Lords of Infrastructure" already exist. A few companies control the platforms where economic activity happens, the cloud compute that runs modern business, the AI models that make decisions, and increasingly the interfaces through which we access services. You don’t have a lot of power with these systems - you comply with their terms of service or you exit, and exit costs can be large. Enterprises don’t switch from one service provider to another easily, for example. And even at the consumer level, switching away from Google or from Instagram has consequences personally.
We are arguably seeing private law emerge through platform policy rather than democratic legislation. Decisions that profoundly affect your life - creditworthiness, job prospects, benefit eligibility - are made by black-box algorithms with no meaningful right to explanation or appeal. Money still matters, but AI determines who gets access to it and on what terms.
The class structure that emerges isn't simply rich versus poor. It's those with technical capability, resources, and the ability to switch providers versus those who are dependent on platforms and don’t have any leverage (arguably, the majority of us). The first group can audit what's being done to them, switch to new services, negotiate terms, afford recourse when wronged. The second group - a new digital poverty line - cannot. They're subject to algorithmic control without the protections of either competitive markets or democratic oversight. ‘Trusting the markets to correct themselves’ becomes meaningless.
We could also see algorithms simply exclude people. Too expensive to serve, too risky to include, not worth optimizing for. A chilling thought.
What governments could do
One option is to wait for a crisis to emerge and hope that governments and institutions respond quickly enough to shift the balance of power back towards the majority. That’s risky, and it would be better to proactively learn from the history of money. The coordination infrastructure is being built at speed - with billions and billions of dollars ploughed into it - and every day that passes without democratic oversight makes it harder for governments to intervene later. As such, there are a few things governments could and should be considering:
1. Treat AI as critical infrastructure
Governments should recognize that certain AI systems - those that make decisions about credit, employment, benefits eligibility, content distribution, and identity verification at scale - are not just private services. They're the infrastructure through which modern society functions. Just as we regulate banking, telecommunications, and utilities because they're too important to leave entirely to market forces, we need critical AI infrastructure regulation.
We should require registration of AI systems, transparency about what they optimize for, independent auditing, and, most importantly, the right to human review and override for decisions that materially affect people's lives, livelihoods, or rights. Transparency is essential.
2. Build public options
It’s arguable that governments, if we still want effective governments, need to provide some services so that citizens aren't entirely dependent on private platforms. The UK is an example of a nation that has recognized this in the past (British Telecom, Royal Mail, the NHS). In an AI world, these could include:
- Public digital identity systems that don't track and monetize every interaction
- Basic credit assessment infrastructure that's transparent and contestable
- Open-source models
- Interoperability standards that prevent platform lock-in
This isn't about government replacing private sector innovation. It's about ensuring citizens have alternatives. Government hasn’t yet caught up with the fact that its core ‘public services’ need to look radically different.
3. Establish algorithmic due process as a fundamental right
Every person subject to automated decisions should have the right to: notice that an algorithm was involved, a meaningful explanation of the decision, and the ability to appeal to a human authority with power to override the system. This can't be voluntary corporate policy - it should be legally enforceable.
We should extend the due-process principles that already exist in criminal justice, administrative law, and employment to automated decision-making. “You agreed to this in the terms of service” simply isn’t a fair answer in a world of automated decisions.
4. Mandate interoperability
The ability to leave a platform is meaningless if switching costs are too high. Governments should require that users can move their identity, reputation data, and social connections across platforms. This was what happened with telecommunications and financial services regulation - preventing any single provider from holding customers hostage.
For AI systems, this means requiring open standards for data exchange, limiting artificial barriers to switching, and potentially breaking up vertical integration that locks coordination infrastructure to specific providers (you wouldn’t want a bank that’s a monopoly!).
5. Create liability for AI harm
I can’t believe I’m arguing for more insurance, but if an AI system wrongly denies someone credit, misdiagnoses a medical condition, or incorrectly flags someone as fraudulent with huge consequences, someone should be liable. Currently, terms of service often disclaim all liability, and affected individuals have no recourse. The onus is entirely on the citizen/consumer.
Governments should establish liability for algorithmic errors, require AI system operators to carry insurance, and create streamlined compensation mechanisms for those harmed. Economic incentives are useful - if getting it wrong is costly, accuracy improves.
6. Invest in technical capacity
None of this works if governments lack the technical expertise to understand, audit, and regulate AI systems. This requires massive investment in:
- Technical staff
- Public research institutions that can develop oversight methodologies
- Cross-border coordination on AI governance standards
- Education systems that produce the next generation of democratically accountable technical experts
There’s actually an opportunity here too. If governments invest in this - and pay people properly - they can lead in this field. The alternative is a world where the only people who understand the systems work for the companies building them.
7. Rethink antitrust
Traditional antitrust focuses on market share and consumer prices. But AI concentration creates power through coordination control, not just market dominance. Market power looks fundamentally different. A company can have relatively small revenue but enormous power if everyone must use their identity verification, their matching algorithms, or their decision infrastructure.
Competition law needs to ask different questions. Who has the power to set terms that others must accept? Who controls infrastructure that has no meaningful alternatives? Who can exclude people from economic participation through algorithmic decisions? These questions of coordination power are more fundamental than traditional market metrics.
Like money, this matters
McWilliams' main point is that money is a coordination technology. Money was transformative because it allowed strangers to cooperate and made complex economic activity scalable. But money still required human judgment, human negotiation, and human institutions to work in practice.
We aren’t quite there yet in most areas of life and business, but AI is arguably set to automate coordination itself. It has the potential to automate the decisions about who gets access, on what terms, under what conditions. It's not replacing money - it's becoming the layer beneath money that determines how monetary systems work, who participates in them, and what the rules are.
If money reorganized society by creating a shared system of trust, AI could reorganize it again by centralizing trust in algorithmic systems controlled by private actors. The difference in power concentration should not be underestimated.
And unlike money's gradual evolution, this is happening fast. Everyone can feel that. It’s happening fast enough that we could be locked into governance structures - or their absence - before democratic institutions catch up. Every day that passes with AI systems making decisions without public oversight, without due process, and without accountability makes it harder to establish those norms later.
My take
We're not debating whether AI will be powerful or whether it will change how we coordinate. The answers to those questions are obvious. We're debating whether that power will be governed, checked, and accountable - or whether it becomes the de facto governance infrastructure with no meaningful way for people to object to it.
Money's history shows that coordination technologies don't govern themselves. It took revolutions, world wars, depressions, and centuries of labor struggle to build the institutions that made monetary coordination work in a way that was acceptable to the majority of us: central banks, labor law, consumer protection, antitrust regulation. We got none of these through foresight - we got them through crisis.
I am arguing that we can choose to learn from that history. We can build democratic institutions for AI coordination now, proactively, before power consolidates in ways that become impossible to challenge. Or we can wait for crises to force action. I’d argue the former is better. It’s no coincidence that we are seeing a rise in populism: I think people are reacting to a system that they don’t yet understand or know how to articulate (myself included).
Treating AI as just another technology to be lightly regulated through existing frameworks is choosing to let private coordination infrastructure become the governance layer for modern society. Nation-states become weaker and a select few companies set the rules. At the moment, there is enough power in nation-states to balance this out, but that can't be taken for granted.