Monday Morning Moan - welcome to the Unenlightenment! Mr Altman has been waiting for you...

Stuart Lauchlan, March 16, 2026
Summary:
Do you want to live in the real-life episode of dystopian drama Black Mirror that the OpenAI CEO has no problems pitching to the world?

He finally said it out loud - and if you weren’t paying attention, then don’t come complaining to me in a few years’ time about the 'Black Mirror' version of reality that has unfolded.

We all knew that most of the AI enfants terribles fuelled their product capabilities by helping themselves to other people’s content and IP from around the internet - if you’re still in denial about that, I have to ask, ‘How many multi-billion dollar out-of-court settlements is it going to take to convince you?’.

And while many of us probably suspected what the next step in all this would be, even the most cynical - ‘Hi, my name is Stuart and I am an enterprise tech sector commentator!’ - might have wondered if anyone would actually have the nerve to put it into words.

Thank heavens then for Sam Altman - and that’s not a phrase you’ve ever heard me use before (and may never again) - who has just articulated his vision for the future. The man who once blithely tried to convince us that hallucinations (aka lies) are a feature, not a bug, of generative AI and, by implication, that we should embrace them as part of the wider experience, has a new theory he wants to share - and this one’s just as clod-hoppingly awful:

We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.

So...all that information that has been pillaged from around the internet, all that creative thinking that has been assimilated into the Borg Collective - Sam’s big idea is to sell all of that back to you and, in what admittedly would be a novel development for OpenAI, turn a profit on it!

You want your knowledge? It'll cost you...

Now, I’m old enough to remember that OpenAI was founded as a non-profit. It was open source and for everyone - “To benefit humanity”, wasn’t it? Altman himself once said he didn’t want equity in OpenAI because “the mis-aligned incentives would be sub-optimal to the world as a whole”.

Oh how quickly they grow up! Today the source code is no longer open and the firm became decidedly for-profit (in a ‘think of a really, really BIG number, then double it’ very long-term aspirational sort of way) as the bots screen-scraped the World Wide Web for juicy morsels, and the AI models grew bloated on the intellectual capital they gobbled up.

So why would anyone be surprised that Altman is now sat on a stage at a tech event, happily confessing to his dystopian worldview and admitting that he wants free rein to execute on his vision as well? It’s something worth bearing in mind next time OpenAI publishes some more performative platitudes about the importance of regulation.

It was delivered in what I’ve come to think of as a typically Altman-esque manner - a sort of toxic combination of ‘man who just didn’t listen during all those media training sessions’ and ‘someone who is just genuinely detached from the human implications of the theory he’s pitching’:

The demand that we see...seems like it's going to continue to just go like this, and if we don't have enough, we either can't sell it, or the price gets really high and it, you know, kind of goes to rich people, or society makes a bunch of sort of central planning decisions that I think almost always go badly about. You know, ‘We're going to use our limited compute supply for this and not that’. So the best thing to me, throughout all the history of capitalism, innovation, whatever you want, is to just flood the market.

Cutting off the meter

Of course, despite recent abuses, in most parts of the world electricity, gas and water are regulated industries. An electricity provider can set rates and fees and compete for your business, but state intervention can take place, prices can be capped, and the consumer can be protected if the need arises. It might not seem that way when you open your next obscene electricity bill, but there is a framework in place that is intended to defend you from rapacious greed on the part of suppliers.

Altman is arguing for the ‘greed is good’ part without the annoying regulation bit. That said, it’s worth noting that OpenAI, which openly admits it’s set to run at losses of billions of dollars a year until at least 2028 at best, briefly floated the notion that government ought to provide it with a financial backstop in order to shore up the business - an idea that was rapidly walked back! Some forms of government intervention are OK, it seems. And yes, real utility providers have benefitted from state financing over the years, but the price they accept is regulatory oversight.

All of this is before you dig into the question of how you meter intelligence in the first place, and what the societal implications are. There’s a dangerous premise here that AI Armageddon pedlars have been pushing for a long time and it’s one that owes its roots to long before the current hype cycle took flight.

When I was a kid, we weren’t allowed to use pocket calculators in math or arithmetic exams. We had to learn to think and count and do our times-tables. Today, calculators are a standard part of the curriculum and the exam process - you can’t put the genie back in the bottle! - but I was shocked to find that a friend’s teenage daughter was unable to do basic mental arithmetic. Why bother? She has a calculator on her phone, never mind a device dedicated to it. But what happens when she doesn’t have the phone with her?

Another case in point: people used to want to learn foreign languages, either as part of the academic program or simply for the pleasure of being able to speak outside their native tongue. Lots of people still do, I imagine. But lots more won’t bother now, as they can again turn to their phones and get AI to translate for them. And as more and more people take that option, their ignorance and lack of curiosity make them increasingly dependent on a source of truth they have to trust is going to be reliable.

A few years ago now I did a shift in a newsroom to cover for the publication’s news editor. I was stunned at the silence in the room. When I was a boy reporter, the phones were always ringing, in and out, and there was a constant level of noise and interaction. On this day everyone sat quietly staring at their screens. When no stories appeared, I shouted out, ‘What’s going on with content?’. ‘We can’t do anything,’ said one reporter with complete conviction, ‘The internet’s down’. ‘Well, pick up the phones then,’ I said. This was met with looks of utter bafflement and a query of, ‘What do you mean? Who do you want us to phone?’.

I’m a technophile, not a technophobe, by any manner of means. But I’m conscious that tech dependency in its various forms over the decades has been a powerful and highly dangerous addiction, and one that’s all too easily drifted into. And with AI, we’ve moved well beyond the occasional spliff and into being offered the really hard stuff! As Palantir CEO Alex Karp cheerfully noted last week, this tech is “societally dangerous”.

Do we really want the society envisaged by Altman and his peers - and he’s not alone in this, even if he’s the only one gauche enough to come right out with it - wherein you don’t have to think about anything anymore, you just rent knowledge while the meter’s running? That’s assuming you can afford to pay for this information utility, of course. You think digital poverty is a problem now? You ain’t seen nothing yet...

Altman was accused online by one commentator of aiming for ‘cognitive colonialism’, which is a nice turn of phrase. But this is the reality of where we’re at, and this is what the future might shape up to be. He’s got your data; now, if you want it back, here’s what it will cost you. And who’s going to stop him?

I can only wonder, how are those ‘mis-aligned incentives’ you talked about working out for you, Sam?
Image credit - pixabay
