Monday Morning Moan - madness, Molotovs, and Mordor. Is one AI ‘Ring to rule them all’ really worth all this?
- Summary:
As the AI world continues to build its own Middle Earth, are we going to leave it to ‘Samwise Altman’ to suggest what we really ought to do with the ‘Ring to rule them all’ everyone’s after?
No-one should have the ring!
Might that be the most important thing anyone has said about AI this year to date?
Ok, it’s a hoary Lord of the Rings metaphor, but look beyond the cliché and consider the sentiment in relation to that other sought-after magical totem, Artificial General Intelligence (AGI), always famously five minutes, five weeks, five months, or five years around the corner, depending on which AI-hungry Gandalf is chasing attention at any given time.
Despite their Tolkien-ite tone, those five opening words didn’t come from Middle Earth’s hobbit Samwise Gamgee, but from another Sam altogether - OpenAI’s Sam Altman, a man who, to say the least, hasn’t had a good few days. Now, I’ve got my doubts about Altman, his execution of his role, and his perception of his responsibilities in the AI sector, but he doesn’t deserve what happened to him late last week in any way, shape or form.
According to police reports, Altman’s San Francisco home, which he shares with his husband and young son, was the subject of an attack in the early hours of the morning by a man throwing a Molotov cocktail. Fortunately no-one was injured. Some hours later the same individual was arrested after threatening to burn down OpenAI’s HQ.
The attack was confirmed by Altman himself on Saturday in an emotional blogpost in which he posted a picture of his husband and child, stating:
Images have power, I hope. Normally we try to be pretty private, but in this case I am sharing a photo in the hopes that it might dissuade the next person from throwing a Molotov cocktail at our house, no matter what they think about me. The first person did it last night, at 3:45 am in the morning. Thankfully it bounced off the house and no-one got hurt.
At which point, let me add diginomica’s voice to that last sentiment. While this is now clearly an active police matter and therefore we need to be careful what we say, let’s just make our stance irrevocably clear here - if anyone reading this thinks that such actions are in any way acceptable or justified, please find another website to visit. You’re not wanted here.
Hellsite
But of course there have been plenty of voices raised online to accuse Altman of playing the victim here. The briefest of trawls on X - about all I could stomach, frankly - made that clear. Alongside the predictable disgustingly homophobic and anti-semitic filth that apparently counts as free speech when exercised by the pond life that infests Elon Musk’s hellsite of a bigots’ echo chamber these days, were many other negative reactions, including:
Did this actually happen or did you pay someone to distract from the article proving you are a sociopath? [NB - yes, it did happen! Shall we move on to questioning the moon landings next?]
Or:
Oh congratulations, Sam. You finally got your “victim moment.” Molotov. Threatening letters. Husband and kid on camera. A blog post with family photos already queued up. What a perfect little stage. Lighting, emotion, music — all f*cking on point. You got attacked? In the same week you're getting dragged by the whole country, investigated by a state AG, sued by Musk, and sh*t on by your own users? That timeline is cleaner than a Hollywood script.
Meanwhile others were more concise in their opinions:
Scam Altman.
Unforced error
Now, Altman did make an unforced error in his blogpost that he came to regret, when he said:
There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me. I brushed it aside. Now I am awake in the middle of the night and pissed, and thinking that I have under-estimated the power of words and narratives.
That “incendiary article” was The New Yorker exposé by Pulitzer Prize winner Ronan Farrow, the man whose investigative journalism brought down Harvey Weinstein and helped trigger the ‘Me Too’ movement. His treatise last week on Altman is an excoriating read that does not present the OpenAI boss in a favorable light, to say the least!
Altman certainly can’t be expected to have enjoyed The New Yorker piece, but to link, even by implication, “incendiary” news copy to an incendiary attack on him and his family is stretching a point and undermining his own position here. To his credit, he appears to have realised this and later said:
That was a bad word choice and I wish I hadn't used it. It has been a tough day and I am not thinking the most clearly that I ever have.
OK, fair enough. I can’t imagine any of us would be fully on top of our game if the same sort of thing happened to us. But the bigger problem for Altman when it comes to the article is that it simply isn’t something that can be dismissed as a ‘gotcha’ hit piece or - heaven help us! - that old stand-by when confronted with something you don’t care for, Fake News!
It’s 18 months of hard work, based on hundreds of interviews, internal and external memos, HR documents, Slack messages, and private notes. There’s no way that The New Yorker’s legal team let that piece out into the wild without having gone over everything with the finest of fine-tooth combs. This wasn’t just quickly run through ChatGPT for checking, that’s for certain!
The view from the shire
But that aside, Altman proceeds to use the majority of his blog seemingly to dive deeper into his worldview, his hopes, his fears, yada yada yada. Now, some of this comes across as cloyingly sickly and banal in the extreme, with echoes of 1970s beauty pageants as he expresses his ambition to be “working towards prosperity for everyone”. Yeah, yeah, yeah - that’s why OpenAI went from being a not-for-profit to a ‘very-much-up-for-as-much-profit-as-we-can-get’. That may be an aspiration that is a long way from being realised, but the direction of intended travel is clear.
Then there are some platitudes about fears around AI and the compulsory gestures in the vague general direction of regulation and safety, with Altman quick to insist that this needs to be everyone’s responsibility. He’s right, although I do wonder if his rationale for this is coming from the same place as mine, but let’s leave that for another day. But he’s on a roll now:
AI has to be democratized; power cannot be too concentrated. Control of the future belongs to all people and their institutions...I do not think it is right that a few AI labs would make the most consequential decisions about the shape of our future.
Fine words. Backed up by...?
Conflict, moi?
Pausing only for a pre-court drive-by sniping at his bête noire Musk - “...remembering how much I held the line on not being willing to agree to the unilateral control he wanted over OpenAI...” - Altman declares himself in favor of collective responsibility, which certainly isn’t a narrative that chimes easily with some of the output of Farrow’s investigation.
But Altman goes further - it seems Mr Sam hates falling out with anyone, despite the long, long list of those who left OpenAI under varying circumstances in its short life. He insists:
I am not proud of being conflict-averse, which has caused great pain for me and OpenAI. I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company. I have made many other mistakes throughout the insane trajectory of OpenAI; I am a flawed person in the center of an exceptionally complex situation, trying to get a little better each year, always working for the mission.
In other words, mea culpa, mea maxima culpa!!! Everyone got that before the Musk vs OpenAI trial gets underway in a couple of weeks? There’s more in case you need it:
We knew going into this how huge the stakes of AI were, and that the personal disagreements between well-meaning people I cared about would be amplified greatly. But it’s another thing to live through these bitter conflicts and often to have to arbitrate them, and the costs have been serious. I am sorry to people I’ve hurt and wish I had learned more faster.
So far, so self-aware/self-centered/self-pitying - delete as applicable according to your personal assessment. But then things get more interesting as Altman turns his attention to OpenAI’s place in the emerging AI industry. Abandoning Tolkien for a moment, another authorial genius is called upon as Altman ruminates on “why there has been so much Shakespearean drama between the companies in our field” in such a short space of time.
It comes down to one thing, he reckons:
Once you see AGI you can’t unsee it.
Now from where I sit, once you’ve seen one AGI press release pronouncement, you don’t need to worry about unseeing it; there will be another along in ten minutes to take its place!
But for Altman, this appears to be a ‘see the face of God’ moment - and we’re back to tortuous Tolkien imagery to back it up:
It has a real ‘ring of power’ dynamic to it, and makes people do crazy things. I don’t mean that AGI is the ring itself, but instead the totalizing philosophy of “being the one to control AGI”. The only solution I can come up with is to orient towards sharing the technology with people broadly, and for no one to have the ring.
So...
As noted above, throwing a Molotov cocktail at someone’s house, anyone’s house, is wrong. End of story, no arguments, go directly to jail!
That said, Altman’s essay is a fascinating, if flawed, insight into his thinking, as he calls for stabilization while simultaneously pushing out and pursuing the most societally destabilizing technology ever seen. The contradictions come thick and fast.
The ‘riches for all’ angle might ring a bit truer if Altman hadn’t trashed the non-profit origins of OpenAI in favor of chasing ever more enormous funding rounds, in the process seeding outrageous growth expectations to a Wall Street ‘greed is good’ brigade that is going to hit back in the nastiest way possible when those voracious appetites are not fed.
As for prosperity for all coming down the tracks, maybe that will come as some crumb of comfort to the millions of content creators around the world whose copyright concerns have been trampled over as AI models run wild on unauthorised training exercises? Or maybe it won’t...
And there’s a claim Altman makes at one point in his tome that “empowering all people, and advancing science and technology are moral obligations for me”. For some commentators, claims of morality might sit uneasily at the moment, what with memories still fresh of OpenAI’s unseemly rush to fill the Anthropic-shaped hole at the US Department of War, apparently ready to sign up to terms that its rival saw as ethical red lines that could not be crossed.
He does make one plea that surely we can all get behind:
While we have that debate [about who controls that AI Ring to rule them all], we should de-escalate the rhetoric and tactics and try to have fewer explosions in fewer homes, figuratively and literally.
I will admit here that members of my own profession have to accept some responsibility for the fevered nature of the debate around AI’s impact. Not a day passes without some mainstream media headline or another along the lines of, ‘The bots are coming for your job!’.
During the Industrial Revolution, the Spinning Jenny came for the jobs and the Luddites took to violent protest, and lost - and that was without the benefit of Uncle Elon’s propaganda platform to boost discontent and misinformation - free speech, remember! - out to the widest possible audience.
But alongside such lurid media reaction, there are other factors in play that are not helpful. A US Government that overnight declares that a US AI success story is now to be considered an enemy of the state run by “left-wing nut jobs” - © D.J.Trump - because it won’t toe the line when it comes to changing a contract, is not conducive to a culture of calm, rational, and utterly essential ethical debate at a time when that debate is desperately needed!
Now is the time when we do need to have ferocious disagreements about AI and its direction of travel, but in a productive, civilised, and societally/globally beneficial way.
Instead we have people thinking it’s OK to firebomb a house in which a toddler is sleeping in his bedroom, just because his dad is head of an AI firm.
The Eye of Sauron really is watching over us, isn’t it?
My take
“I wish it need not have happened in my time,” said Frodo.
“So do I,” said Gandalf, “and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us.”
J.R.R. Tolkien, The Fellowship of the Ring