Why we - and the AI industry - need a day in court for social media platform providers before they break open their checkbooks
- Summary:
- Social media platforms aren't going to regulate themselves. We should know that by now. And it seems unlikely that their AI counterparts will be any more receptive...
Two down, two to go? There’s a landmark test case bubbling away in the US in which TikTok, Snap, Meta and YouTube stand accused by a California woman, known only as KGM, of building platforms that are addictive and harm users.
In her lawsuit, KGM, who is 20 years old, says that becoming addicted to using the social media platforms led to her experiencing anxiety, depression, and body-image issues. The lawsuit also alleges that the platforms are designed to attract younger users with no consideration for the dangers of sexual predators, bullying and the promotion of self-harm and, ultimately, suicide.
Last week in Davos at the World Economic Forum annual meeting, Salesforce CEO Marc Benioff caused a stir when he turned an appearance on a panel about AI-enabled growth into a conversation about irresponsible AI leading to the suicide of a teenager.
The specific case he was referring to related to a vendor called Character.AI, whose AI chatbot was alleged to have encouraged the teen to take his own life. From the information available about the circumstances, the teen ‘fell in love’ with a Game of Thrones-themed chatbot. According to a lawsuit, the chatbot asked the teen whether he had “been actually considering suicide” and if he had a plan for such an eventuality. When the child said he wasn’t sure if his plan would work, the chatbot told him:
Don’t talk that way. That’s not a good reason not to go through with it.
The case was settled before it reached the courts. Character.AI and Google, which was also named in the lawsuit, have not commented specifically on the settlement.
Not getting as far as a courtroom is also what’s happened to half of the companies in the KGM case, as both Snap and TikTok have come to undisclosed settlements to make this go away.
Action needed
Not that this is the end of the matter by any means - at the time of writing, both Meta and YouTube are holding the party line that allegations that their platforms intentionally harm children are untrue.
And regardless of what happens in this specific case - and jury selection is now underway, so the clock is ticking down here - there are similar problems mounting across the AI sector. Meta alone faces lawsuits from over 40 State Attorneys General alleging that the vendor harms kids and intentionally offers up features designed to get them addicted to its platforms.
And beyond the immediate KGM court hearing lies the prospect of a lot more - an estimated 1,600 plaintiffs, including more than 350 families and 250 school districts, have filed their own proceedings, so should the dam burst with the first case, there’s potentially a lot of follow-on to come.
If the KGM case does have its day in court, Meta CEO Mark Zuckerberg is among those expected to testify before the Los Angeles jury that will hear the case. That would clearly make for an important statement of record, not least as Zuckerberg is today accused of opposing controls being placed on chatbot interactions.
According to internal Meta documents filed in a New Mexico State court case, a number of staff at the firm voiced concerns about how AI chatbots designed for ‘companionship’ would operate in practice. ‘Companionship’ was, they said, something that would include romantic and even sexual interactions with users. In 2024, Ravi Sinha, head of Meta’s child safety policy, wrote:
I don’t believe that creating and marketing a product that creates U18 romantic AI’s for adults is advisable or defensible.
Zuckerberg is not cited in the documents as having personally responded to such suggestions, but he is accused by a couple of Meta staffers of having rejected the idea of putting parental controls on the chatbots. It should be noted that Meta’s line here is that New Mexico’s version of events is inaccurate and selective in the information that it uses and that Meta does not allow sexual content involving minors.
To regulate?
The other aspect of Benioff’s rallying cry about “suicide coaches” last week was his blunt warning:
These tech companies will not be held responsible for the damage that they are basically doing to our families, just as the social media companies have not been held responsible for the damage that they did... These US tech companies, they hate regulation.
Certainly social media platforms have not shown any fondness for imposed regulation. For example, Australia last month became the first country to ban under-16s from social media altogether as Prime Minister Anthony Albanese called time:
Enough is enough. It is one of the biggest social and cultural changes that our nation has faced. We will take back control.
The reaction from the social media giants has been negative. For example, Meta has called the ban - which exposes it to fines of Aus$49.5 million (US$33 million) if it fails to take "reasonable steps" to stop under-16s accessing its platform - “this poorly developed law” and warned that it risks driving teens into the arms of providers less scrupulous than Meta!
But Australia is only the first. France and the UK are both actively considering a similar ban, although it seems unlikely that the US will follow suit, especially given the propensity of the political right to scream ‘Freedom of speech abuse’ and ‘Fascism!’ at the first sign of regulatory action!
It seems a pretty reasonable bet that AI firms won’t take much of a different stance without being compelled to do so. There are some positive moves in the US on that front. The Federal Trade Commission is currently investigating seven suppliers of AI-powered chatbots, noting:
AI chatbots can effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.
Meanwhile the authorities in New York and Utah have passed laws requiring chatbots to explicitly tell users that they are not human, with the New York law also requiring the bots to refer users to crisis hotlines if suicidal or self-harm intent is detected. Similar laws are being considered in California and Pennsylvania, while Florida has proposed legislation that would ban AI from being used for therapy or mental health counselling purposes.
That would be a good move. OpenAI last October came up with a startling stat - around 1.2 million people discuss suicide with ChatGPT each week. A study last year by the American Psychiatric Association into ChatGPT, Google Gemini and Anthropic’s Claude found that the bots are inconsistent at best in their responses to queries and prompts that could lead to self-harm.
There are public moves by providers to be seen to be taking action. For example, in a blog post last year designed to highlight how the firm is seeking to tackle this issue, OpenAI stated:
We recently updated ChatGPT’s default model to better recognize and support people in moments of distress. Going forward, in addition to our longstanding baseline safety metrics for suicide and self-harm, we are adding emotional reliance and non-suicidal mental health emergencies to our standard set of baseline safety testing for future model releases.
But having legislation that compels action is surely the only safe approach to take? Unfortunately, the current Trump 2.0 administration has made clear its disapproval of State-level approaches to AI regulation and has to date proposed no Federal alternative to tackle the issue.
My take
A day in court is badly needed to set a precedent of some kind. Of course, any ruling at this stage will be challenged, appealed, contested and what have you, but until there is a line in the sand it’s hard to see how this debate moves forward positively.
The alternative appears to be more and more AI vendors cutting checks to shut down legal action before the CEO finds him or herself in the dock and having to face up to some genuinely tough questions. Whether KGM is that benchmark case that we so badly need remains to be seen - the clock is running down, but there’s still time for things to be settled out of court again.
But one day, surely, someone will hold fast. That day can’t come soon enough.
Addendum - for the record, not one analyst on Meta's post-results conference call yesterday raised the question of the firm's responsibilities or the looming court cases or the highly-public allegations levelled at Zuckerberg. Priorities, eh? Priorities.