Davos 2026 - the man from the White House (no, not him!) on regulation, censorship, and AI addiction
- Summary:
- David Sacks is perhaps the most influential tech policy advisor to have the ear of the US President. He sat down in Davos with Salesforce's Marc Benioff the day after the latter had set off some AI truth bombs.
After raising some uncomfortable questions at Davos on Day One in regard to US AI firms, regulation and social responsibility, Salesforce CEO Marc Benioff followed up on Day Two by posing some of the same questions to the man from the White House.
No, not that man from the White House - he had, let’s say, other matters to deal with elsewhere.
But Benioff hosted a conversation with David Sacks, appointed by Donald Trump as Chairman of PCAST, the President's Council of Advisors on Science and Technology, and as such he’s possibly the most influential tech policy advisor to have the President’s ear. He’s also got a strong entrepreneurial private sector track record, including setting up a successful social media firm.
The trigger point for Benioff’s argument that US AI firms hate regulation was in part the recent accusations levelled against Character.AI, whose chatbots have been implicated in encouraging a teenager’s suicide, something Benioff pitched the previous day as darker than anything he’d seen before.
Asked his reaction, Sacks was clearly being careful in how he worded his response as he said:
People are worried about what impact [AI] will have, and that's something we're definitely looking at. And I think child safety has to be part of a larger regulatory framework at the same time, and there's been a number of horror stories there about examples of self-harm that, you know, AI might have contributed to. I would say that at the same time that we recognize that's an issue, we do also have to consider that a billion people are successfully using AI every day, including lots of high schoolers doing their homework and that sort of thing without a problem.
He added:
I think that when you have a new technology like this that a billion people are using so quickly, there’s going to be cases like this that happen. I don't think any of these companies wanted those things to happen. And I think that now that they know that it's a possibility, they can start to program the AI in a way that will try and prevent those scenarios.
There will be those who think that’s an overly optimistic view of how willing some tech vendors will be to accept their responsibilities. After all, the social media giants, against whom similar concerns have been levelled for years, haven’t exactly shown a propensity for self-regulation, have they?
Addicts
That said, Sacks makes a fair point that the AI genie is out of the bottle now and this needs to be factored in:
In China, they're incorporating AI into K-through-12 education, because this is going to be key to up-skilling the whole next generation, having the skills to use AI, to know how to prompt these AI engines, how to create agents. This is going to be a fundamental skill in the economy. So I think we have to be a little bit careful when we talk about protecting kids. Yes, I think we have to be aware of some of those downsides, but we can’t act like it would be a good thing for kids not to ever use AI.
I think if all you do is listen to the media, you might get that impression. I think that would be a big mistake. I talk to my kids about this, I ask them, ‘Do you feel like AI is addictive?’ And they'll admit that TikTok is addictive, and YouTube and Snap; those things are far more engaging to them because they're talking to their friends or they're watching video.
As for AI, they say, ‘Well, it would be hard to live without it, because it's so useful, but I don't spend that much time on it. I use it like Google to look something up, it's to do research’. So it's a little different, I think, than social media, which you could argue is pretty addicting. I think AI is less addictive, more a utility. But I do think that in the public's mind, there's been a little bit of a transference of the concerns that people have about social media onto AI. And some of that transference is justified, and some of it may not be.
I’m not entirely sure that such careful fence-sitting, while understandable, takes the debate forward in a useful direction, and the notion of gradations of ‘addiction’ seems dangerously like a ‘get out of jail’ card to be abused by the more unscrupulous.
Censorship
But Sacks seems determined not to rock the boat too much. Take, for example, his stance on Section 230 of the US Communications Decency Act, singled out by Benioff as the only piece of regulation that social media and AI vendors do like, as it offers liability protection from being held responsible for content published by third parties on their platforms.
For his part, Sacks pitches himself as someone who regards Section 230 as “a visionary piece of legislation that helped pave the way for the modern internet” because it “basically defined who's a publisher and who's a distributor, and under the existing law, publishers are held to one standard, distributors are held to another”.
That’s a distinction he thinks makes a lot of sense:
Maybe it's not perfect in some cases, but without that, I don't think you would have had comment boards on the internet. User review sites, like Yelp, might have been sued out of existence. Think about it: if, every time someone posts a review on Yelp, the store owner could say, ‘Well, that's defamatory, I'm opposed to that’, then they could sue Yelp. I think that these user-generated platforms may not have survived, and so I think it was very, very important in terms of allowing user-generated content to flourish all over the internet, including then on social media platforms. Ultimately, it's the user that benefits from the ability to express themselves.
At this point, the ‘freedom of speech’ talking point that has been used by so many on the right of US politics to bash any attempt to regulate content looms large on the horizon. Fortunately Sacks doesn’t go down the Elon Musk route of screeching ‘Fascist!’ at any attempt to impose any modicum of moderation, but he does argue:
What I worry about is, if [Section] 230 gets reformed and gets to be more restrictive, it's going to lead to much greater censorship. Already corporations have been taking down a lot of content. I think it's now much better under President Trump, but under the last administration, you saw an epidemic of shadow banning, de-platforming when someone expressed an opinion that went against the conventional wisdom or official [view of] the powers-that-be.
We saw a lot of censorship during COVID. Things that are now recognized as being true could not be said.... You look back on those things and say, ‘Gee, you know, that was a really dangerous period where people were censored too much’. We've now moved out of it. But what I worry about is you take away Section 230 protection and because it's simple corporate risk aversion, these sites are really going to clamp down and people aren't able to express themselves in the same way.
At this point, it should be noted that Sacks isn’t actually convinced that Section 230 necessarily applies to AI vendors anyway:
This is getting into novel areas of law; who knows how it will be determined? Probably I'd want to consult some lawyers about this. But if you're getting me to react on the fly to this, you know, [Section] 230 has always been applied to user-generated content on platforms. In the case of AI, it's the AI company that's publishing the content. So I think the liability might be a little different, but we'll see how it plays out.
It’s a fair point, although it does raise another question of who actually owns the content that many AI firms are publishing, but that’s a whole other bag of cats...
Regulation
Leaving aside the specifics of Section 230, what about Benioff’s wider point about AI firms’ antipathy to regulation of any kind? It seems that Sacks does have concerns around AI regulation, but of a different nature:
I think one of the great threats to innovation in the United States right now is that we have 50 different States running in 50 different directions, wanting to regulate AI themselves. We have 1,200 bills going through State legislatures right now. It's a little bit of a knee-jerk reaction. I would say some of those regulations may be warranted, but it's entirely too many, and the fact that you've got 50 different States means that you're going to create a patchwork of regulations.
The big tech companies are going to figure out how to comply with that, but it's a tremendous burden for small tech, for start-ups, for entrepreneurs. One of the great advantages that the United States has had is this huge, seamless national market. Historically, in most industries, certainly in the internet business, you’ve only really had to worry about complying with a single rule book. Now we're in danger of having 50. And I think the President has been very clear that he would like to see us have a single Federal framework for AI regulation, and he's really pushed for this. He spoke to it in his speech on AI in July of last year. And in my last few meetings with him, he's really made this an issue.
That said, Sacks argues that the US approach to regulation is still better than that of Europe:
You look at something like AI, we're just starting to regulate this now. As I understand it, the European AI Act was actually passed before the launch of ChatGPT, so that was in anticipation of something that hadn't happened yet. Well, if the thing hasn't happened yet, you're really regulating based on hypotheticals and fears. I think a better approach would be to study the issue, see how it's actually being used, see what risks or dangers actually materialize, and then respond to those things. So I do think that our lighter touch is important.
Equally important is leaving a lot to the private sector, he added, pointing to what President Trump has said about winning the AI race for America:
President Trump declared we had to win the AI race, and he then defined several pillars of our strategy. The first one was being pro-innovation, and what he said, though, is that it's not up to the government to do the innovation. It's up to great entrepreneurs. It's our innovators, our companies...We have to be in a support role in the government and create the ideal conditions.
Now, look, there's got to be some guardrails and some rules of the road, but fundamentally, we understand that innovation comes from the private sector. When I hear European regulators talk about AI, there's a little bit of a main character syndrome. When I've heard some European leaders talk about AI leadership, they're talking about setting the AI regulations, and somehow the rest of the world is going to copy that. That's not the way that we think of leadership, right? We think of leadership as being first and foremost driven by the private sector.
My take
I first saw Sacks and Benioff sit down to chew the fat at last year’s Dreamforce and was impressed by the former’s eloquence then. He was equally articulate at Davos this week, even though I am inherently uncomfortable with some of the positions he appeared to take.
I agree with him that the European approach to regulation can be heavy-handed and often based on armageddon-peddling rather than hard facts. But equally, the laissez-faire approach advocated by many in the US tech sector leads to situations where you’re trying to get the genie back in the bottle when it’s already too late.
And the conflict between the US approach to regulation and that of so much of the rest of the world is alarming - but then that’s been the case for so long now that it will hardly come as a surprise to anyone.
As for the knee-jerk response that regulation is inherently an attack on freedom of speech....bleurgh!
So, the debate rumbles on. And on. And on. And...