Something for the weekend - time to get smart on US AI regulation after a ‘lost decade’ for data?
- Summary:
- The macho world of AI policy favours big claims. But do tech leaders really mean what they say? I asked one of them.
Anger is growing in parts of the US at President Trump’s latest Executive Order on AI, which seeks to block states from regulating it – in line with policy objectives trailed throughout the year.
Unregulated AI would mean unfettered data centre expansion, which is why over 200 environmental groups are protesting. They believe AI’s vast energy demands will harm local communities through pollution, water scarcity, and rising energy prices. Governor of California and self-appointed troll of the President, Gavin Newsom, has slammed the “grift and corruption” of a plan that, while not legally binding, carries clear political intent.
But another reaction was notable for different reasons. Roman Stanek is founder and CEO of privately held analytics vendor, GoodData, which boasts over 3.5 million users in 140,000 organisations, with rumoured 2024 revenues of $60+ million. In comments that were widely reported – and apparently taken from a press release – he said:
The US is moving at the speed of innovation. China is copying at scale. Europe is regulating. It’s a suicidal mindset to think you can legislate your way to relevance. The AI race is like the Moon race: you either launch or you watch. Regulation should protect innovation, not paralyse it. Velocity is not only a competitive advantage anymore, it’s survival.
A newsworthy soundbite. So, what was its significance? Well, he doesn’t believe it, he says.
And asked at the end of a fascinating conversation if he thinks that, actually, regulation will be essential, he answers:
Absolutely. It’s more that we have to regulate differently. We must start to regulate outcomes. There needs to be simple regulation. And there needs to be regulation that's based on the size of the company. So, it’s more that I think a lot of regulation today is kind of dumb and we just need to be smarter about it. That's all.
Ah. Dial down the clickbait, it seems, and you arrive at the facts.
A debate
That's not intended as snark – as you will see, Stanek proves to be an insightful interviewee when challenged or contradicted.
So, let's dive into an enjoyably contentious conversation. Stanek tells me:
When I started this company, my vision was to help users analyze data in the cloud. And the main thought behind it was ‘plenty compute’: if you put enough compute on a data problem, it will solve that problem. But I think that was the wrong assumption. I don’t think compute alone does solve the data problem for companies.
So, 2010 to 2020 I would call a lost decade in data, with no real developments at all. But AI is changing everything. It’s the last piece, the missing piece for data – to understand semantics, to understand meaning at scale. Because if you don't know what data means, you can’t integrate it well, and you cannot interpret it.
An interesting perspective, because if you rewind 10 to 15 years, all we heard about was compute plus Business Intelligence, or compute plus Big Data, Data Warehousing, Predictive Analytics – delete as applicable. Back in those days, it was assumed that data analysts would stand at the apex of human resources. So, that vision was wrong? That golden age for data scientists was just a blind alley, which AI has made irrelevant? Stanek says:
Yeah. The thing is, if you look at the problems we are discussing today, they are the same as we discussed 15 years ago. How do we take [sic] the data? How do we interpret it? Why is every data set in a silo? And so on.
But if you talk to the business people, then they would agree with me. They would say, ‘With all the investment we put in data – in Big Data analysis, and so on – there really was no change from that at all.’ [i.e. it made no difference.]
The average business user, they still rely on spreadsheets, they still don't trust their dashboards, and their data warehouse is still where data goes to die. So yeah, from a business perspective, I don't think it was successful.
In turn, this has led to widespread scepticism about the latest shiny toy, AI:
There's a certain level of scepticism. You know, can we trust it? But from my perspective, intelligence at scale was what was missing all along. You cannot solve data problems if you only look at them as ones and zeroes.
This is true. But another reason for people’s scepticism isn't that most are frightened of AI in some generalised way (though some are); it's that much of the current uptake of AI has been in the form of shadow IT for trivial tasks.
The US Joint Economic Committee’s Congressional hearing on frontier technology recently underlined this point: in most industries, enterprise AI adoption is in the single digits, and even in the leading adopter industries it is only in the low double digits. Plus, we all – surely, by now! – have heard about the dreaded MIT report on 95% of generative AI programs failing to produce significant returns, and so find ourselves in the downward swing of the hype cycle.
Meanwhile, the same industry CEOs whose income is largely based on selling slop-making tools to consumers are making highfalutin claims about “genius-level” machine intelligences, mainly to drive up their valuations and shares to “unhinged” levels (the Economist’s word, not mine). Many people’s experience of generative AI is that it only seems expert if you are not an expert in the field in question yourself.
Stanek says:
Yeah, it's undeniable that AI is mainly successful in the consumer space. My kids, and everyone I know, use it to solve their personal problems. So, the question is, how do we transfer it into the enterprise? And that means asking what it is actually good at.
And there's one use case where AI is absolutely dominating today in the enterprise, and that’s coding. I don't know any company today in San Francisco where the developers still develop. Even at GoodData, we have big pieces of code that are completely written by AI. And productivity is up. But that's because, with that use case, with all the guardrails around it, you can say, ‘This is high-quality code’.
So, if we are no longer bounded by how much code we can build, that alone would be a huge success. So, can we do the same with data? But that's not happening.
It’s true that vibe coding has become the norm in many companies; when I interviewed the CEO of OpenText two years ago [LINK], he claimed he no longer employed junior developers, because their work was already done by AI (“Don’t send a human to do a machine’s job,” he said).
However, the problem with passive acceptance of this ‘new normal’ is that it masks many professionals’ well-informed objections. And they protest precisely because there aren’t enough guardrails.
Scuttlebutt has been rife about developers leaving Microsoft-owned GitHub for this reason. On 2 December, for example, The Register reported that the foundation that promotes the Zig programming language had quit the platform, citing Microsoft’s “obsession” with AI as causing a decline in standards and quality.
Meanwhile, experienced developers often say that junior colleagues lack a detailed understanding of coding today, with no insight into why things work – or don’t work – and therefore no ability to check the output of a vibe-coded project.
Was there a better, more efficient, and more secure way of achieving the desired result? Such questions are no longer asked, so inferior, buggy, and insecure apps are released into the wild. Meanwhile, the provenance of that code is never questioned: was it proprietary?
What happens to experts?
So, is Stanek worried that the more we rely on AI to carry out essential functions, the more that deep knowledge, skill, and expertise will vanish from our professions – from all of them, in fact? He says:
I hear you. First, things have obviously changed a lot over the last two years. But I'm talking about organizations professionally developing code with all the tests and servers that check their rules and know how to post code into GPT, and so on.
It's a profession, not shadow IT, and that's where the biggest impact will come from. And no, I don't worry about it. I think every job changes, and this is going to be much more about algorithms understanding business problems, and developers being able to talk about what the code is solving.
Do developers really need to write hundreds of lines to build some exception-handling function that's essentially boilerplate? It’s like a form of manual labour that you want machines to do. No one complains that people are not using shovels anymore, when they're using bulldozers. And they are so much more productive when they sit at the controls of bulldozers. It's the same with coding. Yes, there are certain pieces that you will code less, because, again, it's boilerplate. And that will free the time to focus on the algorithm: what it needs to do, and how it behaves.
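He has a point about the drudgery. For readers who don’t code, the ‘boilerplate’ in question looks something like the sketch below – a generic retry-and-log wrapper in Python, the sort of chore developers now routinely hand to an AI assistant. (The code is illustrative only; the names are mine, not GoodData’s.)

```python
# A minimal sketch of 'manual mental labour' in code: a retry wrapper
# with logging and exponential backoff. Every codebase needs something
# like it, and none of it is creative work.
import logging
import time

def with_retries(func, *args, attempts=3, delay=1.0, **kwargs):
    """Call func, retrying on failure with exponential backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return func(*args, **kwargs)
        except Exception as exc:  # broad on purpose: this is boilerplate
            logging.warning("Attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise  # out of retries, surface the original error
            time.sleep(delay)
            delay *= 2  # back off before trying again
```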
Fair enough. Then he adds:
It will take away what I call the ‘manual mental labour’ that people need to do, like all the chores that no one wants, so they can focus on the creative stuff.
Alas, that statement is a red rag to a bull, as it’s an industry cliché – one rendered nonsensical by the same explosion of slop-making tools that is driving the US tech economy (and many AI companies’ revenues). Far from automating drudgery, AI is increasingly automating creative tasks and consigning expert workers to menial support roles.
In turn, AI is taking junior-level roles and making it harder for school- and college-leavers to get onto the career ladder. That risks creating an economic black hole for young people who are already saddled with debt at a time of soaring food, energy, transport, and rental costs. Only last month, for example, the Chair of the US Federal Reserve claimed that US job creation is, essentially, zero – and he blamed AI. Stanek says:
Yes, I agree. And this problem has got much worse over the past two years. I believe that, at some point, education will have to change. We have to prepare young people differently for work. But I belong in the camp of people who believe that this is going to be so transformational that we will need to establish some form of Universal Basic Income [UBI]. Because it’s possible there is not going to be enough work for everyone, because everyone will be so productive. I'm not talking about this year, but soon you will be paid by it, and then only the people who want to work will work.
A related problem is as much cultural as it is fiscal: ultra-conservative economies’ ideological hatred of paying citizens for doing nothing. How likely is it that, at a time when people moan about today’s benefits bill, a nation would put everyone on benefits? Just so they could pay their OpenAI subscriptions or buy a Teslabot – plus maybe some food, heat, clothing, and a place to live. In that light, such a future seems preposterous. Stanek says:
I think that's too much of a negative view. But I'm an optimist, and I believe we will solve that problem. But the problems we’re solving, or maybe solving in a couple years, will be with this extreme productivity that we see today in coding, as a preview of what’s to come.
To me, this is critical thinking, not pessimism: only an optimist sees a challenge to their world view as pessimistic. But if it is pessimism, then so what? Ploughing on towards an absurd outcome seems more like idiocy to me than optimism. As George Bernard Shaw once observed, “The optimist invents the aeroplane, the pessimist the parachute.”
But Stanek is undeterred – and avowedly optimistic:
It may be that we will come up with ideas that will keep everyone employed and productive. And we will all be 100 times richer, which I believe will happen!
Some technology companies will certainly be richer. Then he adds:
So, the question is, what will people do? And how will we regulate it?
Right. So… regulation will be a good thing, yes?
Absolutely.
My take
No further questions, m’lud.