
The AI-native naysayers are making the same failed objections we heard to cloud-native apps two decades ago

Phil Wainewright, March 26, 2026
Summary:
The objections raised today by enterprise SaaS loyalists against emerging AI-native agentic applications sound a lot like the arguments that were used against the early SaaS pioneers two decades ago — and will prove just as transient.

Romain Sestier, StackOne, speaking at a SaaStock London event

As I sat in a discussion earlier this week on how AI-native companies are redefining long-established software categories, a strong feeling of déjà vu swept over me. Two decades ago, I was an early proponent of cloud-native apps and I often found myself arguing against the prevailing wisdom back then that they were too unreliable, uneconomic and insecure to ever replace the dominant client-server apps of the time. Sitting in that room this week took me right back to those days as I listened to Romain Sestier, founder and CEO of AI-native integration vendor StackOne, push back on the very same claims made today about agentic AI apps. And that made me feel that we're going to see history repeat itself when AI-native apps prove themselves worthy of fully replacing today's mainstream SaaS apps.

This doesn't mean I've suddenly swung behind the SaaS-pocalypse narrative that's surfaced in recent weeks — these are changes that happen over decades, and there are several obstacles that the AI-native apps need to overcome before they'll achieve mainstream adoption, as we'll discuss in a moment. But first of all, here's how Sestier dealt with those three big objections that reminded me so forcefully of the arguments we used to hear deployed against the rise of SaaS.

They're unreliable

Devotees of client-server apps that ran on the customer's own premises always saw this as the killer argument against cloud computing. What if your Internet connection went down? How could you entrust your core business applications to a third-party cloud data center and rely on them to keep it running 24x7x365? These arguments sounded persuasive, until you sat down and acknowledged that a good few of those on-premise client-server apps were running on servers sitting under people's desks, and even those that were carefully nurtured in properly equipped data centers were frequently down for hours and occasionally days at a time, either because of planned upgrades and maintenance or unforeseen mishaps. Cloud apps were being held to far higher standards of reliability than their on-premise equivalents had ever achieved — and as the technology and techniques of cloud computing evolved, they ended up achieving unprecedented reliability that was far ahead of anything seen on-premise.

Today, the argument against AI-native apps is that this is probabilistic technology that can't be trusted to reliably deliver the deterministic results required in an enterprise environment. It surfaces the wrong information or even hallucinates answers. But as Sestier points out, today's deterministic systems are still exposed to human error, whether that comes in the form of improper operation or inadequate programming and configuration. Once again, the new generation of tech is being held to a higher standard than the previous generation ever actually delivered in practical reality. And just as cloud computing rapidly evolved to eventually deliver far more robust computing than the on-premise servers of old, so today's AI agents will evolve to use highly tuned models, optimized toolsets, deterministic guardrails and many other mechanisms that will ensure their results end up outperforming the capabilities of SaaS applications.

They're uneconomic

People always underestimate the total cost of ownership of the tech they already use, while exaggerating the impact of unfamiliar costs that new technologies bring along. In the case of generative AI, it's the high compute cost of inference, also known as token cost, which is measured not only in dollars but also in carbon emissions from the extra energy required to run queries on an LLM. Back in the days of SaaS, it was the ongoing monthly or annual subscription, which when you looked at it over the long term seemed to add up to a higher cost than the one-time perpetual license fee of client-server applications.

In the case of SaaS vs client-server, these cost comparisons ignored the high implementation, customization and maintenance costs of traditional on-premise software. I used to show a graphic to illustrate the massive economies of scale that cloud computing enabled because it pooled all of the infrastructure and specialist support that enterprises were otherwise duplicating in individual data centers, where servers ran idle as much as 96% of the time.

When it comes to AI-native vs SaaS, Sestier points out that the calculation has to take into account all of the cost of employing people to operate traditional applications, which potentially goes away when those operations are automated using AI. Obviously this is a somewhat more contentious argument than the SaaS one about data center economies of scale, which merely consigned redundant servers to the scrapheap. Taking away people's jobs is a far more controversial proposition. But however unpalatable its consequences may appear, the economic argument certainly has to be acknowledged.

By way of example, Sestier cites the cost of running a traditional customer service app like Zendesk compared to Sierra, the AI-native agentic CX app co-founded by former Salesforce co-CEO Bret Taylor. The team operating the traditional SaaS app includes all of the call center agents responding to customers, all of their training, management and any overtime payments to provide 24x7 cover, along with surge costs for extra capacity at peak times, plus the software subscription and associated running costs. This is a hidden iceberg of human cost, against which the token cost paid by agents pales into insignificance. And with Sierra, he argues, all of these costs are replaced with a simple fee for each query its AI agents resolve.

Another hidden cost has recently been pointed out by VC investor Tomasz Tunguz, who observes that automation enables smaller teams, which in turn reduces the collaboration and management overhead that surrounds each of those individuals. He writes: "Small teams have always paid less co-ordination tax. AI cuts it further." While there will be new cost implications that come into play when using AI-native applications, they'll inevitably be less than the spending saved through more effective automation operated by smaller, AI-augmented teams.

They're insecure

This was always the big gotcha that client-server loyalists felt they had over early cloud-native rivals. How could something that took your data out of the safety of your enterprise network and processed it in the Wild West of the public Internet ever be secure? It seemed self-evident that — and I paraphrase satirically here — your back-office server kept in a cupboard and connected to a router, both of which used the default factory-setting admin password, was far more secure.

The issue here is the familiar versus the unknown. Client-server computing had become the norm and we had come to take its security for granted — somewhat complacently at a time when vulnerabilities were rising as malware began to proliferate. The Internet and cloud computing, in contrast, were poorly understood yet plainly full of peril, and therefore became the focus of all our fears. The truth was that, as with reliability, cloud computing rapidly proved itself more secure than traditional on-premise software. Whenever security breaches hit the headlines, they were always due to traditional enterprise IT teams making a hash of the security of their own servers and networks, whereas the nascent cloud industry understood the imperative of demonstrating its security chops.

In his presentation, Sestier acknowledged that agentic AI faces a similar trust gap that will take time to close. He sees a multi-step process towards building trust, starting with human-in-the-loop checks and approvals, building in traceability and audit processes, adding mechanisms for ensuring deterministic outcomes are delivered reliably, and creating defenses against potential security vulnerabilities such as prompt injection. This is a classic example of Clayton Christensen's Innovator's Dilemma in action. The new technology starts out as a rough-and-ready solution to the unmet needs of businesses or individuals who are ill-served by current offerings, and over time it develops the robust feature set that the mainstream market demands.

What now?

Attendees at the presentation, a local event put on by conference organizer SaaStock, were a mix of SaaS industry entrepreneurs and developers. Many were wondering what Sestier's advice was for their own existing applications. His answer: "Build something new." In his view, AI agents force a complete change in how applications are designed, to focus on building for adaptable outcomes rather than fixed processes.

But the other thing I remember from those early days of SaaS is that the ardent predictions of the old order being swept away within just a few years were way off the mark. Even if you allow for a more rapid pace of development and knowledge exchange enabled by AI, organizations and markets take an extraordinarily long time to adapt. For every early adopter that eagerly signs up for the new technology, there are dozens of others that are going to hold back until they feel that it's proven. Even today, as the likes of Sierra in CX, Jack&Jill in recruitment and Rillet in financials are reinventing today's categories, there are many companies still in the process of migrating their operations off client-server legacy stacks onto mainstream SaaS alternatives.

Look back over the past 25 years and sure, we've seen the rise of Google, Workday and Salesforce from nowhere to become giant incumbents in the enterprise applications space. But at the same time, the likes of SAP, Oracle and Microsoft have confounded the predictions of those early SaaS zealots that they would fade away. They remain equally significant market players. Meanwhile, many of the early SaaS startups whose founders predicted the demise of these enterprise icons have themselves long since disappeared into the mists of history.

My takeaway from what I heard at this week's event is that there's real substance to this new wave of enterprise agent offerings, and that many of the objections we currently hear against AI-native apps won't stand the test of time. The lesson from history, however, is that the market isn't going to pivot overnight en masse to agentic solutions — it'll take many years for these newcomers to get established and win the confidence of mainstream enterprise technology buyers. But while it may look like slow progress from the outside, those who reap the biggest rewards will nevertheless be those that choose to move at speed in the center of this tornado of change.
