Next '26 - control the agents, control the enterprise? Google Cloud enters the battle for enterprise AI governance
Is Google Cloud vying to be the ‘Android of the AI enterprise’? Or is its strategy actually more like Apple’s iOS? We bring the latest announcements from Google Cloud Next.
The question of who governs enterprise AI agents is clearly becoming one of the most contested in the technology industry. The competition is fierce, as your favourite SaaS vendors and enterprise technology partners all make their pitch to be the ‘AI governance and management layer’. As organizations move from experimenting with individual AI tools to deploying hundreds - or thousands - of autonomous agents across their operations, the question emerging is: which platform gets to own the control layer?
At Google Cloud Next in Las Vegas this week, Google made its pitch for that position. The content was dense and there were a slew of announcements - everything from new TPU generations, to a redesigned data architecture, to the formal arrival of Wiz into the Google Cloud security fold - but the strategic principle running through all of it was pretty unmistakable. Google wants to be the operating system for the agentic enterprise. And it is prepared to spend serious money to get there.
The argument, as Google Cloud (and many of its competitors) is well aware, runs as follows: those who own the governance and management layer for agentic AI will likely capture the most value.
Where we are
Google Cloud CEO Thomas Kurian opened his keynote with a summary of the moment enterprises currently find themselves in. He said:
The experimentation phase is behind us. Now the real challenge begins: how do you move AI into production across your entire enterprise?
It's a framing that will resonate with the CIOs we speak to as part of the diginomica network. Our data suggests 93% of organizations are now using AI in some form - but only 57% report achieving a 50% or better success rate from their implementations. The bridge between experimentation and production is where vendors are hoping to make their case. If customers are struggling to get agents into production - and to deliver value - vendors will compete to be the platform of choice for closing that gap.
Google Cloud's answer is a unified stack - and Kurian argued that fragmentation is the enemy. He said:
You cannot deliver AI by piecing together fragmented silicon and disconnected models. To drive real value, you need an architecture where chips are designed for the models, models are grounded in your data, and agents and applications are built with models and secured by the platform.
That argument has a logic to it. But it's also, conveniently, an argument that positions Google as the only vendor with the full stack to make it work. We'll come back to whether that holds in the real world.
The platform play
The centerpiece of this year's event is the Gemini Enterprise Agent Platform - described by Kurian on stage as "the Android of the agentic era." It's not an accidental turn of phrase. Android didn't win by being the best operating system - it grew to rival Apple’s iOS by becoming the connective tissue between hardware, applications and users at a moment when the market needed a common layer. Google is making the same argument for enterprise AI.
However, if we extend the analogy, is Google Cloud really arguing to be the Android of the agentic AI era when its key selling point is the full stack - software coupled with hardware? That sounds mighty similar to Apple’s iOS argument to me…
That being said, Google Cloud doesn’t mandate a consolidated stack. It still has a composable architecture - it just suggests it all works better together.
The Agent Platform brings together model selection, agent building, orchestration, governance and observability into a single environment. Agent Identity assigns every agent a unique cryptographic ID. Agent Registry indexes every agent and tool across the organization. Agent Gateway enforces policy centrally. Long-running agents can now operate autonomously for days at a time, managed through a unified Inbox. Kurian described the ambition:
Gemini Enterprise is now the end-to-end system for the Agentic Era - the connective tissue between your data, your people, and all of your apps and agents that transforms all of your processes into a single, intelligent flow.
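The governance primitives Google describes - a unique cryptographic identity per agent, a central registry, and a gateway that enforces policy and records every decision - can be sketched in miniature. The following is purely illustrative, with hypothetical names and a simplified HMAC signature standing in for whatever Google's actual implementation uses; it is not the Agent Platform API:

```python
import hashlib
import hmac
import time

REGISTRY = {}   # agent_id -> {"secret": key, "scopes": allowed actions}; stands in for Agent Registry
AUDIT_LOG = []  # every gateway decision is recorded, allowed or denied

def register_agent(agent_id, secret, scopes):
    """Enrol an agent with its signing key and the actions it may take."""
    REGISTRY[agent_id] = {"secret": secret, "scopes": set(scopes)}

def sign_request(agent_id, secret, action):
    """The agent proves its identity by signing the requested action with its key."""
    msg = f"{agent_id}:{action}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def gateway_authorize(agent_id, action, signature):
    """Central policy check: verify identity, check scope, log the outcome."""
    entry = REGISTRY.get(agent_id)
    allowed = False
    if entry is not None:
        expected = sign_request(agent_id, entry["secret"], action)
        allowed = hmac.compare_digest(expected, signature) and action in entry["scopes"]
    # Both allowed and denied requests are logged - auditability on both sides
    AUDIT_LOG.append({"ts": time.time(), "agent": agent_id,
                      "action": action, "allowed": allowed})
    return allowed

register_agent("michaels-agent", b"secret-key", ["read:tickets"])
sig = sign_request("michaels-agent", b"secret-key", "read:tickets")
print(gateway_authorize("michaels-agent", "read:tickets", sig))    # True
print(gateway_authorize("michaels-agent", "delete:tickets", sig))  # False - denied, but still logged
```

The point of the sketch is the audit trail: whether a request is allowed or denied, the gateway records it, which is the "traces, metrics, logs" idea on both sides of the transaction that Google's executives keep returning to.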
It’s worth flagging Vertex AI here. Google confirmed today that all Vertex AI services and roadmap evolutions will be delivered exclusively through the Agent Platform going forward, rather than as a standalone service. That's not a rebrand. Google is confirming a pivot away from selling developer-facing infrastructure and towards owning the enterprise application layer. The developer tooling is being absorbed into the platform play.
Sundar Pichai, speaking earlier in the keynote, gave a sense of the scale of investment behind this ambition. He said:
In 2022, we were investing $31 billion in CapEx. This year, we plan to invest between $175 and $185 billion in total CapEx - a nearly six-times increase in just four years.
Just over half of that machine learning compute, Pichai confirmed, is expected to go towards the cloud business. That's not a company hedging its bets.
The data question
Alongside the Agent Platform, Google announced what it's calling the Agentic Data Cloud - a reimagined data architecture anchored by a new Knowledge Catalog. Amin Vahdat, Google Cloud's SVP and Chief Technologist for AI and Infrastructure, introduced the new platform and said:
Reasoning without context is just a guess. And when you expect your AI to make decisions and your agents to take actions, you cannot afford to guess.
The Knowledge Catalog is Google's answer to ‘the context problem’ - a universal context engine that aggregates business meaning from across the enterprise data estate. It pulls context not just from Google Cloud's own services, but from third-party platforms - Google specifically named Salesforce Data360, SAP, ServiceNow and Workday as sources being ingested into its reasoning engine. A new Cross-Cloud Lakehouse extends the same logic to infrastructure, offering zero-copy connectivity to data sitting in AWS and Azure.
Read the Knowledge Catalog announcement carefully and what you're actually seeing is Google reaching into the data estates of the very platforms it now competes with for the orchestration layer, and positioning them as subordinate context providers. That’s quite interesting.
The governance question
This is where the picture gets genuinely complicated for CIOs - and where the competitive heat is most visible.
During his keynote segment, Pichai said:
The conversation has gone from 'can we build an agent' to 'how do we manage thousands of agents?'
As we’ve outlined throughout this piece, Google Cloud believes that it has a reasonable argument to be the one to take that forward for enterprise buyers. However, there was a telling exchange during a press Q&A at Next, where Michael Gerstenhaber, Google Cloud's VP of Product Management, offered a realistic example of what agent-to-agent management looks like. He said:
There are two kinds of agents at play…there's Michael's agent - an agent that wants to perform a task for a user at work. And then there's the person who owns the data - maybe ServiceNow wants to provide access to ticket data, and they do that with an agent. That's ServiceNow's agent, not necessarily the user's agent. The user's administrator needs to know what data was queried, and the ServiceNow administrator also has to contemplate security and governance for their agent - did I rigorously check whether Michael should have access to this data?
There is a separation of concerns here between the person who builds the agent and provides access to the data, and the agent builder who doesn't know what data is out there but sends their agent to go find it and solve the problem. Those are two very different kinds of administration.
The administrator should make it safe by default - both on the foreign agent side and the local agent side - and both should be monitored: traces, metrics, logs, specifically about what that agent did on both sides of the transaction.
It's a thoughtful articulation of a genuinely complex problem. But notice the framing: ServiceNow's agent is the "foreign agent" operating inside Google's governance plane. Google is the platform that monitors both sides of the transaction.
ServiceNow, of course, would frame it the other way around. They own the ITSM relationship, the ticket data, the operational workflows. They're not going to quietly accept being governed by Google's Agent Gateway. Salesforce, equally, has Agentforce - designed explicitly to own the governance layer for customer-facing agentic workflows. Neither is going to cede audit authority without a fight.
This creates a scenario for CIOs where you could end up with multiple competing (or complementary?) governance layers, each with a legitimate claim, each generating its own traces, metrics and logs, none naturally deferring to the others. That's not a technology problem - it's a political one. And it's one the vendors are going to be very reluctant to resolve on the customer's behalf.
The Microsoft problem is also worth referencing. Unlike Salesforce, ServiceNow and SAP, Microsoft doesn't appear in the Knowledge Catalog's list of platforms being absorbed as context providers. Instead, Google frames its relationship with Microsoft around interoperability - exporting documents into Office formats, connecting agents across Microsoft 365. It's a compatibility pitch, not an absorption one. Whether that reflects strategic caution, an acknowledgment that Microsoft would push back too visibly, or simply the reality of Microsoft's entrenched position in the enterprise productivity layer is up for debate at this point.
From open orchestrator to platform owner
What makes today's announcements particularly interesting is the distance they represent from where Google Cloud was standing twelve months ago.
At Next '25 last April, Kurian's pitch was built on openness. Google's differentiator, he argued, was its willingness to manage multiple AI agents across different frameworks and vendors - including competitors' models. The Agent2Agent protocol, launched with support from over 50 partners including Salesforce, ServiceNow and Workday, was framed as an open standard for agent interoperability. Google wanted to be the neutral broker.
That was reinforced in December, when Karthik Narain, Google Cloud's newly appointed Chief Product and Business Officer, told diginomica in his first interview since joining from Accenture that the company's differentiation came precisely from its refusal to lock customers in. The "why not Google?" moment he described was grounded in openness - the argument that AI-first transformation required a platform that prioritized interoperability over convenience.
It’s worth stating that this is still true today. Google Cloud said multiple times at the event that the architecture is composable and open. However, what today's announcements do suggest is that the tone has hardened. The openness language is still present - multi-model support, MCP integration, Microsoft 365 connectivity - but the structural moves tell a different story. Vertex AI absorbed into the Agent Platform. The Knowledge Catalog ingesting competitor data. A $750 million partner fund to facilitate enterprise relationships at scale through SI partners. Agent Identity, Agent Registry and Agent Gateway asserting governance across the entire agentic workforce.
Google hasn't abandoned openness as a message. But it is building value-capturing walls at the same time.
My take
Google Cloud's full stack argument is intellectually strong. If you're building an agentic enterprise from scratch, or if you're already deeply Google Cloud-native, the proposition of having infrastructure, models, data, productivity and governance co-designed in a single platform is genuinely compelling. Gerstenhaber's articulation of the agent governance problem suggests real architectural rigour, not just marketing positioning.
But most CIOs aren't operating in a greenfield environment. They have decades of ServiceNow investment, embedded Salesforce workflows, Microsoft 365 across the entire organization, and AWS or Azure infrastructure that predates Google Cloud's enterprise push by years. In that environment, the full stack argument doesn't reduce complexity - it adds another layer to it. You're not replacing the existing stack. You're asking CIOs to place Google on top of it as the governance layer, while every other incumbent makes the same claim from a position of far more embedded strength.
A year ago Google was pitching to be useful. This year it's pitching to be essential. The shift from open orchestrator to platform owner is quite telling - and the competitive response from ServiceNow, Salesforce and Microsoft will be equally real. It’s interesting to me though that many of the ‘big deal’ customers speaking here today are taking a co-innovation and investment approach, where Google Cloud is offering to place engineers in customer environments to make agentic AI valuable, sort out enterprise data siloes, and fix archaic processes. That’s a genuinely useful argument for buyers - and it might work if Google Cloud can scale it in the medium term. Customers are in need of support and Google Cloud has the resources to provide it.
Whether Google Cloud can convert this into the kind of deep IT relationships its competitors have spent decades building is the question. It’s going to be entertaining watching this competitive landscape play out from the sidelines…