ServiceNow ends the AI add-on era and defines its new platform approach
To understand what ServiceNow announced today, it helps to understand what ServiceNow has always been at its foundation - and how that foundation is now being repositioned.
For most of its existence, ServiceNow was built around the CMDB: the Configuration Management Database, a single system of record that maps every IT asset, service, and relationship across an organization's infrastructure. Workflow products - ITSM, HR Service Delivery, Customer Service Management, Security Operations - were built on top of that foundation, each serving a different department, each sold as a distinct product. The CMDB gave ServiceNow its cross-functional coherence, which has been key to much of its success. The ‘one platform, one data model’ architecture it has spoken about for years has allowed it to reach across the enterprise in a way other SaaS vendors have struggled to match.
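To make the single-data-model idea concrete, here's a rough sketch of what a CMDB-style record and its relationships look like conceptually. The classes, field names, and query below are illustrative only - not ServiceNow's actual schema, table names, or API - but they show why one shared asset graph lets every department answer impact questions from the same source.

```typescript
// Hypothetical, simplified CMDB-style model: configuration items (CIs)
// plus typed relationships between them. Illustrative only - not
// ServiceNow's actual schema, table names, or API.

type CIClass = "server" | "application" | "database" | "business_service";

interface ConfigurationItem {
  id: string;
  name: string;
  ciClass: CIClass;
  owner: string;           // the team or person accountable for the CI
}

interface CIRelationship {
  parent: string;          // CI id, e.g. a business service
  child: string;           // CI id, e.g. the app or server it depends on
  type: "depends_on" | "runs_on";
}

// One system of record: every department's workflow (ITSM, HR, security)
// reads the same asset graph instead of keeping its own copy.
const cis: ConfigurationItem[] = [
  { id: "svc-payroll", name: "Payroll", ciClass: "business_service", owner: "HR Ops" },
  { id: "app-payroll", name: "Payroll App", ciClass: "application", owner: "Finance IT" },
  { id: "srv-db-01", name: "payroll-db-01", ciClass: "server", owner: "Infrastructure" },
];

const rels: CIRelationship[] = [
  { parent: "svc-payroll", child: "app-payroll", type: "depends_on" },
  { parent: "app-payroll", child: "srv-db-01", type: "runs_on" },
];

// The impact question every workflow product can answer from the same graph:
// "If this server goes down, which business services are affected?"
function impactedServices(serverId: string): string[] {
  const affected = new Set<string>([serverId]);
  let grew = true;
  while (grew) {
    grew = false;
    for (const r of rels) {
      if (affected.has(r.child) && !affected.has(r.parent)) {
        affected.add(r.parent);
        grew = true;
      }
    }
  }
  return cis
    .filter((ci) => ci.ciClass === "business_service" && affected.has(ci.id))
    .map((ci) => ci.name);
}

console.log(impactedServices("srv-db-01")); // ["Payroll"]
```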
Today, that structure is shifting somewhat. ServiceNow announced that its entire product portfolio is now AI-enabled, with every offering including AI, data connectivity, workflow execution, security, and governance as standard. Not as an add-on. Not as a separate tier. Built in (more on this from a competitive/pricing standpoint, shortly).
The new platform architecture ServiceNow is describing looks like this: EmployeeWorks as the conversational front door, Workflow Data Fabric as the connected data layer, AI Control Tower for visibility and governance, and autonomous workflows that move from assisting people to acting on their behalf. Sitting underneath all of it, and now explicitly named as such, is Context Engine - a new enterprise context solution that connects the relationships, policies, and decision history behind every AI agent action.
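Read as a stack, the announcement describes a request path something like the following. This is my conceptual reading of the layering, with invented names and stand-in functions rather than ServiceNow's actual components or APIs.

```typescript
// Conceptual sketch of the layering described above - an interpretation of
// the announcement, not ServiceNow's actual components or APIs.

interface EmployeeRequest { user: string; utterance: string; }               // conversational front door
interface ConnectedData { policies: string[]; relationships: string[]; }     // data layer / enterprise context
interface ProposedAction { workflow: string; params: Record<string, string>; }

// Workflow Data Fabric stand-in: fetch the connected data relevant to the request.
function fetchContext(req: EmployeeRequest): ConnectedData {
  return { policies: ["laptop-refresh-3yr"], relationships: [`${req.user} -> manager:dana`] };
}

// AI Control Tower stand-in: a governance gate before anything executes.
function governed(action: ProposedAction): boolean {
  const allowed = ["order_laptop", "reset_password"];
  return allowed.includes(action.workflow);
}

// Autonomous workflow stand-in: deterministic execution once approved.
function execute(action: ProposedAction): string {
  return `executed ${action.workflow} for ${action.params.user}`;
}

function handle(req: EmployeeRequest): string {
  const ctx = fetchContext(req);                       // connected data layer
  const action: ProposedAction = {                     // in reality an AI agent proposes this
    workflow: "order_laptop",
    params: { user: req.user, policy: ctx.policies[0] },
  };
  return governed(action) ? execute(action) : "blocked by governance";
}

console.log(handle({ user: "li.wong", utterance: "my laptop is dying" }));
```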
The way to think about this, I'd suggest, is that the CMDB was about knowing what you have, whereas Context Engine is about knowing how your business actually works - and giving AI agents the real-time intelligence to act on that knowledge. Amit Zavery, president, chief product officer and chief operating officer, said:
ServiceNow is redefining how companies realize value from AI, with the capabilities required for enterprise scale. From Context Engine's enterprise intelligence to data connectivity, governance, and execution, everything is included by default, all operating inside the flow of work.
The add-on problem
The "not a separate purchase" framing in today's announcement, I’d argue, is a bit pointed. It’s likely a direct response to something that's been frustrating enterprise buyers for the past two years.
The standard vendor playbook has been to build AI capabilities and then charge extra for them - new tiers, consumption credits, add-on licences, each requiring its own procurement conversation. In some cases the bill arrives before anyone has figured out whether the capability actually works. In the diginomica network's SaaS vs AI micro-pulse survey in February, one CIO described their Google Workspace spend jumping because Gemini had been bundled in - a product they don't actually use - forcing cuts elsewhere to compensate. It's just one example, but it likely reflects a real budget consequence playing out right now in many enterprise environments.
ServiceNow is saying that rather than layering AI on top as a billable extra, it's baking it into the base. The new tiered model spans AI assistance, agentic automation, and fully autonomous operations across the entire portfolio. Every customer starts with the full package. As such, the question of whether AI is included stops being a procurement decision.
For mid-size organizations in particular, the new Enterprise Service Management Foundation is notable, bringing IT, HR, legal, finance, procurement and workplace services onto the platform in weeks rather than months. Robinhood's Jay Hammonds, head of Technology Operations, said:
ServiceNow AI deflects 70 percent of our employee requests before human intervention is needed - across IT, HR, and Legal. We reduced manual effort by 2,200 hours across 1,300 tickets monthly with AI embedded directly into our workflows. And with ServiceNow's new AI-driven offerings, we can bring new teams and acquired entities live in weeks, not months.
Time to value is the thing CIOs in the diginomica network keep returning to as the deciding factor in AI investment. Our January 2026 AI Projects pulse survey found that nearly a quarter of organizations have no AI projects running in production at all. A vendor that removes the barriers between buying and getting value will find buyers receptive, because that gap is a real issue on the ground.
Context Engine is key
The CMDB gave ServiceNow two decades of structured knowledge about enterprise IT environments. Context Engine is the attempt to turn that into something AI agents can actually use in real time.
With 85 billion workflows and seven trillion transactions running on the platform annually, ServiceNow claims it can ground LLM decisions in the specific strategy, approval chains, asset dependencies, and vendor history that define how your organization works - not language in general, but your business specifically. Built on the Service Graph, Knowledge Graph, and data inventory, Context Engine draws on identity relationships, business intelligence, and data lineage in real time, with the stated intention that intelligence compounds with every human and agent decision made.
This is a direct answer to something our CIO network has consistently identified as a primary barrier to AI success. Our November 2025 research with 35 CIOs and CTOs found that poor data quality and disconnected systems - not technical capability - are the primary blockers to AI delivering returns. Our latest data report found that 94 percent of organizations still have siloed data despite years of integration investment, with 30 to 70 percent of professional time consumed by manual reconciliation.
Context Engine is ServiceNow's argument that it has the enterprise context to prevent exactly that. The caveat is that Context Engine is currently available for preview with select customers only, with full availability to be confirmed later.
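To make the grounding idea more tangible, here's a minimal sketch of the pattern as I understand it: a retrieval step that attaches organization-specific facts - approval chains, dependencies, vendor history - to the question before any model reasons about it. The interfaces and lookups are illustrative assumptions, not Context Engine's actual API.

```typescript
// A minimal sketch of what "grounding an LLM decision in enterprise context"
// could look like in practice - a retrieval step ahead of the model call.
// This illustrates the pattern only; it is not Context Engine's actual API.

interface EnterpriseContext {
  approvalChain: string[];     // who must sign off, in order
  assetDependencies: string[]; // what the affected system depends on
  vendorHistory: string[];     // prior decisions involving the vendor
}

// Stand-in for the context lookup: in ServiceNow's framing this would draw
// on the Service Graph, Knowledge Graph, and data inventory in real time.
function lookupContext(subject: string): EnterpriseContext {
  return {
    approvalChain: ["line manager", "procurement", "CISO"],
    assetDependencies: [`${subject} -> payroll-db-01`],
    vendorHistory: ["renewed 2024, SLA breach Q3"],
  };
}

// The grounded prompt: organization-specific facts travel with the question,
// so the model reasons about this business rather than language in general.
function groundedPrompt(question: string, subject: string): string {
  const ctx = lookupContext(subject);
  return [
    `Question: ${question}`,
    `Approval chain: ${ctx.approvalChain.join(" -> ")}`,
    `Asset dependencies: ${ctx.assetDependencies.join("; ")}`,
    `Vendor history: ${ctx.vendorHistory.join("; ")}`,
    "Answer using only the context above; flag anything missing.",
  ].join("\n");
}

console.log(groundedPrompt("Can we approve this contract renewal?", "vendor-payroll"));
```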
Open to developers
The Build Agent skills announcement extends the platform logic outward. From April 15, developers can build with Claude Code, Cursor, OpenAI Codex, Windsurf and other tools, and deploy directly to ServiceNow without leaving their preferred environment.
This reinforces the hub-and-spoke architecture Paul Fipps described when I spoke with him last month. ServiceNow is the governed, deterministic execution hub. The models - Claude, Gemini, GPT, whatever comes next - are interchangeable spokes. Fipps said:
We fully want to take advantage of Large Language Models - these are amazing innovations - but we do it in a way where we abstract the intelligence layer from ServiceNow. We use LLMs for decisioning and reasoning. That part is probabilistic. But the action part in ServiceNow - the workflow part - is deterministic. No guessing.
Opening the developer tooling to every major AI environment deepens that position without requiring developers to change how they work. It's a pragmatic move. ServiceNow stays in the middle, whilst the tools around it become a matter of preference.
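In code terms, the hub-and-spoke split looks roughly like this: the reasoning layer is an interface any model can implement, while the execution layer only runs workflows that are explicitly registered. Everything below is an illustrative sketch of that pattern, not ServiceNow's implementation.

```typescript
// Sketch of the hub-and-spoke pattern Fipps describes, as I read it:
// the reasoning layer is abstracted so any model can fill it, while the
// action layer only ever runs predefined, deterministic workflows.
// Names and interfaces are illustrative, not ServiceNow's.

// Spoke: any LLM provider that can turn a request into a suggested workflow.
interface ReasoningSpoke {
  suggestWorkflow(request: string): Promise<string>; // probabilistic - may be wrong
}

// Two interchangeable spokes; swapping models never changes what can execute.
const claudeSpoke: ReasoningSpoke = {
  suggestWorkflow: async (r) => (r.includes("password") ? "reset_password" : "open_ticket"),
};
const geminiSpoke: ReasoningSpoke = {
  suggestWorkflow: async (r) => (r.includes("access") ? "grant_access" : "open_ticket"),
};

// Hub: deterministic execution. Only registered workflows can run; a
// hallucinated workflow name is rejected rather than guessed at.
const registeredWorkflows = new Map<string, (user: string) => string>([
  ["reset_password", (u) => `password reset issued for ${u}`],
  ["open_ticket", (u) => `ticket opened for ${u}`],
]);

async function act(spoke: ReasoningSpoke, user: string, request: string): Promise<string> {
  const suggestion = await spoke.suggestWorkflow(request); // probabilistic part
  const workflow = registeredWorkflows.get(suggestion);    // deterministic part: no guessing
  return workflow ? workflow(user) : `rejected: "${suggestion}" is not a registered workflow`;
}

// Same hub, different spokes.
act(claudeSpoke, "li.wong", "forgot my password").then(console.log);           // password reset issued...
act(geminiSpoke, "li.wong", "need access to the finance app").then(console.log); // rejected: "grant_access"...
```

The practical consequence of that split is that swapping one model for another changes the quality of the suggestions, never the set of actions the platform will actually take.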
The pricing question isn't closed
However, it’s worth being cautious when it comes to pricing, because "not a separate purchase" doesn't end the pricing conversation - it just shifts it in the right direction.
What ServiceNow has done is remove procurement friction at the entry point. AI governance, data connectivity and workflow execution are now in the base package. But heavier agentic usage - autonomous workflows running at scale, AI specialists handling thousands of cases - will likely still draw on consumption. The model remains seat-based plus consumption. The industry as a whole hasn't decided whether outcome-based pricing actually works, but it's something I think ServiceNow is considering.
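A quick worked example shows why the distinction matters. The numbers below are entirely made up - neither ServiceNow's rates nor its metering units - but they illustrate how the predictable seat component and the variable consumption component behave as agentic usage scales.

```typescript
// Illustrative arithmetic only - the prices and unit definitions here are
// invented to show the shape of a seat-plus-consumption model, not
// ServiceNow's actual rates or metering.

interface PricingInputs {
  seats: number;
  seatPricePerMonth: number;   // the predictable part procurement can budget
  agentActions: number;        // the variable part that scales with agentic usage
  pricePerAction: number;
}

function monthlyBill(p: PricingInputs): { fixed: number; variable: number; total: number } {
  const fixed = p.seats * p.seatPricePerMonth;
  const variable = p.agentActions * p.pricePerAction;
  return { fixed, variable, total: fixed + variable };
}

// Light agentic usage: the bill is dominated by the predictable seat component.
console.log(monthlyBill({ seats: 500, seatPricePerMonth: 100, agentActions: 2_000, pricePerAction: 0.5 }));
// { fixed: 50000, variable: 1000, total: 51000 }

// Heavy autonomous workloads: the consumption component starts to dominate,
// which is where the pricing question reopens.
console.log(monthlyBill({ seats: 500, seatPricePerMonth: 100, agentActions: 400_000, pricePerAction: 0.5 }));
// { fixed: 50000, variable: 200000, total: 250000 }
```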
When I pressed Fipps on this directly, he was candid about how unsettled it remains:
On the SaaS versus AI question - honestly, I can't tell the difference anymore. The same players you'd describe as AI companies now have seat-based models. OpenAI and Anthropic both have seat-based models. So what are they?
His point was that customers need predictability, and that consumption models create the kind of unpredictable bills that procurement teams resist. ServiceNow's current answer is seat-based plus consumption - predictability upfront, additional cost linked to additional value. But Fipps was explicit that this is provisional:
We'll probably have to think about more innovative pricing models as our customers pull us in that direction.
I’d argue that today's packaging simplification, welcome as it is, is likely an intermediate position. CIOs want transparent pricing models that help control ongoing IT costs. That's a higher bar than "AI included in base package." Outcome-based pricing - paying for the work AI actually completes rather than for access to the capability - has been the theoretical destination for years, but making it workable is still a pipe dream in many respects.
My take
What I keep coming back to is the architectural significance of today's announcement relative to where ServiceNow has come from. The shift from a CMDB-centric platform with workflow products wrapped around it, to a platform defined by a conversational front door, connected data, governance, and autonomous execution isn't just a product refresh. ServiceNow is laying out a platform redesign built around how AI agents work rather than how human users work. Context Engine, if it delivers on its preview promise, is the piece that makes the whole thing possible - enterprise context as the foundation rather than configuration data.
The "not a separate purchase" positioning is smart commercial framing for the moment we're in. CIOs are consolidating vendors, scrutinizing AI spend, and demanding shorter runways to demonstrable value. Removing procurement friction while compressing deployment time addresses at least two of those pressures directly.
The open question, as Fipps himself acknowledged, is pricing at scale. Knowledge 2026 in Vegas in a couple of months will be the moment to press on whether the packaging holds when customers start running serious agentic workloads. We'll be on the ground to find out.