Can AI infrastructure costs be a value driver? Hot topics and customer views from Oracle AI World 2026

By Jon Reed April 16, 2026
Excerpt:
Oracle recently launched 22 agentic apps. Meanwhile, the markets do their pinball thing, and software stocks rise (a bit) amidst the noise. But what does this mean for customers? At Oracle AI World New York City, I dug in. What follows is a best-of from those talks, along with my hot takes.

(Oracle's Chris Leone speaks at Oracle AI World NYC)

A few weeks ago, timed with Oracle AI World London, Oracle announced 22 new agentic apps. My diginomica colleague Derek du Preez issued his on-the-ground review/analysis.

I later shared more context, via a virtual sit down with Steve Miranda, Oracle EVP, Application Development (Want AI outcomes? Yes - but how do customers get there? Inside Oracle's agentic apps news with Steve Miranda).

Last week, at Oracle AI World New York City, I pressed further into Oracle's AI play, via a series of keynote sessions, on-stage customer interviews, and one-on-ones (Oracle also added more agentic apps news updates, including agentic apps for finance and supply chain).

How does AI infrastructure impact pricing?

Across the big AI providers, pricing is a moving target - and customers want answers. Oracle makes the argument that managing AI workloads on their own infrastructure (OCI) allows them to better control AI costs, thereby making customer pricing more stable/affordable. 

I dove into the specifics with Nathan Thomas, SVP of Product Management at Oracle Cloud Infrastructure (OCI). Thomas says AI cost efficiency starts with structural advantages. Such as? Moving virtual components out of the hardware: 

I've been at AWS. I've been at Google. I didn't really believe it or understand it fully until I got [inside Oracle] and started looking at it... It really is the case that we have spent a long amount of time building structural advantages in OCI from the ground up... It's a huge amount of effort to have pulled the virtual components fully out of the hardware.

Multi-tenant scale is another factor - one that can impact smaller customers' inference workloads as well. Thomas:

We can do that with bare metal compute that has nothing on it from OCI; we can do it with RDMA networking, which is multi-tenant... Those percentage differences, if it's 10 or 20 or 30% cheaper across those attributes, and you're writing significant amounts of business against that, it adds up fast for those customers doing large-scale clusters. It also means a meaningful bill savings for smaller customers doing inference as well. 

Oracle customers don't have to use the latest frontier models with the highest inference costs; they can swap models out. Thomas explains:

There's also the components we have for the OCI Enterprise AI offering we just launched - things like all of those models that are available to you in OCI via either token interfaces, or you can do direct installation of those models. We're seeing customers realize that they don't have to toss every single query into a chat engine with online frontier models.

When you bear down on AI use cases, you learn that frontier models aren't needed for many enterprise workflows. It's more about the data context you provide than the ultimate model performance/scale. When open source models are sufficient, that brings cost savings at inference time: 

They're saying, 'Look, I can cost optimize here.' The fact that we have easy ways for them to consume the latest OSS model, or the latest Llama model in those environments, and run it locally - which also has some nice benefits for them, potentially on data that they want to have stay resident inside of their tenancy. Those all add up; those numbers really matter for those customers.

One more cost control ingredient: smaller/more affordable models running dedicated workflows. Thomas:

There's a lot of use cases we see where customers are absolutely getting significant value out of small language models, or out of smaller models that they can run locally.
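Thomas's point about matching model size to the task can be sketched as a simple query router. This is a purely hypothetical illustration, not Oracle's or OCI's API: the model names, keyword heuristic, and threshold below are all invented for the sketch, which just shows the shape of "cheap local model for routine work, frontier model for the rest":

```python
# Hypothetical cost-control router: send routine queries to a small,
# locally hosted model and reserve the frontier model for complex ones.
# Model names, keywords, and the threshold are illustrative only.

ROUTINE_KEYWORDS = {"status", "lookup", "classify", "extract", "summarize"}

def pick_model(query: str, max_routine_words: int = 200) -> str:
    """Return the model tier a query should be routed to."""
    words = query.lower().split()
    # Short queries that match a routine task keyword can stay on the
    # cheaper local model; everything else escalates to the frontier tier.
    if len(words) <= max_routine_words and ROUTINE_KEYWORDS & set(words):
        return "local-small-model"   # e.g. a locally hosted OSS/Llama model
    return "frontier-model"          # e.g. a hosted frontier chat model

print(pick_model("classify this supplier as nonprofit or private"))
print(pick_model("draft a multi-year AI investment strategy for the board"))
```

In practice the routing signal would come from the workflow itself (a dedicated extraction step always hits the small model), not from keyword matching - but the cost logic is the same: inference on the small model is the default, and frontier calls are the exception.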

Controllable/predictable AI costs matter to customers - but that's only one aspect of AI value. The on-stage customers at Oracle AI World shared their data transformation adventures. That's critically important - if you can't gain value from your data/process quality pursuits, momentum will lag - no matter what shiny AI futures might be enabled by those data efforts.

Security is job one, and there is no data trust without accuracy - Oracle customers on AI

During the keynote, Mark Hura, President of Oracle Global Field Operations, interviewed Rick Hair, CIO, Corporate Technology, M&T Bank. Hura raised the issue of data trust/security: 

You mentioned trusted data and access. One of the most important things is getting all of this data in a place that can be utilized for AI capabilities. As you think about access to this information, Juan just talked about security at the database layer. Chris was talking about the agentic workflows built-in, that have the same security governance, access [controls], identity. How important is that to a bank? Because when we talk about trusted data, it can't be a guess. It's got to be accurate.

Hair says data security is job one: 

We want to embrace AI. We have to do it in a [controlled way] for sure - and so security is going to be of the utmost importance. That's the posture we're taking across the board at M&T Bank as well. We'll be able to leverage AI across the bank, but then take it use case by use case, and really enable those use cases and then learn from it - and now we can deploy it further and further. So I think this provides us a great opportunity at our bank, with having all of our data within the Oracle ecosystem to support a lot of different business lines.

Next up, Hura spoke with Terry Robbins, CTO of the STO Building Group. Robbins spoke to the value achieved through vertical SaaS, and what comes next: 

The success of a construction project is based on schedule. The schedule drives how people schedule materials and schedule the turnover of the entity. So scheduling is the lifeblood of what we do. Primavera Cloud allows us to handle very complex schedules. We do very, very complex projects at times. Without that sort of a tool, it would be very difficult to accomplish that.

What we're looking for is: how can we take the schedule, which is pretty much a standalone entity for us right now, and then envelope that into our other solutions around resource planning, around change management, submittal changes, anything that impacts the project.

Hura asked Hair: AI is touching every industry, and it's clearly touching your industry as well. You started to mention: we want to see some of the capabilities embedded within Oracle. Are there other areas of business operations or capabilities where you see AI impacting your industry? Hair said that the impact of AI is pervasive; the challenge is getting it right across applications - and new point solutions.

Saying that AI is touching our industry is understating it; AI is enveloping our industry. The challenge we're working through is how AI can be applied to our enterprise applications like HCM, Primavera 6, all the new solutions that are coming to the marketplace that are point solutions, that are AI-driven, drone capture - just a whole host of opportunities there. And then there's the AI to make our people more efficient in their day-to-day, right? The AI assistants. So we're focusing on all those verticals and more.

Hair says if AI can't scale in an integrated way, it's not going to achieve peak value. 

The challenge is: how do you bring them all together, right? So that you're getting the benefit of the AI solutions at scale. Scale is probably the biggest challenge we have.

Can partners disrupt with AI agents - in a good way? 

I thought STO Building Group raised a crucial point on point solutions. Even if you're a customer building/consuming agents on Oracle's end-to-end platform, there will probably be some AI point solutions in the mix. So wouldn't it be better for a vendor like Oracle to have partners building those vertical AI apps/agents, rather than third parties trying to leverage Oracle customers' workloads via API calls? I put that question to Chris Leone, EVP of Development for Oracle Cloud HCM and SCM. To say he agreed would be an understatement:

Anthropic just came out with a hosting service... Basically, you can build these agents and host them, because it's a big problem. That's like 1/5 of the problem that they're solving. You still have to connect them to the right data source; you still have to have context; you still have to have security; you still have to manage permissions; you still have to reason over unstructured data. All that: connect to the right MCPs, APIs - all of these things are super-complicated.

Leone contrasted that with building on the Fusion Apps platform: 

Think about deploying an agent in Fusion Applications, or an agentic application, which is an application that reasons across a broad surface area of Fusion. You build the agentic application; you deploy it; you are done. It has access to all the APIs; it has access to all the work, all the permissions, all the security - and oh, by the way, it runs on our SaaS and our cloud, and all the capacity is delivered for you. All that goes away. So yes, it is a partner play, and partners see it. They're building on the marketplace, and they're just getting started. Agentic applications will take it to the next level. 

He says Oracle partners are about to take this to volume eleven: 

I can't tell you what we're building next, but in the next three months, you come back and talk to me, and your head - and our partners' heads - are going to explode. They'll be able to deploy agentic applications in minutes that they could never even imagine in the past, because we've done the work.

I don't know if I need my head to explode, but you can count on me to monitor partner app building. If your partners don't shake things up with vertical AI apps, then somebody else will definitely give it a go.

My take - AI readiness is a thing; trusted partners ensure you won't lag behind

That's a lot for customers to take in. Oracle's official numbers are that over 7,000 customers have deployed Oracle's generative AI services, or have AI agents in production today. But as we learned from the two keynote customers, many are still building up their data strategy, and embarking on Fusion AI.

"AI readiness is a thing" in 2026. This isn't a technology bake-off; vendors that help customers get to quality AI are the ones that will earn the trust. With that in mind, I asked Natalia Rachelson, Group Vice President, Cloud Applications Development, for her advice to Oracle customers: 

Get started today. Get started with what is available in Fusion, because it's probably one of the safest ways to experiment with AI, and understand how it works. Our message is also: start small, but go fast. Our other message is: identify where your biggest pain points are, or areas of friction. It could be as simple as: for all the suppliers, we have to check whether they're a nonprofit, a government, or a private organization, right? Something as simple as that takes manual work today, for example. Have an AI agent do it.

The other piece of advice: lots of companies have formed AI governance boards, and they're sort of hamstrung by these boards. It's a slow-moving process. What we say to customers: get an approval for Fusion at large; do not get approvals feature by feature, because you're going to fall behind really quickly.

That's a fascinating point by Rachelson on AI governance - and managing AI risk. I would agree that when an AI governance board evaluates Oracle's data/privacy/governance model, it should sign off on it only once - not per use case. However, within the broad scope of Fusion, there are agentic scenarios that bring more need for risk mitigation than others (the EU AI Act has one of the most useful classifications of organizational AI scenarios based on risk level). 

Rachelson's point on not getting stuck in red tape is well taken; I would also argue for education (and careful use case design) around the varying risk profiles for individual agents (to take a purely hypothetical example not tied to Oracle, an agent that scans inbound leads and sends marketing materials obviously has different implications than an agent that generates performance review language). Therefore, customers should consult with their chosen AI vendors on use case design, even after approving their data privacy framework. But Rachelson's lesson also resonates: don't bog yourself down in committee land. AI is really an iterative technology, rather than an abstract blueprint exercise.

During my AI World half-time review podcast with Rebecca Wettemann of Valoir, we talked about how Oracle's Fusion Apps AI play is a slow burn when it comes to market attention. But as I've said ad nauseam, vendors with a customer long game around data platforms have a good position, even as sensationalized "news" dominates the market cycle. Events like AI World provide big clues into that long game. Despite the SaaS naysayers, the Fusion Apps platform factors into that future heavily. 

At the moment, Oracle is leading a software stocks rally, but I expect more market fluctuations as AI non-events are conflated with real breakthroughs. Will the longer AI/data value game ultimately prevail? That's the sneaky big story to watch. 
