The CCE 2025 AI review - how do we account for the gap between vendors and customers on agentic AI?
- Summary:
- CCE 2025 brought the enterprise AI conversation to a head. Sometimes it was jugular; sometimes it was visionary. But was it a more "intelligent" AI discussion? If so, why? And, for the first time, I break out the components of earning AI trust.
During fall event season, we got the gut check we needed - thanks to customer projects with vendors' generative AI tools/apps. We saw similar moves towards agentic AI pilots.
Still, the gap between how customers think about AI - and how vendors talk about it - persists.
Why does this gap matter? After all, since the beginning of enterprise software, vendors have had a habit of looking ahead, and promising shiny new toys that aren't ready for prime time.
More than any other tech purchase we've seen, AI is about trust. Advantage goes to the vendors that earn it. Yes, AI will be embedded in most enterprise software. But that doesn't mean every vendor will enjoy a strategic AI relationship with customers. Some vendors will earn the upstream discussions on strategy, platform, and orchestration. The rest will become API gateways or secondary automation tooling.
Forfeiting AI trust with buzzwords - "We don't need to invite AI agents over for Christmas dinner"
What's the fastest way to fall down in the AI stack, and forfeit trust? By widening the gap with your own customers. As I told Esteban Kolsky during our CCE/AI review podcast:
This was really revealed to me last spring when I had a customer at an analyst event tell a vendor: 'Please stop talking about making everything autonomous, and start talking about how you can help my people do their jobs better.'
We saw this narrative tension at Constellation Research's (15th annual) Connected Enterprise event. During a standout panel, "The Last Generation of Managers to Manage Only Humans," moderated by Constellation's Holger Mueller, I had a chance to press the issue. All three panelists - Corregan Brown, Director of Engineering, CTS Restaurant Experience, Chick-fil-A; Andrew Nebus, Senior Director, Defense Programs, ASRC Federal; and Patrick Naef, Managing Partner at Boyden executive search - were forthcoming about AI projects and plans.
I thought: Hey, let's ask these practitioners for their take on the "digital workers/labor" buzzphrase so many enterprise software vendors have been pushing! So I did - but the panelists flatly dismissed my question. The bluntness surprised me, but I welcomed it too. Instead of joining the "digital workers" debate, these AI practitioners viewed the phrase as an irrelevant distraction. One of the panelists put this forward:
A lot was said the past two days on the subject. And when I reflect on what I heard, what I believe is: we should not too strongly human-ify AI... I'm not too sure that we need to invite the agents now for Christmas dinner, and give them bonuses and feedback sessions and so on.
I think we should use technology where it's good at, and this is still today, analytical methods. Yes, I know it's moving more towards probabilistic, but I think we need to combine the strengths of a human being with the strengths of machines. I think we are particularly good with intuition and gut feel, but human beings are not particularly good in analytical methods. And if we can combine the two, I think this is sort of blending the two.
Customers may be united on their buzzword resistance, but they are moving at different speeds. Which led to another standout quote from another panelist:
We've been increasing these layers of abstraction with process, with more advanced technologies and so on. And I think that with agents, we're going to see the same thing. So AI agents are going to come in on the ground floor, and they're going to replace a lot of entry-level tasks. But what is also going to happen is your first level humans are going to start to think more like managers.
It's not dissimilar to what computers did, say, for sales people, right? They used to have secretaries and analyst assistants, and now they have to do it all themselves.
Destruction of entry-level jobs? A debated/important issue. Add to the mix of CCE takeaways: Is AI flattening management hierarchies? How about the junior-worker-as-process-manager? In keeping with the risk/governance and 'downside-of-AI' themes that came up when things got too dreamy, the panelist added:
So there's promise in that, but that's also a chance for overwork and burnout.
Defining the components of AI trust - and how to earn it
On its own, the word "trust" is a fuzzy and amorphous concept. But we've made progress in defining the components of AI trust. One big caveat: I think it's healthy to distrust AI itself. If we treat employees with "zero trust," the same should be true with AI.
But trusting the vendor that implements your AI? That makes sense - if we can define it. Start with the obvious:
- educational resources for customers
- transparency in AI pricing, architecture, and data privacy/movement
But there is more. Transparency also demands greater explainability. Here, there is progress. Contrary to the AI hype-gasm, so-called "reasoning" doesn't do much to solve the LLM black box problem. But most enterprise vendors rely on "context" to serve up customer-specific data and resources, and that documentation can be sourced and displayed in agentic output as linked documents, via tool calls, RAG, etc. Here are the components of AI trust that I look for:
- AI agent evaluation and observability tools for customers - most vendors are falling down in this area, but a few are excelling.
- Present a compelling industry roadmap, not just for AI, but for managing volatility and responding quickly to changing demand signals. AI has the biggest enterprise impact when it's combined with industry know-how.
- Demonstrate a thriving "ecosystem" around your AI solutions, including expert partners able to build out last-mile AI integrations and industry apps.
- Acknowledge the pros and cons of LLMs by "constraining" them into compound systems, combined with other forms of bulletproof automation, machine learning classification tooling, and appropriate human supervision. It's a myth that keeping humans in AI workflows destroys ROI. Yes, those pesky humans-that-know-their-sh@t may limit the highest end of ROI, but they also hedge against negative AI outcomes - and unwanted public relations spank tunnels (We delve into these 'constrained' LLM architectures, and a 'context engineering' debate, in our AI review podcast from CCE).
- Focus on decision intelligence via a strong data platform play that enables better knowledge work across the organization - from self-service dashboards to hard-coded alerts to AI agents. If you can't help your customers with data quality (and the metadata framework) AI agents need, trust will be deflected to the parties that can (Also see: CCE 2025 - can we get to decision intelligence? Is AI disruptive to research? Kolsky podcast part two).
- Brute force autonomy is out - granular "autonomy toggles" are in. Yes, some companies will opt for blanket agentic autonomy (have fun with that), but most customers want granular autonomy on a per-workflow basis. They want to be able to toggle or ratchet up autonomy at their own pace, not the vendors'. Give customers the tools to balance compliance with internal efficiencies on their own terms.
Data plumbing and the "AI activation layer" - an emerging differentiator
While every vendor will tell you lovely things about their data platforms, here's the reality: some are much better at helping customers with data plumbing, transformation, and LLM-friendly annotations than others. As Steve Lucas, Boomi CEO, told me during our interview:
There's this massive gap in the middle, and we're just calling it the AI activation layer, which is, 'How do you connect your data,' if you're feeding it into a lake, or you need more real-time, or you need whatever - how are you governing? We literally call that the activation pillar, which is everything from agent creation, registration, governance, control, policy management, and now agent orchestration.
'Now that I know about my agents, how do I activate them? How do I ensure that I'm not driving the most expensive prompt into the most expensive model?' All that has become a real thing. So we have about 50,000+ real AI agents, deployed in production in our customer base, doing those things: integration, automation-type work. It's real.
Yes, most vendors don't have nearly all these components of trust... Welcome to my semi-subversive game plan: Maybe this will reduce some of the AI carnival barking as we bear down on these issues. During CCE, we ventured into these issues, but didn't fully hash them out. Thus this post.
My take - productive clashes are healthy, the gap between hype and enterprise reality is not
At CCE, we saw a different kind of gap - one less between vendor hype and customer reality. Instead, we saw a productive clash between AI enthusiasts and enterprise-grade tech expectations.
For me, this came to a head while moderating a vibe coding panel (The Post Vibe Coding World - What's the Future of Product Development). One panelist was speaking to the energizing effect of vibe coding on the startups they work with. Which led to this audience question, via a product engineering director from a sizeable (and regulated) enterprise:
I'm a huge proponent of vibe coding platforms. But my question is: have you seen anyone building a platform that handles things like version control, lifecycle management and integration to computational systems - integration as first principles?
Quite honestly, our team is working on this, and we kind of decided it was just too much effort... I'm pretty sure a lot of companies are working on it. I just don't know who's actually working on it.
We left CCE with more AI questions than answers. There is vision; there is risk management, and then there is the all-important 'how' of project results - at scale. The latter is what some attendees wanted more of - but three days of big ideas fly by.
Jugular CCE zingers that linger:
- Will AI flatten org hierarchies? If so, how will we thrive in that new org, and not just experience a new kind of super-productivity hamster wheel?
- "The real cracking use cases are the ones we haven't thought of today."
- The waterfall-busting appeal of iterating with AI, amidst a culture of experimentation - juxtaposed against heavy-handed "AI First" mandates (e.g. Meta's questionable exhortation to Metaverse employees to 5x their individual productivity).
I also moderated a panel on finding human purpose amidst the machines, based on my pre-conference DisrupTV appearance. Constellation's Ray Wang wanted us to emerge with a manifesto on this topic, a decisive call to action. We didn't get to that, but we did get a collective sense of how we are navigating (I have a seven point review on this topic - that's for another time).
But there is an enterprise side to this also. How we define AI purpose in our organization matters. If we are just 5xing ourselves into last-super-employees standing, I don't like our chances. But as Mueller's "managing workers" panel noted, you don't have to let AI dictate your organizational culture. What if you have a goal like this one?
Agents will allow us to take care of the normal processes that are mundane, and allow managers to focus more on that culture, that person, that training, the 'why us' mission, rather than 'How do I get this payroll thing approved?'
That gets us closer to the enterprise AI that I am personally rooting for. But it's not a fait accompli by any means - so how do we get there? A better AI conversation is a good place to start - one with less evangelism, and more evaluative rigor. As Kolsky said during our AI podcast, embedded below:
Conversation is the value. Everything else is just irrelevant, right? I think that the biggest takeaway so far, in the last two days, is that the discussions around AI are becoming more intelligent. And I'm saying that facetiously, like intelligence is more intelligent. We're asking better questions, and we are depending less and less on what's been given to us.
At its best, CCE achieved that level of conversation. Now let's see if we can spark that on our projects.