Your AI agents can talk to each other - but are they saying anything useful? Confluent Intelligence aims for insight
Summary:
Confluent's latest Confluent Intelligence features include support for both Anthropic's Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol within Streaming Agents, plus a new multivariate anomaly detection capability. All technically credible additions. But the more difficult challenge is about whether enterprises have the data infrastructure, governance maturity, and organizational readiness to make agent coordination actually work.
The industry is starting to converge on shared standards for agentic AI - and Confluent has just announced support for both Anthropic's Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol in its Streaming Agents capability. Similar announcements have been coming from other vendors too, and that highlights something important about where we are in the agentic AI maturity curve.
We're moving, slowly, from a period of everyone building their own thing to something that at least resembles an agreed-upon set of rules. The two protocols are doing distinct jobs: MCP handles how agents connect to and consume real-time data; A2A handles how agents communicate and coordinate with each other. Together, they're starting to look like the plumbing infrastructure for enterprise agentic AI.
When I spoke with Matías Cascallares at MWC this week, he said that the industry right now is full of what he called "self-made craftsmanship" of agents - organizations building their own orchestration approaches, in their own way, largely disconnected from each other.
A2A, he argued, is the industry's attempt to impose some order on that:
Initiatives like agent-to-agent show the industry is moving to a higher level of maturity. It is like, okay, this is a bit of a mess, let's try to standardize. What pretty much all vendors need to do is embrace those standards.
If everybody speaks the same language, the interaction of all these agents becomes much easier.
That's broadly true, and it's progress. But it raises the question that I think matters far more right now - what are those agents actually saying to each other, and how good is the information they're passing around? And are organizations ready for changes in how they work?
The announcement
Confluent has packaged its latest intelligence capabilities under the Confluent Intelligence umbrella. The headline addition is dual protocol support within Streaming Agents - Anthropic's Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol - now available in Open Preview. The two are often conflated, but MCP is the data connection layer, through which Streaming Agents connect to real-time data streams and feed that context to AI systems, whilst A2A is the coordination layer, which allows agents to communicate with, trigger, and share outputs with other agents. Confluent's argument is that you need both - continuous fresh context flowing in via MCP, and the ability to orchestrate what agents do with that context via A2A.
The practical result is that Streaming Agents can connect to agent frameworks like LangChain, data platforms including BigQuery, Databricks, and Snowflake via MCP, and then trigger and coordinate workflows in enterprise platforms like Salesforce and ServiceNow via A2A.
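Both protocols sit on JSON-RPC 2.0, which makes the division of labor easy to see on the wire. The sketch below is illustrative only - the method names follow the public MCP and A2A specifications, but the parameter shapes are simplified assumptions, not real SDK calls:

```python
import json

# Illustrative only: the JSON-RPC envelopes below mimic the public MCP and
# A2A specs, but the payload fields are simplified assumptions.

def mcp_context_request(stream: str) -> str:
    """MCP side: an agent pulls fresh context from a data source."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "resources/read",        # MCP: context flows in
        "params": {"uri": f"stream://{stream}"},
    })

def a2a_handoff(task: str, payload: dict) -> str:
    """A2A side: an agent hands a task to a peer agent."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 2,
        "method": "message/send",          # A2A: agents coordinate
        "params": {"message": {"role": "agent", "task": task, "data": payload}},
    })
```

The point of the split: MCP traffic is about what an agent knows, A2A traffic is about what agents do together - two different envelopes for two different jobs.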
The second addition is Multivariate Anomaly Detection, a new machine learning function within Confluent's Flink-based compute layer, currently in Early Access.
Traditional anomaly detection assesses single metrics in isolation - a temperature spike, say, flagged as unusual because it exceeds a threshold. The multivariate version looks at multiple metrics together and understands the relationships between them. As Cascallares explained, if temperature is rising, but so is footfall in the same physical space, that changes the interpretation. More context should mean better signals and fewer false positives. These ML functions run as Apache Flink jobs, which means they inherit Confluent's full governance and data lineage capabilities.
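To make the distinction concrete, here is a minimal sketch of multivariate scoring using Mahalanobis distance - a standard statistical technique, not necessarily what Confluent's function implements. A temperature rise that tracks footfall scores low; the same rise on its own scores high:

```python
import numpy as np

def mahalanobis_scores(X, baseline):
    """Score rows of X against a baseline window of 'normal' readings.

    A univariate check flags any single metric that crosses a threshold.
    Here, the baseline's covariance captures how metrics move together,
    so deviations that respect those relationships score low.
    """
    mu = baseline.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(baseline, rowvar=False))
    diff = X - mu
    # Squared Mahalanobis distance per observation
    return np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Baseline: temperature and footfall rise and fall together
rng = np.random.default_rng(0)
footfall = rng.normal(100, 10, 500)
temp = 18 + 0.05 * footfall + rng.normal(0, 0.2, 500)
baseline = np.column_stack([temp, footfall])

correlated_spike = np.array([[24.0, 120.0]])  # hotter, but busier too
lone_spike = np.array([[24.0, 100.0]])        # hotter at normal footfall

s_corr = mahalanobis_scores(correlated_spike, baseline)[0]
s_lone = mahalanobis_scores(lone_spike, baseline)[0]
```

Both readings show the same temperature, but only the second one - warm with no crowd to explain it - gets a high anomaly score.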
Live, usable, governed
More interesting than the product announcements themselves was the argument Cascallares was making about what agents actually need in order to work in production. Beneath the A2A framing, there are three distinct priorities, according to Confluent.
The first is about freshness. Cascallares said:
The ability to interact with information that is fresh...
As we know, relying on data that is two hours old, one day old, or in some cases one month old, reduces any effectiveness gain from AI adoption. That’s the core of the data streaming argument, and it's one Confluent has been making for years. But it’s more pertinent in an agent context, because an agent making decisions or coordinating with other agents based on stale data isn't just inefficient - it's potentially a corporate risk issue.
The second is about usability. When I asked about the pain points customers are raising, the answer was consistent: too much wrong, missing, or unclean data feeding into their AI projects. Confluent has long argued for a ‘shift left’ approach, where organizations clean and process data as close to the source as possible, rather than inheriting the mess downstream. Confluent provides Single Message Transforms (SMTs) at the connector level for lighter-touch transformations, and Flink for more complex processing and pre-aggregation. As Cascallares put it:
If you feed your agents with garbage, they will probably call other agents in a very orchestrated way - but still with garbage.
The third claim - and arguably the most important - is about governance. Confluent, alongside many other vendors, is making a play to be the governance layer for agent interactions. When I asked Cascallares directly whether Confluent was positioning itself as an orchestration or governance layer for agents operating across different platforms and data sources, he walked through the full chain of capabilities in the platform: governance starting at the Connect layer, continuing through Kafka with access control and Schema Registry, through Flink processing, all the way out to Tableflow - which then exposes data in open formats like Apache Iceberg. He said:
If we think about our end-to-end governance - it pretty much starts at the data ingestion layer. It depends on how you ingest data into our platform. If you are using our Connect layer, with its out-of-the-box connectors, you start governance there from the outset. If you have your own application that is producing data to Kafka - not a connector out of the box - you can still use our governance tools. You can instrument your producer applications to start capturing the breadcrumb of the data.
Then, of course, once the data lands in Kafka, you have full governance: access control, data governance. In addition to that, we also have schema governance with products like Schema Registry and all our libraries that support it. When you need to process the data, we have tools like Kafka Streams and Apache Flink - you have the full flavor of governance.
There's also a point-in-time data lineage view, allowing teams to see not just how data is flowing now, but how it was flowing seven days ago. Every agent built on SQL, he noted, runs as a Flink job. This means it inherits all of that governance coverage automatically. The immutable log, he said, is "our bread and butter, our secret sauce" - and for auditability and replayability in an agent context, that's a potential lifesaver.
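That replayability claim rests on a familiar event-sourcing property: with an append-only log, state at any past moment can be rebuilt by replaying events up to that timestamp. A toy sketch of the idea - the general pattern, not Confluent's implementation:

```python
from datetime import datetime, timedelta

# Toy event-sourcing sketch: an append-only log of agent actions.
# Historical state is recovered by replaying events up to a cutoff,
# which is what makes an immutable log auditable.
log = [
    (datetime(2026, 2, 1), {"agent": "pricing", "action": "set", "value": 10}),
    (datetime(2026, 2, 5), {"agent": "pricing", "action": "set", "value": 12}),
    (datetime(2026, 2, 9), {"agent": "pricing", "action": "set", "value": 15}),
]

def state_as_of(log, cutoff):
    """Replay the log up to 'cutoff' to recover state at that point in time."""
    state = {}
    for ts, event in log:
        if ts <= cutoff:
            state[event["agent"]] = event["value"]
    return state

# "How was it flowing seven days ago?" - replay to that point in time
now = datetime(2026, 2, 10)
past = state_as_of(log, now - timedelta(days=7))  # state as of Feb 3
current = state_as_of(log, now)
```

For an auditor asking why an agent acted as it did last week, this is the difference between reconstructing the exact inputs it saw and guessing from current state.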
What our CIO network is telling us
The challenge with all of this is that Confluent's argument is technically sound, but organizationally, most enterprises still have work to do before they can capitalize on this sort of setup.
In our January 2026 micro-pulse survey of 124 technology leaders across our network, the dominant themes around AI implementation experiences were challenges in production and scaling (flagged by 18 respondents), governance and risk management concerns (15), and human factors and change management (12). The overall sentiment was 40% negative, 35% neutral, and only 25% positive. One respondent described the experience as being “a lot of work to take a POC to production”.
This speaks to what Cascallares was really describing when I pressed him on the change management question. His answer focused on Confluent's deployment flexibility - fully managed across all three major cloud providers, or on-premise via Confluent Platform and the newer Confluent Private Cloud for organizations with data sovereignty requirements. It's a sensible answer to a technology question. But the organizational question runs deeper still.
Confluent's own research, the Quick Thinking 2.0 report released this week based on a survey of 200 UK business leaders, provides some useful context here. Of those surveyed, 62% say they now use AI to make the majority of their decisions, but 71% admit that data is already out of date by the time it reaches them. That contradiction is the entire Confluent argument in a single statistic. And 91% say they would feel more confident in their decisions if they had access to real-time data. The organizational readiness to build and maintain real-time data use for AI-enabled environments is a real challenge.
My take
The "your agents need live, usable, governed data" argument is what enterprises have been working towards for decades, and Confluent is building out capabilities in its platform to support the art of the possible at this moment in time.
The bigger issue of ‘shifting left’ to sort your data, adopting streaming-first data infrastructure when data is held in organizational silos, dealing with budgets sitting with business units, and managing teams that aren't built for event-driven architectures, isn't a technology problem. It's an organizational change that requires a huge amount of focus and discipline, not just tech investment. And as our network data consistently shows, that's where AI ambition struggles.
What Cascallares left me with, and what I think is genuinely useful advice for enterprise buyers (and has been for years): start with your data. Check its quality. Understand your compliance requirements. Then decide on models, approaches, and whether agents are right for your use case. In a market full of agents being deployed on top of fragile, stale, ungoverned data foundations, that's a solid suggestion.