Why is enterprise AI stuck? OpenSearchCon Europe 2026 says the bottleneck has moved from the model to the data

By Alyx MacQueen April 16, 2026
Three keynote speakers, one consistent argument. The hard part of enterprise AI in 2026 is not the model – it is getting the right data, with the right context, into the right place at the right time. OpenSearch thinks it has an answer.

Bianca Lewis at OpenSearchCon Europe

OpenSearchCon Europe kicked off in Prague this morning with a good look at where open source search is heading. Nobody on stage spent their time on Large Language Models (LLMs). That absence was informative.

For two years, enterprise AI coverage has been dominated by model size, parameter counts, and which cloud provider had the most Graphics Processing Units (GPUs) under contract. The OpenSearch Software Foundation used its opening keynote to argue that the bottleneck has moved. It now sits in the data layer, in query intent, and in the gap between what a user wants and what a search system has been told to retrieve.

The bottleneck has shifted

The conference opened with Bianca Lewis, Executive Director of the OpenSearch Software Foundation, in conversation with Jim Curtis, Director of Data, AI and Analytics at S&P Global. Curtis has just released new research on search and vector databases, with statistics that made the point: 35% of enterprises cite insufficient data access as a significant blocker to AI adoption, and a further 40% describe it as disruptive and requiring additional work to resolve.

Lewis observed:

We always thought the bottleneck might be in the infrastructure there, or in the application there. And what you’re saying is the bottleneck is in the data structure.

Vector databases have become essentially universal – nearly every major database vendor now supports the vector data type. The remaining challenges are organizational: data sitting in Slack, in Customer Relationship Management (CRM) systems, on laptops, in specialized financial platforms. The technology is there, but enterprise readiness has not caught up – something that diginomica has seen recently in its own independent research.

Lewis noted that enterprises are not really after search at all. They are after insight:

At the end of the day, enterprises really just want an insight. I want to sort of in natural language tell you what I want, and I want something delivered back.

OpenSearch in the agentic era

Carl Meadows, Chair of the Governing Board at the OpenSearch Software Foundation and Director of Product Management for OpenSearch at Amazon Web Services (AWS), observed that what has changed about OpenSearch is who uses it. The platform was historically a tool for experts – search engineers, Site Reliability Engineers (SREs), DevOps teams. Now it is opening up to search generalists, AI agents, and a new category of agent builders. Increasingly, the consumers of OpenSearch are the agents themselves.

To give some context, Meadows demonstrated Claude Code on stage, asking it to build a hybrid search application from a movie dataset using OpenSearch Launchpad – a new AI-powered tool packaged as an OpenSearch agent skill. He offered the audience a caveat:

Not only am I doing a demo, I’m doing a demo with a non-deterministic system.

Within a few minutes – with holding messages including "sautéing..." and "gallivanting..." along the way – Launchpad had analyzed the data, proposed a plan, set up an ingestion pipeline, downloaded an embedding model from Hugging Face, indexed the data, and launched a working user interface. This is work that traditionally requires serious search expertise and days of setup.
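For readers unfamiliar with what Launchpad is automating here: a hybrid search request in OpenSearch combines a lexical (BM25) clause and a semantic (vector) clause inside a single `hybrid` query, with scores blended by a search pipeline. A minimal sketch of the kind of request body such a pipeline might generate – the index fields (`plot`, `plot_embedding`) and the model ID are hypothetical, not taken from the demo:

```python
def build_hybrid_query(user_text: str, model_id: str, k: int = 10) -> dict:
    """Sketch of an OpenSearch `hybrid` query body blending lexical and
    semantic retrieval. Field names and model ID are illustrative."""
    return {
        "size": k,
        "query": {
            "hybrid": {
                "queries": [
                    # Lexical leg: classic BM25 match over the plot text
                    {"match": {"plot": {"query": user_text}}},
                    # Semantic leg: the neural-search plugin embeds the
                    # query text with a deployed model and runs k-NN
                    # against the stored plot vectors
                    {
                        "neural": {
                            "plot_embedding": {
                                "query_text": user_text,
                                "model_id": model_id,
                                "k": k,
                            }
                        }
                    },
                ]
            }
        },
    }


body = build_hybrid_query("feel-good heist movie", model_id="abc123")
```

In a real deployment this body would be POSTed to the index's `_search` endpoint with a search pipeline (a normalization processor) configured to combine the two score distributions – exactly the plumbing the demo showed an agent setting up unattended.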

Meadows also showcased the OpenSearch Relevance Agent, which analyzes user behavior to suggest and implement improvements to search relevance. It is supported by a new OpenSearch Agent Server, a multi-agent orchestration platform.

Observability and sovereignty

Observability announcements got significant airtime too. The new OpenSearch Observability Stack bundles OpenTelemetry Collector, Data Prepper, OpenSearch itself, Prometheus, and OpenSearch Dashboards into a single deployment that launches with one command. Logs, traces, and metrics are correlated natively. Meadows commented on what that means commercially:

Traditionally, to get this level of functionality, you were paying commercial vendors quite a bit of money.

More interesting was his reasoning for OpenSearch Agent Hub, a new headless tool, installed locally via npx, that captures agent traces for monitoring and testing. Agent trace data, Meadows argued, is uniquely sensitive:

Agent trace data can be some of the most sensitive data in my corporation, right? Because it’s how the agents are interacting with my private data, and how it’s interacting with customers. I might not really love trusting that to a cloud vendor.

Sovereignty, in other words, is no longer just a compliance question – it is an architectural one.

The keynote carried news too. CERN has joined the OpenSearch Software Foundation as an associate member. Socrates Trifonas, who leads the OpenSearch service at CERN, delivered the announcement via video. CERN now runs 130 OpenSearch clusters in production, indexing more than 1.3 petabytes of data, mostly for log analytics but also for AI application testing and authority database work. CERN also operates the world’s largest particle accelerator – useful context when you're evaluating whether a platform can handle real scale! BigData Boutique, OpenSource Connections, and Resolve Technology were named as new members alongside CERN.

Decisions, not information

The final keynote came from Dom Couldwell, Product Management Leader for OpenSearch at IBM, and was the morning’s most provocative session. Couldwell argued that search has been oversold to the enterprise as a semantic problem, when a lot of enterprise search is still fundamentally lexical. The industry has rushed to vector search as if it were universal. He observed:

Similarity doesn’t always give you relevance. And relevance isn’t some beautiful circle. It is a messy thing.

His example was a large German parts manufacturer whose warehouse staff were struggling with product lookups on handheld devices. Multiple vendors had pitched semantic search. Couldwell's team realized the staff were specialists using jargon and product IDs, and proposed a lexical approach instead. The result: 80% of the problem solved for 10% of the projected cost.

Couldwell introduced two ideas worth watching. One concerns organizations that bolted vector search onto an existing operational database, or went all-in on a vector-only platform, and are now hitting scale, cost, and transparency problems. The other, which I suspect will be quoted back at the industry for months, is "reduce time to why." Enterprises do not need more information. They need to understand why one answer is more useful than another, and to act on it with confidence.

My take

I will be sitting down with Bianca Lewis and Carl Meadows later today, and several threads from the morning are worth pulling on directly – not least the vector reality check Couldwell laid out on stage. The keynote made for a great start to an exciting event – the model-centric era of AI coverage has obscured the far less glamorous work of getting enterprise data access, intent, and context right. There was openness about what vector search cannot do – and right now that humility is needed for the enterprise buyers and technical managers the Foundation is trying to reach. More to follow.
