If you have ever spent an afternoon wading through log files trying to work out whether a spike in error rates is a performance problem, a security incident, or just something weird happening with a third-party integration – welcome, dear reader. You already understand the core problem Elastic is trying to solve, even if the company has not always made it easy to articulate.
The "Search AI company" label Elastic now applies to itself is doing a lot of work. It is not a pivot – Massimo Merlo, AVP for UK, Ireland, Iberia and Italy, is emphatic about that in conversation. Search has always been the foundation. What has changed is how clearly Elastic can now explain why that foundation matters beyond finding things quickly. Merlo says:
At the core of everything we do is still the open source roots, the flexibility, the extensibility, the basis of search, because that's something you can apply to pretty much any data problem.
And data problems, he notes, are increasingly just business problems wearing a different hat.
Pretty much every business problem is a data problem.
Security has quietly become the front door
Merlo joined Elastic with a software engineering background, which gives him an instinct for following where the real complexity lives rather than where the marketing points. And right now, in his region, it points firmly at security. Observability – specifically log management – was the traditional entry point for most enterprise customers. That has shifted, he notes:
Security is probably the primary area where people are really taking advantage of Elastic.
Two things are driving this. Volume is one: the sheer scale of threat activity has outgrown the tools built to detect it, and customers are making painful trade-offs between how much data they can afford to analyze and how much risk they are willing to carry. Agility is the other. The threat landscape has changed in ways that make fixed-pattern detection increasingly inadequate:
You're no longer saying, 'this is the pattern I'm looking for, and that indicates a threat.' It's constantly changing.
Multi-stage attacks – where each compromised component introduces the next – require something that can move through data quickly without being told exactly what it is looking for.
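A toy illustration of why chained activity defeats single-event rules. Every host name, event, and field below is hypothetical, and the chain-following logic is a sketch of the idea rather than anything Elastic ships — but it shows the shape of the problem: no individual event is suspicious, while the path from an internet-facing host deep into the network is.

```python
# Hypothetical events: each records one host acting on another.
# Individually benign; the chain is what matters.
events = [
    {"src": "web-01",  "dst": "app-02",    "action": "ssh_login"},
    {"src": "app-02",  "dst": "db-01",     "action": "new_service"},
    {"src": "db-01",   "dst": "backup-01", "action": "file_copy"},
    {"src": "hr-pc-7", "dst": "printer-3", "action": "print"},
]

def attack_chains(events, entry_points, min_length=3):
    """Follow src -> dst edges from suspected entry points and return
    any maximal chain of at least `min_length` hosts."""
    edges = {}
    for e in events:
        edges.setdefault(e["src"], []).append(e["dst"])

    chains = []
    def walk(host, path):
        next_hosts = [n for n in edges.get(host, []) if n not in path]
        if not next_hosts:                 # chain ends here
            if len(path) >= min_length:
                chains.append(path)
            return
        for nxt in next_hosts:
            walk(nxt, path + [nxt])

    for entry in entry_points:
        walk(entry, [entry])
    return chains

# Starting from the internet-facing host surfaces the full chain.
print(attack_chains(events, ["web-01"]))
```

A real platform would do this correlation over streaming data with far richer signals, but the asymmetry holds: the printer chain from `hr-pc-7` never flags, while the web-to-backup chain does.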
Elastic's open source community model is genuinely relevant here rather than just a talking point: millions of contributors continuously updating detection patterns means the platform reflects a collective understanding of the threat landscape rather than whatever a single vendor's research team has catalogued. THG, the UK e-commerce retailer formerly known as The Hut Group, with revenues of more than £2 billion (approximately $2.5 billion US), has put this to work at scale – pulling in 25,000 events per second from around 100 different feeds. Mean time to respond to security incidents is down 60%. First-line triage, which used to consume 90% of the security team's working hours, now accounts for 50% – which sounds like it is still a lot until you consider what the team is doing with the other half: proactive threat hunting rather than reactive log-sifting.
Large Language Models are clever – they are not always right
The AI section of almost every enterprise technology conversation right now follows a familiar arc: capability, promise, caveat. Merlo skips straight to the caveat, which is refreshing. A customer observation he cites has stuck with him:
AI can be a very good way of getting the wrong answer quickly, because it's not grounded in the trusted data – your own enterprise data.
Large Language Models (LLMs) have genuine reasoning capability. What they lack, without grounding, is access to the specific context that makes an answer useful rather than plausible. Elastic's position in that architecture is to supply the context layer – retrieving accurately from data an organization already owns and trusts, bridging it to the model's reasoning capability. This is the mechanism behind Retrieval Augmented Generation (RAG), though the more useful way to say it is: stop asking AI to guess when you have the actual answer somewhere in your own systems.
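The retrieve-then-generate pattern is easy to show in miniature. In this sketch the document store, the overlap scoring, and the prompt shape are all illustrative stand-ins (a production system would retrieve via BM25 or vector similarity in Elasticsearch and pass the prompt to an LLM); the point is the ordering — trusted context first, question second.

```python
# Minimal RAG sketch: ground the model's answer in documents you
# already trust, rather than letting it guess.
documents = [
    "Invoice disputes must be raised within 30 days of issue.",
    "Refunds for annual plans are prorated to the unused months.",
    "Support tickets are triaged within four business hours.",
]

def retrieve(query, docs, k=2):
    """Rank documents by naive term overlap with the query (a real
    system would use BM25 or vector similarity instead)."""
    q_terms = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(query, docs):
    """Assemble what the LLM would receive: trusted context first,
    then the question, with an instruction not to guess beyond it."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}")

print(build_grounded_prompt("How are refunds handled for annual plans?",
                            documents))
```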
Reed.co.uk, the UK's largest online job site, illustrates what this looks like when it is working. With 11 million registered candidates and close to 100,000 roles listed daily, search relevance is not an abstract quality metric – it directly determines whether job seekers find relevant work and whether employers get useful applications. Using vector embeddings in Elasticsearch, Reed has lifted candidate click-through rates by 20%, improved application completion rates by 30%, and reduced cost-per-hire by 20% for recruiting organizations. The difference between keyword matching and semantic understanding, at that scale, is not incremental.
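The keyword-versus-semantic distinction comes down to comparing vectors rather than matching strings. The hand-picked three-dimensional "embeddings" below are purely illustrative — real deployments use learned vectors with hundreds of dimensions, stored in something like an Elasticsearch `dense_vector` field — but the ranking mechanics are the same.

```python
import math

# Toy embeddings chosen so related job titles sit close together.
job_embeddings = {
    "Software Engineer":  [0.9, 0.1, 0.0],
    "Backend Developer":  [0.8, 0.2, 0.1],
    "Nurse Practitioner": [0.0, 0.9, 0.3],
    "Delivery Driver":    [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def semantic_search(query_vec, k=2):
    """Return the k job titles whose embeddings lie closest to the query."""
    ranked = sorted(job_embeddings,
                    key=lambda t: cosine(query_vec, job_embeddings[t]),
                    reverse=True)
    return ranked[:k]

# A query like "python coder" would embed near the engineering titles
# even though it shares no keywords with either job title.
print(semantic_search([0.85, 0.15, 0.05]))
```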
The convergence problem nobody has quite solved yet
Interestingly, observability data and security data can largely be described as the same data, collected by different teams, stored in different systems, analyzed with different tools, and almost never compared. System logs, application performance metrics, network activity, even physical access records – all of it is relevant to understanding whether something unusual is happening. Merlo states:
Everything that we've collected in terms of what's going on with our systems can all be applied to a security posture — but right now it's very siloed.
The Met Office is a useful reference point for what closing that gap looks like in practice. Running some of the most complex computing infrastructure in the country – including one of the UK's most powerful supercomputers, across a mix of on-premise, cloud, and Software-as-a-Service (SaaS) platforms – the Met Office previously had no consistent approach to log management across teams. John MacGrillen, Solutions Architect at the Met Office, describes it in a case study:
There were pockets where logging was managed extremely well, but there was no consistent approach, making it impossible to unlock the value in our data.
Elastic Cloud, running on AWS and Azure, now ingests more than two billion log entries daily. The practical result is a unified view across systems that previously could not be correlated – including the ability to detect, during major vulnerability responses, whether any on-premise system had attempted to connect to a malicious IP address.
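That malicious-IP check is a good illustration of what unification buys. This sketch uses invented log records and field names (and IPs from the documentation ranges), not the Met Office's actual schema — but once logs from every environment land in one place, the question becomes a single scan instead of four separate investigations.

```python
# Hypothetical unified log store spanning on-premise, AWS, and Azure.
# IPs below are from the reserved documentation ranges.
MALICIOUS_IPS = {"203.0.113.66", "198.51.100.23"}

logs = [
    {"source": "on-prem-fw", "host": "fileserver-2", "dst_ip": "203.0.113.66"},
    {"source": "aws-vpc",    "host": "api-gw-1",     "dst_ip": "192.0.2.10"},
    {"source": "azure-nsg",  "host": "build-agent",  "dst_ip": "198.51.100.23"},
    {"source": "on-prem-fw", "host": "hr-laptop-9",  "dst_ip": "192.0.2.44"},
]

def flag_malicious_connections(logs, bad_ips):
    """Return every record whose destination is a known-bad IP,
    regardless of which environment produced it."""
    return [rec for rec in logs if rec["dst_ip"] in bad_ips]

for hit in flag_malicious_connections(logs, MALICIOUS_IPS):
    print(f"{hit['host']} ({hit['source']}) contacted {hit['dst_ip']}")
```

The payoff is in the second flagged record: the on-premise tooling alone would never have seen the cloud build agent's connection, and vice versa.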
When I ask Merlo what success looks like in 12 months – without polishing up the observability crystal ball – his answer is consistent with everything he has said. More customers recognizing that security and observability are the same data problem viewed from different angles, and consolidating accordingly. He explains:
More and more, if we can continue those conversations – the much broader insight you get from converging those two things is where AI starts to become, in this kind of area, really, really valuable. Because it will make the links that maybe haven't occurred to people.
My take
Observability has spent years being treated as infrastructure cost – necessary, unglamorous, the kind of thing you fund just enough to avoid getting paged at 3am. The argument Elastic is making, and the direction the industry is clearly moving, is that this framing has always been wrong. If your observability data is rich enough to tell you that a system is degrading, it is also rich enough to tell you that something anomalous is happening that your security team would want to know about. The problem has never been the data. It has been the organizational habit of keeping the people who look at it in separate rooms.
Merlo is right that convergence is where the real value lives. Where enterprises will struggle is not with the technology – Elastic's platform makes the technical case clearly – but with the structural reality that Site Reliability Engineering and Security Operations teams often have separate budgets, separate vendor relationships, and limited shared vocabulary. Getting those conversations to happen requires someone with standing in both rooms simultaneously.