
What if natural language interfaces are a journey not a destination?

Ian Thomas, April 14, 2026
Summary:
An interview with integration vendor Celigo about Ora, its new conversational agent, sparks some thoughts about the future of how we interact with AI and the limitations of chat.

People carrying speech bubble icons sitting in a row, CRM conversation concept (© Rawpixel.com - Shutterstock)

As anyone who spends time in meetings will know, chatting with colleagues is pleasant — but not always productive.

We can spend an hour in conversation, feel like we’ve made progress, and yet still not have achieved a shared understanding, because chat is by its nature exploratory, open-ended, and reliant on ambiguous signals.

And that’s just as true of AI as it is of people.

Over the past year, there has been a constant flow of vendors releasing chat interfaces for their existing products — and it often feels as if natural language is being pushed as the dominant future interaction model for all software.

But as these systems move from novelty to production, a different constraint is starting to emerge. Chat makes it easier to ask for something when you don’t know exactly what you want — but also makes it harder to express what you want with enough specificity to be actionable. More importantly, it makes it difficult to check whether the result matches your intent, because the internal representations we are used to using as navigational aids have been replaced by a wall of text.

In a recent briefing with Celigo, this tension showed up in a very practical way.

The company has introduced a new natural language interface, Ora, designed to let users control the platform through conversation — but done so in a way that makes the results visible in its existing visual models. So far so good on reflecting back results in a way that allows verification against intent.

But that pairing also points to something more fundamental — that natural language may not be the end-state of interaction at all, but the mechanism through which more specific affordances are discovered.

Chat is powerful when we need to navigate ambiguity or cannot express exactly what we want — but can also introduce needless confusion and imprecision when a stronger and more precise domain model exists. And as Matt Graney, Chief Product Officer of Celigo, noted in passing during our recent discussion, by exposing where those ambiguities lie, chat may also become the primary tool for defining its own replacement.

Pairing the verbose with the visual

But to unpack why this might happen we need to start with the way in which flexible natural language can be paired with precise domain models — something Celigo has been working on in its most recent releases.

Celigo has always been known as a high-scale, API-led integration platform that is configured and managed through visual interfaces. And while Graney believes this architecture has always been powerful, he also suggests that the need to build familiarity with the platform’s domain models and tooling has been a source of friction for many users.

To reduce that friction and increase participation, Graney says that Celigo has introduced Ora — a natural language agent designed to make integration and automation more accessible. He explains:

What we've optimized for is an agentic build experience that makes it much more accessible to users across the board, whether it's ‘build me a flow from scratch’, ‘create a connection to an app I've never seen before’ or ‘look at the audit log to understand what change caused this to break.’

Crucially, however, Graney points out that this conversational layer is not intended to replace the existing interface, but to work alongside it — ensuring that the impacts of requested changes can clearly be seen and verified. He says:

It doesn't replace the UI because the UI is available for the user to review what the AI did. So it's this complement — a way of understanding exactly what's being built or operated.

But Graney says that this complementary approach also changes how work can be structured and managed — breaking complex tasks into specific and comprehensible steps that can be individually visualized, reviewed and refined:

I might have looked into building an end-to-end flow with, say, five different steps, each represented visually. We have a stepwise model where you can review each one and say ‘accept,’ ‘reject’ or ‘iterate more on this one.’

In that sense, the interface is not being replaced so much as rebalanced, with chat handling ambiguity and existing visual canvases ensuring precision of interpretation. But while this pairing helps to verify that intent has been correctly interpreted within the scope of existing models and features, it doesn’t help when the product doesn’t yet have them.

Bypassing the visual with the verbose

While we initially focused on models and features for building shared understanding, Graney brought forward another benefit for natural language — that it can serve users with features that don’t yet exist. He explains:

Natural language can also be used to do things that can't be done through the UI — because it has the ability to join information together. I might have a list of users with different rights and roles and an audit log showing logins — but I can't necessarily pull those together through the UI easily. Whereas, with Ora, I can say, ‘Tell me which of the admins have logged in in the last seven days,’ and it's able to essentially deduce that. And that's one of the powers of a natural language interface. It allows us to address a whole range of edge problems that we couldn't have anticipated in the UI.

This effectively extends the scope of the product to any use case the underlying data can support — which is useful for edge cases but not necessarily the best way of delivering frequently requested behaviour.

Because once you can see those natural language requests, you can start to replace them with more efficient affordances built directly into the UX.
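The kind of ad hoc join Graney describes — combining a user/role list with a login audit log to answer a question neither dataset answers alone — can be sketched in a few lines. This is a purely illustrative example; the record shapes, field names, and the `admins_logged_in_since` helper are assumptions, not Celigo’s actual data model or API.

```python
from datetime import datetime, timedelta

# Hypothetical records of the kind a UI might expose separately:
# a list of users with roles, and a login audit log.
users = [
    {"user": "ana", "role": "admin"},
    {"user": "ben", "role": "admin"},
    {"user": "cam", "role": "viewer"},
]
audit_log = [
    {"user": "ana", "event": "login", "at": datetime.now() - timedelta(days=2)},
    {"user": "ben", "event": "login", "at": datetime.now() - timedelta(days=30)},
    {"user": "cam", "event": "login", "at": datetime.now() - timedelta(days=1)},
]

def admins_logged_in_since(users, audit_log, days=7):
    """Join the two datasets: admins with a login event inside the window."""
    cutoff = datetime.now() - timedelta(days=days)
    admins = {u["user"] for u in users if u["role"] == "admin"}
    recent = {e["user"] for e in audit_log
              if e["event"] == "login" and e["at"] >= cutoff}
    return sorted(admins & recent)

print(admins_logged_in_since(users, audit_log))  # → ['ana']
```

Trivial as code, but exactly the sort of cross-dataset question a fixed UI rarely anticipates — which is why a natural language layer over the underlying data can answer it without a purpose-built screen.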

A very important idea that emerged from a very minor aside.

Replacing the verbose with the visual

After explaining the benefits of pairing natural language, however, Graney threw out an aside that made me rethink the future trajectory of UX.

While extolling the virtues of natural language integration at every level of the platform — useful for users with limited platform-specific skills or needs that fall outside existing features — Graney raised the idea that aggregated data about the things people were trying to do would be a valuable resource in steering future UX development.

It's very difficult to understand, en masse, what it is that users want to do if it's beyond what the product already does. But the natural language requests provide an additional way for us to understand product usage — of course I have to stress that this is done in an anonymized, aggregated fashion with the redaction of payload information — but I think it's a good augmentation.

Natural language becomes a way of surfacing latent demand that product teams have historically struggled to see — not by asking users what they want, but by observing what they repeatedly try to do.

This reminds me of the ‘desire paths’ laid by a number of US universities such as Ohio and Michigan. Rather than build pathways across campus according to the ineffable logic of planners, these institutions allowed students to walk wherever they wanted before laying paths over the tracks — essentially reflecting the ways students wanted to walk rather than the ways the university might have felt they should.

In this sense, natural language becomes a similar medium of discovery, rather than an end-state of its own.

Because in the case of a product company like Celigo, there are almost infinite possible paths through the product, with no guarantee which will be used. Building all of them makes no sense. But where patterns of demand emerge, it becomes viable to formalize them — to turn a rutted track into a smooth path.

Which means that rather than grow into the ‘one modality to rule them all’, chat is actually likely to play a far more nuanced role.

For low proficiency users, it becomes a way to access existing functionality without the upfront learning curve — and when paired with domain models becomes part of a robust system for verifying intent.

For users pushing beyond current capabilities, it enables edge cases — using the underlying data and infrastructure of the product to put together responses that are not perfect, but feel like ‘just in time’ features.

But most interestingly — at least to me — it may also be a tool for replacing itself — identifying the patterns of demand that make it worth replacing ambiguous natural language with more efficient and effective purpose-built experiences.

Creating a product that not only eliminates ambiguity and unnecessary reliance on AI, but which also ensures that the vast majority of users end up walking along their collective desire paths.

My take

Celigo’s Ora release keeps it aligned with a rapidly converging market, but its emphasis on visibility and stepwise verification highlights something more fundamental.

As AI systems move into production, the bottleneck is no longer generation — it is verification. We can now produce outputs far faster than we can confidently check them, which makes environments that expose and structure those outputs far more valuable.

That challenge is already visible in areas like coding where the sheer volume of textual assets to be reviewed has spiralled beyond comprehension, leading AI researchers to shift focus from what AI can generate to how its outputs can be inspected, steered and trusted.

Because if we’re going to build systems we can actually live with, then we’re going to need ways of interacting with them that reduce cognitive load rather than amplify it.

And that is where the second idea comes in. If natural language introduces ambiguity, and ambiguity increases the cost of verification, then one way of managing that cost is to remove the ambiguity altogether — by translating repeated patterns of intent into more precise affordances.

Buttons, models, levers, gizmos and gadgets all evolved to make complex work easy — enabling us to simply press the button confident in the fact that we’ll get exactly what we expect every time.

Without the need to describe it in interminable detail, add guardrails and examples, and then crawl through a metric tonne of slop to make sure we got what we wanted.

In that sense, chat — while useful — doesn’t feel like the preferable solution for a sane society. I don’t describe things I can draw, I don’t inspect the insides of my toaster to get toast, and I don’t need to explain to my car in detail how to turn the lights on and then get out to make sure it actually turned the lights on rather than killed my cat.

Which means that chat starts to look less like a destination and more like a discovery mechanism — a way of surfacing repeated patterns that can be reduced to something simpler, like the pristine lawns that offered up infinite paths but ultimately served only to reveal the favoured few.

A way, in short, of identifying the big red buttons we need our products to offer us so that we can just get on with our lives.

Which makes Graney’s aside more significant than it first appeared.

Not just as a product insight, but as a broader philosophical stance — one that suggests we shouldn’t aim to plaster AI across every possible surface of our lives — I see you big tech — but instead use it to eliminate its own unnecessary complexity.
