
You’re doing agents all wrong – why you should focus on human attention

Ian Thomas | December 19, 2025
Summary:
Most AI agent projects in the enterprise focus on autonomous automation of established processes, but does the real value lie elsewhere?

(Image: © 101cats from Getty Images Signature via Canva.com)

What if agent projects are failing because they’re being applied to the wrong problems?

This was the assertion of Duncan Anderson, CEO and co-founder of boutique AI agency Barnacle Labs, in a recent discussion with me. It was an interesting provocation — and one I had gone out of my way to find. Because I wanted to shift from understanding the agentic narrative of established mainstream automation vendors to understanding the stories of those implementing new technologies in new ways.

And what emerged was an entirely different way of framing the value and purpose of agents. Less about automation and more about attention — and about removing operating model friction that, until now, has been resistant to systematic improvement.

Finding new agentic perspectives

At diginomica we necessarily spend a lot of time talking to vendors I’ve dubbed ‘enterprise-incrementalists’ — vendors with whom enterprise leaders have long-standing relationships. And while these conversations give us a deep understanding of what’s going on in the core of the enterprise, they also introduce the risk of incremental perspectives. Because when you have long used process automation for standardization and efficiency gains, it’s natural that this perspective shapes your view of any new technology.

But often the real stories during a transformation exist at the edges — the rough, jagged frontiers at which emerging techniques bump up against people with hitherto unresolvable challenges. Places where neat operating models grind up against the messy reality of customers, suppliers, or internal coordination — the cliff-edge beyond which standardized workflows fear to tread.

And while the enterprise-incrementalists continue to focus on augmenting core processes with AI in search of efficiencies, the more disruptive message of the techno-optimists makes them a magnet for people struggling with these edge cases.

Which is why, in my discussion with Anderson, it became clear that the most powerful application of agents may lie not in delivering incremental gains to the already abundant automation footprint, but in directing scarce human attention toward sources of friction at the edges.

Agents != Automation

We start with Anderson’s suggestion that the biggest mistake organizations can make when adopting agents is to treat them as just another tool in the automation toolbox — an augmentation to existing workflow platforms that incrementally extends the circle of work that can be made more efficient. He tells me:

I think there are a lot of people who are thinking about the problem space as more workflow- and process-orientated. But I think they’re misguided.

Given that the majority of enterprise-incrementalists I’ve spoken to insist that workflow is exactly the way in which you should think about the problem space, this is a clear point of difference from which to start our discussion.

Instead, Anderson argues, the real shift organizations need to make is to view agents as an enabler of new possibilities, not an incremental add-on to existing technologies. In particular, he suggests that the defining change brought about by agentic AI in the enterprise is neither technical nor architectural — it is economic. He goes on:

The types of problems that a true agent can work on are not problems that we’ve traditionally thought we could automate. It’s work that currently isn’t done at all — because it was too expensive to have humans do it, and the technology couldn’t do it because it required too much judgment and nuance.

From Anderson’s perspective, therefore, the natural field of operation for agents is not the automation or augmentation of core processes — where existing technologies already excel — but the transformation of organizational whitespace. Areas where friction has accumulated because work cannot be standardized or automated — and continuous human judgment has never been economically viable. He explains:

We’re not used to the idea that a piece of technology could automate this kind of work. It requires a degree of creativity that most people don’t have, because they’re constrained by their existing frames of reference. As a result, nearly all of the things we’ve built in the agent world have come about from senior business conversations.

That observation reminded me of an art course I took as a teenager. Rather than drawing the object itself, we were taught to draw the whitespace around it — the lines that defined the absence of the thing. Because recognizing the object too early forces the brain into pattern reproduction, pushing the hand to recreate internalized abstractions rather than what the eye actually sees.

I could imagine that the same dynamic plays out inside organizations. IT teams, managers, and vendors naturally pattern-match using their existing frames of reference — technologies, tasks, or products. The organizational brain sees agents and immediately categorizes them using what it already knows — automation — and forces the organizational hand to comply. Executives, by contrast, tend to view the organization through outcomes — a vantage point that makes it easier to see those gaps between the lines. Anderson tells me:

It often starts with a very nebulous idea from the top. We spend a couple of weeks talking to them before offering some ideas. And the response is often, ‘Oh wow — you can build that?’ People even say, ‘I don’t believe this is possible.’ That shows how hard it is to break out of existing assumptions. There’s a higher responsibility on us.

What all of these examples have in common is not a particular technology choice, agent architecture, or maturity curve. Instead, they point to a more fundamental constraint that has shaped how modern enterprises operate for decades — largely unnoticed because it has never been economically viable to address.

The scarcity of human attention.

Modern enterprises are not short of data, expertise, or intent — but remain largely blind to opportunities that depend on an affordable ability to have the right people pay continuous attention. No unassisted human can stay on top of information signals across every customer relationship, supplier interaction, policy interpretation, or evolving dependency across an organization’s ecosystem.

But Anderson’s explanation surfaced a new perspective — agents not as automatons, but as arbiters of scarce human attention.

The power of attention at scale

To ground these concepts in practice, Anderson uses the example of a large financial services group:

We’re working with a large insurance company who’re redoing all of their risks and controls. It’s a lot of bureaucracy, manual work, and money.

The challenge, he explains, is that centrally defined risk policies are adapted by business units. Over time, local interpretations can drift — but the central team has no practical way of knowing how far, or in which direction. Big Four audits were expensive and infrequent. All of which left the risk team without a clear line of sight — and with the job of cleaning up any issues after they had already happened.

Anderson explains that this kind of problem is a huge opportunity for agents:

We’ve built an agent to read all of these customized risks and controls, compare them to the central definitions, and say, ‘hold on a minute, this one’s diverged too far.’ Something they couldn’t do with humans because it would be prohibitively expensive.

This is a classic example of organizational whitespace — where the organization has the expertise and the information, but cannot match them together at scale.

By using the flexibility and scale of agents, Anderson explains that the organization was able to surface where risk exposure was accumulating and immediately direct expert attention to the right places. Crucially, he explains that the agents do not take action or enforce policy — they simply make what was previously invisible, visible. Humans can then apply their judgment where it actually matters. He goes on:

The agent is doing the grunt work — a heap of stuff in 30 minutes that would take a human a week or more. And so as an employee, you've got a week’s worth of work done in half an hour. And if it's not completely perfect, that doesn't really matter because you’re always going to synthesize what it comes up with anyway.
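For readers curious what this shape of agent looks like in practice, the pattern Anderson describes (read the local variants, compare each to the central definition, and surface only the divergent ones for human review) can be sketched in a few lines. This is a minimal illustration, not Barnacle Labs’ implementation: the control texts, the word-overlap similarity measure standing in for a language model’s judgment, and the drift threshold are all invented for the example.

```python
# A minimal sketch of an attention-directing agent. A simple Jaccard
# word-overlap score stands in for the judgment a real language model
# would apply; all control texts and the threshold are invented examples.

def similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets (1.0 = identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def flag_divergence(central: dict, local: dict, threshold: float = 0.5):
    """Compare each business unit's control text to the central definition.

    Returns (unit, control_id, score) tuples for controls that have
    drifted too far, worst first. The agent only surfaces candidates
    for human attention; it never enforces or rewrites policy.
    """
    flagged = []
    for unit, controls in local.items():
        for control_id, text in controls.items():
            baseline = central.get(control_id)
            if baseline is None:
                continue  # unknown control: out of scope for this sketch
            score = similarity(baseline, text)
            if score < threshold:
                flagged.append((unit, control_id, score))
    return sorted(flagged, key=lambda item: item[2])

central_controls = {
    "C-001": "payments above ten thousand pounds require dual approval",
}
local_controls = {
    "unit-a": {"C-001": "payments above ten thousand pounds require dual approval"},
    "unit-b": {"C-001": "team lead may approve any payment alone"},
}

print(flag_divergence(central_controls, local_controls))
```

The design choice mirrors the point of the example: the agent does the exhaustive reading and comparison, and its output is a short, ranked list of places where expert attention is warranted, not an automated decision.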

Seen through this lens, the value of agents becomes clearer. By collapsing the marginal cost of reading, comparing, and reasoning over vast quantities of unstructured information, they make it possible to continuously pay attention to these unattended spaces — looking for signs of problems, opportunities, or organizational drift — and to surface what deserves human focus.

In this way, agents are not automating tasks or squeezing incremental efficiency from existing processes. They are eliminating previously unaddressable sources of operational friction by directing timely human attention to the right places.

Which also reframes the future role of humans in a much more positive light.

In Anderson’s view, rather than automate people out of processes or place them into hellish AI-managed loops, agents should be viewed as powerful sense-makers — absorbing the cost of exploration so that humans can focus their attention and judgment on things of consequence that would otherwise be missed. He says:

We’re still in the early days, but there is plenty of work which falls into this category. Not giving agents autonomy to make critical decisions like approving a million-pound payment or something — which would be insane — but using agents to help humans work more effectively.

Which is why, in Anderson’s view, questions of agent architecture and autonomy are secondary to the more basic economic shift. One that moves agents from tools for automating what is already understood, to tools for focusing attention on what is not.

Perhaps making attention, not automation, the most consequential economic unlock of the AI era.

My take

In seeking out Anderson, I was deliberately looking for a different voice — originally to test both the robustness of my agent taxonomy and the workflow-centric narratives of vendors in the enterprise-incrementalist camp.

And despite expecting the conversation to center on autonomy, I instead found something quite different — and arguably far more interesting. Not a fundamental disagreement with the enterprise-incrementalists over the safety of autonomy, but a distinct economic argument for scaling attention rather than automation.

What struck me most was that Anderson’s worldview does not extend from his technology ecosystem, but from his audience. By engaging primarily with senior business leaders — often less technically minded than the buyers targeted by mainstream vendors — he arrives at a very different vantage point from which to consider the value of agentic AI. One shaped less by pattern-matching against systems that have always been easy to optimize, and more by the whitespaces that have resisted optimization until now — the friction and systematic gaps in enterprise capability that most operational staff simply lack the vantage point to see.

The contrarian perspective this affords him stands in sharp contrast to the incremental — and sometimes dehumanizing — language of automation vendors, whose focus on workflow optimization can make everything look like a nail that just needs to be hammered harder — including people. Ouch.

From Anderson’s viewpoint, however, AI is not about replacing humans, placing them into AI-managed loops, or chasing ever-diminishing efficiency gains in spaces that are already easy to optimize. Instead, it ultimately gifts human experts greater value — and, ironically, greater agency — by increasing their leverage through agents that act as connective tissue between abundant information and scarce attention.

From there, it becomes possible to see a more expansive view of the organizational whitespaces in which the most meaningful operating model benefits of AI may lie.

In a recent article exploring AI’s role in innovation and creative work, I expressed my frustration that so many AI narratives collapse back into unimaginative stories about efficiency and cost reduction. While that piece examined how AI might bridge the whitespaces between different phases of creation, my conversation with Anderson left me with the sense that this idea can scale much further than I had anticipated.

Not just bridging phases within a single kind of creative flow, perhaps, but uncovering rich new seams of possibility in the long-neglected gaps between our islands of standardized workflow.

Which is a good reason to ask yourself — might I be doing agents all wrong?
