Something for the weekend - where AI accountability goes to die when you stick a ‘human in the loop’ and call it ‘governance’

Stuart Lauchlan - April 10, 2026
Summary:
"Computer says no!" was only the start of our problems...

A few years back the BBC comedy sketch show Little Britain had a running gag in which an unsuspecting soul would run into someone in authority in a bank or a travel agent’s or a school with a request for information or a service. The person they were talking to would sit behind a PC, take down all their information, be perfectly polite, then follow through with the words, “Computer says no!”.

It was a very simple idea and, while other elements of Little Britain may not be entirely to everyone’s sensibilities these days, it's one that endures, particularly today when “AI says no!” would be the 2026 equivalent. Or perhaps more importantly in certain contexts - firing off ballistic missiles springs to mind - “AI says yes!”.

With the rise of agentic AI has come an accompanying sentiment, that of the ‘human in the loop’. This is the person who checks AI’s homework. The living, breathing person whose job was not taken by a bot, but whose life now involves making sure that AI doesn’t overreach itself or come to stupid, potentially dangerous, conclusions all on its own.

This is the person who Anthropic wants to ensure is involved in battlefield scenarios, whatever the US Department of War thinks on the subject. On a less apocalyptic level, this is the person who checks that the AI realizes that two plus two does indeed make four and not some more colorful variant of the same. Or the employee with years of experience who knows that no, you can’t just make up new customer service rules on the hoof.

So, the ‘human in the loop’ is a VIP, agreed? It’s certainly another box to tick for enterprises adopting agentic AI, even if ticking it amounts to little more than posturing on the part of the organization. In other words, can we trust that the ‘human in the loop’ really is in that loop, or just seen to be there? What is the difference between governance and the appearance of governance, AKA covering your corporate ass?

Researching the answers

That’s what some new global research from HFS Research and Altimetrik set out to explore in their study, Humans at the helm of AI. The two polled 505 senior executives across Global 2000 organizations to understand how AI decisions are made, who owns the outcomes, how confident workforces are, and how accountability travels across partners and platforms.

Their findings do not make for reassuring reading - the helm is empty and the loop is hollow. Or as the study puts it:

Enterprises are not failing to adopt AI. They are failing to lead it.

This is not down to C-suite leadership making a conscious decision not to put in place proper, rigorous checks and balances. Don’t confuse commission with omission here - they just haven’t done things properly, but have made a pretence of doing so, as much to convince themselves as others, perhaps. The study argues:

The helm is empty, not because leadership has abdicated but because no-one has been asked to sit in it. Most enterprises are not executing a strategy. They are managing a portfolio of experiments that has been left to find its own direction...Without a declared destination, AI keeps moving while leadership debt compounds quietly behind it.

Part of the problem here lies in an uncomfortable reality that few want to admit to, namely that while vendors bang on about the positive growth benefits of AI tech, many - most? - buyers’ starting point for cutting a check is cost reduction, cited as the top driver by over half (52%) of respondents to the study. That’s a fundamental flaw, argues the report:

Cost reduction is not a strategy; it is what fills the space where strategy should be...Cost reduction requires no vision, no ownership model, and no declared direction. It survives every board presentation precisely because it commits to nothing.

Interestingly, only six percent of respondents feel that the CEO is ultimately accountable for AI strategy, but when it goes wrong, a fifth of respondents reckon that the same office will be leading the discussion about what happened! This compounds the problem, warns HFS/Altimetrik:

When the people responsible for AI performance are not the same people responsible for business performance, the lessons from AI results accumulate in the wrong place. Technology teams learn what broke. Business leaders learn that something went wrong. The accountability loop is not closed. It is triggered by failure and then reset.

And if you’re the CIO or CTO, stand by - you’re in the firing line and, it seems, guilty until proven innocent:

For the CIO or CTO, the exposure is specific. You are accountable for deployment, for cost, and for the post-incident conversation, but not for the strategic decisions that would have prevented the failure. That is not a technology problem. It is an authority design problem, and it requires a business leadership response.

Lacking

And that response needs to be more than a token nod towards a ‘human in the loop’, even though that nod is already the standard answer when enterprises are challenged on their AI governance strategies. But when the study put the loop to the test, it was found to be lacking.

Take the most basic of scenarios - what happens if the ‘human in the loop’ disagrees with the AI? If the human is genuinely there as a governance guard, the answer should be clear - the human’s judgement prevails! But in practice, only 25% of respondents say that human judgement would clearly prevail, with 14% actually arguing that the machine should be assumed to carry more weight!

But perhaps most dangerous is the fudge that appears to dominate in most organizations, with 30% of respondents talking about conflict resolution through “joint reviews” and 29% saying that it needs to be dealt with on a case-by-case basis. As the study states bluntly, that smacks of everything coming down to whoever is in the room at any given time:

That is not a governance system. It is a negotiation with no rules, repeated across thousands of decisions, with no consistent principle determining the outcome.
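
What would rules actually look like? Purely as an illustrative sketch - the names here (Decision, resolve_conflict, audit_log) are my own invention, not anything prescribed by the study - the difference between a governance system and a negotiation is a precedence rule that is declared once and applied every time:

```python
# Illustrative only: an explicitly declared precedence rule for human/AI
# conflicts. All names (Decision, resolve_conflict, audit_log) are
# hypothetical, not drawn from the HFS/Altimetrik study.
from dataclasses import dataclass


@dataclass
class Decision:
    ai_recommendation: str
    human_judgement: str | None = None  # None means no human reviewed it


audit_log: list[dict] = []


def resolve_conflict(decision: Decision, owner: str) -> str:
    """Declared rule: a dissenting human always prevails, and every
    resolution is logged against a named owner."""
    if decision.human_judgement is not None and decision.human_judgement != decision.ai_recommendation:
        outcome = decision.human_judgement  # the human wins by rule, not by negotiation
    else:
        outcome = decision.ai_recommendation
    audit_log.append({
        "ai": decision.ai_recommendation,
        "human": decision.human_judgement,
        "outcome": outcome,
        "accountable_owner": owner,  # responsibility documented up front
    })
    return outcome


# The human disagrees, so the human's call stands and is recorded.
print(resolve_conflict(Decision("approve refund", "reject refund"), owner="head of customer service"))
```

The point is not the code, it’s the commitment: the rule is written down before the conflict happens, rather than renegotiated by whoever happens to be in the room.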

Still, if there are ‘joint reviews’, at least the ‘human in the loop’ can check the AI’s homework, eh? Er, not so fast...only 18% of organizations reckon they have “clear visibility” into both what AI recommends and the reasoning behind it, while seven percent, who should hang their heads in shame, say their teams rely on AI decisions they do not fully understand.

The majority of respondents (58%) say they rely on experts to explain AI decision-making - people who understand the outputs, but not the underlying reasoning. The report argues:

Governance requires people who can interrogate AI, challenge it, and override it when it is wrong. Most enterprises have built the conditions that make all three of those things professionally risky. Not by design. By default.
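
To make that concrete, here is one hedged sketch of what ‘clear visibility’ plus ownership could mean in practice - again with entirely hypothetical names, not anything the study prescribes - where a decision simply cannot be recorded without its reasoning and a named owner:

```python
# Illustrative only: "clear visibility" enforced as a hard constraint. The
# AIDecisionRecord type and its field names are assumptions for the sketch.
from dataclasses import dataclass


@dataclass(frozen=True)
class AIDecisionRecord:
    recommendation: str     # what the AI recommends
    reasoning: str          # why - the part only 18% of organizations can see
    accountable_owner: str  # who answers for the outcome

    def __post_init__(self) -> None:
        # An unexplained or unowned decision cannot be recorded at all.
        if not self.reasoning.strip():
            raise ValueError("AI decision recorded without reasoning")
        if not self.accountable_owner.strip():
            raise ValueError("AI decision recorded without an accountable owner")


# A complete record passes; omit the reasoning or the owner and it raises.
record = AIDecisionRecord(
    recommendation="decline loan application",
    reasoning="debt-to-income ratio above declared threshold",
    accountable_owner="chief credit officer",
)
```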

The study concludes:

When the machine is wrong and no-one knows who owns the outcome, the human at the helm does not exist. There is only a liability with no address. Enterprises have not outsourced a service. They have outsourced a decision, failing to document who is responsible for it.

My take

Enterprises have not failed to put humans in the loop. They have failed to make the loop mean anything.

Lauchlan says no!

And yes to the bottom line from HFS/Altimetrik’s research:

The AI decade will not be defined by who built the most capable models. It will be defined by who built the most capable humans to direct them.

Image credit - Pixabay
