diginomica research - enterprises are spending millions on data. Here's what they told us in private

By Alyx MacQueen March 31, 2026
Excerpt:
New independent research from diginomica and Serendipitus.io finds that enterprise data health is far worse than industry benchmarks suggest - and that the market telling organizations to deploy AI faster is the last place they should look for honest answers.

A utility provider went to its regulator with a £2.7 billion (approximately $3.4 billion) capital request. It was rejected – not because the spending was unjustified, but because the organization couldn't substantiate it. After years of platform investment, transformation programs, and data governance initiatives, it couldn't prove to an external body how it had spent its own money. The data wasn't traceable enough to justify its own spending.

This is what data dysfunction looks like when it finally meets an unmovable deadline – and it's far more common than the market would have you believe.

Maureen Blandford of Serendipitus and I spent several months finding out how common it is. We interviewed 18 senior enterprise practitioners – Chief Information Officers (CIOs), Chief Data Officers (CDOs), commercial leaders, and marketing heads across financial services, utilities, public sector, professional services, and enterprise technology – under conditions of total anonymity. No vendor involvement. No PR approval. No approved quote list. Just frank conversations about what's actually happening with data in large organizations when no one is watching.

The money is going in - the results aren't coming out

Here's a question worth putting to your leadership team. If someone stood up in your next board meeting and said: "We'd like to dedicate 80 full-time equivalents to manually reconciling data that our systems should share automatically, with no plan to reduce this over time. We'll also maintain an unknown number of shadow data sets because no one trusts the official numbers. Every C-suite presentation will require Master of Business Administration (MBA)-level homework to assemble. We expect this to continue indefinitely" - how long before they were shown the door?

That proposal would be dead before the slide changed. And yet that is precisely what most large enterprises are currently funding. They just don't call it that, because the cost is distributed across hundreds of people doing it quietly as part of their normal jobs, invisible in any budget line, showing up only in the distance between what organizations can do and what they should be able to do.

The numbers our participants gave us are extraordinary. Between 30% and 70% of professional time goes to manual data assembly, reconciliation, and verification rather than analysis or decision-making. A utilities CIO describes losing "1,000+ person days per year" to data reconciliation alone. A professional services firm runs 400-500 people on data overhead. A €400 million (approximately $435 million) firm is built, ultimately, on spreadsheets and people.

These aren't under-resourced organizations. They've bought the platforms, hired the consultants, and followed every playbook the market has handed them. IDC forecasts that organizations globally will spend nearly $4 trillion annually on digital transformation by 2027 — and that's before counting the organizational capacity absorbed by data work that never appears in any technology budget. The money is clearly going in. The question our participants answered – with considerable frankness – is where it actually goes.

The market telling you to move faster isn't on your side

While this has been happening, the enterprise technology market has developed a remarkable consensus of 'if you aren't deploying AI right now, you are falling behind'. Industry benchmarks report breathtaking adoption intentions, with multiple surveys putting the proportion of organizations planning autonomous AI deployment within two years at 90% or above. Conferences overflow with urgency. The pitch, delivered loudly and repeatedly, is that organizations moving fast will win and those that hesitate will be left behind.

The evangelism leaves something out, though. Gartner has predicted that 60% of AI projects will be abandoned through 2026 because they are not supported by AI-ready data foundations. The same research found that 63% of organizations either don't have, or are unsure whether they have, the right data management practices for AI. The urgency narrative and the readiness reality are pulling in opposite directions.

Only three of our 18 participants had anything resembling a live agentic AI implementation. The rest describe pilots that hadn't proven value, initiatives that were stalling, and individual employees using ChatGPT on their own initiative. One financial services Marketing leader explains:

AI initiatives are first and foremost data projects – it is a nightmare in terms of getting AI projects off the ground.

The reason most AI initiatives stall isn't that organizations are moving too slowly – it's that they're trying to build on data that can't support the weight. As a former Chief Executive at a global technology company puts it: 

The minute you put AI on top of bad data, it gets crap.

No platform purchase resolves that. And the research that keeps telling you to move faster is largely funded by the people selling you the platform.

When we asked practitioners whether they trust their data, roughly half said yes. When we asked what percentage they'd pass to their CEO without checking it first, the answer – from the same people, at the same organizations – was basically zero. Millions of dollars of investment disappear every year into verification cycles, shadow data sets, and MBA-level homework that nobody budgets for because nobody officially acknowledges it's happening.

What this research is actually for

The practitioners we interviewed are not failing. They are CIOs at 10,000-person companies and Managing Directors at €400 million (approximately $435 million) firms who have done everything the market told them to do. They are operating inside structural conditions that predate their tenure – systems not designed for cross-functional data sharing, incentives that reward data hoarding, organizations built before anyone decided data was a strategic asset.

What we found was not incompetence but exhaustion. Not ignorance but frustration. The daily grind of making broken systems work well enough to keep the business running, while being told loudly and repeatedly by a market of vendor-funded research that the answer is another product, another implementation, another transformation program.

Industry benchmarks measure stated trust – whether practitioners say they trust their data. We measured behavioral trust – whether they'd actually stake their reputation on it. The distance between those two things is what the benchmarks can't see, and it's where the real story lives. The organizations being told they're falling behind because they haven't deployed AI agents yet are, in many cases, the ones being intellectually honest about whether their data is ready. That honesty deserves better than a market that keeps selling urgency as the answer to a problem that urgency didn't create and can't fix.

If any of what you've just read sounds like the organization you work in, the full Enterprise Data Health Study is available to download below. It won't tell you to move faster. It will tell you what's actually in the way.
