Enterprise hits and misses - time for an enterprise data health gut check. Plus: are context graphs a trillion dollar enterprise play?
- Summary:
- This week - time for a diginomica research reveal: enterprise data health is the topic, and the wake-up call is now. Are context graphs a trillion dollar opportunity - and what can we take from this debate? A couple rants, a few whiffs - and it's another spunky week in the enterprise.
Lead story - diginomica research - enterprises are spending millions on data. Here's what they told us in private
A major new, vendor-independent research piece is out, co-authored by Alyx MacQueen of diginomica and Maureen Blandford of Serendipitus (the overview is here - the full paper is free with sign up). The problem statement, aka the gist?
Enterprise data health is far worse than industry benchmarks suggest - and the market telling organizations to deploy AI faster is the last place they should look for honest answers.
Let's put a face on this. Via Alyx:
A utility provider went to its regulator with a £2.7 billion (approximately $3.4 billion) capital request. It was rejected - not because the spending was unjustified, but because the organization couldn't substantiate it. After years of platform investment, transformation programs, and data governance initiatives, it couldn't prove to an external body how it had spent its own money. The data wasn't traceable enough to justify its own spending.
This is what data dysfunction looks like when it finally meets an unmovable deadline – and it's far more common than the market would have you believe.
And: far too much human toil is going into this troubling status quo:
The numbers our participants gave us are extraordinary. Between 30% and 70% of professional time goes to manual data assembly, reconciliation, and verification rather than analysis or decision-making. A utilities CIO describes losing "1,000+ person days per year" to data reconciliation alone. A professional services firm runs 400-500 people on data overhead. A €400 million (approximately $435 million) firm is built, ultimately, on spreadsheets and people.
But hold up. The now-widely-understood enterprise AI reality check is: poor data leads to crummy AI. So what does this say about all those effervescent AI press release quotes touting forward-thinking AI projects? Alyx:
No platform purchase resolves that. And the research that keeps telling you to move faster is largely funded by the people selling you the platform.
Ergo, this independent research project - one with a very different research philosophy:
No vendor involvement. No PR approval. No approved quote list. Just frank conversations about what's actually happening with data in large organizations when no one is watching.
We hope you like it - and we'd love to hear your feedback/predicaments/ideas. Alyx also published a follow-on piece that includes research reactions from a range of software vendors: diginomica enterprise data health research - the data is broken and everybody knows it.
Where do we go from here? MacQueen and Blandford share lessons and guidance; I'm sure we'll have more of that to come. In closing, a few of my own quick takes:
- I believe this problem is almost universally prevalent in large organizations. But: I've seen some smaller companies come closer to that elusive "single source of truth." Those SMB stories are instructive, even if they don't scale.
- Can AI help with the data quality dilemma? To a point, but there are caveats galore, and it's not a solution unto itself. At best, it's part of a broader "let's get it right this time!" data quality initiative. Domain experts are still needed to validate data and catch problems, so maybe don't fire them.
- AI can't solve the problem, but if it creates a renewed enthusiasm to break down stubborn process/data/people silos, then that's one of the best AI "outcomes" you can have.
- "AI readiness" is a thing. I've seen companies rationalize renewed data quality investment by stacking up smaller data wins. Those wins might be modest, but they pile up - as does organizational confidence.
- AI agents need different kinds of data layers and structures (more on this below, read on...) This is a good news/bad news thing - but it does open up new vendors for customers to kick tires on, and new ways of thinking about the data quagmire. Sometimes it's more fun to skate your data to where the puck is going than to resign yourself to data laggard status...
Diginomica picks - my top stories on diginomica this week
- Want to fail at AI? Instructive case studies & AI’s ‘last mile problem’ - Brian's AI last mile inquiry comes with an unusual reader warning: "Caution: some project descriptions may be disturbing to sensitive readers."
- Is agentic AI undermining offshoring? Open Reply asks the question - Katy addresses a burning question for operational cost management.
- Get off the phone! From 'fluffy' to data-driven - how BT Group is managing its journey to an omni-channel CX balance - Stuart delves into omni-channel pitfalls (and results), via a business no one in the UK has neutral feelings on...
- the diginomica network podcast - CIO Subhash Chandra Jose takes EBRD beyond the sunset - Time for another diginomica network discussion, via host Mark Chillingworth - inside a five year transformation.
Vendor analysis, diginomica style. Here's my three top choices from our vendor coverage:
- Want AI outcomes? Yes - but how do customers get there? Inside Oracle's agentic apps news with Steve Miranda - How should enterprises navigate the tech hype carnival to get to business value? I went behind the news with Oracle's Steve Miranda. Also see: Phil's NetSuite upgrades its MCP support to help connect AI assistants to data and processes.
- Striking the human/AI workforce balance - human intelligence matters as much as its artificial counterpart, argues Salesforce's Marc Benioff - Stuart on Salesforce's current AI/human stance: "Human intelligence has to co-exist alongside its artificial counterpart, is the thrust of the thesis. It’s not an either/or scenario, [Benioff] posits." Also see: Stuart's Slack coverage and use cases, including: Slack in the real world - making Mr Beast roar and the Engine rev up with context.
diginomica's event coverage rolls on:
- Alyx has more fresh ideas/hot topics via KubeCon Europe 2026: SUSE wants to take the cognitive load out of infrastructure – and here's how it plans to do it, and Every GPU has to work with PyTorch to reach the market - so who's making sure it stays open?
- Sage's analyst event featured spirited dialogue on the future of finance - here's my roundup, via on-the-record interviews and podcasts: What separates a good finance agent from a weekend project? Inside Sage's AI architecture with CTO Aaron Harris.
A few more vendor picks, without the quotables:
- Look what the Easter Bunny brought Hubspot customers - a shift to outcome-based pricing for Customer and Prospect Agents - Barb
- How Mphasis NeoZeta is bringing banking back-end systems into the AI era - Katy
- How Acclaim Autism cut patient onboarding from six months to four days with Appian - Gary
Jon's grab bag - Stuart unfurled a couple of vintage rants this week: Months after deciding AWS and Microsoft do have unhealthy market dominance, the UK competition regulator decides the best course of action is...do nothing. WTF?!? And: the Annoyance Economy is a real thing: How annoying! The $165 billion industry that means that everything sucks! Get used to it!
Stuart also rounded up the news that needed a bit of airtime in The long and the short of IT - the week in digibytes. Madeline has a tech-for-good use case with plenty of data privacy insights: How Mumsnet is using AI to improve the lives of women without giving away its valuable data. Gary surfaced more data lessons in How one-click donations and smarter address data are helping UK charities raise more. George asks a looming question: Here's a thing - what if shadow AI is actually telling us something useful? As in:
My gut sense is that shadow AI is largely a reactive response to the guardrails and crippled tooling imposed by compliance processes on end users. What if enterprises flipped that script by leading with employee empowerment and treating GRC as an essential component of it rather than a constraint?
Indeed...
Best of the enterprise web
My top six
Are context graphs a trillion dollar opportunity? As per my research, no. But: "context" is definitely the best hope to make enterprise LLMs "smarter," and relevant for the most possible enterprise workflows. Foundation Capital kicked off this debate, and then brought on Box's Aaron Levie for a video response. I wouldn't call it a rebuttal, but it's definitely a reframing. (Levie also posted a response on X). A proper response requires a full article, but for now, a quick take:
- Context absolutely matters if you want agents to be more reliable/relevant.
- Providing the right context for agents is not just about "decision traces" and context graphs, as Foundation Capital asserts. It will vary by industry/company/role - but proper context definitely includes structured, unstructured, and external data sources - a wider variety of data than we've ever really attempted to bring to bear on workflows/decisions before. Though Foundation Capital underestimates some of what good "systems of record" bring to the table here (including automated workflows), their critique of the inadequate grasp/pursuit of these issues/opportunities by incumbent vendors is accurate.
- Some of the data tools to capture/feed agents context are established, some are new, and some are not yet built. It's not about some magic tech (like "context graphs"). I think it might be more about a (buzzword overdose alert) real-time semantic layer, but there will be multiple ways forward here. It starts with the business problem. Then we decide if AI is relevant, and if our data quality/variety is up to the job.
- Context doesn't solve for probabilistic LLM agents or make them "intelligent." (Which is a big reason why this isn't a trillion dollar play). But you could argue it makes agents smarter for your task, objective, or process. Context helps LLMs (as do smaller, domain-specific models), but does not fundamentally change their pros and cons. Use case design (and risk mitigation) are still everything.
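To make that "wider variety of data" point concrete, here's a minimal, hypothetical Python sketch (all names, fields, and sources are invented for illustration - this is not any vendor's actual context graph or API) of assembling structured, unstructured, and external inputs into a single labeled context payload for an agent prompt:

```python
from dataclasses import dataclass, field

@dataclass
class ContextBundle:
    """Gathers the three data varieties discussed above into one agent payload."""
    structured: dict = field(default_factory=dict)    # e.g. ERP/CRM records
    unstructured: list = field(default_factory=list)  # e.g. doc/email snippets
    external: dict = field(default_factory=dict)      # e.g. market/regulatory feeds

    def to_prompt_context(self) -> str:
        """Flatten all sources into a labeled text block to prepend to an LLM prompt."""
        parts = []
        if self.structured:
            parts.append("STRUCTURED:\n" + "\n".join(
                f"- {k}: {v}" for k, v in self.structured.items()))
        if self.unstructured:
            parts.append("DOCUMENTS:\n" + "\n".join(
                f"- {snippet}" for snippet in self.unstructured))
        if self.external:
            parts.append("EXTERNAL:\n" + "\n".join(
                f"- {k}: {v}" for k, v in self.external.items()))
        return "\n\n".join(parts)

# Hypothetical usage: an order-status agent pulls from all three source types
bundle = ContextBundle(
    structured={"order_id": "SO-1042", "status": "on hold"},
    unstructured=["Customer emailed 2026-03-02 asking about a delivery delay."],
    external={"carrier_advisory": "Port congestion, +3 days transit"},
)
print(bundle.to_prompt_context())
```

The point of the sketch: the value is in the curation and labeling of heterogeneous sources, not in any one "magic" structure - swap the dict/list plumbing for a semantic layer, a vector store, or yes, a context graph, and the design question stays the same.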
- Everyone told you to deploy AI agents. No one told you what happens to your SOC when you do - Louis Columbus is on the case again: "CrowdStrike, Cisco, and Palo Alto Networks all announced agentic SOC tools at RSAC 2026. A VentureBeat analysis of all three architectures finds none shipped an agent behavioral baseline — the foundational capability security teams need before they can set policy for AI agents running on enterprise endpoints."
- "I started to lose my ability to code": Developers grapple with the real cost of AI programming tools - A thoughtful review of the pros and cons, by David Cassel.
- Generative Engine Optimization: The New Tech Hustle or a CX Reality? - Another engaging CRM Konvo recap from Thomas Wieberneit: "The market is obsessed with using AI to generate content. This is a fatal error. LLMs prioritize depth, expertise, and authoritativeness. If you feed your channels with recycled marketing fluff, you will achieve nothing but instant mediocrity."
- Intro to Reality Pentesting - one of the most original posts on security I've read in a long time, honing in on cognitive security, and the perpetual weak security link: we, the fallible humans.
- Designing an end-to-end technology workforce for the AI-first era - I'm not sure you can design a workforce, but these are the right questions.
Whiffs
Things that make you go hmm....
Paul McCartney Banned From Reddit After Promoting Himself in Paul McCartney Subreddit www.404media.co/paul-mccartn...
-> I'm a fan of strong community moderation but this does seem extreme lol
Microsoft doesn't like to be left out of the whiffs section for too long...
Microsoft's AI in its own terms: "use Copilot at your own risk" www.techspot.com/news/111949-...
so - we're embedding risky tools everywhere, but we'll leave it to you to determine that risk.
-> err, doesn't risk factor into any reasonable assessment of value as well?
Perhaps more of a 'yikes' than a whiff, but hey, this is our world:
Zooming Out: WebinarTV’s Rampant Scraping of Online Meetings cyberalberta.ca/zooming-out-...
-> There are so many unethical business practices wrapped up in this lawsuit-waiting-to-happen it's hard to know where to begin...
See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.