What can enterprise AI learn from 700 million ChatGPT users?
Summary:
Consumer usage patterns may reveal a somewhat overlooked dimension of AI value creation - but translating these insights to enterprise contexts requires some creative thinking.
While enterprise AI initiatives struggle with a reported 95% failure rate, according to MIT research, 700 million ChatGPT users generate 2.5 billion daily interactions. So one might ask: why is business allegedly struggling to find its use case, when it's fairly safe to infer that a significant proportion of the world's population has found some value in generative AI?
Reading the research, I was left with some questions about this divergence: are enterprises and consumers discovering value in fundamentally different ways? And if so, what can organizations learn from consumer adoption patterns?
Whilst we are often quick to dismiss consumer adoption in the world of business, it is worth remembering that in recent decades consumer technology has repeatedly shaped how we adopt new tools at work and interact with customers (particularly around mobile and social).
As such, new research from the National Bureau of Economic Research, analyzing 1.5 million ChatGPT conversations, offers fresh insight into how AI creates value in practice across the app's entire user base. The findings challenge some conventional assumptions about AI's applications out in the wild, while highlighting both opportunities and limitations for enterprise adoption.
The data reveals something unexpected: rather than primarily using AI for task automation, users are finding satisfaction in what the researchers call "decision support" - seeking information and advice to inform better choices. This pattern, while observed in consumer usage, may offer some clues to how enterprises could deliver more value - albeit with some 'creative' assumptions being made and more complex frameworks needed to establish ROI.
It's also worth reiterating that consumer ChatGPT usage is voluntary, low-stakes, and individual, whilst enterprise usage involves compliance, security, integration with existing systems, and organizational change management. The value creation systems are fundamentally different, which makes comparison difficult - but let's have a go regardless!
The consumer preference for decision support
What is interesting about the NBER study is that it highlights how 49% of ChatGPT interactions involve "Asking" - seeking information or advice - compared to 40% classified as "Doing" (task-oriented activities). The remaining 11% involve "Expressing" - personal reflection and creative exploration.
What makes this distribution noteworthy is its trajectory and satisfaction metrics. By June 2025, the "Asking" category had grown to 51.6%, while "Doing" declined to 34.6%. Moreover, "Asking" messages consistently receive higher user satisfaction ratings, suggesting users find greater value in AI as a thought partner than as a task executor.
The researchers write:
We argue that ChatGPT likely improves worker output by providing decision support, which is especially important in knowledge-intensive jobs where better decision-making increases productivity.
Looking more broadly at the enterprise technology market, it's fair to say that a lot of the use cases we are seeing - and the ROI discussions - are heavily weighted toward process automation and efficiency gains. This is, of course, understandable. Organizations face pressure to demonstrate measurable returns, and automation offers clearer metrics: reduced headcount, saved hours, improved throughput. However, according to the MIT report, the returns being seen in these areas aren't what was hoped for.
And although the MIT study faced criticism - tech initiatives historically show high failure rates amongst early adopters - the 95% figure grabbed headlines everywhere. It's worth asking: are we placing enough value on decision-making support? Would this 'soft metric' change those results?
A hybrid opportunity emerges
It might be worth thinking about this differently. Rather than viewing decision support and automation as competing priorities, successful organizations may need to embrace both. And we may need different frameworks for measuring value. The consumer data suggests a both/and rather than either/or approach, with different applications serving different needs.
Consider how this might work in practice:
- Automation: For repetitive, rule-based processes, automation remains key. Customer service chatbots, document processing, and data entry benefit from AI's ability to handle high-volume, structured tasks consistently and efficiently.
- Decision Enhancement: For strategic planning, market analysis, and complex problem-solving, AI's decision-support capabilities may offer greater value. Here, AI serves not as a replacement for human judgment but as an amplifier.
- Hybrid Applications: Many of the most promising enterprise applications combine both elements. An AI system might automate data gathering and initial analysis (doing) while providing insights and recommendations for human decision-makers (asking).
The NBER study found that 81% of work-related messages involve "obtaining, documenting, and interpreting information; and making decisions, giving advice, solving problems, and thinking creatively." This suggests knowledge workers are already discovering AI's decision-support value (potentially through shadow IT adoption?). Interestingly, only 4.2% of messages involved coding - very much a focus of enterprise AI, and firmly a 'Doing' task.
Users with graduate degrees also show a stronger preference for "Asking" over "Doing", while 47% of messages from computer-related occupations seek advice, compared to 32% from non-professional roles.
There's a lot to unpack from the research. For example, writing - fundamentally a "Doing" task - represents 42% of work-related ChatGPT messages. But even here, two-thirds involve editing and improving existing text rather than creating it from scratch, suggesting users seek AI's judgment and refinement capabilities alongside its generative abilities.
One crucial point to consider in this perceived enterprise-consumer value gap may be our ability to measure 'Asking' versus 'Doing'. Traditional ROI calculations do very well at quantifying automation: reduce headcount by X, save Y hours, achieve Z% efficiency gain. Decision-support value is more complex - but could offer higher-value, longer-term returns.
For instance, IBM research acknowledges that "better decision-making as executives and team leaders make more accurate decisions in less time" represents "soft ROI" - less straightforward to measure but potentially affecting long-term organizational health more significantly than operational efficiency gains.
Equally, Strategic Decisions Group has documented "billions of dollars of added value through better strategic decisions" over decades of consulting engagements. This suggests frameworks exist for measuring decision quality - even if it is challenging. The problem lies not in the impossibility of measurement but in developing different metrics and having patience for longer-term returns (albeit not always easy in quarter-to-quarter businesses).
My take
The usage data presented here, and the central thesis, shouldn't be taken as an argument for abandoning automation in exchange for decision support. And a lot of the usage obviously blurs work and consumer contexts - although it would be naive to assume that consumers aren't bringing their AI tools to the workplace, whatever their IT department supports.
Equally, I'm drawing broad conclusions from a variety of specific research that is assessing enterprise AI in its early stages - and trying to map this onto broader consumer trends. That's risky. Asking ChatGPT for support with life decisions isn't the same as operating in a restricted enterprise environment.
That being said (and I know this is a contradiction), it might be worth considering that organizations focusing exclusively on automation may be missing opportunities. The 700 million ChatGPT users have discovered value in AI as a thought partner and advisor - applications that many enterprises may not yet be considering, given how much attention is being placed on AI agents and process automation.
As MIT's failure rate data indicates, current enterprise approaches aren't delivering expected returns. While multiple factors contribute to these failures, the consumer research offers one lens for reconsideration: perhaps the highest value doesn't lie in replacing human work but in augmenting human judgment. Yes, vendors are talking about this, but I think the value of comparing both should be considered more closely. Instead of a bottom-up approach focused on efficiency gains, perhaps we should look at a top-down approach that adds more value?
Of course, the question for enterprise technology buyers isn't whether to pursue automation or decision support, but how to thoughtfully combine both approaches. Organizations that develop capabilities in both - and the measurement frameworks to evaluate them - may find themselves better positioned to capture AI's full potential. Although, as already noted, this may need more of a long-term lens, with 'soft value' included, which isn't always easy when building a business case.
But it's worth considering that, perhaps more often than current enterprise strategies assume, AI's greatest value lies in helping humans make better decisions - rather than simply freeing up our time from boring, repetitive tasks.