Enterprise hits and misses - AI adoption sparks an employee well-being debate, as agentic use cases gain traction
- Summary:
- This week - AI adoption is inevitable, but employee well-being is another matter. Where do the solutions lie? Agentic use cases from commerce to collaboration are piling up, and so are the lessons. After a break, look who's back in the whiffs section...
Lead story - How can employers ensure AI does not wreck employee mental health?
Cath's latest two-parter hits on the anxious elephant in the room. Some of us hoped that AI would be about freeing us up from the admin grind, to do our best work - perhaps aided by a smart AI assistant.
But too often, AI comes off as a hamster wheel accelerator, where we KPI our way to extinction, doing the "AI first" thing to keep pace, while hoping our employers don't roll the dice on our expendability via the same tools that were supposed to make work better.
The biggest thing I took from this series? The situation is complicated, and so are the answers. For every disconcerting AI impact, there might be another we didn't anticipate. Some teams/companies/individuals may thrive; others will be left behind. As Cath put it in Is AI good or bad for employee health and wellbeing? (Can you answer that one? You'd better learn how to!):
Over-relying on an AI that tells you what you want to hear can cause some people to reduce how much they interact with other humans, who may be perceived as more problematic. This, in turn, can bring about isolation and loneliness, which generates its own mental health issues.
As for some possible actions/responses, check Cath's How can employers ensure AI does not wreck employee mental health? (Spoiler - lunchtime yoga and a weekly mindfulness session aren't going to cut it anymore...). This quote from Dr. Shonna Waters jumped out:
Training is important, but not more important than clarity. Organizations are spending millions on skills development, but 70% of employees say the biggest barrier to using AI confidently is understanding what the rules are. So, start with clarity and move on to training. Provide clear guidelines on acceptable use and embed the technology into workflows so the ambiguity is taken out.
Another way AI can cause stress is called out in Alyx's masterful When I selected, 'Rather not say', Gemini said, 'I'll decide for you'. In case it's not obvious, here's why that just won't do! Alyx's run-in with Gemini is indicative of a problem across AI services. They write:
In Google account settings, a gender field states that gender may be used for personalization across services. I had selected "Rather not say". The system generated meeting notes referring to me as "she". People who have known me for a long time occasionally make the same mistake - that's part of human error and easily done. A system, however, encodes the inference into institutional records and circulates it without a mechanism for correction. The design choice is to infer identity rather than accept user-provided information.
Question: if today's AI is truly "smart," in all the ways we would hope our digital counterparts could be, wouldn't it be capable of asking us how we want to be referred to and adjusting accordingly, rather than "inferring" in a probabilistic way, as per architectural policy? I'm reminded of a human at Google who, when I pointed out that Google search has been pulling the wrong author profile for years, told me, "we just have to wait until the machine gets it right."
To paraphrase Alyx, we need to stop seeing such things as glitches, bugs, or edge cases, and start acknowledging them as AI design choices. With proper AI use case design and ethical data management, we can do much better - with today's AI technology. Today's AI may not always be smart, but the system design that oversees/supports/informs that AI can be. Alyx concludes:
The fixes are known: where users have declared a preference, respect it; where identity is uncertain, default to neutral; where the system gets it wrong, provide a way to correct the record. And underneath the technical fixes, a simpler expectation which shouldn't be that hard to ask for – when a system decides something about who you are, you should be able to ask where that came from and on what basis. Systems worth trusting can answer that. Most can't.
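To make that concrete: here is a minimal, purely illustrative sketch of the policy Alyx describes - respect a declared preference, default to neutral when identity is uncertain, and log the basis of each decision so it can be queried and corrected. The class and method names are hypothetical, not from any real system:

```python
# Hypothetical sketch of the policy described above. All names here
# (PronounResolver, resolve, explain, correct) are illustrative assumptions.
from dataclasses import dataclass, field

NEUTRAL = "they"  # neutral default when no preference is declared

@dataclass
class PronounResolver:
    declared: dict = field(default_factory=dict)   # user_id -> declared pronoun
    audit_log: list = field(default_factory=list)  # (user_id, pronoun, basis)

    def resolve(self, user_id, inferred=None):
        """Pick a pronoun for user_id; any model inference is ignored."""
        if user_id in self.declared:
            # 1. Where the user has declared a preference, respect it.
            pronoun, basis = self.declared[user_id], "user-declared"
        else:
            # 2. Where identity is uncertain, default to neutral -
            #    never silently promote an inference into the record.
            pronoun, basis = NEUTRAL, "default-neutral"
        self.audit_log.append((user_id, pronoun, basis))
        return pronoun

    def explain(self, user_id):
        """3. 'Where did that come from, and on what basis?'"""
        entries = [e for e in self.audit_log if e[0] == user_id]
        return entries[-1] if entries else None

    def correct(self, user_id, pronoun):
        """Correction mechanism: a declared value overrides everything after."""
        self.declared[user_id] = pronoun
```

The design point is less the code than the shape of it: the inference (`inferred`) is an input the system may hold but never acts on without consent, and every output carries a queryable basis.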
Diginomica picks - my top stories on diginomica this week
- Sparks fly as Walmart's AI shopping assistant gets ready to go global - can Sparky take Walmart into agentic commerce nirvana and transform the shopping experience? Stuart: "Customers who use Sparky have an average order value that's about 35% higher than non-Sparky customers, so expect further emphasis to be placed here."
- DORA the Explorer - how to unpack AI capabilities paradoxes - and why you need to know if you're in coding - how mature is AI coding? George's piece includes a keeper quote from DORA: "More importantly, we argue that individual effectiveness should not necessarily be pursued as a goal in and of itself. Rather, individual effectiveness is a means to realize greater organizational, team, and product performance, as well as improved developer well-being."
- Making it pay - disruptive forces in the commerce sector demand more than just a tie-up with OpenAI. Mastercard and PayPal CEOs make their pitches - as Stuart notes, "non-traditional" competitors are emerging in agentic commerce. But "Mastercard and PayPal execs reckon they are ready for the disruption."
- CIO Jasim Abdul Rahman uses centralization to power up Power International Holding - Mark Chillingworth has a fresh CIO profile via the diginomica network, which includes the creation of an AI hub for employees, complete with governance framework.
Vendor analysis, diginomica style. Here's my top choices from our vendor coverage:
- Dropbox FY2025 - flat results and guidance, still betting on Dash for growth - Phil assesses Dropbox's pros and cons: "Content repositories like Dropbox are a lot better placed to collate and make sense of that context than individual applications, and therefore vendors in the digital teamwork space do have a unique opportunity."
- Figma crosses $1 billion in revenue - but it's who's doing the designing that matters - Derek delves into a very intriguing earnings report: "When Figma Make launched, my instinct was that it was primarily interesting as a prototyping tool - something that would be useful for design teams looking to move faster. The 60% non-designer figure shifts that framing."
- The Julia Child method - how Atlassian taught 14,000 non-developers to build AI agents - A notable story from Alyx on how Atlassian is now operating agents at scale internally - one per employee and counting: "The numbers here are genuinely impressive," writes Alyx - agreed. But the caveat is well worth noting also: "One of the things I appreciated most about this conversation is the honesty about limitations. Saraf does not pretend that Atlassian has solved enterprise AI integration. The MCP gaps he describes are the same interoperability challenges that plague every organization trying to connect disparate systems. Atlassian benefits from running its own stack, but the moment third-party tools enter the picture, the friction returns." The single platform versus heterogeneous landscape issue will remain for quite some time...
- CX for industries heats up - SAP's Balaji Balasubramanian on why retailers must balance experience, operations, and, yes - AI - speaking of the heterogeneous enterprise, I air out my take on why industry CX is even more compelling to me than "intelligent" CX, and Balasubramanian responds, while adding insight on SAP's retail CX news/pursuits.
A few more vendor picks, without the quotables:
- Five years from now, there will be more AI legacy systems to deal with than any other kind! Infosys co-founder Nandan Nilekani has some enterprise pragmatism to dole out as the deployment gap widens - Stuart
- Appian's 'AI needs process' thesis goes mainstream - now it has to prove the math works - Alyx
- "AI that executes on its own, not AI that supports" - where are humans in Fujitsu's bold software engineering vision for the future? - Stuart
Jon's grab bag - Stuart examined the agentic commerce players in Making it pay - disruptive forces in the commerce sector demand more than just a tie-up with OpenAI. Mastercard and PayPal CEOs make their pitches. He also looked at DoorDash's conundrums/opportunities in As DoorDash drives through a huge tech re-platforming, can it deliver a future built on AI and AVs?
Madeline published a two-part series on Acumatica Summit's candid women in tech panel... Part one: The key to success as women in tech? Learnings from Acumatica Summit - record your wins, say yes, and don’t fixate on perfect answers. Meanwhile, George has an enterprise view of the big AI talent skirmish: OpenAI's AI talent war with Anthropic - leaving aside the point scoring, here's how the personal agent meme might shape the enterprise.
Mark Chillingworth took on the false dichotomy of innovation versus governance in How UK CIOs are governing AI without killing innovation - banking, retail, and academia perspectives. But sometimes regulation does go awry, as Stuart warns (with vigor) in No, Prime Minister, emphatically no! Why the UK Government's ill-judged VPN clampdown plans smack of desperation and danger:
We need considered, informed, consistent regulation and legislation, not an unseemly scrabble not only to clamber on the passing bandwagon, but to try to look as though you’re up front steering the damn thing when you don’t actually know how to drive!
A quote of the week nominee for sure...
Best of the enterprise web
My top six
- How attackers hit 700 organizations through CX platforms your SOC already approved - Louis Columbus has some cautionary words on CX data: "No tool in a security operation center leader’s stack inspects what a CX platform’s AI engine is ingesting, and attackers figured this out. They poison the data feeding it, and the AI does the damage for them."
- Walmart exec says it’s ‘unfortunate’ that other companies are slashing workforces in the name of AI—it’s offering training to 1.6 million workers instead - Bring on the contrast between companies not hiring/training new workers, and those who want to compete with young talent and AI.
- What AI Security Research Looks Like When It Works - We see plenty of concerning AI security stories, but it's good to see one on the other side, with Anthropic finding 500+ security holes in open source systems.
- Reports find agentic AI is running into limits of how work is organized - Ron Miller parses Deloitte data on where the enterprise AI roadblocks currently lie.
- How "Context Graphs" are Redefining the AI Landscape - Constellation's Esteban Kolsky critiques the surge of interest in context graphs... But he also provides useful links to more content on why context signals are a data obsession (more on this in my ZohoDay review piece also).
- ZohoDay 2026 - the unscripted podcast review with Brian Sommer - more ZohoDay coverage to follow, but for now, this on-the-clock podcast gets into Zoho's ERP announcement, the pursuit of the AI stack, and the risks in the SaaS software market.
Whiffs
Someone from Anthropic claimed that AI coding was "largely solved." Well, Amazon apparently hasn't solved it:
Amazon Web Services vibe-codes itself an outage or two pivot-to-ai.com/2026/02/20/a...
"the agentic tool determined that the best course of action was to 'delete and recreate the environment'"
-> everything is awesome lol
After a week or two out, Microsoft elbowed its way back into the whiffs section:
Microsoft Copilot ignored sensitivity labels twice in eight months — and no DLP stack caught either one venturebeat.com/security/mic...
"For four weeks starting January 21, Microsoft's Copilot read and summarized confidential emails despite every sensitivity label and DLP policy telling it not to."
And did you know that you, fellow human, are consuming too much energy? Sam Altman would like a word:
Sam Altman would like to remind you that humans use a lot of energy, too | TechCrunch techcrunch.com/2026/02/21/s...
"it also takes a lot of energy to train a human"
-> now that is a world class misdirection, kudos Mr. Altman on that one. One difference I can think of: we need food to live.
See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.