
Enterprise hits and misses - AI adoption sparks an employee well-being debate, as agentic use cases gain traction

Jon Reed - February 23, 2026
Summary:
This week - AI adoption is inevitable, but employee well-being is another matter. Where do the solutions lie? Agentic use cases from commerce to collaboration are piling up, and so are the lessons. After a break, look who's back in the whiffs section...


Lead story - How can employers ensure AI does not wreck employee mental health?

Cath's latest two-parter hits on the anxious elephant in the room. Some of us hoped that AI would be about freeing us up from the admin grind, to do our best work - perhaps aided by a smart AI assistant. 

But too often, AI comes off as a hamster wheel accelerator, where we KPI our way to extinction, doing the "AI first" thing to keep pace, while hoping our employers don't roll the dice on our expendability via the same tools that were supposed to make work better.

The biggest thing I took from this series? The situation is complicated, and so are the answers. For every disconcerting AI impact, there might be another we didn't anticipate. Some teams/companies/individuals may thrive; others will be left behind. As Cath put it in Is AI good or bad for employee health and wellbeing? (Can you answer that one? You'd better learn how to...):

Over-relying on an AI that tells you what you want to hear can cause some people to reduce how much they interact with other humans, who may be perceived as more problematic. This, in turn, can bring about isolation and loneliness, which generates its own mental health issues.

As for some possible actions/responses, check Cath's How can employers ensure AI does not wreck employee mental health? (Spoiler - lunchtime yoga and a weekly mindfulness session aren't going to cut it anymore...). This quote from Dr. Shonna Waters jumped out: 

Training is important, but not more important than clarity. Organizations are spending millions on skills development, but 70% of employees say the biggest barrier to using AI confidently is understanding what the rules are. So, start with clarity and move onto training. Provide clear guidelines on acceptable use and embed the technology into workflows so the ambiguity is taken out.

Another way AI can cause stress is called out in Alyx's masterful When I selected, 'Rather not say', Gemini said, 'I'll decide for you'. In case it's not obvious, here's why that just won't do! Alyx's run-in with Gemini is indicative of a problem across AI services. They write: 

In Google account settings, a gender field states that gender may be used for personalization across services. I had selected "Rather not say". The system generated meeting notes referring to me as "she". People who have known me for a long time occasionally make the same mistake - that's part of human error and easily done. A system, however, encodes the inference into institutional records and circulates it without a mechanism for correction. The design choice is to infer identity rather than accept user-provided information.

Question: if today's AI is truly "smart," in all the ways we would hope our digital counterparts could be, wouldn't it be capable of asking us how we want to be referred to, and adjusting accordingly, rather than "inferring" in a probabilistic way, as per architectural policy? I'm reminded of a human at Google who, when I pointed out that Google search has been pulling the wrong author profile for years, told me, "we just have to wait until the machine gets it right."

To paraphrase Alyx, we need to stop seeing such things as glitches or bugs or edge cases, and start acknowledging them as AI design choices. With proper AI use case design and ethical data management, we can do much better - with today's AI technology. Today's AI may not always be smart, but the system design that oversees/supports/informs that AI can be. Alyx concludes:

The fixes are known: where users have declared a preference, respect it; where identity is uncertain, default to neutral; where the system gets it wrong, provide a way to correct the record. And underneath the technical fixes, a simpler expectation which shouldn't be that hard to ask for – when a system decides something about who you are, you should be able to ask where that came from and on what basis. Systems worth trusting can answer that. Most can't.
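Those fixes translate almost directly into code. Here's a minimal, hypothetical sketch (my names and structure, not any vendor's actual implementation) of what "respect a declared preference, default to neutral, and record the basis for the decision" could look like:

```python
# Hypothetical sketch of the design Alyx describes: honor user-declared
# preferences, default to neutral when identity is uncertain, and record
# provenance so the user can ask where a decision came from.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resolution:
    pronoun: str
    source: str  # provenance: the basis for this choice, queryable by the user

def resolve_pronoun(declared: Optional[str]) -> Resolution:
    if declared and declared != "Rather not say":
        # User-provided information always wins over any inference.
        return Resolution(declared, "user-declared preference")
    # Identity uncertain or withheld: default to neutral, never infer.
    return Resolution("they", "default (no declared preference)")
```

The point isn't the ten lines of logic; it's that the neutral default and the `source` field are design choices made up front, not patches applied after an inference goes wrong.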

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my top choices from our vendor coverage:

  • Dropbox FY2025 - flat results and guidance, still betting on Dash for growth - Phil assesses Dropbox's pros and cons: "Content repositories like Dropbox are a lot better placed to collate and make sense of that context than individual applications, and therefore vendors in the digital teamwork space do have a unique opportunity."
  • Figma crosses $1 billion in revenue - but it's who's doing the designing that matters - Derek delves into a very intriguing earnings report: "When Figma Make launched, my instinct was that it was primarily interesting as a prototyping tool - something that would be useful for design teams looking to move faster. The 60% non-designer figure shifts that framing."
  • The Julia Child method - how Atlassian taught 14,000 non-developers to build AI agents - A notable story from Alyx on how Atlassian is now operating agents at scale internally - one per employee and counting: "The numbers here are genuinely impressive," writes Alyx - agreed. But the caveat is well worth noting also: "One of the things I appreciated most about this conversation is the honesty about limitations. Saraf does not pretend that Atlassian has solved enterprise AI integration. The MCP gaps he describes are the same interoperability challenges that plague every organization trying to connect disparate systems. Atlassian benefits from running its own stack, but the moment third-party tools enter the picture, the friction returns." The single platform versus the heterogeneous landscape issue will remain for quite some time...
  • CX for industries heats up - SAP's Balaji Balasubramanian on why retailers must balance experience, operations, and, yes - AI - speaking of the heterogeneous enterprise, I air out my take on why industry CX is even more compelling to me than "intelligent" CX, and Balasubramanian responds, while adding insight on SAP's retail CX news/pursuits.

A few more vendor picks, without the quotables:

Jon's grab bag - Stuart examined the agentic commerce players in Making it pay - disruptive forces in the commerce sector demand more than just a tie-up with OpenAI. Mastercard and PayPal CEOs make their pitches. He also looked at DoorDash's conundrums/opportunities in As DoorDash drives through a huge tech re-platforming, can it deliver a future built on AI and AVs?

Madeline published a two-part series on Acumatica Summit's candid women in tech panel... Part one: The key to success as women in tech? Learnings from Acumatica Summit - record your wins, say yes, and don’t fixate on perfect answers. Meanwhile, George has an enterprise view of the big AI talent skirmish: OpenAI's AI talent war with Anthropic - leaving aside the point scoring, here's how the personal agent meme might shape the enterprise.

Mark Chillingworth took on the false dichotomy of innovation versus governance in How UK CIOs are governing AI without killing innovation - banking, retail, and academia perspectives. But sometimes regulation does go awry, as Stuart warns (with vigor) in No, Prime Minister, emphatically no! Why the UK Government's ill-judged VPN clampdown plans smack of desperation and danger:

We need considered, informed, consistent regulation and legislation, not an unseemly scrabble not only to clamber on the passing bandwagon, but to try to look as though you’re up front steering the damn thing when you don’t actually know how to drive!

A quote of the week nominee for sure...

Best of the enterprise web


My top six


Whiffs

Someone from Anthropic claimed that AI coding was "largely solved." Well, Amazon apparently hasn't solved it: 

Amazon Web Services vibe-codes itself an outage or two pivot-to-ai.com/2026/02/20/a...

"the agentic tool determined that the best course of action was to 'delete and recreate the environment'"

-> everything is awesome lol

Jon Reed (@jon.diginomica.com) 2026-02-22T22:27:38.759Z

After a week or two out, Microsoft easily elbowed back into the whiffs section again: 

Microsoft Copilot ignored sensitivity labels twice in eight months — and no DLP stack caught either one venturebeat.com/security/mic...

"For four weeks starting January 21, Microsoft's Copilot read and summarized confidential emails despite every sensitivity label and DLP policy telling it not to."

Jon Reed (@jon.diginomica.com) 2026-02-22T23:01:43.096Z

And did you know that you, fellow human, are consuming too much energy? Sam Altman would like a word: 

Sam Altman would like to remind you that humans use a lot of energy, too | TechCrunch techcrunch.com/2026/02/21/s...

"it also takes a lot of energy to train a human"

-> now that is world-class misdirection; kudos, Mr. Altman, on that one. One difference I can think of: we need food to live.

Jon Reed (@jon.diginomica.com) 2026-02-22T22:40:20.767Z

See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.

Image credit - Waiter Suggesting Bottle © Minerva Studiom, Overworked Businessman © Bloomua - all from Adobe Stock. Feature image - Snowboard crash @svariophoto - Shutterstock.com.

Disclosure - Oracle, SAP, Zoho, Atlassian, Acumatica and Salesforce are diginomica premier partners as of this writing.
