
Enterprise hits and misses - Claude Mythos needs a reality check, neurodiverse workplaces aren't there yet, and event season rolls on

Jon Reed - April 13, 2026
Summary:
This week - Anthropic's marketing of Claude Mythos was extravagant, but what is the reality for enterprise CISOs? Neurodiverse workplaces are better workplaces, but we aren't there yet. Context is key to enterprise AI - what are we learning? Events are still in season, and so are podcasts.

Robot's Hand Hitting Gavel At Desk © AndreyPopov - Canva.com

Lead story - Neurodiversity in the workplace - where are we now? 

Recent data indicates the tech sector thinks it's doing well with neurodiverse workplaces - but that doesn't seem to be the case. 

Cath picks up the story in a notable new series, starting with Employers are convinced they provide effective support to neurodivergent employees. Lived experiences suggest otherwise: 

Employers consistently rate their readiness more positively than neurodivergent employees report, with the latter describing higher unmet adjustment needs, lower psychological safety, and greater exposure to microaggressions across all sectors.

When we don't get this right, you could argue that we all lose. Cath cites the views of Sara Lobkovich, founder of an OKR coaching practice:

Lobkovich also makes the point that workplaces with purely neurotypical ways of working are often not very positive for any employees - although they inevitably tend to be particularly damaging for neurodiverse ones. 

As Lobkovich says, we lose out on the best of our talent: 

It’s about performance being left on the table as people’s perceptions are marginalized due to their communications style or lack of social capital and the way they fit into the workplace.

What can employers do about it? Cath picks that up in How employers can enable neuro-inclusion without alienating neuro-typical staff: 

Ultimately though, Harris believes the secret to success is engaging in "universal design", where the underlying principle is "if one brain thrives, everyone's does."

The article digs into the tactics... But for an overall workplace philosophy, I love this quote from Grant Harris of GTH Consulting: "It’s also about moving from friction to flow." Too often, we overlook the friction other team members experience, and lose out on the flow of their best contributions. But we can take a break from fantasizing about human-free workplaces and change that...

Diginomica picks - my top stories on diginomica this week

Vendor analysis, diginomica style. Here are my top choices from our vendor coverage:

.NEXT 2026 coverage - Mark Chillingworth issues a series of missives from the Nutanix .NEXT event. Pull quote:

You can see in some quarters a similar rush to move workloads to agentic AI that may not deliver a real cost saving. Analyzing the workload and how that workload marries to the bottom line is an essential skill of CIOs and will be tested by the AI frenzy. 

Coverage highlights:

A few more vendor picks, without the quotables:

Jon's grab bag - Stuart is back with another recap of news-not-to-miss in The long and the short of IT - the week in digibytes. Cath bears down on the energy management imperative in Universities model domestic energy use to help UK hit net zero goals. Stuart blew a gasket or two in Something for the weekend - where AI accountability goes to die when you stick a ‘human in the loop’ and call it ‘governance’

Still, if there are ‘joint reviews’, at least the ‘human in the loop’ can check the AI’s homework, eh? Er, not so fast...only 18% of organizations reckon to have “clear visibility” into both what AI recommends and the reasoning behind it, while 7%, who should hang their heads in shame, say their teams rely on AI decisions they do not fully understand.

Fortunately for us, he had a bit more spleen left on Monday, vis-a-vis the misadventures of "Sam(wise) Altman": Monday Morning Moan - madness, Molotovs, and Mordor. Is one AI ‘Ring to rule them all’ really worth all this? 

Finally, we're in dire need of new approaches to enterprise AI - and details on how to make it all work. I got some of those details from Zoho leadership, via our most recent Executive Intelligence podcast: Executive Intelligence podcast - why does Zoho approach AI differently? With Raju Vegesna and Ram Ramamoorthy. Zoho is known for memorable (and provocative) AI statements. Want a sample? How about Raju Vegesna, Zoho Chief Evangelist, telling a room full of industry analysts: "If someone can pull the plug on intelligence, are we really sovereign?"

Best of the enterprise web

Waiter suggesting a bottle of wine to a customer

My top seven

Claude Mythos - important topics, but unhelpful market noise

AI security is a huge issue - and we do need to monitor the models that can help (or hinder) our security/privacy. But Anthropic's sensationalized marketing tactics, e.g. 'We're not releasing Mythos to the general public,' are profoundly unhelpful to individuals and enterprises alike. Security expert Bruce Schneier has been following these issues closely for eons. In this recent video appearance, Schneier makes three crucial points:

  • If Mythos is "too dangerous" to release, why is a model small enough to run on a smartphone finding the exact same security flaws? "You don't need Mythos to find the vulnerabilities they found."
  • AI security cuts both ways: "The implications of all of these models for cybersecurity is significant, but also we should recognize that the technology benefits both the attacker and the defender."
  • Open source code has flaws: "Open source code is actually not as well audited as people expect it to be, and there's a complete lack of comparison in Anthropic's announcement to traditional tools, as you mentioned earlier."

Using AI models to assess code vulnerabilities is a good thing. But: if Mythos is so powerful, so incredible at detecting vulnerabilities, then why did Anthropic just leak 512,000 lines of their own source code through a package error? And why did they leave 3000 internal files, including a draft blog post about Mythos, sitting in a publicly searchable, misconfigured data store?

Does anyone else see a credibility gap in releasing such sensational news after an unintentional exposure of Claude Code's source code - arguably at the core of Anthropic's enterprise play - into the world? Look, Anthropic are near-genius level in their AI marketing chops. But there's a big enterprise catch: earning the trust of big companies is about making verifiable claims, not creating technical capability fog for buyers to unravel. This thoughtful analysis from the AI Security Institute is more what CISOs are looking for.

Overworked businessman

Whiffs

Readers know I like my whiffs satirical and lighthearted, but those were hard to come by this week. Sorry, serious whiff time. This was the lightest I could muster: 

WebinarTV Secretly Scraped Zoom Meetings of Anonymous Recovery Programs www.404media.co/webinartv-se...

-> rapidly climbing the ranks of the sleaziest "businesses" of all time

Jon Reed (@jon.diginomica.com) 2026-04-13T13:38:52.761Z

Other seriously whiffy headlines: 

At least this bad news won't affect most of us, unless we figure out how to upload our brains into virtual jars: 

Universe expected to decay in 10⁷⁸ years, much sooner than previously thought (Yeah, this one's from 2025, but it's new to me so...)

See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.

Image credit - Waiter Suggesting Bottle © Minerva Studio, Overworked Businessman © Bloomua - all from Adobe Stock. Feature image credited above.

Disclosure - Oracle, SAP, ServiceNow, UiPath, Sage and Salesforce are diginomica premier partners as of this writing.
