Lead story - Neurodiversity in the workplace - where are we now?
Recent data indicates the tech sector thinks it's doing well with neurodiverse workplaces - but that doesn't seem to be the case.
Cath picks up the story in a notable new series, starting with Employers are convinced they provide effective support to neurodivergent employees. Lived experiences suggest otherwise:
Employers consistently rate their readiness more positively than neurodivergent employees report, with the latter describing higher unmet adjustment needs, lower psychological safety, and greater exposure to microaggressions across all sectors.
When we don't get this right, you could argue that we all lose. Cath cites the views of Sara Lobkovich, an OKR coaching founder:
Lobkovich also makes the point that workplaces with purely neurotypical ways of working are often not very positive for any employees - although they inevitably tend to be particularly damaging for neurodiverse ones.
As Lobkovich says, we lose out on the best of our talent:
It’s about performance being left on the table as people’s perceptions are marginalized due to their communications style or lack of social capital and the way they fit into the workplace.
What can employers do about it? Cath picks that up in How employers can enable neuro-inclusion without alienating neuro-typical staff:
Ultimately though, Harris believes the secret to success is engaging in "universal design", where the underlying principle is "if one brain thrives, everyone's does".
The article digs into the tactics... But for an overall workplace philosophy, I love this quote from Grant Harris of GTH Consulting: "It’s also about moving from friction to flow." Too often, we overlook the friction other team members experience, and lose out on the flow of their best contributions. But we can take a break from fantasizing about human-free workplaces and change that...
Diginomica picks - my top stories on diginomica this week
- The thoughts of Chairman Dell - the vision of an enterprise tech survivor as he navigates another industry transition into an agentic era - What does Michael Dell think? Stuart has the answer... Hint: Michael Dell says his company is ready for the supply chain challenges/demand in the hardware sector.
- PyTorch Foundation adds Helion and Safetensors - and the open AI stack gets a little harder to ignore - Alyx gets an embargo lift, and we get some open source AI news. Note: open source AI costs versus frontier models are going to get a lot of attention from discerning CIOs this year. As Alyx writes: "Infrastructure nerdery is an acquired taste, but these are the kinds of problems that matter." Oh heck yes...
- Will AI dis-intermediate traditional Learning Management Systems providers? Not if Skillsoft CEO Ronald Hovsepian has anything to do with it... The learning space is disrupted, but how much? Some Hovsepian quotage via Stuart: "AI is increasing the urgency of workforce readiness. It is widening the skills gap faster than many organizations can close it."
Vendor analysis, diginomica style. Here are my top choices from our vendor coverage:
- ServiceNow ends the AI add-on era and defines its new platform approach - ServiceNow is pushing its AI strategy forward, so what's changed? No one better than Derek to explain where we are now.
- Box's latest agent co-ordinates multiple capabilities focused on enterprise content - Phil on Box's agentic pursuits - including more general purpose agentic orchestration.
- Why UiPath is re-designing its platform around agents that build automations, not just run them - Meanwhile, Alyx checks in on UiPath's agentic progress, including a vertical solutions push: "Rather than selling automation infrastructure to IT, the ambition is to sell defined outcomes to business units."
- Want AI outcomes? Yes - but how do customers get there? Inside Oracle's agentic apps news with Steve Miranda - Oracle recently announced 22 agentic apps - but what advice does Oracle have for customers? That was the kickoff for another discussion with Steve Miranda. Last week, I was at Oracle AI World New York City. More content to follow, but in the meantime, you can check my half-time show podcast recap with Rebecca Wettemann.
.NEXT 2026 coverage - Mark Chillingworth issues a series of missives from the Nutanix .NEXT event. Pull quote:
You can see in some quarters a similar rush to move workloads to agentic AI that may not deliver a real cost saving. Analyzing the workload and how that workload marries to the bottom line is an essential skill of CIOs and will be tested by the AI frenzy.
Coverage highlights:
- .NEXT 2026 - Nutanix tones down the Broadcom rhetoric, lets the customers do the talking
- .NEXT 2026 - AI cost management and FinOps top of mind for digital leaders
A few more vendor picks, without the quotables:
- Expanding access to visual design across the enterprise - Canva customers tell their stories - Phil
- Why Elastic thinks your observability data and your security data are the same problem - Alyx
- Sage analyst briefing - AI innovation progress on show - Brian
- From unused data to improved experiences - how Trillium Health Partners put patient voices to work with Qualtrics - Derek
Jon's grab bag - Stuart is back with another recap of news-not-to-miss in The long and the short of IT - the week in digibytes. Cath bears down on the energy management imperative in Universities model domestic energy use to help UK hit net zero goals. Stuart blew a gasket or two in Something for the weekend - where AI accountability goes to die when you stick a ‘human in the loop’ and call it ‘governance’:
Still, if there are ‘joint reviews’, at least the ‘human in the loop’ can check the AI’s homework, eh? Er, not so fast...only 18% of organizations reckon to have “clear visibility” into both what AI recommends and the reasoning behind it, while seven percent, who should hang their heads in shame, say their teams rely on AI decisions they do not fully understand.
Fortunately for us, he had a bit more spleen left on Monday, vis-a-vis the misadventures of "Sam(wise) Altman": Monday Morning Moan - madness, Molotovs, and Mordor. Is one AI ‘Ring to rule them all’ really worth all this?
Finally, we're in dire need of new approaches to enterprise AI - and details on how to make it all work. I got some of those details from Zoho leadership, via our most recent Executive Intelligence podcast: Executive Intelligence podcast - why does Zoho approach AI differently? With Raju Vegesna and Ram Ramamoorthy. Zoho is known for memorable (and provocative) AI statements. Want a sample? How about Raju Vegesna, Zoho's Chief Evangelist, telling a room full of industry analysts: "If someone can pull the plug on intelligence, are we really sovereign?"
Best of the enterprise web
My top seven
Claude Mythos - important topics, but unhelpful market noise
AI security is a huge issue - and we do need to monitor the models that can help (or hinder) our security/privacy. But Anthropic's sensationalized marketing tactics, e.g. 'We're not releasing Mythos to the general public,' are profoundly unhelpful to individuals and enterprises alike. Security expert Bruce Schneier has been following these issues closely for eons. In this recent video appearance, Schneier makes three crucial points:
- If Mythos is "too dangerous" to release, why is a model small enough to run on a smartphone finding the exact same security flaws? "You don't need Mythos to find the vulnerabilities they found."
- AI security cuts both ways: "The implications of all of these models for cybersecurity is significant, but also we should recognize that the technology benefits both the attacker and the defender."
- Open source code has flaws: "Open source code is actually not as well audited as people expect it to be, and there's a complete lack of comparison in Anthropic's announcement to traditional tools, like you mentioned earlier as well."
Using AI models to assess code vulnerabilities is a good thing. But: if Mythos is so powerful, so incredible at detecting vulnerabilities, then why did Anthropic just leak 512,000 lines of their own source code through a package error? And why did they leave 3000 internal files, including a draft blog post about Mythos, sitting in a publicly searchable, misconfigured data store?
Does anyone else see a credibility gap in releasing such sensational news after an unintentional exposure of Claude Code's source code - arguably at the core of Anthropic's enterprise play - into the world? Look, Anthropic are near-genius level in their AI marketing chops. But there's a big enterprise catch: earning the trust of big companies is about making verifiable claims, not creating technical capability fog for buyers to unravel. This thoughtful analysis from the AI Security Institute is more what CISOs are looking for.
- AI in Q1 2026: Less Magic, More Context, and the Death of the Outbound SDR - Thomas Wieberneit is on a roll with the CRM Konvos video recaps: "Your shiny new LLM is entirely useless without context. Stop obsessing over foundational models. Instead, focus on your internal data structures first. Curing context blindness means feeding your AI localized, curated data through retrieval-augmented generation (RAG) and structured knowledge graphs."
- The AI transformation manifesto - I don't agree with all of this from McKinsey... e.g. I'm not sure why we're so obsessed with speed and not quality, but there are some instructive points to consider here.
- Your AI talent planning algorithm is common sense - Constellation's Larry Dignan has some tough love on AI versus human talent for us here: "If not careful, enterprises are going to find themselves in a world of hurt and massive experience gaps."
- Video pick - Why Enterprise AI Fails Without a Context Layer - Atlan just released a terrific series on context graphs, semantic layers, and how the pieces fit together, or not. Here's one of the best from the series.
- Fresh podcasts ready for consumption: In addition to my Oracle AI World NYC recap, I issued two more podcasts for those discerning enterprise listeners: Why CX data fragmentation blocks AI progress - a live research review with Rebecca Wettemann, and The state of SAP, and where should customers go from here? An opinionated (and unscripted) review, with Josh Greenbaum and ASUG CEO Geoff Scott. I dropped a vigorous opening take on Scott and Greenbaum - check out their reactions.
Whiffs
Readers know I like my whiffs satirical and lighthearted, but those were hard to come by this week. Sorry, serious whiff time. This was the lightest I could muster:
WebinarTV Secretly Scraped Zoom Meetings of Anonymous Recovery Programs www.404media.co/webinartv-se...
-> rapidly climbing the ranks of the sleaziest "businesses" of all time
Other seriously whiffy headlines:
- Robotaxi passengers left stranded in dangerous traffic after outage hits more than 100 driverless vehicles
- “Educational” AI YouTube videos accused of teaching kids to play in traffic & eat toxic food (everything is awesome)
At least this bad news won't affect most of us, unless we figure out how to upload our brains into virtual jars:
Universe expected to decay in 10⁷⁸ years, much sooner than previously thought (Yeah, this one's from 2025, but it's new to me so...)
See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.