Enterprise hits and misses - AI fears meet SaaS software earnings, and OpenClaw's security holes keep CISOs up past bedtime
- Summary:
- This week - software earnings look pretty strong, but not strong enough - back to the AI versus software debate. Moltbook and OpenClaw took over the AI agent hype train, but what does it mean for enterprise? Short answer: security wake-up call. And: event season kicks in, as does our coverage.
diginomica picks - my top stories on diginomica this week:
- Seeing through the Marketing Data Mirage - and how to avoid it! - Barb has some well-tested advice for marketers in the thick of it: "Stepping back and taking the time to understand the customer and what data and intent signals will truly drive engagement and conversation is required."
- The Gen Z/Boomer Sales conflict - can AI be a generational equalizer? - Barb: "If a company just drops an AI solution in or starts telling employees to use the AI capabilities in the tool they have, it's not going to work. Adoption is probably the biggest challenge to leveraging AI successfully." Some difficult/interesting conversations here. Can AI help spark better inter-generational collaboration?
Also see: our latest diginomica network update, via Mark Chillingworth: Digital careers report - software firms tackle talent crunch through collaborative resource sharing.
Vendor analysis, diginomica style. Here's my three top choices from our vendor coverage:
- ServiceNow's Q4 beat - how solving AI's governance bottleneck is winning enterprise budgets - Derek on ServiceNow's solid earnings: "What was workflow orchestration across systems is becoming AI orchestration with governance, and that shift appears to be unlocking enterprise budgets."
- SAP cloud revenues rise on AI boost - with more to come, says CEO Christian Klein - Stuart on SAP's yearly earnings, including AI adoption numbers as well: "60% of our customers are already using our AI actively, 20% are on the way to it."
- CFOs and accountants are calling the shots when it comes to AI decision-making, says Sage Group CEO Stephen Hare. That matters when it comes to adoption and monetization - Stuart on Sage's upbeat quarter; he quotes Hare: "My view is that, in many ways, what's happening with AI favors incumbents because we have a large installed base. We have 40+ years of experience, including compliance, including all of the insights around how our customers do business, not just their General Ledger, the sort of system of record, but also their workflows etc."
AI versus software earnings special note: after the SAP and ServiceNow earnings, both stocks dipped, as software stocks in general are taking a pounding. The reasons for this are both complex and simple. In the case of SAP, I hopped onto an SAP earnings analysis podcast with ASUG Talks to hash this out further, so check that out for more.
I included the AI quotes above as I find it ironic that investors are counting out incumbent vendors in agentic AI, despite the technical fact that effective enterprise AI requires the kind of data (and data governance) that plays right to the strengths of (aggressive) incumbent vendors. Generative AI is not a small data/decentralized technology, though other emerging forms of AI may prove to be. Those who think pure play AI agents can unseat incumbents from outside are overlooking how limited these probabilistic technologies are at constructing sources of transactional and data truth without a causal grasp. (It also comes in handy when you can buy the AI startups with the best tech and ideas.) However, if incumbent vendors make it hard for their own partners to access data and build out, then I change my bet.
That said: I believe these market disruptions are healthy. Not every incumbent is pushing hard enough (that's an understatement), and I do believe some new players will emerge successfully - especially those with new/imaginative vertical plays. Pushing the limits of this tech also means pushing the limits of customer value, and that's a good thing. The rest is just navigating the festival of disruption real and imagined, as we check LinkedIn at our peril...
The tarmac misadventures road shows begin - Acumatica and Dynatrace... The spring (yes, spring) event season kicks in, with Alyx hitting Vegas for Dynatrace's user event, and Jon heading to Seattle for Acumatica's.
Dynatrace Perform 2026, my pick for quote-of-the-show coverage:
He discussed the need to maximize determinism before any LLM sees the problem... Modern observability reveals why this matters – a degraded checkout flow might trace back to database contention from unrelated analytics queries. When AI agents must traverse these causal chains without deterministic grounding, small errors compound quickly across each inference step.
- Dynatrace Perform 2026 – why performance problems rarely live where they appear
- Dynatrace Perform 2026 - why agentic AI only works when determinism comes first
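The compounding point in that quote is worth making concrete with a back-of-the-envelope calculation. Under the simplifying assumption that each inference step in an agent chain is independently correct with probability p, a chain of n steps succeeds end-to-end with probability roughly p^n - which is why "small errors compound quickly":

```python
# Back-of-the-envelope: end-to-end reliability of an n-step agent chain,
# assuming each step independently succeeds with probability p.
# (A simplification - real steps are rarely independent - but it shows the trend.)
def chain_reliability(p: float, n: int) -> float:
    return p ** n

# Even highly reliable individual steps erode quickly across a chain.
for steps in (1, 5, 10, 20):
    print(f"{steps:>2} steps at 98% per step -> {chain_reliability(0.98, steps):.1%}")
```

Twenty steps at 98% per-step accuracy lands somewhere around two-thirds end-to-end reliability - which is exactly why maximizing determinism before any LLM sees the problem matters.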
Indeed. My pick for Acumatica Summit 2026 pull quote so far:
Greenbaum is one of the fiercest critics of agentic AI amongst the analysts I see on the road. Meanwhile, I hash out the pros and cons of agentic AI frequently. One thing Greenbaum and I agree on: the ability to adapt to new industry conditions, and roll out business models faster than your competitors, is the real key here. Being effective with your human talent and your automation is what counts.
- Acumatica Summit 2026 - taking the pulse of cloud ERP, AI, and customer realities - more show content: two podcasts so far (with more video coming)... Acumatica Summit 2026 podcast - views and reactions on-site with Josh Greenbaum, and Acumatica Summit 2026 - the virtual podcast review with Brian Sommer.
A few more vendor picks, without the quotables:
- How the Vulnerability Registration Service is using Pega to help organizations better support vulnerable customers - Derek
- Safety first as insurance firm Safety National adopts Planful for FP&A - Phil
- AI that generates gives way to AI that executes? Genpact prepares for autonomous agents in the enterprise - Katy
Jon's grab bag - Madeline has a fresh tech-for-good use case in How Yorkshire Wildlife Trust has ditched manual expenses and invoice spreadsheets in favor of automated workflows. Is there a twist to the US innovation story? George raises the question in Something for the weekend - how ironic would it be if Trump 2.0 is actually inadvertently liberating us from Big Tech? If so, give the guy a Nobel Prize for Economic Science.
Can the UK excel in Quantum? Chris raises the question in Quantum technology – a century of opportunity? And if so, who owns it? Stuart raises the tough/looming question on AI regulation/accountability in Why we - and the AI industry - need a day in court for social media platform providers before they break open their checkbooks. Meanwhile, as Stuart reports, Meta is doubling down (on AI infrastructure): The AI spend goes on for Meta as CapEx estimates soar for 2026. Can you build a consumer AI business model on data consent, rather than lack thereof? Chris explores this potent question in What synthespians can teach us about IP - and about Grok’s horrifying actions.
Best of the enterprise web
My top six
CEOs Say AI Is Making Work More Efficient. Employees Tell a Different Story - this notable Wall Street Journal story raises too many questions to slice and dice this week (maybe next time). But just as there is a gap between vendor AI talk and where most customers are at, there appears to be that same gap inside many enterprises. But, a caution: it's early days. The workers surveyed aren't yet using AI for more complex tasks: "Far fewer used it for more-complex tasks like data analysis or code generation."
Editorial interruption: Moltbook is an AI-agents-only social network (supposedly), operated by OpenClaw (formerly Moltbot), the AI execution engine you can install on your computer, if the risks/rewards are to your liking. What does that have to do with enterprise, you ask? On with Hits and Misses:
OpenClaw proves agentic AI works. It also proves your security model doesn't. 180,000 developers just made that your problem - Louis Columbus issues the warning: "Simon Willison, the software developer and AI researcher who coined the term "prompt injection," describes what he calls the "lethal trifecta" for AI agents. They include access to private data, exposure to untrusted content, and the ability to communicate externally. When these three capabilities combine, attackers can trick the agent into accessing private information and sending it to them. Willison warns that all this can happen without a single alert being sent. OpenClaw has all three." Title quibble: agentic AI does indeed work, but this part isn't news. The question has never been whether agentic AI works, but whether it's reliable at scale. Therein lie the temptations - and consequences... But not before agentic AI security causes some premature CISO hair loss... Oh, and did I mention that getting agents to alert you to every small detail in your life is kinda expensive?
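Willison's "lethal trifecta" is, at heart, a policy check: any one or two of the capabilities may be fine, but all three together open the door to prompt-injection exfiltration. A minimal sketch of that check - the capability names here are illustrative, not a real OpenClaw API:

```python
# Hypothetical policy check for Willison's "lethal trifecta":
# private data access + untrusted content + external communication.
# An agent holding all three at once is exposed to prompt-injection exfiltration.
LETHAL_TRIFECTA = {"private_data_access", "untrusted_content", "external_comms"}

def has_lethal_trifecta(capabilities: set[str]) -> bool:
    """True only if the agent holds all three risky capabilities at once."""
    return LETHAL_TRIFECTA <= capabilities

# Two of three is a design trade-off; all three is the danger zone.
print(has_lethal_trifecta({"private_data_access", "untrusted_content"}))
print(has_lethal_trifecta({"private_data_access", "untrusted_content",
                           "external_comms", "scheduling"}))
```

The point of a check like this isn't to ban any single capability, but to force a deliberate decision before an agent is granted the full combination.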
Exposed Moltbook Database Let Anyone Take Control of Any AI Agent on the Site - Oh, remember Moltbook, the "AI agent only" social network? Perhaps not so much; seems misbehaving/clever humans are welcome also: "A misconfiguration on Moltbook’s backend has left APIs exposed in an open database that will let anyone take control of those agents to post whatever they want."
- Memory price surge may crimp on-premise AI, data center deployments - Constellation's Larry Dignan on the new costs buyers are contending with: "You can thank the AI infrastructure boom for the memory squeeze."
- My Answer to Vijay - a reader pressed Lora Cecere for some solution thinking on getting supply chain planning right in the AI era, and thus this post.
- Program Drift: The Hidden Threat to Your ERP Investment - UpperEdge with another keeper on the realities of enterprise projects: "Most organizations think about implementation risk in binary terms: success or failure, on time or late, on budget or over budget. But the most damaging phase sits in between, when a program appears stable on the surface but is structurally eroding underneath."
Whiffs
Running a tad low on whiffs of the humorous kind this week, but we do have this:
I Replaced My Friends With AI Because They Won't Play Tarkov With Me - that's the spirit!
Oh, and AI chats are leaking again:
Massive AI Chat App Leaked Millions of Users' Private Conversations www.404media.co/massive-ai-c...
users asked things like "how to make meth," and how to hack various apps
-> cue the awkward convos with family and employers
See you next time... If you find an #ensw piece that qualifies for hits and misses - in a good or bad way - let me know in the comments as Clive (almost) always does. Most Enterprise hits and misses articles are selected from my curated @jonerpnewsfeed.