Workday Rising 25 - Workday rejects the AI jobs apocalypse, and makes its agentic SaaS case
- Summary:
- At Workday Rising 2025, the AI jobs controversy was a hot topic. Workday's stance on this issue goes back to what they've learned building AI agents. This piece takes you through my talks with Workday leadership, and into the heart of their AI (and SaaS) strategy.
The impact of AI on jobs is a serious issue, worthy of vigorous discussion. But frantic 'AI jobs apocalypse' narratives do a disservice to that same discussion. No question, AI has impacted jobs in creative and service professions, as well as junior-level roles.
AI versus jobs - front and center at Workday Rising
But when enterprise vendors double down on so-called 'autonomous enterprises,' you get the impression that in a handful of years, we might need to leave a few humans to wipe the dust off of GPUs, and that's about it. That's not a great vibe for AI adoption.
At Workday Rising, we heard a decidedly different tone, starting with CEO Carl Eschenbach's opening keynote remarks. As I wrote:
In yesterday's opening keynote, Workday CEO Carl Eschenbach spoke directly to these issues. He acknowledged we have a "trust issue" with employees about AI. Though Eschenbach did talk up productivity gains, driven by a "new Workday" with AI at the center, he made clear that "we need to change the narrative," and that we need to "bring our employees along with us on this journey." In Workday's vision, he emphasized, AI "is not a threat."
In subsequent remarks, Eschenbach went further. As he told industry analysts during an executive Q/A panel:
I just don't share the same sense that AI is going to displace all human workers. Some of our peers are out there talking about 10, 20, 30 percent of the work going away because of AI. I just don't see that happening. It happens in every tectonic shift. People move on and do different roles, different tasks, different jobs. They become more strategic, and we're seeing that happen here.
The [World] Economic Forum believes there will be 11 million jobs, along with 11 million AI jobs created in the next five years, and 78 million net new jobs globally will be added as well. So people aren't going away. We are the fabric of what we represent in the workplace. We leverage technology to become more productive. So I'm a big fan of AI and people working together. That's why we say 'AI-powered, human-centric in the middle, ready for the future' - and that's hopefully what we've highlighted this week.
Workday's take on humans-versus-agents derives from hands-on engineering lessons. Bolstered by executive AI talent from the likes of Google, Workday pushes back on the so-called "death of SaaS," in a technically-informed way.
AI agents and the purported death of SaaS - Workday squares off
As you might expect, these topics were very much on the minds of the analyst contingent, who are not immune to hype cycles either.
Things came to a head when an analyst asked Workday's exec panel if they foresee a day "where agents are actually going to fulfill the whole end-to-end process and communicate with that back-end data, and the process layer itself disappears" - therefore ushering in the "death of software."
Gerrit Kazmaier, President, Product and Technology at Workday - and one of those Google exec recruits - pushed back. As he explained:
I know it makes for a killer headline, when someone says, 'SaaS is dead,' or 'RIP SaaS.'
How far could this go? Kazmaier:
[With AI], you have a process running, and now you have the opportunity, for the first time, to take human reasoning and have a computational reasoning step running to take the next step. That's the automation opportunity that you actually see, when AI is being applied successfully. There is this frontier idea, which is really out there, which is saying that, basically, the model just figures out everything. There is no process anymore, right?
But Kazmaier's AI experience leads him to a different take:
Models are very poor still at instruction following, and it remains to be seen if they ever go past these barriers. It's evident that models are really good at single-task completions - actually only the specific category of tasks, everything which can be trained from public domain knowledge. And it's pretty clear that, when you build sophisticated systems, regardless of what they are, it's actually very hard systems engineering to make models do the right thing as a part of a process.
This is what Peter and I worked on at Google, basically engineering systems that make AI work in an enterprise context by providing the right information, by telling them what to do now - what's the next step to do? What's the business objective? Are we meeting this objective, having many smaller models around it, basically to figure out: did the model do something correctly? If it did something wrong, how do we correct this error now? [Author's note: 'Peter' is Peter Bailis, Workday CTO, who also previously worked at Google].
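Kazmaier's description of "many smaller models around it" checking each step amounts to a generate-verify-correct loop. Here is a minimal sketch of that pattern; the function names and the toy expense-approval example are my own illustrative assumptions, not Workday or Google code:

```python
# Sketch of a generate-verify-correct loop: a main model proposes a step,
# smaller checker models validate it, and a correction pass runs when a
# check fails. All functions here are hypothetical stand-ins.

def run_step(task, generate, verifiers, correct, max_retries=2):
    """Run one process step, re-generating until all verifiers pass."""
    output = generate(task)
    for _ in range(max_retries):
        failures = [name for name, check in verifiers.items()
                    if not check(task, output)]
        if not failures:
            return output  # all checks passed
        output = correct(task, output, failures)  # targeted repair pass
    return output  # best effort after retries

# Toy example: the "model" drafts an expense approval, and a checker
# enforces a business rule (every approval needs a cost center).
draft = lambda task: {"amount": task["amount"], "cost_center": None}
needs_cc = lambda task, out: out["cost_center"] is not None
fix = lambda task, out, fails: {**out,
                                "cost_center": task.get("cost_center")}

result = run_step({"amount": 1200, "cost_center": "CC-42"},
                  draft, {"has_cost_center": needs_cc}, fix)
```

The point of the sketch: the "intelligence" lives in the main generate call, but the reliability comes from the cheap checks wrapped around it, which is the systems-engineering work Kazmaier describes.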
However: Kazmaier sees a major fork in the road for SaaS vendors:
The bottom line to your question is: there's going to be two sets of enterprise SaaS vendors. One category is the ones who are just taking AI and running this over the legacy APIs to make the system look smarter. Usually, you see this in the form of a side panel, right? We have this great new system called 'so and so,' right? It pops up on the side, and it tries to automate with a new UI. Those are the ones who will probably never even get close to the value that AI can deliver - and those are the ones who are dead in the future.
"Big bets" on acquired software - behind the Sana news
So what does the winning type of SaaS category look like? Kazmaier:
The other category is the one who is saying, 'Let's re-engineer the system for AI' - what you heard this week. 'How do we create a uniform data foundation so that all of our AI models can be trained and powered from it?' 'How do we incorporate AI into our business process engine so that it can call AI agents, basically to take the human cost and compress them down into a software cost'. And: let's rethink - which I think is the most important part - how people interface with software in the future?
You know Sana, right? What is the work experience of the future? It's contextual and intelligent, and it helps people get work done quicker. And I think we are the one company in the second category who has been bold enough to re-engineer the core, build on it, and frankly, make some big bets. [Author's note: see Stuart Lauchlan's piece on Workday's Sana acquisition, Workday Rising 25 - will Workday's $1.1 billion acquisition of Sana Labs really be L&D's "iOS for enterprise" moment?]
Those "big bets" include an unprecedented flurry of acquisitions, including Sana. For Workday, enterprise AI is about providing AI with better context than a consumer LLM can pull off. And yes, much of that context is thanks to Workday's SaaS platform. In the inevitable second round of "Is SaaS dead?" questions, Kazmaier explained why context matters:
The key point is that all of these agents, they need to have a software interface for which they get exposed. Chat, which is our primary way of consuming Gemini, ChatGPT, or whatever your favorite model is, that just works for a limited number of tasks. It's not really conducive for the enterprise context, right? We do have workflows, like we talked about. We do have states. We have controls, like tables, where people need to check, like Peter said, 'This is the payroll - do we want to run it like that?' So there is an interplay between app and agent.
Yep - that's why Workday is acquiring Sana. Workday intends to transform the user experience in the agentic layer:
This is going to converge in these new experiences like Sana. What it means, from a vendor perspective, is basically giving our applications either agents to automate the workflows, or providing interfaces, so that customers can basically connect their own agents. What we showed with Microsoft yesterday: build something in Microsoft Copilot, interface with Workday for the onboarding experience, and run it through Microsoft's third party ecosystem.
SaaS and agents pitted against each other? That's not how Workday sees it. Kazmaier adds:
But I personally think that whole AI-versus-the-core-application-narrative - that's a falsehood, right? Because in the end, applications are running processes. Processes provide the underpinning for agents to run on. If you design them well together, they're like a flywheel. They make each other better. They're not trading each other off.
Can different LLM architectures get different results? We're about to find out
I've blown several gaskets critiquing the now-infamous MIT "95 percent of AI projects don't make it out of the pilot phase" study. This is not the only study to find that generative AI projects really haven't given companies a boost in productivity. But here's what the media frenzy missed: the report's authors are still bullish on AI, because of the five percent that are succeeding - and, more importantly, how they are doing it.
The studies that cast a grim view of generative AI productivity are based on out-of-the-box use of LLMs. Vendors like Workday, by contrast, are putting LLMs into what I call a more "constrained" architecture, using everything from tool calls to smaller models to improve "context" - and output relevance. In an upcoming video with the Workday Evisort team, I talk with Evisort about how they provide superior context with smaller models - models that include only the customers' documents.
Along those lines, we're starting to see some results. These results look much more compelling than "AI helped write me an email." As for documenting the customer benefits of this type of contextual AI architecture at scale, it's early days, but last week in San Francisco, Workday shared a few of their own. One of the best examples came via Rob Enslin, who told analysts:
As an example, our contract negotiation agent, we know it's saving us 45,000 hours, right? We know that we can actually now take that across our master services agreements, our legal agreements, our services, statements of work - and just get super-efficient at a very, very core process, and actually take the lawyers out of the discussion. And that's made our legal organization effective, focused on things that matter...
But hold up: if Eschenbach's AI-for-growth, not-just-productivity message is valid, then shouldn't the legal team be having an outsized impact? As Enslin told me:
So if you look at the things that matter, going into new countries, setting up legal systems in new countries, going into governments, building out government legal processes that are different - now, the team's focused on that kind of stuff, versus negotiations.
Before, we would have to go find and hire somebody, and get a new person added and so on. We're now capable of actually doing that with the existing teams that we have. And that also allowed legal folks to start focusing on things like policy. Policy for AI - we've still got to figure out how we're going to drive that. It's not going to go away, right?
Why Workday's Shane Luke is bullish on smaller AI models
During my interview with Shane Luke, Workday's VP Product and Engineering, Head of AI & Machine Learning, I asked him: what aspects of this context approach are most compelling to him? Reasoning engines? RAG with knowledge graphs? Agents using tool verifiers? Or perhaps the impact of smaller models? Luke singled out the orchestration of smaller models:
I do think that multi-step model systems are really exciting. The typical case for somebody who's interacting with LLMs today is that they're interacting with really large scale LLMs online, right? So they're using Grok, or they're using ChatGPT, or they're using Gemini or Claude or whatever, and those are 500-billion-plus parameter models in that model family... Those are great, but they're not something that you can actually reasonably run in an enterprise setting, and stay tenanted, right?
In certain cases, that's totally fine. If all you're doing is prompting, that might be totally fine. But in a lot of cases, you might need something that does more than that. It might actually need to be tuned in some fashion. And so then you can get the benefit of these large scale models - probably interacting with them through an API from a provider - but then also get the benefit of a really specialized model that's maybe been either taught by a larger model, or tuned in some other way.
That's pretty exciting. To me, that gets down to where you have a generalist at the top, and then you have the specialist model that's actually doing the particular task.
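Luke's "generalist at the top, specialist doing the task" pattern can be sketched as a simple router. The class, the model names, and the dict-based registry below are illustrative assumptions on my part, not Workday's implementation:

```python
# Sketch of generalist/specialist orchestration: a router sends known
# task types to small, tuned models and falls back to a large general
# model for everything else.

class ModelRouter:
    def __init__(self, generalist, specialists):
        self.generalist = generalist    # large frontier-style model
        self.specialists = specialists  # task_type -> small tuned model

    def answer(self, task_type, prompt):
        # Prefer a specialist for this task; otherwise use the generalist.
        model = self.specialists.get(task_type, self.generalist)
        return model(prompt)

# Stand-in "models" are plain functions for the purposes of the sketch.
generalist = lambda p: f"[generalist] {p}"
benefits_model = lambda p: f"[benefits-tuned] {p}"

router = ModelRouter(generalist, {"benefits_enrollment": benefits_model})
router.answer("benefits_enrollment", "enroll in dental")  # specialist path
router.answer("open_question", "summarize my week")       # generalist path
```

In practice, the routing decision itself is often made by a model rather than a lookup table, but the economics Luke describes are the same: the expensive generalist handles the long tail, while cheap specialists handle the high-volume, well-defined tasks.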
Small models are key for enterprise tasks, which have a degree of variability:
One thing that I think is really important - and this is probably under-discussed in enterprise - is that the task, when we talk about a task, and we give it the same name from one company to another... Yes, at a high level, it might be the same task: how you do a change job, or how you sign up for benefits, or how you do whatever task. But the tasks actually vary a lot, right?
That's part of the Workday product. You configure it, set it up, and run it the way you need to do it, the way you want it for your particular business. And a lot of those variations are kind of hidden, and so being able to have these smaller models actually tuned down at a tenant level gives you the potential for accuracy that's much higher than a highly generalized model that doesn't know all those nuances and differences.
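Tenant-level tuning implies resolving a model per tenant and task, with a shared base model as the fallback. A minimal sketch of that resolution logic, under my own assumptions about the registry layout (this is not Workday's implementation):

```python
# Sketch of tenant-level model resolution: each tenant can have its own
# tuned variant of a task model, kept within its tenant, while untuned
# tenants fall back to a shared base model.

class TenantModelRegistry:
    def __init__(self, base_models):
        self.base = base_models    # task -> shared base model id
        self.tenant_tuned = {}     # (tenant, task) -> tenant-tuned model id

    def register(self, tenant, task, model_id):
        self.tenant_tuned[(tenant, task)] = model_id

    def resolve(self, tenant, task):
        # Prefer the tenant's own tuned model; fall back to the shared base.
        return self.tenant_tuned.get((tenant, task), self.base[task])

registry = TenantModelRegistry({"change_job": "base-change-job-v1"})
registry.register("acme", "change_job", "acme-change-job-tuned-v3")

registry.resolve("acme", "change_job")    # tenant-tuned variant
registry.resolve("globex", "change_job")  # shared base model
```

The interesting part is what the fallback enables: a tenant only pays the cost of tuning for the tasks where its configuration genuinely diverges, which is exactly the "hidden variation" Luke is pointing at.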
That gets back to my Evisort example: train a small model on a few thousand company-specific documents - and if the user wants to add new features, quickly spin up a new model on the fly. That's not viable with "frontier" models. Luke anticipates enterprise wins here:
The potential of having that, the way you could maintain tenancy while doing that, the way you could also get higher accuracy for a particular business, and have these things tuned through business, I think that's exciting. And it's not talked about that much, but I actually think that that's where the wins are going to be in enterprise. [Author's note: Luke's comments on 'maintaining tenancy' are significant, because this is a way for customers to adapt/tune/retrain smaller models with their own private data, in their own 'tenant', while still being managed by Workday. To do this with internally-hosted Large Language Models would be cumbersome, expensive, and not always feasible, especially when it comes to frequently training, tuning or dynamically spinning up smaller models with new training content].
Not to mention that smaller models bring better price points - and lower energy consumption for training and inference than large models. Even though Workday has been careful about AI pricing, at some point the price of compute factors in - and LLM vendors are feeling the investor squeeze, amidst "AI bubble" talk. When LLM providers bump up pricing, it has downstream impacts across AI markets. Luke:
Unit economics have to work out. Otherwise it doesn't hold in the long run. In the early days of this wave of LLMs and AI, the funding has been so generous at the frontier level that nobody's worried about it. But that's starting to change, and it has to.
My take - enterprise AI has a lot to prove
Workday seems to have turned a corner, by linking its "human-centric" AI narrative to what this tech is (currently) capable of. But it's early days for this phase of enterprise AI. We'll need many more customer success stories.
We'll also need to see how Workday's new flex credit pricing for AI factors into customers' ROI calculations. Nor do we have significant studies about the project success of contextual AI - what I call "constrained LLMs in compound architectures." (Some have even gone so far as to call this type of architecture "neurosymbolic AI." I don't agree, but that's a debate for another time).
Workday is right: a shift from productivity to productivity-driven growth is important - not just for AI job growth, but for employee morale. Perhaps being fused to an agent is better than losing your job to one, but humans perform best when they are not just over-caffeinated productivity machines, chasing high-volume KPIs from behind.
That said, you know me - I always see room for improvement. Of the new agents announced, Workday's Performance Agent is, on paper, the highest risk agent of the group. I believe the audience could have gained from more detail on how Workday has mitigated/addressed those risks.
Yes, Workday said the right things about agents not actually writing the performance reviews. But in my interviews with Workday's AI leadership, I raised other questions. I'll get more answers on those as we go. For now, the point is: if you've done good work on AI risk, as Workday has done, then by all means use that - and help customers think through this as well.
The type of AI architecture Workday uses is strong on explainability. 'Reasoning' brings more explainability to LLMs themselves (though I don't believe the black box factor ever goes away). Meanwhile, Workday's retrieval context for agentic workflows includes source documents (this is the case for the Performance Agent, for example). I'd like to see even more on AI observability and agentic evaluation, though that is less of an issue when agents aren't fully autonomous, and humans are approving/supervising. But given how robust some agent evaluation tools are, I'd like to see Workday bring that into the mix, at least more than I heard this year.
On we go... For now, check out the full diginomica team coverage of Workday Rising.