Why CEOs can’t delegate AI responsibility – Wrike’s Thomas Scott on leading through AI
- Summary:
- A conversation with Thomas Scott, CEO of Wrike, raises important questions about AI governance and why leaders need to understand the trade-offs involved when handing off work to agents.
Enterprise AI adoption often feels as if it is oscillating wildly between permissive experimentation and iron-fisted governance clampdowns.
But that oscillation is not accidental.
Longstanding management models built to enforce deterministic control are now being asked to govern probabilistic agents — effectively expecting them to tolerate ambiguity they were never designed to handle.
Like most tech leaders, Thomas Scott, CEO of Wrike — which positions itself as a system of record for managing work — has been grappling with these challenges.
And in Scott’s view, the issue is not primarily a tooling problem, but a leadership one. Not because executives need to become prompt engineers, but because without direct exposure to probabilistic behaviour it is almost impossible to understand why existing management assumptions no longer hold. Leaders must experience that ambiguity for themselves; otherwise their governance responses will default to familiar patterns.
Once you start looking, the symptoms of that instability are everywhere. In research commissioned by Wrike at the end of 2025, for example, 82% of employees reported using at least one AI tool — yet fewer than a third described their organization’s AI efforts as “running smoothly”.
Effectively, enterprises are introducing systems that can interpret, prioritize and generate outcomes on the fly, while still supervising them as if they had been fully specified in advance.
Which means the answer is not tighter enforcement of existing controls, but a redesign of the management model itself.
That redesign appears to rest on three elements — executive direction grounded in practical experience, a clear understanding of where probabilistic agents can genuinely add value within the existing operating environment, and governance mechanisms built for visibility and feedback rather than pre-defined constraint.
Learn to lead
For Scott, therefore, the transformation of management needs to start with personal practice.
And to bring this point home, Scott shares an enthusiastic story about his own personal use of AI within the Wrike platform:
When you really step back, there are all sorts of places where you could very quickly reclaim time. One example is a digital chief of staff I built that scours my 1:1 logs to create the follow-up. And it's an easy-to-understand example that I've been walking around with and telling folks ‘I built this in less than an hour’.
Scott uses this example to make a broader point — that leaders must engage directly with the tools. Not to become experts, but to understand what they are delegating. He continues:
I've been advocating the fact that we're in a hands-on era. I tell the executive team they need to be doers — not subject matter experts but able to understand how the technology works and how to deploy it. Because there’s often a disconnect between executives and the individuals doing the work, in terms of what it takes to deliver.
This feels like a practical perspective, since without that personal literacy, executives will keep trying to govern probabilistic systems with deterministic instincts, defaulting to rules, restrictions… and retroactive crackdowns.
Integrate to amplify
Personal literacy, Scott argues, is necessary but insufficient. An enthusiastic top-down push for ‘more AI’ without redesigned management practices simply multiplies experimentation without the structures needed to absorb it:
I see a lot of top-down pressure where CEOs are saying ‘hey, we've got to be doing this’ which results in mass experimentation and a lot of individual usage as people scramble to come up with ideas.
In other words, when experimentation scales without a management model grounded in how tools, data and workflows fit together, an AI ‘strategy’ can quickly descend into chaos and fragmentation.
To prevent that drift, leaders need operating context — not just tool familiarity, but clarity about where probabilistic agents belong within the wider system of work. This need for context-aware leadership is something he says he continually stresses at Wrike:
At Wrike, whether you are customer facing or not, you need to have a working knowledge of how customers operate their businesses such that you can provide thought leadership. And you need to have a working knowledge of other tools — whether it's an LLM or something else. Because otherwise you're not capable of discussing trade-offs and delivery.
Context can also be a means to discipline ambition by narrowing focus. Rather than chasing speculative transformation, Scott emphasizes near-term, measurable outcomes:
One thing I really focus on is not getting lost in everything we could do — 6, 12, 24 months from now — but on delivering ROI now. With agents, customers are already measuring impact as something like a full day per week.
And while Scott promotes that discipline through the lens of value delivery, it’s also worth noting that it can serve a second purpose.
When agents are introduced at the right scale, with clear and measurable impact, they create a contained environment in which management practices can co-evolve. Leaders can observe how discretion plays out in practice, where oversight is non-negotiable, and how feedback should flow. In that sense, disciplined deployment becomes a way of progressively learning how governance itself must adapt.
Trust but verify
While Scott begins by stressing the importance of personal practice and organizational understanding, our discussion eventually turns to the deeper management implications of AI adoption — namely that agents work differently from the tools they displace.
Scott gives an example of customers switching from rules to agents within Wrike — something which provides a useful illustration of the shift:
Previously you had to build a series of automation rules, and specify each in detail. But agents are more flexible because you can just define the general principle.
I suggest that this flexibility comes at the expense of certainty — and that by relying on agents to deliver outcomes rather than pre-defined rules, the system becomes inherently less predictable.
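To make that contrast concrete, here is a minimal sketch of the two modes Scott describes, with entirely hypothetical names and conditions rather than anything drawn from Wrike itself: a deterministic rule set that must enumerate every case in advance, versus an agent that interprets a general principle and may not behave identically twice.

```python
# Deterministic automation: every trigger and action specified up front.
# Any case not anticipated here simply falls through unhandled.
AUTOMATION_RULES = [
    {"when": {"status": "overdue", "priority": "high"}, "do": "escalate_to_manager"},
    {"when": {"status": "overdue", "priority": "low"},  "do": "send_reminder"},
]

def run_rules(task: dict) -> str | None:
    """Apply the first rule whose conditions all match exactly."""
    for rule in AUTOMATION_RULES:
        if all(task.get(k) == v for k, v in rule["when"].items()):
            return rule["do"]
    return None  # unanticipated cases produce no action at all

# Agentic automation: a general principle, interpreted case by case.
AGENT_PRINCIPLE = (
    "Keep work moving: nudge owners of stalled tasks, and escalate anything "
    "that puts a customer commitment at risk."
)

def run_agent(task: dict, llm) -> str:
    """Ask a model (any chat-completion callable) to decide under the principle.
    The output is probabilistic: the same task may yield different actions."""
    prompt = f"Principle: {AGENT_PRINCIPLE}\nTask: {task}\nWhat action do you take, and why?"
    return llm(prompt)
```

The trade-off is visible in the code itself: the rules handle only what was foreseen, while the agent handles anything but guarantees nothing.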
However flexible these systems become, Scott believes that humans must ultimately retain accountability for how they function:
Even if we find ourselves in a world where AI could make a better decision, I'd still rather have the human to look at it and say, why did you do this?
And that accountability, in his view, only holds if decisions remain visible over time:
Companies have to be able to look at what's going on and put eyes on decisions. For feedback, for when something breaks, for visibility. And we provide that traceability, the visibility to see how work flows all the way through the structure.
The insistence on oversight is revealing, and points toward a deeper question: how can accountability be preserved when agents are capable of acting independently, without prior specification? Extrapolating from Scott’s logic, the answer appears to depend on decision graphs that trace work as it unfolds: recording decisions as they happen, building audit trails, and creating feedback loops that enable correction, so that accountability holds even as the system scales.
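As a sketch of what such a decision trace might look like in practice, the snippet below assumes nothing more than an append-only log; it is not a description of Wrike’s implementation, and all function and field names are hypothetical.

```python
import json
import time
import uuid

def record_decision(log_path: str, agent: str, task_id: str,
                    action: str, rationale: str, inputs: dict) -> dict:
    """Append one decision record; the log, not the agent, is the source of truth."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "agent": agent,
        "task_id": task_id,
        "action": action,          # what the agent did
        "rationale": rationale,    # the agent's stated reasoning
        "inputs": inputs,          # the context it acted on
        "review": None,            # filled in later by human feedback
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

def flag_for_review(log_path: str, record_id: str, verdict: str, note: str) -> None:
    """Close the feedback loop: attach a human judgement to a past decision."""
    # A real system of record would do an indexed update, not a full rewrite.
    with open(log_path) as f:
        records = [json.loads(line) for line in f]
    for rec in records:
        if rec["id"] == record_id:
            rec["review"] = {"verdict": verdict, "note": note}
    with open(log_path, "w") as f:
        f.writelines(json.dumps(r) + "\n" for r in records)
```

The point of the sketch is simply that the trail is written at decision time, not reconstructed after something breaks, which is what makes the later “why did you do this?” conversation possible.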
And the implication here is subtle but important, offering a potential answer to the oscillations we’ve observed in enterprise AI governance.
Existing governance models, built for deterministic outcomes, are proving ill-equipped to manage probabilistic systems at scale. Accommodating them requires moving from a world of pre-defined rules to one of post-hoc evaluation of outcomes, whether the judgements being tracked are made by humans or agents. And that accommodation, in turn, needs a system of record for work.
In that sense, Scott’s practical emphasis on personal fluency, situational awareness, and audit trails feels less like a collection of discrete observations and more like necessary components of a deeper management shift — from trying to eliminate uncertainty to building the governance infrastructure necessary to manage it.
My take
Whatever your view on AI, it is already reshaping how organizations need to think about governance. And so it feels like something CEOs need to be taking firmly in hand themselves rather than simply delegating to others.
And in this context — and given the amount of hype, over-promising, and scarcely credible agentic visions currently bombarding CEOs — Scott’s insistence that the person leading the transformation should understand it seems like common sense.
But for me the most interesting angle is the shift in management model needed to successfully absorb agents. I frequently see companies spinning their wheels over this as they struggle to reconcile their desire for certainty with the operating reality of agents — a struggle that’s also playing out in the vendor space between enterprise-incrementalists and techno-optimists.
And so it seems to me that the successful adoption of agents will require more than tinkering with existing management controls.
Agents invert the logic of management itself, shifting the entire approach from verify-then-trust to trust-then-verify. Traditional automation logic seeks to pre-encode safeguards into detailed rules; in a trust-then-verify model, agents must instead be guided by policy but evaluated by outcome.
The difficulty is that most enterprises are not built this way. Governance, compliance and board oversight are designed around deterministic control. Moving to trust-then-verify isn’t a tooling tweak — it challenges deeply embedded assumptions about the management of accountability and risk.
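To illustrate what “guided by policy but evaluated by outcome” could mean mechanically, here is one possible shape of that loop: outcomes are sampled after the fact, and the agent’s autonomy is widened or narrowed by results. The thresholds, names, and the idea of graduated autonomy are my illustrative assumptions, not anything Scott or Wrike describes.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    task_id: str
    action: str
    succeeded: bool
    policy_breach: bool  # judged after the fact, by a human or a checker

def verify_outcomes(outcomes: list[Outcome], breach_threshold: float = 0.02) -> str:
    """Post-hoc governance: adjust the agent's autonomy based on observed results,
    rather than pre-approving each action."""
    if not outcomes:
        return "hold"  # nothing to judge yet
    breach_rate = sum(o.policy_breach for o in outcomes) / len(outcomes)
    success_rate = sum(o.succeeded for o in outcomes) / len(outcomes)
    if breach_rate > breach_threshold:
        return "narrow_autonomy"   # a clampdown, but targeted and evidence-based
    if success_rate > 0.9:
        return "widen_autonomy"    # trust earned through verified outcomes
    return "hold"
```

Trivial as it is, the sketch shows where the organizational difficulty lies: someone has to own the thresholds, the review of breaches, and the decision to widen or narrow autonomy, and those are governance roles most enterprises have not yet defined.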
In Scott’s case, that inversion may feel more natural than it would to others, because Wrike already operates in the ambiguous space between open-ended creativity and structured execution. Its product is designed to keep work visible, decisions traceable, and accountability explicit, and so in that environment agents may simply feel like another participant.
In this sense, Wrike’s positioning as a “system of record for work” might point to something increasingly important in agentic environments: the need for institutional memory. If agents are to operate under the kind of trust-then-verify logic I suspect they must, then organizations will need systems capable of capturing intent, tracing decisions, and closing feedback loops at scale. Without that institutional memory, probabilistic agents risk either drifting unchecked or being forced back into rigid deterministic workflows that strip away their advantage.
For CEOs, the question, therefore, is not whether to deploy agents, but whether their organizations are structurally prepared to govern them.
Because trust-then-verify demands conscious design — with clarity about accountability, feedback, and learning. Without that, AI adoption risks oscillating between uncontrolled experimentation and reactionary clampdowns.
And if you’re the CEO, resolving that oscillation won’t be a tooling decision — it will be a governance decision.
One that you have to own.