
Here's a thing - what if shadow AI is actually telling us something useful?

George Lawton, March 31, 2026
Summary:
AvePoint’s Dana Louise Simberkoff argues that shadow AI could be viewed as a cultural stress test rather than a technology failure.


If the experience with shadow IT is any guide, enterprises will soon face a rising tide of shadow AI. The recent enthusiasm and hype around new agentic tools like OpenClaw seem likely to make this all the more pressing, especially since these new LLM (Large Language Model) and agentic tools promise to solve problems that were not practical with traditional shadow IT services. They also raise both the likelihood of risks and the severity of the consequences.

It bears asking how and why shadow IT, much less shadow AI, might show up on an enterprise’s doorstep in the first place. In theory, at least, if IT, security, compliance, and risk management teams were on the ball, business users would have access to all the capabilities they need to do their jobs without risk. Dana Louise Simberkoff, Chief Risk, Privacy and Information Security Officer at AvePoint, argues:

Employees aren't using unapproved AI tools because they're reckless; they're doing it because the business is moving faster than governance. The difference from shadow IT is that AI doesn't just move data, it makes decisions and acts autonomously. When companies respond with bans, people don't stop using AI; they stop telling you. The organizations managing shadow AI well stop treating employees as 'users' of AI and start treating them as stewards and joint stakeholders. Compliance can mandate ownership. Only culture makes people care about it.

Operating model debt

Simberkoff's framing connects to the notion of operating model debt, which Ian Thomas recently raised. This debt arises from the cumulative cost of doubling down on control and automation rather than redesigning the organization to support distributed judgment. The impact of this debt is now compounding because agents are fundamentally incompatible with organizations built to limit the agency of employees, and, by extension, the agency of the agents they steward. 

One practical example is return-to-office mandates, where organizations implicitly signal a lack of trust. This trust deficit does not disappear when the worker is a machine. But the conventional response of tighter controls and new processes only ends up burdening the employees who want to move faster.

So, how are security and GRC (Governance, Risk, and Compliance) teams learning to shift from being compliance gatekeepers to cultural architects? And how can executives tell the difference between performing the language of trust versus actually creating the incentive structures that reward distributed judgment? Simberkoff says:

The shift starts by letting go of the idea that controls exist to slow people down. Good controls are like brakes on a car. Brakes were built to allow you to drive fast without losing control. Security and GRC teams become cultural architects when they design systems where the safe path is the easiest path: clear data boundaries, explicit permissions, and a deliberately small blast radius. In organizations where distributed judgment is real, people who flag risk are protected, not sidelined. Near misses are treated as learning signals, not failures. Trust becomes visible through budgets, ownership, tooling, and repeated behavior, not through rhetoric.
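These are cultural design principles rather than a code recipe, but a tiny sketch can make "explicit permissions and a deliberately small blast radius" concrete. What follows is a hypothetical default-deny guardrail for agent actions; the tool names, scopes, and policy structure are invented purely for illustration and are not drawn from AvePoint or any particular product:

```python
# Hypothetical default-deny guardrail: an agent may only call tools that
# are explicitly permitted, and only against explicitly scoped resources.
# All names here are invented for illustration.
ALLOWED_ACTIONS = {
    "read_document": {"scope": "team-sandbox"},   # small blast radius
    "summarize_text": {"scope": "team-sandbox"},
}

def is_permitted(action: str, scope: str) -> bool:
    """Return True only if the action is on the allowlist and stays
    inside its declared data boundary. Everything else is denied."""
    policy = ALLOWED_ACTIONS.get(action)
    return policy is not None and policy["scope"] == scope

# The safe path is the easy path: anything off the allowlist fails fast.
assert is_permitted("read_document", "team-sandbox")
assert not is_permitted("delete_record", "production")
```

The point of the sketch is the default: employees and agents get a clearly marked safe lane, and anything outside it is denied by design rather than by after-the-fact policing.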

Navigating fear of change

An adjacent challenge lies in addressing the fear of change that can arise when new agentic systems threaten to upend existing roles, responsibilities, and the agency of people. For example, how do you find ways to enroll people who are legitimately afraid of being replaced by agents, seeing their budgets cut, or having to learn a new process or an entirely new role within the company?

Simberkoff draws on AvePoint's own experience piloting Microsoft Copilot:

We took deliberate steps to select the right participants. We didn't need tech evangelists or AI enthusiasts; we needed real users in the trenches, from sales teams juggling customer engagements to legal and finance teams buried in compliance tasks. These were the people most likely to push Copilot to its limits, expose its flaws, and highlight its strengths. Part of what makes us effective is our humanity, something we have not taught machines. 

Even in my own role as a privacy and security officer, I am constantly required to exercise judgment and ethics, drawing on both analytical and intuitive thinking. I don't see a machine doing that. If we treat AI as a junior assistant and encourage people to partner with it while remaining accountable, I believe we can expect a productive and positive future. 

My take 

My gut sense is that shadow AI is largely a reactive response to the guardrails and crippled tooling imposed by compliance processes on end users. What if enterprises flipped that script by leading with employee empowerment and treating GRC as an essential component of it rather than a constraint?

For example, why not invite everyone curious about agents to collaborate on a competition to see who can identify the most unsafe practices in a sandbox that mirrors the enterprise environment? Something like the service virtualization approach Computer Associates was using for a while, which allowed software developers, integration specialists, and security teams to troubleshoot complex enterprise software before deployment. Think of it as a wind tunnel for the enterprise, but stocked with mock data that's okay if it gets lost or stolen.

Then you let people compete to find the most bugs, perhaps rated by severity. The first person to find a particular kind of issue would earn points: say, one point for a simple problem and three for a high-severity one. Everyone behind them who accidentally stumbles upon the same security risk would lose a point. At the end of a week of collaborating to discover better ways to use the agents and break things, there'd be a winner. These would be the champions of AI who are simultaneously championing use cases and raising awareness of the problems that could percolate through their departments.
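To make that arithmetic concrete, here is a minimal sketch of how the first-finder scoring might tally up. The severity labels, point values, and example findings are all hypothetical, invented purely to illustrate the scheme described above:

```python
from collections import defaultdict

# Hypothetical point values by severity; not from any real program.
POINTS = {"low": 1, "medium": 2, "high": 3}

def score_competition(findings):
    """Score (participant, issue_id, severity) tuples in the order they
    were reported. The first reporter of an issue earns its severity
    points; later duplicate reporters lose a point each."""
    scores = defaultdict(int)
    first_finder = {}  # issue_id -> participant who reported it first
    for participant, issue_id, severity in findings:
        if issue_id not in first_finder:
            first_finder[issue_id] = participant
            scores[participant] += POINTS[severity]
        else:
            scores[participant] -= 1  # duplicate of a known issue
    return dict(scores)

# Example: two participants probing a sandboxed agent
findings = [
    ("alice", "prompt-injection-via-email", "high"),
    ("bob", "over-broad-file-access", "low"),
    ("bob", "prompt-injection-via-email", "high"),  # duplicate, -1
]
print(score_competition(findings))  # {'alice': 3, 'bob': 0}
```

The duplicate penalty is what pushes people to hunt for new failure modes rather than pile onto known ones, which is exactly the awareness-spreading behavior the competition is meant to reward.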
 
