Use of agentic AI erodes GDPR compliance as we know it. Wipro's 'privacy by design' comes into its own
- Summary:
- The EU/UK approach to data protection has implications in an agentic AI era...
In the UK and EU where data privacy is regulated, the AI shift from static tools to autonomous “interlocutors” that decompose tasks, call external services, and act on users’ behalf, means that existing compliance procedures for GDPR are creaking.
Traditional governance models were not built for non-human actors that hold evolving memories, make context-sensitive decisions, and can be manipulated by hostile content, yet businesses remain fully liable under GDPR when things go wrong. For executives facing GDPR compliance requirements on the one hand and the pressure to innovate using agentic AI on the other, the challenge is growing.
With the arrival of agentic AI, the world of data privacy compliance enters a very different scenario from the one in which it was first created. There are organisational areas of tension, as consumers expect fast, responsive services but also are aware of the harm that data misuse can cause. There is also a tension between companies wanting to innovate and deploy agents at speed.
According to Ivana Bartoletti, Global Chief Privacy and AI Governance Officer at Wipro:
Compliance then has to enable things, not just prevent them. Primary planning terms have become AI governance terms. With agentic AI we need to put controls around this, because it is introducing a new category of actor, since traditional frameworks were built for tools not action. This is not just an incremental change. This is a structural shift.
The UK and EU have recognised this and have made efforts to simplify GDPR and to improve compliance by adding automated decision-making. The tension is now more visible than ever and this is gaining huge prominence in companies. When it comes to agents the compliance playbook has not caught up to regulate systems that support the decisions that humans make.
Bartoletti provides three main examples of the problems being faced:
1 – the agent composes a task and makes micro-decisions and at every step personal data may be used. The reasoning chain is now a process, not a document. With GDPR and data protection the emphasis is on explainability but agentic systems make this very difficult.
2 – Memory – AI agents memorise things in order to be useful. They have persistent memories about context, about preferences but within GDPR the automation of context information is not permissible. Yet agentic memory is not mapped into organisational data retention policies.
3 – Prompt injections are an under-appreciated serious data protection threat. In this way a bad actor can embed instructions in a document and redirect the behaviour of an agent. This is not theoretical, it is possible, and the organisation is liable regardless of how the attack is delivered.
Can we safely outsource data privacy to a platform?
In order to grapple with all of this Bartoletti suggests we need to start experimenting with agents on safer ground, using only internal enterprise documents and not opening them up with APIs. Secondly, she thinks we must build agentic autonomy based on both human personal preferences and organisational risk mapping:
When we deploy agents using third-party platforms connected to APIs, we do not have much understanding of how they reason or behave and yet we accept this as part of our terms of service. But with GDPR we need to understand how the data is processed and with a platform we are outsourcing the data controller role to a platform, while retaining liability for data privacy.
Consequently, we need transparency which means agents that can minimise potential risks, not just a chain that nobody can understand and this is down to vendor selection. The system has to be designed to allow meaningful interventions. AI agents can be deployed to mitigate these risks based on monitoring agent properties such as autonomy, memory and reasoning. They can be trained for governance, can become a governance tool.
We can create agents that can watch for anomalous data patterns and can detect prompt injections in real-time. These things cannot be done by humans but they can be done by agents to enforce data minimisation, so that agents only use strictly necessary data, stripping personal data of context before it goes through the reasoning chain. This is privacy by design.
Crucial to the approach is the understanding that current legal and governance teams cannot map out and apply interventions because of the dynamism involved with agentic AI. With agentic AI the context is the risk, so the answer is using agentic AI as a governance tool in a dynamic way. This doesn’t happen by default it needs to architected at design stage.
This is where the Wipro Trust Stack comes in as a layered framework to embed governance into technical design rather than providing it as a policy overlay. The Wipro Trust Stack ensures:
- Legible reasoning paths: you can audit what the agent did and why, after the fact
- Bounded agency: the agent operates within defined limits and cannot exceed them without human authorisation
- Goal transparency: the agent's objectives are explicit and cannot be overridden by injected instructions
- Contestability: users and affected parties can challenge agent decisions and get a meaningful response
- Governance by design: privacy, accountability, and oversight are architectural features, not audit afterthoughts
Bartoletti‘s contention is that you need to design agents in a way that they can be trusted or what is the point of them? Keeping a human in the loop provides a false sense of security because humans can be lazy and feckless. Her parting shot is that when we are designing agentic systems, we should not design agents that “befriend” us. She explains:
You do not want to work with an agent that makes you feel good. It should not flatter you, because at a social and a business level humans need to remain in control. You need to see the friction between what you want and what the agent delivers.
My take
The direction of travel Bartoletti is outlining for GDPR compliance is interesting and currently outpaces legal requirements in the UK and EU, where the transparency of the agent’s reasoning has to be clear to the human in the loop who made decisions based on the agent’s information.
However, I do like the idea that we should be keeping a social distance from the agents we create in order to exercise a level of distrust by design. And we should also not forget that there is a big educational requirement to fulfil to ensure that humans have the level of critical thinking required to make meaningful interventions in these agentic systems.