UK's facial recognition rollout represents an unprecedented surveillance expansion with questionable legal foundations

Derek du Preez, August 14, 2025
Summary:
The Government's announcement of 10 new Live Facial Recognition vans marks a significant escalation in mass surveillance capabilities, but fundamental questions remain about proportionality, oversight and the legal framework governing this technology.

The UK Government's announcement this week of 10 new Live Facial Recognition (LFR) vans being deployed to seven police forces represents what civil liberties groups are calling an "unprecedented escalation" in mass surveillance technology. While Home Secretary Yvette Cooper frames the rollout as targeting "high-harm offenders," the reality is more complex and concerning than the Government's messaging suggests.

The announcement, made as part of the Government's Neighbourhood Policing Guarantee, will see Greater Manchester, West Yorkshire, Bedfordshire, Surrey, Sussex, Thames Valley and Hampshire receive new mobile units capable of scanning thousands of faces per hour against police watchlists. This brings the total number of forces with regular LFR capabilities to 10, joining the Metropolitan Police, South Wales Police and Essex Police in having this mass surveillance capability.

The scale of scanning 

The numbers involved in current deployments reveal the true scope of this surveillance expansion. South Wales Police, the UK's national lead on facial recognition, scanned 819,943 faces in the last year alone. The Metropolitan Police has scanned 144,000 faces in London deployments, resulting in just eight arrests - none for serious crimes, with many for minor drug offences and shoplifting.
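To put that disparity in concrete terms, a quick back-of-the-envelope calculation using only the figures quoted above shows just how thin the return on each scan is (this assumes the scan and arrest figures cover comparable periods, which the published numbers do not make explicit):

```python
# Rough proportionality arithmetic from the Metropolitan Police figures
# quoted above. Assumes scans and arrests cover comparable periods
# (an assumption, not stated in the source).
met_scans = 144_000   # faces scanned in London LFR deployments
met_arrests = 8       # resulting arrests, none for serious crimes

arrest_rate = met_arrests / met_scans
print(f"Arrests per face scanned: {arrest_rate:.5%}")              # ~0.00556%
print(f"Faces scanned per arrest: {met_scans // met_arrests:,}")   # 18,000
```

In other words, on these figures roughly 18,000 people have their biometric data processed for every single arrest, which is the imbalance the proportionality critique turns on.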

This disparity between the scale of scanning and meaningful results raises fundamental questions about proportionality. As Professor Karen Yeung from Birmingham Law School pointed out during parliamentary evidence sessions, this represents "a prima facie violation of the privacy of 144,000 people in public settings" for minimal law enforcement benefit.

The Government's justification centers on catching "high-harm criminals," but the reality of how watchlists are populated tells a different story. These lists can include not just terrorism suspects and serious criminals, but also vulnerable missing persons, people wanted for minor offences, and individuals subject to bail conditions. In one Metropolitan Police deployment at Oxford Circus, the watchlist contained over 9,700 images - raising serious questions about how each inclusion was justified as "lawful, necessary and proportionate."

Perhaps most concerning is the absence of specific legislation governing LFR use. Police forces rely on what the Metropolitan Police's own legal mandate describes as "a complex patchwork of other laws," primarily common law, to justify deployments. When pressed during parliamentary hearings, forces acknowledged there is no specific legislation covering LFR use - a stark contrast to the European Union's approach.

The European Union's Artificial Intelligence (AI) Act classifies most facial recognition systems as 'high-risk' AI practices and has banned real-time remote biometric identification by police in public spaces, while also prohibiting facial recognition by private sector organizations. This regulatory framework provides clear boundaries that the UK currently lacks.

The Information Commissioner's Office's (ICO) recent guidance attempts to fill some gaps, stating that LFR "does not operate in a legal vacuum" and must be "lawful, fair and proportionate". However, this reactive guidance cannot substitute for comprehensive primary legislation that properly balances law enforcement needs with civil liberties.

The human cost of algorithmic policing

The operational reality of LFR deployments reveals significant problems beyond legal uncertainties. Big Brother Watch has documented numerous cases of misidentification, particularly affecting Black men. In one incident in Romford, a 14-year-old Black teenager in school uniform was surrounded by plainclothes officers and questioned in an alleyway after being misidentified by the system.

Such incidents highlight the "chilling effect" that privacy campaigner Ed Bridges warned about when he successfully challenged South Wales Police in court. The Court of Appeal found the technology violated privacy rights, data protection laws and equality laws, yet deployments have only increased since then.

The technology's accuracy claims also deserve scrutiny. While South Wales Police boasts of "zero false positives" in recent deployments, the Metropolitan Police has had "two false alerts" over 19 deployments. More fundamentally, these statistics only measure technical matching accuracy - they don't account for the broader impact of scanning hundreds of thousands of innocent people without their consent.

The privatization concern

Beyond police use, the expansion into private sector deployment adds another layer of concern. The recent inquiry into shoplifting by the House of Lords Justice and Home Affairs Committee highlighted how retailers are increasingly using facial recognition systems supplied by private companies like Facewatch. This creates what the Committee termed "effectively privatized policing."

Big Brother Watch has identified dozens of retailers using facial recognition, including major chains like Southern Co-op, Sports Direct, and most recently Asda, which launched trials in five Greater Manchester stores. Unlike police systems, these private deployments operate with even less oversight, creating watchlists based on the discretion of security guards rather than any criminal threshold.

The broader AI context

The facial recognition expansion occurs against a backdrop of increasing AI integration across public services. The Government's enthusiasm for AI-driven solutions in policing, healthcare and education raises questions about whether adequate safeguards are being developed to keep pace with technological deployment.

The ICO's guidance emphasizes the need for Data Protection Impact Assessments (DPIAs) and warns that LFR deployments must be "strictly necessary" for law enforcement purposes. However, the retrospective nature of much regulatory guidance means police forces are essentially writing their own rules while regulators scramble to catch up.

The Government's promise of consultation on "appropriate safeguards and oversight" comes after the technology rollout, not before - a backwards approach that prioritizes deployment over democratic scrutiny. Rebecca Vincent from Big Brother Watch described this as "carte blanche to continue to roll it out unfettered" despite pending judicial reviews.

The lack of parliamentary debate on such intrusive surveillance capabilities represents a significant democratic deficit. Members of Parliament have never voted to authorize mass facial recognition surveillance, yet it is being rolled out across the country based on executive decisions and police operational judgments.

Police forces defend LFR on grounds of effectiveness in catching serious criminals. The Metropolitan Police reported 580 arrests using the technology over 12 months, including 52 registered sex offenders breaching conditions. These results sound impressive until considered against the hundreds of thousands of law-abiding citizens scanned in the process.

The fundamental question is whether this level of mass surveillance is proportionate to the results achieved. Traditional policing methods, while perhaps less technologically impressive, don't require scanning entire populations to identify specific individuals. The shift toward mass surveillance as standard practice represents a profound change in the relationship between state and citizen that deserves more scrutiny than a Government press release.

My take

The Government's consultation planned for the autumn will be crucial in determining whether the UK develops a rights-respecting framework for facial recognition or continues down the path of unfettered surveillance expansion. The contrast with European Union approaches suggests alternative models exist that balance security needs with civil liberties.

However, the pattern of consultation after deployment rather than before suggests the Government has already made its strategic choice. The challenge for civil liberties groups, parliamentarians and concerned citizens is ensuring this consultation results in meaningful constraints rather than post-hoc justification for mass surveillance.

The expansion of facial recognition represents more than just a policing tool - it's a fundamental shift toward a surveillance state where biometric identification becomes normalized. Without proper legislative frameworks, democratic oversight and meaningful safeguards, the UK risks sleepwalking into an Orwellian future where anonymity in public spaces becomes a historical curiosity.

The technology may be inevitable, but mass deployment without adequate safeguards is a choice - one that deserves far more democratic scrutiny than it has received.