
Published: December 2, 2025
The report, from law reform organisation JUSTICE, warns that England and Wales risk becoming an AI “wild west” if 43 police forces continue to experiment without transparency, technical standards and legal clarity.
The deployment of poorly designed AI can embed inaccuracy or discrimination in the process of investigation, leading to unjust outcomes. For example, AI facial recognition technologies have repeatedly been found to be less accurate for darker-skinned people and women, risking wrongful arrests.
The report, which draws on evidence from countries around the world, urges the government to create a central body that can set national rules for the application of AI and help police forces innovate responsibly.
It also calls for national legislation in areas such as biometrics, rather than the current practice of individual forces writing their own rules for live facial recognition, which means people are treated differently depending on where they live.
The speed of the technological revolution and the government’s push to use AI in public services leaves just a small window for urgent reform, the report concludes.
Alex Murray OBE, the National Police Chiefs’ Council AI lead and National Crime Agency Director, who is backing the call for a new central body, said: “This is a pivotal moment for policing. If we do not act decisively and collaboratively, policing risks falling behind, leaving gaps that undermine public trust, effectiveness, and justice.
“Without considered standards, legal clarity, and robust governance, we face fragmented approaches and inconsistent safeguards – all of which will reduce trust.
“The decisions we make now will define how policing uses AI for at least a decade. We must act, because the cost of inaction is too high.”
Ellen Lefley, Senior Lawyer at JUSTICE, says: “The relentless pace of change, financial pressures on policing, plus the lack of any clear standards creates an AI wild west.
“Responsible AI in policing will not happen by accident. We must ensure public trust and fairness so that, wherever you live in England and Wales, police AI is subject to the same technical standards, and we have consistent and clear laws on when police can legally use it, and when they can’t.
“A body with the power to set basic technical rules and give the public a say could ensure new tools respect rights, support responsible innovation and build public trust.
“Meanwhile, we have identified several areas in which legal clarity is urgently needed, with biometrics such as facial recognition top of the list.”
The report recognises that AI can offer real opportunities for improving policing, analysing ever-expanding digital evidence and providing valuable assistance with tasks such as translation. But it argues that any new AI policing tool must be accurate, accountable and able to uphold the police’s legal duties, and that this will not happen if technical standards are set by the commercial interests of private AI companies.
The research gives various examples of how poorly designed or deployed police AI risks eroding public trust and causing major miscarriages of justice. For example: