AI in policing: International lessons and domestic solutions

This report asks a central question: what must be in place for AI in policing to be trustworthy, offer value for money, support policing goals, and comply with human rights and the fair administration of justice?

Context: AI is rapidly reshaping policing

Artificial intelligence (“AI”) is rapidly reshaping public services, including policing. The pace of innovation, the scale of private-sector investment, and the UK Government’s commitment to “mainline AI into the veins of the nation” mean AI deployment in policing is an accelerating reality.  

Although this brings significant opportunities – such as enhanced investigative capability, faster processing of digital evidence and improved risk assessment – it also carries profound risks for human rights, the rule of law, and public trust. These risks are exacerbated by the police’s powerful role in our society.  

Key lessons from home and abroad

The report recognises the opportunities that AI brings to policing, such as analysing ever-expanding volumes of digital evidence and providing valuable assistance with tasks such as translation. However, it also emphasises that new AI policing tools must be accurate, accountable, and uphold the police's legal duties. This will not happen by accident, particularly if technical standards continue to be set predominantly by reference to the commercial interests of private AI companies.

This report identifies five lessons learned from both our international research and domestic analysis:

  1. Fitness for Purpose – AI tools must work, and must work for the context in which they are used.
  2. Competent, Ethical and Lawful Use – Human oversight must be real, informed and supported.
  3. Understanding Impacts – AI tools must be evaluated both before and after deployment.
  4. Proportionate and Effective Safeguards – Clear limits, accountability routes and effective redress are essential.
  5. Public Participation – Trust requires visibility, involvement and accountability.

The need for an independent central body & other key recommendations

These lessons reveal a key structural gap in England and Wales, which lacks a central, independent mechanism to coordinate standards, innovation, oversight, research, procurement, and public engagement. Accordingly, our key recommendations are as follows:

  • The government should set up an independent central body to establish mandatory technical and governance standards, help forces test against those standards and ensure they are embedded in the tools they buy from the private sector.  
  • Because public participation is key to securing transparency and trust, and to incorporating vital community perspectives into the design and use of AI, this body should include a citizens’ panel or assembly.
  • To allow for independence and agility in a fast-moving landscape, this new body should set these standards in statutory codes of practice, as the Forensic Science Regulator does for forensic science in the criminal justice system.
  • The Home Office should legislate for biometric data and technologies, to clarify a confusing and disaggregated legal landscape. This legislation should not be limited to policing, but should also cover other public sector and private sector uses of these technologies.
  • The independent body should work collaboratively with police forces, regulators, lawyers, the public and academics to continue to identify where legal clarity is needed in the future, and recommend reforms to the Home Office.

Read and download the full report  

Click here to read and download the full report, AI in Policing: international lessons and domestic solutions.

Learn more

Learn more about our multiyear programme on AI, human rights and the law here.