UK police machine learning trials go unregulated
Report says there is a lack of clear guidance on the use of algorithms
The use of machine learning by UK police forces to support decision-making is in its infancy, and there is little research examining how the use of an algorithm influences officers' decision-making in practice.
That is according to defence think tank the Royal United Services Institute (RUSI), which noted there is a limited evidence base on the efficacy and efficiency of different systems, their cost-effectiveness, their impact on individual rights and the extent to which they serve valid policing aims.
The report, titled "Machine Learning Algorithms and Police Decision-Making: Legal, Ethical and Regulatory Challenges", said that there is a lack of clear guidance and codes of practice outlining appropriate constraints governing how police forces should trial predictive algorithmic tools.
"This should be addressed as a matter of urgency to enable police forces to trial new technologies in accordance with data protection legislation, respect for human rights and administrative law principles," the report's authors said.
It added that while machine learning algorithms are currently being used for limited policing purposes, there is potential for the technology to do much more, and the lack of a regulatory and governance framework for its use is concerning.
"A new regulatory framework is needed, one which establishes minimum standards around issues such as transparency and intelligibility, the potential effects of the incorporation of an algorithm into a decision-making process, and relevant ethical issues," the report said.
It urged the creation of a formalised system of scrutiny and oversight, including an inspection role for Her Majesty's Inspectorate of Constabulary and Fire and Rescue Services, to ensure adherence to this new framework.
There are also issues around the procurement of such systems. The report said procurement contracts should "explicitly require that it be possible to retroactively deconstruct the algorithm in order to assess which factors influenced the model's predictions", along with a requirement for the supplier to provide "an expert witness who can provide details concerning the algorithm's operation if needed, for instance in an evidential context".
The report also called for a collaborative, multidisciplinary approach to address the complex issues raised by the use of machine learning algorithms for decision-making.
"At the national level, a working group consisting of members from the fields of policing, computer science, law and ethics should be tasked with sharing real-world' innovations and challenges, examining operational requirements for new algorithms within policing, with a view to setting out the relevant parameters and requirements, and considering the appropriate selection of training and test data," the report said.
It added that it is essential that the officers using machine learning technology are sufficiently trained to do so in a fair and responsible way and "are able to act upon algorithmic predictions in a way that maintains their discretion and professional judgement".
Rene Millman is a freelance writer and broadcaster who covers cybersecurity, AI, IoT, and the cloud. He also works as a contributing analyst at GigaOm and has previously worked as an analyst for Gartner covering the infrastructure market. He has made numerous television appearances to give his views and expertise on technology trends and companies that affect and shape our lives. You can follow Rene Millman on Twitter.