MPs call for 'immediate' stop to facial recog in UK as report underlines bias risks in 'pre-crime' algos used by coppers

New report after 12 forces across England and Wales trialled technology

MPs across parties have called for an immediate "stop" to live facial recognition surveillance by the police and in public places.

The joint statement, signed by 14 MPs including David Davis, Diane Abbott, Jo Swinson, and Caroline Lucas, stated:

We hold differing views about live facial recognition surveillance, ranging from serious concerns about its incompatibility with human rights, to the potential for discriminatory impact, the lack of safeguards, the lack of an evidence base, an unproven case of necessity or proportionality, the lack of a sufficient legal basis, the lack of parliamentary consideration, and the lack of a democratic mandate.

However, all of these views lead us to the same following conclusion: We call on UK police and private companies to immediately stop using live facial recognition for public surveillance.

The call is also backed by 25 rights and technology groups including Big Brother Watch, Amnesty International and the Ada Lovelace Institute.

Such groups have warned about the increasing use of the controversial technology. The Metropolitan Police has used facial recognition surveillance 10 times across London since 2016, including twice at Notting Hill Carnival.

Facial recognition is also being used in privately owned public spaces, including, controversially, the King's Cross Estate in London; the ICO has already stuck its oar in on that subject.

It follows a report yesterday (PDF), in which British police officials cast doubt on the use of predictive policing algorithms, calling them imprecise and biased.

Security and defence think tank the Royal United Services Institute (RUSI) interviewed police representatives, academics and legal experts about the challenges of using data analytics and algorithms. Machine learning is used to map and predict areas with high crime rates, and those predictions are then used to direct officers where to patrol, a practice known as "hotspot policing".
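
To give a flavour of the approach (and not any particular force's system), hotspot scoring can be as crude as counting historical incidents per map grid square and ranking squares for extra patrols. The grid cells, counts and patrol budget below are entirely made up.

```python
# Minimal sketch of grid-based "hotspot" scoring. The incident data,
# grid cells and patrol budget are entirely hypothetical.
from collections import Counter

# Historical incidents recorded as (grid_x, grid_y) map cells.
past_incidents = [(2, 3), (2, 3), (2, 4), (5, 1), (2, 3), (7, 7), (5, 1)]

counts = Counter(past_incidents)

# Rank cells by past incident count and send extra patrols to the top N.
patrol_budget = 2
hotspots = [cell for cell, _ in counts.most_common(patrol_budget)]

print("Cells flagged for extra patrols:", hotspots)
# [(2, 3), (5, 1)]  <- purely a function of what was recorded before
```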

Out of the 43 police forces across England and Wales, only 12 have experimented with predictive policing algorithms, and only three or four are currently deploying the technology, Alexander Babuta, a research fellow in National Security Studies at RUSI and one of the authors of the report, told The Register.

Machine learning models can only pick up on patterns that appear in their training data; rare crimes leave too few examples to learn from, so the models cannot predict them reliably.
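
To see why rarity is a problem, consider a toy dataset in which the offence type of interest turns up in roughly 1 per cent of records: a model that never predicts it still looks impressively "accurate". The numbers below are invented purely for illustration.

```python
# Toy illustration of the rare-event problem. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
is_rare_crime = rng.random(n) < 0.01     # ~1% of records are the rare offence type

# A lazy "model" that always predicts the majority class (not that offence).
predictions = np.zeros(n, dtype=bool)

accuracy = (predictions == is_rare_crime).mean()
recall = predictions[is_rare_crime].mean()   # share of actual rare events caught

print(f"accuracy: {accuracy:.3f}")   # ~0.99, looks great on paper
print(f"recall:   {recall:.3f}")     # 0.000, it never spots the rare offence
```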

If there are any biases in the data, the algorithms will only serve to amplify them. For example, if a particular area is known for high rates of robberies, then sending more police to that area will potentially mean more arrests, creating a positive feedback loop. So instead of predicting future crime, the software ends up shaping future policing.

"We pile loads of resources into a certain area and it becomes a self-fulfilling prophecy, purely because there's more policing going into that area, not necessarily because of discrimination on the part of officers," said one copper.

'Human bias ... introduced into the datasets'

The algorithms are potentially even worse when they're used to predict how likely someone is to commit a crime. Some forces, such as Durham Constabulary and Avon and Somerset Constabulary, have employed such tools to assess the risk of reoffending, taking into account the "likelihood of victimisation or vulnerability, and likelihood of committing a range of specific offences".

But, again, if these models are fed biased data, certain demographics will end up being targeted. "Young black men are more likely to be stopped and searched than young white men, and that's purely down to human bias. That human bias is then introduced into the datasets, and bias is then generated in the outcomes of the application of those datasets," an official noted.

In fact, neighbourhood officers from Durham can check the local profiles of people with criminal records on their mobile devices through an app that calculates the risk of them reoffending. The results are apparently updated every day. But as one expert observed: "Predictive judgments are meaningful when applied to groups of offenders. However, at an individual level, predictions are considered by many to be imprecise."

Algorithms have to crunch through a lot of data to make accurate predictions, and a single individual's record is rarely enough. Even if a tool works well across a group, its prediction for any one person won't necessarily be accurate.
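
One way to see the gap between group-level and individual-level accuracy: a risk score can be spot-on for a cohort as a whole while its yes/no call on any given person is wrong a lot of the time. The probabilities below are invented for illustration.

```python
# Toy illustration: a well-calibrated risk score can still be poor at the
# individual level. All probabilities are invented.
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Suppose everyone in the cohort genuinely has a 30% chance of reoffending,
# and the model (correctly) scores each of them at 0.3.
risk_score = np.full(n, 0.3)
reoffended = rng.random(n) < 0.3

# Group level: predicted and observed rates match almost exactly.
print("predicted rate:", risk_score.mean())            # 0.3
print("observed rate: ", round(reoffended.mean(), 3))  # ~0.3

# Individual level: turning 0.3 into a yes/no call gets a lot of people wrong.
flagged = risk_score >= 0.5                  # hypothetical decision threshold
error_rate = (flagged != reoffended).mean()
print("individual error rate:", round(error_rate, 3))  # ~0.3, wrong 3 times in 10
```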

The RUSI report is the first of two studies into data analytics and algorithms in policing; the second, looking at possible solutions to bias in the technology, is due next year.

"This project forms part of the Centre for Data Ethics and Innovation's (CEDI) ongoing review into algorithmic bias in policing," said Babuta. "The aim is to develop a new national policy framework for police use of data analytics.

"CDEI will shortly publish draft guidance for consultation, and based on feedback provided by policing stakeholders, this guidance will then be revised and refined.

"Our final report will be published in February 2020, and will contain specific recommendations regarding what should be included in a new Code of Data Ethics for UK policing." ®
