Zero arrests, 2 correct matches, no criminals: London cops' facial recog tech slammed

False positive rate of 98% doesn't count, say police, because 'checks and balances'

Police officers patrolling Leicester Square and Piccadilly Circus in central London, March 2018. Pic: Paolo Paradiso / Shutterstock.com

London cops' facial recognition kit has only correctly identified two people to date – neither of whom was a criminal – and the UK capital's police force has made no arrests using it, figures published today revealed.

According to information released under Freedom of Information laws, the Metropolitan Police's automated facial recognition (AFR) technology has a 98 per cent false positive rate.

That figure is the highest of those given by UK police forces surveyed by the campaign group Big Brother Watch as part of a report that urges the police to stop using the tech immediately.

Forces use facial recognition in two ways: one is after the fact, cross-checking images against mugshots held in national databases; the other involves real-time scanning of people's faces in a crowd to compare against a "watch list" that is freshly drawn up for each event.

Big Brother Watch's report focused on the latter, which it said breaches human rights laws as it surveils people without their knowledge and might dissuade them from attending public events.

And, despite cops' insistence that it works, the report showed an average false positive rate – where the system "identifies" someone not on the list – of 91 per cent across the country. That doesn't mean nine out of ten people seen on camera are wrongly flagged; it means that 91 per cent of the people the system did flag turned out not to be on the watch list.
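To be clear about what that statistic measures, here is a minimal sketch – with made-up numbers, not any force's actual totals – showing that the rate is computed over the system's alerts, not over everyone scanned:

```python
# Sketch of the report's false positive rate, using hypothetical numbers.
# The rate is the share of system alerts that were wrong, not the share
# of all faces scanned that were wrongly flagged.

def false_positive_rate(false_alerts: int, total_alerts: int) -> float:
    """Per cent of alerts that matched no one on the watch list."""
    return 100 * false_alerts / total_alerts

# Hypothetical deployment: 500,000 faces scanned, 100 alerts, 98 of them wrong.
faces_scanned = 500_000
total_alerts = 100
false_alerts = 98

print(false_positive_rate(false_alerts, total_alerts))  # 98.0 per cent of alerts were false
print(100 * false_alerts / faces_scanned)               # a far smaller share of all faces scanned
```

Both figures describe the same hypothetical deployment; which denominator gets quoted changes how alarming – or reassuring – the system sounds.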

The Met has the highest rate, at 98 per cent, with 35 false positives recorded in a single day at the 2017 Notting Hill Carnival.

However, the Met Police claimed this figure is misleading because there is human intervention after the system flags up a match.

"We do not consider these as false positive matches because additional checks and balances are in place to confirm identification following system alerts," a spokesperson told The Register.

The system, though, hasn't had much success in positive identifications either: the report showed there have been just two accurate matches, and neither person was a criminal.

The first was at Notting Hill, but the person identified was no longer wanted for arrest because the information used to generate the watch list was out of date.

The second such identification took place during last year's Remembrance Sunday event, but the person matched was a so-called "fixated individual" – these are people known to frequently contact public figures – who was not a criminal and not wanted for arrest.

Typically people on this list have mental health issues, and Big Brother Watch expressed concern that the police said there had not been prior consultation with mental health professionals about cross-matching against people in this database.

The group described this as a "chilling example of function creep" and an example of the dangerous effect it could have on the rights of marginalised people.

It also raised concerns about racial bias in the kit, criticising the Met Police for saying it would not record ethnicity figures for the individuals identified, whether correctly or not.

As a result, it said, "any demographic disproportionality in this hi-tech policing will remain unaccountable and hidden from public view".

This is compounded by the fact that the commercial software used by the Met – and also South Wales Police (SWP) – has yet to be tested for demographic accuracy biases.

"We have been extremely disappointed to encounter resistance from the police in England and Wales to the idea that such testing is important or necessary," Big Brother Watch said in the report.

SWP – which has used AFR at 18 public places since it was first introduced in May 2017 – has fared only slightly better. Its false positive rate is 91 per cent, and the matches led to 15 arrests – equivalent to 0.005 per cent of matches.

For example, the Welsh force's AFR scans of crowds during the UEFA Champions League week in Cardiff in 2017 produced poor results.

The SWP said that false positives were to be expected while the technology develops, but that the accuracy was improving, and added that no one had been arrested after a false match – again because of human intervention.

"Officers can quickly establish if the person has been correctly or incorrectly matched by traditional policing methods, either by looking at the person or through a brief conversation," a spokesperson said.

"If an incorrect match has been made, officers will explain to the individual what has happened and invite them to see the equipment, along with providing them with a Fair Processing Notice."

Underlying the concerns about the poor accuracy of the kit are complaints about a lack of clear oversight – an issue that has been raised by a number of activists, politicians and independent commissioners in related areas.

Government minister Susan Williams – who once described the use of AFR as an "operational" decision for the police – said earlier this year that the government is to create a board comprising the information, biometrics and surveillance camera commissioners to oversee the tech.

Further details are expected in the long-awaited biometrics strategy, which is slated to appear in June.

Big Brother Watch also reiterated its concerns about the mass storage of custody images of innocent people on the Police National Database, which has more than 12.5 million photos on it that can be scanned biometrically.

Despite a 2012 High Court ruling that said keeping images of presumed innocent people on file was unlawful, the government has said it isn't possible to automate removal. This means that they remain on the system unless a person asks for them to be removed.

In March, Williams said that because images can only be deleted manually, weeding out innocent people "will have significant costs and be difficult to justify given the off-setting reductions forces would be required to find to fund it".

The group had little patience with this, stating in the report that the government should provide funding for administrative staff to deal with the problem: one person per force, employed for a full year at £35,000, would come to a total of roughly £1.5m, it said.
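The report's figure is simple arithmetic. Assuming the 43 territorial police forces in England and Wales – an assumption here, since the report's exact head count isn't quoted – one £35,000 post per force does land close to £1.5m:

```python
# Rough check of Big Brother Watch's costing: one administrator per force,
# for one year, at £35,000. The figure of 43 is the number of territorial
# forces in England and Wales, assumed here rather than quoted from the report.
forces = 43
salary = 35_000
total = forces * salary
print(f"£{total:,}")  # £1,505,000 – roughly the £1.5m the report cites
```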

"'Costs' are not an acceptable reason for the British Government not to comply with the law," it said, adding that given the Home Office had forked out £2.6m to SWP for its AFR kit, costs were also "hardly a convincing reason".

Big Brother Watch is launching its campaign against AFR today in Parliament. ®


