London's top cop dismisses 'highly inaccurate or ill informed' facial-recognition critics, possibly ironically

Appears she ignored report that concluded the tech is highly inaccurate

The head of London’s Metropolitan Police, Cressida Dick, has angered critics of facial recognition technology by accusing them of being “highly inaccurate or highly ill-informed.”

Those critics, in turn, have accused her of being ill informed by ignoring an independent report that reveals the technology itself is highly inaccurate: working in just 19 per cent of cases.

Dick gave the annual address at security think tank the Royal United Services Institute (RUSI) on Monday, and spent most of the speech arguing that British plod need to be allowed to use modern technology to combat crime.

But while pushing a message that she welcomed public debate, Dick attacked those who had brought about the debate over facial recognition in the first place: organizations including Liberty and Big Brother Watch.

"Right now the loudest voices in the debate seem to be the critics. Sometimes highly inaccurate or highly ill informed," she told those assembled. "I would say it is for critics to justify to the victims of those crimes why police should not be allowed to use tech lawfully and proportionally to catch criminals." You can watch her here:

Youtube Video

In immediate responses, those critics accused Dick of hypocrisy. “It’s unhelpful for the Met to reduce a serious debate on facial recognition to unfounded accusations of ‘fake news’,” tweeted Big Brother Watch. “Dick would do better to acknowledge and engage with the real, serious concerns - including those in the damning independent report that she ignored.”

Liberty responded similarly: “Fact: Met started using facial recognition after ignoring its own review of two-year trial that said its use of the tech didn't respect human rights. Another fact: scaremongering and deriding criticisms instead of engaging shows how flimsy their basis for using it really is.”

And it's true

Those accusations are true. As we have reported in the past, the Met pilot programs for live facial recognition have been a complete failure. The first trial at the Notting Hill Carnival in 2016 resulted in not a single person being identified. The next year, the trial was repeated despite a number of groups calling for it to be banned.

Again, it was a bust: no one was identified but 35 false positives were recorded. Despite that, the UK government moved forward with a £4.6m ($5.9m) contract for facial recognition software.

Then, last year, an independent report by Professor Fussey and Dr Murray of the University of Essex - based on access the researchers had been given to the final six “trials” run by the cops - noted that the system had made just eight correct matches out of the 42 it suggested in total.

They also concluded that the trials were probably illegal since they had not accounted for human rights compliance. Murray said: "This report raises significant concerns regarding the human rights law compliance of the trials… The legal basis for the trials was unclear and is unlikely to satisfy the 'in accordance with the law' test established by human rights law.”

They called for all live trials of facial recognition to be stopped until a series of issues were resolved, including an appropriate level of public scrutiny and debate on a national level.

In addition, fears over how the technology will be used by police on the ground were given serious credence when a man hid his face from a trial system being used in Romford, in East London. He was pulled aside by the police, who decided that such behavior was suspicious and fined him £90 ($115) for "disorderly behavior."

Guilty until proven innocent

A film crew happened to be filming at the time and spoke to the man afterwards. “I said, ‘I don’t want me face showing on anything’,” he told the film crew. “If I want to cover me face, I’ll cover me face. It’s not for them to tell me not to cover me face.”

In response to its failed trials, a report claiming the system was illegal, and a man being held and fined for refusing to be filmed, the Met - with backing from the Home Secretary - formally approved its facial recognition system earlier this month.

As for Dick’s speech this morning, she noted that it was not her position to decide “where the boundary lies between security and privacy” before giving her opinion “as a member of the public.” That opinion was, in her own words, frank.

“In an age of Twitter and Instagram and Facebook, concern about my image and that of my fellow law-abiding citizens passing through LFR [live facial recognition] and not being stored, feels much, much smaller than my and the public’s vital expectation to be kept safe from a knife through the chest.”

She also listed various “myths” surrounding facial recognition: that the Met stores the images it takes of people (it only stores pictures of people who are identified as potential suspects by the software, and deletes that data within 31 days unless it is needed as evidence); that the software makes decisions - Dick says a human copper always makes the final decision; that it will be used for all kinds of crime - Dick says it will only be used for serious crime; that the software has intrinsic biases - Dick claims that “the tech we are deploying is proven not to have an ethnic bias”; and that the Met is being secretive about the technology - Dick says the Met has been “completely open and transparent about it.”

Looking for transparency

Several of those arguments are also suspect. The database used for comparing people walking past the cameras to pictures of suspected criminals is thought to contain 12.5 million faces - a far cry from the claim that only “serious crime” is considered by the system.

And despite her claims that the Met has been “completely open and transparent” about the trials, the reality is that it took a Freedom of Information request to get statistics from the Met over quite how ineffective its systems are.

Despite police insistence that the system works, in reality it has an average false positive rate – where the system "identifies" someone not on the list – of 91 per cent across the country. That means that 91 per cent of those flagged by the system as possible criminals were innocent and had been misidentified.
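To make those percentages concrete, here is a rough, illustrative bit of arithmetic - ours, not the Met's or the Essex researchers' methodology - showing how a false positive rate in this sense follows from trial figures such as the eight correct matches out of 42 flagged mentioned above.

```python
# Illustrative arithmetic only, using the figures cited in this article:
# the Essex report's 8 correct matches out of 42 flagged, and the reported
# 91 per cent average false positive rate across the country.

def false_positive_rate(flagged: int, correct: int) -> float:
    """Share of people flagged by the system who were not actually on the
    watchlist, i.e. were wrongly 'identified'."""
    return (flagged - correct) / flagged

# Essex trials: 8 of the 42 flags were right, so roughly 81% were wrong.
print(f"Essex trials: {false_positive_rate(42, 8):.0%} false positives")

# A 91% national average implies roughly nine wrong flags for every correct one.
print(f"National average: {false_positive_rate(100, 9):.0%} false positives")
```

The 19 per cent figure quoted at the top of this piece is simply the other side of the same sums: eight correct matches out of 42 is roughly 19 per cent.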

The Met gets around these shortcomings by arguing - as Dick did today - that a human police officer makes the final decision over whether to approach someone by comparing what the system had captured with a photo in the database. So it's the police officer at fault, not the computer instructing them.

Instead, Dick focused on the limited success of the system. “The Met’s trials of LFR resulted in the arrest of eight wanted individuals whom we would otherwise have been very unlikely to identify,” she argued. “Without LFR, those eight individuals who were wanted for having caused harm would probably not have been arrested.”

She then claimed to be open to serious concerns about the system. “I am not of course arguing against criticism per se. As John Stuart Mill advised, truth emerges by exposing ideas and arguments to opposition and counterclaims or open debate. Ideas that face no competitors lack a way of proving their worth.”

As a result she said she had read recent reports “in preparation for today’s speech” that included one by Lord Evans on AI and Public Standards, and research by RUSI published today.

So we're all agreed?

It just so happened that the authors of both of those reports joined Dick for a short panel discussion after her speech. Lord Evans - who, it should be noted, used to head up Britain’s internal security service, MI5 - argued that his report’s “overall conclusion” was that there was “very positive potential for tech” but that there are “holes and vulnerabilities” particularly around “openness, accountability and objectivity.”

The author of the RUSI report, Alexander Babuta, also identified “gaps” - most notably the “absence of a national framework” - and argued that there needed to be an “impact assessment conducted” prior to the use of modern technologies like artificial intelligence or facial recognition.

But he also argued that the police could not be expected to wait until legislation is passed before trying out such technologies - it “can’t wait,” he said - and that there needed to be a policy framework and national guidance as soon as possible.

What Babuta did not note - but his report does - is that the issue of facial recognition was specifically not included in his report’s remit. “Biometric technology, including live facial recognition, DNA analysis and fingerprint matching, are outside the direct scope of this study, as are covert surveillance capabilities and digital forensics technology, such as mobile phone data extraction and computer forensics,” it reads.

Faced with two powerful establishment figures on the same stage, Babuta also downplayed the fact that his report noted “the lack of an evidence base, poor data quality and insufficient skills and expertise as three major barriers to successful implementation.”

It goes on: “In particular, the development of policing algorithms is often not underpinned by a robust empirical evidence base regarding their claimed benefits, scientific validity or cost effectiveness. A clear business case is therefore often absent.”

In other words, there is no evidence that the systems actually work.

Oh say you can't see, by the dawn's early light...

As for one of the biggest concerns of privacy advocates and civil liberties groups - that facial recognition technology has intrinsic biases against certain groups, particularly minorities - that serious issue was brushed aside.

Dick claimed that “the tech we are deploying is proven not to have an ethnic bias.” She went on: “We know there are some cheap technologies that do have bias, but as I have said, ours doesn’t. Currently, the only bias in it is that it shows it is slightly harder to identify a wanted woman than a wanted man.”

It’s not clear where that “proof” came from, but Babuta effectively dismissed the entire issue of racial discrimination as an American problem. Racism, it seems, probably doesn't happen with the British police.

“While predictive policing tools have received much criticism for being ‘racially biased’, with claims that they over-predict individuals from certain minority groups, there is a lack of sufficient evidence to assess the extent to which bias in police use of algorithms actually occurs in practice in England and Wales, and whether this results in unlawful discrimination,” the report states - an argument he also made on stage.

“Most studies purporting to demonstrate racial bias in police algorithms are based on analysis conducted in the US, and it is unclear whether these concerns are transferable to the UK context.”

It’s not clear from this whether those pushing for facial recognition technology in the UK believe that black people look different in Britain from the States, or whether they just believe that American cops are more racist.

But dismissing a wealth of evidence that such software can bring with it a substantial risk of racial bias as not something Britain has to worry about was somewhat rich coming from a panel that did not include any critics, or indeed anyone from a minority.

The big public debate promised by the Met and the UK government on facial recognition technology will seemingly be limited to those that already agree with it. ®
