Is Live Facial Recognition technology necessary and proportionate? Police forces defend scanning hundreds of thousands of faces a year

By Derek du Preez

December 13, 2023

Police forces in the UK are accelerating their use of Live Facial Recognition (LFR) technology, claiming that in certain scenarios its use has resulted in a number of arrests of people wanted for serious crimes. But it’s not just those wanted for serious crime: the forces also claim that the AI-enabled facial recognition software is helping to identify people who are vulnerable or missing. The technology is helping to support safer events, and is reducing the risk of terrorism, knife crime, and violence against women and girls - that is the claim, at least. 

These outcomes are very emotive and the argument typically goes that any sort of privacy invasion - or sense of unease about having your face scanned and checked against a watchlist for a match - is less important than the safety benefits society will see.

According to privacy campaigners, two police forces in particular - South Wales and London’s Metropolitan Police - are spearheading the adoption of the technology, and have been doing so since 2016/17. This is despite the fact that South Wales Police was taken to court by Ed Bridges, a privacy campaigner, and lost the case in the Court of Appeal - with the Court finding that the force’s use of the software violated Bridges’ human rights. 

Hundreds of thousands of faces are still being scanned every year and checked against watchlists that are populated by individual forces. How these watchlists are populated is also of concern, given that it’s down to the forces involved to decide what qualifies someone for inclusion. 

Given this context, it was with interest that both South Wales and the Met appeared before the Justice and Home Affairs Committee this week to explain why the use of LFR is so beneficial and why the protections that have been put in place are sufficient. A Professor of Law and Ethics was also in attendance at the hearing, however, and provided some useful points of contention that should be considered. 

Justification 

Providing evidence to the Committee was Mark Travis, Senior Responsible Officer for Facial Recognition Technology at South Wales Police. Travis said that the force has a “very clear focus” in terms of deployment of LFR - to tackle the most serious crime and protect the most vulnerable. He explained: 

We have a benchmark, where we determine the necessity and proportionality of the use of the technology. During the summer of this year, we had three large scale events that took place at the Principality Stadium in Cardiff. 

That's a venue that holds somewhere in the region of 60,000 people and we use facial recognition technology to identify people who may be coming to that venue with intent of committing crime. 

That could be serious crimes such as terrorism, it could be crimes against vulnerable people, against young girls and women. And then wider into kind of acquisitive crime, so people who are wanted by the police for having committed crimes, and people who may be involved in acquisitive crime.

We apply a serious threshold. And the decision to deploy the equipment is made by an officer of the rank of Assistant Chief Constable or above. So we're talking about a senior officer within the service to make sure that the benchmark for necessity and proportionality has been met.

Travis said that when the technology is deployed on the ground, the force has a person present who is a specialist in the use of the equipment - 20 officers in total are trained to use it. When they determine a match has been made against someone on the watchlist, that information is passed to an officer who will engage with that member of the public in a “very calm, relaxed way”. Whether or not you trust police forces to engage calmly with suspects of ‘serious crime’, when identified using automated tooling, is up for debate. But Travis is adamant that the technology is highly reliable. He said: 

We have deployed our facial recognition system fourteen times in the last year. And in that time, we have identified a small number of people of interest and we've had no false positives. So the accuracy of the system for us is small in the number of deployments, it's small in intrusion and it's high quality in terms of its accuracy.

‘Small in intrusion’ is an interesting phrase, given that the benchmark for intrusion is based on a member of the police force stopping you as a suspect - rather than, say, having your face scanned by third-party technology in the first place. But that, again, is up for debate. 

Supporting South Wales’ point was Lindsey Chiswick, Director of Intelligence at the Metropolitan Police Service, who provided an example of how the Met used the technology in Croydon, a borough in London, last Thursday evening. She said: 

When we deployed in Croydon, it was to tackle serious and violent crime. Croydon this year has the highest murder rate. It's got the highest number of knife crime related offenses and it's got a really blossoming night time economy, which brings problems like violence against women and girls. 

So that was what they call the ‘intelligence case’ that sits behind the reason why they want to deploy - it's very much linked to that intelligence case. 

We then go to the watch list. The watch list is pulled together on the back of that intelligence case. And on this particular occasion, we were looking at crimes such as violence, other serious offenses, and people wanted on warrant. The watch list is pulled together not based on an individual, it’s based on those crime types. It's then taken for approval to an authorizing officer in the Met; the authorizing officer is superintendent level or above. 

And in terms of results from Thursday night, there were seven arrests. An individual wanted on suspicion of rape and burglary, someone wanted for robbery, someone identified for failing to pay for a road traffic offence, criminal damage and possession of Class A drugs, suspicion of fraud, and suspicion of drug supply and robbery. 

So, seven significant offenses and people found who are wanted by the police, who we wouldn't have otherwise been able to find without that technology.

Again, the emotive language being used here is noteworthy. Whether or not a road traffic offence should be considered a ‘serious crime’ in the same way that rape is - when suspects of both appear on the same watchlist - is a fair question that should be raised. But more on that later. 

The legal mandate

Both forces said that there are a number of pieces of legislation and a number of frameworks that support the legal use of LFR and guide them in how to deploy it in practice. The Met’s Chiswick said: 

On the Met’s website, we've published our legal mandate. The legal mandate takes common law as the primary law. Underneath that there is a complex patchwork of other laws that we pay attention to when we deploy live facial recognition. 

That's underpinned now by the authorized professional practice from the College of Policing. And we are overseen by a number of bodies and commissioners who ensure that we are acting as we should do, according to that professional practice, and according to the legal mandate and policy that we've published online on our website.

However, when pushed on whether or not there is specific legislation that covers the use of LFR, Chiswick added: 

No, there isn't, but looking at Bridges and the Bridges appeal judgments, they found that common law was sufficient to be able to deploy that technology. And that's what we're following. 

And that's what we're building the policy and the legal mandate and standard operating procedures, which go into more detail about how we deploy. That's what we're building that case on.

South Wales Police revealed that in the last year it has scanned 819,943 faces using the technology, but added that it has had zero errors. The Met Police hasn’t had as high an accuracy rate, but insisted that the technology is getting better. Chiswick said: 

We're on our third algorithm update from the provider at the moment, and certainly the number of false alerts has gone right down. The Met has not been as successful as South Wales when it comes to false alerts. Over 19 deployments, with an awful lot of faces scanned, we've had 26 significant arrests. We've had two false alerts. That’s well below what the National Physical Laboratory put forward as what to expect. It’s extraordinarily accurate.
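To put these accuracy claims in perspective, here is a minimal back-of-the-envelope sketch - my own illustration, not evidence from the hearing - of how even a very low per-scan false-alert rate translates into wrongly flagged people at the volumes being discussed. The scan volume is South Wales’ reported annual figure; the 1-in-100,000 false-alert rate is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope sketch (illustrative only, not from the hearing).
# The scan volume is South Wales' reported annual figure; the per-scan
# false-alert rate is a hypothetical assumption for illustration.

def expected_false_alerts(faces_scanned: int, rate_per_scan: float) -> float:
    """Expected number of false alerts at a given per-scan error rate."""
    return faces_scanned * rate_per_scan

faces = 819_943             # South Wales' reported annual scan volume
assumed_rate = 1 / 100_000  # hypothetical false-alert rate per scan

print(f"Expected false alerts: {expected_false_alerts(faces, assumed_rate):.1f}")
# ~8.2 people wrongly flagged per year, even at this very low assumed rate.
```

In other words, at this scale the difference between ‘zero errors’ and a seemingly negligible error rate is several members of the public stopped by mistake each year.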

Concerns raised

Providing evidence at the session was also Professor Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at Birmingham Law School and School of Computer Science. Yeung’s points are well worth considering. 

She prefaced her criticism by saying that she believes that the police forces are trying very hard to act responsibly in the use of LFR, but that important questions need to be asked about the legal justification for its use - particularly in terms of how the software is used in an operational context. Firstly, Professor Yeung said: 

What I want to focus on is the question of operational effectiveness. Any time you think about buying a new car or a new piece of kit, will it actually deliver the promised benefit in your specific, contextual circumstances? We’re told all sorts of amazing things about what a new piece of kit might do, but what I really need to know is: is it going to deliver, in practice, the promised benefits?

This is where we need to keep in mind that there's a world of difference between accurate functional performance of matching software, in a laboratory stable setting, and then in a real world setting. 

There's every reason to believe that the technology is getting better, in terms of accuracy. I can't deny that the advances in computer science are traveling at a very great pace. However, there's still a massive operational challenge of converting a match alert into a lawful apprehension of a person who is wanted for genuine reasons that match the test of legality. 

To do that, Yeung said, you have to have police officers on the ground who are capable of intervening in complex, dynamic settings, to lawfully apprehend a ‘match’ made by the software. She added that an alert by the system does not, in and of itself, satisfy the common law test of reasonable suspicion. That has to be gained by the officer on the ground, speaking to the person involved. She continued: 

So when a police officer stops a person in the street, they do so on the basis of voluntary cooperation or that person producing identification, because the mere fact of the facial recognition alert match is not enough, in my view, to satisfy the reasonable suspicion test. 

This hyper-automated version of what is effectively ‘stop and search’ could have implications for trust between the community and the state - implications that are worth considering. Yeung went on to say: 

If we look at the latest data from the 2022 London Met usage, throughout the year, we're told 144,000 faces were scanned. That is a prima facie violation of the privacy of 144,000 people in public settings in London, for which they made eight arrests - none of which were for serious crimes; many of them were for small drug offenses and shoplifting. 

So there is a divergence between the claims that they only put pictures of those wanted for serious crimes on the watch list, and the fact that in the Oxford Circus deployment alone there were over 9,700 images on that watch list. I'd quite like to know how each of those 9,700 images was justified as lawful, necessary and proportionate, to put those faces on the list. 
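As a rough illustration of the proportionality arithmetic behind that point - my own calculation, using only the figures Yeung quoted - the 2022 numbers imply the following yield per face scanned:

```python
# Rough proportionality arithmetic (my illustration, using only the
# figures Yeung quoted for the Met's 2022 deployments).

faces_scanned = 144_000  # faces scanned across the Met's 2022 deployments
arrests = 8              # resulting arrests, per Yeung's evidence

print(f"Arrest yield per scan: {arrests / faces_scanned:.6f}")
print(f"i.e. one arrest per {faces_scanned // arrests:,} faces scanned")
# One arrest per 18,000 faces scanned - and, per Yeung, none of those
# arrests were for serious crimes.
```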

All very valid points of concern. Yeung said that these are questions on which the police are not being given enough guidance, because the law is actually very difficult to apply in this context - adding that human rights expertise is needed to apply it meaningfully. She added: 

The police are really going to struggle in making those lawful and proportionate decisions in specific contexts. So I don't think there is enough guidance. I don't think it's clear enough. And I think we do need a legislative framework that makes that much more straightforward. 

South Wales’ Travis agreed with Yeung that a match through the facial recognition software does not meet the threshold to arrest an individual, but reiterated his point about police engaging with members of the public in a relaxed way, trying to identify whether or not they pose a risk to the public or themselves. He added: 

I am personally of the view that the preventative and the operational benefits of the system do add value, save money and keep people safe. I would also agree that it's a complex area of law, and that the application of this requires significant thought and proportionate use. 

The Met’s Chiswick agreed and added: 

There's a lot of wanted people in London. I think there’s a balance here between security and privacy - and for me, I think it's absolutely fair.

The watchlist

Professor Yeung also raised serious concerns about how the police forces’ watchlists are generated. She probed the two forces in attendance about how those lists are put together, and about whether the officers engaging with the public on the ground are told, at that moment in time, whether a match relates to a terrorism suspect or to someone who is vulnerable - how they engage with that person could vary widely depending on that knowledge. 

Travis said: 

We set criteria within the organization that are agreed at a senior officer level, for each event. And then those categories will be drawn from our systems by classification, to populate the watch list. 

So what you haven't got is an individual officer looking at each individual person to say ‘it's this person, it's that person’. We are looking at groups of people by category, so that we can be confident in terms of the nature of the reason they've been brought into the list. 

For example, a missing person - a high risk missing person by age or by vulnerability - is something we can classify readily. It will be the same in relation to issues like counter terrorism suspects. We would use that group of people and that will be drawn together. The watch list will be considered for each deployment, to make sure that the categories are deemed necessary. 

So, effectively marking your own homework. Chiswick agreed that the Met takes a similar approach to generating its watchlists. However, as Professor Yeung countered: 

There seems to be a bit of a mismatch between claims that the watch list is only populated by those who are wanted for serious crimes…that's the official statement that the Met Police have put out. 

And now we're being told that, in fact, the category of people is more variable. The thing that troubles me the most is that in a liberal democratic society, it's essential that every use of the coercive power of the state against an individual is justified. And the fact that we're categorizing people, but we're not actually evaluating each specific person, troubles me deeply.

My take

I don’t see the use of LFR stopping anytime soon. But the legal justification for its use at the scale it’s being deployed, and for the cases it’s being used to cover, is of serious concern. Hundreds of thousands of people’s faces being scanned against watchlists generated by forces, for reasons that the forces themselves get to decide are ‘reasonable’, feels questionable. Biometric data is highly sensitive, and the balance between safety and privacy should be decided by regulation, rather than by police forces themselves. I’m sure we will be debating this for a number of years to come - with more court cases in the future. 

