
The Jack Bauers of Europe Love Facial Recognition

(Bloomberg Opinion) — Facial recognition has friends in Europe. Live trials of real-time face-tracking have taken place over the past year in countries such as the U.K. and France, by and large without falling foul of the continent’s sweeping but haphazardly enforced data protection laws.

This should be a wake-up call for regulators to act. But the trials also offer a taste of the public-safety argument that will be trotted out to defend this intrusive, flawed technology, glamorized by TV shows like “24,” where it foils fictional terrorist plots.

The example of the French city of Nice, whose mayor Christian Estrosi has been doubling down on surveillance technology in the wake of a terrorist attack in 2016, is instructive. An experiment with facial recognition at the city’s carnival in February — promoted on YouTube as a miracle tool that reunited lost relatives, identified suspicious people and kept the public safe — was recently deemed a success by the mayor’s office. No complaints from the people taking part, no false positives and a “100%” rate of identification of “persons of interest,” according to Le Monde. The mayor’s report called for new regulations that would allow the technology’s roll-out in time for the 2024 Paris Olympics, noting that facial recognition was in a legal gray zone, somewhere between narrow data-protection requirements and broader national security laws.

His conclusions raise a lot of questions. Talk of perfect or near-perfect accuracy is easy when dealing with relatively small groups of people and images on file: The Nice trial built a database of 50 persons of interest and filmed 5,000 people over three days. But such figures can also be deceiving. A system with a 99.9% success rate sounds impressive until you consider that a 0.1% false-positive rate across 100,000 scans means 100 people misidentified. One review of facial-recognition trials by London police found that 63.6%, or almost two-thirds, of computer-generated matches deemed credible by a human operator turned out to be incorrect.
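
To make that base-rate arithmetic concrete, here is a minimal sketch in Python. The crowd size, error rate and count of genuine persons of interest are illustrative assumptions taken from the figures quoted above, not from any published trial methodology:

```python
# Illustrative base-rate arithmetic behind facial-recognition accuracy
# claims. All inputs are assumptions for the sake of the example.

def expected_false_positives(scans: int, false_positive_rate: float) -> float:
    """Expected number of innocent people flagged across all scans."""
    return scans * false_positive_rate

def match_precision(true_matches: int, false_matches: float) -> float:
    """Share of all alerts that point at genuine persons of interest."""
    total = true_matches + false_matches
    return true_matches / total if total else 0.0

if __name__ == "__main__":
    # A "99.9% accurate" system still misidentifies 100 people
    # when it scans 100,000 faces.
    fp = expected_false_positives(scans=100_000, false_positive_rate=0.001)
    print(f"False positives at 0.1% over 100,000 scans: {fp:.0f}")  # -> 100

    # If only, say, 10 genuine persons of interest pass the cameras,
    # roughly 9 out of 10 alerts are wrong despite the headline accuracy.
    print(f"Alert precision: {match_precision(10, fp):.1%}")  # -> 9.1%
```

The point of the sketch is that the rarer the people being hunted, the more the false alarms dominate, no matter how impressive the advertised error rate sounds.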

Mismatches matter, given the consequences for the people concerned. Some police are using the technology in dubious ways that could easily lead to mistakes. When the New York City Police Department tried to use surveillance photos to trace a man suspected of shoplifting at a drug store in 2017, the facial-recognition system produced no matches. Police plugged in an image of Woody Harrelson, claiming he looked like the suspect, generated a list of possible perpetrators and arrested a man for petty larceny. While no one has alleged they got the wrong man, it’s a disturbing practice that ought to make governments seriously consider proposals like a ban or moratorium on this technology.

But what about a ticking time-bomb event, a child being kidnapped, a gunman on the loose? That’s surely worth inconveniencing 100 people? “If we have missing kids, sex trafficking, active shooter, active terrorist threat, let’s agree as a community that we’re going to turn this technology on,” says Brent Boekestein, CEO of Vintra Inc., an AI-video analytics startup. Yet these examples always seem to work better in the imagination than in reality. The perpetrators of the 2013 Boston bombings weren’t identified by facial recognition despite their photos being on file. The terrorist who mowed down partygoers in Nice in 2016 did so in full view of CCTV cameras; it’s hard to see how facial recognition would have helped, given he was not on France’s state security watch list. And even if you assume an accurate match of a gunman’s face, that still may leave only 120 seconds before the first shot.

It’s easy to understand why law-enforcement agencies keep trying to bring this technology back onto the streets. Squeezed national police budgets and the time-consuming work of keeping tabs on lists of suspects, sometimes running into the tens of thousands, create demand for any way of lightening the load. But it’s also easy to understand why proper oversight and limits are vital. The risk of “function creep” is high: What begins as an attempt to track suspected terrorists could quickly slide into crowd control at a protest, analysis of people’s emotions, or identifying petty criminals. Do we want an always-on digital fingerprint?

The European Union has yet to hit upon a unified stance that addresses this. The bloc’s General Data Protection Regulation has a clear definition of “sensitive” data like biometrics, which require consent to process. But there are loopholes in the name of “substantial” public interest — and for national security — which is the kind of thing individual countries care about. The result is inconsistency: In Sweden, a facial-recognition trial in a school led to a $20,375 fine, but in Denmark, the roll-out of AI-powered face-tracking at a football stadium was waved through.

National security shouldn’t be a reason to block clearer and tougher curbs from Brussels. At a time when governments are ramping up their law-enforcement requests for access to Facebook Inc. data, and with reports of Amazon.com Inc.’s smart doorbell firm Ring working with police, fewer faces to recognize would be nice.

To contact the author of this story: Lionel Laurent at [email protected]

To contact the editor responsible for this story: Stephanie Baker at [email protected]

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Lionel Laurent is a Bloomberg Opinion columnist covering Brussels. He previously worked at Reuters and Forbes.

For more articles like this, please visit us at bloomberg.com/opinion

©2019 Bloomberg L.P.
