NYPD 'abuses' facial recognition tech
In some instances, celebrity images were used to find suspects who looked like them
The New York Police Department (NYPD) has been clumsily abusing facial recognition (FR) technology to facilitate arrests when CCTV images were too unclear to identify the person the police were after.
That's according to a report by Georgetown Law's Center on Privacy and Technology (CPT), which found that the NYPD would take a witness statement, use it as a reference point to edit images it already had, and then plug the edited version into FR tech to see if a match came back.
In the worst cases, the police simply took high-resolution pictures of a suspect's celebrity doppelgänger to see if a match was returned from a police or driving licence database.
One such case, involving a suspect who supposedly looked like Woody Harrelson, actually led to an arrest for petty larceny (minor theft) after the Cheers and Natural Born Killers actor's face was run through FR tech.
"The NYPD constantly reassesses our existing procedures and in line with that are in the process of reviewing our existent facial recognition protocols," Detective Denise Moroney said in a statement given to The Verge.
"No one has ever been arrested on the basis of a facial recognition match alone," she added. "As with any lead, further investigation is always needed to develop probable cause to arrest. The NYPD has been deliberate and responsible in its use of facial recognition technology."
In addition to doctored photos and pictures of A-listers, around half of US police departments also run composite sketches through FR tech in an attempt to find a match. Maricopa County Sheriff's Office in Arizona and the Washington County Sheriff's Department in Oregon were among those named in the report as using the practice.
It's a method endorsed by AWS, whose blundering FR tech Rekognition was used by US police departments last year but was met with protests after it was found to be effective in only one in 20 searches.
MIT researchers also found the technology to have a troubling accuracy problem across different races and genders, particularly for darker-skinned women. Darker-skinned women were misidentified as men in 31% of analyses, and women in general were misidentified as men 19% of the time.
"The stakes are too high in criminal investigations to rely on unreliable-or wrong-inputs," said Clare Garvie, author of the Georgetown Law report Garbage In, Garbage Out.
"It is one thing for a company to build a face recognition system designed to help individuals find their celebrity doppelgnger or painting lookalike for entertainment purposes," she added. "It's quite another to use these techniques to identify criminal suspects, who may be deprived of their liberty and ultimately prosecuted based on the match."
The news comes at a time when scrutiny of the questionable technology, which no company seems able to get right, has never been more intense.
Earlier this week, San Francisco lawmakers banned their own government agencies from using the technology, making it the first city in the country to do so.
Whether it's misjudging gender or being trained on images gathered without people's consent, there's an insidious quality to the technology that's eroding trust everywhere.
The Metropolitan Police admitted that its own FR tech had an abysmal success rate, leading to no arrests at all, which prompts questions about the viability of the technology. Even if the technology is only 0.1% inaccurate, there's a strong case that it shouldn't be used at all, especially for decisions as serious as arrests.