Canadian mounted police broke law with Clearview AI deal
The use of a facial image database violated the country’s Privacy Act, said Canada’s privacy regulator
The Canadian police force broke the law when using Clearview AI facial recognition software, the country's privacy regulator has ruled.
The Royal Canadian Mounted Police's (RCMP) use of the technology to conduct hundreds of searches of a database compiled illegally by a commercial company was deemed a violation of the Privacy Act, a report from the Office of the Privacy Commissioner (OPC) of Canada found.
Clearview AI was found to have violated Canada's federal private sector privacy law by creating a database of over three billion images scraped from internet websites without users' consent. Clearview users, like the RCMP, could match photographs of people against photographs in the database.
"The use of FRT [facial recognition technology] by the RCMP to search through massive repositories of Canadians who are innocent of any suspicion of crime presents a serious violation of privacy," commissioner Daniel Therrien said. "A government institution cannot collect personal information from a third party agent if that third party agent collected the information unlawfully."
The RCMP had initially stated that it was not using Clearview AI, only to later admit that it had used the company's technology "in a limited way", primarily for identifying, locating, and rescuing children who were victims of online sexual abuse.
However, the OPC found that only 6% of the RCMP's searches using the technology appeared to be related to victim identification, with a further 9% attributed to other justifiable law enforcement activities. Based on Clearview records, the police force was unable to provide adequate justification for the vast majority (85%) of the searches it conducted.
In a statement, the RCMP said it publicly acknowledged its use of the technology in February 2020, and ceased using Clearview AI in July 2020, when the company ended its operations in Canada.
"We acknowledge that there is always room for improvement and we continually seek opportunities to strengthen our policies, procedures and training," said a police spokesperson. "The RCMP has accepted all of the recommendations of the OPC and has already begun efforts towards their implementation."
In May, Privacy International and several other European digital privacy campaigners launched legal action against the controversial US facial recognition firm Clearview AI, claiming that the company's methods for collecting images violate European privacy laws.
Following a cyber attack against Clearview AI in February 2020, it was reported that a number of high-profile public agencies, including the FBI, were on the company's client list.
The company gained notoriety after the New York Times ran a feature about its work with law enforcement agencies and how its facial recognition models were trained on three billion images, harvested from social media sites.
Zach Marzouk is a former ITPro, CloudPro, and ChannelPro staff writer, covering topics like security, privacy, worker rights, and startups, primarily in the Asia Pacific and the US regions. Zach joined ITPro in 2017 where he was introduced to the world of B2B technology as a junior staff writer, before he returned to Argentina in 2018, working in communications and as a copywriter. In 2021, he made his way back to ITPro as a staff writer during the pandemic, before joining the world of freelance in 2022.