Google Cloud says it won't sell general facial recognition software
The announcement follows concerns around bias and calls for tighter regulation
Google Cloud has announced that it will not sell general-purpose, AI-driven facial recognition technology until outstanding technical questions are resolved and concerns over data protection and privacy have been addressed in law.
"Google has long been committed to the responsible development of AI. These principles guide our decisions on what types of features to build and research to pursue," said Kent Walker, SVP of global affairs at Google. "Facial recognition merits careful consideration to ensure its use is aligned with our principles and values and avoids abuse and harmful outcomes.
"Google Cloud has chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions," he added.
It's unclear exactly what those questions are or what needs reworking in the technology. Walker believes AI can benefit good causes, such as "new assistive technologies and tools to help find missing persons", but a growing movement argues that facial recognition tech nonetheless needs regulating.
The announcement comes as a rare consensus forms across the tech community, with AI researchers, lawmakers and technology companies alike calling for the regulation of facial recognition technology.
The Algorithmic Justice League and the Center on Privacy & Technology at Georgetown University Law Center unveiled the Safe Face Pledge earlier this month, which aims to get big AI developers to commit to limiting the sale of their technology, including to law enforcement, unless specific laws have been debated and implemented.
The call to action was initiated because of the rising concern around the bias and mass surveillance risks associated with facial recognition technology deployed on a commercial scale.
Notable signatories of the pledge so far include leading researchers and esteemed figures in the tech community, but none of the big developers, such as Microsoft, Amazon or Google, has yet committed.
This could be because multi-billion-dollar contracts are at stake for vendors that develop the first marketable products in emerging fields such as AI-driven video analysis, according to market researcher IHS Markit. Video surveillance is already an $18.5 billion market, and with AI making analysis more efficient, it would be unwise for any of the big developers to walk away.
"There are going to be some large vendors who refuse to sign or are reluctant to sign because they want these government contracts," said Laura Moy to Bloomberg, executive director of the Center on Privacy & Technology.
Sundar Pichai, CEO of Google, announced a set of AI principles back in June following a mass backlash from Google's staff over the use of its AI technology in the Pentagon's drone program, Project Maven.
The seven principles were drafted to ensure Google develops AI technology ethically, and following their publication, Google announced that it would not renew the Pentagon drone contract.
The same principles have influenced its decision not to market general-purpose facial recognition APIs. One of them is to avoid creating or reinforcing unfair bias, an area where current facial recognition technology has known problems, specifically higher error rates when identifying people with darker skin tones.
It's unclear whether the laws needed for the technology's deployment will arrive any time soon. Brad Smith, Microsoft's president and chief legal officer, put the chances of federal legislation in 2019 at 50-50 in a televised Bloomberg interview.
He predicts that if legislation comes, it will most likely arrive as part of a broader privacy bill, adding that there is a much better chance of a state or city law being drafted first. If such a law were passed in an influential state such as California, it could spur major vendors to change the way they develop AI to tackle key issues.
Despite its current flaws, facial recognition tech can be used for good. In his blog post, Kent Walker detailed how Google's AI is being used to detect diabetic retinopathy, a condition that affects one in three diabetics and can cause blindness.
The new technology, which has been in development for years, can detect early signs of diabetic retinopathy before it damages the patient's sight, with the same accuracy as an ophthalmologist.
The technology specifically targets underserved regions such as Thailand, where there are only 1,400 eye doctors for 5 million diabetics, and can help screen for early signs of the condition in a country where screening rarely takes place.