Google publishes ethical code for AI following Project Maven fallout
Google rules out use of its AI for building weapons, but will continue to work with the military in other areas
Google has ruled out deploying its artificial intelligence technology in weapons development, publishing a set of new principles following the controversy surrounding its involvement in a Pentagon-led military project.
After considerable pressure from its employees, Google Cloud CEO Diane Greene announced internally last Friday that the company would not renew its contract with the Pentagon's Project Maven - in which AI technology is being harnessed to improve drone performance - when it expires in 2019.
Now Google has released a set of principles that will see AI applications assessed against a host of objectives, saying it believes any use of its technology should be socially beneficial and accountable, incorporate privacy by design, and adhere to ethical standards.
"At its heart, AI is computer programming that learns and adapts. It can't solve every problem, but its potential to improve our lives is profound," CEO Sundar Pichai wrote, outlining the seven principles.
"We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."
Despite ruling out the deployment of AI in weapons "or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to other people", Google said it would continue to work with the military in "many other areas" including recruitment, training, and search and rescue.
"These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe," Pichai continued.
"We believe these principles are the right foundation for our company and the future development of AI. This approach is consistent with the values laid out in our original Founders' Letter back in 2004.
"There we made clear our intention to take a long-term perspective, even if it means making short-term tradeoffs."
The Pentagon announced Project Maven in May last year partly as a means of relieving intelligence analysts inundated with enormous volumes of video surveillance data, with machine learning seen as the key to freeing personnel from mundane administrative tasks.
Beyond weapons development, Google's principles touched on a number of pertinent ethical issues in building AI technology, including the tendency of datasets and algorithms to absorb implicit human biases - as seen, for instance, in concerns over AI-related racial bias in the US criminal justice system.
Google's decision to publish a written set of principles also follows the House of Lords' call earlier this year for a cross-sector code of practice to ensure AI technology is developed ethically and does not diminish the rights and opportunities of humans.
07/06/2018: Google won't renew its contract with Pentagon-led 'Project Maven'
Google has decided not to renew its contract with a Pentagon-led AI project dubbed 'Project Maven', following mounting pressure from its own employees.
The tech giant's involvement with the project - which aims to harness AI to improve military drone performance by analysing footage using computer vision algorithms - has come under fire from its employees since the news became public earlier this year.
Despite defending its involvement in the project up to this point, Google Cloud CEO Diane Greene announced last week that the company will withdraw from the military work once its contract expires in 2019, according to Gizmodo, reportedly saying the backlash had been terrible for the company.
Giving an internal weekly update on Google Cloud's business on Friday, Greene reportedly said the original decision to take on the contract was made at a time when Google was more aggressively pursuing military work, adding that the company planned to unveil ethical principles about its use of AI very shortly.
More than 3,000 Google employees signed a letter in April demanding the company withdraw from "the business of war", warning that its involvement in military affairs would irreparably damage the brand - particularly in light of Google's famous 'Don't Be Evil' motto, which was incidentally phased out of its code of conduct last month.
"We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology," the letter, addressed to CEO Sundar Pichai, stated.
"This plan will irreparably damage Google's brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public's trust."
The internal pressure heightened last month as dozens of employees resigned in protest, with reasons ranging from lack of transparency to growing ethical concerns, despite repeated assurances from Google that the work was "non-offensive" in nature.
"Any military use of machine learning naturally raises valid concerns," Google said in response to the letter in April.
"We're actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies."
The bubbling controversy, with Google at its centre, follows Microsoft CEO Satya Nadella's warning that developers, including those within Microsoft, need to take their ethical responsibilities much more seriously when considering the applications of AI.
"We're at that stage where the choices we make are grounded in the fact that technology development doesn't just happen - it happens because us humans make design choices," he said at a London event in May. "Those design choices need to be grounded in principles and ethics - and that's what's the best way to ensure the future we all want."
IT Pro has approached Google for comment.
05/04/2018: Google employees urge CEO Sundar Pichai to cancel Pentagon AI project
Thousands of Google employees have signed a letter demanding the company withdraw from 'the business of war' in light of its involvement in a Pentagon-led AI project.
The letter, first seen by The New York Times, was signed by more than 3,100 employees and follows news last month that Google would lend its TensorFlow programming kits to be used in a US Department of Defense project, dubbed Project Maven, aimed at improving military drone performance by analysing footage via computer vision algorithms.
Addressed to CEO Sundar Pichai, the letter stated: "We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
The letter's signatories argue Google's involvement may cause "irreparable damage" to its brand, while raising deep ethical concerns "amid growing fears of biased and weaponized AI" - caustically citing the company's famous 'Don't Be Evil' motto.
A chorus of voices has previously raised the alarm over the potential applications of artificial intelligence in warfare, with figures such as the late Stephen Hawking and Apple co-founder Steve Wozniak warning against the rise of 'autonomous weapons' in 2015.
The Pentagon first announced Project Maven last May partly as a way to relieve military and civilian intelligence analysts inundated with enormous volumes of video surveillance data, in the hope that machine learning could free personnel from mundane administrative tasks.
In a statement, Google said that while "an important part of our culture is having employees who are actively engaged in the work that we do", its contract was "specifically scoped to be for non-offensive purposes and using open-source object recognition available to any Google Cloud customer".
"Any military use of machine learning naturally raises valid concerns," the statement said. "We're actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies."
Meanwhile, Google in January opened up its machine learning capabilities to businesses in the form of Cloud AutoML, allowing companies to integrate AI into their applications in less than a day.
Pichai has yet to respond to the letter personally, but all indications point towards Google's continued involvement in Project Maven despite its employees' protestations.
Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. A regular contributor to other tech sites in the past, these days he can be found at LiveScience, where he runs its Technology section.