Tech pioneers vow not to develop AI-controlled weapons
AI shouldn't make life-taking decisions, but signatories don't rule out the use of AI in weaponry altogether


More than 2,400 leading tech industry figures, including Tesla CEO Elon Musk, and some 160 AI companies and research institutions, among them Google DeepMind and Silicon Valley Robotics, have signed a pledge promising never to allow the decision to take a human life to be "delegated to a machine".
In light of the "urgent opportunity and necessity for citizens, policymakers and leaders to distinguish between acceptable and unacceptable uses of AI", the companies signed up to the pledge have also called on governments to establish a set of concrete laws against autonomous weapons.
"We the undersigned agree that the decision to take a human life should never be delegated to a machine. There is a moral component to this position, that we should not allow machines to make life-taking decisions for which others - or nobody - will be culpable," the pledge, organised by the Future of Life Institute (FLI), read.
The signatories argued that such weapons, which select and engage targets without human involvement, could be extremely destabilising on a geopolitical level.
They also agreed that removing the element of human control, combined with these weapons' close ties to surveillance and data systems, could turn them into "instruments of violence and oppression".
"Moreover, lethal autonomous weapons have characteristics quite different from nuclear, chemical and biological weapons, and the unilateral actions of a single group could too easily spark an arms race that the international community lacks the technical tools and global governance systems to manage," the pledge continued.
"Stigmatizing and preventing such an arms race should be a high priority for national and global security."
The pledge comes in the wake of a growing wave of protests, not only from activists and campaigners but also from employees working for some of the world's biggest tech companies, such as Google.
Last month the tech giant said it would not renew a controversial Pentagon-led military project after thousands of its employees publicly criticised the company for its involvement. Dubbed Project Maven, the contract saw Google provide its TensorFlow machine learning tools to help analyse footage captured by military drones.
But after 3,100 Google employees signed a letter earlier this year demanding the company withdraw from the 'business of war', Google Cloud CEO Diane Greene confirmed in an internal meeting last month that the deal will not be renewed once it expires in 2019.
"We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology," the original letter, addressed to CEO Sundar Pichai, stated.
It is unclear, however, whether Google's involvement in Project Maven would have contravened the newly signed pledge, given that the pledge only rules out allowing a machine to make the decision to take a life - it does not prohibit the tangential involvement of AI technology in weapons systems.
Notable absentees from the agreement include Apple, one of the world's biggest investors in artificial intelligence, while only select employees from other major players including Microsoft, IBM and Amazon - not the companies themselves - have signed the pledge.
"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect," said Max Tegmark, FLI president as he announced the pledge today in Stockholm during the annual International Joint Conference on Artificial Intelligence (IJCAI).
"AI has huge potential to help the world - if we stigmatise and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons, and should be dealt with in the same way."

Keumars Afifi-Sabet is a writer and editor who specialises in the public sector, cyber security, and cloud computing. He first joined ITPro as a staff writer in April 2018 and eventually became its Features Editor. Although a regular contributor to other tech sites in the past, these days you will find Keumars at LiveScience, where he runs its Technology section.