Tay scandal taught us to take accountability, says Microsoft CEO
Satya Nadella says Redmond has learned from its disastrous racist chatbot
Microsoft CEO Satya Nadella stressed today that it is the job of AI companies to ensure that artificial intelligence is kept under control.
At the London launch of his new book, Hit Refresh, IT Pro heard Nadella address the question posed by many AI sceptics: what happens if tech companies create an AI that gets out of hand?
"It's up to us. In other words, how do we approach this with a set of design principles that allow us to control what AI we create? Just like good user experience, I would claim there is good AI," he said. "As designers of AI, it's our responsibility."
Microsoft has invested heavily in the space, and Nadella considers AI to be one of the three main pillars of the company's future, alongside quantum computing and mixed reality. It has made progress most notably through its digital assistant Cortana, but also in other areas, including machine vision and advanced analytics.
Some of its AI experiments, however, have been less successful. One particularly embarrassing failure was Tay, a Twitter-based chatbot powered by machine learning. Designed to emulate a teenage girl, Tay was supposed to develop more natural conversation by learning from social interactions with real users.
This quickly went off the rails, as trolls exploited the system to teach Tay to parrot racial slurs, conspiracy theories and other objectionable comments.
Nadella acknowledged that the experiment proved problematic, but said that the company has learnt from the incident.
"One of the things that has really influenced our design principles is that episode; we have to take accountability. First and foremost, we need to be able to in fact foresee these attacks," he said.
"But the idea that we need to keep the broader goal of having this AI behave properly is our accountability. So how can we test it, how can we make sure that it does not lose control is a lot of places where we're working now."