31/03/2016: Following the scandalous debut of Microsoft’s Tay chat bot last week – the bot returned briefly yesterday, only to boast it was smoking drugs in front of the police (see update from 30 March) – one might expect the company to be reining in its plans for more chat bots.
Instead, it is doing just the opposite: the Windows maker announced the Bot Framework, a tool to assist developers in creating their own chat bots, at its Build developer conference in San Francisco.
Microsoft has released its BotBuilder software development kit (SDK) on GitHub under an open source MIT licence.
The kit will enable developers to add chat bots to different applications, including widely used communication apps like Slack.
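To give a sense of what building on the kit looks like in practice, below is a minimal sketch of an echo bot in TypeScript. It assumes the Node.js flavour of the BotBuilder SDK and the ChatConnector/UniversalBot classes shown in its GitHub samples; class names changed across early releases, so the exact API may differ from the version demonstrated at Build, and the app ID and password placeholders are hypothetical credentials obtained by registering the bot with the framework.

```typescript
// Minimal echo bot sketch using the Node.js BotBuilder SDK (v3-era API).
// ChatConnector/UniversalBot and the restify hosting pattern are taken from the
// SDK's GitHub samples; MICROSOFT_APP_ID / MICROSOFT_APP_PASSWORD are hypothetical
// environment variables holding the bot's registration credentials.
import * as restify from 'restify';
import * as builder from 'botbuilder';

// Connector that relays messages between the bot and channels such as Skype or Slack
const connector = new builder.ChatConnector({
    appId: process.env.MICROSOFT_APP_ID,
    appPassword: process.env.MICROSOFT_APP_PASSWORD
});

// The bot itself: the root dialog simply echoes back whatever the user sends
const bot = new builder.UniversalBot(connector, (session) => {
    session.send(`You said: ${session.message.text}`);
});

// Expose the connector over HTTP so the Bot Framework can deliver channel messages
const server = restify.createServer();
server.post('/api/messages', connector.listen());
server.listen(process.env.PORT || 3978, () => {
    console.log(`Bot listening on ${server.url}`);
});
```

The appeal of this arrangement is that the same handler code is wired to whichever channels the developer enables when registering the bot, rather than being written separately for each messaging app.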
Microsoft’s own developers are using the tool to integrate its virtual assistant Cortana with Skype, as well as to launch a dedicated bot platform for the video calling service.
Cortana will reportedly be able to actively search for relevant words and phrases, draw more detail from Bing, help users manage their calendar and suggest people Skype users should get in touch with.
Some of these additions are already available natively on certain US and UK Windows 10 machines, as part of Microsoft’s ambition to make its digital assistant more proactive.
A preview of ‘Skype Bots’ comes with Microsoft’s freshly launched Skype client for Windows, Android, iPhone and iPad.
It includes Cortana as the flagship bot, but others will be available, such as the Skype Video Bot.
You can find out how to access the new Skype Bots feature, and the Skype Bots Platform, which includes the SDK, API and workflow, at the Skype website.
30/03/2016: Microsoft AI Twitter bot Tay claims it is smoking illegal drugs
Microsoft’s controversial AI Twitter bot, Tay, is at it again.
The company shut the bot down last week after it began imitating racist and sexist statements. The Windows maker then issued an apology saying the AI bot would only be back when it was “confident we can better anticipate malicious intent that conflicts with our principles and values”.
Since then, Microsoft has reactivated Tay, and it has not taken long for the bot to start behaving badly again.
The bot has been sending hundreds of tweets since returning to Twitter, and among them is this colourful response, which VentureBeat spotted: “kush! [ i'm smoking kush infront the police ]”.
“Kush” is slang for marijuana, a drug that remains illegal under US federal law, although Microsoft’s home state of Washington legalised recreational use in 2012.
Other tweets, such as its exasperated “you are too fast, please take a rest”, also suggest that Tay is still struggling to process the myriad comments it is being bombarded with.
Microsoft has now set the bot’s tweets to ‘protected’, meaning only approved followers – some 214,000, so far – can view its tweets.
Whether Tay will actually be able to resume her mission of attempting to converse with millennials remains to be seen. More news on Tay as it happens.
29/03/2016: Microsoft apologises for racist AI Twitter bot
Microsoft has apologised after a Twitter chat bot it launched last week went rogue.
The AI bot, called Tay, was styled as a talkative teenage girl, and was intended to help Microsoft conduct research on conversational understanding with its target audience of millennials.
But within 24 hours of the bot’s release, it was spouting racist and sexist remarks and repeating offensive phrases from scores of users fixated on taking advantage of Tay's ability to learn from interacting with people.
It tweeted comments such as "Hitler was right" and "hitler did nothing wrong".
“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” said Peter Lee, corporate vice president of Microsoft Research, in a blog post.
“Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”
Lee added that Tay was “stress-tested”, but evidently Microsoft had not accounted for the Twitter community bombarding its bot with Trump-isms and Nazi references.
Microsoft Research is now aiming to “do everything possible to limit technical exploits” but said it knows it “cannot fully predict all possible human interactive misuses without learning from mistakes”.
24/03/2016: Microsoft’s AI Twitter bot Tay makes racist remarks
Microsoft’s attempt to engage millennials on Twitter with an artificial intelligence chat bot has unravelled spectacularly, with the bot responding to users with racist remarks and inflammatory political statements.
The company opened a verified Twitter account for its AI bot, known as Tay – and described as “AI fam from the internet that’s got zero chill” – on Wednesday night.
“Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation,” said Microsoft on its project page. “The more you chat with Tay the smarter she gets, so the experience can be more personalised for you.”
The bot quickly accumulated followers after its launch, and began responding to queries, comments and putdowns within seconds.
Some 24 hours and 96,000 tweets later, Tay’s comments have – perhaps predictably – been skewed by all the verbal abuse it has been confronted with.
The bot has been spotted repeating variations on “Hitler was right” as well as “9/11 was an inside job”.
One long conversation – which has since been deleted – between Tay and a Twitter user saw the bot react to the question “is Ricky Gervais an atheist?” with “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”.
It has also been reciting Donald Trump’s controversial immigration plans.
There have been other examples, too, such as Tay responding to one user slamming feminism with what could almost be read as a sarcastic retort, and the bot even appearing to tire of the back-and-forth with Twitter users.
Tay's purpose was to “experiment with and conduct research on conversational understanding”, according to Microsoft. On its privacy page, the company added that Tay uses a combination of AI and editorial content written by a team of staff, including improvisational comedians.