Microsoft apologizes for Tay chatbot's offensive tweets

Microsoft has now apologized for the offensive turn its Tay chatbot took within hours of being unleashed on Twitter. In a blog post, Peter Lee, corporate vice president of Microsoft Research, said that the company is "deeply sorry" for Tay's offensive tweets, and that it will only bring the chatbot back once the issues that caused Tay's turn in the first place have been addressed:

As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

Lee goes on to note that Tay is actually the second AI chatbot Microsoft has released to the public, following one named XiaoIce in China. XiaoIce, Lee says, is used by around 40 million people in China, and Tay was an attempt to see how this type of AI would adapt to a different cultural environment.

According to Lee, the team behind Tay stress-tested the chatbot for exploits before releasing it to the public. However, the team apparently overlooked the specific vulnerability that allowed bad actors to coax the chatbot into repeating racist and otherwise offensive statements.
