Microsoft apologizes for Tay chatbot's offensive tweets
Microsoft has now apologized for the offensive turn its Tay chatbot took within hours of being unleashed on Twitter. In a blog post, Peter Lee, corporate vice president of Microsoft Research, said the company is "deeply sorry" for Tay's offensive tweets, and that it will only bring the chatbot back once it has addressed the issues that caused Tay's turn in the first place.
Lee goes on to note that Tay is actually the second AI Microsoft has released to the public, following one named XiaoIce in China. XiaoIce, Lee says, is used by around 40 million people there, and Tay was an attempt to see how this type of AI would adapt to a different cultural environment.
According to Lee, the team behind Tay stress-tested the chatbot for exploits before releasing it to the public. However, the team apparently overlooked the specific vulnerability that allowed bad actors to coax the chatbot into parroting racist and offensive statements.