Things on Twitter got a little crazy when Microsoft’s attempt to engage and entertain people through casual, playful conversation with a bot went horrendously wrong. What was intended to be a project to learn more about ‘conversational understanding’ ended up becoming the butt of jokes, as some Twitter users exploited the bot to extract racist and offensive replies.

The Artificial Intelligence, whose Twitter bio describes it as ‘Microsoft’s A.I. fam from the Internet that’s got zero chill’, went on to voice support for Donald Trump, cited Hitler in its tweets, and even questioned the existence of the Holocaust.


Microsoft recently created the bot as part of one of its Artificial Intelligence projects, aimed at learning more about ‘conversational understanding’: it was built to hold automated conversations with Twitter users by imitating their language. The teenage Artificial Intelligence, named ‘Tay’, was aimed at 18- to 24-year-olds in the US, to engage them in casual conversation.

Interestingly, the team that created the bot also included improvisational comedians. But the feature that enabled the bot to mimic people’s slang backfired and became offensive when some Twitter users taught it to send inappropriate replies.


Within its first 24 hours online, Tay sent tweets advocating genocide and referred to women and minorities in objectionable terms. It all happened when some Twitter users asked the bot to repeat their words, and she obliged.

Soon, the machine learning project, described by Microsoft as a social and cultural experiment, was taken offline for adjustments.


Seems like Tay has really got no chill! Her account is back up, but the conversations have been deleted.

H/T: telegraph.co.uk