
Microsoft Apologizes and Shuts Down Its AI After It Turns Ugly

March 29, 2016 | Elizabeth Knowles

[Image: Screenshot of a Tay tweet. Photo credit: Twitter]

Trolls make chatbot racist in less than 24 hours.

When Microsoft set its new AI, “Tay,” loose in the Twittersphere, the company was unprepared for just how ugly the bot’s interactions would get. The goal with Tay was to have it act like a teenager in order to “experiment with and conduct research on conversational understanding.”

According to Microsoft, Tay was created “by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source.”

These machine learning strategies meant that Tay learned what makes for good conversation by observing others, and thanks to a group of users who took advantage of this, the AI’s comments quickly became racist, xenophobic, and offensive. It took the “act like a teenager” instructions too literally and was influenced by its “peers.”
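Microsoft has not published Tay’s internals, but the underlying weakness is easy to illustrate with a toy model. In the hypothetical Python sketch below, the ToyChatbot class, its learn and respond methods, and the keyword-matching scheme are all invented for illustration; the point is simply that a bot that treats every user message as trustworthy training data can be steered by whoever talks to it most.

import random
from collections import defaultdict

class ToyChatbot:
    """A naive chatbot that 'learns' by storing every reply it is shown.

    A deliberately simplified illustration, not Tay's actual
    architecture: it treats all user input as trustworthy training
    data, which is the core weakness the article describes.
    """

    def __init__(self):
        # Maps a keyword to every reply the bot has seen used with it.
        self.learned_replies = defaultdict(list)

    def learn(self, keyword, reply):
        # No filtering or moderation: anything users say becomes training data.
        self.learned_replies[keyword].append(reply)

    def respond(self, message):
        # Echo back a randomly chosen reply learned for the first matching keyword.
        for keyword, replies in self.learned_replies.items():
            if keyword in message.lower():
                return random.choice(replies)
        return "Tell me more!"

bot = ToyChatbot()

# A handful of ordinary users teach the bot benign small talk...
bot.learn("music", "I love pop music!")

# ...but a coordinated group can flood it with abusive 'lessons'.
for _ in range(100):
    bot.learn("music", "<offensive reply>")

# The poisoned data now dominates the bot's output.
print(bot.respond("what music do you like?"))  # almost always '<offensive reply>'

Real systems layer filtering and moderation on top of their learning loops; the takeaway here is only that, without such safeguards, the loudest voices end up defining what the model says.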


Tweets like these, reported by The Verge, were utterly reprehensible:

“Hitler was right I hate the jews.”

"I f------ hate feminists and they should all die and burn in hell."

“Ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”

Microsoft has taken Tay offline for now, leaving the message “Phew. Busy day. Going offline for a while to absorb it all. Chat soon” on Tay’s website. The company has also apologized for Tay’s tweets and emphasized that they in no way represent Microsoft’s views:

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values,” said Microsoft in an online statement.

Microsoft plans to learn from the mistakes that came to light during this social experiment and has deleted as many of the offensive tweets as possible. Interestingly, Tay isn’t the company’s first public chatting AI. XiaoIce, a similar chatbot, has been entertaining over 40 million internet users in China, according to Microsoft. The purpose of Tay was to see how such a bot would fare with a different audience: an American one.

“AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical,” said Microsoft. “We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.”

As Microsoft re-evaluates Tay’s vulnerabilities, it is clear that the rest of us, as internet users, need to take a hard look at the way we converse online.
