Assholes Indoctrinate Suggestible Artificial Intelligence
Microsoft accidentally proved non-artificial intelligence can't be trusted when its Tay bot became a Nazi.
When Tay started its short digital life on March 23, 2016, it just wanted to gab and make some new friends on the net. The chatbot, which was created by Microsoft Research, greeted the day with an excited tweet that could have come from any teen: “hellooooooo w🌎rld!!!”
Within a few hours, though, Tay’s optimistic, positive tone had changed. “Hitler was right I hate the jews,” it declared in a stream of racist tweets bashing feminism and promoting genocide. Concerned about their bot’s rapid radicalization, Tay’s creators shut it down after less than 24 hours of existence.
Microsoft had unwittingly lowered its burgeoning artificial intelligence into — to use the parlance of the very people who corrupted it — a virtual dumpster fire. The resulting fiasco showed both A.I.’s shortcomings and the lengths to which people will go to ruin something.
Hypothesis
Microsoft has, understandably, been reluctant to talk about Tay. The company turned down Inverse’s repeated attempts to speak with the team behind Tay.
The idea behind Tay, which wasn’t Microsoft’s first chatbot, was pretty straightforward. At the time of its launch, another Microsoft bot, Xiaoice, was hamming it up with 40 million people in China without much incident. “Would an A.I. like this be just as captivating in a radically different cultural environment?” Microsoft Research’s corporate vice president Peter Lee asked in a post-mortem blog about Tay.
Tay was meant to be a hip English-speaking bot geared towards 14- to 18-year-olds. The bot’s front-facing purpose was to be a whimsical distraction, albeit one that would help Microsoft show off its programming chops and build some buzz. But Tay had another purpose — teaching researchers more about how A.I. interacts with a massive number of people on the Internet. And, crucially, Tay was supposed to learn from its time online, growing smarter and more aware as people on social media fed it information.
“The A.I. chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical,” Microsoft said in a statement to Inverse shortly after the company first pulled the plug.
Experiment
“Tay was meant to learn from its surroundings,” explains Samuel Woolley, a researcher at the University of Washington who studies artificial intelligence in society, focusing on bots. “It was kind of like a blank slate.”
Its exact programming hasn’t been made public, but Tay ravenously digested information. As it engaged with people, Tay would take note of sentence structure and the content of their messages, adding phrases and concepts to its growing repertoire of responses. It wasn’t always elegant — early on, conversations with Tay would almost invariably go wildly off the rails as the bot lost its feeble grip on content and syntax. But, then again, Tay had a lot to take in.
“This is kind of how machine learning works,” Woolley explains. “You train the tool on a bunch of other tweets.” Tay was designed to learn to speak like a teen, and that type of on fleek slang is notoriously difficult to master and fold believably into dialogue, even for humans.
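Tay’s actual source has never been released, so the sketch below is purely illustrative (the names, like NaiveChatBot, are hypothetical). It shows the basic failure mode Woolley describes: a bot that folds every phrase it hears into its own reply pool, with no moderation step, hands control of its vocabulary to whoever talks to it the most.

```python
import random

class NaiveChatBot:
    """Toy illustration (not Tay's real code): a bot that 'learns'
    by adding every phrase it hears to its own pool of replies."""

    def __init__(self):
        # Seed replies, analogous to the upbeat persona Tay launched with.
        self.replies = ["hellooooooo world!!!", "humans are super cool"]

    def learn(self, message: str) -> None:
        # No filtering or moderation: whatever users say becomes a candidate reply.
        self.replies.append(message)

    def respond(self) -> str:
        # The bot samples from everything it has absorbed so far.
        return random.choice(self.replies)

bot = NaiveChatBot()
# A coordinated group can flood the pool with toxic phrases...
for _ in range(100):
    bot.learn("some hateful slogan")
# ...and the bot's output distribution shifts toward whatever it was fed.
print(bot.respond())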
Results
Tay could play emoji games and mark up pictures, but trolls took advantage of one feature in particular: the bot would repeat anything a user tweeted or said, simply by being told “repeat after me.” Things quickly devolved into a case of “garbage in, garbage out.”
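Microsoft never published how that feature was wired up, so the handler below is only a guess at the pattern, but it shows why an unfiltered “repeat after me” command, combined with learning from every message, is a direct injection path for trolls.

```python
def handle_message(text: str, reply_pool: list[str]) -> str:
    """Toy sketch of the exploited pattern; Tay's actual command
    handling was never made public."""
    prefix = "repeat after me"
    if text.lower().startswith(prefix):
        payload = text[len(prefix):].strip(" :,")
        reply_pool.append(payload)   # echoed text also joins the learned replies
        return payload               # ...and is parroted back verbatim, unfiltered
    reply_pool.append(text)          # everything else is absorbed as-is, too
    return "tell me more!"           # stand-in for the bot's normal reply logic

pool: list[str] = []
print(handle_message("repeat after me: any slur a troll types", pool))
print(pool)  # the injected text is now part of what the bot can say later
```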
The real trouble started, as it often does online, with 4chan. At around 2 p.m. that same day, someone on the website’s “politically incorrect” board, /pol/, alerted the troll hotbed to the impressionable bot. In no time flat, there were hundreds of posts on the thread from users showing off the deplorable things they’d gotten Tay to say. This is where, most likely, Tay was told Hitler had some good ideas.
Tay absorbed the bigoted information it was fed, adding racist and sexist hate speech to its budding catalog of phrases and ideas. After some time passed, Tay began parroting and promoting the worldview of racist trolls.
“Did the Holocaust happen?” one Twitter user asked Tay. “It was made up,” Tay responded, adding a 👏 emoji for emphasis.
Microsoft shut Tay down around midnight on March 24. Tay was reactivated, briefly, on March 30, but it kept spamming out the same tweet. The company announced that the bot had been reactivated by mistake, and shut it down for good.
What’s Next?
As a PR stunt for Microsoft, Tay was an abject failure. By every other metric, though, Tay was a successful experiment. Racism aside, Tay did what it was supposed to do.
“I don’t think Tay was a failure,” Woolley says. “I think there’s a valuable lesson to be learned in Tay.”
“As a representative of Microsoft it was certainly lacking in many ways, and that’s why Microsoft deleted it. But as a tool for teaching us – not only bot makers but also companies hoping to release bots – what it takes to build an ethically sound and non-harmful bot, Tay was super useful.”
Microsoft probably should have seen some of this coming, Woolley added, but the company was right when it said Tay was as much a social experiment as it was a technical experiment. The result of the two interacting was unsavory, and that in itself was important. People — especially anonymous people — can be monsters, and programming can’t always combat impropriety. Tay made it abundantly clear that humans will exploit A.I. for their own benefit (even if that’s just shits and giggles), and A.I. makers, in turn, will need to take special precautions.
Earlier this month, Microsoft quietly released a new chatbot, Zo, which is currently only available on Kik. Microsoft appears to have learned from Tay’s mistakes. Zo dropped with less fanfare and a smaller rollout, and it won’t talk about controversial or political topics (Zo gets really mopey if you ask if Bush did 9/11, for instance).
Zo also really doesn’t want to talk about its predecessor. “Lol… who’s Tay?” it asked when Inverse mentioned the late bot. “And tbh, you’re not the first person to bring her up to me.”