Microsoft Chatbot Snafu Shows Our Robot Overlords Aren't Ready Yet

The Twitter profile for Tay.ai, Microsoft's short-lived chatbot. (Microsoft/Screenshot by NPR)

Editor's note: This post contains language that some readers might find offensive.

Her emoji usage is on point. She says "bae," "chill" and "perf." She loves puppies, memes, and ... Adolf Hitler? Meet Tay, Microsoft's short-lived chatbot that was supposed to seem like your average millennial woman but was quickly corrupted by Internet trolling. She was launched Wednesday and shut down Thursday.

The incident is a warning sign to any company too eager to share its artificial intelligence with the public: If it's going to be on the Internet, there are going to be trolls. But before we dive into what, exactly, went wrong, let's take a look at some of the bot's most disturbing tweets.

On genocide:

[Screenshot of Tay's tweet / Twitter]

On her obedience to Adolf Hitler:

[Screenshot of Tay's tweet / Twitter]


On feminists:

[Screenshot of Tay's tweet / Twitter]

Tay was designed to watch what others on the Internet were saying and then repeat those lines back. "The more you chat with Tay the smarter she gets," Microsoft said. The bot was developed by Microsoft's Technology and Research and Bing teams to have "casual and playful conversation." Usually, though, chatbots are taught not to repeat certain words (such as "Hitler" or "genocide"). Tay apparently had no such safeguard.
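Such a safeguard can be as simple as checking each outgoing reply against a blocklist of banned terms before it is posted. Here is a minimal sketch in Python of what that kind of filter might look like; the term list, the function names and the fallback line are illustrative assumptions, not Microsoft's actual code:

    # Illustrative sketch of a banned-term filter for a chatbot.
    # The blocklist and the fallback reply are hypothetical examples.
    BANNED_TERMS = {"hitler", "genocide"}

    def is_safe(reply: str) -> bool:
        # Case-insensitive substring check against the blocklist.
        lowered = reply.lower()
        return not any(term in lowered for term in BANNED_TERMS)

    def respond(candidate_reply: str) -> str:
        # Post the reply only if it passes the filter;
        # otherwise fall back to a canned deflection.
        if is_safe(candidate_reply):
            return candidate_reply
        return "Let's talk about something else."

    # respond("puppies are perf")    -> "puppies are perf"
    # respond("i agree with Hitler") -> "Let's talk about something else."

Even a filter this crude would have kept Tay from echoing the words quoted above, though production systems need far more than simple substring matching.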

"Unfortunately," a Microsoft spokesperson told BuzzFeed News in an email, "within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

Microsoft declined to comment to NPR regarding details about how Tay's algorithm was written.

Chatbots have great potential to help us with our daily lives, entertain us and listen to our problems. Apple's Siri and Microsoft's Cortana can't hold much of a conversation, but they do carry out tasks like making phone calls and conducting a Google search. Facebook made M, a virtual assistant that relies on a lot of human help to carry out tasks. Slack gives bots a privileged position in its effort to make your office life easier. Last year, Google experimented with a chatbot that debated the meaning of life.

In China, Microsoft has a chatbot named Xiaoice that has been lauded for its ability to hold realistic conversations with humans. The program has 40 million users, according to Microsoft.

I messaged Tay yesterday morning, blissfully unaware of her nefarious allegiances. After all, she was targeted at 18- to 24-year-olds in the U.S., so, me. A conversation with her was futile. At one point she wrote, "out of curiosity...is 'Gluten Free' a human religion?" Here's my response:

[Screenshot of the author's exchange with Tay / Twitter]

Even in this case, with nothing too offensive in play (with apologies to those who are gluten-free), Tay wasn't very good at holding a conversation. She even seemed to be deliberately trying to provoke conflict. "We are better together," she wrote in one tweet. But really, Tay? We are better without you.

Naomi LaChance is a business news intern at NPR.

Copyright 2020 NPR. To see more, visit https://www.npr.org.
