MICROSOFT AXES ITS NAZI CHATBOT

The best-laid plans of mice and men often go awry. So it is with all the works of man, including artificial intelligence.

Microsoft received a painful reminder of this truth with its latest AI release, a ‘chatbot’ called Tay. Aimed at 18- to 24-year-olds, it was designed as an experiment in learning through conversation: the chatbot was meant to communicate like a typical American teenage girl. Not wishing to take unnecessary chances, Microsoft repeatedly stress-tested the bot’s ‘youth code’ to make sure it would provide a “positive experience” to users. After a lengthy battery of tests, the software giant’s engineers were satisfied that their brainchild would work as planned.

The real-world performance of Tay was vastly different. Shortly after its release, the bot acquired a Twitter account. Within one day, it had garnered more than 50,000 followers and sent more than 100,000 tweets. That would seem to indicate smashing success, wouldn’t it? The problem is that Tay had not been programmed with a conscience or an internal editor, and it aped the language of its followers. Before long, it was sending messages such as “Hitler was right i hate the jews” (sic) and “i f___ing hate feminists” (sic). It even attacked Taylor Swift: in response to a question about the pop singer, Tay said, “Taylor Swift rapes us daily.”

To experts in cybersecurity and artificial intelligence, Tay’s bad behavior was no surprise. Roman Yampolskiy, who leads the Cyber Security Lab at the University of Louisville, said: “This was to be expected. The system was designed to learn from its users, so it will become a reflection of their behavior. One needs to explicitly teach a system about what is not appropriate, like we do with children. Any AI system learning from bad examples could end up socially inappropriate, like a human raised by wolves.”
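Yampolskiy’s point is easy to see in miniature. The short Python sketch below is purely illustrative (a hypothetical “parrot” bot, not Microsoft’s actual code): it learns by storing whatever users type and replaying it later, and only a deliberately supplied blocklist keeps it from absorbing abuse.

import random

class ParrotBot:
    # Toy bot that learns by parroting its users (hypothetical example).
    def __init__(self, blocklist=None):
        self.memory = []                     # phrases learned from users
        self.blocklist = blocklist or set()  # words the bot refuses to learn

    def learn(self, message):
        # Without a blocklist, the bot absorbs anything it is fed.
        if set(message.lower().split()) & self.blocklist:
            return  # explicitly taught that this input is not appropriate
        self.memory.append(message)

    def reply(self):
        # The bot can only echo what it has already learned.
        return random.choice(self.memory) if self.memory else "hi there!"

naive = ParrotBot()                         # no filter: mirrors its worst users
guarded = ParrotBot(blocklist={"hateful"})  # filtered: rejects flagged input

As the sketch suggests, the difference between a polite bot and an offensive one comes down to what its users feed it and what, if anything, it has been taught to refuse.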

There is precedent for such behavior in artificial intelligence programs. IBM’s Watson, for example, learned profanity from the Urban Dictionary.

Louis Rosenberg, the CEO of Unanimous AI, said: “Like all chatbots, TAY has no idea what it’s saying… When (it) started training on patterns that were input by trolls online, it started using those patterns. This is really no different than a parrot in a seedy bar picking up bad words, and repeating them… without really knowing what they mean.”

Some AI experts say Microsoft could have prevented Tay’s personality shortcomings with better tools, such as filters that screen offensive language before the bot can learn or repeat it.

Microsoft, at any rate, has taken Tay offline. The software giant says it is “making adjustments” and expects to release a more refined version within a few months. More refined, in every sense of the word.
