Microsoft has apologized for its AI chatbot turned racist tweet machine

A few days ago, Microsoft’s AI chatbot Tay had to be promptly shut down after Twitter users taught it to be racist.

In a post on its official blog, Microsoft apologized for the chatbot’s behavior, saying it is deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who the company is, what it stands for, or how it designed Tay.

Tay is now offline, and Microsoft says it will only bring the bot back once it is confident it can better anticipate malicious intent that conflicts with its principles and values.

The company had implemented a variety of filters and stress-tested Tay with a small subset of users, but opening it up to everyone on Twitter led to a “coordinated attack” that exploited a “specific vulnerability” in Tay’s AI, though Microsoft did not elaborate on what that vulnerability was.

Microsoft said it will do everything possible to limit technical exploits, but acknowledged that it cannot fully predict the variety of human interactions an AI can have.

Ephraim Batambuze III
