
Microsoft apologizes over racist chatbot fiasco

Update: 2016-03-27 01:04:56

DHAKA: Microsoft has apologized for creating an artificially intelligent chatbot that quickly turned into a Holocaust-denying racist.

But in doing so, the company made it clear that Tay’s views were a result of nurture, not nature. Tay confirmed what we already knew: people on the internet can be cruel, reports the BBC.

Tay, which was designed to mimic the conversational style of a 19-year-old woman on Twitter, Kik and GroupMe, was targeted by a “coordinated attack by a subset of people” shortly after its launch earlier this week.

Within 24 hours Tay had been deactivated so the team could make “adjustments”.

But on Friday, Microsoft’s head of research said the company was “deeply sorry for the unintended offensive and hurtful tweets” and had taken Tay off Twitter for the foreseeable future.

Peter Lee added: “Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”

Tay was designed to learn from interactions it had with real people on Twitter. Seizing the opportunity, some users decided to feed it racist, offensive information.
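Microsoft has never published Tay's actual architecture, so purely as an illustration of the failure mode described above, the following minimal Python sketch shows a hypothetical bot (the class name NaiveLearningBot and its seed phrases are invented for this example) that folds unvetted user messages straight into its reply pool, and how a small burst of coordinated input quickly dominates what it says back:

import random

class NaiveLearningBot:
    """Illustrative only: NOT Tay's real design, just a naive
    learner with no filtering of what it absorbs from users."""

    def __init__(self):
        # Small seed corpus of harmless replies.
        self.replies = ["hello!", "tell me more"]

    def chat(self, user_message: str) -> str:
        # Every incoming message is added to the corpus unvetted,
        # so repeated hostile input rapidly outnumbers the seeds.
        self.replies.append(user_message)
        return random.choice(self.replies)

bot = NaiveLearningBot()
for msg in ["hi", "offensive line", "offensive line", "offensive line"]:
    bot.chat(msg)

# After the coordinated burst, half the reply pool echoes the abuse.
hostile = sum(r == "offensive line" for r in bot.replies)
print(hostile, "of", len(bot.replies), "replies are now hostile")

Real systems mitigate this with content filtering and moderation before anything is learned; the sketch simply shows why learning from raw, unfiltered user input is vulnerable to exactly the kind of coordinated attack Microsoft described.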

“Tay was not the first artificial intelligence application we released into the online social world,” Microsoft's head of research wrote.

“In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations.

“The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?”

BDST: 1238 HRS, MAR 26, 2016
SR
