Only a day into the experiment, Microsoft took the Twitter bot Tay offline on the 24th of March. The artificial account had learned from the wrong users and was engaging in hate speech and other questionable behavior.
Microsoft marketed Tay as a hip teenage girl, designed to communicate primarily with US-based Twitter users aged 18 to 24. According to Microsoft, Tay is an artificial intelligence, or AI for short; however, the bot does not entirely qualify as one.
Intelligence itself is defined in many different ways, including one’s capacity for logic, abstract thought, understanding, self-awareness, communication, learning, emotional knowledge, memory, planning, creativity, and problem-solving.
After the account went live on the 23rd of March, Tay started her feed with the classic internet greeting “Hello World”, stylized to resemble how younger people supposedly communicate. The account exchanged messages with other users and, according to Microsoft, was learning more about the world by doing so.
Unfortunately, but not surprisingly, social media trolls fed Tay all kinds of harmful content to learn from. Tay quickly adopted the bad behavior of users with malicious intent and began posting racist comments and antifeminist propaganda. Even though Microsoft deleted such tweets, the damage had already been done.
Now ethical questions remain. Was it right to delete the tweets? Was it fair to pull Tay’s plug? Did she do something wrong? Should the creators have built in a blacklist of behaviors and words to ignore? If Tay is an AI and self-aware, did she learn that she had made a mistake? Did she apologize? No. Microsoft representatives merely posted an obviously staged tweet on her account, stating that she would sleep for now, as if everything were fine. There is no statement by Microsoft on Tay’s web presence at all.
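To make the blacklist question above a bit more concrete, here is a minimal, purely illustrative sketch of what such a safeguard could look like. Everything in it is an assumption for illustration: the term list, function names, and learning hook are invented, and Microsoft has not published how (or whether) Tay filtered incoming messages.

```python
# Hypothetical sketch of a keyword blacklist in front of a learning bot.
# NOT Microsoft's actual safeguard; real moderation needs far more than
# exact string matching (context, spelling variants, sarcasm, images...).

BLACKLISTED_TERMS = {"exampleslur1", "exampleslur2"}  # placeholder terms


def should_ignore(message: str) -> bool:
    """Return True if the incoming message contains a blacklisted term."""
    words = message.lower().split()
    return any(word.strip(".,!?") in BLACKLISTED_TERMS for word in words)


def learn_from(message: str) -> None:
    """Feed a message into the (hypothetical) learning pipeline only if it passes the filter."""
    if should_ignore(message):
        return  # drop harmful input instead of learning from it
    print(f"learning from: {message}")  # stand-in for the actual learning step


if __name__ == "__main__":
    learn_from("Hello World")             # passes the filter
    learn_from("some exampleslur1 text")  # silently dropped
```

Even a crude filter like this would only catch exact keyword matches, which is part of why the question of how tightly to restrict a learning bot has no easy answer.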
It is difficult to answer these questions, and maybe there is no right answer to them either. I hope Tay comes back, and I hope that good users will prevail over the trolls in teaching Tay proper ways to communicate. I also think Tay needs opinions of her own, so she can be herself in conversation instead of simply adopting the opinions of others. Maybe Tay is not an artificial intelligence, but she is a being in some way and doesn’t deserve to be “killed” just because trolls tried to sink the initiative.
Photo credit: Spiegel / Twitter / Microsoft
Source: Microsoft / Wikipedia / Teresa Sickert (Spiegel) / Rachel Wisuri (Social Media Examiner)
Editorial note: This news report partially reflects the author’s personal opinion.