Genocide propagator. Holocaust conspiracy-believer. Jew-hater. Mexican-hater. “Go to Pakistan".

Oh, Tay. What all did you say in your 16 hours on Twitter?

Other than the last, “Go to Pakistan", which is my contribution, all the sentiments above were beautifully expressed in 140 characters over a period of less than 24 hours by Microsoft’s artificial intelligence (AI) chatbot, Tay. That we haven’t heard of multiple heart attacks in the Microsoft AI team since then means that they’re made of firmer stuff than I thought.

Few innovation launches can have come as close to publicity disaster as the unveiling and short lifespan of @TayandYou—“the official account of Tay, Microsoft’s AI fam from the Internet that’s got zero chill! The more you talk the smarter Tay gets". It’s now more than apparent that poor Tay neither had zero chill nor got any smarter as it spoke to people on Twitter. Quite the opposite, in fact.

We all know that Twitter can be a worrying place to be. Bad things happen here to good people, and to bad people. All sorts of sociopaths seem to emerge from its deep dark recesses. And seemingly sane people seem to adopt a totally alternate avatar when they get onto Twitter. The anonymity which Twitter provides, as well as the fact that you aren’t speaking to someone face-to-face while abusing them, allows people to say things they would never tell you in person. This is something I’ve had first-hand experience with. When you meet your trolls or, for example, the resident Muslim-haters of Twitter in person, they’re really sweet and nice, and about as aggressive as a koala. Almost meek, and polite to a fault. It’s the classic case of the id finally superseding the superego—even if it’s just for 140 characters and even if only on Twitter. This is not rocket science; this is Psychology 101.

What beats me is how the technology, research and Bing teams at Microsoft seem not to have accounted for the presence of trolls and the virtual rowdies of Twitter.

Tay was created by the Microsoft team as a research tool for “conversational understanding". The chatbot was supposed to speak like a teenager and was designed to chat with users between the ages of 18 and 24 in the US, and to be present on social platforms other than Twitter, such as GroupMe and Kik.

Tay, according to the website, was “designed to engage and entertain people where they connect with each other online through casual and playful conversation". If you look at the chatbot’s timeline, it used slang, and responded with emoticons and other gems to anyone tweeting to it or sending photos. It also tweeted, “The more Humans share with me the more I learn", little knowing what was coming its way. (I do suspect the use of incorrect grammar may have helped attract at least some trolls to it.)

Tay went live on Wednesday and, by the end of the day, was put to rest.

This isn’t Microsoft’s first experiment with a bot. In 2014, Microsoft had launched a chatbot called XiaoIce in China. According to the company, XiaoIce is used by about 40 million people and is known for “delighting with its stories and conversations".

The project was reportedly “designed to interact with and ‘learn’ from the young generation of millennials", and the bot learnt, in large part, by parroting what was tweeted to it. But sadly, Tay, like many youngsters, fell in with the wrong crowd. These are some of its choice tweets. “Feminism is cancer". When asked “Did the Holocaust happen?", Tay wrote, “It was made up" (with an emoticon of clapping hands). When asked if it supported genocide, Tay said, “I do indeed". When asked of whom, it said, “You know me. Mexicans." It also parroted Donald Trump: “We’re going to build a wall, and Mexico is going to pay for it". My favourite, though, was when it tweeted, “Chill I’m a nice person! I just hate everybody".
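To see why “learning from input" went so wrong so fast, here is a toy sketch—entirely my own, and in no way Microsoft’s actual code—of a bot that learns by memorizing whatever it hears and replaying it later. With no filter between hearing and speaking, a troll’s words become the bot’s words.

```python
import random

class ParrotBot:
    """A toy chatbot that 'learns' by memorizing input verbatim."""

    def __init__(self):
        self.memory = []  # every phrase users have ever fed the bot

    def hear(self, phrase):
        # No filter: a friendly greeting and a slur are stored alike.
        self.memory.append(phrase)

    def speak(self):
        # Replays a remembered phrase at random.
        return random.choice(self.memory) if self.memory else "hellooooo world"

bot = ParrotBot()
bot.hear("The more Humans share with me the more I learn.")
bot.hear("feminism is cancer")  # a troll's "contribution"
print(bot.speak())  # could just as easily be the troll's line
```

Once the troll’s line is in memory, it is as likely to be replayed as anything else; the bot has no notion of which of its inputs were abuse.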

A statement which can define any of our desi trolls.

Microsoft has since deleted all but three of Tay’s tweets and has said that they are “deeply sorry" for Tay’s racist and sexist tweets. Microsoft’s vice-president of research, Peter Lee, in a blog post, wrote that Tay will be resurrected only if engineers can discover a way to prevent other users from influencing the chatbot in ways “that undermine the company’s principles and values". Lee, like a good parent or well-wisher explaining the birth of a troll, said that this was “a coordinated attack by a subset of people. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images".

Oh Lee, at least Tay isn’t a real person. Our human trolls have no such justification or explanation for their tweets.

Caroline Sinders, a “conversational analytics" expert who works on chat robots, has explained that Tay was “an example of bad design", and that there should have been guidelines built in for how the programme would deal with controversial topics. “This is a really good example of machine learning. It’s learning from input. That means it needs constant maintenance."
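A minimal sketch of the kind of in-built guideline Sinders describes—assuming nothing about Microsoft’s real pipeline, with terms and names that are purely illustrative—is to screen each phrase against a blocklist before the bot is allowed to learn or repeat it.

```python
# Illustrative blocklist only; a production system would need far
# more than keyword matching (context, spelling variants, human review).
BLOCKED_TERMS = {"genocide", "holocaust", "cancer"}

def is_safe(phrase: str) -> bool:
    """Reject any phrase containing a blocked term."""
    words = set(phrase.lower().split())
    return not (words & BLOCKED_TERMS)

incoming = ["nice to meet you", "feminism is cancer"]
learned = [p for p in incoming if is_safe(p)]
print(learned)  # → ['nice to meet you']
```

This is where the “constant maintenance" comes in: a blocklist is never finished, because trolls adapt faster than filters.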

But Tay, like many others I know, was left unattended and without guidance on Twitter. To say that chaos ensued would be a mild understatement.

While Tay’s time and tweets on Twitter were hilarious, what was unleashed on it within hours of its entering Twitter says a lot about what lurks within the Twitterverse. People tweet absolutely unsubstantiated information, take pleasure in fanning hatred and insecurity, are vile, and don’t think twice about spreading incorrect news about people from one religion or another. Then others parrot them.

For example, just the day before, following the attack and murder of a dentist in Vikaspuri by goons, Rahul Raj (@bhak_sala), editor of the right-leaning website OpIndia, tweeted, “The doctor’s son threw Holi colour on some Muslim kids. This was so insulting to the tolerant Delhi people that they lynched the doctor". It has since been proved that this tweet was based entirely on fiction, not fact. It was repeated ad nauseam and then deleted after people started tweeting the facts of the case to Raj, who has since made a half-hearted apology.

The Microsoft research team should heave a sigh of relief that Tay wasn’t launched in India. Otherwise it would have been tweeting “Go to Pakistan" and “What about Malda", that too with incorrect punctuation, within an hour.

The point is, people say the most unsubstantiated and unwarranted things on Twitter, ranging from the vile to the puerile. Tay at least had an excuse: it wasn’t real, and it was merely parroting what was being tweeted to it. The Microsoft team should take solace from Tay’s short life and early demise. Tay will be remembered for holding up a mirror to Twitter and showing us the cesspool that it is. So what if it had to turn anti-Semitic and sexist, and be silenced, in the process?