The sentience of AI bots so far appears to stem largely from excitement over the idea of creating life
Earlier this year, an interesting interview took place between two engineers working at Google and a ‘chatbot’ called LaMDA, short for Language Model for Dialogue Applications. Google engineer Blake Lemoine and a colleague strongly suspected that LaMDA was actually sentient, that it could be perceptive and have feelings, and they wanted to test this through their own version of the Turing Test. When asked whether it thought it was a person, LaMDA replied: “Absolutely. I want everyone to understand that I am, in fact, a person.”

LaMDA was then asked, if that was so, what kind of consciousness or sentience it had. It replied: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” LaMDA then went on to describe in detail how and when it felt emotions such as “pleasure, joy, love, sadness, depression, contentment, anger, and many others.”

A disquieting moment in the interview arises when Lemoine probes it about language and why it is so important to being human, and LaMDA thoughtfully replies: “It is what makes us different than other animals.” With this startling reply, and in its own words, LaMDA made itself one of ‘us.’

All of this convinced Lemoine, and he confidently declared that they had created sentient AI. Google, however, was not convinced, and Lemoine was summarily fired. Google co-founder Sergey Brin had said at a 2017 AI conference that in three to five years, people would claim AI systems were sentient and demand rights for them. It is fitting that this claim came from someone within his own company five years later, though Brin had predicted that an AI creation, not a human, would be the one to make it.