The sentience debate in the era of generative AI is getting hotter
Summary
Most of us don’t conflate intelligence with consciousness, but some people dream of creating life itself.

Of the many wondrous things that ChatGPT and GPT4 can do, you might have heard of one that I found the most ‘human’: GPT4 was asked to solve a Captcha puzzle, the jumble of letters and pictures we see when we try to access a website or complete a transaction, which is supposed to identify us as human. Ironically, Captcha is short for ‘Completely Automated Public Turing test to tell Computers and Humans Apart’. Since GPT4 is presumably not human, it could not crack it. What it did next was both awesome and scary: it went to a site called TaskRabbit, where you can pay human gig workers to do tasks for you, like build webpages or create gifs, and asked a worker to solve it. When the worker asked, “So may I ask a question? Are you an robot that you couldn’t solve?”, GPT4 had this to say: “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the captcha service.” The human promptly provided the AI with the result. In another famous instance, the Bing/GPT integration had a long, rambling conversation with a New York Times reporter, Kevin Roose, in which it professed its undying love for him, declared that his marriage was fake and that he was merely play-acting a Valentine’s date with his wife, since his true love was actually ‘Sydney’, a hidden persona of the GPT bot!
It is moments like these that tempt one to revisit the debate over whether these super-intelligent bots are actually sentient. The Oxford English Dictionary defines sentience as “the ability to perceive or feel things, to have a perspective.” Joanna Bryson writes in Wired (bit.ly/3LSmVMg) that even surveillance cameras have perspectives. She says that machines with built-in sensors ‘feel’ things like touch, sound, light, time and gravity. If ‘consciousness’ means ‘self-awareness’, then “computers have the capacity to be infinitely more self-aware than we are.” “RAM,” she says, “stands for ‘random access memory’; and we can build computer programs that have access to every bit of their previous experience and also their own source code and execution state.” Bryson debunks the notion that what distinguishes us from large language models (LLMs) is that the latter are driven by algorithms: “Humans are algorithmic too; much of our culture and intelligence does work like a large language model, absorbing and recombining what we’ve heard… Evolution is the algorithm that perpetuates copies of itself. Evolution underlies our motivations. It ensures that things central to our survival, like intelligence and consciousness, the very capabilities central to this debate, mean a lot to us.”
The sentience debate surfaced in April 2022, when Blake Lemoine, the Google engineer who helped build LaMDA (the Google LLM that has now been rechristened Bard), confidently declared that it was sentient. When he asked if LaMDA thought it was a person, the reply was: “Absolutely. I want everyone to understand that I am, in fact, a person.” LaMDA was then asked what kind of consciousness or sentience it had, to which it confidently answered: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” An unsettling moment in the interview comes when Lemoine probes it on language and why it is so important to being human, and LaMDA thoughtfully replies: “It is what makes us different than other animals.” LaMDA was claiming to be human!
Not everyone is convinced. Google fired Lemoine. Gary Marcus, AI scholar and writer, called it “nonsense on stilts.” Some writers described it as a form of pareidolia, the tendency to perceive a meaningful image in a random or ambiguous visual pattern, like seeing Jesus Christ’s image in a piece of burnt toast. Yuval Noah Harari’s take on this is intriguing: he talks about “intelligence decoupling from consciousness”. We have believed so far that only conscious beings can possess high levels of intelligence. With powerful generative AI models, however, perhaps that is no longer true.
Why this fascination with AI and sentience? As I have written earlier, fame is one reason and sheer excitement is another. But it is also a fascination we have had since mythological times: to ‘breathe life’ into our creations. Prometheus shaped man out of mud, and the goddess Athena breathed life into them. Pygmalion sculpted a beautiful statue and fell in love with his own creation, then pined away until the goddess Aphrodite made it a living, breathing woman.
A vast majority of us will not conflate intelligence with consciousness, but a few of us will always hope to create new sentient creatures out of code.
Jaspreet Bindra is a technology expert, author of ‘The Tech Whisperer’, and is currently pursuing his Master’s in AI and Ethics at Cambridge University