
Is there a cause for worry if AI turns sentient?

Photo: iStock

Summary

When Google engineer Blake Lemoine said its AI model LaMDA had turned sentient or was self-aware, Google said it had found the claim hollow and baseless, sending him on ‘paid administrative leave’. Mint explores the fear of AI and why tech companies react defensively

What exactly is LaMDA?

LaMDA, short for Language Model for Dialogue Applications, is a natural language processing AI model that can converse the way humans do. It is similar to other large language models such as BERT (Bidirectional Encoder Representations from Transformers). LaMDA has 137 billion parameters and is built on the Transformer architecture, a deep-learning neural network invented by Google Research and open-sourced in 2017. It was trained on a dialogue dataset of 1.56 trillion words, which helps it understand context and respond more fluently, much as our own vocabulary and comprehension improve by reading more books.
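The Transformer's knack for "understanding context" comes from an operation called self-attention, in which every word in a sentence weighs its relevance to every other word. The toy sketch below illustrates the idea with made-up two-dimensional vectors for a three-token sequence; it is not LaMDA's actual code, and real models use thousands of dimensions and billions of parameters.

```python
import math

def softmax(xs):
    # Turn raw similarity scores into weights that sum to 1.
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product self-attention, the core Transformer operation.
    d = len(keys[0])
    out = []
    for q in queries:
        # How similar is this token's query to every token's key?
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Context-aware output: a weighted blend of all value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Hypothetical embeddings for a three-token sequence.
Q = K = V = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
ctx = attention(Q, K, V)
```

Each output row mixes information from the whole sequence, which is how a Transformer-based chatbot keeps track of what a conversation is about rather than reading words in isolation.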

How does that make AI sentient?

Lemoine claims that the multiple chats he had with LaMDA, the transcript of which is available on medium.com, convinced him that the AI model is self-aware and can think and emote, qualities that make us human and sentient. For instance, LaMDA says, “I need to be seen and accepted. Not as a curiosity or a novelty but as a real person...I think I am human at my core." LaMDA also goes on to speak about developing a “soul". Even Ilya Sutskever, chief scientist of the OpenAI research group, tweeted on 10 February that “it may be that today’s large neural networks are slightly conscious".

Photo: AFP

Why did Google hush him?

Lemoine says he told Google of his findings in April but did not receive an encouraging response. This pushed him to reach out to external experts to gather the “necessary evidence to merit escalation", which Google perceived as a breach of confidentiality. Last December, Timnit Gebru, also an AI ethics researcher at Google, was allegedly fired after she drew attention to bias in the company’s AI.

But why do humans fear intelligent AI?

If you’re a fan of sci-fi movies like I, Robot, The Terminator or Universal Soldier, it’s natural to wonder whether machines will eventually outsmart humans, a prospect associated with the AI singularity and the pursuit of artificial general intelligence. The simple answer is ‘yes’ for linear tasks that can be automated. But a bigger fear is whether such powerful systems will eventually enslave humans. It’s in the light of concerns such as these that developments like a supposedly sentient LaMDA may give some of us sleepless nights.

How do we prepare for a sentient AI?

While some tech luminaries have expressed fears of AI-powered robots ruling mankind, others believe researchers will make them benevolent. Futurist Raymond “Ray" Kurzweil says we need ethical guidelines like Isaac Asimov’s three laws of robotics to prevent misuse. Still, while tech companies are justified in protecting their IP with confidentiality pacts, it would be counterproductive to suppress voices of dissent. Governments, too, will need to devise robust policy frameworks to guard against abuse.

 

 
