Can robots commit 'suicide', and do they have rights?

This undated handout photo provided by South Korea's Gumi City Council shows an administrative officer robot that 'threw itself' down some stairs. (Photo by Handout / Gumi City Council / AFP)


  • Robots would need 'self-awareness' to feel depressed enough to commit 'suicide'. No current technology, including AI tools, enables robots to think and feel, let alone decide to end their lives. Humans, however, tend to empathise more with machines that look like them

No one complains when our washing machines, smart dryers, intelligent air conditioners, Roomba-like robotic vacuum cleaners, or even our AI-enabled smartphones work day and night. 

No one talks about these machines getting depressed or overworked, and seldom do we think of them as having the rights that our human household helps enjoy. If anything, we get irritated when they break down, because repairs cost us a lot of money.

But the world sat up and took notice, and many even began mourning, when a "robot supervisor" employed by the Gumi City Council in South Korea recently collapsed and "died" after an apparent fall down a flight of stairs. Some are even referring to it as South Korea's first robot "suicide", according to the Daily Mail.

The response isn't surprising, since South Korea loves its robots and boasts the highest robot density globally, with one industrial robot for every 10 employees, according to the International Federation of Robotics.

While the exact cause of the incident is still under investigation, officials say it could have been a navigational error, a sensor failure, or even a programming bug. The robot's manufacturer is analysing the collected parts, and a clearer answer will have to wait until that analysis is done.

Such confusion isn't new. In April, The Associated Press (AP) concluded that a different robot, which collapsed while stacking boxes in a viral video, wasn't using AI-enabled judgment to deactivate itself; it simply fell a couple of times during more than 20 hours of demonstrations over four days.

According to the AP report, even Agility Robotics, the company that manufactured it, confirmed as much. The video, incidentally, began as a joke, but as with Chinese whispers, it took on a life of its own and grew into a tale of a sentient robot killing itself.

Great expectations

Regardless of how one views the South Korea robot incident, science fiction cinema has long explored the concept of falling in love with robots, and giving them rights.

Almost 25 years ago, the late Robin Williams starred in Bicentennial Man as an NDR-114 robot, which got periodic bio upgrades, was granted the right to earn wages, and was eventually declared the oldest living human by the courts before he died. That remains science fiction.

In 2010, the Rajnikanth-starrer Enthiran (Robot) gave a glimpse into the life of 'Chitti', an artificial intelligence (AI)-powered humanoid robot that could fight, jump from one moving train to another, clean, cook, and even fall in love.

In other words, Chitti was a sentient humanoid who could think, respond intelligently and, more importantly, was self-aware, even falling in love with a human character played by actor Aishwarya Rai. Given the adoration for Rajnikanth, it's not surprising that many of his fans may believe this to be possible.

Films like Blade Runner, A.I. Artificial Intelligence, and Ex Machina delve into the humanity of artificial beings, while others such as Her, I, Robot and Chappie examine AI consciousness and development. Even children's movies like Wall-E touch on these themes. 

Despite AI researchers reiterating that AI is nowhere close to becoming sentient, many still believe that scientists have indeed developed a sentient AI but are keeping it under wraps to avoid backlash from governments, philosophers, and activists.

In real life, Hong Kong-based Hanson Robotics' Sophia became the first AI-powered robot ever to get citizenship of a country, Saudi Arabia, in October 2017, even though Sophia pales in comparison to the humanoid in Bicentennial Man or those in Surrogates.

Sophia now has more companions, including 'Little Sophia', 'Han', 'Bina', and even 'Albert HUBO', which currently spends his (more like, its) time with scientists at UC San Diego's California Institute for Telecommunications and Information Technology.

Albert HUBO, which resembles Albert Einstein, helps scientists "understand how robots and humans alike perceive emotions and interpret facial cues. Their thinking is, if Albert HUBO can develop emotional intelligence, it will help researchers pave the way for robots to participate and help improve education, healthcare, fine arts, and customer service", according to the Hanson Robotics website. Closer home, 38-year-old Ranchi-based Ranjit Srivastava developed an Indian version of 'Sophia', christened Rashmi.

Fortunately for us, AI machines are nothing like the super-intelligent machines portrayed in movies. Sophia, for instance, is a "sophisticated chatbot" (as described by the company's website) that chooses from a large palette of template responses based on context and a limited level of understanding. However, Sophia also uses OpenCog, a sophisticated cognitive architecture created with artificial general intelligence (AGI) in mind. AGI refers to an AI machine that can match, or even surpass, human cognitive abilities.

But why do we want robots to have the same rights as humans? Aren't they machines, after all? Hanson Robotics acknowledges on its website: "When people encounter our Hanson Robots, like Sophia, they tend to show deep engagement and report a warm, unforgettable emotional connection." This is because of a concept called anthropomorphism, where we ascribe human emotions, consciousness, and moral value to robots, humanoids, and androids because they resemble us.

That said, robots would have to be considered "legal persons" to be given rights, much like the debate that played out in Bicentennial Man before the courts declared the robot a legal person with rights.

Some experts believe "a robot should have consciousness, intentionality, rationality, personhood, autonomy, and sentience to be eligible for rights", according to an article in Frontiers Media. The article concludes that "in order to reach a broad consensus about assigning rights to robots, we will first need to reach an agreement in the public domain about whether robots will ever develop cognitive and affective capacities."

Tae Wan Kim, associate professor of Business Ethics at Carnegie Mellon University's (CMU) Tepper School of Business, believes that "...granting rights is not the only way to address the moral status of robots: Envisioning robots as rites bearers, not rights bearers, could work better." His paper, published by the Association for Computing Machinery, suggests that the Confucian way of assigning rites, or what he calls "role obligations", to robots is better than giving robots rights.

This debate is unlikely to die down in a hurry. The reason: as AI-powered robots become more intelligent and increasingly take on human-like tasks, there will be more clamour for them to be treated the same as humans. It's our "emotional connect" that will dictate much of this reasoning.

