I think, I drive

Driverless cars will need to learn about human survival instincts before they are accepted on the road

Photo: Getty Images/AFP

When I first read Sally in the early 1980s, the story was already about 30 years old. It was in 1953 that the great science fiction writer, Isaac Asimov, wrote about a future when no humans would be allowed to drive cars. The only cars on the road would be those with “positronic brains”, computers with consciousness.

Asimov’s prescience was remarkable. We are now entering an age when autonomous vehicles (AVs), as driverless cars are formally called, are ready for the road. The technical barriers have been surmounted, and it is clear that AVs are safer than human-driven cars will ever be.

Up to 90% of all accidents can be prevented if computers take charge of steering cars, said a 2015 report from McKinsey, a consultancy.

More than 87 people die every day in US road accidents (the corresponding figure for India is 350).

“By mid-century, the penetration of AVs and other ADAS (advanced driver-assistance systems) could ultimately cause vehicle crashes in the US to fall from second to ninth place in terms of their lethality ranking among accident types,” said the report. Driver error causes up to 94% of car accidents in the US, according to a 2015 US department of transportation report.

In May 2015, sciencealert.com, a science news site, reported that four of 48 driverless cars in California had been in accidents over the previous six months. While one in 12 does not sound like a sterling safety record, it emerged that humans caused the mishaps; three of the four cars were Google AVs.

“Over the six years since we started the project, we’ve been involved in 11 minor accidents (light damage, no injuries) during those 1.7 million miles of autonomous and manual driving with our safety drivers behind the wheel, and not once was the self-driving car the cause of the accident,” Chris Urmson, director of Google’s self-driving car project, wrote in a May 2015 blog post.

The fourth AV, an Audi operated by Delphi, was being driven by a human when it was reportedly struck by another car that crossed a median.

The accident rate of AVs might be negligible in comparison with current rates, but it is unquestionable that there will be some accidents. This is where it starts to get complicated for the programmers of driverless cars. When faced with an inevitable crash, an autonomous vehicle will need to make a logical decision that saves the many, even at the cost of the few.

Just how discomfited humans could be at that prospect was revealed this week in the journal Science, when a team of researchers from the University of Toulouse in France and the Massachusetts Institute of Technology in the US reported that while people mostly approve of AVs that could make decisions to sacrifice their passengers to save others, they did not want to be in the driverless car that made that decision. In other words, they did not want to be sacrificed.

This is a logical human response to a logical robotic action, and it indicates the complex programming required of AV brains—something that Asimov figured out 63 years ago—before driverless cars can become global commodities, the authors of the Toulouse-MIT study noted. Programmers will have to equip driverless cars to address an ancient human ethical question: Should the good of the many take precedence over the good of the few?

Six online surveys conducted by the researchers revealed the conundrum that passengers of AVs would force on their cars: vehicles should have the smarts to save lives—unless, of course, these cars decided, in an admittedly rare crash scenario, to save the lives of others instead. That was not acceptable. Building vehicles that can deal with such decision-making “is one of the thorniest challenges in artificial intelligence today”, the authors note in Science.
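To make the conundrum concrete, here is a minimal sketch, in Python, of the kind of utilitarian decision rule the surveys probed: pick the manoeuvre expected to cost the fewest lives. It is purely illustrative; the function, the manoeuvre names and the casualty figures are hypothetical, drawn neither from any real AV system nor from the study itself.

    # A toy utilitarian crash-decision rule. Everything here is hypothetical:
    # a sketch of the dilemma described in the Science study, not real AV code.

    def choose_manoeuvre(options):
        """Return the manoeuvre with the lowest expected fatalities.

        `options` maps a manoeuvre name to the expected number of
        deaths if the vehicle takes it.
        """
        return min(options, key=options.get)

    # The scenario respondents balked at: staying the course kills five
    # pedestrians, while swerving into a barrier kills the lone passenger.
    crash = {"stay_course": 5, "swerve_into_barrier": 1}
    print(choose_manoeuvre(crash))  # prints "swerve_into_barrier"

The arithmetic is trivial; the thorny part, as the surveys show, is that the passenger who buys the car is the one life on the losing side of that calculation.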

Asimov’s stories were often underpinned by the “first law” of robotics—artificial-intelligence programming that gives robots an unwavering focus on preserving human life. The “second law” said a robot must always obey a human being, unless obeying would conflict with the first law.

The first law led to tricky dilemmas, illustrated in Reason, a 1941 story about a robot on a space station with finely developed reasoning ability. The robot, QT1, or “Cutie”, finds no way to obey humans and save them from a solar storm. So, he convinces himself and his subordinate robots that the stars and planets do not really exist. Cutie creates a fantasy world and installs himself as prophet, taking charge of the space station and shutting out humans, allowing him to water down the second law. It emerges that Cutie’s positronic brain knows that humans cannot possibly operate the station with the required precision, so to save humans and stay true to his programming, he creates an alternate reality without them. “I, myself, exist because I think,” Cutie declares. AVs will have a lot to think about.

Samar Halarnkar is editor of Indiaspend.org, a data-driven, public-interest journalism non-profit. He also writes the column Our Daily Bread in Mint Lounge.

Comments are welcome at frontiermail@livemint.com. To read Samar Halarnkar’s previous columns, go to www.livemint.com/frontiermail

Published: 24 Jun 2016, 12:28 AM IST