On the night of 31 May 2009, co-pilots David Robert and Pierre-Cédric Bonin were at the controls of Air France 447, a fourth-generation, glass-cockpit Airbus A330, flying from Rio de Janeiro to Paris. This was a highly sophisticated aircraft which mostly flew itself, with pilots required to take control only in the rare event of a problem. And when a problem did occur—a minor one, a malfunction of the airspeed sensors—the pilots simply could not regain control. The captain of the flight, Marc Dubois, joined them but could not prevent the crash, which killed all 228 people on board.

The more recent case of an Uber self-driving car killing a pedestrian in Arizona, US, is not entirely similar, but there is a crucial link: the problem of switching from an automated system to a manual one in times of crisis. Uber's backup safety driver was not looking at the road immediately before the accident and therefore could not intervene in time to save the pedestrian's life. This is not surprising. Multiple studies have shown that it is not easy to get an idle brain to intervene suddenly in moments of crisis.

Of course, the case of Air France 447 was different in the sense that the problem was not restricted to the pilots' ability to refocus. It was also about a gradual "de-skilling" of pilots, given the decreasing practice of manually flying planes as automation took root. Automation generally makes systems safer—civil aviation, for instance, sees far fewer accidents than before—but it throws up unexpected problems the moment it fails and humans have to be pressed into action.

Uber should have known about the Air France 447 episode and prepared better. In a September 2017 article for the Harvard Business Review, Nick Oliver, Thomas Calvard and Kristina Potočnik used the Air France 447 crash as a case study to point out the dangers inherent in increasing automation. Quite prophetically, the trio warned: "This issue will only become more pertinent as automation further pervades our lives, for example as autonomous vehicles are introduced to our roads."

Some analysts have pointed out that Uber has lagged behind its competitors in the performance of its self-driving car algorithms. John Krafcik, the chief executive officer of autonomous car development company Waymo, has claimed that his firm's car would have avoided the accident where Uber's could not. Apparently, Uber's backup drivers have to intervene more often than its rivals' do. But even if the algorithms and sensors improve, and the need for human intervention decreases, the problem of switching from an automated to a manual system does not diminish; in fact, it grows. The more idle time the safety driver has during the ride, the more difficult it will be for them to refocus in critical situations.

There are many other problems that self-driving cars face. One of the most discussed is the equivalent of what philosophers call the "trolley problem". In the original problem, a person has to choose between passively watching an uncontrolled trolley kill five people and actively diverting the trolley so that it kills just one person. Should a self-driving car be trained to kill the pedestrian, or should it put the lives of its passengers in danger? Is the choice different for an autonomous car with a safety driver and for a fully autonomous car with none? The answers to these questions are linked to both market potential and legal liability. For instance, will people buy cars programmed to minimize total casualties, even if that means the passengers come to more harm than pedestrians? And should the liability be higher for a company whose cars are programmed to hurt pedestrians in order to save passengers?

The increasing adoption of self-driving cars will have an impact on labour markets (drivers will lose jobs) and also raise questions of equity (the driving jobs lost will be replaced not by comparable work but by jobs for well-educated coders at autonomous car development companies).

It is quite likely that self-driving cars will, in due course, reduce the number of accidents and fatalities. But as the examples of Uber and Air France 447 show, accidents won't simply vanish. The furore is likely to be greater when accidents occur due to failures of technology rather than human error, which has been responsible for the majority of mishaps to date.

Even if society gradually adapts to self-driving cars, and to occasional technology-induced road accidents (far fewer than before), there are many other kinds of changes to watch for. The advent of cars changed the urban landscape and led to the phenomenon of suburban life. Self-driving cars will similarly generate a new pattern of life and its attendant challenges. For example, the removal of the threat of "drink and drive" penalties might lead to more drinking. These cars will also be intelligent machines carrying huge amounts of data, and with that data come privacy risks and security challenges.

There is a lot for societies and governments to prepare for. But before that, Uber and others in the business have to figure out algorithms that avoid pedestrians in most cases, and contingency plans for when those algorithms fail.

How should governments prepare themselves for the regulatory and legal challenges that self-driving cars will throw up? Tell us at views@livemint.com