On the evening of 18 March 2018, as 49-year-old Elaine Herzberg was crossing a road in Tempe, Arizona, she was hit by a car and died of her injuries. She became the first pedestrian to be killed by an autonomous car.

Humans are a curious species, all the more so after a negative event. So this accident, too, raised several questions. The role of the pedestrian, who suddenly stepped in front of the car, is not in doubt. But for the first time, the answers to a few questions remained ambiguous. Who else is responsible for this accident: the car, the car company, or the car's autonomous technology?

The observation that the car did not slow down or swerve as it approached Herzberg raised more questions. Did the car do all it could to prevent the accident? Even if the accident was inevitable, more importantly, was the car seen as doing all it could to avoid it?

Under normal driving circumstances, it is safe to assume that if a human driver sees a pedestrian crossing the road at close quarters, he will slam the brakes and swerve the car. Despite all these actions, the car might still hit the pedestrian, and she might die. As soon as the accident happens, the driver of the car, guilty or not, will be in complete shock, will step out of the car, and may try to help take the pedestrian to the hospital. The driver will demonstrate clear signs of empathy towards the victim. The police will file a first information report as the first stage of the legal process. With it, the belief is reinforced that the driver will face punishment if the law finds him guilty. With this, there is an emotional closure to the incident.

In the case of an accident involving an autonomous car, by contrast, the algorithm would have prevented every preventable accident, leaving the car blameless. And since technology has replaced the driver, there is no human agent who can be held responsible for the accident either. The car that was part of the accident parks itself on the side. In its own way, it communicates to the victim and all those who rushed to help, “I have done nothing wrong. I am not responsible for this accident or for what happens to the victim.” There is no display of regret or empathy from the car. The absence of any such emotional response leaves no room for closure to this emotionally intense situation.

By 2040, a significant number of cars are expected to be autonomous. This means that although accidents overall will have come down, the accidents that remain will probably involve autonomous cars. The removal of the human driver has also taken away several advantages the human driver offered.

The human driver acted as a crucial personal agent who took the brunt of the emotional turmoil of an accident. The driver was the answer to the many questions people have after an accident, a deeply emotional event. An apology from the driver, or acts of empathy towards the victim, became excellent conduits for bringing closure to an emotionally intense event such as an accident.

An apology is most effective when the person directly involved in the incident makes it; third-party apologies do not provide the same level of emotional closure. For example, the safety crisis that Toyota faced in the US a few years back showed that even a successful brand cannot absorb a negative publicity hit at the company level for too long. Toyota had to resort to one of the most expensive recalls in the automotive industry to retain consumer faith in the brand. So the autonomous car industry has to develop emotionally appropriate rituals that provide closure, without the blame accruing to the company.

Managing the emotional “whys” after a negative event is significant. Various religions, for example, do an excellent job of stepping in with their explanations during natural disasters, providing answers to many questions and emotional closure to those events. Like religion, the autonomous car industry has to learn to bring closure to the negative moments autonomous cars will create in the lives of ordinary humans. Otherwise, the fallout can be disastrous.

Artificial Intelligence is making inroads into many fields where it makes critical life-and-death decisions: medicine, investment, legal processes and more. The instances where people must rely on technology instead of a human for these critical decisions are on the rise. So it is important that the future of autonomous technology is handled with care.

The autonomous car industry should learn from the fall from grace of nuclear energy. The number of people who have died in nuclear power plant accidents is minuscule compared with the number who have died from electrocution. But irrational fears, and the emotional distance the nuclear power industry had created in the minds of ordinary people, were accentuated by the Fukushima Daiichi accident in 2011, and a large part of the industry shut down across the world.

The same could happen to the autonomous car industry. The positive press during the good times should not make the industry complacent. It should be prepared to create a stronger emotional connection between machines and humans.

Technology can afford to be autonomous when things are going right. But when things go wrong, even the most autonomous of technologies will need support, human support.

Biju Dominic is the chief executive officer of Final Mile Consulting, a behaviour architecture firm.

Comments are welcome at views@livemint.com