Opinion | Think before getting machines to take moral decisions for us

Should Artificial Intelligence (AI) be programmed to avoid the mental short-cuts that humans take?

If technology magazines are to be believed, Artificial Intelligence (AI) will have to make moral choices in the very near future. Drones in a war zone will have to decide—and decide quickly—whether to drop a bomb on an enemy hideout near a hospital. Self-driving cars will have to choose between braking suddenly and injuring their passengers or hitting a jaywalking pedestrian. Serious attempts are being made to distil moral principles from observing human decision-making in experimental conditions. There is a strong push to make machines learn about decision-making and morality from humans and human artefacts. Futurologists like Ray Kurzweil predict that by 2029, we will have machines that can do everything humans do today, and do it better. But can humans do a good job of coaching AI to exercise moral choices?

There is a debate over whether humans inherit a moral sense or acquire one as they grow. One school of thought argues that morality in humans is deeply intertwined with our evolutionary and cultural past. Charles Darwin alluded to it, and more recently E. O. Wilson has been at the forefront of pushing the idea forward. Essentially, the suggestion is that moral preferences have been shaped over millions of years by evolutionary forces. If social psychologist Jonathan Haidt is correct, moral judgements are, in effect, automatic and intuitive. Experiences, family, upbringing and culture play a crucial role in developing a moral sense. In both theoretical cases, what is reasonably clear is that the development of a moral compass takes time. It could be aeons or human years, but it is still a time-consuming process. Any attempt to transfer this capability to AI or robots must reckon with that challenge. Put differently, it is not a simple matter of writing a manual and coding it into robots. That would be hubristic on our part.

Of course, it could be argued that what matters is the decision and not the process per se, and that if we crowdsource responses to moral dilemmas, an AI could discern patterns and use them as a template for decision-making. This is precisely what “Moral Machine” (Nature, 24 October 2018) did. As an online experimental platform, it collected around 40 million decisions from millions of users around the globe on their preferred options when faced with moral dilemmas such as the self-driving car scenario described earlier. The results point to three preferences, which the authors recommend as foundation stones for machine ethics: spare humans over animals, spare more lives, and spare the young. The results are interesting. At the same time, it is important to note that the human decisions that form the foundation for machine ethics are context-specific and hence not always consistent.
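One crude way to picture what such crowdsourcing does is to tally, for each pair of outcomes, which option the crowd preferred and adopt the majority choice as a rule. The sketch below is purely illustrative; the scenario labels and response counts are invented and do not come from the Moral Machine dataset:

```python
from collections import Counter

# Hypothetical responses: for each forced trade-off between two
# outcomes, record which outcome the respondent chose to spare.
responses = [
    ("spare_humans", "spare_animals", "spare_humans"),
    ("spare_humans", "spare_animals", "spare_humans"),
    ("spare_humans", "spare_animals", "spare_animals"),
    ("spare_young", "spare_old", "spare_young"),
    ("spare_young", "spare_old", "spare_young"),
]

def aggregate(responses):
    """Tally, per dilemma pair, which option the crowd preferred."""
    tallies = {}
    for opt_a, opt_b, chosen in responses:
        tallies.setdefault((opt_a, opt_b), Counter())[chosen] += 1
    # The majority choice becomes the 'rule' a machine might adopt.
    return {pair: counts.most_common(1)[0][0]
            for pair, counts in tallies.items()}

rules = aggregate(responses)
# With the invented data above, the majority rules are
# "spare_humans" and "spare_young" for the two dilemma pairs.
```

The sketch also makes the article's caveat concrete: the "rules" are nothing more than majority tallies of context-specific human answers, so any inconsistency in the responses flows straight into the template.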

The experiments of Daniel Kahneman and Amos Tversky, the celebrated cognitive psychologists, have convincingly demonstrated the pitfalls of human decision-making. We rely on heuristics, or short-cuts, for making decisions, and continue to do so even when we are told that the resulting decisions are sub-optimal. Added to this, Prospect Theory, for which Kahneman won the Nobel Prize in 2002, postulates that humans assign more weight to losses than to comparable gains, a tendency known as loss aversion. In fact, multiple experiments have shown that the way a question is framed, as either a loss or a gain, alters our response to it.
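Loss aversion is often summarized by the Kahneman-Tversky value function, which is concave for gains and steeper for losses. A minimal sketch follows; the parameter values are the commonly cited median estimates from Tversky and Kahneman's 1992 study, used here only for illustration:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    steeper for losses (lam > 1 encodes loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A loss of 100 'hurts' more than a gain of 100 'helps':
# abs(prospect_value(-100)) is larger than prospect_value(100).
```

The asymmetry the function encodes is exactly what framing experiments exploit: describing the same outcome as a loss rather than a forgone gain pushes it onto the steeper branch of the curve.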

To top it all, various biases plague our choices, and humans transmit those biases when they teach AI. For example, Amazon’s experimental AI-based hiring tool showed a distinct preference for men over women. This is hardly surprising when you consider that the AI “learnt” to do this by scanning applications the company had received over the previous 10 years, most of which came from men.
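The mechanism is easy to reproduce in miniature. In the sketch below, a "model" that does nothing more than learn the historical hiring rate per group faithfully reproduces the skew in its training data; the numbers are invented and have no connection to Amazon's actual system:

```python
from collections import defaultdict

# Invented historical records: (applicant_group, was_hired)
history = ([("men", True)] * 80 + [("men", False)] * 20
           + [("women", True)] * 5 + [("women", False)] * 15)

def learn_hiring_rates(history):
    """'Train' by computing the empirical hire rate per group."""
    hired = defaultdict(int)
    total = defaultdict(int)
    for group, was_hired in history:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

rates = learn_hiring_rates(history)
# The 'model' now scores men far higher than women, simply
# because the past data it learnt from did.
```

Nothing in the code mentions gender preference; the bias enters entirely through the skewed training set, which is the point the Amazon example makes.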

Leaving aside the world of experiments, how humans respond to real-life moral dilemmas also shows the quirks and inconsistencies that characterize us. Take expeditions to Mount Everest. In an article (The Guardian, 28 May 2012), Jon Henley narrates how, in 2006, almost 40 climbers walked past a dying British mountaineer; none stopped to help. In contrast, a few weeks later, a US climber abandoned his summit bid and instead joined others to rescue an Australian climber. In 2012, an Israeli climber rescued another mountaineer by carrying him on his back for almost eight hours.

The question is not whether AI could be taught to avoid all human heuristics, but whether it is right to avoid them at all. Perhaps heuristics have served humans well precisely because other humans understand them and know what to anticipate. Second, in a confrontation, how will two machines figure each other out? Third, setting aside the teaching of morals to machines: is it even moral to replace men (and women) with machines, especially in low-skill occupations? Finally, in the Indian context, it is sobering to note that, going by a perusal of election manifestos, Indian political parties have barely grasped the import of the challenge that AI poses to humans.

These are the authors’ personal views.

V. Anantha Nageswaran and S. Raghu Raman are, respectively, dean and professor, IFMR Graduate School of Business, Krea University.