Updated: 15 Apr 2019, 10:25 PM IST
V. Anantha Nageswaran & S. Raghu Raman
Should Artificial Intelligence (AI) be programmed to avoid the mental shortcuts that humans take?
If technology magazines are to be believed, Artificial Intelligence (AI) will have to make moral choices in the very near future. Drones in a war zone will have to decide, and decide quickly, whether to drop a bomb on an enemy hideout near a hospital. Self-driving cars will have to choose between braking suddenly, which could injure their passengers, and hitting a jaywalking pedestrian. Serious attempts are being made to distil moral principles from human decision-making observed under experimental conditions. There is a strong push to make machines learn decision-making and morality from humans and human artefacts. Futurologists like Ray Kurzweil predict that by 2029 we will have machines that can do everything humans do today, and do it better. But can humans do a good job of coaching AI to exercise moral choices?