Behaviour by Brain

Should machines explain themselves better than we do?

Humans can’t quite explain themselves and asking AI systems to do it may hold their adoption back

Biju Dominic
Updated 19 Apr 2023, 11:59 PM IST
Photo: Getty

On 3 June 1997, during a warm-up match between Brazil and France, Roberto Carlos stunned the world with one of the most spectacular free kicks in football history. It was a 30-yard kick, for which Carlos took a long 20-yard run-up. The ball left his foot at 136 km/h and curved around the entire French defence. Goalkeeper Fabien Barthez thought it was going off-field into the crowd, so he did not move. But the ball curved back swiftly and magically into the goal. Soccer experts and even physicists have studied every aspect of that kick in detail. Yet years later, in an interview with ESPN Brasil, Roberto Carlos said, “To be honest, until this day I don’t know how I did that.”

Explainability is the capacity to articulate why a person or system reached a particular decision. As businesses increasingly rely on artificial intelligence (AI) systems to make decisions, there is growing insistence that AI systems should be explainable. What data do they use? How do these models derive their conclusions? Are they free of all biases? A McKinsey study found that organizations which establish digital trust among consumers through practices such as making AI explainable are more likely to see their annual revenues and earnings grow in double digits. So AI explainability appears to be a desirable goal. But the more important question is whether the very humans who insist on the explainability of AI systems can explain themselves.

For a long time, it was assumed that human behaviour is the result of conscious, rational processes. So any explanation humans gave of why they behaved the way they did was almost always considered a sufficient account of that behaviour. But with the arrival of neuroscience, this belief started to change. It is now evident that 99.99% of the human brain’s 11-million-bit processing capacity operates at a non-conscious level. So most human behaviours are performed below the threshold of consciousness, and it is quite clear that humans are not capable of explaining why they did what they did. Think of Roberto Carlos’s kick.

The early stages of any human learning involve conscious processes. As the learning progresses and the person becomes an expert, the individual no longer needs to think consciously to perform a task of expertise; the task can be performed non-consciously. But what happens if an expert tries to think consciously about his or her performance? No doubt, this will improve its explainability. But the moment a human tries to bring non-conscious learning to the conscious level, s/he tends to ‘choke’. By insisting on better explainability of AI algorithms, would we ‘choke’ their efficiency?

There is no doubt that techniques which enable AI-system explainability can more quickly reveal errors or areas for improvement. This would make it easier for the machine-learning operations teams tasked with supervising AI systems to monitor and maintain those systems efficiently. Explainability is also believed to help organizations mitigate risks. AI systems that run afoul of ethical norms, even inadvertently, can ignite intense public, media and regulatory scrutiny. If the algorithms are explainable, legal and risk teams could use the explanation provided by the technical team to establish that the system complies with applicable laws and regulations.

Explainability helps improve risk mitigation. Data from the black boxes of ill-fated aircraft have helped investigators better understand the causes of accidents and prevent similar crashes. But should more explainability come at the cost of efficiency?

Despite recent developments in neuroscience, we are far from explaining the ‘why’ of any human action. Humankind has progressed not by improving human explainability, but by taking on more accountability for actions. Can we focus on making AI systems more accountable for the quality of their output, rather than on making their algorithms more explainable?

A study by academics at Harvard University, Massachusetts Institute of Technology and Polytechnic University of Milan suggests that excessive explanation of AI systems can create some unique problems. Employees at Tapestry, a portfolio of luxury brands, were given access to an AI-based forecasting model. The employees turned out to be likelier to overrule models they could understand because they were mistakenly sure of their own intuitions.

Those familiar with the intricacies of human behaviour will not be surprised that explainability affects the adoption of AI systems. The feeling of ignorance that an elite language like Latin created among the masses in the West, and Sanskrit did in India, went a long way towards enhancing popular acceptance of the superiority claimed by a priestly class in these societies. Feelings of ignorance and of expertise are closely linked. If so, are we compromising better adoption of AI systems by improving their explainability?

There is a part of human behaviour that is considered explainable: behaviours performed strictly as per instructions, where the directives themselves provide a clear explanation of what is done. In human history, the enforcement of strict instruction-led behaviour was called slavery. Are today’s growing cries for explainability of AI systems, then, reflective of a desire to keep AI machines enslaved to humans? Will human-imposed chains of explainability curtail the freedom of AI systems to freely express their innovative abilities?

