We should demand explainable artificial intelligence

As artificial intelligence (AI) advances, the algorithms powering AI tools have become a ‘black box’ that is difficult to interpret. Even the engineers and data scientists who created these algorithms often cannot explain what exactly happens inside this ‘black box’ or how a particular algorithm arrived at a specific result. So there is now a growing movement for explainable AI (XAI), or interpretable AI: the effort to create tools whose decisions or predictions we humans can understand.

There are many advantages to understanding how an AI-enabled system arrived at a particular output. Explainability can help developers ensure that their algorithms work as expected and meet regulatory standards. It also makes it possible for those affected by an AI decision to challenge or change that outcome. But before we hold AI machines to such high standards of explainability, there is a crucial question: how good are humans at explaining themselves?

Riding a bicycle is considered a very simple human action. But how well can you explain how to ride one? Most explanations will not go much beyond “adjust the handlebar until you get your balance, then start to pedal”. The numerous small but significant actions a rider takes to stay upright and moving are missed in any verbal explanation. What explains this explainability deficiency in humans?

In recent decades, findings from neuroscience have shown that explaining human decisions is a very difficult task, perhaps even an impossible one. Of the roughly 11 million bits per second of the human brain’s processing capacity, more than 99.99% operates at a non-conscious level. Experiments by Benjamin Libet have shown that about 500 milliseconds before a person supposedly takes a conscious decision, the readiness potential associated with that decision can already be detected at a non-conscious level. This precedence of non-conscious activity over conscious decision-making has been confirmed by several other experiments. In the early stages, learning is directed by the conscious brain. But once a person gains expertise, as happens after we learn to ride a bicycle smoothly, it is the non-conscious part of the brain that takes charge.

It has been found that almost all human behaviour is managed at a non-conscious level, and the conscious brain has very little access to it. So the explanations humans provide for their decisions are mere rationalizations constructed after the act. These ‘intelligent explanations’ of why a decision was made have very little connection with what actually happened inside the decision-maker’s brain. This is also evident in consumer research studies, which routinely uncover huge gaps between what people say and what they actually do.

Attempts to make the non-conscious processes of the human brain more conscious could have disastrous consequences. For example, research has shown that when expert golfers were asked to take more time and think consciously about every small detail of their shot, they performed worse than when they were asked to play the shot as quickly as possible. Given plenty of time, players tend to overthink their play and shift from their non-conscious level to a conscious one. Conscious thinking may aid explainability, no doubt, but it impedes the performance of the human brain.

Human decisions are the result of the activity of billions of neurons, trillions of synaptic connections and hundreds of neurotransmitters. These decisions are contaminated with many biases, and this is reflected in the data that AI algorithms use. Neuroscientists admit that understanding the hugely complex workings of the human brain, and thereby explaining human decisions, is nearly impossible. Compared to the complexity of the human brain, making AI algorithms more explainable might well be the easier task.

XAI’s quest for explainability is not going to be a smooth ride, though. To attain explainability in AI systems, one could end up relying on simpler forms of machine learning, such as decision trees and Bayesian classifiers, which offer better traceability and transparency in their decision-making.

On the other hand, more powerful algorithms such as neural networks and ensemble methods like random forests, which deliver better performance and accuracy, might be discarded to preserve transparency and explainability. So, in our quest for XAI, we should not end up with an algorithm that compromises performance for the sake of explainability.
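To make this trade-off concrete, here is a minimal sketch, assuming a Python environment with scikit-learn installed and using its bundled breast-cancer dataset purely as a stand-in: a shallow decision tree, whose rules can be printed and audited line by line, is trained alongside a random forest, which is typically more accurate but offers no comparable rule trace.

    # A minimal, illustrative sketch of the explainability/accuracy trade-off.
    # Assumes scikit-learn is installed; the dataset and model settings are
    # stand-ins, not prescriptions from this column.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, random_state=0
    )

    # Interpretable model: a shallow tree whose decision rules can be read directly.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    print("Decision tree accuracy:", tree.score(X_test, y_test))
    print(export_text(tree, feature_names=list(data.feature_names)))

    # Black-box model: usually more accurate, but it yields no comparable rule trace.
    forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print("Random forest accuracy:", forest.score(X_test, y_test))

Running such a comparison typically shows the forest edging out the tree on accuracy, while the tree alone yields a human-readable explanation of every prediction, which is precisely the tension described above.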

Explainability improves the overall understanding of algorithms and thereby of the whole AI system. But too much understanding has a downside. As comprehension of something increases, the level of expertise attributed to it may decrease. Studies have shown that AI algorithms that are easily understood are overruled by human intuition more often than algorithms that are less well understood. So improving the explainability of an algorithm could negatively affect AI adoption or the market price AI tools command.

Yet, the lack of explainability of the human brain is a golden opportunity for the AI industry. If AI companies can develop superior ways to remove biases from the input data being used and come up with algorithms that can explain themselves better, AI may steal a march on humans. Indeed, AI machines that compensate for human inadequacies could serve as useful allies to us humans.

Biju Dominic is chief evangelist, Fractal Analytics, and chairman, FinalMile Consulting. 
