The complex issue of AI and ethics

How transparent are the decisions taken by artificial neural networks?

Microsoft Corp’s artificial intelligence chatbot, Tay

“We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.” This is not the kind of blog post Microsoft Corp. had in mind when it launched its artificial intelligence (AI) chatbot called Tay in March.

Consider another case: Eugenia Kuyda, co-founder and chief operating officer of a Russian AI startup called Luka Inc. (formerly IO), has developed an AI chatbot that lets anyone talk to a digital stand-in for her late best friend Roman Mazurenko, a fellow tech entrepreneur who died in a car accident in November 2015, according to a 7 October report by the International Business Times. Users of Luka’s iOS mobile app can talk to the bot, in either English or Russian, by adding @Roman.

Google Inc. has its own AI messaging app called Allo, which can predict what a user will want to say and can even understand pictures and suggest responses to them.

All these AI apps, and numerous others, have one thing in common—they rely on deep-learning algorithms that learn from huge unstructured data sets with the help of artificial neural networks. The question, though, is how do these artificial neural networks—loosely modelled on the human brain—make decisions when discerning patterns and making predictions? And how transparent are the decisions taken by these artificial neural networks?

You may liken the units of an artificial neural network to neurons in the human brain, except that each neuron is simulated in software. There are a finite number of “layers” of such computational neurons, and after the data moves through them all, we get an output. However, unlike the human brain, an artificial neural network, or simulated brain, can essentially be programmed as we desire, which also introduces subjectivity when it relies on supervised learning.
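The layered flow described above can be sketched in a few lines of code. This is a minimal, illustrative toy, not any production system: each “layer” of simulated neurons takes a weighted sum of its inputs and applies a squashing function, and the data passes through every layer before producing an output. All weights and sizes here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, bias):
    # One layer of simulated neurons: a weighted sum of the inputs,
    # squashed through a sigmoid, loosely analogous to neurons "firing".
    return 1.0 / (1.0 + np.exp(-(inputs @ weights + bias)))

# Three layers: 4 inputs -> 5 neurons -> 3 neurons -> 1 output neuron.
w1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
w2, b2 = rng.normal(size=(5, 3)), np.zeros(3)
w3, b3 = rng.normal(size=(3, 1)), np.zeros(1)

x = rng.normal(size=(1, 4))  # one example with 4 input features
output = layer(layer(layer(x, w1, b1), w2, b2), w3, b3)
print(output.shape)  # one number out, after the data crosses all layers
```

In supervised learning, the weights above would be adjusted against labelled examples chosen by humans, which is where the subjectivity enters.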

For instance, what if a machine-learning algorithm is based on a complicated neural network, or on a genetic algorithm produced by directed evolution? It may prove nearly impossible to understand why, or even how, such an algorithm is judging applicants based on their race, assert authors Nick Bostrom and Eliezer Yudkowsky in a 2014 paper titled The Ethics Of Artificial Intelligence. Bostrom is a professor in the faculty of philosophy at Oxford University, UK, and director of the university’s Future of Humanity Institute, while Yudkowsky is a senior research fellow at the Machine Intelligence Research Institute, US.

In a 2 November paper, Rationalizing Neural Predictions, the Massachusetts Institute of Technology (MIT) researchers Tao Lei, Regina Barzilay and Tommi Jaakkola attempted to address these issues by proposing a neural network that is forced to provide explanations for why it reached a certain conclusion. In one unpublished work, they used the technique to identify and extract explanatory phrases from several thousand breast biopsy reports, according to a report on the technology site ExtremeTech.

The MIT researchers, in their paper, assert that prediction without justification has limited applicability. Their approach combines two modular components, a generator and an encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales, and these are passed through the encoder for prediction.
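A highly simplified sketch may help fix the idea; this is not the MIT implementation, in which both components are neural networks trained jointly. Here the generator’s per-word scores are illustrative fixed numbers, the generator keeps only high-scoring words as the candidate rationale, and a toy encoder predicts from that fragment alone. The cue words and the beer-review sentence are invented for the example.

```python
def generator(words, scores, threshold=0.5):
    # Candidate rationale: keep only the words the generator scored
    # as likely to justify the prediction.
    return [w for w, s in zip(words, scores) if s > threshold]

def encoder(rationale):
    # Toy stand-in for the predictor: it sees ONLY the rationale,
    # so the rationale must carry the evidence for the prediction.
    positive, negative = {"excellent", "great"}, {"bland", "weak"}
    score = sum(w in positive for w in rationale) \
          - sum(w in negative for w in rationale)
    return "positive" if score > 0 else "negative"

review = "excellent hop aroma but bland finish".split()
scores = [0.9, 0.2, 0.8, 0.3, 0.1, 0.1]  # illustrative generator output
rationale = generator(review, scores)
print(rationale, "->", encoder(rationale))  # ['excellent', 'aroma'] -> positive
```

The key design choice, which the sketch preserves, is that the encoder never sees the full text: if the selected fragments were uninformative, the prediction would suffer, so training the two components together pushes the generator toward fragments that genuinely explain the outcome.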

The researchers say that they have demonstrated that their encoder-generator framework, trained in an end-to-end manner, gives rise to quality rationales in the absence of any explicit rationale annotations. The approach could be modified or extended to other applications or types of data.

This, of course, does not settle the issue of “unsupervised” or “adaptive” learning, wherein you can run a deep-learning algorithm with no desired output in mind and let it start evaluating results and adjusting itself as it goes. This can lead to undesired results too, but it is also how an artificial neural network may eventually learn: by teaching itself.
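Learning with no desired output can be sketched with a simple stand-in, k-means clustering, rather than a deep network: the algorithm is handed unlabelled points, evaluates its own grouping, and adjusts until it settles, with no “correct answer” ever supplied. The data below is synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# 100 unlabelled points drawn from two well-separated blobs.
points = np.vstack([rng.normal(0, 0.5, (50, 2)),
                    rng.normal(5, 0.5, (50, 2))])

centres = points[[0, 50]]  # two initial guesses taken from the data
for _ in range(10):
    # Assign each point to its nearest centre, then move each centre
    # to the mean of its assigned points: the algorithm judges and
    # corrects itself with no labels involved.
    labels = np.argmin(np.linalg.norm(points[:, None] - centres, axis=2),
                       axis=1)
    centres = np.array([points[labels == k].mean(axis=0) for k in range(2)])

print(np.round(sorted(centres[:, 0])))  # the two discovered groupings
```

Nothing told the algorithm there were two groups worth finding, which is precisely why such self-adjusting systems can surprise their designers.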

This is why Bostrom and Yudkowsky suggest in their paper that AIs with sufficiently advanced mental states, or the right kind of states, will have moral status, and that some may count as persons, though perhaps persons very much unlike the sort that exist now, governed by different rules. Academics and governments will simply have to do more research to come up with an appropriate policy response to this complex issue.

Cutting Edge is a monthly column that explores the melding of science and technology.