Shivaram Kalyanakrishnan is an assistant professor in the department of computer science and engineering at the Indian Institute of Technology-Bombay. He specialises in artificial intelligence (AI) and is the only author from India on the 18-member study panel behind the Stanford University-hosted report titled Artificial Intelligence and Life in 2030. Kalyanakrishnan's expertise lies broadly in machine learning, and specifically in reinforcement learning, which studies how software agents should choose actions to maximize cumulative reward, learning from reward and punishment. In an interview, he urges people to be more optimistic about what AI can do rather than be obsessed with fears about AI machines. Edited excerpts:
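Reinforcement learning, mentioned above, can be illustrated with a minimal sketch (a hypothetical two-armed bandit, not drawn from the interview): an agent repeatedly tries actions, receives reward or punishment, and shifts towards the action that pays off more.

```python
import random

# A minimal reinforcement-learning sketch (a two-armed bandit):
# the agent learns from reward signals which of two actions is
# better. The reward probabilities are illustrative assumptions.
REWARD_PROB = {"A": 0.2, "B": 0.8}

def pull(action, rng):
    """Return 1 (reward) or 0 (punishment) for the chosen action."""
    return 1 if rng.random() < REWARD_PROB[action] else 0

def train(steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = {"A": 0.0, "B": 0.0}   # estimated reward per action
    count = {"A": 0, "B": 0}
    for _ in range(steps):
        # Mostly exploit the best-known action, sometimes explore.
        if rng.random() < epsilon:
            a = rng.choice(["A", "B"])
        else:
            a = max(value, key=value.get)
        r = pull(a, rng)
        count[a] += 1
        value[a] += (r - value[a]) / count[a]  # running average
    return value

print(train())  # the agent learns that action "B" pays more
```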
How was the Stanford AI report conceived? How did you get involved?
In the last 5-10 years, the public has been forced to take notice of AI. You do searches, for instance, and get the information you want. You can talk to your phone using Apple's Siri. There is so much intelligence in your car. Everywhere, intelligent software is assisting you in your daily life in a very noticeable way. The common man's perception is that AI is taking away jobs. Jobs that people did about 10 years back have been replaced, partially or in some cases even fully, by software and hardware developments. This triggered the conversation, which led to experts being asked where AI is heading and what can be done about it.
In the last few months, there have been some alarmist views, even from physicist Stephen Hawking. Moreover, even a person like Elon Musk, who is responsible for taking technology forward with ventures like Tesla and SpaceX, has expressed similar sentiments: that things could go awfully wrong once we have AI performing tasks that human beings have traditionally done.
Eric (Horvitz, a computer scientist and technical fellow at Microsoft) has been with Microsoft Research and is primarily responsible for this report. He decided that we needed a more sober assessment of AI—not based on emotions but one that is more rigorous and scientific. We will have a study panel every five years. This is the first one. I was chosen by Peter (Peter Stone, professor at the department of computer science, University of Texas) because the study panel was looking at diversity in age, gender and geography. This came together in May 2015, and we were given a year to do this. As a group, we agreed that it is true that AI is game-changing, that it is moving at a fast pace. Hence, we did not forecast beyond 2030, because it is not scientific to do so when there is so much uncertainty about the concept.
Can you list some of these uncertainties?
Look back to 1995. I don't think it was possible back then to accurately predict some of the things we have today. The Internet has happened, so have smartphones and social networks. It is the nature of science and technology that makes prediction beyond a certain period difficult. There are just too many variables.
What were your contributions to the AI report?
My contribution was more to define AI. We take intelligence for granted in human beings. We expect people to be able to do certain tasks, to answer questions meaningfully, etc. So what are the operating principles that define intelligence? How is it constructed? What are its mathematical foundations? We have asked these questions before too; even centuries ago we had automata (machines that perform a range of functions according to a predetermined set of instructions).
For instance, you still have churches where clocks perform specific tasks (for example, birds chiming) when their gongs strike midnight. The confluence of software and mathematics really took place in the 1950s, when computers became available. We had the Eniac (Electronic Numerical Integrator And Computer), which was built during the war (World War II), and it became possible to build computer systems that could exhibit intelligence. Technically speaking, the first such computer systems were aimed at helping soldiers during the War: they were used to calculate things like the trajectory of a missile, planning logistics, etc. This explains why DARPA (Defense Advanced Research Projects Agency) has been funding computer science in general, and AI in particular, for a long time. But the pioneers of AI were also thinking about AI in scientific terms. Alan Turing, who can be considered the fountainhead of AI even though he did not have access to real hardware, was one such scientist, thinking about writing a programme to play chess.
So what exactly is AI?
The term was coined only in 1956 in a workshop (the Dartmouth Summer Research Project on Artificial Intelligence) put together by John McCarthy, Marvin Minsky, Claude Shannon and Nathaniel Rochester. So you can think of AI as a branch of computer science, but it is multidisciplinary: it draws constant inputs from psychologists, linguists, roboticists, etc. You know intelligence when you see it. But how do you get an artificial agent to show intelligence?
Most people would consider a machine to be endowed with AI if it performs like a human being. You can think of examples like Google search or Google Maps. Computers can play chess, once considered a holy grail of AI, and now even beat grandmasters. We think of intelligence as a spectrum that is constantly being propelled forward, and AI is what propels it. In some sense, AI is this desire to replicate intelligence in hardware. It is also, in a strange sense, the sum total of what AI researchers are doing: something in robotics, NLP (natural language processing), etc. AI does not have a rigid definition. Hence we went with (Nils J.) Nilsson's definition in the AI report (AI is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment).
What does the AI concept comprise?
The broad architecture of an intelligent agent is one that senses the environment, does some amount of thinking, and then takes actions in the real world. On the sensing side, there is the whole area of computer vision, which tries to make sense of the objects we see, and the field of speech processing, which deals with audio. When you move to actuation, you have fields like robotics, haptics, etc.
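The sense-think-act architecture described above can be sketched in a few lines. This is a toy thermostat world; all names and numbers are illustrative assumptions, not anything from the report.

```python
# A minimal sense-think-act loop, the broad agent architecture
# described above. The "environment" is a toy thermostat world;
# all names and numbers are illustrative assumptions.

def sense(world):
    """Sensing: read the current temperature from the environment."""
    return world["temperature"]

def think(temperature, target=21.0):
    """Thinking: decide an action from the sensed input."""
    if temperature < target - 1:
        return "heat"
    if temperature > target + 1:
        return "cool"
    return "idle"

def act(world, action):
    """Actuation: the chosen action changes the world."""
    delta = {"heat": +0.5, "cool": -0.5, "idle": 0.0}[action]
    world["temperature"] += delta

world = {"temperature": 17.0}
for _ in range(20):          # the perceive-decide-act cycle
    act(world, think(sense(world)))
print(world["temperature"])  # settles near the 21-degree target
```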
AI also comprises machine learning, knowledge representation and logic. Humans typically change their behaviour as they learn from experience. So if the experience that a programme undergoes somehow changes its behaviour, you can say that the programme is applying machine learning. Now we have data-driven machine learning. Consider the case of hospitals gathering patient data: vital parameters like blood sugar level, age, gender, blood pressure, etc. Now let's presume the hospital has collected this data for, say, 1,000 patients and diagnosed the diabetic patients among them. A machine learning algorithm can then sift through this data, learn a predictor, and predict whether a new patient (whose body parameters are available) is diabetic or not. Google search is similar: it has moved from displaying many irrelevant links to using machine learning algorithms that have refined its results over the last few years.
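The hospital example above can be sketched concretely. The records and thresholds below are made up for illustration, and the learner shown is a simple k-nearest-neighbour predictor, one of many algorithms a real system might use.

```python
import math

# A toy version of the hospital example: learn a predictor from
# labelled patient records. Data are fabricated for illustration;
# a real system would use far more records and richer features.
# Each record: (blood_sugar_mg_dl, age, diabetic?)
PATIENTS = [
    (90, 30, False), (100, 45, False), (110, 50, False),
    (150, 55, True), (170, 60, True), (180, 40, True),
]

def predict(blood_sugar, age, records=PATIENTS, k=3):
    """Classify a new patient by majority vote of the k nearest
    labelled patients (a simple k-nearest-neighbour learner)."""
    dist = lambda r: math.hypot(r[0] - blood_sugar, r[1] - age)
    nearest = sorted(records, key=dist)[:k]
    votes = sum(1 for r in nearest if r[2])
    return votes > k // 2

print(predict(165, 58))  # → True: resembles the diabetic patients
print(predict(95, 35))   # → False: resembles the healthy patients
```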
The technical base for designing machine learning systems comes primarily from probability and statistics. Then we have NLP, which deals with the most natural of interfaces. Natural languages are quite different from the language computers use to communicate (1s and 0s, bits and bytes), yet NLP has reached a state where you can take a shot at machine translation. The basic building blocks of NLP are actual models of languages. The linguistic hierarchy begins with syntax, moves on to semantics (the study of relationships between words and how we construct meaning) and later pragmatics (for example, sarcasm and puns, which typically need an understanding of culture and context). Anyone dealing with NLP has to know these branches.
What about Deep Learning?
Deep learning is a subset of machine learning. Artificial neural networks (ANNs) aim to simulate the human brain. They can be used to map inputs to predictions, and there are standard learning algorithms to train ANNs from data. Neural networks have been around for almost three decades, but it was very difficult to train large and deep (with more layers) networks. Technically, training gets stuck in a so-called local minimum (the best solution within a small neighbourhood of possible solutions, as opposed to the 'global optimum', which is the best solution when every possible solution is considered). Yet you do need ANNs with many layers. Deep learning is what lets us successfully train these larger ANNs.
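The local-minimum problem can be seen even in one dimension. The following standalone sketch (a hypothetical bumpy function, not a neural network) shows gradient descent landing in different valleys depending on where it starts.

```python
# Gradient descent on a "bumpy" one-dimensional function,
# f(x) = (x^2 - 1)^2 + 0.3x, which has two valleys: a deep
# global minimum near x = -1 and a shallower local minimum
# near x = +1. A standalone illustration of the local-minimum
# problem, not a deep-learning system.

def grad(x):
    """Derivative of f: 4x(x^2 - 1) + 0.3."""
    return 4 * x * (x * x - 1) + 0.3

def descend(x, lr=0.01, steps=2000):
    """Follow the negative gradient downhill from a start point."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# The answer depends entirely on the starting point:
print(descend(-2.0))  # lands in the global-minimum valley (x < 0)
print(descend(+2.0))  # stuck in the local-minimum valley (x > 0)
```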
Will true AI ever be achieved?
This has divided researchers over the decades. Most people in the AI community subscribe to the view that it does not really matter whether machines exactly replicate what human beings can do, as long as they do things that are intelligent and of value. For example, we have software that can fly an aircraft, very differently from how a human being would, but it does the job nevertheless.
What are the challenges?
Explaining AI to the public is itself a challenge. Beyond that, can inputs be automatically learnt or combined in ways that give higher-level abstractions, making the task of prediction easier? In the context of vision, the input to a computer vision system is just images (matrices of pixel values). But deep learning algorithms are able to abstract out things like edges, boundaries, locations of objects, etc. You can have a deep learning system autonomously drive a car. The software agent needs to reason about stop signs, etc., not only about camera images; it has to learn a host of features from scratch. The other challenge is that hardware needs to match the pace of development in software. The brain also uses far less energy than the chips we are building today, which simulate our neurons using a different architecture that generates a lot of heat. Hence, there is a move to develop specialized hardware using a neural architecture as opposed to the traditional von Neumann architecture. If this becomes a reality a decade from now, we will be able to do a lot more with neural networks than we can today. Not only does a neural network need to be good at its task, it should also be energy efficient.
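The "edges" a vision system abstracts from raw pixels can be illustrated with a simple filter. This toy example (a hypothetical 1-bit image, not from the interview) applies a horizontal difference filter; deep networks learn many such filters, and far richer ones, from data.

```python
# Finding edges in raw pixels with a simple difference filter.
# The tiny 1-bit "image" below is dark on the left, bright on
# the right, so it contains one vertical edge.
IMAGE = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]

def horizontal_edges(img):
    """Difference of neighbouring pixels in each row: non-zero
    exactly where the brightness changes, i.e. at a vertical edge."""
    return [[row[x + 1] - row[x] for x in range(len(row) - 1)]
            for row in img]

print(horizontal_edges(IMAGE))
# Each row reads [0, 1, 0]: the edge sits between columns 2 and 3.
```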
There is a lot of scare-mongering around AI. What is your perspective on this issue?
People are driven a lot by what they see in movies. Part of the common person's concern is the Frankenstein's monster that will do nasty things. The fact is that AI is beginning to work in very specialized tasks like controlling an airplane or driving your car. It is hard to think of any general AI machine that can do all of these tasks in unison, something that humans clearly excel at. As far as AI machines are concerned, this is certainly not on the horizon for at least the next 15-20 years, so that scare is unfounded. But the fear that certain types of jobs will vanish because AI (software bots and machines) will do them more efficiently is genuine. Policymakers should prepare for this, and proactively retrain and reskill the people who will be affected to help them cope with the situation.