A big gap between the sci-fi side of things and achieving reality: Sethu Vijayakumar
3 min read · Updated: 17 Dec 2018, 01:42 AM IST
Prof. Sethu Vijayakumar of the University of Edinburgh, also co-director of the Alan Turing Institute's Artificial Intelligence (AI) programme, says that while we have made some big advancements in AI, we're still far away from what science fiction projects
Mumbai: Sci-fi movies typically project robots, mostly humanoids, in a scary light: machines that attempt to overpower the human race. In an interview, Prof. Sethu Vijayakumar of the University of Edinburgh, also co-director of the Alan Turing Institute’s Artificial Intelligence (AI) programme, says that while we have made some big advancements in AI, we’re still far away from what science fiction projects. Edited excerpts:
How far removed are sci-fi kind of robotic scenarios from reality?
We can do fairly advanced things in terms of decision making and inference—as in taking data, using some clever machine learning (ML) algorithms and doing inference on that data to extract information that is not obviously visible to the human eye. But when it goes from that to changing the state of the world—that is, using robots to move things around, navigate in a cluttered environment and work with noisy sensors—we’re still quite far away from a robustness perspective. We do a lot of things as proofs of concept in laboratories, but moving them from labs to complicated environments is still a big challenge.
How would you rate robots like Asimo and Sophia?
I, too, did some work on Asimo, which was an amazing piece of engineering in terms of integration, sensing and (other) capabilities. But still, it wasn’t what we call (in today’s terms) an intelligent system. We’re now moving towards systems that can make decisions on their own. The NASA programme that we have (called Valkyrie) is an example. Sophia is another example, where they’ve exploited signal processing and the conversational agent side of things quite well. But if you look at Sophia itself, it’s templatized. It cannot actually use hands and legs to pick up things or make decisions.
We see a big gap between the science fiction side of things and achieving reality. It’s very important that people understand not just the capabilities of AI and robotics, but also the limitations. Without this understanding, we may jump into deploying systems that are not reliable.
What is the scope of robotics and artificial intelligence (AI) in countries like India?
The emphasis in India, I would say, would be more towards using robots and AI technologies to improve the efficiency of doing things. Just as we use laptops and other devices to carry out tasks, as opposed to typewriters, I think of robotics and AI as an enabling tool for improving efficiency. And in the context of India, it is not about taking people out of jobs but actually using these technologies as a way of augmenting their skills: either to do what they’re doing with higher efficiency, or to do things that were not feasible before from a human perspective.
Can you elaborate on why there is scaremongering (like the ‘Stop Killer Robots’ campaign) around robots, especially humanoids (robots that look like humans)?
If you dig deep into the headline of ‘Stop killer robots’, it is not about the notion of a Terminator-like robot that will take over the world. It is more about weaponization—about using robotics and AI technology for fighting wars, and creating autonomous AI that would make decisions to fire weapons, for example, or differentiate between people during decision making. It’s those kinds of cognitive-level decisions where a human has to be involved.
At the moment, even if you have automated systems in the world, there is a direct line of responsibility that can be traced. So this campaign is saying that we should take care not to automate decision making in such critical cases. The same could be said of healthcare robots as well. People are still uncomfortable about such robots making significant decisions.
Tech luminaries like Elon Musk and Stephen Hawking have raised concerns over AI’s prowess while other academics have taken a pro-AI stance. What’s your take?
We have to develop AI with the right kind of thought processes around it. Particularly if you use public money to fund research, you have to make sure that AI does not benefit only the elite or a few, and that it is geared towards social good for all.
AI will create significant imbalances in the current socio-economic strata, and if that happens, we have to put in place measures to ensure that people who are disadvantaged are compensated. We have to make sure that we think through the whole process of displacement, reskilling, and so on. Without that we will have a broken society, where AI and robotics enable us to do some things very efficiently while making redundant some of the things that people have been doing for ages. And there will be significant turmoil in the transition phase, which we have to be very careful about.