Artificial intelligence is already all around us: John MacIntyre

Artificial intelligence is perceived very negatively by many in society who don’t understand what AI really is, and what it means to them, says John MacIntyre

Prof. John MacIntyre, pro vice-chancellor (product and partner development), University of Sunderland.

Mumbai: As pro vice-chancellor (product and partner development) of the University of Sunderland in the UK, Prof. John MacIntyre’s brief covers research, innovation, knowledge exchange, employer engagement and the regional economy. Since 1996, MacIntyre has also been the editor-in-chief of Neural Computing and Applications—an international peer-reviewed scientific journal published by Springer Verlag.

In an interview, he talks about why artificial intelligence (AI) needs to be looked at more positively and how AI can contribute to society. MacIntyre will also address EmTech India 2017—an emerging tech conference organized by Mint and MIT Technology Review—on 9 March in New Delhi. Edited excerpts:

You completed your PhD in applied AI, focussing on the use of neural networks in predictive maintenance. What prompted you to do this research and what were your research findings?

When I worked in the Middle East, I taught myself programming and did a range of jobs and tasks to build my skill sets. I ended up managing teams and wanted to further my career, but also realized that I needed formal qualifications to do that. So, I returned to the UK, and took a full-time job working night shifts, to allow me to study full-time during the day.

The University of Sunderland had a Combined Sciences programme that allowed you to take a major and a minor option—so I majored in computer science with a minor in physiology, chosen simply out of personal interest. As it happened, it became very relevant when I then embarked on my doctoral work.

Having achieved a First Class Honours degree, I was offered the chance to do a PhD—and the most interesting option was a programme of research looking at how to improve the performance, and reduce the costs, of a power generation plant through predictive maintenance and condition monitoring. The sponsor company was National Power, and I liked the idea of applying my knowledge in computer science and engineering to a specific industrial problem, and coming up with new ideas.

My physiology minor ended up being relevant because of the choice of using neural networks as a model or technique for pattern recognition and classification, in the face of very noisy and sometimes incomplete data, to provide diagnostics and prognostics for engineers to use in making decisions about maintaining the ancillary plant in power generation stations.
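The kind of pattern recognition described here can be illustrated with a deliberately minimal sketch: a single logistic neuron trained to separate "healthy" from "faulty" readings in synthetic, noisy sensor data. All the data, thresholds and values below are invented for illustration—they are not from MacIntyre's actual condition-monitoring system.

```python
import math
import random

random.seed(42)

def make_reading(faulty):
    # Invented example: a healthy plant vibrates around 1.0 units,
    # a faulty one around 3.0; Gaussian noise stands in for the
    # noisy sensor data described in the interview.
    base = 3.0 if faulty else 1.0
    return [base + random.gauss(0, 0.4) for _ in range(3)]

# Label 1 = faulty, 0 = healthy
data = [(make_reading(f), f) for f in ([0] * 50 + [1] * 50)]
random.shuffle(data)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train one logistic neuron with plain gradient descent.
weights = [0.0, 0.0, 0.0]
bias = 0.0
lr = 0.1
for _ in range(200):
    for x, y in data:
        pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = pred - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def classify(x):
    return 1 if sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias) > 0.5 else 0

accuracy = sum(classify(x) == y for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Real diagnostic systems of the kind described use far larger multi-layer networks and real plant data, but the core idea—learning a decision boundary from noisy examples rather than hand-coding rules—is the same.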

By the time I completed my PhD, we had saved literally millions of pounds for the company, through elimination of catastrophic failures, reduced downtime of generating plant, and reduced costs.

The study of neural networks does involve an interdisciplinary approach. Please elaborate.

Applications of neural networks (and the associated “natural” computational techniques, such as genetic algorithms) are incredibly varied and diverse. This is because the range of techniques can be applied, appropriately, to a wide range of problem types—classification, pattern recognition, optimization and prediction, to name only a few—in an even wider range of sectors and applications e.g. medical, industrial, financial, commercial, geophysical, and so on.

This means that collaborative ventures, where expertise from a range of fields is brought to bear on applying the techniques to help solve a problem or create a solution (not necessarily a perfect solution, but at least an advance on current technology), are becoming commonplace.

Doctors, engineers, bankers, geologists, physicists, metallurgists and computer scientists will all work together in various project teams to focus their collective expertise on applying AI techniques to create advances in knowledge and technology. I see this as the way forward and it is always refreshing to see how the blend of such disciplinary expertise creates a new dynamic to tackle difficult problems.

While there are those who believe in the potential of AI and its applications, a sizeable number of people, including Stephen Hawking, Bill Gates and Elon Musk, have expressed fears that AI-powered machines could rule over humans. What’s your take on this subject?

This is a major problem and encompasses some really big issues, including understanding, ignorance, focus and ethics. AI is already all around us, sometimes in very visible ways (e.g. Siri) but often in very invisible ways (linked to Internet profiling, banking algorithms, even embedded AI in cameras and washing machines).

These applications would generally be seen as positive, supporting humans in their modern, everyday lives. And yet, still, AI is perceived very negatively by many in society who don’t understand what AI really is, and what it means to them.

As editor-in-chief of the scientific journal Neural Computing and Applications, published by Springer Verlag, I see thousands of scientific papers each year, from all around the world, advancing AI techniques and applications—all of which, I would say, are intended to be positive contributions to society.

The problem is that the general public, quite understandably, only see and take their information from what the media—and in particular film and TV—put before them. And because that is dominated by negative stories about AI taking over the world, eliminating humans (literally or metaphorically), and rendering humanity obsolete, it’s hardly surprising that most people have a pretty negative view of AI.

I believe the scientific and technical community has a responsibility to counter this negativity with “good news” about AI, and to make it understandable, accessible, and therefore less frightening to society.

Tell us something about the work that the University of Sunderland does with its Institute of Automotive Manufacturing and Advanced Practice (AMAP). Do you believe that electric vehicles and connected cars will be the norm by 2025?

Connected cars are already here!

Most new-generation vehicles are already IP-enabled devices with sophisticated interfaces connecting them to the Internet. The next few years will see more developments in how vehicles connect to their environment; for example, the Connected Car programme of Hitachi Data Systems is driving towards the “CFX” concept—where the car can connect to any other Internet-enabled device.

The major developments are linked to the development of “driverless” cars—autonomous vehicles, in effect.

There are many, many difficult issues to resolve before driverless cars will be the norm—and I think that is likely to be decades away. Electric (and other alternatively-fuelled) vehicles are already commonplace, but I don’t think they will have completely replaced the internal combustion (IC) engine by 2025.

It seems to me that we will see, over say the next 20 years, a multi-faceted strategy of development, with even more efficient and clean IC engines being developed alongside improvements in battery technology and range for electric vehicles, and hydrogen and other alternatively-fuelled vehicles also being developed.

Right now, it is impossible to say which will become the dominant technology, or when.
