Intel’s VP and GM of the Artificial Intelligence Products Group (AIPG) shares his views on burning questions around AI. He also explains how to build a robust Artificial Intelligence solution and why it is important to have the technology deeply baked into modern chipsets.

Artificial Intelligence (AI) is gradually becoming the backbone of technology and non-technology companies across the world. AI draws on elements such as machine learning, deep learning, and computer vision – with vast amounts of data at the core. Its fundamental purpose is to predict future outcomes and deliver data-driven solutions.

But as with all new technological disruptions, there are apprehensions. And in AI’s case, people have raised concerns that the technology will kill millions of jobs and could be lethal for mankind.

We spoke to Naveen Rao, VP and GM, Artificial Intelligence Products Group (AIPG) at Intel, about these apprehensions. Naveen also explains what AI actually is, how it is affecting our lives, and what goes into building a robust AI solution.

Why is Artificial Intelligence important? Why is there a sudden spike in interest in AI?

Three things have changed: First, compute capability has progressed to the point where we’ve crossed the threshold required to support the intense demands of machine intelligence. Second, our world of smart and connected devices (over 200 billion by 2020) has unleashed a data deluge which is required to train many AI algorithms and ripe to be mined for fresh insights. Third, the road to AI is also being paved by a surge of innovation that has pushed us over the tipping point from research to mainstream use. Each new AI algorithmic innovation and use case opens more eyes to the power of AI, leading more innovators to join the community and stimulating an ever-increasing demand for the technology.

The promise of AI is to harness the power of collaborative human-like intelligence to solve the world’s most critical issues. Through AI, we have an emerging opportunity to recognize patterns and extract meaning from the massive volumes of data and use those insights to transform the way businesses operate and how people engage in every aspect of life. AI will bring new capabilities to everything from smart factories to drones to sports to health care and to driverless cars. Data is the common thread across all these applications, and our vision is to harness the power of AI and analytics, making data itself transformational.

AI is the next wave of computing, and by allowing machines to learn, reason, act and adapt in the real world, machine learning is helping businesses unlock deeper levels of knowledge and insights from massive amounts of data.

How does AI benefit end users? Why is it important for them?

From what I’ve seen, the personalization of customer experiences through AI will be the most immediate benefit to end users. Today, AI-enabled smart devices can understand customers’ behavior across touchpoints and integrate the offline and online stages of their buying cycles. Machine learning is taking targeted and personalized advertising to another level by analyzing purchasing patterns and preferences, and using that data to tailor ads, product recommendations and search results to individuals. One can see this in retail, with the omnipresence of tools like chatbot assistance and biometric verification, which can even pave the way for models like anticipatory shipping and drone delivery.
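As a rough illustration of the purchase-pattern analysis he describes, here is a minimal sketch assuming a simple item co-occurrence model; the data, item names, and the `recommend` helper are hypothetical and not tied to any Intel product or real retail system.

```python
from collections import Counter, defaultdict

# Hypothetical purchase histories, purely for illustration.
purchases = {
    "user_a": ["running shoes", "fitness tracker", "water bottle"],
    "user_b": ["running shoes", "water bottle", "yoga mat"],
    "user_c": ["fitness tracker", "yoga mat"],
}

# Count how often pairs of items are bought by the same customer.
co_occurrence = defaultdict(Counter)
for items in purchases.values():
    for item in items:
        for other in items:
            if other != item:
                co_occurrence[item][other] += 1

def recommend(history, k=2):
    """Rank items frequently co-purchased with what this customer already bought."""
    scores = Counter()
    for item in history:
        scores.update(co_occurrence[item])
    for item in history:          # don't recommend what they already own
        scores.pop(item, None)
    return [item for item, _ in scores.most_common(k)]

print(recommend(["running shoes"]))  # e.g. ['water bottle', 'fitness tracker']
```

Real recommendation systems use far richer signals and models, but the shape of the task is the same: mine patterns from past behavior and use them to tailor what each individual sees.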

AI is becoming a key part of personal computing, with smarter machines and integration at the chipset level. What is the goal of an AI-ready chipset?

Take our chip development as an example: we are now working towards advancing our Neural Network Processor (NNP), a product designed from the ground up for deep learning workloads, with an emphasis on dense matrix multiplies and custom interconnects for parallelism. The chip design leverages compute characteristics specific to deep learning. For example, the NNP does not have a managed cache hierarchy; memory management is performed by software, which allows more efficient use of die area by eliminating cache controllers and coherency logic.
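The emphasis on dense matrix multiplies reflects the fact that the core compute of most deep-learning layers reduces to a matmul. Below is a minimal NumPy sketch of a fully connected layer's forward pass; it is purely illustrative of the workload shape and has nothing to do with the NNP's actual software stack.

```python
import numpy as np

# A fully connected layer is, at its core, one dense matrix multiply:
# activations (batch x in_features) times weights (in_features x out_features).
batch, in_features, out_features = 32, 1024, 4096

x = np.random.randn(batch, in_features).astype(np.float32)          # input activations
w = np.random.randn(in_features, out_features).astype(np.float32)   # layer weights
b = np.zeros(out_features, dtype=np.float32)                        # bias

y = x @ w + b            # the dense matmul that dominates the layer's compute
y = np.maximum(y, 0.0)   # ReLU nonlinearity

# Most of a deep network's FLOPs live in operations shaped like this,
# which is why accelerators optimize dense matmul and on-chip data movement.
print(y.shape)  # (32, 4096)
```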

While inference workloads today are satisfied with existing solutions ranging from CPUs to Intel FPGAs, we see customer demand for inference accelerators increasing in the future. To ensure customers have the right silicon for the right usage, we are utilizing our inference expertise and leadership to satisfy this upcoming demand as well. As the inference market continues to grow, and customers continue to scale, we will support them with diverse solutions that run inference from endpoint devices to the cloud and data center.

One of the common myths around AI is that it may kill jobs. What is your take?

The demand for jobs is shifting, and AI will drive a continuous shift over the next decade. Certain jobs will be revalued as AI and advanced robotics are deployed to handle critical tasks. For this, the right environment needs to be created for data scientists so that they can meet the demand.

During the industrial revolution, we saw something similar. There was a scare that people’s jobs would be replaced by machines, but what happened was that people retrained their skills for higher-order, cognitive specializations and became more competitive in the economy.

Is AI bad for mankind? What do you think about AI’s role in helping humans?

We’re very far from the point where we have true artificial general intelligence. We’re not even close to it, and researchers in the AI space know that. We’re building tools that allow us to scale our ability to learn from our data.

If you think about just the evolutionary arc of humanity, information has been a function of the time we’ve lived in. If we were solving problems like shelter and food supply, the information we needed was simply the information to survive. Now that we’ve solved the big problems, we’ve started thinking about creative ways of solving more interesting problems.

Science today is really about analyzing massive data sets. We have to work together using tools that allow us to break that data down and start drawing useful inferences that drive understanding. I see AI as the enabler of a communication network that allows us to do bigger things.

Can we totally rely on AI? What about the error rates?

Human biases are self-evident in machine learning, as they can transfer, often subconsciously, from makers to their machines. Because AI is unique in that it keeps learning adaptively even after it leaves the lab, cognitive biases inherent in the algorithms used to interpret data can express themselves. We’re working tirelessly to de-bias AI by improving the quality and volume of learning datasets, which reduces the possibility of distortion. Expanding the diversity of AI talent pools to include linguists, sociologists, and anthropologists, and explicitly integrating bias testing into the product development cycle, are also imperatives we’re taking seriously.
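As one hedged illustration of what bias testing folded into a development cycle can look like, the sketch below checks group representation and per-group error rates on a labelled dataset. The group labels, fields, and numbers are hypothetical, and real audits are considerably more involved.

```python
from collections import Counter

def representation_report(samples, group_key):
    """Report how each group is represented in a labelled dataset."""
    counts = Counter(s[group_key] for s in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def per_group_error(samples, predictions, group_key):
    """Compare error rates across groups to surface skewed model behaviour."""
    errors, totals = Counter(), Counter()
    for sample, pred in zip(samples, predictions):
        group = sample[group_key]
        totals[group] += 1
        errors[group] += int(pred != sample["label"])
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical data: two groups, deliberately imbalanced.
data = [{"group": "A", "label": 1}] * 80 + [{"group": "B", "label": 1}] * 20
preds = [1] * 80 + [0] * 10 + [1] * 10

print(representation_report(data, "group"))   # {'A': 0.8, 'B': 0.2}
print(per_group_error(data, preds, "group"))  # {'A': 0.0, 'B': 0.5}
```

Checks like these do not remove bias by themselves, but they make skew in the data and in model behaviour visible early enough to act on.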

Let’s look at autonomous driving – we gather data sets in the US or in Europe. The way things work there is a lot more ordered than in New Delhi or Bangalore. If you build a model that works in the US, it’s not going to work here. So you really need to understand how the congestion flows and how the drivers think. The designer of the algorithm needs to understand this to build the proper solution.

It’s going to be very difficult for someone from the US or Europe to come into India and build a solution here. So I think the opportunity for people who understand the local ecosystem is really that local, tribal knowledge. You understand how drivers behave, how intersections work, how the horn is used, how signals are used. You have to put all of that into the algorithm to build a real working solution for these markets.
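The point about models failing to transfer across regions is essentially a distribution-shift problem. The toy sketch below trains a classifier on synthetic "driving" features from one region and evaluates it on another region where the notion of a safe gap differs; all data, feature names, and parameters are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def synth_region(n, lane_discipline):
    """Hypothetical features for one region: [gap_to_next_vehicle, horn_rate].
    lane_discipline shifts both the feature distribution and what locally
    counts as a safe gap, standing in for real regional driving differences."""
    gaps = rng.normal(loc=lane_discipline, scale=1.0, size=n)
    horns = rng.normal(loc=2.0 - lane_discipline, scale=1.0, size=n)
    X = np.column_stack([gaps, horns])
    y = (gaps > lane_discipline).astype(int)   # toy label: "safe to change lanes"
    return X, y

# Train on one region's driving data...
X_src, y_src = synth_region(1000, lane_discipline=1.5)
model = LogisticRegression().fit(X_src, y_src)

# ...then evaluate on a region whose distribution and conventions have shifted.
X_tgt, y_tgt = synth_region(1000, lane_discipline=0.3)
print("source-region accuracy: ", model.score(X_src, y_src))
print("shifted-region accuracy:", model.score(X_tgt, y_tgt))
```

The accuracy drop on the shifted region is the quantitative face of the "tribal knowledge" he describes: without local data and local understanding, the learned decision boundary simply does not apply.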
