Eric Horvitz is a technical fellow and director at Microsoft Research Labs. A recipient of the Feigenbaum Prize and the Allen Newell Award for contributions to artificial intelligence (AI), he serves on advisory bodies including the US President’s Council of Advisors on Science and Technology, the Defense Advanced Research Projects Agency, and the Allen Institute for Artificial Intelligence. He is also a member of the standing committee of Stanford University’s One Hundred Year Study on Artificial Intelligence.

Horvitz, who visits India at least once a year to interact with the India labs team, spoke about his work at Microsoft Research. He also shared his thoughts on the benefits of and fears around AI, and on attempts to address bias in algorithms. Edited excerpts:

How do you view the role and contribution of Microsoft Research’s India labs?

The folks from this lab visit Microsoft Research multiple times a year. We also have a semi-annual meeting called the Disruptive Technology Review, where people from the India lab present ideas about technologies we think could disrupt us, or even Microsoft, and we present those to Microsoft’s senior leaders twice a year now.

Give us some examples of these disruptive technologies.

First of all, these would not necessarily have the fashionable, recognizable brand names used to describe them. Instead of just using the word disruptive, we call them important investments for the future. They include new kinds of programming languages for building intelligent systems: new languages and platforms for interactive, real-time AI, like having a fluid conversation with smart agents that understand multiple topics. I am convinced that within a decade or two we will have very fluid interactions with AI systems, but we don’t yet know how to build them.

The other direction we talk about a lot publicly is cloud intelligence versus edge intelligence: how does the cloud talk to edge (devices)? Microsoft Research labs have also been a core source within the company of what we call “Responsible AI". We have done a great deal of work with our policy team under Brad Smith (Microsoft’s president and chief legal officer) on the AETHER (AI and Ethics in Engineering and Research) Committee.

Would the AETHER Committee principles handle AI bias like what happened in 2016 with your company’s Tay bot when it was accused of racial bias?

One concern that arises with AI systems is that while human beings may have biases, those are at least not systematic. However, if an AI system captures a human bias and the application is replicated across the world, that implicit embedded bias becomes systematic. In the case of Tay, it was actually a malicious attack, with people trying to get the system to do strange things by finding vulnerabilities, and it taught us quite a few lessons about AI-related systems.

We are developing internally what we call a responsible AI standard at Microsoft that is applicable across the whole company. We also have a working group that serves as a hotline and a mechanism for anybody in Microsoft, whether in engineering, sales, or the field, to surface the questions they want to ask. And there is an “Ask AETHER" line through which they can request a review or get policy guidance on the consequences of a use of AI: does it involve the possibility of harm, emotional or physical, and does it potentially infringe on human rights?

Even as some leaders and experts tout the benefits of AI, an equal number of influential voices harp on its dark side. Your thoughts?

I think Western civilization carries the myth of Frankenstein’s monster, which is deep-rooted in our society and in our minds. Knowledge is always a good thing, and we will learn how to control it. That said, I view these (concerns of AI becoming sentient and overpowering humans) more as rough edges than existential threats. I don’t fear the rise of superintelligence. I think that as we build these systems, we will learn how to guide and control them. The real lost opportunity is the lack of even basic AI being properly deployed. Here in India, can we build machine learning systems that can figure out which students will not complete their coursework or will drop out of school, and then use that inference to intervene, guiding students back to their curriculum and selectively giving more attention to those at risk of dropping out? These are actual prototypes we are working on in India right now.
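The dropout-prediction idea Horvitz describes can be sketched as a simple risk classifier. The sketch below is illustrative only; the features (attendance rate, average grade), the toy data, and the intervention threshold are assumptions for the example, not details of Microsoft’s actual prototypes:

```python
import math

def train_logistic(X, y, lr=0.1, epochs=1000):
    """Train a tiny logistic-regression classifier by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # predicted dropout probability
            err = p - yi                          # gradient of log loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def dropout_risk(w, b, x):
    """Return the model's estimated probability that this student drops out."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy training data: [attendance_rate, average_grade]; label 1 = dropped out.
X = [[0.95, 0.85], [0.90, 0.75], [0.40, 0.30],
     [0.35, 0.45], [0.85, 0.90], [0.30, 0.20]]
y = [0, 0, 1, 1, 0, 1]

w, b = train_logistic(X, y)

# Students whose predicted risk crosses a threshold are flagged for intervention.
at_risk = dropout_risk(w, b, [0.35, 0.30]) > 0.5
```

The point of such a system is the downstream intervention: a flagged student is routed to a counsellor or tutor, turning a prediction into selective attention, which is the use Horvitz describes.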
