Google wants to solve new AI problems: Jeffrey Dean
Google senior fellow Jeffrey Dean on his role in implementing the company’s vision towards an AI-first world, as articulated by CEO Sundar Pichai
Tokyo: Jeffrey (Jeff) Dean, who joined Google in 1999, is a senior fellow in the company’s research group, where he leads its artificial intelligence (AI) project, Google Brain. Along with his team, Dean is currently implementing the company’s vision, as articulated by chief executive Sundar Pichai, to build an “AI-first” world. In an interview on the sidelines of a “Google #MadewithAI” event held recently in Tokyo, Dean explains what this vision encompasses and the challenges involved in implementing it. Edited excerpts:
What are the major steps involved in this process of implementing the Google strategy of building an AI-first world?
The steps involve making products that are useful, help others innovate and solve humanity’s big challenges. We want to solve new, fundamental AI and machine learning (ML) problems, and use those solutions in our products to make them better.
How much progress have you achieved?
Even in the 1980s and 1990s, people were excited about neural networks but we lacked computing power. Now we have powerful computers. Google Brain (launched in 2011) began research in 2012. Computers can now see (computer vision) and they can understand (speech recognition).
Today, we have seven products with more than a billion users each. We have been using AI and ML in our search ranking algorithms, and we are now working to infuse AI into many of our other products too. For example, Google Photos uses AI to make photos searchable; with Google Translate, you can translate text and speech; Google Allo can analyse your selfie and turn it into personalized stickers; Google Lens makes the real world searchable through imagery and augmented reality; and there is “Smart Reply” in Gmail and Inbox.
We now share the tools we built for our research through TensorFlow (Google’s open source machine learning software framework), which we released in November 2015. It is now the number one ML repository on GitHub (an online service where developers can host and review code, manage projects and build software). We also build cloud APIs (application programming interfaces) that let other people build solutions to their own problems, for example in areas like healthcare, where we believe ML can make a big difference. There is also a lot of integration of AI with software and hardware; Google’s Pixel Buds (which can perform near real-time translation) are an example.
What are some of the major challenges that you are facing when implementing this vision?
Talent is something that is definitely limiting the influence that ML is having on the world today. There just aren’t that many people (with ML skills). This issue can be tackled by training more people within our own engineering group. However, things like automated ML (AutoML) can help circumvent the issue of lack of experts. Educating more people takes a lot of time and is a relatively slow process. There are about a million organizations that should be using ML and have data that could be used for solving a problem in their companies. However, only a few thousand hire ML experts. That is the gap that AutoML can address (by automating the design of ML models).
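At its simplest, the automated model design that AutoML refers to can be pictured as a search over candidate model configurations. The toy sketch below is a hypothetical illustration, not Google’s AutoML: the search space, the `score` function and all names are invented, and a real system would train and validate an actual model for each candidate rather than use a placeholder objective.

```python
import random

# Hypothetical search space of model "designs" (hyperparameter choices).
SEARCH_SPACE = {
    "layers": [1, 2, 3, 4],
    "units": [32, 64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def sample_design(rng):
    """Draw one random candidate design from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def score(design):
    """Placeholder objective; a real system would return validation
    accuracy after actually training the candidate model."""
    return -abs(design["layers"] - 2) - abs(design["units"] - 128) / 64

def random_search(trials=50, seed=0):
    """Evaluate `trials` random candidates and keep the best-scoring one."""
    rng = random.Random(seed)
    candidates = [sample_design(rng) for _ in range(trials)]
    return max(candidates, key=score)

best = random_search()
```

Random search is the simplest possible strategy; published AutoML work replaces it with learned controllers or evolutionary search, but the loop structure (propose a design, evaluate it, keep the best) is the same.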
While you use TensorFlow, other big technology companies use their own ML frameworks, such as Caffe2 from Facebook and the Microsoft Cognitive Toolkit. Is that a concern?
It certainly is hard to train a model on one of these platforms and move it to another. However, a lot of what you are teaching people is concepts; it’s like a programming language. If you know two programming languages, picking up a third is not too hard. It is similar with frameworks. Having multiple frameworks also pushes everyone working on them to improve, which is good for the ecosystem.
Do you think “Compact AI” will help further the growth of AI?
It is clearly much better if you can run these kinds of (AI) computations on your device, because the latency is lower. However, most devices today do not have the computational power, so you have to run the model in the cloud. But if we can use algorithmic approaches to take a large model and make it smaller, that is a good thing: you can then run the shrunken version of the model on the phone. There is also a lot of work on designing new low-powered circuitry similar to the cloud TPUs (Google’s tensor processing units).
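One concrete way to “take a large model and make it smaller”, as described above, is post-training weight quantization: storing 32-bit floating-point weights as 8-bit integers plus a scale factor, cutting the memory footprint roughly fourfold. The NumPy sketch below is a minimal illustration of that idea, not Google’s actual on-device pipeline; the function names are invented.

```python
import numpy as np

def quantize_int8(weights):
    """Linearly map float32 weights into the int8 range [-127, 127].

    Returns the quantized array and the scale needed to recover
    approximate float values."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 values."""
    return q.astype(np.float32) * scale

# A random weight matrix standing in for one layer of a large model.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
print(w.nbytes // q.nbytes)  # 4: int8 storage is 4x smaller than float32
```

The price of the smaller model is a bounded rounding error of at most half the scale per weight; in practice, techniques such as quantization-aware training or distillation recover most of the lost accuracy.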
You will start to see new phones and other devices with these kinds of low-power accelerators built in, and that will broaden what they can do. Currently, if you try to run an image model on your smartphone, you will be able to do so, but it will drain the battery in, say, 15 minutes. If instead we manage to run an image model continuously on the phone with the battery lasting all day, you can start looking at the world around you and do the kind of image recognition that Word Lens does, all the time.
Am I right in assuming that you do not subscribe to the fear of AI overpowering humans any time soon?
I think it (the fear) is far-fetched. That said, I do think there are real concerns about AI, but they are not the ones that most people fear. It is more about how we take an ML system and deploy it so that it is fair and free of bias, particularly for things like cars and robots. Those are not the kinds of things most people discuss.
But those who fear that AI may soon surpass human intelligence (including Elon Musk and Stephen Hawking) are incredibly smart people. Surely, they need to be taken seriously...
Well, I think they are imagining a particular future and believe that only one possible path can lead there. It is very hard to know what will happen 50-75 years down the road. I personally believe we should focus on immediate to five-year concerns, then on the next set of 5-10 year concerns, not on ones that might exist 75 years from now and that feature in only one of a hundred thousand possible scenarios.
How important is a market like India for your company?
For Google, it is very clear that our growing base of users is coming from countries like India, Indonesia and Brazil. We want to build products that appeal to people in those countries and solve the problems they have. For example, the work we are doing in translation is important; addressing the needs of the local market is critical.