Mountain View, California: Google Inc. has been a powerful force in championing the use of artificial intelligence (AI) and machine learning, a position CEO Sundar Pichai reinforced during his keynote at the company’s I/O Developer Conference on Tuesday at the Shoreline Amphitheatre in Mountain View, California. Besides using AI to solve real-world problems and improve healthcare, Pichai and his team reminded the 7,000-plus audience that AI is everywhere it can possibly be in Google’s product portfolio: from its voice-based digital Assistant to the new Smart Compose feature in Gmail to Google Photos and even Android P, the next release of Android.
But Pichai was also clear that “we can’t just be wide-eyed about what we create. There are very real and important questions being raised about the impact of technology and the role it will play in our lives,” he said in the opening minutes of the keynote.
“We know the path ahead needs to be navigated carefully and deliberately—and we feel a deep sense of responsibility to get this right. It’s in that spirit that we’re approaching our core mission,” he added.
AI is working hard across Google products to help users save time. One of the best examples of this is the new Smart Compose feature in Gmail, said Pichai. By understanding the context of an email, Smart Compose suggests phrases to help users write quickly and efficiently. Similarly, Google Photos will now make it easier to share a photo instantly via smart suggestions.
AI’s progress in understanding the physical world has dramatically improved Google Maps and created new applications like Google Lens. Maps can now tell you if the business you’re looking for is open, how busy it is, and whether parking is easy to find before you arrive. Lens lets you just point your camera and get answers about everything from that building in front of you, to the concert poster you passed, to that lamp you liked in the store window.
The all-new Google News that was rolled out during the keynote also draws on AI techniques that examine the constant flow of information as it hits the web, analyze it in real time and organize it into storylines. This approach means Google News understands the people, places and things involved in a story as it evolves, and how they relate to one another. At its core, this technology lets Google synthesize information and put it together in a way that helps users make sense of what’s happening, and what the impact or reaction has been.
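The core idea of grouping a stream of articles into evolving storylines by the entities they share can be illustrated with a toy sketch. Everything here (the function name, the `min_shared` threshold, the sample headlines) is hypothetical; Google’s real pipeline is far more sophisticated.

```python
def group_into_storylines(articles, min_shared=2):
    """Toy illustration of storyline grouping: two articles join the
    same storyline when they mention enough of the same entities
    (people, places, things). Purely illustrative -- not Google's
    actual algorithm.

    `articles` is a list of (title, set_of_entities) tuples.
    """
    storylines = []  # each storyline: (entity set, list of titles)
    for title, entities in articles:
        for story_entities, titles in storylines:
            if len(entities & story_entities) >= min_shared:
                story_entities |= entities  # the storyline evolves
                titles.append(title)
                break
        else:
            storylines.append((set(entities), [title]))
    return storylines

articles = [
    ("Volcano erupts in Hawaii", {"Hawaii", "Kilauea", "eruption"}),
    ("Evacuations ordered near Kilauea", {"Hawaii", "Kilauea", "evacuation"}),
    ("New phone announced", {"Android", "Google"}),
]
print(len(group_into_storylines(articles)))  # 2 storylines
```

The first two headlines share two entities and merge into one storyline whose entity set grows as the story evolves; the third starts a storyline of its own.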
But it is with the next phase of Google Assistant that the real power of AI and language understanding comes to the fore. The new Assistant will now be naturally conversational, visually assistive, and helpful in getting things done. Users will be able to have a natural back-and-forth conversation with the Google Assistant without repeating “Hey Google” for each follow-up request.
The Google Assistant is also coming to navigation in Google Maps this summer, so you can send text messages, play music and podcasts, and get information without leaving the navigation screen. Additionally, the Google Assistant will be used to help users make restaurant reservations, schedule hair salon appointments, and get holiday hours over the phone. Powered by a new technology called Google Duplex, the Assistant can call businesses on users’ behalf and understand complex sentences, fast speech, and long remarks, so it can get tasks done through a phone conversation. As of today, Google Assistant is available on more than 500 million devices, works with over 5,000 connected home devices, is available in cars from more than 40 brands, and will be available in more than 30 languages and 80 countries by the end of the year.
Machine learning is also at the core of Android P, which is built on the key themes of intelligence, simplicity and digital well-being. The new Adaptive Battery prioritizes battery power for the apps and services you use the most, to help you squeeze the most out of your battery. Machine learning also powers Adaptive Brightness, which learns how an individual user likes to set the brightness slider given their surroundings. Android P will also help users better navigate their day: a new feature called App Actions, for instance, will help people get to their next task more quickly by predicting what they want to do next.
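A feature like Adaptive Brightness can be understood as a simple on-device learning loop: start from a generic curve mapping ambient light to screen brightness, then nudge it toward the user’s manual corrections. The sketch below is a hypothetical illustration of that idea only; the class name, the baseline curve and the learning rate are all assumptions, not Android’s implementation.

```python
import math

class AdaptiveBrightness:
    """Toy model of personalized auto-brightness: a generic
    log-curve maps ambient light (lux) to brightness (0-1),
    plus a per-user offset learned from slider adjustments.
    Illustrative only -- not Android P's actual algorithm."""

    def __init__(self, learning_rate=0.3):
        self.learning_rate = learning_rate
        self.offset = 0.0  # learned user-preference correction

    def baseline(self, lux):
        # Generic curve: brighter surroundings -> brighter screen.
        return min(1.0, max(0.05, math.log10(1 + lux) / 4))

    def suggest(self, lux):
        return min(1.0, max(0.0, self.baseline(lux) + self.offset))

    def observe_adjustment(self, lux, user_setting):
        # User moved the slider: nudge the offset toward their choice.
        error = user_setting - self.suggest(lux)
        self.offset += self.learning_rate * error

model = AdaptiveBrightness()
# The user repeatedly dims the screen in a dim room (~50 lux),
# so future suggestions at that light level drift downward.
for _ in range(10):
    model.observe_adjustment(50, 0.2)
```

After a few corrections the suggested brightness at 50 lux sits near the user’s preferred 0.2 rather than the generic baseline, which is the essence of what the keynote described.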
Finally, Pichai wants to push Google’s AI capabilities to enhance digital well-being and allow users to achieve a desired balance with technology.
Google’s focus is on giving users a greater understanding of how they use their phones, as well as more controls, he says. A new Dashboard in Android, for instance, shows you how you’re spending time on your device, including time spent in apps, how many times you’ve unlocked your phone, and how many notifications you’ve received.
App Timer lets you set time limits on apps; it will nudge you when you’re close to your limit and then gray out the icon to remind you of your goal. The new Do Not Disturb mode silences not just phone calls and notifications, but also all the visual interruptions that pop up on your screen.
And if you turn your phone over on the table, it automatically enters Do Not Disturb so you can focus on being present. Finally, Wind Down will switch on Night Light when it gets dark, and it will turn on Do Not Disturb and fade the screen to gray at your chosen bedtime to help you remember to get to sleep at the time you want.
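The App Timer behaviour described above (track usage, nudge near the limit, gray out past it) reduces to straightforward accounting. The following is a minimal hypothetical sketch; the class, thresholds and app names are invented for illustration and are not Android P’s implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AppTimer:
    """Hypothetical sketch of an App Timer-style usage limit:
    track minutes per app, nudge the user near a daily limit,
    and 'gray out' the app once the limit is reached.
    Illustrative only -- not Android P's implementation."""
    limits: dict                    # app name -> daily limit (minutes)
    usage: dict = field(default_factory=dict)
    nudge_threshold: float = 0.9    # warn at 90% of the limit

    def record_usage(self, app, minutes):
        self.usage[app] = self.usage.get(app, 0) + minutes

    def status(self, app):
        limit = self.limits.get(app)
        if limit is None:
            return "ok"
        used = self.usage.get(app, 0)
        if used >= limit:
            return "grayed_out"
        if used >= self.nudge_threshold * limit:
            return "nudge"
        return "ok"

timer = AppTimer(limits={"social": 60})
timer.record_usage("social", 55)
print(timer.status("social"))   # near the limit -> "nudge"
timer.record_usage("social", 10)
print(timer.status("social"))   # over the limit -> "grayed_out"
```

The nudge-before-block ordering mirrors the feature as described in the keynote: a reminder first, then a visual deterrent rather than a hard lockout.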
The author is at the I/O Conference in Mountain View, California as a guest of Google.