Google plans to give details on new search features powered by artificial intelligence a day after Microsoft Corp. said it was building the technology behind the chatbot ChatGPT into its Bing search engine.
At an event Wednesday in the French capital, the Alphabet Inc. unit plans to describe new types of search results that, according to a screenshot of one such result, include longer and more direct textual responses generated for complex queries, rather than the snippets, short direct answers and links to outside sites now available through the search engine.
The new AI-powered features will “distill complex information and multiple perspectives into easy-to-digest formats,” Alphabet Chief Executive Sundar Pichai said in an internal company email on Monday.
“We have a lot of hard and exciting work ahead to build these technologies into our products and continue bringing the best of Google AI to improve people’s lives,” Mr. Pichai wrote in the email, which was viewed by The Wall Street Journal.
Google’s announcement of AI enhancements to its search engine is its latest counteroffensive in a tit-for-tat AI battle. It comes a day after Microsoft announced that it would integrate ChatGPT technology into its Bing search engine, allowing users to pose questions in natural language and receive direct responses. Microsoft said last month it is making a multiyear, multibillion-dollar investment in OpenAI, maker of ChatGPT and other AI tools such as Dall-E 2 image-generation software.
Google said Monday it was rolling out a new ChatGPT-like AI service called Bard to a select set of testers, with a broader public launch in coming weeks. The new experimental service generates textual responses to questions posed by users, based on information drawn from the web. Bard, unlike the initial version of ChatGPT, can access information from the real world through Google Search, according to a screenshot of a response by Bard viewed by the Journal.
Those developments are part of a fast-spreading AI war over the commercial potential of so-called generative AI—artificial intelligence that can create content in response to short user inputs—since OpenAI moved to release ChatGPT publicly late last year. Microsoft has promised to quickly integrate capabilities from OpenAI's generative AI tools across all of its products, as well as to make them available to outside developers.
Others are jumping into the fray as well. China’s Baidu Inc. is developing an AI-powered chatbot similar to ChatGPT called “Ernie bot,” which it plans to launch next month.
Google said it plans to show Wednesday how it is making the way people search for information “more natural and intuitive than ever before.” It didn’t elaborate ahead of the event, which is set to be streamed live on Google’s YouTube video service.
In an example of a new AI search page that Google published Monday, Google showed a three-paragraph textual response to the query, “Is piano or guitar easier to learn and how much practice does each need?” atop a results page that includes a box of related links with thumbnail images.
In the AI-generated text response, the search engine cites opposing perspectives favoring the ease of each musical instrument and says music teachers recommend at least an hour of daily practice for beginners.
Google has been one of the primary innovators making fundamental artificial-intelligence tools that have helped power some of the latest applications popularizing the technology. Until the new AI race kicked off, the company mostly built AI quietly into its services, such as by improving its automated translations, rather than making a generalized AI product available to the public.
In part, according to Google executives, that is because the company has been reluctant to roll out tools that, like ChatGPT, can sometimes spout false information or nonsense in response to user queries. The company has also been under scrutiny by researchers, regulators and its own staffers to police its own use of AI.
In 2018, Google created a series of AI principles that it said it would apply to its work going forward. Those rules include requirements that the AI tools should be socially beneficial, they should avoid reinforcing biases and they should be built and tested for safety in constrained environments.
Researchers cite many examples of the potential dangers. AI technology known as deep fakes, for instance, can create video that appears to be of real people saying or doing things they never said or did.