Google warns its own staff about chatbots including Bard, advises not to enter confidential materials: Report

Alphabet has warned its staff about the use of chatbots, including its own Bard, citing its policy on safeguarding information. The company has advised employees not to enter confidential materials into AI chatbots, as human reviewers may read the chats.

Edited By Karishma Pranav Bhavsar
Published16 Jun 2023, 07:23 AM IST
The logo for Google LLC is seen at the Google Store Chelsea in Manhattan, New York City (Image: Reuters)

Alphabet Inc. is warning its staff about how they use chatbots, including its own Bard, four people familiar with the matter told Reuters. The caution comes even as the company markets the program around the world.

The company has advised employees not to enter its confidential materials into AI chatbots, the people said and the company confirmed, citing its long-standing policy on safeguarding information.

The chatbots, among them Bard and ChatGPT, are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer myriad prompts. The report added that human reviewers may read the chats, and that researchers have found similar AI could reproduce the data it absorbed during training, creating a leak risk.


Some of the people also told Reuters that Alphabet has alerted its engineers to avoid direct use of computer code that chatbots can generate.

When Reuters asked for comment, the company said Bard can make undesired code suggestions, but it helps programmers nonetheless. Google also said it aimed to be transparent about the limitations of its technology.


The concerns show how Google wishes to avoid business harm from software it launched in competition with ChatGPT.

At stake in Google’s race against ChatGPT’s backers OpenAI and Microsoft Corp are billions of dollars of investment and still untold advertising and cloud revenue from new AI programs.

Google’s caution also reflects what’s becoming a security standard for corporations, namely warning personnel about using publicly available chat programs.

A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung, Amazon.com and Deutsche Bank, the companies told Reuters. Apple, which did not return requests for comment, reportedly has as well.


A survey of nearly 12,000 respondents, including professionals from top US-based companies, found that some 43 percent were using ChatGPT or other AI tools as of January, often without telling their bosses.

Google told Reuters it has had detailed conversations with Ireland's Data Protection Commission and is addressing regulators' questions, after a Politico report Tuesday that the company was postponing Bard's EU launch this week pending more information about the chatbot's impact on privacy.

Worries about sensitive information

Such technology can draft emails, documents, even software itself, promising to vastly speed up tasks. Included in this content, however, can be misinformation, sensitive data or even copyrighted passages from a “Harry Potter” novel.

A Google privacy notice updated on June 1 states: "Don’t include confidential or sensitive information in your Bard conversations."

Some companies have developed software to address such concerns. For instance, Cloudflare, which defends websites against cyberattacks and offers other cloud services, is marketing a capability for businesses to tag and restrict some data from flowing externally.

Google and Microsoft also are offering conversational tools to business customers that will come with a higher price tag but refrain from absorbing data into public AI models. The default setting in Bard and ChatGPT is to save users' conversation history, which users can opt to delete.

It "makes sense" that companies would not want their staff to use public chatbots for work, said Yusuf Mehdi, Microsoft's consumer chief marketing officer.

"Companies are taking a duly conservative standpoint," said Mehdi, explaining how Microsoft's free Bing chatbot compares with its enterprise software. "There, our policies are much more strict."

Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, though a different executive there told Reuters he personally restricted his use.

Matthew Prince, CEO of Cloudflare, said that typing confidential matters into chatbots was like "turning a bunch of PhD students loose in all of your private records."

(With inputs from Reuters)
