
IBM framing policy on Gen AI

IBM is developing a policy to regulate the use of third-party generative AI tools, such as OpenAI's ChatGPT and Google's Bard, by its employees. The company is evaluating the segment and its veracity, as such tools are built on untrusted sources that can't be used, said Gaurav Sharma, vice president at IBM India Software Labs. IBM won't be the first company to look at regulating the use of ChatGPT. Samsung Electronics, Amazon, Apple and global banks, including Goldman Sachs, JP Morgan and Wells Fargo, are among those to have restricted internal use of ChatGPT due to concerns about data security.

IBM won’t be the first company to look at regulating the use of ChatGPT. (Bloomberg News)

IBM is in the process of drafting a policy that will define how third-party generative artificial intelligence (AI) tools such as OpenAI’s ChatGPT and Google’s Bard are used by its employees, three senior executives at the technology giant said at its AI Innovation Day event in Bengaluru on 20 June.

Speaking on the rise of generative AI and how such tools are used for internal processes, Gaurav Sharma, vice president at IBM India Software Labs, said the company is evaluating the segment and its veracity, “since these tools are built on untrusted sources that can’t be used." He added that a policy is “still being framed" around the use of generative AI applications such as ChatGPT.

Vishal Chahal, director of automation at IBM India Software Labs, further affirmed the development of an internal policy on the use of such tools.

Work on the policy remains under way, but so far no outright bans have been put in place. “A general education has been conducted around not putting our code into ChatGPT, but we haven’t banned it," said Shweta Shandilya, director at IBM India Software Labs (Kochi).

“With every new technology such as the use of other generative AI tools (beyond ChatGPT), deliberations around its usage are an ongoing process," a spokesperson for IBM said in response to a query on the framing of the internal policy on ChatGPT.

IBM won’t be the first company to look at regulating the use of ChatGPT. On 2 May, Bloomberg reported that South Korea’s Samsung Electronics had decided to ban the use of ChatGPT among employees after sensitive internal data was deemed to have been leaked. On 25 January, Insider reported that Amazon had issued a similar internal email, asking staff not to use ChatGPT due to concerns over sharing sensitive internal data with OpenAI. On 18 May, The Wall Street Journal reported that Apple had taken a similar route.

Global banks Goldman Sachs, JP Morgan and Wells Fargo are also reported to have restricted internal use of ChatGPT, out of concern that sensitive client and customer data could leak into OpenAI’s training data.

IBM’s policy comes as a report, published on 20 June by Singapore-based cybersecurity firm Group-IB, claimed that credentials from over 100,000 ChatGPT accounts had been scraped and sold on dark web marketplaces.

However, on 22 June, OpenAI said the stolen data was a result of “commodity malware on devices, and not an OpenAI breach."

Explaining why such internal bans are taking place, Jaya Kishore Reddy, co-founder and chief technology officer at a Mumbai-based AI chatbot developer, said, “There are a lot of chances that generative AI tools can generate misinformation. There is an accuracy problem, and people may even misinterpret the generated information. Further, the data fed into these platforms are used to train and fine-tune responses — this may result in leakage of a company’s confidential information."

On 27 February, Mint reported that companies are wary of deploying tools such as ChatGPT, with concerns including factors such as hallucination of data, potentially inaccurate and misleading information, and no safeguards on retrieval or deletion of sensitive corporate data.

Bern Elliot, vice-president and analyst at Gartner, said at the time, “It is important to understand that ChatGPT is built without any real corporate privacy governance, which leaves all the data that it collects and is fed without any safeguard. This would make it challenging for organizations such as media, or even pharmaceuticals, since deploying GPT models in their chatbots will leave them with no safeguard in terms of privacy. A future version of ChatGPT, backed by Microsoft through its Azure platform, which could be offered to businesses for integration, could be a safer bet in the near future."

Since then, OpenAI has introduced better privacy controls. On 25 April, the company said via a blog post that users can turn off conversation history to have their usage data permanently deleted from its servers after 30 days. It also affirmed that a “for business" version of ChatGPT is under development, which would allow companies greater control over their data. Reddy added that companies are presently opting for enterprise-grade application programming interfaces (APIs) from companies like OpenAI that ensure data security, or building their own in-house models.

Updated: 25 Jun 2023, 10:02 PM IST