As the ChatGPT wave gains momentum, Samsung employees have come under fire for accidentally leaking confidential data. The company has reportedly seen three separate instances of employees leaking data within a span of 20 days. Engineers at the semiconductor division had been allowed to use ChatGPT to fix problems with the source code.
However, staffers erroneously entered confidential data - such as the source code for a new program - while interacting with the chatbot. In one notable instance, an employee even shared a recording of a company meeting - which contained sensitive information - while attempting to convert it into notes.
A report by The Register indicates that the company's chief had warned employees against such gaffes.
“If a similar accident occurs even after emergency information protection measures are taken, access to ChatGPT may be blocked on the company network,” he was quoted as warning.
Because ChatGPT retains the data it is fed in order to train and improve itself, these internal Samsung details are now in the hands of the chatbot's maker.
“As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements…Your conversations may be reviewed by our AI trainers to improve our systems,” OpenAI explains on its website.
The platform however allows individuals to file data deletion requests.
Following the leaks, Samsung Semiconductor is reportedly developing its own AI tool for internal employee use. According to reports, however, it will be limited to processing prompts under 1,024 bytes.