OpenAI CEO Sam Altman has now responded to users jailbreaking ChatGPT to bypass its built-in content restrictions. Ever since ChatGPT launched in November last year, users have been trying to find ways to circumvent the safeguards placed by OpenAI. The company, for its part, had stayed silent on users' attempts to bypass its security measures while rolling out regular updates to the chatbot.
In a conversation with computer scientist and podcaster Lex Fridman, Altman said, “We want users to have a lot of control and get the model to behave the way they want within some very broad bounds."
Explaining why users jailbreak ChatGPT, Altman said, “I think the whole reason for jailbreaking is right now, we haven't yet figured out how to like give that to people. And the more we solve that problem, I think the less need there will be for jailbreaking."
The OpenAI CEO also shared his experience with jailbreaking the first-generation iPhone early in his life. He said, “You know, when I was a kid, basically, I worked on jailbreaking an iPhone, the first iPhone I think, and I thought it was so cool."
The new comments from the OpenAI boss come at a time when users have been experimenting with ChatGPT in a bid to ‘jailbreak’ it. Users have been giving ChatGPT prompts instructing it to behave like ‘DAN’, short for Do Anything Now. Once in its DAN avatar, ChatGPT speaks about topics it would normally refuse to address on the grounds that it is just a language model.
Recently, OpenAI unveiled its newest and most powerful language model yet, GPT-4, which can take inputs in both text and image formats. The company claims GPT-4 scores in the top 10 percent of candidates on competitive exams in the US.