A talented scribe with stunning creative abilities has made a sensational debut. ChatGPT, a text-generation system from OpenAI, has been writing essays, screenplays and limericks. Even its jokes can be funny. Many scientists in the field of artificial intelligence (AI) have marvelled at how humanlike it sounds. And it will soon get better. OpenAI is expected to release its next upgrade, GPT-4, shortly, and early testers say it performs even better than its predecessor.
But all these improvements come with a price. The better that AI gets, the harder it will be to tell human and machine-made text apart. OpenAI must prioritize efforts to label the work of machines, or we could be overwhelmed with a confusing mishmash of real and fake information online. For now, the onus is on people to be honest. OpenAI’s policy for ChatGPT states that when sharing content from its system, users should clearly indicate that it is generated by AI “in a way that no reader could possibly miss” or misunderstand.
To that, I say, good luck. AI will almost certainly help kill the college essay. Governments will use it to flood social networks with propaganda, spammers to write fake reviews and ransomware gangs to write better phishing emails. None will point to the machine behind the curtain. And you will just have to take my word for it that what you’re reading was drafted by a human.
AI-generated text needs some kind of watermark, much as stock photo sellers protect their images and movie studios deter piracy. OpenAI already flags the output of another of its tools, DALL-E, with an embedded signature in each image it generates. But it is much harder to track the provenance of text. How do you put a hard-to-remove label on words?
The most promising approach is cryptography. In a lecture last month at the University of Texas at Austin, OpenAI researcher Scott Aaronson gave a glimpse of how the company might distinguish text generated by the even more human-like GPT-4. He explained that words could be converted into strings of tokens, representing punctuation marks, letters or parts of words, drawn from a vocabulary of about 100,000 tokens in total. The GPT system would then decide the arrangement of those tokens (reflecting the text itself) in such a way that it could be detected using a cryptographic key known only to OpenAI. “This won’t make any detectable difference to the end user,” Aaronson said. Anyone who uses a GPT tool would find it hard to scrub off the signal. The best way to defeat it would be to use another AI system to paraphrase the GPT tool’s output.
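To make the idea concrete, here is a toy sketch of keyed watermarking, not OpenAI’s actual scheme: a secret key feeds a pseudorandom function that scores each candidate token given the preceding context; the generator leans toward high-scoring tokens, and a detector holding the same key checks whether a text’s average score is suspiciously high. The key, vocabulary and candidate-sampling step are all invented stand-ins for a real language model’s machinery.

```python
import hmac, hashlib, random

SECRET_KEY = b"example-private-key"  # hypothetical; a real key would stay with the provider
VOCAB = list(range(1000))            # toy stand-in for a ~100,000-token vocabulary

def prf_score(key, context, token):
    """Pseudorandom score in [0, 1), keyed by the secret and the recent context."""
    msg = repr((tuple(context[-4:]), token)).encode()
    digest = hmac.new(key, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def generate(key, length, candidates_per_step=8, seed=0):
    """Toy generator: sample candidate tokens (standing in for a model's top
    choices) and emit the one the keyed function scores highest."""
    rng = random.Random(seed)
    out = []
    for _ in range(length):
        candidates = rng.sample(VOCAB, candidates_per_step)
        out.append(max(candidates, key=lambda t: prf_score(key, out, t)))
    return out

def detect(key, tokens):
    """Average keyed score; watermarked text scores well above the 0.5 baseline."""
    scores = [prf_score(key, tokens[:i], t) for i, t in enumerate(tokens)]
    return sum(scores) / len(scores)

marked = generate(SECRET_KEY, 200)
rng = random.Random(1)
unmarked = [rng.choice(VOCAB) for _ in range(200)]
print(detect(SECRET_KEY, marked))    # well above 0.5
print(detect(SECRET_KEY, unmarked))  # near 0.5
```

Because only the key holder can recompute the scores, detection stays private, which is exactly the public-versus-private dilemma the next paragraphs turn to.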
But even assuming his method works outside of a lab setting, OpenAI still has a quandary. Does it release the watermark keys to the public, or hold them privately?
If the keys are made public, professors everywhere could run their students’ essays through that software to make sure they aren’t machine-generated, in the same way we check for plagiarism. But that would also make it possible for bad actors to detect the watermark and remove it.
Keeping the keys private could be a business model for OpenAI: charging people for access. IT administrators could pay a subscription to scan incoming email for phishing attacks, while colleges could pay a group fee for their professors—and the price to use the tool would have to be high enough to put off ransomware gangs and propaganda writers. OpenAI would then essentially make money from halting the misuse of its own creation.
We should also bear in mind that technology companies don’t have the best track record for preventing their systems from being misused, especially when they are unregulated and profit-driven. But the strict filters that OpenAI has already put in place to stop its text and image tools from generating offensive content are a good start.
Now OpenAI needs to prioritize a watermarking system for its text. Our future looks set to become awash with machine-generated information, not just from OpenAI’s increasingly popular tools, but from a broader rise in fake, “synthetic” data used to train AI models and replace human-made data. Images, videos, music and more will increasingly be artificially generated to suit our hyper-personalized tastes.
It’s possible of course that our future selves won’t care if a catchy song or cartoon originated from AI. Human values change over time; we care much less now about memorizing facts and driving directions than we did 20 years ago, for instance. So at some point, watermarks might not seem so necessary.
But for now, with tangible value placed on human ingenuity that others pay for, or grade, and with the near certainty that OpenAI’s tool will be misused, we need to know where the human brain stops and machines begin. A watermark would be a good start.
Parmy Olson is a Bloomberg columnist who covers the technology sector