Climate change deniers are pushing an AI-generated paper questioning human-induced warming, prompting experts to warn against the rise of research that is inherently flawed but marketed as neutral and scrupulously logical, AFP reported. The paper rejects climate models of human-induced global warming and has been widely cited on social media as the first "peer-reviewed" research led by artificial intelligence (AI) on the topic.
Titled "A Critical Reassessment of the Anthropogenic CO2-Global Warming Hypothesis," it contains references contested by the scientific community, according to experts interviewed by AFP. Computational and ethics researchers also cautioned against claims of neutrality in papers that use AI as an author. The study -- which claims to have been written entirely by Elon Musk's Grok 3 AI -- has gained traction online.
There is overwhelming scientific consensus linking fossil fuel combustion to rising global temperatures and increasingly severe weather disasters. The AI-generated paper seeks to deny this.
Academics have warned that the surge of AI in research, despite potential benefits, risks creating an illusion of objectivity and insight in scientific work. "Large language models do not have the capacity to reason. They are statistical models predicting future words or phrases based on what they have been trained on. This is not research," argued Mark Neff, an environmental sciences professor.
A team of researchers from several US universities, led by Abhilasha Ravichander of Carnegie Mellon University, investigated the usually opaque sources used to train Large Language Models (LLMs). In their paper on the study, the researchers propose a new method for identifying training data "memorized" by models. "In this work, we demonstrate a new method to identify training data known to proprietary LLMs like GPT-4 without requiring any access to model weights or token probabilities, by using information-guided probes," the paper says. The method relies on "surprisal" text: unusual words and phrases of the kind a human might use in a work of fiction or in a publication like The New York Times, but which a predictive language model should not be able to guess. If a model can nonetheless reproduce them, the copyrighted text has likely been memorized wholesale. "Our work builds on a key observation: text passages with high surprisal are good search material for memorization probes. By evaluating a model's ability to successfully reconstruct high-surprisal tokens in text, we can identify a surprising number of texts memorized by LLMs," the researchers write in the paper's abstract.
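The intuition behind the probe can be sketched in a few lines of code. This is a toy illustration, not the researchers' actual method: the "reference model" here is just a unigram frequency table used to score surprisal, and `query_model` is a hypothetical stand-in for a black-box LLM completion call. All names and the demo passage are invented for illustration.

```python
import math
from collections import Counter

def surprisal_scores(tokens, reference_counts, total):
    """Per-token surprisal -log2 p(token) under a toy unigram reference
    model, with add-one smoothing so unseen tokens get the highest scores."""
    vocab = len(reference_counts)
    return [-math.log2((reference_counts.get(t, 0) + 1) / (total + vocab))
            for t in tokens]

def probe_memorization(passage, reference_counts, total, query_model, k=3):
    """Mask the k highest-surprisal tokens and ask the model to fill each
    one in from its left context alone. Returns the fraction reconstructed
    exactly; a high score suggests the passage was memorized verbatim."""
    tokens = passage.split()
    scores = surprisal_scores(tokens, reference_counts, total)
    masked = sorted(range(len(tokens)), key=lambda i: scores[i],
                    reverse=True)[:k]
    hits = 0
    for i in masked:
        prompt = " ".join(tokens[:i])  # left context only
        if query_model(prompt) == tokens[i]:
            hits += 1
    return hits / k

# Toy demo: a mock "model" that has memorized the passage verbatim, so it
# reconstructs even the rare tokens a unigram model would never predict.
corpus = "the cat sat on the mat and the dog ran in the park".split()
counts = Counter(corpus)
passage = "the quixotic zeppelin hovered over the mat"

def memorized_model(prompt):
    toks = passage.split()
    n = len(prompt.split())
    return toks[n] if n < len(toks) else ""

print(probe_memorization(passage, counts, sum(counts.values()),
                         memorized_model))  # a memorized passage scores 1.0
```

A non-memorizing model would fail exactly on the high-surprisal tokens ("quixotic", "zeppelin"), which is why those tokens make good probe material.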
The study could have a bearing on the several lawsuits against the maker of ChatGPT and other AI companies claiming that copyrighted material was used to train their models, allegations the companies have denied and which have so far been difficult to prove.
It took only seconds for the judges on a New York appeals court to realize that the man addressing them from a video screen — a person about to present an argument in a lawsuit — not only had no law degree, but didn't exist at all, AP reports. The latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world unfolded March 26 under the stained-glass dome of New York State Supreme Court Appellate Division's First Judicial Department, where a panel of judges was set to hear from Jerome Dewald, a plaintiff in an employment dispute. A man appeared on a video screen, seemingly a lawyer representing Dewald. In fact, it was an AI avatar created by the plaintiff, and the judges caught on quickly.
Dewald later penned an apology to the court, saying he hadn't intended any harm. He didn't have a lawyer representing him in the lawsuit, so he had to present his legal arguments himself. And he felt the avatar would be able to deliver the presentation without his own usual mumbling, stumbling and tripping over words.
In an interview with The Associated Press, Dewald said he applied to the court for permission to play a prerecorded video, then used a product created by a San Francisco tech company to create the avatar. Originally, he tried to generate a digital replica that looked like him, but he was unable to accomplish that before the hearing.