
Bard, ChatGPT-like chatbots used to create dozens of news sites: NewsGuard report

NewsGuard discovered several instances where the AI chatbots generated false information for articles published on these websites.

According to NewsGuard co-Chief Executive Officer Gordon Crovitz, companies such as OpenAI and Google should be cautious in training their models to prevent them from fabricating news, as the group's report demonstrated. (REUTERS)

NewsGuard, a news-rating group, published a report on Monday revealing that numerous news websites generated by AI chatbots are appearing online. The report highlights concerns about how this technology could amplify existing fraud techniques.

Bloomberg conducted an independent review of 49 websites and found a wide range of content, from sites with generic names like News Live 79 and Daily Business Post that masquerade as breaking news sites, to sites offering lifestyle tips, celebrity news, and sponsored content. What they all have in common is that none of them disclose that their content is generated by AI chatbots such as OpenAI's ChatGPT or potentially Google Bard from Alphabet Inc. These chatbots are capable of generating detailed text based on simple user prompts, and many of these websites began publishing this year as the use of AI tools became more widespread.

NewsGuard discovered several instances where the AI chatbots generated false information for articles published on these websites. For example, in April, CelebritiesDeaths.com published an article claiming that "Biden [was] dead" and that Kamala Harris was now acting president. Another website created a fake obituary for an architect that included fabricated details about their life and work. Additionally, TNewsNetwork published an unverified story about the deaths of thousands of soldiers in the Russia-Ukraine war, based solely on a YouTube video.

Most of these websites seem to be content farms - low-quality websites created by anonymous sources that generate posts to attract advertising revenue. These sites are located in various parts of the world and publish content in multiple languages such as English, Portuguese, Tagalog, and Thai, as per the NewsGuard report.

A few of these sites made money through advertising "guest posting," a service that allows people to pay for mentions of their business on these websites to boost their search ranking. Some sites also seemed to focus on building a social media following, like ScoopEarth.com, which produces celebrity biographies and has a related Facebook page with 124,000 followers.

Over 50 percent of the identified AI chatbot-generated sites generate income from programmatic ads, which are automatically bought and sold using algorithms. This poses a significant challenge for Google, whose advertising technology generates revenue for half of the sites, and whose AI chatbot Bard may have been used by some of them.

According to NewsGuard co-Chief Executive Officer Gordon Crovitz, companies such as OpenAI and Google should be cautious in training their models to prevent them from fabricating news, as the group's report demonstrated. Crovitz, a former publisher of the Wall Street Journal, stated that using AI models known for creating false information to produce websites that resemble news outlets is a form of fraud disguised as journalism.

Although OpenAI did not immediately respond to a request for comment, the company has previously stated that it employs a combination of human reviewers and automated systems to detect and prevent the misuse of its model, which includes issuing warnings or banning users in severe cases.

When asked by Bloomberg whether the AI-generated websites breached their advertising policies, Google spokesperson Michael Aciman responded that the company prohibits ads from running alongside harmful or spammy content, as well as content that has been plagiarized from other sources. Aciman added that they prioritize the quality of the content rather than its creation process when enforcing these policies, and that they block or remove ads if they detect any violations.

Following Bloomberg's inquiry, Google took action by removing ads from individual pages on some sites and removing ads entirely from websites where pervasive violations were found. The company clarified that AI-generated content is not inherently a violation of its ad policies but is evaluated against existing publisher policies. However, using automation, including AI, to manipulate search result rankings violates the company's spam policies. Google stated that it regularly monitors abuse trends and adjusts its policies and enforcement systems accordingly to prevent abuse within its ads ecosystem.

Noah Giansiracusa, an associate professor of data science and mathematics at Bentley University, said the scheme may not be new, but it’s gotten easier, faster and cheaper.

The actors pushing this brand of fraud “are going to keep experimenting to find what’s effective,” Giansiracusa said. “As more newsrooms start leaning into AI and automating more, and the content mills are automating more, the top and the bottom are going to meet in the middle” to create an online information ecosystem with vastly lower quality.

NewsGuard researchers employed various methods to identify the AI-generated news websites. They ran keyword searches for phrases typically produced by AI chatbots, such as “as an AI large language model” and “my cutoff date in September 2021,” using tools such as CrowdTangle, a social media analysis platform owned by Facebook, and Meltwater, a media monitoring platform. The researchers also used GPTZero, an AI text classifier that assesses whether a passage is likely to have been written entirely by AI, to evaluate the articles.
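The keyword-search step described above amounts to scanning article text for telltale error phrases that chatbots sometimes leave behind when a prompt fails. A minimal sketch of that idea is below; the phrase list contains the two examples from the report plus one assumed extra, and a real investigation would run such searches at scale through platforms like CrowdTangle or Meltwater rather than over local strings.

```python
# Telltale phrases that AI chatbots can leave in published articles.
# The first two are quoted in the NewsGuard report; the third is a
# hypothetical addition for illustration.
TELLTALE_PHRASES = [
    "as an ai large language model",
    "my cutoff date in september 2021",
    "i cannot fulfill this request",  # assumed example, not from the report
]

def find_ai_telltales(article_text: str) -> list[str]:
    """Return the telltale phrases present in an article, ignoring case."""
    lowered = article_text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]

# Example: an article that accidentally published a raw chatbot reply.
sample = ("Breaking news update: As an AI large language model, I cannot "
          "verify events after my cutoff date in September 2021.")
print(find_ai_telltales(sample))
```

Simple substring matching like this only catches sites that publish raw chatbot output verbatim, which is why the researchers paired it with a statistical classifier such as GPTZero for text that has no obvious markers.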

The researchers found that each of the analyzed sites contained at least one error message commonly found in AI-generated text, and that some featured fake author profiles.

 One website, CountyLocalNews.com, published an article written by an AI chatbot that discussed a false conspiracy theory about mass human deaths caused by vaccines. Although many of the identified sites did not have high levels of engagement, some of them generated revenue through programmatic advertising services such as MGID and Criteo. 

Google's ad technology, used by two dozen of the identified sites, prohibits ads from appearing on pages with low-value or replicated content, regardless of how that content was generated.

Bentley professor Giansiracusa expressed concern about how cheap and accessible the scheme has become for its perpetrators.

(With inputs from Bloomberg)



Catch all the Technology News and Updates on Live Mint.
Published: 01 May 2023, 04:09 PM IST