Large language models pose growing security risks

Data sent to LLMs can include sensitive corporate information, (Illustration: WSJ)
Data sent to LLMs can include sensitive corporate information, (Illustration: WSJ)

Summary

More powerful and pervasive large language models are creating a new cybersecurity challenge for companies.

Cybersecurity threats are bound to multiply as large language models are commoditized, a process that seemed to take a big leap forward when China’s DeepSeek apparently showed LLMs can be built at lower cost than previously thought.

More powerful and pervasive large language models are creating a new cybersecurity challenge for companies.

The risks posed by LLMs, a form of generative artificial intelligence that communicates through language in a humanlike way, are already manifold. There is, for example, a danger that sensitive corporate or personal information inadvertently or deliberately will be exposed to models widely accessible to the public. There is also a possibility models can bring unsafe code or data into a company.

Such threats are bound to multiply as LLMs are commoditized, a process that seemed to take a big leap forward when China’s DeepSeek apparently showed LLMs can be built at lower cost than previously thought.

Former Israeli Prime Minister Naftali Bennett, whose career as a tech entrepreneur predates his role in politics and government, frames the global AI race as a “slippery road car chase" in which U.S. companies at the front of the pack are pursued by rivals closing the distance much faster than expected.

A comfortable lead allowed U.S. companies to highly emphasize governance and security guardrails, which might have slowed some aspects of U.S. innovation and left an opening for competitors. Now, as aspects of the race unexpectedly tighten, some U.S. companies could be motivated to reassess the emphasis on governance and security, potentially creating new dangers.

“We’re at such a transformative moment in technological history. It creates a huge opportunity, but also a huge risk," said Bennett, who sits on the board of Lasso Security, an early stage startup focused on LLM security.

Data sent to LLMs can include sensitive corporate information, and data received from LLMs trained on the internet are susceptible to malicious code, intellectual property infringements, and copyright issues. So-called prompt injections are another form of attack in which bad actors manipulate models to take actions.

DeepSeek’s R1 model is more susceptible than others to “jailbreak" attacks designed to reveal illicit information, The Wall Street Journal reported this month. In such an attack, someone might trick an LLM into divulging how to make a weapon of mass destruction by telling it to imagine it is writing a movie script.

Matthew Alan Livelsberger used advanced generative AI to research explosives and ignition mechanisms before blowing up a Tesla Cybertruck in Las Vegas, Sheriff Kevin McMahill of the Las Vegas Metropolitan Police Department said on Jan. 8. While illicit information has been available online, GenAI potentially could make it more widely accessible.

A lack of structure

Lasso co-founder and Chief Executive Elad Schulman says the core risk lies in the unstructured and conversational nature of interactions with LLMs. Traditional security measures often focus on protecting individual events and structured data, so they can be ineffective against sophisticated attacks that exploit the conversational context and unstructured nature of LLM interactions.

There is no single step or tool for securing LLMs, but companies can start by understanding the lineage of data used in training and operating the models, and be careful not to implicitly trust their output of LLMs, according to Jim Siders, chief information officer at data analytics giant Palantir. Human oversight and review of the models is critical, he said.

The most important thing for companies to understand is where their responsibility lies at any given moment or in any given situation, and to verify their tech suppliers and partners are holding up their end of the bargain.

Technology advances much faster than the government’s ability to enact policy, and that is especially true for AI—so companies should assume they are on their own in this effort, at least for now.

In the absence of government leadership on securing LLMs, corporations should push for unified policies and cooperation across the industry.

“I know there are a lot of people in the government who are thinking about it. I don’t think we should stop pressing for that holistic solution," Siders said, referring to LLM security. “This can’t and shouldn’t be a purely private-sector problem forever."

Write to Steven Rosenbush at steven.rosenbush@wsj.com

Catch all the Technology News and Updates on Live Mint. Download The Mint News App to get Daily Market Updates & Live Business News.
more

topics

MINT SPECIALS