NEW YORK—Cybersecurity experts said that the growing sophistication of artificial-intelligence systems could create a new poverty line for cyber, where companies that can afford to research and develop the technology end up better protected against hackers than those that cannot.
Security chiefs and vendors say AI systems, and in particular generative AI, may offer significant benefits for defending against hacks. These include quickly sifting through vast amounts of data to flag potential compromises, identifying vulnerabilities in products and serving as assistive research tools for analysts.
Yet experts worry that the promise of AI won’t be available to everyone, especially those without deep pockets.
“I’m concerned that in cybersecurity, with gen AI, it’s becoming the haves and the have-nots,” said Sean Joyce, global cybersecurity and privacy leader and U.S. cyber, risk and regulatory leader at consulting firm PricewaterhouseCoopers, speaking at a cybersecurity conference hosted by the nonprofit Aspen Institute Wednesday.
Joyce said he was concerned because, while AI has benefits for security professionals, it can also supercharge the abilities of hackers. Phishing emails, for instance, can now be produced quickly, tailored to targets in moments, and stripped of many of the telltale signs of a fraudulent message, such as spelling and grammar mistakes.
Similarly, while companies can scan for vulnerabilities in software using AI, so too can hackers. Heather Adkins, vice president of security engineering at tech giant Google, suggested that a recent uptick in zero-day vulnerabilities was closely linked to the development of AI platforms. Smaller businesses, such as dentists’ offices still running outdated versions of the Windows operating system, are the types of companies that risk being outpaced by technological development, according to Adkins.
The promise and peril of AI are such that the Cybersecurity and Infrastructure Security Agency on Monday published a strategy outlining how it will use AI, and how it will seek to protect critical infrastructure from AI-enabled hacks. The agency, which is part of the Department of Homeland Security, said it would partner with other government agencies and the private sector to develop, test and evaluate tools.
But the double-edged nature of the technology’s uses means that companies that don’t have the resources or technical skills to understand AI and defend against the attacks enabled by the technology risk being left behind. The issue is a challenge for even technically capable companies, other speakers at the event said.
“When we think about some of the capabilities that AI has around code review, the ability to inject [code], it’s going to be very interesting for us to protect against things we don’t know,” said Jameeka Green Aaron, chief information security officer at identity-management company Okta.
Write to James Rundle at james.rundle@wsj.com