GenAI increasingly powering scams, Wall Street watchdog warns

Summary
More swindlers are using the technology to trick firms and customers, Finra said. The Financial Industry Regulatory Authority said scammers are creating synthetic identification documents to help open fraudulent new brokerage accounts or take over customers’ existing accounts.
Generative artificial intelligence tools are increasingly being used by swindlers in sophisticated ways to scam financial institutions, a Wall Street watchdog warned, calling the problem an emerging risk.
Scammers are, for example, creating synthetic identification documents to help open fraudulent new brokerage accounts or take over customers’ existing accounts, the Financial Industry Regulatory Authority, Wall Street’s self-regulatory arm, said Tuesday in a new report. Swindlers are also in some cases using deepfake technology to create images of GenAI-generated people holding GenAI-created phony IDs, Finra said.
Finra said firms should consider communicating with employees and customers about the risks related to GenAI and the steps they can take to combat them.
The explosive growth in the use of GenAI has been a double-edged sword for financial firms. On one hand, the technology allows them to increase the speed and scale at which they tackle intensive compliance tasks, such as looking into customers’ backgrounds or monitoring transactions for suspicious activity.
But GenAI can also supercharge certain frauds, enabling, for example, voice deepfakes designed to trick fraud targets. Finra said it has seen swindlers use deepfake audio and video to impersonate well-known finance gurus, and use GenAI to craft phishing emails tailored to individual targets.
U.S. law enforcement, including the Federal Bureau of Investigation, has warned about the potential for AI to be used for criminal purposes. Financial-services industry fraud losses in the U.S. could reach $40 billion by 2027, up from $12.3 billion in 2023, because of the impact of GenAI, Deloitte’s Center for Financial Services predicted in a report last year.
Finra said it has also seen GenAI used in the creation of impostor websites that impersonate firms and their staff to lure victims into sending money, and in the creation of advanced malware. The regulator said firms should consider whether their cybersecurity programs adequately address risks associated with the potential exploitation of GenAI.
Separately, Finra noted that most financial firms are proceeding cautiously in their use of GenAI for their own purposes, but have been using it to increase the efficiency of some internal functions, such as summarizing information or conducting analyses across multiple data sets.
Write to Richard Vanderford at Richard.Vanderford@wsj.com