Deepfakes are coming for the financial sector
Summary
With technology advances making artificial intelligence-enabled voice and images more lifelike than ever, bad actors armed with deepfakes are coming for the enterprise.

Deepfakes have long raised concern in social media, elections and the public sector. But now, with technology advances making AI-enabled voice and images more lifelike than ever, bad actors armed with deepfakes are coming for the enterprise.
“There were always fraudulent calls coming in. But the ability for these [AI] models now to imitate the actual voice patterns of an individual giving instructions to somebody with the phone to do something—these sorts of risks are brand new,” said Bill Cassidy, chief information officer at New York Life.
Banks and financial services providers are among the first companies to be targeted. “This space is just moving very fast,” said Kyle Kappel, U.S. Leader for Cyber at KPMG.
How fast was demonstrated earlier this month when OpenAI showcased technology that can recreate a human voice from a 15-second clip. OpenAI said it would not release the technology publicly until it knows more about potential risks for misuse.
One concern is that bad actors could use AI-generated audio to game the voice-authentication software that financial services companies use to verify customers and grant them access to their accounts. Chase Bank was fooled recently by an AI-generated voice during an experiment. The bank said that to complete transactions and other financial requests, customers must provide additional information.
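The safeguard Chase describes, requiring information beyond a voiceprint before completing a transaction, is a form of step-up authentication. A minimal sketch of that idea follows; the function name, threshold and factor types here are hypothetical illustrations, not Chase's actual system:

```python
# Illustrative step-up authentication: a voiceprint match alone is never
# sufficient to authorize a transaction; an independent second factor
# (e.g., a one-time passcode or knowledge check) is also required.
# All names and thresholds are hypothetical.

def authorize_transaction(voice_match_score: float,
                          second_factor_ok: bool,
                          threshold: float = 0.9) -> bool:
    """Approve only when the voiceprint score clears the threshold
    AND the independent second factor succeeds."""
    voice_ok = voice_match_score >= threshold
    return voice_ok and second_factor_ok

# A cloned voice that fools the voiceprint check (score 0.97) still
# fails without the second factor.
print(authorize_transaction(0.97, second_factor_ok=False))  # False
print(authorize_transaction(0.97, second_factor_ok=True))   # True
```

The design point is that the deepfake only defeats one factor; the transaction still stalls on the factor the attacker doesn't control.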
Deepfake incidents in the fintech sector increased 700% in 2023 from the previous year, according to a recent report by identity verification platform Sumsub.
Companies say they are working to put more guardrails in place to prepare for an incoming wave of generative AI-fueled attackers. For example, Cassidy said he is working with New York Life’s venture-capital group to identify startups and emerging technologies designed to combat deepfakes. “In many cases, the best defense of this generative AI threat is some form of generative AI on the other side,” he said.
Bad actors could also use AI to generate photos of fake driver’s licenses to set up online accounts, so Alex Carriles, chief digital officer of Simmons Bank, said he is changing some identity verification protocols. Previously, one step in setting up an account online with the bank involved customers uploading photos of driver’s licenses. Now that images of driver’s licenses can be easily generated with AI, the bank is working with security vendor IDScan.net to improve the process.
Rather than uploading a pre-existing picture, Carriles said, customers now must photograph their driver’s licenses through the bank’s app and then take selfies. To avoid a situation where they hold cameras up to a screen with an AI-generated visual of someone else’s face, the app instructs users to look left, right, up or down, as a generic AI deepfake won’t necessarily be prepared to do the same.
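The directional prompts Carriles describes are a form of active liveness detection: the app issues a random challenge that a canned deepfake video is unlikely to satisfy. A rough sketch of the challenge logic is below; the function names are hypothetical, and a real system would use a computer-vision model to detect the user's actual head pose, which is out of scope here:

```python
import random

# Possible head movements the app can prompt for.
DIRECTIONS = ["left", "right", "up", "down"]

def issue_challenge(n: int = 3) -> list[str]:
    """Pick a random sequence of head movements. The randomness is the
    point: a pre-rendered deepfake can't anticipate the order."""
    return [random.choice(DIRECTIONS) for _ in range(n)]

def verify_liveness(challenge: list[str], observed: list[str]) -> bool:
    """Compare the prompted directions to the head poses actually
    detected on camera (pose detection itself is assumed elsewhere)."""
    return challenge == observed

challenge = issue_challenge()
print(verify_liveness(challenge, list(challenge)))           # True: user followed prompts
print(verify_liveness(["left", "up"], ["left", "down"]))     # False: face didn't move as asked
```

Because the sequence changes on every attempt, replaying a recorded or AI-generated face at the camera fails unless the attacker can animate it in real time.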
Striking the right balance is difficult, Carriles said: the experience has to be smooth for legitimate users without being so seamless that attackers can coast through.
Not all banks are ringing alarm bells. KeyBank CIO Amy Brady said the bank was a technology laggard when it came to adopting voice-authentication software. Now, Brady said, she considers that lucky given the risk of deepfakes.
Brady said she is no longer looking to implement voice-authentication software until there are better tools for unmasking impersonations. “Sometimes being a laggard pays off,” she said.
Write to Isabelle Bousquette at isabelle.bousquette@wsj.com