The financial sector faces a new and insidious threat: deepfake media. On November 13, 2024, the Financial Crimes Enforcement Network (FinCEN) issued an alert to help financial institutions identify fraud schemes involving deepfake media created with generative artificial intelligence (GenAI) tools. In these schemes, criminals alter or create fraudulent identity documents to circumvent identity verification and authentication methods.
The threat of deepfake media comes not only from the technology used to create it but also from people’s natural inclination to believe what they see. Therefore, deepfakes do not need to be particularly advanced to spread misinformation effectively.
What Is a “Deepfake”?
Deepfake media, or “deepfakes,” are synthetic content generated with artificial intelligence tools to produce realistic but inauthentic videos, pictures, audio, and text. Deepfakes may depict real or non-existent people, and they can manufacture what appear to be real events, such as a person doing or saying something they never actually did or said.
How Deepfakes Are Used in Banking Fraud
One of the most alarming uses of deepfakes in banking fraud is identity theft. Deepfakes can be used to defeat Know Your Customer (KYC) protocols and open bank accounts: a scammer can harvest pictures from social media to synthesize a fake video or voice, then combine it with stolen personal information to fabricate a fictitious person. As banks increasingly rely on biometric authentication, deepfakes threaten to undermine these systems.
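Because no single check is reliable against synthetic media, institutions typically layer several independent signals during onboarding. The sketch below illustrates that layered approach in Python; the signal names, thresholds, and the `OnboardingSignals` structure are assumptions for demonstration, not any vendor’s API or FinCEN’s methodology.

```python
from dataclasses import dataclass

@dataclass
class OnboardingSignals:
    """Hypothetical signals gathered during a KYC onboarding flow."""
    document_check_passed: bool   # e.g., security features on the ID verified
    liveness_score: float         # 0.0-1.0 from an active liveness challenge
    face_match_score: float       # similarity between live selfie and ID photo
    data_matches_bureau: bool     # PII cross-checked against an external source

def assess_onboarding(signals: OnboardingSignals,
                      liveness_threshold: float = 0.9,   # illustrative threshold
                      match_threshold: float = 0.85) -> str:
    """Return a coarse decision: approve, review, or reject.

    A deepfaked selfie or replayed video tends to fail either the liveness
    challenge or an external cross-check, so the checks are combined
    conjunctively rather than letting one strong score override the rest.
    """
    if not signals.document_check_passed:
        return "reject"
    if signals.liveness_score < liveness_threshold:
        return "review"   # possible presentation attack or deepfake replay
    if signals.face_match_score < match_threshold:
        return "review"
    if not signals.data_matches_bureau:
        return "review"   # synthetic identity: plausible face, fake PII
    return "approve"
```

The key design choice is that a strong score on one signal never compensates for a failure on another, since a convincing deepfake is precisely the case where one signal looks excellent while the others do not.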
Deepfakes also amplify social engineering. Criminals may target financial institution customers and employees with sophisticated social engineering attempts in support of other scams and fraud typologies, such as business email compromise (BEC) schemes, spear phishing attacks, elder financial exploitation, romance scams, and virtual currency investment scams.
Scammers can craft personalized phishing attacks using fake videos or voice messages purporting to come from bank officials, asking customers to send money or share private information. Criminals have reportedly used GenAI tools to target companies by impersonating an executive or other trusted employee and instructing victims to transfer large sums or make payments to accounts ultimately under the scammer’s control. Scammers may also request a customer’s login credentials under the pretext of “securing” the account.
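A common mitigation for voice- or video-based impersonation is out-of-band verification: re-confirming any high-value or unusual payment instruction over a second, independently established channel. The minimal sketch below illustrates that pattern; the `PaymentRequest` fields and the dollar threshold are illustrative assumptions, not a prescribed control.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    requester: str           # who appears to be asking, e.g., "CFO"
    channel: str             # "email", "voice", "video_call", ...
    amount: float
    beneficiary_is_new: bool

# Channels on which deepfakes can convincingly impersonate a known person.
IMPERSONATION_PRONE = {"email", "voice", "video_call"}

def requires_out_of_band_confirmation(req: PaymentRequest,
                                      amount_threshold: float = 10_000.0) -> bool:
    """Flag requests that must be re-confirmed on a second channel.

    The confirmation should use contact details already on file (e.g., a
    known phone number), never details supplied in the request itself,
    since a scammer controls those.
    """
    if req.channel in IMPERSONATION_PRONE and req.amount >= amount_threshold:
        return True
    if req.beneficiary_is_new:
        return True
    return False
```

The critical detail is in the comment: calling back a number provided in the suspicious message simply reconnects the victim to the scammer, so confirmation must run through previously verified contact information.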
Identifying and Reporting Deepfake Fraud
In short, deepfake scams are used to open fraudulent accounts, take over existing ones, deceive people with fabricated messages, and invent people who do not exist. The realism of deepfakes makes these scams harder to spot, especially for less tech-savvy individuals. Because deepfake fraud erodes trust in banking systems, customers may hesitate to adopt digital services if they fear their identities can be hijacked so easily.
FinCEN’s news release and alert explain how criminals use deepfake media and provide red flag indicators to help institutions identify and report related suspicious activity.
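Red flags are most meaningful in combination. A minimal sketch of how an institution might aggregate indicators for escalation is shown below; the specific flag names and the threshold of two are illustrative assumptions, and the FinCEN alert itself remains the authoritative list.

```python
def count_red_flags(indicators: dict[str, bool]) -> int:
    """Count how many indicators fired for a customer or session."""
    return sum(1 for fired in indicators.values() if fired)

# Illustrative indicators only -- consult the FinCEN alert for the
# authoritative red flags.
session_indicators = {
    "id_photo_inconsistent_with_selfie": True,
    "customer_declines_live_verification": False,
    "device_or_geolocation_mismatch": True,
    "rapid_high_value_transfers_after_opening": False,
}

if count_red_flags(session_indicators) >= 2:  # threshold is an assumption
    print("Escalate for manual review and possible SAR filing.")
```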