Gen AI is ramping up the threat of synthetic identity fraud

Awareness critical as criminals use all available tools to commit financial fraud

April 17, 2025

Synthetic identity fraud continues to expand, and losses from it continue to increase: They crossed the $35 billion mark in 2023, according to anti-fraud collaboration platform FiVerity.

Now, those working to stop it must deal with a volatile accelerant: generative artificial intelligence. Ironically, they’re finding that among the best ways to deal with Gen AI in synthetic identity fraud is … Gen AI.

We follow the evolution of synthetic identity fraud at the Federal Reserve because it’s our job to protect the payments system, and this fraud threatens both that system and the institutions we regulate. But many people aren’t aware of the problem, so we prepared a synthetic identity fraud toolkit and write columns like this one.

Synthetic identity fraud is different from standard identity fraud because the thieves aren’t stealing and assuming the identity of a real person. Instead, they’re creating a fake identity by combining real pieces of personally identifiable information from various sources. It could be this guy’s license number, that woman’s checking account number, and that child’s Social Security number. Then, they use this synthetic identity to steal money, or move it illegally, or trick real people into giving up personal data.

Fraudsters use “synthetics” in a variety of ways. Some use them to acquire a credit card, max it out, and disappear. During the pandemic, fraudsters used them to access emergency government benefits and loans. Once those funds dried up, they focused on opening new accounts with financial institutions and building the relationship by conducting normal transactions. Then they exploited it by accessing various banking products – like loans they’ll never repay.

Some argue synthetic identity fraud is a victimless crime because no real person’s identity is stolen outright. But that ignores some things:

  • The damage that can be done to an individual’s credit before they hit adulthood, because fraudsters steal Social Security numbers from children, knowing no one will notice for years
  • The cost to seniors, who build great credit scores the fraudsters exploit by targeting their Social Security numbers
  • The losses businesses suffer and pass on to consumers by charging higher prices

There’s a societal cost whenever people are victimized. And now Gen AI is making synthetic identity fraud harder to detect.

Stolen data is the fuel for Gen AI in synthetic identity fraud

Gen AI creates new content, as opposed to traditional AI, which analyzes patterns and makes predictions. So, thieves use Gen AI to automate the creation of fake identities. Lots of data is needed, but lots is available: There were thousands of data breaches compromising hundreds of millions of records last year alone.

Gen AI can make fake identities appear legitimate by, for instance, creating records of synthetic parents. It can learn from its mistakes and churn out more of what works. It can also mimic humans in ways that convince real humans to give up the personal information used to create more synthetics or to authenticate account applications.

For instance, Gen AI can imitate a person’s texting style by analyzing their texting history, then trick a friend into giving up vital info. It can make authentic-looking documents using photos found online. It can produce “deepfakes” – realistic audio clips and videos of their fake identities, complete with unique gestures and speech patterns.

Gen AI is a formidable weapon in the hands of synthetic identity fraudsters. But AI and Gen AI may also be the best way to stop them.

Synthetic identities are shallow, and AI can see that

Synthetic identities by their nature have far less depth than a real person’s. Real people have longtime email addresses and phone numbers. They have lengthy credit histories, years of social media interaction, utility bills, fishing licenses, online fantasy football league championships, etc. AI can be trained to seek that kind of information to verify real people and call out synthetics.

Gen AI can also develop scenarios that help institutions better detect fraud and adapt to a shifting threat environment.

So, formidable tools exist to fight this fraud. And as serious as the synthetic identity fraud threat is, there’s no need to panic. Gen AI has made synthetic identity fraud more potent, but that’s true for every type of fraud – wire, check, credit card, you name it. Synthetic identity fraud is unfortunately just one of many types of fraud on the rise.

Still, if panic about synthetic identity fraud isn’t advisable, vigilance is. The Fed will continue to update our synthetic identity fraud toolkit and partner with industry to spot this fraud and stop it. Synthetic identities are a threat that Gen AI amplifies, but the more we look for these fake identities, the better we’ll see them, and the safer the payments system will be.

Check out this podcast interview for more details about synthetic identity fraud and how Gen AI is making it harder to detect.

Media Inquiries?

Contact our media relations team. We connect journalists with Boston Fed economists, researchers, and leadership and a variety of other resources.

About the Authors