March 2025 | This Month in Generative AI: AI-Powered Romance Scams

The first post I wrote in this series, just a little over a year ago, was about AI-powered scams, from crypto schemes to fake celebrity giveaways and dangerous fake health cures. Over the past year, these scams have increased in scale and severity.


Within this broad range of online scams, romance scams are particularly nasty — and they are on the rise.


In a romance scam, a scammer creates a fake online identity to build a romantic relationship with a victim and ultimately persuade that person to send money. The scammer often uses tales of hardship to elicit sympathy. Although this type of scam is not new, the addition of highly realistic, AI-generated images, voices, and videos has led to a troubling escalation in the sophistication of these scams.


According to the FBI, romance scams cost US consumers over $600 million last year. This is almost certainly an undercount, since many victims may be too scared or embarrassed to report the crime.


The identities co-opted by scammers range from celebrities and CEOs to military officers and ordinary citizens. In a particularly unusual story, a French woman was reportedly scammed out of $850,000 over an 18-month period by criminals posing as the actor Brad Pitt. While it is easy (albeit cruel) to mock victims of romance scams — as many did after this story surfaced — these cases are not isolated. Less than a month after this story broke, a woman reportedly lost $375,000 and moved from the US to New Zealand because she believed that she was in a relationship with the actor Martin Henderson.


Beyond the emotional and financial toll of these scams, what is particularly striking about the past year is how quickly scammers have co-opted generative AI to increase the sophistication of their attacks.


In the Brad Pitt scam, for example, the criminals sent the victim AI-generated images of Pitt. When asked about the photos, the victim stated, "I looked those photos up on the internet but couldn't find them so I thought that meant he had taken those selfies just for me." This woman was sophisticated enough to perform a reverse image search, but it could not protect her: an AI-generated image is novel, so it matches nothing on the web, and today's AI-generated images are visually compelling enough to pass for the real thing.
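To see why the search came up empty, consider the near-duplicate matching that underlies reverse image search. The Python sketch below illustrates the basic idea with perceptual hashing (actual search engines are far more elaborate); the imagehash library is real, but the file names and the toy index are hypothetical.

```python
# A toy illustration of near-duplicate image matching via perceptual
# hashing -- not how any particular search engine works. Two copies of
# the same photo hash to nearby values; a freshly generated image
# matches nothing in the index. File names are hypothetical.

import imagehash
from PIL import Image

index = {  # stand-in for a web-scale index of known images
    imagehash.phash(Image.open("known_photo.jpg")): "known_photo.jpg",
}

query = imagehash.phash(Image.open("suspect_selfie.jpg"))

# A Hamming distance <= 8 is a common near-duplicate threshold for a
# 64-bit perceptual hash.
matches = [name for h, name in index.items() if query - h <= 8]
print(matches or "No matches -- consistent with a novel, AI-generated image")
```

A real photo of a celebrity will almost always have near-duplicates online; a freshly generated selfie will not, which is why an empty search result can be misread as proof of authenticity.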


I was recently asked to review a video provided to investigators by a victim of a romance scam. The 15-second video message consisted of a handsome man greeting the victim by name and talking about his morning in what seemed like a benign but still intimate message. At first glance, it was not clear to the investigators (or me) whether the person in the video was part of the scam or was himself a victim of identity theft and deepfake impersonation.


In my initial review, the lighting and consistency across the face led me to believe it was unlikely to be a face-swap deepfake. In this type of deepfake, the original face in the video — eyebrows to chin and cheek to cheek — is replaced with another face.


On the other hand, it was possible that it was a lip-sync deepfake. In these deepfakes, the voice in the original video is replaced with a new voice (real or AI-generated), and then just the mouth region is AI-synthesized to be consistent with that voice. This type of deepfake is more difficult to detect because the manipulation is extremely localized.


Most lip-sync deepfakes that I've seen tend to have some obvious tells, including malformed and inconsistent mouth movement. Although I didn't see any of these signs, I analyzed the video in question with one of our forensic analysis tools at GetReal. This tool, which measures the synchronization between the mouth movement and the voice, found patterns of desynchronization that are telltale artifacts of a lip-sync deepfake.
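I can't share our tool, but the intuition behind this family of techniques is easy to sketch. The Python below is a toy illustration, not GetReal's method: it tracks a crude mouth-opening signal with MediaPipe's FaceMesh, extracts the audio loudness envelope with librosa, and correlates the two. In an authentic talking-head video the two signals tend to rise and fall together; in a lip-sync deepfake they can drift apart. The file names are hypothetical, and the audio is assumed to have been extracted to a separate WAV file (e.g., with ffmpeg).

```python
# A toy sketch of audio-visual synchronization analysis -- NOT GetReal's
# tool. Real forensic systems are far more sophisticated; this conveys
# only the intuition.

import cv2
import librosa
import mediapipe as mp
import numpy as np


def mouth_opening_signal(video_path: str) -> tuple[np.ndarray, float]:
    """Per-frame gap between upper and lower inner lip (FaceMesh landmarks 13/14)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    openings = []
    with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.multi_face_landmarks:
                lm = result.multi_face_landmarks[0].landmark
                openings.append(abs(lm[13].y - lm[14].y))  # normalized units
            else:
                openings.append(np.nan)  # no face detected on this frame
    cap.release()
    return np.array(openings), fps


def audio_envelope(audio_path: str, fps: float, n_frames: int) -> np.ndarray:
    """RMS loudness envelope, one value per video frame."""
    y, sr = librosa.load(audio_path, sr=16000, mono=True)
    hop = int(sr / fps)
    rms = librosa.feature.rms(y=y, frame_length=2 * hop, hop_length=hop)[0]
    return rms[:n_frames]


def av_sync_score(video_path: str, audio_path: str) -> float:
    mouth, fps = mouth_opening_signal(video_path)
    env = audio_envelope(audio_path, fps, len(mouth))
    n = min(len(mouth), len(env))
    mouth, env = mouth[:n], env[:n]
    keep = ~np.isnan(mouth)
    # Correlation between mouth opening and loudness; values near zero on
    # speech segments are a red flag worth closer inspection.
    return float(np.corrcoef(mouth[keep], env[keep])[0, 1])


if __name__ == "__main__":
    # Hypothetical file names; extract audio first, e.g.:
    #   ffmpeg -i clip.mp4 -ac 1 clip.wav
    print(f"A/V sync correlation: {av_sync_score('clip.mp4', 'clip.wav'):.2f}")
```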


Then, GetReal's Chief Investigative Officer, Emmanuelle Saliba, tracked down the original video. It was a clip of a Russian ship engineer who, based on his many videos, does not appear to speak English. She then pinpointed the precise 15-second clip that was modified to create this sophisticated lip-sync deepfake.


What struck me most about this case was that if it was not easy for us — the experts on this topic — to verify whether the video was real, what chance does the average person have?


In a particularly cruel twist, a second round of scams has also emerged in which victims are contacted by fake law enforcement officers or attorneys claiming they can recover the victims' financial losses (for a fee, of course). Vulnerable victims are then subjected to another round of financial losses.


I'm often asked what people can do to protect themselves online. The sad truth is that the average person cannot become an expert in digital forensics. The best protection, therefore, is to be aware of the nature of these scams and of the fact that, while many people think they will never fall victim, many do. The internet can be a hostile landscape, and we have to navigate it with the right amount of caution.


Eventually, as provenance labels become more ubiquitous, they will help people make better-informed decisions about how to react to online content. Accelerating adoption of the C2PA open technical standard is precisely the work of the Content Authenticity Initiative.
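For the curious, here is what reading a provenance label can look like in practice. The sketch below uses the open-source c2patool command-line utility (github.com/contentauth/c2patool), which must be installed separately; the file name is hypothetical, and the exact JSON layout may vary across c2patool versions.

```python
# A minimal sketch of inspecting a file for C2PA Content Credentials by
# calling the c2patool CLI, which prints the manifest store as JSON when
# a file carries Content Credentials. The file name is hypothetical.

import json
import subprocess

result = subprocess.run(
    ["c2patool", "photo.jpg"],  # inspect the file's manifest store
    capture_output=True, text=True,
)

if result.returncode != 0:
    # c2patool typically exits with an error when no manifest is present.
    print("No Content Credentials found (or file type unsupported).")
else:
    store = json.loads(result.stdout)
    # The active manifest records who/what produced this version of the
    # asset; key names reflect current c2patool output and may change.
    active = store["manifests"][store["active_manifest"]]
    print("Produced with:", active.get("claim_generator", "unknown"))
```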


If you or someone you know may be a victim of a romance scam, here are some resources that may be helpful:

- The FBI's Internet Crime Complaint Center (IC3): ic3.gov
- The Federal Trade Commission's fraud reporting portal: reportfraud.ftc.gov


Author bio: Professor Hany Farid is a world-renowned expert in the field of misinformation, disinformation, and digital forensics. He joined the Content Authenticity Initiative (CAI) as an advisor in June 2023. The CAI is an Adobe-led community of media and tech companies, NGOs, academics, and others working to promote adoption of the open industry standard for content authenticity and provenance. Professor Farid teaches at the University of California, Berkeley, with a joint appointment in electrical engineering and computer sciences and the School of Information. He’s also a member of the Berkeley Artificial Intelligence Lab, Berkeley Institute for Data Science, Center for Innovation in Vision and Optics, Development Engineering Program, and Vision Science Program, and he’s a senior faculty advisor for the Center for Long-Term Cybersecurity. He is also a co-founder and Chief Science Officer at GetReal Labs, where he works to protect organizations worldwide from the threats posed by the malicious use of manipulated and synthetic information.

He received his undergraduate degree in computer science and applied mathematics from the University of Rochester in 1989, his M.S. in computer science from SUNY Albany, and his Ph.D. in computer science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in brain and cognitive sciences at MIT, he joined the faculty at Dartmouth College in 1999 where he remained until 2019. Professor Farid is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship, and he’s a fellow of the National Academy of Inventors.