What started as a ski holiday Instagram post ended in financial ruin for a French interior designer after scammers used AI to convince her she was in a relationship with Brad Pitt.
The 18-month scam targeted Anne, 53, who received an initial message from someone posing as Jane Etta Pitt, Brad's mother, claiming her son "needed a woman like you."
Not long after, Anne began talking to what she believed was the Hollywood star himself, complete with AI-generated photos and videos.
"We're talking about Brad Pitt here and I was stunned," Anne told French media. "At first, I thought it was fake, but I didn't really understand what was happening to me."
The relationship deepened over months of daily contact, with the fake Pitt sending poems, declarations of love, and eventually a marriage proposal.
"There are so few men who write to you like that," Anne said. "I loved the man I was talking to. He knew how to talk to women and it was always very well put together."
The scammers' tactics proved so convincing that Anne eventually divorced her millionaire entrepreneur husband.
After building rapport, the scammers began extracting money with a modest request: €9,000 for supposed customs fees on luxury gifts. It escalated when the impersonator claimed to need cancer treatment while his accounts were frozen due to his divorce from Angelina Jolie.
A fabricated doctor's message about Pitt's condition prompted Anne to transfer €800,000 to a Turkish account.

"It cost me to do it, but I thought that I might be saving a man's life," she said. When her daughter recognized the scam, Anne refused to believe it: "You'll see when he's here in person, then you'll apologize."
Her illusions were shattered upon seeing news coverage of the real Brad Pitt with his partner Inés de Ramon in summer 2024.
Even then, the scammers tried to maintain control, sending fake news alerts dismissing those reports and claiming Pitt was actually dating an unnamed "very special person." In a final roll of the dice, someone posing as an FBI agent extracted another €5,000 by offering to help her escape the scheme.
The aftermath proved devastating: three suicide attempts led to hospitalization for depression.
Anne opened up about her experience to French broadcaster TF1, but the interview was later taken down after she faced intense cyberbullying.
Now living with a friend after selling her furniture, she has filed criminal complaints and launched a crowdfunding campaign for legal help.
A tragic situation, though Anne is certainly not alone. Her story parallels a massive surge in AI-powered fraud worldwide.
Spanish authorities recently arrested five people who stole €325,000 from two women through similar Brad Pitt impersonations.
Speaking about AI fraud last year, McAfee's Chief Technology Officer Steve Grobman explained why these scams succeed: "Cybercriminals are able to use generative AI for fake voices and deepfakes in ways that used to require a lot more sophistication."
It's not just individuals who are lined up in the scammers' crosshairs, but businesses, too. In Hong Kong last year, fraudsters stole $25.6 million from a multinational company using AI-generated executive impersonators in video calls.
Superintendent Baron Chan Shun-ching described how "the worker was lured into a video conference that was said to have many participants. The realistic appearance of the individuals led the employee to execute 15 transactions to five local bank accounts."
Would you be able to spot an AI scam?
Most people would fancy their chances of spotting an AI scam, but research says otherwise.
Studies show people struggle to distinguish real faces from AI creations, and synthetic voices fool roughly a quarter of listeners. That evidence dates from last year, and AI image, voice, and video synthesis have evolved considerably since.
Synthesia, an AI video platform that generates realistic human avatars speaking multiple languages, now backed by Nvidia, just doubled its valuation to $2.1 billion. Video and voice synthesis platforms like Synthesia and ElevenLabs are among the tools that fraudsters use to launch deepfake scams.
Synthesia itself acknowledges the risk, recently demonstrating its commitment to preventing misuse through a rigorous public red-team test, which showed how its compliance controls successfully block attempts to create non-consensual deepfakes or to use avatars for harmful content such as promoting suicide or gambling.
Whether such measures are truly effective at preventing misuse, the jury is still out.
As companies and individuals wrestle with compellingly real AI-generated media, the human cost, illustrated by Anne's devastating experience, will likely rise.