
Elon Musk’s deepfakes contribute to billions of dollars in fraud losses in the US


She first saw the ad on Facebook. Then again on TikTok. After repeatedly seeing what looked like Elon Musk offering an investment opportunity, Heidi Swan thought it must be true.

“Looked just like Elon Musk, sounded just like Elon Musk and I thought it was him,” Swan said.

She contacted the company behind the ad and opened an account with more than $10,000. The 62-year-old healthcare worker thought she was making a smart cryptocurrency investment backed by the billionaire businessman and investor.

But Swan would soon find out that she had been scammed by a new wave of high-tech thieves who used artificial intelligence to create deepfakes.

Even looking back on the videos now, knowing they were fake, Swan still thinks they look convincing.


“They still look like Elon Musk,” she said. “They still sound like Elon Musk.”

Heidi Swan

CBS News Texas


Deepfake scams are on the rise in the US

As artificial intelligence technology evolves and becomes more accessible, these types of scams are becoming increasingly common.

According to consulting firm Deloitte, AI-generated content contributed to more than $12 billion in fraud losses last year, a figure that could reach $40 billion in the US by 2027.

Both the Federal Trade Commission and the Better Business Bureau have warned that deepfake scams are on the rise.

A study by AI company Sensity found that Elon Musk is the celebrity most commonly used in deepfake scams, likely because of his wealth and entrepreneurial profile. Another factor is the sheer number of interviews he has given; the more footage of someone exists online, the easier it is to create convincing deepfakes.

Anatomy of a deepfake

At the University of North Texas in Denton, professor Christopher Meerdo also uses artificial intelligence. But he uses it to make art.

“It’s not going to replace the creative arts,” Meerdo said. “It’s just going to expand them and change the way we understand things that we could do in terms of creativity.”

Although Meerdo sees artificial intelligence as a way to be innovative, he also sees its dangers.

Meerdo showed the CBS News Texas I-Team how scammers can take a real video and use AI tools to replace a person’s voice and mouth movements, making it appear as if they are saying something completely different.


Example deepfake video of Elon Musk


Technological advances are making deepfake videos easier to create. All anyone familiar with AI needs is a single still image and a video recording.

To demonstrate this, Meerdo used a video of investigative journalist Brian New to create a deepfake of Elon Musk.


CBS News Texas I-Team demonstrates deepfake technology


These AI-generated videos are by no means perfect, but they only need to be convincing enough to trick an unsuspecting victim.

“If you’re really trying to scam people, I think you can do some really bad things with this,” Meerdo said.

How do you recognize a deepfake?

Some deepfakes are easier to spot than others; there may be signs such as unnatural lip movements or strange body language. But as technology improves, it will become harder to tell just by looking.

There is a growing number of websites claiming to be able to detect deepfakes. Using three known deepfake videos and three authentic ones, the CBS News Texas I-Team put five of these websites to an unscientific test: Deepware, Attestiv, DeepFake-O-Meter, Sensity, and Deepfake Detector.

In total, these five online tools correctly identified the tested videos almost 75% of the time. The I-Team approached the companies with the results; their answers are below.


How can you recognize an AI-generated video? 5 tools put to the test


Deepware

Deepware, a free-to-use website, initially failed to flag two of the fake videos the I-Team tested. In an email, the company said the clips were too short and that, for best results, uploaded videos should be between 30 seconds and one minute long. Deepware correctly identified all of the longer videos. According to the company, a detection rate of 70% is considered good for the industry.

The FAQ section on Deepware’s website states: “Deepfakes are not yet a solved problem. Our results indicate how likely a specific video is to be a deepfake or not.”

Deepfake Detector

Deepfake Detector, a tool that costs $16.80 per month, identified one of the fake videos as “97% natural voice.” The company, which specializes in detecting AI-generated voices, said in an email that factors such as background noise or music can influence results, but that it has an accuracy rate of about 92%.

In response to a question about guidance for average consumers, the company wrote: “Our tool is designed to be easy to use. Average consumers can easily upload an audio file to our website or use our browser extension to analyze content directly. The tool will provide analysis to help determine if a video may contain deepfake elements using probabilities, making it accessible even to those unfamiliar with AI technology.”

Attestiv

Attestiv flagged two of the real videos as “suspicious.” According to company CEO Nicos Vekiarides, false positives can be triggered by factors such as on-screen graphics and editing; both authentic videos flagged as “suspicious” contained graphics and edits. The site offers a free service but also has a paid tier, where consumers can adjust settings and calibrations for a more in-depth analysis.

While acknowledging that Attestiv isn’t perfect, Vekiarides said that as deepfakes become harder to spot with the naked eye, these types of websites are needed as part of the solution.

“Our tool can determine if something is suspicious, and then you can verify it with your own eyes and say, ‘I think that’s suspicious,’” Vekiarides said.

DeepFake-O-Meter

DeepFake-O-Meter is another free tool, supported by the University at Buffalo and the National Science Foundation. It flagged two of the real videos as having a high probability of being AI-generated.

In an email, the open platform’s creator said that a limitation of deepfake detection models is that video compression can lead to video and audio synchronization issues and inconsistent mouth movements.

In response to a question about how regular users can use the tool, the platform’s creator emailed: “Currently, the main result shown to users is the probability value that this sample is a generated sample in different detection models. This can be used as a reference if multiple models confidently agree on the same answer (for example, more than 80% for AI-generated or less than 20% for real video). We are currently developing a more understandable way to display the results, as well as new models that can yield more comprehensive detection results.”

Sensity

Sensity’s deepfake detector correctly identified all six clips and displayed a heatmap showing where AI manipulation was most likely.

The company offers a free trial of its service and told the I-Team that while the tool is currently tailored to private and public organizations, its future goal is to make the technology accessible to everyone.
