She first saw the ad on Facebook. Then again on TikTok. After seeing what appeared to be Elon Musk pitching investment opportunities over and over, Heidi Swan figured it had to be true.
“I thought it was him because he looked like Elon Musk and sounded like Elon Musk,” Swan said.
She contacted the company behind the ad and opened an account with more than $10,000. The 62-year-old healthcare worker thought she was making a smart cryptocurrency investment backed by a multibillion-dollar businessman and investor.
But Swan would soon learn she had been tricked by a new kind of high-tech thief using artificial intelligence: the deepfake.
Even knowing they are fake, Swan still finds the videos convincing when she looks back at them now.
“They still look like Elon Musk,” she said. “They still sound like Elon Musk.”
Deepfake fraud is on the rise in the US
As artificial intelligence technology evolves and becomes more accessible, this type of fraud is becoming more common.
According to Deloitte, a leading financial research group, AI-generated content contributed to more than $12 billion in fraud losses last year and could reach $40 billion in the U.S. by 2027.
The Federal Trade Commission and the Better Business Bureau have both warned that deepfake scams are on the rise.
A study by AI company Sensity found that Elon Musk is the most commonly used celebrity in deepfake scams. One possible reason is his wealth and entrepreneurial spirit. Another reason is the amount of interviews he has done. The more content someone has online, the easier it is to create a convincing deepfake.
How a deepfake is made
Christopher Meald, a professor at the University of North Texas in Denton, also uses artificial intelligence. But he uses it to create art.
“It’s not going to replace the creative arts,” Meald said. “It will strengthen them and change the way we understand what we can do in the realm of creativity.”
Meald sees artificial intelligence as a way to be innovative, but he also recognizes its dangers.
Meald showed the CBS News Texas I-Team how scammers can take real video and use AI tools to replace a person’s voice and mouth movements, making it appear the person is saying something completely different.
Advances in technology have made deepfake videos easier to create. All an AI-savvy person needs is a single still image and a video recording.
To demonstrate this, Meald filmed a video of investigative reporter Brian New and used it to create a deepfake of Elon Musk.
These AI-generated videos aren’t perfect, but they are convincing enough to fool unsuspecting victims.
“If you’re really trying to scam people, I think you can do some pretty bad things with this,” Meald said.
How can you spot a deepfake?
Some deepfakes are easier to spot than others. Telltale signs include unnatural lip movements and strange body language. But as the technology improves, deepfakes are becoming harder to identify by sight alone.
A growing number of websites claim to be able to detect deepfakes. The CBS News Texas I-Team put five of them to an unscientific test, using three known deepfake videos and three authentic videos: Deepware, Attestiv, DeepFake-O-Meter, Sensity, and Deepfake Detector.
Combined, the five online tools correctly identified the tested videos about 75% of the time. The I-Team shared its results with the companies; their responses are below.
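To put the combined figure in perspective: five tools each screened the same six clips, so the test involved 30 individual verdicts in all, and a roughly 75% hit rate works out to about 22 or 23 correct calls. The short Python sketch below only shows that arithmetic; the 75% figure is the I-Team's reported total, and the per-tool counts were not broken out, so none are assumed here.

```python
# Back-of-the-envelope check on the combined figure: five tools each screened
# the same six clips (three deepfakes, three authentic videos).
tools = 5
clips_per_tool = 6
total_verdicts = tools * clips_per_tool        # 30 individual verdicts

reported_accuracy = 0.75                       # "about 75%" per the I-Team's tally
correct_verdicts = round(reported_accuracy * total_verdicts)

print(f"{total_verdicts} verdicts in total")
print(f"~{reported_accuracy:.0%} accuracy -> roughly {correct_verdicts} correct calls")
```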
Deepware
Deepware, a free-to-use website, initially failed to flag two of the fake videos tested by the I-Team. The company said in an email that the clips were too short, and that for best results uploaded videos should be between 30 seconds and one minute long. Deepware correctly identified all of the longer videos. The company says its detection rate is 70%, which it considers good for the industry.
The FAQ section of Deepware’s website states: “Deepfakes are not yet a solved problem. Our results indicate the likelihood that a given video is or is not a deepfake.”
Deepfake Detector
Deepfake Detector, a $16.80-per-month tool, identified one of the fake videos as having “97% natural audio.” The company, which specializes in detecting AI-generated audio, said in an email that its accuracy is about 92%, though factors such as background noise and music can affect the results.
In response to a question about guidance for the average consumer, the company wrote: “Our tools are designed to be easy to use. The average consumer can simply upload audio files to our website or use our browser extension to analyze content directly. The tool provides a probability-based analysis to help determine whether a video contains deepfake elements, making it accessible even to those who are not familiar with AI technology.”
Attestiv
Attestiv flagged two of the genuine videos as “suspicious.” False positives can be caused by factors such as graphics and editing, said Nicos Vekiarides, the company’s CEO; both of the genuine videos flagged as “suspicious” contained graphics and editing. The site offers a free service, but it also has a paid tier that lets users fine-tune settings for a more in-depth analysis.
Vekiarides acknowledged that Attestiv is not perfect, but said this type of website is needed as part of the solution as deepfakes become harder to spot with the naked eye.
“Our tools can determine whether something is suspicious, and then you can see it for yourself and say, ‘Yes, I think it’s suspicious,’” Vekiarides said.
DeepFake-O-Meter
DeepFake-O-Meter is another free tool, supported by the University at Buffalo and the National Science Foundation. It identified two of the authentic videos as having a high probability of being AI-generated.
The creators of the open platform said in an email that limitations of deepfake detection models include video compression, which can cause audio-video synchronization problems and mismatched mouth movements.
In response to a question about how everyday users can use the tool, the platform’s creators responded via email: “Currently, the main result shown to the user is the probability that the sample is AI-generated, as reported by each of the different detection models. This can be used as a reference when multiple models give the same answer with confidence (for example, more than 80% for AI-generated, or less than 20% for a real video). We are currently developing a more understandable way to display the results, as well as a new model that can output a comprehensive detection result.”
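As a rough illustration of the multi-model output the email describes, the sketch below aggregates per-model probabilities and applies the 80%/20% reference points quoted above. The model names and scores are hypothetical placeholders, and this is not DeepFake-O-Meter's actual scoring code, only a minimal reading of the guidance.

```python
# Minimal sketch: read several detectors' "probability of AI generation" and
# apply the 80% / 20% reference points mentioned in the email.
# Model names and scores below are hypothetical examples, not real output.
def summarize(scores: dict[str, float]) -> str:
    """Return a rough verdict when all models agree, else 'inconclusive'."""
    if all(p >= 0.80 for p in scores.values()):
        return "likely AI-generated (all models >= 80%)"
    if all(p <= 0.20 for p in scores.values()):
        return "likely authentic (all models <= 20%)"
    return "inconclusive -- models disagree; treat the scores as a reference only"

sample_scores = {"model_a": 0.91, "model_b": 0.87, "model_c": 0.84}  # hypothetical
print(summarize(sample_scores))   # -> likely AI-generated (all models >= 80%)
```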
Sensity
Sensity’s deepfake detector accurately identified all six clips and displayed a heatmap showing where AI manipulation was most likely.
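To give a sense of what such a heatmap conveys, here is a minimal matplotlib sketch that overlays a grid of per-region manipulation scores on a video frame. Both the frame and the scores are synthetic placeholders; Sensity's actual visualization and scoring method are not public and are not reproduced here.

```python
# Illustrative manipulation "heatmap" overlay: brighter cells mark regions a
# detector scored as more likely to be AI-manipulated.
# The frame and the scores are synthetic placeholders, not detector output.
import numpy as np
import matplotlib.pyplot as plt

frame = np.random.rand(180, 320)             # stand-in for a grayscale video frame
scores = np.zeros((9, 16))                   # coarse grid of per-region scores
scores[2:5, 6:10] = 0.9                      # pretend the mouth region looks manipulated

plt.imshow(frame, cmap="gray")
plt.imshow(scores, cmap="hot", alpha=0.4,    # translucent overlay stretched to frame size
           extent=(0, 320, 180, 0), vmin=0, vmax=1)
plt.title("Illustrative manipulation heatmap (synthetic data)")
plt.axis("off")
plt.show()
```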
The company offers a free trial of the service. While it is currently tailored to private and public organizations, the company told the I-Team that its long-term goal is to make the technology available to everyone.