Fraud using deepfake technology has become "industrial", according to an analysis published by artificial intelligence experts, The Guardian reports.
Tools for creating customized, even personalized scams – using, for example, deepfake footage of Swedish journalists or the president of Cyprus – are no longer niche, but are cheap and can easily be used on a large scale, according to an analysis by the AI Incident Database.
The analysis noted more than a dozen recent examples of "misrepresentation for profit," including a deepfake video of Western Australian Premier Roger Cook promoting an investment scheme, as well as deepfake doctors advertising skin creams.
These examples are part of a trend in which fraudsters are using widely available AI tools to carry out increasingly targeted scams. Last year, a finance officer at a Singapore-based multinational company paid almost $500,000 to fraudsters, believing he was participating in a video call with company management. It is estimated that consumers in the United Kingdom lost £9.4 billion (€10.8 billion) to fraud in the nine months to November 2025.
"The capabilities have suddenly reached a level where almost anyone can produce fake content," said Simon Mylius, a researcher at MIT who works on a project related to the AI Incident Database.
He calculated that "fraud, deception and targeted manipulation" accounted for the largest proportion of incidents reported to the database in 11 of the last 12 months.
"It's become so accessible that there's practically no barrier to entry anymore," he said.
"The scale is changing," said Fred Heiding, a Harvard researcher who studies AI-based fraud.
"It's getting so cheap that almost anyone can use it. The models are getting really good – and they're improving much faster than most experts think."
In early January, Jason Rebholz, CEO of Evoke, an AI security company, posted a job ad on LinkedIn and was almost immediately contacted by an unknown person from his network, recommending a candidate.
Within a few days, he was exchanging emails with a person who, at least on paper, seemed like an extremely talented engineer.
"I looked at the CV and thought: this is a really good CV. And then, even though there were some red flags, I decided to do the interview," Rebholz said, according to the Guardian.
That's when things got strange. The candidate's emails were often marked as spam, and the CV contained unusual details. Even so, Rebholz had dealt with unconventional candidates before and decided to go ahead with the interview.
When the call began, the candidate's video only appeared after almost a minute.
"The background was extremely fake. It looked really, really fake. The system struggled with the edges around the person – parts of the body would appear and disappear… And when I looked at the face, it was too blurry around the edges," he said.
Rebholz continued the conversation anyway, not wanting to face the embarrassment of directly asking the candidate if it was, in fact, a sophisticated scam. He then sent the recording to a contact at a deepfake detection company, who confirmed that the candidate's video was generated using AI. The candidate was rejected.
Rebholz still doesn't know what the scammer wanted – an engineering salary or trade secrets. While there have been reports of North Korean hackers trying to get jobs at Amazon, Evoke is a startup, not a major global player.
"If this is happening to us, then it's happening to everyone," Rebholz said.
Heiding warns that the worst is yet to come. Right now, voice-cloning technology is extremely good – making it easy for scammers to, for example, pretend to be a grandson or granddaughter in need over the phone. Deepfake videos, on the other hand, still have room for improvement.
This could have extreme consequences: for employment, elections, and society as a whole.
Heiding adds: "That will be the biggest problem – a complete loss of trust in digital content, but also in institutions and materials in general."