TLDR
- Around eight million deepfakes were shared in the UK last year, nearly four times the 2023 figure
- Gambling sector fraud rose 73% between 2022 and 2024, with deepfakes used to bypass identity checks
- UK law enforcement was found “inadequately equipped” to handle AI-fuelled fraud in a 2025 report
- Meta’s own data showed roughly $16 billion of its 2024 revenue came from ads linked to scams and banned goods
- New rules to regulate deepfakes under the Online Safety Act are being drafted, but the key power to act on paid scam ads is delayed until at least 2027
The UK is facing a rapid rise in AI-generated deepfake scams, and the country’s regulatory framework is struggling to respond. A growing body of evidence shows that synthetic media fraud has reached an industrial scale, hitting the online gambling industry especially hard.
Around eight million deepfakes were shared in the UK last year. That figure is nearly four times the number recorded in 2023, according to the Home Office’s Accelerated Capability Environment.
A 2026 report from the AI Incident Database described this type of fraud as having gone “industrial.” Fred Heiding, a researcher at Harvard University studying AI-fuelled scams, warned that “the worst is yet to come.”
The online gambling sector has been one of the hardest hit. Industry intelligence firm Gambling IQ found that sector fraud surged 73% between 2022 and 2024.
Deepfakes have been used to bypass Know Your Customer checks and to commit mass bonus abuse on gambling platforms. Convincing voice cloning and synthetic video let scammers impersonate real people at scale.
Law Enforcement Falls Behind
A 2025 report by the Alan Turing Institute found that UK law enforcement is “inadequately equipped to deal with AI-fuelled fraud.” The report was authored by Joe Burton, Professor of Security and Protection Science at Lancaster University.
Burton was direct in his assessment. “AI-enabled crime is already causing serious personal and social harm and big financial losses,” he said.
He called for law enforcement to be given better tools to disrupt criminal groups, warning that without them the criminal use of AI technologies will expand rapidly.
The UK Gambling Commission currently places the main responsibility for preventing crime on operators, who must put their own fraud prevention policies and controls in place.
But with AI capabilities advancing quickly, platforms alone cannot handle the problem. Many AI-related scams in the gambling world originate outside regulated platforms entirely.
Social media plays a central role in spreading these scams. Platform algorithms can amplify misleading content by design, prioritizing engagement over accuracy.
In November 2025, Reuters reported that Meta’s internal data showed about 10% of its 2024 revenue, roughly $16 billion, came from ads linked to scams and banned goods.
Last week, Reuters found that Meta had failed to remove scam content from its platforms in the UK over 1,000 times in a single week. Among the scams were illicit online casinos using deepfakes to attract users.
Regulation Moves Slowly
Ofcom has started writing new rules to regulate deepfakes under the Online Safety Act 2023 and the Data (Use and Access) Act 2025. But the regulator’s own guidance shows the limits of the current system.
Some AI chatbots fall outside the regulatory scope entirely: they operate as closed systems and qualify neither as search services nor as platforms with user-to-user interaction.
While the Online Safety Act began taking effect in March 2025, the power to act on paid scam ads has been delayed until at least 2027. That leaves enforcement dependent on voluntary steps by companies like Meta.
Neither the Financial Conduct Authority nor Ofcom currently has direct authority to act against these ads. Purely synthetic content, including AI-generated images and videos, often falls outside oversight unless it meets specific legal thresholds.
The burden of deepfake scams continues to fall on platforms and users, even though the systems creating these risks sit beyond their control.
