AI-generated video is being actively weaponised in financial fraud, political disinformation, and personal harassment. Here are the most significant cases of 2025–2026 and what they mean for everyday users.
The $25 Million Deepfake CFO Fraud
A finance employee at a multinational firm transferred $25 million after a video call featuring deepfake versions of the company CFO and several colleagues, generated live with real-time face-swap technology. Six arrests followed, but the money was gone. The case proved that real-time deepfake video is a viable corporate attack vector.
AI Voice Kidnapping Scam
A mother received a ransom call built around a cloned version of her daughter's voice; the callers demanded $1 million. The daughter was safe the entire time: voice-cloning AI had synthesised the audio from social media clips. As audio and video synthesis converge, such attacks become far more convincing.
Sora 2 Celebrity Deepfakes
Within days of Sora 2's September 2025 launch, synthetic videos of celebrities, politicians, and public figures flooded social media. Watermark-removal tools appeared within a week, and the C2PA metadata OpenAI relied on for attribution does not survive re-encoding or those tools. Pixel-level detection, via tools like our free Sora AI Detector, became the only reliable identification method, because it travels with the pixels themselves.
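If you want to check provenance yourself, the metadata layer is the place to start, even though it is the weakest one. Below is a minimal sketch of a C2PA manifest check, assuming the Content Authenticity Initiative's open-source `c2patool` CLI is installed and that its default JSON output follows the published manifest-store schema (`active_manifest`, `manifests`, `claim_generator`); the structure here is an assumption, not a guarantee. The point the code encodes is the asymmetry above: a present, valid manifest says something about origin, but a missing one proves nothing.

```python
# Sketch: first-pass C2PA provenance check for a video file.
# Assumes the open-source `c2patool` CLI is on PATH and that its default
# output is the manifest-store JSON (schema assumed, not guaranteed here).
import json
import subprocess
import sys

def check_c2pa(path: str) -> None:
    result = subprocess.run(
        ["c2patool", path],  # prints the C2PA manifest store as JSON when present
        capture_output=True,
        text=True,
    )
    if result.returncode != 0 or not result.stdout.strip():
        # Missing metadata is NOT evidence of authenticity: re-encoding and
        # watermark-removal tools strip it from genuine Sora output too.
        print(f"{path}: no C2PA manifest; treat as UNVERIFIED, not as human-made")
        return
    store = json.loads(result.stdout)
    active = store.get("manifests", {}).get(store.get("active_manifest", ""), {})
    # claim_generator names the tool that signed the asset (e.g. an OpenAI
    # identifier on unmodified Sora 2 exports).
    print(f"{path}: signed manifest found, claim generator: "
          f"{active.get('claim_generator', 'unknown')}")

if __name__ == "__main__":
    check_c2pa(sys.argv[1])
```

Treat this as a fast first filter: anything that comes back UNVERIFIED should go on to pixel-level detection rather than being assumed genuine.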
Political Disinformation Video
AI-generated videos of politicians making fabricated statements circulated before several regional elections. Platforms removed the content, but distribution outpaced removal. This is why journalists need fast, reliable detection before publishing.
What to Do
Run suspicious video through our free Sora AI Detector, learn the 10 visual signs of AI video, and read our guide on AI video in legal proceedings. Follow our AI News for ongoing case coverage.
How to Protect Your Organisation From AI Video Fraud
The fraud cases above share one preventable failure: the targets acted on video they had not verified. Simple verification protocols stop the vast majority of AI video fraud at the organisational level:
- Financial authorisation: confirm any financial request originating from a video call, especially an unusual or urgent one, through a second independent channel before execution. Call back on a known number. The $25M fraud succeeded because this step was skipped.
- Video evidence review: Any video submitted as evidence or documentation in a business context should be screened with an AI detector before being relied upon.
- Public communications: before sharing or acting on dramatic video footage from social media or external sources, run a quick detection check. Our free detector takes seconds, and the check can be automated at intake; see the sketch after this list.
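For teams that receive video at volume, the second and third checks can be wired into intake as a gate. The sketch below assumes a hypothetical HTTP detection endpoint; the URL, the `ai_probability` response field, and the 0.8 threshold are illustrative placeholders, not a real API.

```python
# Sketch: automated pre-screening gate for inbound video, per the protocol
# above. The endpoint URL, request fields, and response schema are
# HYPOTHETICAL placeholders; substitute your detector's real API.
import sys

import requests

DETECTOR_URL = "https://example.com/api/v1/detect"  # placeholder endpoint
THRESHOLD = 0.8  # illustrative cut-off: flag anything above 80% AI-likelihood

def screen_video(path: str) -> bool:
    """Return True if the video passes screening (low AI-likelihood)."""
    with open(path, "rb") as f:
        resp = requests.post(DETECTOR_URL, files={"video": f}, timeout=120)
    resp.raise_for_status()
    score = resp.json()["ai_probability"]  # hypothetical response field
    if score >= THRESHOLD:
        print(f"{path}: FLAGGED (AI probability {score:.0%}); escalate to manual review")
        return False
    print(f"{path}: passed screening (AI probability {score:.0%})")
    return True

if __name__ == "__main__":
    # Gate the rest of a workflow on the result, e.g. before accepting a
    # video as evidence or re-sharing external footage.
    sys.exit(0 if screen_video(sys.argv[1]) else 1)
```

Flagged files should go to manual review rather than being rejected outright: detectors return probabilities, not verdicts.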
For the broader context on AI-generated video as a disinformation and fraud tool, read our AI video misinformation guide. For the detection methodology that underpins these protections, see our complete detection guide. For the legal dimension when fraud has already occurred, read our AI video legal evidence guide.