Americans over the age of 60 lost $4.9 billion to scams in 2024, a 43% increase over the year before, according to the FBI's Internet Crime Complaint Center. Research presented at Governors State University in April 2026 documents how artificial intelligence, specifically voice cloning and synthetic video, is dramatically accelerating fraud targeting older adults. The study describes three specific vulnerabilities that AI weaponises: difficulty distinguishing legitimate from fraudulent communications, difficulty judging the truthfulness of online information, and limited understanding of how algorithms use personal data.
How AI Makes Elder Fraud Worse
Traditional scam-awareness advice, such as "look for red flags in the email" or "listen for suspicious accents," is increasingly ineffective against AI-powered fraud. Voice cloning from a few seconds of publicly available audio can produce a convincing replica of a family member's voice. Synthetic video can show a "grandchild" or trusted authority figure making a direct request. Personalised phishing emails, generated by large language models from scraped social media data, avoid the grammatical errors and generic language that older adults were taught to recognise as warning signs.
The Grandparent Scam Goes AI
The "grandparent scam", in which a fraudster calls an elderly victim claiming to be a grandchild in distress, has been supercharged by AI voice cloning. In documented cases from 2024 and 2025, victims received calls featuring realistic voice clones of family members they had spoken to many times, requesting emergency wire transfers. The Jennifer DeStefano case, in which a mother received a call featuring a cloned version of her daughter's voice while the caller claimed the girl had been kidnapped and demanded a $1 million ransom, illustrates how emotionally devastating these attacks can be even when they fail. As AI voice quality improves, the psychological impact intensifies. See our full AI video fraud cases roundup for documented examples.
Projected Losses: $40 Billion by 2027
Deloitte has projected that generative AI, including deepfakes, could drive US fraud losses to $40 billion by 2027, up from $12.3 billion in 2023. The fintech sector alone saw deepfake incidents surge by 700% in 2023. The combination of rapidly improving generation quality, near-zero technical barrier to entry, and an older population that has not had time to develop deepfake literacy creates what researchers describe as a compounding vulnerability.
Protective Steps for Individuals and Families
- Establish a family safe word: Agree on a verbal code that any family member must provide when calling to request emergency money. A cloned voice will not know it.
- Always call back on a known number: If you receive an unexpected urgent call, hang up and call the person back on their known contact number independently.
- Verify before you transfer: No legitimate emergency requires immediate wire transfer or gift card purchase. Always verify through a second channel first.
- Check suspicious video: If you receive a video message from someone you know that seems unusual, download it and run it through our free Sora AI Detector before acting on it.
For the technical background on how AI video fraud works, read our complete AI video detection guide. For verification techniques applicable to video, see our video authenticity guide. Follow our AI News section for ongoing coverage of AI fraud developments.