Artificial intelligence has arrived in American election campaigns — and it is not leaving. In March 2026, the National Republican Senatorial Committee (NRSC) released an online political advertisement featuring an AI-generated deepfake of Texas Democratic Senate candidate James Talarico.
The clip showed a realistic but entirely fabricated version of Talarico speaking directly to camera for over a minute, reciting his own past social media posts in a lifelike synthetic voice. It is the longest and most convincing AI-generated political deepfake yet deployed by a major US party organisation.
What Happened: The Talarico Deepfake Ad
The NRSC ad featured a computer-generated version of Talarico, convincing enough in appearance and voice that casual viewers would not immediately recognise it as synthetic. The words “AI generated” appeared in small type in the lower right corner, where they were easy to miss.
Talarico himself never filmed the video. The ad drew immediate condemnation from Democratic lawmakers, with Senator Andy Kim of New Jersey calling the content “dangerous and wrong” and demanding national legislative action.
Texas Law Has a Critical Gap
Texas is one of the roughly half of US states with a law specifically addressing political deepfakes, but that law applies only in the 30 days before an election. Because the NRSC ad ran months before the November 2026 midterm, it fell entirely outside the law’s scope.
This legal gap is representative of the broader regulatory situation: patchy, delayed, and easily exploited by campaigns willing to push legal and ethical boundaries.
Both Parties Are Using AI Video
The Talarico ad is not an isolated case. Multiple AI-generated attack ads have circulated in the 2026 Texas Republican Senate primary race between incumbent Senator John Cornyn and Attorney General Ken Paxton.
One Paxton ad showed a deepfake version of Cornyn dancing with a Democratic congresswoman. Democratic California Governor Gavin Newsom posted satirical AI-generated video of Trump administration officials in handcuffs. Researchers studying the 2026 midterms describe synthetic media as “likely to become a routine campaign tool” in both parties.
Why This Matters: The Deepfake Election Threat
A peer-reviewed study published in Communications Psychology in early 2026 found that people continue to be influenced by deepfake video content even when they are explicitly told the video is fake before watching it.
This “continued influence effect” makes deepfakes particularly dangerous in electoral contexts: even debunked synthetic video leaves a residue on voter perception. With AI generation tools now accessible to anyone with a web browser, the barrier to producing convincing synthetic video has effectively dropped to zero.
Also: Indian Election Deepfakes in Assam 2026
The US is not alone. In April 2026, Asom Jatiya Parishad candidate Kunki Chowdhury filed a criminal complaint in Guwahati, India, after an AI-generated deepfake video distorting her statements was widely circulated on social media during the Assam state elections. She accused the opposing party’s digital cell of orchestrating the manipulation. The incident is the latest in a global pattern of AI synthetic video being deployed as an election interference tool.
How to Verify Political Video
If you encounter dramatic video of a political figure making surprising statements, always verify before sharing. Run any suspicious clip through our free Sora AI Detector for an immediate AI probability score. Look for the 10 visual signs of AI-generated video including unnatural skin, lip sync problems, and physics errors. Read our dedicated guide for journalists verifying AI video and our full AI video detection guide. Follow our AI News section for ongoing coverage of deepfakes in elections.
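For readers comfortable with a command line, slowing a clip down to individual still frames is often the quickest way to spot the visual signs listed above, since lip-sync drift, warped hands and flickering backgrounds are much easier to see in stills than at full playback speed. The snippet below is a minimal illustrative sketch of one way to do that, not an official tool from this site: it assumes Python 3 with the opencv-python package installed, and the file name suspicious_clip.mp4 is a placeholder for whatever clip you want to inspect.

```python
# Minimal sketch: extract still frames from a suspicious clip for manual review.
# Assumes Python 3 with opencv-python installed (pip install opencv-python).
# "suspicious_clip.mp4" is a placeholder file name, not a real asset.
import os
import cv2

VIDEO_PATH = "suspicious_clip.mp4"   # placeholder: the clip you want to inspect
OUTPUT_DIR = "frames_for_review"     # still images are written here
STILLS_PER_SECOND = 2                # how many stills to keep per second of video

os.makedirs(OUTPUT_DIR, exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
if not cap.isOpened():
    raise SystemExit(f"Could not open {VIDEO_PATH}")

video_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
step = max(int(video_fps // STILLS_PER_SECOND), 1)

frame_index = 0
saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_index % step == 0:
        # Save a still; inspect skin texture, teeth, ears, hairlines, hands and
        # backgrounds, where AI generators most often leave visible artifacts.
        cv2.imwrite(os.path.join(OUTPUT_DIR, f"frame_{frame_index:06d}.jpg"), frame)
        saved += 1
    frame_index += 1

cap.release()
print(f"Saved {saved} frames to {OUTPUT_DIR}/ for frame-by-frame review")
```

Manual frame review is a complement to, not a substitute for, automated detection: it helps you notice the obvious artifacts, while a detector score gives you a second, independent signal before you decide whether to share.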