Proof-of-life videos would be more reliable if we could trust our eyes entirely.
Social media is currently flooded with conspiracy theories claiming Benjamin Netanyahu has been killed or injured and replaced with AI-generated deepfakes. Clips that appear to show the Israeli Prime Minister with extra fingers or a bottomless coffee cup illustrate how much easier it used to be to prove that something is real. The evidence suggests Netanyahu is alive, but AI's ability to convincingly replicate people across formats makes the rumors hard to dispel, and it shows how little we can trust our own eyes today.
The theories emerged after a Friday press conference that Netanyahu livestreamed. A clip of the broadcast circulated online, allegedly showing six fingers on Netanyahu's right hand. Because older AI tools notoriously struggled to render hands, speculation spread that Israel was using deepfake footage to conceal Netanyahu's death in an Iranian missile strike.
Closer inspection reveals that the "extra" finger is an artifact of video degradation and lighting. Fact checkers like Snopes and PolitiFact debunked the claims of AI generation, noting that the video's 40-minute runtime also far exceeds what current AI video models can produce.
To counter the clone theories, Netanyahu shared a video on his X account in which he counts his fingers in a coffee shop. Social media users quickly claimed that visual inconsistencies indicated an AI deepfake. Some commenters pointed to moments where the liquid in Netanyahu's coffee cup appeared to move unnaturally, or where his ring seemed to vanish into his skin, though these too could result from video degradation.
The clips carry no metadata from provenance systems like C2PA Content Credentials or SynthID that could verify their authenticity or reveal whether AI tools were used to make them. And platforms like Instagram and YouTube, which tag AI-generated or manipulated content, gave no indication either way about whether the footage was authentic or fabricated.
People want real assurance, especially amid the conflict between Iran, Israel, and the US. Our online environment lacks the tools to provide it, forcing us to constantly adapt, either by learning the methods of professional fact checkers or by relying on others to detect fakes for us.
Even before AI's rise, people feared news manipulation: the viral Kate Middleton proof-of-life photoshoot, for example, turned out to contain a botched edit. Now it's worse. AI tools generate content with fewer obvious "tells," making it harder to verify whether a photo or video is authentic. That fuels a trust crisis even where there is no evidence of manipulation, as with Netanyahu's original video.
Uncertainty fuels distrust in wartime. In a Truth Social post, President Donald Trump accused Iran of using AI to spread disinformation depicting attacks on the US that never happened, and suggested that media outlets repeating such claims should face treason charges. AI-generated disinformation is indeed common, but Trump himself has wielded deepfakes to sow political chaos, sharing more AI-generated memes and disinformation than policy bulletins.
Yet Trump audaciously told reporters that AI is "very dangerous" and should be handled carefully. Perhaps his administration could lead by example. For now, we can't even trust the way people hold their coffee cups.
