“The Emergence of AI-Created Forgeries Signals the Start of the ‘Deep Skepticism’ Age”

"The Emergence of AI-Created Forgeries Signals the Start of the 'Deep Skepticism' Age"

“The Emergence of AI-Created Forgeries Signals the Start of the ‘Deep Skepticism’ Age”


### The Age of “Profound Uncertainty”: Tackling the Trials of AI-Crafted Media

The rapid advance of artificial intelligence (AI) is transforming how we produce, share, and interpret media. One of the most pressing concerns arising from this shift is the spread of photorealistic AI-created images and videos, commonly termed “deepfakes.” These fabrications are saturating social media platforms such as X (formerly Twitter) and Facebook, fueling a new level of skepticism about the legitimacy of digital content. This trend, which I refer to as “profound uncertainty,” is reshaping how we interact with media and raising deep questions about truth, trust, and the future of information.

### What is Profound Uncertainty?

Profound uncertainty refers to the growing doubt cast on genuine media by the rise of generative AI. As AI tools become more widely available and more capable of producing highly believable fake content, people increasingly question the reliability of what they encounter online. This skepticism has significant consequences: it is now easier to plausibly claim that real events never happened by asserting that the documentary evidence was fabricated with AI.

While the notion of profound uncertainty is not entirely new, its real-world effects are coming into sharper focus. Since the term “deepfake” first appeared in 2017, AI-generated media has advanced rapidly, and profound uncertainty has been weaponized on numerous occasions. Conspiracy theorists have claimed that President Joe Biden has been replaced by an AI-driven hologram, and former President Donald Trump has accused Vice President Kamala Harris of using AI to fake crowd sizes at her events. In another case, Trump dismissed a photo of himself with writer E. Jean Carroll, who won a civil suit against him over sexual abuse and defamation claims, calling it an AI fabrication.

These instances underscore how profound uncertainty is leveraged to undermine legitimate evidence and manipulate public perception.

### The Surge of Deepfakes and the Endurance of Doubt

Doubt has long served as a political instrument, stretching back to antiquity. Its modern, AI-powered form is simply the latest iteration of a tactic designed to sow uncertainty, manipulate public opinion, and obscure the truth. AI has become a powerful tool for deceivers, letting them fabricate or alter images, audio, text, or video that appears genuine.

The term “deepfake” was coined by a Reddit user who shared AI-generated pornography in which celebrities’ faces were swapped onto performers’ bodies. Since then, the technology has evolved dramatically, making it easy for almost anyone to create convincing counterfeit media. In the 20th century, trust in media was rooted partly in the time, skill, and resources required to produce documentary images and film. As AI-generated content becomes widespread, that trust is eroding.

This erosion of trust reaches beyond political debate and the legal system. It also affects our shared understanding of historical events, which depends on media artifacts for documentation. As AI-generated content grows more sophisticated, we must reevaluate how we establish truth in media.

### Profound Uncertainty in the Judicial System

The rise of deepfakes is also raising concerns within the legal system. In April 2024, a group of federal judges discussed the potential for AI-generated deepfakes to introduce false evidence or cast doubt on genuine evidence in court. The judges emphasized the difficulty of authenticating digital evidence in an era of increasingly sophisticated AI. While no immediate rule changes were proposed, the discussion underscores the need to address AI-related challenges in legal settings.

The capacity of deepfakes to undermine the integrity of the judicial system is a serious concern. If deepfakes can cast doubt on authentic evidence, they could lead to wrongful convictions or acquittals, further eroding public faith in the justice system.

### The Liar’s Dividend: A New Instrument for Misinformation

In 2019, legal scholars Danielle K. Citron and Robert Chesney coined the term “liar’s dividend” to capture one consequence of profound uncertainty. The liar’s dividend refers to a deceiver’s ability to discredit authentic evidence by claiming it was fabricated with AI. As society grows more aware of the dangers posed by deepfakes, liars can exploit that awareness to evade accountability for their actions.

The liar’s dividend is particularly alarming because it deepens distrust of traditional news outlets and undermines the foundations of democratic discourse. As objective truths lose their force, opinion tends to crowd out fact, creating fertile ground for authoritarianism and the spread of misinformation.

### The Ramifications for Social Trust and Historical Narratives

Profound uncertainty extends beyond current events and legal disputes. It can also distort our understanding of history. As AI-generated content becomes more commonplace, future generations may struggle to distinguish authentic historical records from fabricated ones, and the legitimacy of well-documented historical events may be called into question.