A burgeoning industry of deepfake detection startups employs AI to counter AI. As AI tools for creating fake audio, video, and images have become widely available, companies like Reality Defender, Pindrop, and GetReal have emerged to fight manipulated media. These startups use machine learning to identify deepfakes, which are often used for fraud, harassment, and creating nonconsensual content. Reality Defender, for example, trains its models on large collections of both real and AI-generated content, then runs inference on new media to flag likely fakes.
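To make the approach concrete, here is a minimal sketch of that kind of supervised detection pipeline: a binary classifier trained on labeled real and fake samples, then used at inference time to score unseen media. This is an illustration only, not Reality Defender's actual system; the features here are synthetic stand-ins, whereas a real detector would use learned embeddings from audio spectrograms or video frames.

```python
# Sketch of a train-then-infer deepfake classifier. The feature vectors and
# labels below are synthetic placeholders standing in for embeddings extracted
# from known-real and known-fake media.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder dataset: 1,000 samples x 64 feature dims; label 1 = fake, 0 = real.
X = rng.normal(size=(1000, 64))
y = (X[:, :8].sum(axis=1) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Training phase: fit the classifier on labeled real/fake examples.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Inference phase: score a new, unseen sample for how likely it is fake.
prob_fake = clf.predict_proba(X_test[:1])[0, 1]
print(f"estimated probability sample is fake: {prob_fake:.2f}")
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In production such systems swap the toy logistic regression for deep networks and the synthetic features for signal-level cues, but the train-on-labeled-media, score-at-inference structure is the same one the paragraph above describes.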
The author's attempts to create a fake version of their own voice convincing enough to fool family members were unsuccessful, even after fine-tuning the cloned voice. While the technology is improving, it still struggles with authenticity when the listener knows the speaker well: family members proved better at spotting the deepfake than colleagues or corporate systems would be. Speed matters too; the highest-quality text-to-speech output was too slow to sustain a real-time conversation, while faster settings noticeably degraded the voice.
The deepfake detection industry focuses mainly on corporate fraud as the threat landscape evolves. Companies face scams ranging from fraudulent job applicants to impersonation of executives, and as AI models improve, these attacks become cheaper and easier to execute, sharpening the need for robust detection tools. Detection software is primarily built for large enterprises; consumer solutions are not yet widely available.
Raising awareness about the threat remains a challenge, and consumer-focused tools, rolled out piecemeal, risk offering inconsistent protection. Reality Defender likens the future of deepfake detection to antivirus software: embedded in the services people already use rather than purchased separately. Although personal deepfake detection is limited today, the industry is evolving to protect institutions first and, eventually, consumers. Even though the author's deepfakes failed to fool family members, the potential for scams remains, and the industry's growth is crucial for tackling this emerging threat.
