Samsung shows little urgency to safeguard reality from AI deepfakes.
On Thursday morning, I attended a Q&A panel with four senior Samsung smartphone executives. Until 2025, Samsung was the world's largest smartphone manufacturer and, by extension, the largest maker of cameras; it now ranks second, behind Apple.
When handed the microphone, I posed the following question:
There's a divide between people who want AI to do impressive things with their photos and videos, and people who want AI nowhere near them, worried that it erodes our ability to trust anything as real and undermines the very idea of photographic evidence.
Given that metadata tools like C2PA have largely failed to rein this in, I asked whether Samsung had come up with any new ideas for keeping AI imagery from taking over.
Samsung's executives didn't have any groundbreaking ideas to offer. But credit to Won-Joon Choi, COO and head of R&D for the mobile division, for answering the question directly and acknowledging the erosion of reality as a problem Samsung wants to tackle.
Choi and other Samsung leaders indicated the need to strike a balance between the pursuit of photographic authenticity and empowering smartphone users to be “more creative.” They deflected responsibility by framing it as a collective industry challenge, suggesting broader discussions are necessary. Samsung claims to have partially addressed this by adding a removable watermark to AI-generated images.
Another executive hinted that perceptions of AI-generated content might improve over time.
Here's some of what was said:
“We recognize the issue, given the significant amount of AI-generated content,” Choi said. “We must provide solutions for creativity while grappling with differentiating real from fake content. This industry-wide problem demands a collective solution.”
Addressing C2PA’s perceived failure, he added, “Though some see C2PA as a failure, it still offers a way to verify AI-created content. We must continue industry-wide efforts to resolve this.”
He concluded, “With joint efforts, I believe we can solve this.”
As my colleague Jess Weatherbed has observed, there's a risk that rhetoric about the industry addressing such challenges becomes a substitute for genuine action.
Asked about Samsung's recent AI initiatives, Samsung America exec Dave Das said the company is still evaluating AI's role in its advertising, acknowledging that its initial uses of AI drew clear feedback.
“We’re discerning AI’s right application, ensuring transparency between AI-generated and authentic content,” Das stated.
He framed this as a balancing act of business priorities rather than a matter of social responsibility, focused on giving creators choice and striving for “the right balance.”
Later, tech reporter Rich DeMuro queried Samsung on the possibility of easing watermark removal from AI photos, asking if customers might prefer unmarked images, like for Christmas cards.
Drew Blackard, Samsung America's SVP of mobile product management, responded that if enough customers want to remove the watermark, Samsung could look for ways to do both: satisfy that desire while keeping its authenticity mechanisms in place.
“At present, authenticity concerns lead us to focus on watermarking both in metadata and the image. Not all services do this,” he explained.
Blackard suggested that perceptions of AI content might shift positively over time, much like the initial apprehension toward user-generated content, which eventually became an accepted norm.
It's an open question whether Samsung and other smartphone makers have considered that perceptions of AI-generated content could also shift for the worse, particularly if it becomes associated with job losses and easy fraud. The industry might do well to act now, before it faces a backlash for helping create those problems.
