It’s no surprise that Google spent a significant amount of time highlighting the new AI features on its Pixel 10 smartphones earlier this week. From Gemini Live to AI image editing, Google claims the [Pixel 10 series](https://www.bgr.com/1945289/google-pixel-10-series-release-date-price-specs/) is an AI-enhanced powerhouse. At the same time, it’s getting harder to tell whether content was made by a human or by AI. That’s why Google will let you know whenever AI is used to edit a photo on the Pixel 10.
According to Google, every photo captured with the Pixel 10 will carry a digital watermark based on the latest industry standard from the Coalition for Content Provenance and Authenticity ([C2PA](https://c2pa.org/)). That watermark currently appears as a small “cr” logo in the corner of the image. Tap it, and it reveals more information about the photo, such as when the credentials were issued, who created the image, which app or device was used, and whether any AI tools were involved.
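For the curious, that information lives in a C2PA manifest embedded in the image, which records edit actions and their source types. Below is a minimal Python sketch of how one might scan an exported manifest for AI involvement; the file name and the exact JSON layout are illustrative assumptions, not Google's or the C2PA SDK's actual output format.

```python
import json

# Illustrative sketch: inspect an exported C2PA manifest (JSON) for
# AI-related edit actions. In practice the manifest is embedded in the
# image file and would be extracted with a C2PA-aware tool or library;
# "pixel10_photo_manifest.json" is a hypothetical export for this example.
with open("pixel10_photo_manifest.json") as f:
    manifest = json.load(f)

# C2PA manifests record edit history as "c2pa.actions" assertions.
for assertion in manifest.get("assertions", []):
    if assertion.get("label") == "c2pa.actions":
        for action in assertion.get("data", {}).get("actions", []):
            # A digitalSourceType of "trainedAlgorithmicMedia" marks
            # content generated or altered by an AI model.
            source = action.get("digitalSourceType", "")
            if "trainedAlgorithmicMedia" in source:
                print(f"AI involvement recorded: {action.get('action')}")
            else:
                print(f"Edit action: {action.get('action')}")
```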
### Simple credentials verification
Google began working with the C2PA back in 2024 to show users when a photo has been edited or generated with AI. Since then, the company has helped develop the latest version of Content Credentials and is even integrating it directly into Google Search, so any image containing C2PA metadata is instantly recognizable by the small “cr” watermark.
Google has already rolled out the new standard in Google Lens, Images, and Circle to Search, and it will come to the Google Photos app in the coming weeks. With the Pixel 10 making C2PA a core part of the phone’s camera experience, it wouldn’t be surprising to see other smartphone makers follow suit soon.
This is a much-needed step toward keeping AI-generated content in check, especially since [it’s becoming increasingly difficult to tell whether something was made with AI](https://www.bgr.com/tech/you-wont-believe-this-movie-was-made-with-ai/). It also helps that you’ll be able to view the credentials even when an image wasn’t made with AI, which should ease concerns as long as people can see the photo came from a device or app that supports the standard. With [Trump’s AI agenda for America](https://www.bgr.com/1920174/americas-ai-action-plan-revealed-by-trump-administration/) focused on limiting moderation, it’s good to see Google and others stepping up to fill these gaps.