Google Photos may soon be able to tell users whether an image is AI-generated. Code snippets found in the latest version of the app suggest an upcoming feature that would surface IPTC credit metadata, helping users identify AI-generated images.
The new code includes references to “credit” and “digital source type” IDs, likely designed to show the image’s creation source and credit tag. For instance, images generated through Google Gemini usually have a “Made with Google AI” credit tag visible in the image’s EXIF data. Google Photos also tags images edited with its Magic Editor with labels like “AI-Generated with Google Photos.”
Though the exact function of the “ai_info” ID remains unclear, it may identify the specific AI tool or model used to generate the image.
These new functionalities would likely appear in the image details section of the app. While these changes are not yet live in Google Photos, the update is expected to roll out in the near future. This new feature aims to help users navigate the increasing presence of AI-manipulated imagery in our digital lives.
As generative AI advances, it is easy to see it as yet another area where machines are taking over. But humans remain at the centre of AI art, just in ways we might not expect. With the aid of AI image generators like Dall-E 3, Stable Diffusion, and Midjourney, and the generative features integrated into Adobe’s Creative Cloud programs, you can now transform a sentence or phrase into a highly detailed image in mere seconds.
Images, likewise, can be nearly instantly translated into descriptive text. Although there were plenty of precursors, it wasn’t until January 2021 that AI art became widespread news, thanks to tools like DALL-E. Neural networks, first proposed as early as 1943, had been developing for decades.
By 2015, algorithmic processes could form simple sentences to describe an image. Researchers quickly realized they could reverse the process: input tags or natural language to produce images. This reversal, however, was not straightforward.
Early attempts often constrained both the style of an image and its subject matter.
While today’s text-to-image tools have achieved significant automation, their architecture and maintenance still rely on human input.
This intersection of human creativity and machine learning continues to shape the evolving landscape of AI-generated art, revealing hidden traces of humanity embedded within.

A Google image search screenshot showing a staggering number of AI-generated results over real-world examples has stirred significant debate online. The example in question?
A simple search for “baby peacock” yielded fifteen results, of which only four were real images. This phenomenon has driven many creatives and users to call for stricter regulations on AI-generated content to curtail the spread of misinformation. The issue extends beyond just misleading search results.
Recently, AI’s role in the ongoing presidential race has highlighted broader concerns about the regulation and labelling of AI-generated content. As AI-generated images become more prevalent, differentiating between real and fake becomes increasingly challenging. Various communities have expressed their frustration on social media platforms such as X and Reddit.
One Reddit user lamented the difficulty of finding quality inspiration as AI-generated content proliferates, while another called generative AI “the most overwhelmingly net-negative tech advancement” in recent memory. There is growing consensus that search engines should offer options to filter out AI content, and that robust AI guidelines and regulations are urgently needed. In addition to search engines, social media platforms are also experiencing a flood of AI content.
Artists and creators, who often rely on platforms like Google for sourcing reliable reference images, are finding it increasingly difficult to find authentic content. The current state of affairs raises concerns about the future reliability of search engines. As AI continues to learn and produce more content, the risk of it feeding on its own generated data could lead to a cyclical degradation of content quality online.
Navigating this AI-driven landscape requires vigilance. Awareness and education about AI misinformation are vital in preserving the integrity of human creativity. For those seeking refuge from AI inundation on traditional platforms, exploring alternative platforms where communities are reclaiming their space might offer some relief.
The growing chorus of voices calling for change underscores the urgency of addressing the impact of AI on digital content.