
Google To Label AI-Generated Images In Search

In recent months, images created or altered by artificial intelligence have increasingly dominated Google search results. This trend has made it challenging for users to locate genuine content. To address this issue, Google announced on Tuesday that it would soon start labeling search results that feature AI-generated or AI-modified images.

This labeling will be visible through the “About this image” feature and will be implemented across Google Search, Google Lens, and the Circle to Search function on Android devices. Additionally, the tech giant plans to incorporate this labeling in its advertising services and is exploring similar identification methods for YouTube videos, with further updates expected later this year.

Image: AI-generated images appearing in Google Search. (Digital Trends)

To identify these AI-generated images, Google will rely on C2PA metadata, a provenance standard developed by the Coalition for Content Provenance and Authenticity, which the company joined as a steering committee member earlier this year. This metadata records an image's history, detailing when and where it was created as well as the tools and software used to produce it.
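For readers curious what that provenance data looks like in practice, here is a minimal, illustrative Python sketch. It assumes a C2PA manifest has already been extracted from an image into JSON; the field names ("c2pa.actions", "digitalSourceType") follow the public C2PA specification, but the helper function and sample manifest are hypothetical and not drawn from Google's or C2PA's own tooling.

# Illustrative sketch: scanning a C2PA manifest (already extracted to JSON-like
# dicts) for signs that an image was created or edited with generative AI.
# Field names follow the public C2PA specification; the helper itself is
# hypothetical and not part of any Google or C2PA product.

# IPTC digital source type values that the C2PA spec uses to flag generative AI.
GENAI_SOURCE_TYPES = {
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
    "http://cv.iptc.org/newscodes/digitalsourcetype/compositeWithTrainedAlgorithmicMedia",
}

def looks_ai_generated(manifest: dict) -> bool:
    """Return True if any action assertion declares a generative-AI source type."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") in GENAI_SOURCE_TYPES:
                return True
    return False

# Example manifest fragment of the kind a C2PA-aware tool might embed.
sample = {
    "claim_generator": "Example AI Image Tool/1.0",
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia",
                    }
                ]
            },
        }
    ],
}

print(looks_ai_generated(sample))  # prints: True

In broad terms, this is the kind of check the "About this image" feature can make: if the image carries a signed manifest whose creation action declares a generative-AI source type, it can be surfaced to the user as AI-generated or AI-edited.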

Several leading companies have joined C2PA, including Amazon, Microsoft, OpenAI, and Adobe. However, participation from hardware manufacturers has been limited, with only a few camera models from Sony and Leica currently supporting the standard. Some notable developers of AI generation tools, such as Black Forest Labs (known for the Flux model used by Grok), have opted not to adopt it.

Online scams employing AI-generated deepfakes have surged dramatically over the past couple of years. In one instance from February, a finance worker in Hong Kong fell prey to a scheme in which scammers impersonated the firm's CFO during a video call, leading to a loss of $25 million. Furthermore, a May study by verification provider Sumsub found a 245% increase in deepfake-related scams worldwide between 2023 and 2024, with U.S. cases rising by 303%.

David Fairman, the chief information officer and chief security officer of APAC at Netskope, noted in May that “the public accessibility of these services has lowered the barrier for cybercriminals,” adding that they no longer need advanced technological skills to execute these scams.

Rukhsar Rehman
A University of California alumna with a background in mass communication, she now resides in Singapore and covers tech with a global perspective.
