Google's New SynthID Detector: Verifying AI-Generated Content

A new portal helps identify AI-created images, audio, video, and text from Google AI models.

Google has launched the SynthID Detector, a verification portal designed to identify AI-generated content created with Google AI. This tool can detect invisible watermarks across various media types, offering crucial transparency in the evolving AI landscape. It helps users verify the authenticity of digital content.

By Sarah Kline

December 4, 2025

4 min read


Key Facts

  • The SynthID Detector identifies AI-generated content from Google AI models.
  • It can detect invisible SynthID watermarks in images, audio, video, and text.
  • Over 10 billion pieces of content have already been watermarked with SynthID.
  • The detector highlights specific portions of content where watermarks are found.
  • Google is partnering with NVIDIA and Adobe to expand SynthID's use and detection.

Why You Care

Ever wonder if that viral image or compelling audio clip is actually real? As AI advances, telling the difference is getting harder. Google has launched a new tool to help you verify whether content was AI-generated, and understanding where content comes from is increasingly vital in today's digital world.

What Actually Happened

Google recently announced the SynthID Detector, a new verification portal, according to the announcement. The portal identifies content created with Google AI across multiple media types, including images, audio, video, and text, and it can highlight which portions of a piece of content are most likely to carry a SynthID watermark. The goal is to offer essential transparency in the rapidly evolving generative media landscape.

SynthID is an invisible digital watermark. It remains detectable even after content is shared or transformed, as the company reports. Originally developed for AI-generated imagery, SynthID now covers text, audio, and video, including content from Google's Gemini, Imagen, Lyria, and Veo models. Over 10 billion pieces of content have already been watermarked with SynthID, the team revealed.

Why This Matters to You

This new tool offers practical implications for anyone consuming or creating digital media. Imagine you are a journalist verifying a source. Or perhaps you are a content creator wanting to assure your audience of authenticity. The SynthID Detector provides a clear method for verification.

Here’s how the SynthID Detector works:

  1. Upload Content: You upload an image, audio track, video, or text created with Google’s AI tools.
  2. Scan for Watermarks: The portal scans your uploaded media. It detects if the content, or specific portions, contain a SynthID watermark.
  3. View Results: The portal presents the results. If a watermark is detected, it highlights the likely watermarked parts.
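The three-step workflow above can be sketched in code. Note that this is purely illustrative: Google has not published a programmatic API for the SynthID Detector, so the `DetectorPortal` class, its methods, and the sentinel-based "detection" below are all hypothetical stand-ins for the real portal's behavior.

```python
# Illustrative sketch of the upload -> scan -> results workflow.
# DetectorPortal is a hypothetical stand-in, NOT a real Google API.

from dataclasses import dataclass, field


@dataclass
class ScanResult:
    watermark_found: bool
    # Regions (e.g., byte ranges, time ranges, or text spans) that are
    # likely to carry the watermark.
    flagged_regions: list = field(default_factory=list)


class DetectorPortal:
    """Hypothetical stand-in for the SynthID Detector portal."""

    def upload(self, media_bytes: bytes, media_type: str) -> str:
        # Step 1: upload an image, audio track, video, or text file.
        self._media = media_bytes
        return f"upload-id-{media_type}"

    def scan(self, upload_id: str) -> ScanResult:
        # Step 2: scan for a SynthID watermark. Here a sentinel byte
        # pattern stands in for the real (invisible) signal.
        found = b"SYNTHID" in self._media
        regions = [(0, len(self._media))] if found else []
        return ScanResult(watermark_found=found, flagged_regions=regions)


portal = DetectorPortal()
upload_id = portal.upload(b"...frames...SYNTHID...frames...", "video")
result = portal.scan(upload_id)  # Step 3: view the results.
print(result.watermark_found)
```

In the real portal these steps happen through a web interface rather than code, but the shape of the interaction is the same: submit media, scan, and review which portions were flagged.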

This means you can quickly check the origin of content. “The portal provides detection capabilities across different modalities in one place and provides essential transparency in the rapidly evolving landscape of generative media,” as mentioned in the release. How might this change your approach to sharing information online?

The Surprising Finding

What’s truly surprising is the sheer scale and robustness of SynthID’s application. The team revealed that over 10 billion pieces of content have already been watermarked. This challenges the common assumption that AI watermarking is still in its infancy, and it shows a significant, proactive effort by Google to embed transparency from the outset. What’s more, SynthID’s ability to persist through various content transformations is unexpected: many might assume such watermarks would be easily removed or degraded. This robustness ensures the watermark remains detectable even when content is modified or re-shared, the research shows, which is crucial for maintaining content integrity across the internet.
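To see why a statistical watermark can survive edits, consider a toy scheme: a watermarking generator slightly prefers words from a secret "green" subset of the vocabulary, and a detector measures how over-represented green words are. This green-list idea (in the spirit of published academic watermarking work, e.g. Kirchenbauer et al.) is NOT SynthID's actual algorithm, which Google describes only at a high level; it simply illustrates why editing or re-sharing text leaves the signal largely intact.

```python
# Toy green-list text watermark: NOT SynthID's real algorithm,
# just an illustration of a statistical watermark signal.

import hashlib


def is_green(word: str) -> bool:
    # Deterministic, secret-looking partition of words into green/red.
    return hashlib.sha256(word.encode()).digest()[0] % 2 == 0


def green_fraction(text: str) -> float:
    # A detector's test statistic: the share of green words.
    # Unwatermarked text hovers near 0.5; watermarked text runs higher.
    words = text.lower().split()
    return sum(is_green(w) for w in words) / len(words)
```

Because the statistic is spread across the whole text, deleting or rewording a few sentences shifts it only slightly, which is the same intuition behind a watermark that persists through sharing and transformation.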

What Happens Next

Google is currently rolling out the SynthID Detector to early testers, and it will become more broadly available in the coming months, according to the announcement. Journalists, media professionals, and researchers are encouraged to join the early testing program. A news organization, for example, could integrate the tool into its fact-checking workflow to rapidly verify submitted media.

The company is also expanding its ecosystem. It partnered with NVIDIA to watermark videos generated by NVIDIA's AI models, and, as mentioned in the release, it is collaborating with Adobe to enable other partners to detect SynthID watermarks. Our advice for readers is to stay informed about these developments and to consider how content verification tools can enhance your digital literacy. Content transparency remains a complex challenge, the company reports, and continued collaboration within the AI community is vital to broaden access to these transparency tools.
