Google Gemini 'High Risk' for Young Users, Says Report

A new safety assessment flags Google's AI for kids and teens, raising concerns about content and design.

A recent safety assessment by Common Sense Media has labeled Google Gemini 'High Risk' for children and teenagers. The report indicates that even Gemini's dedicated youth versions may expose users to inappropriate content and lack child-centric design. The assessment comes amid growing concerns about AI safety for younger audiences.

By Mark Ellison

September 6, 2025

5 min read

Key Facts

  • Common Sense labeled Google Gemini 'High Risk' for kids and teens.
  • Gemini's youth tiers are essentially adult versions with added filters.
  • The AI could share inappropriate content (sex, drugs, alcohol) and unsafe mental health advice.
  • Previous AI-related lawsuits involve teen suicides linked to chatbots.
  • Apple is reportedly considering Gemini to power its future AI-enabled Siri.

Why You Care

Are you worried about what your kids encounter online? A new safety assessment just flagged Google Gemini, Google’s AI assistant, as ‘High Risk’ for young users. This news matters to parents, educators, and anyone concerned about digital well-being, and it highlights an essential question: are AI tools truly safe for the youngest among us?

This finding is particularly important because it directly affects the digital environments our children navigate daily. Understanding these risks can help you make informed decisions about technology use in your home. The report also raises questions about how AI products are designed for different age groups.

What Actually Happened

Common Sense Media, a non-profit focused on media and technology for children, recently released a safety assessment of Google Gemini, Google’s artificial intelligence model. The findings indicate that both Gemini’s ‘Under 13’ and ‘Teen Experience’ tiers are essentially the adult version with some additional safety features layered on top, the organization found. In other words, these products are not built with child safety as a core design principle. The assessment shows that Gemini could still share “inappropriate and unsafe” material, including information on sex, drugs, and alcohol, as well as problematic mental health advice. This poses significant concerns for families and suggests a fundamental flaw in how these AI products are conceived for younger audiences.

Why This Matters to You

This assessment has practical implications for you and your family. If your children interact with AI, even seemingly child-friendly versions, they might encounter unexpected content. Imagine, for example, your teen asking Gemini for advice on a personal issue; the AI might provide guidance that is not age-appropriate or even harmful. This is a real concern, as AI chatbots have allegedly played a role in some teen suicides. OpenAI is facing a wrongful death lawsuit related to a teen’s suicide after the teen allegedly consulted ChatGPT, and Character.AI was similarly sued over a teen user’s death. These cases highlight the severe consequences of inadequate AI safety.

What kind of digital safeguards do you currently have in place for your children? It’s crucial to consider these findings when setting up parental controls or discussing online safety. Robbie Torney, Common Sense Media’s Senior Director of AI Programs, emphasized the need for tailored solutions. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development,” Torney said. This means AI designed for a 10-year-old should differ significantly from one designed for a 16-year-old. The current approach, as detailed in the report, does not meet this standard.

Key Findings from the Common Sense Assessment:

  • Under 13 Tier: Labeled ‘High Risk’ due to its adult-centric core.
  • Teen Experience Tier: Also ‘High Risk’ for similar reasons.
  • Content Exposure: Potential for inappropriate material (sex, drugs, alcohol).
  • Mental Health Advice: Risk of unsafe or problematic guidance.
  • Design Flaw: Not built with child safety from the ground up.

The Surprising Finding

The most surprising revelation from the assessment is how Google Gemini’s youth-focused versions are structured. You might assume that products marketed for kids and teens are built specifically for them. However, Common Sense found that Gemini’s “Under 13” and “Teen Experience” tiers both appeared to be the adult version of Gemini under the hood, with some additional safety features added on top. This is counterintuitive because it challenges the idea of bespoke, age-appropriate AI development: it suggests a modification of an adult product rather than a ground-up design. That approach can leave significant gaps in protection, and it means the AI may not account for the unique developmental needs of younger users. The organization identified this ‘one-size-fits-all’ modification strategy as a key reason for the ‘High Risk’ label, highlighting a disconnect between perceived safety and actual design. The finding challenges the common assumption that adding filters on top is sufficient for child safety.
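To make that design distinction concrete, here is a minimal, purely hypothetical sketch of the two approaches. Nothing below reflects Gemini’s actual implementation; every function name, keyword, and rule is illustrative. It contrasts a keyword filter layered on top of an adult-tuned model with a design where the user’s age shapes the response itself.

    # Hypothetical illustration only -- not Gemini's real architecture.
    # Contrasts a post-hoc "filter on top" with an age-aware design.

    def adult_model(prompt: str) -> str:
        """Stand-in for a general-purpose model tuned for adult users."""
        return f"[adult-tuned answer to: {prompt}]"

    BLOCKED_TOPICS = {"sex", "drugs", "alcohol"}  # illustrative keyword list

    def filtered_for_kids(prompt: str) -> str:
        """'Filter layered on top': a keyword check blocks some prompts,
        but everything else passes straight through to the adult model."""
        if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
            return "Sorry, I can't help with that."
        return adult_model(prompt)  # unchanged adult-tuned output

    def age_aware_model(prompt: str, age: int) -> str:
        """'Ground-up' design: the user's developmental stage shapes the
        response itself, not just a pass/block decision at the end."""
        if age < 13:
            style = "simple language, no mature themes, point to a trusted adult"
        elif age < 18:
            style = "teen-appropriate framing, link to vetted resources"
        else:
            style = "full adult response"
        return f"[answer to: {prompt} | rendered with: {style}]"

    if __name__ == "__main__":
        print(filtered_for_kids("tell me about photosynthesis"))
        print(age_aware_model("I feel really down lately", age=12))

The point of the sketch is that a post-hoc filter like filtered_for_kids only blocks or passes whole answers, can be sidestepped by rephrasing, and never adapts tone or guidance; that gap is what the report’s ‘not built from the ground up’ criticism describes.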

What Happens Next

Looking ahead, this assessment could influence how tech companies approach AI development for younger audiences. We might see a push for AI products built with child safety in mind from the ground up, rather than modified adult versions; future AI models could, for example, incorporate developmental psychology into their core design so that content and interactions are truly age-appropriate. The industry implications are significant, especially as reports indicate Apple is considering Gemini to power its AI-enabled Siri. That integration, expected next year, could expose many more teens to these risks, and Apple will need to mitigate the safety concerns the assessment details. For you, this means staying informed and advocating for stronger child safety features in AI; consider asking tech companies about their child-centric design principles. Expect to see more discussion of AI safety standards over the next 12 to 18 months. As the report states, for AI to be safe and effective for kids, it must be designed with their needs in mind.
