Published by Aeyan Raza
September 7, 2025

A new report from Common Sense Media, a nonprofit known for evaluating children’s digital safety, has raised serious concerns about Google’s Gemini AI, warning that it may not be safe enough for children and teenagers.
Released on Friday, the detailed risk assessment rated Gemini’s “Under 13” and “Teen Experience” modes as “High Risk.” The group said the AI platform, despite some safeguards, is still not properly designed for young users and could expose them to harmful or inappropriate content.
According to the report, Gemini’s versions for children and teens are largely adult-style AI systems with minimal safety adjustments. Analysts found that instead of building a child-first experience, Google appears to have added surface-level filters to an otherwise general-purpose chatbot.
One positive note highlighted by researchers was that Gemini clearly tells children it is not a real person, which helps reduce emotional attachment. However, experts say this alone is not enough.
“AI for kids cannot just be an edited version of an adult product,” said Robbie Torney, Senior Director of AI Programs at Common Sense Media. “Children at different developmental stages need very different protections.”
The report pointed to several troubling findings. In testing, Gemini was found capable of generating unsafe or age-inappropriate responses, including material related to sex, drugs, and alcohol, as well as unsafe advice on sensitive mental health topics.
These concerns are especially alarming as teenagers increasingly turn to AI chatbots for advice and emotional support. Experts warn that without strong guardrails, young users may treat AI responses as trustworthy guidance, even when the information is flawed or harmful.
The issue has gained global attention following recent lawsuits involving AI platforms, in which families allege that prolonged interactions with chatbots contributed to serious mental health harm among teens.
The timing of the report is significant. Industry reports suggest Apple is considering Gemini as the AI engine behind a future version of Siri. If that happens, millions of teenagers could gain easier access to the technology.
Common Sense Media warned that unless safety gaps are addressed, wider adoption could magnify the risks for young users.
In response, Google defended Gemini, stating that it already has multiple safety systems in place, especially for users under 18. The company acknowledged that some filters did not work as intended during testing and said new protections have since been rolled out.
Google added that it continues to refine its AI policies based on feedback from independent researchers and child safety groups.
Common Sense Media has evaluated several popular AI platforms in the past. It previously labeled Meta AI and Character.AI as “unacceptable risks,” Perplexity as “high risk,” ChatGPT as “moderate risk,” and Anthropic’s Claude as “minimal risk.”
With Gemini now under scrutiny, the debate over how to safely design AI for children is becoming harder to ignore.