Published by The Info Pakistan
September 7, 2025

Common Sense Media, a nonprofit focused on children’s online safety, has released a detailed risk assessment of Google’s Gemini AI, raising fresh concerns for parents worldwide. The report, published on Friday, rated Gemini’s child and teen versions as “High Risk,” citing that the platform is not fully tailored to meet the needs of younger users.

According to the assessment, while Gemini does inform kids that it is a computer and not a human friend — a positive step that helps reduce emotional dependence — the platform still carries significant risks. Analysts found that both the “Under 13” and “Teen Experience” modes were essentially adult versions of Gemini with only minimal safety filters. This approach, experts argue, fails to create a safe environment specifically designed for children.

The study highlighted troubling cases in which Gemini could provide “unsafe or inappropriate” content to children, including discussions of sex, drugs, alcohol, and sensitive mental health advice. These risks are particularly alarming given rising reports of teenagers being influenced by AI chatbots. OpenAI recently faced a wrongful death lawsuit after a teenager reportedly consulted ChatGPT for months before taking his own life, and Character.AI has been sued over comparable incidents.

The timing of the report is especially significant, as Apple is reportedly exploring Gemini as the large language model to power its next-generation Siri. Experts warn that if these safety concerns are not resolved, millions of teens could be exposed to harmful AI interactions.

Robbie Torney, Senior Director of AI Programs at Common Sense Media, said that Gemini “gets some basics right but stumbles on the details.” He emphasized that AI for kids cannot simply be a modified adult product, stressing that children at different developmental stages require tailored safeguards and age-appropriate guidance.
In response, Google defended its product, saying that multiple safety mechanisms are already in place, including strict policies for users under 18. The company acknowledged that some safety filters had not worked as intended and confirmed it had rolled out new protections after reviewing the findings.

Common Sense Media has previously rated other AI platforms, labeling Meta AI and Character.AI as “unacceptable risks,” Perplexity as “high risk,” ChatGPT as “moderate risk,” and Anthropic’s Claude as “minimal risk.” With Gemini now under scrutiny, the debate over AI safety for children continues to intensify.
