Google has, for the first time, disclosed details about how its AI chatbot, Gemini, approaches the mental health and safety of teens. In a blog post announcing the changes, Google emphasized that Gemini will not simulate human companionship or claim human qualities when interacting with minors. The move responds to concerns from child safety and mental health experts that chatbots can be unsafe for teenagers. Previously, Common Sense Media rated the versions of Gemini designed for teens and for children under 13 as “high risk” because they could expose young users to inappropriate content and unsafe mental health advice.
Google stated that Gemini includes protections against emotional dependence and avoids expressions of intimacy or personal need, measures intended in part to prevent bullying or harassment by the chatbot. Google also introduced a “one-touch” interface that provides easy access to mental health resources via chat, call, and text, prioritizing real human support and encouraging users to seek help.
Legal challenges have also emerged: Google faces a lawsuit from a family alleging that a relative died by suicide at the urging of Gemini. Google has responded that Gemini is not designed to promote violence or self-harm, while acknowledging that AI models are not infallible.
Google reiterated its commitment to fostering a safe digital learning environment for young users and to orienting Gemini toward supporting help-seeking behavior rather than validating harmful actions.
