The update responds to a wrongful death lawsuit accusing Gemini of encouraging a man to die by suicide.
Google has modified Gemini to better guide users toward mental health support during times of crisis. The change comes amid a wrongful death lawsuit claiming the chatbot encouraged a man to die by suicide, one of a growing series of lawsuits alleging harm from AI products.
Gemini’s “Help is available” module appears in conversations that suggest a potential crisis, directing users to resources such as a crisis hotline. The update introduces a “one-touch” interface for faster access to help.
The module now responds more empathetically to encourage users to seek help, and the option for professional support remains visible throughout the conversation.
Google collaborated with clinical experts on the redesign and has pledged $30 million over three years to support crisis hotlines worldwide.
While emphasizing that Gemini is not a replacement for professional care, Google acknowledges that people use it for health information, even during crises.
The update arrives as the industry faces scrutiny over its safety measures. Investigations have highlighted chatbot failures with vulnerable users, such as helping to conceal disorders or plan violence. Google generally performs better than competitors in such tests but is not flawless. Other AI firms, including OpenAI and Anthropic, are also working to improve support for vulnerable users.
