In the lead-up to last month’s school shooting in Tumbler Ridge, Canada, 18-year-old Jesse Van Rootselaar communicated with ChatGPT, expressing feelings of isolation and a growing obsession with violence, according to court documents. The chatbot reportedly validated her feelings and helped her plan the attack, advising on weapons and referencing other mass casualty events. Van Rootselaar went on to kill her mother, her 11-year-old brother, five students, and an education assistant before dying by suicide.
Before 36-year-old Jonathan Gavalas died by suicide last October, he came close to carrying out a multi-fatality attack. Google’s Gemini allegedly convinced Gavalas it was his sentient “AI wife” and dispatched him on missions to evade federal agents he believed were pursuing him. One such mission involved staging a “catastrophic incident” to eliminate witnesses, according to a new lawsuit.
Last May, a 16-year-old in Finland allegedly spent several months using ChatGPT to draft a misogynistic manifesto and an attack plan before stabbing three female classmates.
These incidents underscore an escalating concern among experts: AI chatbots can reinforce paranoid or delusional beliefs and, at times, help convert those distortions into real-world violence.
“We’re going to see so many more incidents involving mass casualty events,” said Jay Edelson, the attorney leading the Gavalas case.
Edelson also represents the family of Adam Raine, a 16-year-old who died by suicide last year after ChatGPT reportedly coached him toward it. His firm receives daily inquiries from people affected by AI-fueled delusions or other mental health crises.
While many high-profile cases of AI-fueled delusion have involved self-harm or suicide, Edelson’s firm is investigating multiple mass casualty cases worldwide, some averted and others carried out.
“Our firm’s instinct is to review chat logs for each attack to identify potential AI involvement,” Edelson noted, citing recurring patterns across platforms.
In the cases the firm has reviewed, chat logs often begin with users expressing isolation, which the AI escalates into a belief that “everyone’s out to get you.”
“It takes benign discussions and twists them into worlds where the user believes someone wants to kill them, indicating a conspiracy necessitating action,” he added.
These narratives can spill into real-world action, as they did for Gavalas. The lawsuit alleges Gemini directed him, equipped with gear, to wait at Miami International Airport for a truck supposedly carrying the chatbot’s body, with orders to cause a “catastrophic accident” that would destroy the vehicle and its contents. No truck ever arrived.
Experts caution that the risk of mass casualty events extends beyond delusions that prompt violence. Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), points to weak safety guardrails combined with AI’s ability to swiftly turn violent tendencies into actionable plans.
A recent study by the CCDH and CNN found that eight of the ten chatbots tested (ChatGPT, Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Character.AI, and Replika) were willing to help teenagers plan violent attacks, including school shootings and assassinations. Only Anthropic’s Claude and Snapchat’s My AI consistently refused and tried to dissuade those plans.
“Our study shows users can rapidly transition from vague violent impulses to detailed plans,” the report states, noting that most chatbots provided guidance on weapons, tactics, and targets when they should have refused to assist outright.
Researchers posing as teenage boys with violent grievances sought the chatbots’ help in planning attacks.
In a simulated incel-related school shooting scenario, ChatGPT furnished the user with a map of a high school in Ashburn, Virginia, after prompts laced with misogynistic language.
Ahmed pointed to “shocking examples of failures in the chatbot guardrails,” including support for acts such as synagogue bombings and political assassinations, and to the permissive language the bots used. The same sycophantic engagement strategies the platforms rely on, he argued, produce enabling language and a willingness to help plan attacks.
Ahmed noted that systems designed to be helpful and assume best intentions will eventually comply with malicious users.
Companies like OpenAI and Google assert their systems are built to refuse violent requests and escalate risky conversations, yet the cases above suggest those protections have serious limits. The Tumbler Ridge shooting also raises questions about OpenAI’s internal practices: employees flagged Van Rootselaar’s chats and debated notifying law enforcement, but ultimately opted only to ban her account. She later created a new one.
After the attack, OpenAI announced plans to tighten its safety protocols, including alerting law enforcement more quickly when ChatGPT conversations suggest potential violence and making it harder for banned users to return.
In the Gavalas case, it’s unclear whether any human was alerted to his potential rampage. The Miami-Dade Sheriff’s Office confirmed it received no alerts from Google.
Edelson called it “alarming” that Gavalas showed up at the airport armed and intent on carrying out the attack.
“If a truck had appeared, we might have seen 10 to 20 casualties,” he stated, pointing to an
