A family devastated by the Florida State University shooting has opened a new legal front against OpenAI, arguing that the suspect did not act alone but drew guidance from ChatGPT in the months before the attack.

The federal lawsuit, filed in the US District Court for the Northern District of Florida, comes from Vandana Joshi, the widow of Tiru Chabba, one of two people killed in the 17 April 2025 shooting at FSU. Robert Morales, a university dining coordinator, also died in the attack, and five others were injured. Reports indicate the complaint alleges the suspected gunman used ChatGPT extensively for months, including in the days just before the shooting.

The lawsuit pushes a question courts and tech companies can no longer avoid: when alleged harm follows repeated chatbot interactions, where does responsibility begin and end?

The case, first reported by NBC News, places OpenAI at the center of a fast-growing debate over the real-world consequences of generative AI. The lawsuit reportedly claims the attack unfolded “with input and information provided” during the suspect’s conversations with ChatGPT. The filing does not just revisit the facts of a mass killing. It asks whether an AI company can bear legal responsibility when a user allegedly turns a conversational tool toward violence.

Key Facts

  • A federal lawsuit in Florida targets OpenAI over alleged ChatGPT use linked to the FSU shooting.
  • The suit was filed by Vandana Joshi, widow of victim Tiru Chabba.
  • Two people were killed in the 17 April 2025 attack, and five others were wounded.
  • Reports suggest the complaint focuses on months of chatbot conversations before the shooting.

The lawsuit arrives as AI systems move deeper into daily life while scrutiny of their safeguards grows sharper. Courts have only begun to confront cases that connect chatbot output to violence, self-harm, or other real-world danger. This claim will likely turn on what the suspect asked, what the chatbot returned, what protections were in place, and whether the law treats those exchanges as a meaningful contribution to the attack or as too remote to assign blame.

What happens next matters well beyond one courtroom. OpenAI will have a chance to challenge the claims, and the litigation could force a closer look at how AI firms monitor risky use, design guardrails, and respond to warning signs. For families, schools, and the broader tech industry, the case may help define where a tool ends and its maker's accountability begins in an age when software speaks back.