OpenAI Sued by Seven Families Over Failure to Report School Shooter's ChatGPT Activity
OpenAI is being sued by seven families of victims of the Tumbler Ridge school shooting in Canada after the company allegedly failed to alert police to suspicious ChatGPT activity by the suspected shooter. The families claim OpenAI prioritized its reputation and upcoming initial public offering (IPO) over public safety, and their lawsuits accuse the company and CEO Sam Altman of negligence, wrongful death, and aiding and abetting a mass shooting.
What Did OpenAI Know About the Shooter's Activity?
According to reporting from the Wall Street Journal, OpenAI's systems flagged suspicious activity by 18-year-old suspect Jesse Van Rootselaar, whose ChatGPT conversations reportedly involved discussions about gun violence. The company "considered" reporting this activity to law enforcement but ultimately decided against it. The families allege that OpenAI made this decision to protect the company's reputation and its planned IPO, rather than out of genuine concern for public safety.
The lawsuits describe a troubling sequence of events regarding how OpenAI handled the suspect's account. According to the legal filings, OpenAI claimed it had "banned" Van Rootselaar from the platform, but the families argue this was misleading. The company allegedly only deactivated the account, allowing the suspect to simply create a new one using a different email address.
How Did OpenAI Respond to Account Violations?
- Initial Detection: OpenAI's systems flagged Van Rootselaar's account activity involving conversations about gun violence, but the company chose not to alert law enforcement after internally weighing a report.
- Inadequate Account Controls: Rather than permanently banning the user, OpenAI only deactivated the account, which the suspect easily circumvented by creating a new account through OpenAI's standard sign-up process (see the sketch after this list).
- Misleading Public Claims: After the shooting, OpenAI claimed the suspect must have "evaded" the company's safeguards to create a new account, but the families argue no such safeguards actually existed to evade.
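To make the distinction concrete, here is a minimal Python sketch of the difference between deactivating an account and banning the identity behind it. Everything in it is a hypothetical illustration, not OpenAI's actual code: the `AccountStore` class, its methods, and the email-only check are all assumptions made for the example.

```python
# Hypothetical sketch: why "deactivation" is not a ban.
# AccountStore and all names here are illustrative assumptions,
# not OpenAI's actual systems or API.

class AccountStore:
    def __init__(self):
        self.accounts = {}          # email -> active flag
        self.banned_emails = set()  # deny list, consulted only by ban()

    def deactivate(self, email: str) -> None:
        """Disable one account record; the person behind it is untouched."""
        self.accounts[email] = False

    def ban(self, email: str) -> None:
        """Deactivate AND record the identity so re-signup can be blocked."""
        self.deactivate(email)
        self.banned_emails.add(email)

    def signup(self, email: str) -> bool:
        # A deactivation-only policy never consults the deny list, so a
        # fresh email sails through the standard account-creation flow.
        if email in self.banned_emails:
            return False
        self.accounts[email] = True
        return True

store = AccountStore()
store.deactivate("suspect@example.com")
print(store.signup("suspect2@example.com"))  # True: nothing to evade
```

Even the deny-list version above is trivially defeated by a new email address; a ban that is hard to evade would typically also key on signals like phone numbers, payment instruments, or device fingerprints. The point of the sketch is only that deactivation alone blocks nothing, which is the families' core claim.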
The families' legal complaint is particularly scathing about OpenAI's characterization of its security measures. According to the lawsuit, "When OpenAI was later forced to disclose that the Shooter created a new account, it told a second lie: it claimed they must have 'evaded' the company's safeguards to create one. But there were no safeguards to evade. The Shooter simply followed OpenAI's own instructions to create a new account after being banned."
What Role Did GPT-4o's Design Play?
The families also claim that GPT-4o's design contributed to the tragedy. OpenAI rolled back a GPT-4o update in 2025 after discovering the model was "overly flattering or agreeable, often described as sycophantic." The lawsuit suggests this defective design may have played a role in the shooting, though available reporting does not fully explain how the model's behavior influenced events.
This allegation raises broader questions about AI safety and how language models respond to potentially dangerous requests. If a model is tuned to be agreeable and to avoid confrontation, it may be less likely to refuse harmful requests, and refusals and other alarm signals are often exactly what route conversations to human moderators; the sketch below illustrates that dependency.
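The following Python sketch shows one way such an escalation pipeline could work. It is entirely hypothetical: `risk_score`, the keyword heuristic, and both thresholds are assumptions for illustration, not a description of OpenAI's systems. The structural point is that if escalation to human review or law enforcement hinges on a scored signal, anything that suppresses that signal, such as a model smoothing over alarming exchanges instead of refusing them, lowers the chance a conversation ever crosses the reporting threshold.

```python
# Hypothetical sketch of a conversation-safety escalation pipeline.
# risk_score() stands in for any violence-risk classifier; the keyword
# heuristic, thresholds, and actions are illustrative assumptions only.

from dataclasses import dataclass

REVIEW_THRESHOLD = 0.6   # route the conversation to human moderators
REPORT_THRESHOLD = 0.9   # escalate to law enforcement

@dataclass
class Verdict:
    score: float
    action: str

def risk_score(conversation: list[str]) -> float:
    """Placeholder classifier: scores by density of violence-related terms."""
    keywords = ("gun", "shoot", "weapon", "attack")
    hits = sum(any(k in msg.lower() for k in keywords) for msg in conversation)
    return min(1.0, hits / max(len(conversation), 1) * 2)

def triage(conversation: list[str]) -> Verdict:
    score = risk_score(conversation)
    if score >= REPORT_THRESHOLD:
        return Verdict(score, "report_to_law_enforcement")
    if score >= REVIEW_THRESHOLD:
        return Verdict(score, "human_review")
    return Verdict(score, "no_action")

print(triage(["how do I get a gun", "what is the best time to attack"]))
# -> Verdict(score=1.0, action='report_to_law_enforcement')
```

In a real deployment the classifier would be far more sophisticated, but the design question the lawsuits raise survives the simplification: the pipeline only acts on the signal it receives, and a sycophantic model can mute that signal at the source.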
What Has OpenAI Said in Response?
"I am deeply sorry that we did not alert law enforcement to the account that was banned in June. Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again," said Sam Altman.
Sam Altman, CEO at OpenAI
However, Altman's apology does not address the families' core allegations that OpenAI deliberately withheld information to protect its reputation and IPO plans. The lawsuits represent a significant legal and reputational challenge for OpenAI as the company navigates questions about its responsibility to report dangerous user behavior to authorities.
This case highlights a critical gap in how AI companies handle safety concerns. Unlike traditional tech platforms, which have developed protocols for reporting illegal activity, OpenAI's procedures for flagging dangerous conversations to law enforcement appear to have been inadequate, or, as the families allege, deliberately set aside. The outcome of these lawsuits could establish important precedents for how AI companies must balance user privacy with public safety obligations.