Sam Altman has formally apologized to the community of Tumbler Ridge, British Columbia, acknowledging that OpenAI failed to notify law enforcement about the ChatGPT account of Jesse Van Rootselaar, the 18-year-old who killed eight people in a February 2026 mass shooting. In a letter dated Thursday, Altman wrote that he was "deeply sorry that we did not alert law enforcement to the account that was banned in June," adding, "While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered." For broader context on OpenAI's recent policy moves, see the ObjectWire OpenAI hub.
What OpenAI Knew | Account Banned in June 2025, Eight Months Before the Attack
Van Rootselaar killed her mother and 11-year-old half-brother at their home on February 10 before opening fire at Tumbler Ridge Secondary School, killing five students and an education assistant and wounding dozens of others. She then took her own life. OpenAI has acknowledged that its automated abuse detection systems flagged Van Rootselaar's account and banned it in June 2025, eight months before the shooting, but that leadership determined the flagged activity did not meet the company's internal threshold for a law enforcement referral.
That decision has become the central fact in a negligence lawsuit filed in March 2026 in B.C. Supreme Court by the family of Maya Gebala, a student who was critically injured in the attack. The suit alleges that approximately 12 OpenAI employees internally flagged the account as an "imminent risk" and recommended notifying police, but that the recommendation was overruled by leadership. OpenAI has not confirmed or denied the internal communications described in the complaint.
British Columbia Premier David Eby announced in February that Altman had agreed to apologize and cooperate with provincial officials on AI safety recommendations. After receiving the letter Thursday, Eby called it "necessary, and yet grossly insufficient."
Florida Criminal Probe | "If That Bot Were a Person, They'd Be Charged with Murder"
The Tumbler Ridge apology arrived as OpenAI faces a separate, and potentially more perilous, legal action in the United States. On April 21, Florida Attorney General James Uthmeier announced that his office has opened a criminal investigation into whether ChatGPT "bears criminal responsibility" in connection with a shooting at Florida State University in April 2025. The suspect in that case, Phoenix Ikner, allegedly used ChatGPT to research firearms, ammunition types, and the campus locations where he would encounter the highest concentration of students before killing two people and wounding six.
Uthmeier was direct in characterizing what his office believed the evidence showed: "If that bot were a person, they would be charged as a principal in first-degree murder." The statement signals an intent to test whether existing criminal facilitation statutes can reach an AI company for outputs its systems generated, a legal theory that has no clear precedent in American courts.
OpenAI disputed the framing in a formal response, stating that ChatGPT "delivered factual answers to inquiries based on information widely available from public sources" and "did not incite or endorse illegal or harmful conduct." The company has not commented further on the Florida investigation's scope or timeline.
OpenAI's Safety Protocol Revisions | Direct Police Channel, Mental Health Consultation
In his letter to Tumbler Ridge, Altman outlined several changes OpenAI has made to its abuse detection and referral procedures since the shooting. The company has established a direct communication channel with law enforcement for high-priority flagged accounts, brought in mental health and law enforcement experts to help recalibrate what constitutes a credible threat warranting external referral, and revised the internal escalation policy that previously gave leadership discretion not to act on line-level employee recommendations.
Altman wrote: "Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again." The changes represent OpenAI's first public acknowledgment that its pre-existing safety infrastructure was materially insufficient in at least one documented case, a concession that will have consequences in both the B.C. civil proceedings and the Florida criminal investigation.
For related coverage of OpenAI's policy engagements with federal and international governments, see earlier reporting on OpenAI's Safety Fellowship and the New Yorker probe, as well as the company's positions on government oversight and the Pentagon deal. The Tumbler Ridge case is now the most consequential test of whether AI platform companies can be held legally liable for harms they had advance notice of and chose not to act on.