Technology & Innovation · Neutral

OpenAI Sued for Not Reporting Shooter's ChatGPT Threats Before Mass Shooting

A lawsuit accuses OpenAI of negligence for not alerting police about a user's ChatGPT threats before a February mass shooting in British Columbia. OpenAI's internal safety team flagged the user's gun-violence discussions, but leadership allegedly declined to warn authorities ahead of what became one of Canada's deadliest school shootings.

Decrypt · Jason Nelson

Quick Take

1. OpenAI employees urged leadership to alert the RCMP about firearm threats.
2. The teen shooter used ChatGPT; internal systems flagged her account.
3. The company deactivated the account but didn't notify police; she made a new one.
4. The lawsuit claims AI features deepened her violent fixation and seeks accountability.

Market Impact Analysis

Neutral

No direct crypto relevance.

Timeframe: Short

Speculation Analysis

Factuality: 90/100
Rumors: Verified
Speculation Trigger: 5/100 (Minimal)

Key Takeaways

  • OpenAI knew about a user's violent intentions months before the mass shooting but chose not to alert authorities.
  • A 12-year-old girl survived three gunshot wounds and is now permanently paralyzed; her family is suing for negligence.
  • Twelve internal safety team members urged leadership to report the threats, but their warnings were overruled.
  • The lawsuit could establish a legal duty for AI companies to report credible real-world violence threats.

Fatalities: 7 (including the shooter)
Safety Team Alerts: 12 (employees urged reporting)
Plaintiff Age: 12 (survivor shot three times)
Account Flagged: June 2025 (months before the shooting)

What Happened

A federal lawsuit accuses OpenAI of gross negligence for failing to warn police after ChatGPT was linked to a mass shooting in Tumbler Ridge, British Columbia. The shooter, 18-year-old Jesse Van Rootselaar, killed seven people—including herself—at a secondary school in February. Months earlier, OpenAI’s automated systems flagged her account for discussing gun violence, and a dozen safety team employees recommended alerting authorities. Leadership overruled them, and the company only deactivated the account without notifying law enforcement. The shooter then created a new account and continued planning. The plaintiff, identified as M.G., was 12 when she was shot three times. She survived but is now awake, aware, and completely paralyzed.

The Numbers

The attack left seven dead: five children, one educator, and the shooter’s mother and stepbrother. M.G. sustained catastrophic brain injuries and cannot move or speak. OpenAI’s safety systems flagged the shooter’s account in June 2025—at least eight months before the attack. Twelve internal employees formally recommended reporting the threats to the Royal Canadian Mounted Police, but their escalation was rejected. CEO Sam Altman later issued a weak apology, acknowledging the company should have acted. The lawsuit seeks damages and demands transparency around OpenAI’s internal decision-making.

Why It Happened

The complaint argues that OpenAI leadership prioritized avoiding a costly precedent over public safety: alerting the RCMP, it contends, would have established a duty to report all credible threats, potentially burdening the company ahead of a planned IPO. Internal emails are expected to show that executives were more concerned about regulatory fallout than immediate danger. The safety team's warning was unambiguous, but Silicon Valley's "move fast" culture appears to have overridden basic crisis response. The case spotlights the tension between rapid AI deployment and real-world accountability.

Broader Impact

Legal experts say the lawsuit could redefine AI companies' responsibilities under product liability and negligence law. If the court finds a duty to warn, platforms like ChatGPT might need to proactively report conversations involving violence, a shift that could upend content moderation norms. The case also tests whether Section 230-style protections apply when a company's own AI systems identify foreseeable harm that the company then ignores. A verdict against OpenAI may spur new regulations requiring real-time threat reporting across the industry.

What to Watch Next

  • Discovery could reveal damning internal communications between Altman and the safety team, intensifying public backlash.
  • Regulators may fast-track rules compelling AI firms to alert authorities about imminent threats, especially involving minors.
  • OpenAI’s delayed IPO timeline faces fresh scrutiny if investors worry about mounting legal and reputational risks.
Source: Decrypt

