Why OpenAI didn't stop the Tumbler Ridge school shooting

Imagine knowing a killer's plan seven months before they pull the trigger. That’s the nightmare scenario currently haunting Ottawa after revelations that OpenAI, the creator of ChatGPT, banned the account of a future mass shooter but didn't tell a soul. Jesse Van Rootselaar, an 18-year-old from the small mining town of Tumbler Ridge, British Columbia, killed eight people on February 10, 2026. Among the dead were five children and a teacher. She then took her own life.

What makes this tragedy particularly gut-wrenching isn't just the loss of life; it’s the fact that OpenAI’s internal systems flagged Van Rootselaar for "misuses in furtherance of violent activities" as early as June 2025. They saw the red flags. They debated whether to call the police. They chose to ban the account and stay silent. Now, Canada’s Artificial Intelligence Minister Evan Solomon has summoned OpenAI’s top safety officials to a face-to-face meeting in Ottawa to explain why a "banned" notification was the only consequence for a teenager describing gun violence to a chatbot.

The gap between a ban and a phone call

OpenAI’s defense is built on a technicality that sounds increasingly hollow to the families in Tumbler Ridge. The company says the content on Van Rootselaar’s account didn't meet their internal "imminent and credible risk" threshold. Basically, she was talking about violence, but she hadn't given a specific time, place, or method that triggered an automatic referral to law enforcement.

It’s a classic Silicon Valley problem. Tech companies love automated systems because they're cheap and scalable, but those systems lack the human intuition to spot a brewing storm. In Van Rootselaar’s case, she spent several days in June 2025 describing detailed scenarios involving firearms. A dozen OpenAI staffers reportedly debated the case. Some wanted to alert the RCMP. Others worried about user privacy and the potential for "false positives" that could stigmatize a young person struggling with mental health.

Privacy won. The account was nuked, and Van Rootselaar was left to her own devices. Literally. She even managed to create a second account and keep using the platform, a detail OpenAI only discovered after the shooting made national headlines.

Why Canada is done waiting for tech companies to self-regulate

Minister Evan Solomon isn't just looking for an apology. He’s looking for blood—or at least, for a new set of rules that forces AI companies to act more like doctors or teachers. In Canada, professionals have a "duty to report" when they suspect a minor is in danger or poses a threat. Right now, AI companies have no such legal obligation. They operate in a grey zone where they can see everything but feel responsible for nothing.

The meeting in Ottawa this week focused on three specific failures:

  • The Threshold Problem: Why is "imminent" the only standard? If a user is consistently fantasizing about mass murder, why wait for them to name a date?
  • The Ban Loophole: Van Rootselaar was banned in June, but she just opened a new account. OpenAI’s "repeat violator" detection failed to keep a known high-risk user off the platform.
  • The Information Silo: British Columbia officials revealed that OpenAI met with them just one day after the shooting for a pre-planned meeting about opening a Canadian office. Incredibly, OpenAI didn't mention they’d previously banned the shooter until days later.

OpenAI admits it would have reported her today

In a move that’s being called "cold comfort" by B.C. Premier David Eby, OpenAI sent a letter to Solomon on February 26, 2026, acknowledging that its own policies have since changed. OpenAI says that if Van Rootselaar submitted those same prompts today, the company would have called the police.

The company claims they’ve "enhanced" their referral protocols by consulting mental health and behavioral experts. They're making the criteria more flexible, moving away from the rigid "time and place" requirements. They’ve also committed to establishing a direct, 24/7 point of contact for Canadian law enforcement.

But for the 2,400 residents of Tumbler Ridge, these updates come seven months too late. The reality is that OpenAI’s "learnings" are written in the blood of students who were 12 and 13 years old.

The privacy-versus-safety trap

You'll hear tech advocates argue that we shouldn't turn AI into a "private surveillance wing" for the police. They're not entirely wrong. If every edgy teenager who writes a violent story for a creative writing class gets a visit from the RCMP, we've created a digital panopticon. Marginalized groups, including the LGBTQ+ community (Van Rootselaar was a transgender woman), often bear the brunt of over-policing.

However, there's a massive difference between "edgy" and "escalating." Criminology experts point out that Van Rootselaar wasn't just a random user; she had a history of mental health issues and police contact. The guns used in the attack had actually been seized by police and then returned to her home. She was a person in crisis, and her interactions with ChatGPT were a clear cry for help—or a rehearsal for horror.

What you can expect next

Don't expect this to blow over with a few "enhanced protocols." This tragedy has jumpstarted a legislative push in Ottawa that had been stalled for years.

  1. Mandatory Reporting Laws: Canada is likely to introduce legislation that sets a baseline standard for when AI firms must notify the authorities. No more internal debates by a dozen staffers in San Francisco; if the AI flags violence, the police get the data.
  2. Accountability for Evasion: If a banned user can simply refresh their IP and start again, the ban is useless. Expect new requirements for "identity-linked" safety for high-risk accounts.
  3. Data Transparency: The RCMP is currently combing through Van Rootselaar’s digital footprint. Once those transcripts are public, the pressure on OpenAI will be immense.

If you're using these tools, realize that the era of "what happens in the chat stays in the chat" is over. OpenAI has already signalled that they are prioritizing safety over privacy in the wake of Tumbler Ridge. They're building a direct pipeline to the RCMP because the alternative—being blamed for another mass shooting—is a corporate death sentence.

Check your own internal safety policies if you're a developer or a business owner using AI. The liability shifted the moment that letter hit Minister Solomon’s desk. If you see something that looks like a threat, don't wait for a "credible" date. Report it.

Lily Young

With a passion for uncovering the truth, Lily Young has spent years reporting on complex issues across business, technology, and global affairs.