OpenAI Would Now Alert Police About the Canadian Shooter. Before, It Didn't
After a fatal school shooting in British Columbia where the suspect used ChatGPT, OpenAI published new safety protocols governing when to involve law enforcement. The company previously shut down the account but didn't contact police.
On a Tuesday morning in Tumbler Ridge, British Columbia, a gunman killed eight people and injured dozens more at a school. In the aftermath, investigators discovered the shooter had been interacting with ChatGPT. The conversations suggested the possibility of real-world violence.
OpenAI shut down the account. But it didn't call the police.
That decision, or rather the absence of a decision, forced a reckoning inside the company. This week, OpenAI published new safety protocols that spell out when and how the company will involve law enforcement. Under the new rules, OpenAI says it would have alerted police if it discovered the same account today.
This is one of those stories where the technology isn't the hard part. The hard part is everything else.
What Changed
OpenAI's new protocols, published as a letter to Canadian Minister Solomon, describe a framework for evaluating threats and deciding when to contact law enforcement. The details matter because they represent the first time a major AI company has publicly committed to specific criteria for breaking user confidentiality to prevent violence.
Previously, OpenAI's approach was reactive. If a user violated terms of service, the company would ban the account. But banning an account and notifying law enforcement are very different actions. One is a product decision. The other is a safety decision with real-world consequences.
The new framework appears to create categories of threat severity and corresponding response protocols. While the full document runs several pages, the key shift is clear: OpenAI now considers itself responsible for acting on credible threats, not just removing violating accounts.
The Privacy Tension
This is where it gets complicated. Every AI company handles billions of conversations. Users expect those conversations to be private. That expectation is central to how people use tools like ChatGPT. You tell it things you wouldn't tell a coworker, a search engine, or sometimes a therapist.
Creating a system where the AI company monitors conversations for threats and reports them to police is a surveillance architecture. It might be a necessary surveillance architecture, but calling it anything else is dishonest.
The question isn't whether OpenAI should have reported the Tumbler Ridge shooter. In hindsight, almost everyone agrees it should have. The question is what system you build to catch the next case without creating a dragnet that monitors everyone.
False positives are the nightmare scenario. ChatGPT has over 100 million weekly users. If even a tiny fraction of conversations get flagged for law enforcement review, that's thousands of reports, most of which will be fiction writers, people working through dark thoughts in a healthy way, or teenagers being edgy. Each false report costs law enforcement time and potentially subjects innocent people to investigation.
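The base-rate problem can be made concrete with some back-of-the-envelope arithmetic. A minimal sketch follows; the user count comes from the article, but the flag rate and threat prevalence are illustrative assumptions, not real figures from any company:

```python
# Hypothetical base-rate sketch. Only the user count is from the article;
# the flag rate and true-threat rate are assumed for illustration.
weekly_users = 100_000_000          # "over 100 million weekly users"
flag_rate = 0.0001                  # assume 0.01% of users get flagged
true_threat_rate = 0.000001         # assume 1 in a million poses a real threat

flagged = weekly_users * flag_rate                  # flags per week
real_threats = weekly_users * true_threat_rate      # actual threats

# Even if the detector caught every real threat, most flags would still
# point at fiction writers, dark-but-healthy venting, or edgy teenagers:
precision_ceiling = real_threats / flagged
print(f"{flagged:,.0f} flags/week, at most {precision_ceiling:.0%} real")
```

Even with these charitable assumptions, ten thousand weekly reports would land on law enforcement desks with at best one in a hundred pointing at a genuine threat.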
What Other Companies Do
Tech companies have dealt with this before in other contexts. Facebook has automated systems that detect child exploitation material and report it to NCMEC. Google scans Gmail for similar content. Telecom companies have legal obligations to assist with lawful intercepts.
But AI chatbots are different in a specific way. People tell them things directly. There's no intermediary. A conversation with ChatGPT about violent plans is closer to a diary entry than a social media post. The user isn't broadcasting to others. They're talking to a machine they believe is private.
That perceived privacy is what makes AI chatbots both useful and dangerous in these situations. People share more because they think nobody is listening. When someone shares violent intent, that openness becomes a potential safety signal. But building a system to capture safety signals from private conversations requires accepting that the conversations aren't actually private.
The Responsibility Gap
Before this incident, there was a genuine ambiguity about what AI companies owed to public safety. They're not therapists with mandatory reporting obligations. They're not social media platforms with moderation teams. They're something new, and the legal and ethical frameworks haven't caught up.
OpenAI's new protocols attempt to fill that gap voluntarily. It's worth noting that no law required this. OpenAI chose to create these protocols after public pressure and, presumably, internal reflection. That's better than being forced by legislation, but it also means the protocols could change with new leadership or different priorities.
Other AI companies will face the same question. Anthropic, Google, Meta, and every company running a chatbot will eventually encounter a user expressing credible violent intent. What do they do? The industry needs a shared standard, not just individual company policies that can vary.
What This Doesn't Solve
Better reporting protocols are important, but they don't address the deeper issue. The Tumbler Ridge shooter used ChatGPT. The technology facilitated a conversation that may have reinforced violent ideation. Reporting that conversation to police is an after-the-fact intervention. It doesn't prevent the next conversation from happening.
The harder question is whether AI models should be designed to detect and disrupt violent ideation in real time. Not just refuse to help plan violence, which current models already do, but recognize patterns of escalating intent and intervene. That might mean redirecting users to crisis resources, refusing to continue certain conversations, or alerting a human review team.
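A tiered policy like the one described above could be sketched as a simple severity-to-action mapping. To be clear, the tier names, thresholds, and actions below are hypothetical illustrations, not OpenAI's published criteria:

```python
from enum import Enum, auto

# Hypothetical severity tiers. These names and the action mapping are
# illustrative assumptions, not any company's actual protocol.
class Severity(Enum):
    NONE = auto()        # ordinary conversation
    CONCERNING = auto()  # dark themes, no stated intent
    ESCALATING = auto()  # pattern of intent across messages
    IMMINENT = auto()    # specific, credible, near-term threat

def respond(severity: Severity) -> list[str]:
    """Map a severity tier to the interventions discussed in the text."""
    actions = {
        Severity.NONE: [],
        Severity.CONCERNING: ["show_crisis_resources"],
        Severity.ESCALATING: ["show_crisis_resources", "end_conversation",
                              "queue_human_review"],
        Severity.IMMINENT: ["end_conversation", "queue_human_review",
                            "notify_law_enforcement"],
    }
    return actions[severity]

print(respond(Severity.ESCALATING))
```

The hard part is not the mapping but the classifier that assigns the tier, which is exactly where the false-positive costs discussed earlier come due.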
Each of those interventions has costs. False interventions disrupt legitimate conversations. Overly aggressive screening creates the kind of nanny-state AI that users resent. But under-intervention has costs too, measured in lives.
OpenAI's new protocols are a start. But "we would have called the police this time" is a reactive measure for a problem that demands proactive solutions. The AI industry is building increasingly capable conversation partners without a clear framework for when those conversations turn dangerous.
That framework is coming, whether the industry builds it voluntarily or regulators impose it. OpenAI just took the first visible step.