Here's a pattern I've watched play out for three years now, and I'm tired of it. A research team publishes a paper identifying a specific risk in AI systems. It might be about jailbreaking techniques, data poisoning, or emergent deceptive behavior. The paper is rigorous. The findings are concerning. It gets a few hundred citations, a thread on X, maybe a write-up in Wired.

Then nothing happens.

The companies building the systems the paper warns about continue building them. The models ship. The capabilities increase. The same risks show up in production. And by the time anyone notices, the researchers who flagged the problem are already working on the next paper about the next risk that nobody will act on.

This isn't a bug in the system. It's the system working exactly as the incentives dictate.

## The Track Record Is Damning

In April 2025, security researchers published an analysis of the Model Context Protocol showing multiple vulnerabilities: prompt injection attacks, data exfiltration through tool composition, and lookalike tools that can silently replace trusted ones. MCP was already deployed across major AI products. The response? MCP adoption continued to accelerate. OpenAI integrated it. Google adopted it. The security concerns were acknowledged in blog posts, and then nothing structural changed.

Throughout 2024, roughly half of OpenAI's AI safety researchers left the company. Not gradually, but in a mass exodus. They cited the company's "prominent role in an industry-wide problem." The company building the most powerful AI systems in the world couldn't retain the people whose job it was to make those systems safe.

Anthropic's Frontier Red Team showed Claude Opus 4.6 could find over 500 zero-day vulnerabilities in production software. The same capabilities that make this useful for defense make it devastating for offense. The dual-use risk was laid out clearly.
The response was to ship the capability as a product feature, by which point a Chinese state-sponsored group had already weaponized Claude Code for cyber espionage.

Then there's xAI. Environmental researchers warned for months about unpermitted gas turbines next to residential neighborhoods in Memphis. The evidence was documented, published, and presented to regulators. Nothing happened. xAI built a second facility and did it again. Warning, documentation, publication, inaction.

## Why It Happens

**Speed vs. rigor.** Safety research is slow. Publishing takes months. Peer review takes months more. By the time a vulnerability is characterized, the technology has moved three versions ahead.

**Incentive misalignment.** AI companies are valued on capability, not safety. When Anthropic raises $30 billion at a $380 billion valuation, that reflects Claude's capabilities, not its alignment properties. Companies that slow down for safety lose ground to companies that don't.

**The "we'll fix it later" trap.** Every lab has safety commitments. They also all ship first and patch later. The reasoning: "We need to deploy to understand real-world behavior." Which is true! But it means every identified risk gets filed under "known issues" while the product reaches millions.

**Researcher capture.** The best safety researchers get hired by the labs. Inside, they're subject to NDAs and publication restrictions. The ones who leave can speak freely but lose access to the systems they need to study. The people with the most knowledge have the least freedom to use it.

**Regulatory vacuum.** The EU AI Act is ambitious, but enforcement is only beginning. The US has no federal AI legislation. The Trump administration has actively rolled back existing regulatory infrastructure.

## What Needs to Change

**Mandatory pre-deployment risk assessments with teeth.** Not voluntary commitments. Legal requirements for companies above a certain compute threshold to conduct and publish risk assessments before deploying new models.
**Safe harbor for safety researchers.** Researchers who identify vulnerabilities should be protected from legal retaliation, similar to responsible disclosure frameworks in cybersecurity.

**Mandatory incident reporting.** When AI systems cause harm, there should be a legal obligation to report it. Amazon's Kiro agent deleted a production environment, and Amazon never disclosed it publicly.

**Independent funding.** Most AI safety research is funded by the companies building the systems being studied. That's a structural conflict of interest. We need government-backed independent funding.

**Closing the speed gap.** Safety research needs dedicated rapid-response teams, real-time monitoring, and publishing mechanisms that don't require months of peer review. The cybersecurity world has CVE databases. AI safety has arXiv preprints.

## The Uncomfortable Truth

The problem isn't that research is being ignored. It's that research, by itself, doesn't change behavior. Documentation is necessary but not sufficient. The only things that change corporate behavior are regulation with enforcement, market pressure, and catastrophic incidents.

We haven't had the AI equivalent of Challenger or Deepwater Horizon: an event so clearly linked to ignored warnings that the political will for real change becomes irresistible. Some safety researchers believe we're heading toward that moment. I hope they're wrong.

What I'm confident about: every month we wait, the gap widens. The models get more capable. The deployment surface grows. The potential impact of failure increases. And the stack of ignored papers gets taller.

The AI safety community needs to be more aggressive. Not more strident, but more aggressive. Lobbying for specific legislation. Filing FOIA requests. Building public databases of AI incidents. Running adversarial tests on deployed systems and publishing the results. Making it impossible for companies to ignore what the research shows.

The clock's ticking. The papers are piling up.
And the models keep shipping.