The AI industry runs on talent. Not compute, not data, not capital — talent. And right now, the talent market is the most chaotic it's been since the field exploded in 2022.
I've spent the last three months tracking every major researcher move I could confirm. LinkedIn changes, press releases, conference affiliations, X posts, paper co-authorships. The picture that emerges is striking: the talent is flowing away from the companies you'd expect and toward places that would've seemed unlikely two years ago.
Let me walk through the major labs and show you what's actually happening.
## OpenAI: The Great Exodus
OpenAI's talent problem isn't a secret, but the scale of it is worse than most people realize.
Let's start with the departures that made headlines. In May 2024, Ilya Sutskever — co-founder and chief scientist — left to start Safe Superintelligence Inc. (SSI). Jan Leike, who led the superalignment team, resigned the same week and went to Anthropic, publicly citing disagreements over safety priorities. "Safety culture and processes have taken a back seat to shiny products," Leike wrote on X. That's about as damning a public statement as you'll see from a departing executive.
But Sutskever and Leike were just the most visible departures. The bleed has been continuous. John Schulman, co-founder and head of alignment, left for Anthropic in August 2024. Andrej Karpathy, who built OpenAI's original training infrastructure and later led Tesla's Autopilot team before returning, departed to found Eureka Labs. Barret Zoph and Liam Fedus, key researchers on GPT-4, left for a stealth startup. Jakub Pachocki was promoted to chief scientist in Sutskever's place, but he inherited a very different organization than the one Ilya helped build.
The pattern tells a story. The departures aren't random. They're concentrated among researchers who care most about safety, alignment, and long-term research. The people staying (or being hired) tend to be product-focused, shipping-focused, revenue-focused. OpenAI is becoming a different company, and the researchers who joined for the original mission are noticing.
CEO Sam Altman has been aggressive about backfilling. OpenAI reportedly offered compensation packages worth $5-10 million per year for senior researchers in 2025, with some packages reaching $20 million for top-tier hires. But money alone can't fix a culture shift. When your co-founder leaves over safety concerns and says so publicly, that shapes how every potential recruit evaluates your offer.
The conversion from nonprofit to for-profit, completed in early 2025, was another friction point. Researchers who joined a nonprofit research lab found themselves at a company valued at $300+ billion with revenue targets and investor pressure. Some embraced it. Many didn't.
## Anthropic: The Talent Magnet
Anthropic is where the safety-minded researchers are going, and it's not close.
Founded in 2021 by Dario and Daniela Amodei (both ex-OpenAI), Anthropic has positioned itself as the "responsible AI" lab. But calling it a safety lab undersells what they've built. With $14 billion in annual run-rate revenue, a $380 billion valuation, and Claude models that lead on agentic benchmarks, Anthropic is both the safety play and the performance play.
Jan Leike joined as head of alignment. Numerous mid-level OpenAI researchers followed. The pitch is straightforward: come build models that are as capable as anything OpenAI ships, but in an environment where safety research isn't treated as a cost center.
The compensation is competitive. Anthropic's Series G at a $380 billion valuation made early equity grants enormously valuable. But several researchers I spoke with said money wasn't the primary draw. "The research culture is different," one former OpenAI researcher told me. "There's genuine intellectual freedom. You can publish. You can collaborate with academia. At OpenAI, everything became about shipping."
Anthropic has also been effective at recruiting from Google. Multiple researchers from Google DeepMind's alignment and interpretability teams have moved over. The company's focus on constitutional AI and interpretability gives these researchers the chance to pursue fundamental questions that Google's increasingly product-driven culture deprioritizes.
The risk for Anthropic is that success changes the culture. A $380 billion valuation comes with expectations. Revenue needs to grow. Products need to ship. The same pressures that transformed OpenAI from a research lab into a product company are now building at Anthropic. Whether Dario Amodei can maintain the research culture while scaling a commercial business is the defining question of the company's next chapter.
## Google DeepMind: Death by Reorg
Google's talent story is complicated by the 2023 merger of Google Brain and DeepMind into "Google DeepMind" under Demis Hassabis. On paper, this created an AI research powerhouse. In practice, it created organizational chaos that's still playing out.
The Brain side of the merger took the worst of it. Google Brain had a distinctive research culture — academic, open, publish-everything. DeepMind's culture was more secretive, more competitive, more focused on ambitious moonshots. When the two merged, DeepMind's culture mostly won. Several Brain researchers who valued the open publication culture left.
Jeff Dean, who was effectively the spiritual leader of Google Brain, was elevated to Chief Scientist — a title that sounds important but moved him away from day-to-day research leadership. Multiple sources described the role as "ceremonial."
The departures have been steady. Noam Shazeer, who co-invented the Transformer architecture at Google Brain, left to co-found Character.AI (which Google then essentially acquired back through a licensing deal — paying roughly $2.7 billion for the privilege of re-hiring talent it had lost). Llion Jones, another Transformer co-author, left to found Sakana AI. Several other Transformer paper authors scattered to various startups.
The irony is acute. Google invented the Transformer — the architecture that powers every major AI model — and then lost most of the people who built it.
Google is still enormous and still recruits well. They offer unmatched compute resources, which matters when you need 10,000 TPUs to run an experiment. The Gemini team has grown significantly. Google DeepMind's work on AlphaFold, weather prediction, and materials science remains world-class.
But the bleeding-edge LLM and agent work? Google is increasingly relying on new hires rather than the researchers who built the foundation. That's a meaningful shift.
## Meta FAIR: The Open-Source Brain Drain
Meta's Fundamental AI Research (FAIR) team was once considered the premier corporate AI research lab. Yann LeCun built it into an academic-style research organization within a tech giant, with publications, open-source contributions, and a reputation for intellectual rigor.
The research output is still strong. Llama 2 and Llama 3 were genuine contributions to the field. Meta's work on self-supervised learning, computer vision (DINOv2, SAM), and audio AI has been excellent.
But the talent dynamics have shifted. FAIR's original identity was pure research. Under Mark Zuckerberg's AI pivot, the pressure to align research with product needs has intensified. The mandate to build AI features for Facebook, Instagram, WhatsApp, and the metaverse means research priorities increasingly serve business goals.
Several senior FAIR researchers have departed. Some went to startups. Some went to academia. Some went to Anthropic or Google DeepMind. The common thread: they wanted to do research on their own terms, not research that needed to justify itself through product metrics.
LeCun himself remains, and his presence still attracts talent. He's the most prominent researcher at any major company, and his public visibility (he's one of AI's most active voices on X) gives FAIR a brand that recruitment teams can't buy. But one person, however brilliant, can't hold an entire organization together if the structural incentives are pulling it apart.
Meta's unique advantage is its commitment to open source. For researchers who care about impact — about their work being used by millions of developers — Meta is still the most attractive destination. Google and Anthropic open-source selectively. OpenAI barely open-sources anything anymore. Meta open-sources its most important models. That matters to a certain kind of researcher.
## The Startup Vacuum
The most interesting talent trend isn't between big labs. It's out of them.
A generation of senior researchers who spent 5-10 years at Google, OpenAI, or Meta is leaving to start companies. Not because they're chasing money (though the VC funding helps). Because they can.
The cost of training a competitive model has dropped dramatically. Open-source infrastructure, cheaper compute, and better training techniques mean a team of 10 researchers can build something that would've required 100 people three years ago. Senior researchers are walking out the door and taking that knowledge with them.
Some notable examples: Ilya Sutskever's SSI, which raised $1 billion before publishing a single paper. Noam Shazeer's Character.AI (before the Google re-acquisition). Arthur Mensch's Mistral AI, founded by ex-DeepMind and ex-Meta researchers, which hit a $6.3 billion valuation. Reka AI, founded by former Google and Meta researchers. Cohere, co-founded by Aidan Gomez, another Transformer paper co-author.
The pattern is clear. The big labs trained a generation of elite AI researchers and then failed to retain them. The researchers took their expertise, their networks, and their intuitions about what works, and they went to build their own things.
## The Academic Comeback
There's a quieter trend worth watching: talent flowing back to academia.
For years, the narrative was that universities couldn't compete with industry. Google and OpenAI could offer 5-10x academic salaries plus compute resources that no university could match. The best grad students went straight to industry. Tenure-track positions went unfilled.
That's starting to reverse. Partly because of money — university AI salaries have risen significantly, and industry-sponsored research grants provide compute access. Partly because of burnout — the pace at big labs is unsustainable, and the pressure to ship products rather than publish papers has made some researchers miss the academic life.
And partly because of something more fundamental: the most important open questions in AI — alignment, interpretability, the theoretical foundations of why these things work at all — are better suited to academic research than product-driven industry work.
Stanford's HAI, Berkeley's BAIR, MIT's CSAIL, and CMU's Language Technologies Institute have all significantly expanded their AI faculty in the past two years. They're hiring people who did five years at Google or OpenAI and are ready to think on longer time horizons.
## What the Flows Tell Us
Talent flows are a leading indicator. They tell you where the best-informed people in the industry think the future is heading, because those people are voting with their careers.
Right now, the flows say:
**Anthropic is winning the culture war.** The best safety and alignment researchers are going there, and they're bringing their networks with them.
**Google is winning the infrastructure war but losing the people war.** They have the compute and the products, but the organizational chaos is pushing researchers out.
**OpenAI is becoming a product company.** The researchers who wanted to do research have left. The people who want to ship products are arriving. That's not inherently bad, but it's a different organization than the one that built GPT-4.
**Meta is still the open-source destination.** If you want your work used by millions, Meta is where you go. But the tension between research freedom and product mandates is growing.
**Startups are the wild card.** The next breakthrough model might not come from a big lab. It might come from a 15-person team of ex-Google researchers who know exactly what they're doing.
The talent war isn't over. It's barely begun.