Thinking Machines Lab Secures Nvidia Partnership for Gigawatt-Scale AI Training
Mira Murati's startup just got a serious infrastructure boost.
By Tessa Fong • March 14, 2026
Thinking Machines Lab announced a "long-term gigawatt-scale strategic partnership" with Nvidia to power its AI model training. The deal gives the nine-month-old company access to the compute resources it needs to compete with founder Mira Murati's former employer, OpenAI.
Sources close to the deal say the partnership involves significant Nvidia hardware commitments and potentially preferential pricing. Neither company disclosed financial terms.
The partnership represents a significant milestone for a startup that many questioned after key personnel departed. Nvidia doesn't partner lightly. The deal signals the chip giant sees Thinking Machines Lab as a serious player in the foundation model race.
The Backstory Matters
Murati left OpenAI last year after serving as interim CEO during Sam Altman's brief ouster and subsequent return. She founded Thinking Machines Lab with explicit goals of building safer, more transparent AI systems, a positioning that distinguished her from OpenAI's perceived move toward commercialization over research.
The company raised a substantial seed round and attracted talent from across the AI industry. Then things got complicated.
Earlier this year, three founding members of Thinking Machines Lab returned to OpenAI, a talent flow that raised questions about the startup's stability and direction. This Nvidia partnership appears designed to answer those questions with capital and capabilities.
The departures sparked industry speculation. Were there disagreements about technical direction? Culture clashes? Or simply the gravitational pull of OpenAI's resources and momentum? None of the departing engineers commented publicly, which left the questions unanswered.
Gigawatt Scale, Explained
"Gigawatt scale" is infrastructure speak that translates roughly to: very, very expensive data centers.
Training frontier AI models requires enormous compute clusters. A note on units: a gigawatt measures power, not energy. Sustained for a year, one gigawatt delivers 8,760 gigawatt-hours, roughly the annual output of a large nuclear reactor, or the electricity consumption of a small city. Public estimates put GPT-4's training run at a few dozen gigawatt-hours, so gigawatt-scale capacity implies plans far beyond anything trained to date.
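The conversion between the power figure in the headline and the energy a training run actually consumes is simple arithmetic. A minimal sketch (the durations below are illustrative, not figures from the deal):

```python
# "Gigawatt" is a unit of power; energy is power sustained over time.
def energy_gwh(power_gw: float, hours: float) -> float:
    """Energy in gigawatt-hours delivered by sustained power in gigawatts."""
    return power_gw * hours

HOURS_PER_YEAR = 24 * 365  # 8,760

# A 1 GW facility running flat-out for a year:
print(energy_gwh(1.0, HOURS_PER_YEAR))  # prints 8760.0 (GWh)

# The same facility over a hypothetical 3-month training run:
print(energy_gwh(1.0, 24 * 90))  # prints 2160.0 (GWh)
```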
If Thinking Machines Lab needs gigawatt-scale capacity, it's planning training runs at the frontier. That's not incremental research. That's building foundation models to compete directly with GPT-5 and Claude's successors.
Let me break down what this means practically. An H100-class accelerator draws about 700 watts, roughly 2 kilowatts all-in once host servers, networking, and cooling overhead are counted, so a gigawatt of data center capacity supports on the order of 500,000 GPUs. At current prices of around $30,000 per chip, that's roughly $15 billion in hardware alone, before real estate, cooling systems, and operational costs.
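The sizing arithmetic is dominated by one assumption, the all-in power consumed per GPU, so a sketch of the estimate is most useful with that assumption as an explicit parameter. The kilowatt-per-GPU values and the ~$30,000 unit price below are illustrative assumptions, not disclosed terms of the deal:

```python
# Back-of-envelope sizing of a gigawatt-scale training cluster. The GPU
# count depends almost entirely on the assumed all-in power per GPU
# (chip + host server + networking + cooling), so we sweep that assumption.

def cluster_estimate(power_budget_kw: float, kw_per_gpu_all_in: float,
                     gpu_unit_price_usd: int) -> tuple[int, int]:
    """Return (gpu_count, accelerator_cost_usd) for a facility power budget."""
    gpu_count = int(power_budget_kw / kw_per_gpu_all_in)
    return gpu_count, gpu_count * gpu_unit_price_usd

for kw_per_gpu in (2.0, 5.0, 10.0):  # assumed all-in kW per GPU
    gpus, cost = cluster_estimate(1_000_000, kw_per_gpu, 30_000)  # 1 GW, $30k/chip
    print(f"{kw_per_gpu:>4} kW/GPU: ~{gpus:,} GPUs, ~${cost / 1e9:.1f}B in accelerators")
```

The spread in the output is the point: tightening the power-per-GPU assumption moves the implied hardware bill by billions of dollars.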
The partnership structure probably doesn't involve Thinking Machines Lab purchasing this hardware outright. More likely, Nvidia is providing capacity access, possibly through a cloud partnership or dedicated hosting arrangement. The details matter for understanding the company's capital structure.
The Nvidia Angle
Nvidia wins regardless of who wins the AI race. Every major lab needs its chips. By partnering with Thinking Machines Lab, Nvidia diversifies its customer base and potentially gets early access to novel research.
The company has been increasingly active in strategic partnerships beyond just selling hardware. Nvidia's investments in AI startups give it visibility into emerging approaches and ensure its hardware remains the default choice as the industry evolves.
For Thinking Machines Lab, the partnership solves the compute constraint that limits most AI startups. You can't build frontier models in a garage. You need thousands of GPUs running for months, and you need the expertise to orchestrate them efficiently.
Nvidia brings both. The hardware is obvious. Less obvious is the software expertise. Training at scale requires distributed systems knowledge, custom kernels, and debugging capabilities that few organizations possess. Nvidia can provide support that accelerates time to results.
There's also a signaling function. Nvidia partnership implies Nvidia's confidence in the startup's technical approach. That makes subsequent fundraising easier and attracts talent who want to work with cutting-edge infrastructure.
The Competitive Landscape
This puts Thinking Machines Lab in a small club of companies with the resources to attempt frontier model training. OpenAI has Microsoft's backing. Anthropic has Amazon and Google. Google DeepMind has Alphabet's infrastructure. Meta builds its own chips and data centers.
Now Murati's startup has Nvidia's support. Whether that's enough to compete remains to be seen. Money and compute are necessary but not sufficient. You also need research insights, engineering talent, and the organizational culture to execute at the frontier.
The talent departures earlier this year suggest challenges on at least one of those dimensions. Partnerships don't solve culture problems.
Consider the competitive dynamics. OpenAI has roughly a three-year head start on research and deployment. Anthropic has arguably the best safety research team in the industry. Google has decades of ML research heritage. Meta has data advantages from billions of users.
Thinking Machines Lab needs a differentiated angle to compete. Murati's reputation helps, but reputation doesn't train models. The company needs to demonstrate capabilities that justify the infrastructure investment.
What They're Building
Thinking Machines Lab hasn't disclosed technical details about its research direction. Murati has spoken publicly about safety and transparency, but those are values, not architectures.
The best guess based on her public statements: the company is exploring approaches to AI alignment that differ from both OpenAI's RLHF methodology and Anthropic's Constitutional AI. Whether that's multimodal world models, novel training objectives, or something else entirely isn't clear.
A gigawatt of training capacity suggests they're planning something ambitious. You don't secure that much compute to fine-tune existing models.
The safety focus might inform technical choices. If Murati believes current architectures are fundamentally difficult to align, she might be pursuing alternative approaches that are safer by design rather than through post-training interventions.
Interpretability research is another possibility. Understanding what models learn and why they behave certain ways requires extensive experimentation. The compute capacity could support that research agenda.
The Funding Picture
The Nvidia partnership doesn't appear to include direct equity investment, just hardware and capacity commitments. Thinking Machines Lab will likely need additional funding rounds to cover operational costs, talent acquisition, and the general expenses of running a frontier AI lab.
I'm hearing the company is already in conversations with potential investors for a Series A. The Nvidia partnership makes that raise easier, demonstrating both credibility and a clear path to scaling research.
The funding environment for AI startups has changed since peak hype in 2024. Investors are more discerning about technical approaches and paths to commercialization. Companies that can show infrastructure partnerships and research differentiation command better terms than those promising vague "AI breakthroughs."
Thinking Machines Lab checks the infrastructure box with this Nvidia deal. The research differentiation remains to be demonstrated.
Market Positioning
Murati's positioning around safety and transparency appeals to certain customers worried about AI risk. Enterprise buyers increasingly ask about safety practices. Government contracts often require safety documentation.
If Thinking Machines Lab can develop models that are genuinely safer or more transparent than alternatives, that's a market advantage, not just an ethical stance.
The challenge is demonstrating safety convincingly. Claims are cheap. Evidence is expensive. The company will need to publish research, submit to third-party evaluation, and build a track record that substantiates its positioning.
What Happens Next
Watch the hiring announcements. A company planning frontier training needs world-class ML researchers, infrastructure engineers, and safety researchers. Where they recruit from tells you about their technical direction.
The first major publication or model release will tell us more than any partnership announcement. Until then, Thinking Machines Lab remains promising but unproven.
Key milestones to watch: research publications establishing the company's technical approach, model releases demonstrating capabilities, and Series A funding announcement revealing valuation and investor confidence.
The next six months will clarify whether this Nvidia partnership marks the beginning of a serious competitor or a well-funded research project. The infrastructure is in place. The execution determines the outcome.
---
Frequently Asked Questions
Who founded Thinking Machines Lab?
Mira Murati founded Thinking Machines Lab after leaving OpenAI, where she served as CTO and briefly as interim CEO. The company focuses on developing safer, more transparent AI systems, distinguishing itself from competitors through its emphasis on alignment research.
What does the Nvidia partnership include?
The partnership provides Thinking Machines Lab with "gigawatt-scale" computing capacity for AI model training. Specific financial terms and hardware commitments weren't disclosed, but the language suggests infrastructure access comparable to that of the major labs.
How does this affect competition with OpenAI?
The partnership gives Thinking Machines Lab compute resources comparable to well-funded competitors. However, competing with OpenAI also requires research breakthroughs, top talent, and effective execution, which partnerships alone don't guarantee.
What happened to the founding team members who left?
Three founding members of Thinking Machines Lab returned to OpenAI earlier this year. The reasons weren't publicly disclosed, but it raised questions about the startup's direction that this Nvidia partnership may help address.
---
Track AI company developments in our [Companies](/companies) section and [Learning Center](/learn). Compare capabilities in our [Models](/models) directory.
Key Terms Explained
AI alignment
The research field focused on making sure AI systems do what humans actually want them to do.
Anthropic
An AI safety company founded in 2021 by former OpenAI researchers, including Dario and Daniela Amodei.
Claude
Anthropic's family of AI assistants, including Claude Haiku, Sonnet, and Opus.
Compute
The processing power needed to train and run AI models.