Reasoning models are AI systems specifically designed to "think" through problems step-by-step before giving an answer. Unlike standard LLMs that answer immediately, reasoning models first produce an internal chain of thought — sometimes spending minutes working through a problem — which dramatically improves performance on math, science, and logic tasks.
OpenAI's o1 kicked off this category in September 2024, followed by DeepSeek R1 and others. These models trade speed for accuracy: they're slower and more expensive per query, but they crush benchmarks that require multi-step reasoning. The key innovation is training models to generate and evaluate their own reasoning traces, often using reinforcement learning.
A reasoning model solving a complex math proof will show its work — trying different approaches, catching its own mistakes, and building the solution step by step rather than guessing the answer directly.
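That "try, check, retry" behavior can be sketched as a toy generate-and-verify loop. This is purely illustrative — real reasoning models sample natural-language reasoning traces, not enumerated candidates — and every name below is made up for the sketch:

```python
# Toy sketch of the generate-and-verify loop that reasoning models
# internalize: propose a candidate, check it, and retry on failure.
# Problem for the sketch: find an integer x with x**2 - x - 6 == 0.

def propose_candidates():
    """Stand-in for sampled reasoning paths: just enumerate guesses."""
    return range(-10, 11)

def verify(candidate):
    """The 'catching its own mistakes' step: check the candidate."""
    return candidate**2 - candidate - 6 == 0

def solve():
    trace = []  # the visible chain of thought: attempts and outcomes
    for candidate in propose_candidates():
        ok = verify(candidate)
        trace.append((candidate, ok))
        if ok:
            return candidate, trace
    return None, trace

answer, trace = solve()
print(answer)  # → -2, the first root found in the search range
```

The point of the sketch is the structure, not the search: the model's advantage comes from generating many intermediate steps and validating them before committing to a final answer.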