
RLHF

Reinforcement Learning from Human Feedback.

Definition

Reinforcement Learning from Human Feedback is the post-training technique that turns a raw language model into a useful assistant. Humans rank candidate model outputs by quality, a reward model is trained to predict those preferences, and the language model is then fine-tuned, typically with a reinforcement learning algorithm such as PPO, to maximize the learned reward. This is how ChatGPT, Claude, and other assistants learned to be helpful.
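
To make those steps concrete, here is a minimal sketch in PyTorch of the learned pieces: a toy reward model trained on pairwise human preferences with a Bradley-Terry style loss, followed by the objective the language model is then tuned against. The RewardModel class, feature dimensions, and random tensors are illustrative stand-ins, not any particular production implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Maps a (prompt, response) feature vector to a single scalar score."""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)

# Step 1: collect human rankings. Simulated here as random feature vectors
# for the preferred ("chosen") and dispreferred ("rejected") responses.
chosen = torch.randn(64, 16)
rejected = torch.randn(64, 16)

# Step 2: train the reward model on those preferences. The Bradley-Terry
# loss pushes the score of the human-preferred response above the rejected one.
reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

for _ in range(200):
    margin = reward_model(chosen) - reward_model(rejected)
    loss = -F.logsigmoid(margin).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Step 3 (shown as the objective only): fine-tune the language model so its
# responses maximize the learned reward, with a KL penalty keeping it close
# to the original model:
#     maximize  E[ reward(x, y) ] - beta * KL( policy || reference_policy )

In practice the third step is usually run with a policy-gradient method such as PPO on samples drawn from the model itself; implementations differ in the details, but the preference-trained reward model is the common core.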


Related Terms

Reinforcement Learning

A learning approach where an agent learns by interacting with an environment and receiving rewards or penalties.

Reward Model

A model trained to predict how helpful, harmless, and honest a response is, based on human preferences.

Instruction Tuning

Fine-tuning a language model on datasets of instructions paired with appropriate responses.

Activation Function

A mathematical function applied to a neuron's output that introduces non-linearity into the network.

Adam Optimizer

An optimization algorithm that combines the strengths of two other methods, AdaGrad and RMSProp, by adapting the learning rate for each parameter.

AGI

Artificial General Intelligence.
