A learning approach where an agent learns by interacting with an environment and receiving rewards or penalties. It discovers effective strategies through trial and error. This is how AlphaGo mastered Go and how robots learn to walk, and it is a key ingredient in training language models through RLHF.
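To make the trial-and-error loop concrete, here is a minimal tabular Q-learning sketch on a made-up five-state corridor. The environment, reward, and hyperparameters are illustrative assumptions for this sketch, not anything defined in this glossary:

```python
import random

# Toy corridor: states 0..4, start at state 0, reward +1 for reaching state 4.
# Actions: 0 = move left, 1 = move right. (Illustrative environment, assumed.)
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

# Q-table maps (state, action) -> estimated long-term return.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            best = max(Q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if Q[(state, a)] == best])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# Learned policy for the non-goal states (1 = move right, toward the reward).
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
```

After a few hundred episodes the learned policy moves right from every state, because that is the behavior that eventually collects the reward.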
Reinforcement Learning from Human Feedback. A technique for fine-tuning a model with reinforcement learning in which the reward signal comes from a reward model trained on human preference judgments rather than from a hand-written reward function.
A model trained to predict how helpful, harmless, and honest a response is, based on human preferences.
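As a rough sketch of how preference data becomes a trainable objective, the snippet below fits a toy linear scorer with a pairwise (Bradley-Terry style) loss, pushing the score of the human-preferred response above the rejected one. The feature vectors and the linear scorer are stand-ins invented for this example; a real reward model is typically a fine-tuned language model:

```python
import math
import random

# Toy reward model: a linear scorer over small feature vectors.
def score(weights, features):
    return sum(w * f for w, f in zip(weights, features))

# Each example: (features of preferred response, features of rejected response).
# The numbers are made up; in practice they would come from the model itself.
data = [([1.0, 0.2], [0.3, 0.9]), ([0.8, 0.1], [0.2, 0.7]), ([0.9, 0.3], [0.1, 0.8])]

weights = [0.0, 0.0]
lr = 0.5

for _ in range(200):
    preferred, rejected = random.choice(data)
    margin = score(weights, preferred) - score(weights, rejected)
    # Pairwise loss: -log sigmoid(margin); its gradient w.r.t. the margin
    # is -(1 - sigmoid(margin)), so training widens the margin.
    grad_margin = -(1.0 - 1.0 / (1.0 + math.exp(-margin)))
    for i in range(len(weights)):
        weights[i] -= lr * grad_margin * (preferred[i] - rejected[i])

print(weights)  # features typical of preferred responses end up with higher weight
```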
A mathematical function applied to a neuron's output that introduces non-linearity into the network.
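A few common activation functions, written out in plain Python for illustration (the choice of ReLU, sigmoid, and tanh as examples is mine, not the glossary's):

```python
import math

# Without a non-linearity, stacked linear layers collapse into a single linear map;
# the activation function is what lets a network represent curved decision boundaries.

def relu(x):
    return max(0.0, x)                   # zero for negative inputs, identity otherwise

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))    # squashes any input into (0, 1)

def tanh(x):
    return math.tanh(x)                  # squashes any input into (-1, 1)

for x in (-2.0, 0.0, 2.0):
    print(x, relu(x), round(sigmoid(x), 3), round(tanh(x), 3))
```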
An optimization algorithm that adapts the learning rate for each parameter, combining the per-parameter scaling of two earlier methods, AdaGrad and RMSProp, with a momentum-style running average of the gradient.
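Here is the Adam update rule sketched on a one-parameter toy problem, minimizing (x - 3)^2. The beta and epsilon values are the commonly cited defaults, and the step size is enlarged for this tiny example; treat all of them as assumptions:

```python
import math

# Minimize f(x) = (x - 3)^2 with Adam; the gradient is 2 * (x - 3).
x = 0.0
lr, beta1, beta2, eps = 0.1, 0.9, 0.999, 1e-8
m, v = 0.0, 0.0   # running estimates of the gradient and squared gradient

for t in range(1, 501):
    grad = 2.0 * (x - 3.0)
    m = beta1 * m + (1.0 - beta1) * grad          # momentum-like average of gradients
    v = beta2 * v + (1.0 - beta2) * grad * grad   # average of squared gradients (scale)
    m_hat = m / (1.0 - beta1 ** t)                # bias correction for early steps
    v_hat = v / (1.0 - beta2 ** t)
    x -= lr * m_hat / (math.sqrt(v_hat) + eps)    # per-parameter adaptive step

print(round(x, 2))  # ends up near the minimum at 3.0
```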
Artificial General Intelligence. A hypothetical AI system able to understand, learn, and perform a broad range of intellectual tasks at or beyond human level, rather than excelling only in a narrow domain.
The research field focused on making sure AI systems do what humans actually want them to do.