Emergent abilities: Capabilities that appear in AI models at scale without being explicitly trained for. As models grow larger, they suddenly gain abilities such as in-context learning, chain-of-thought reasoning, and translation between language pairs they weren't specifically trained on. The topic is debated; some researchers argue the apparent jumps are gradual improvement made visible by discontinuous evaluation metrics.
Scaling laws: Empirical mathematical relationships showing how AI model performance improves predictably with more data, compute, and parameters, typically following a power law.
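The power-law shape of these relationships can be sketched in a few lines. The functional form below follows the widely cited L(N) = (N_c / N)^alpha parameter-count law; the constants are illustrative placeholders, not fitted values for any real model family.

```python
# A minimal sketch of a power-law scaling curve. The constants n_c and
# alpha are illustrative assumptions, not measurements.

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted loss as a power law in parameter count: (n_c / n) ** alpha."""
    return (n_c / n_params) ** alpha

# Loss falls smoothly and predictably as parameter count grows.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

The practical appeal is extrapolation: fit the curve on small training runs, then forecast the loss of a much larger model before spending the compute to train it.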
In-context learning: A model's ability to learn a new task simply from examples provided in the prompt, without any weight updates.
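A concrete way to see this is few-shot prompting, where the "training data" lives entirely in the prompt text. The antonym task and helper below are made up for illustration.

```python
# A minimal sketch of few-shot prompting: input/output examples are placed
# directly in the prompt, and the model is expected to infer the pattern.
# No gradient steps or weight updates are involved.

def build_few_shot_prompt(examples, query):
    """Format example pairs followed by a new query for the model to complete."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

examples = [("cold", "hot"), ("up", "down"), ("fast", "slow")]
prompt = build_few_shot_prompt(examples, "light")
print(prompt)
```

Sent to a sufficiently large model, a prompt like this typically elicits "heavy": the task was learned from three examples at inference time.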
Large language model (LLM): An AI model, typically with billions of parameters, trained on massive text datasets to predict and generate language.
Activation function: A mathematical function applied to a neuron's output that introduces non-linearity into the network; without it, stacked layers would collapse into a single linear transformation.
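Two of the most common activation functions, ReLU and sigmoid, are simple enough to write out directly:

```python
import math

# ReLU and sigmoid, two standard non-linearities applied element-wise
# to a neuron's pre-activation output.

def relu(x: float) -> float:
    """Zero out negative inputs, pass positive inputs through unchanged."""
    return max(0.0, x)

def sigmoid(x: float) -> float:
    """Squash any real input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-2.0), relu(3.0))   # 0.0 3.0
print(sigmoid(0.0))            # 0.5
```

ReLU dominates in hidden layers of modern networks for its cheap gradient; sigmoid survives mainly in gates and binary-classification outputs.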
Adam: An optimization algorithm that combines the strengths of two earlier methods, AdaGrad and RMSProp, by tracking both a momentum-style running average of gradients and a per-parameter adaptive learning rate.
AGI (Artificial General Intelligence): A hypothetical AI system with human-level or greater capability across the full range of cognitive tasks, rather than in a single narrow domain.