A standardized test used to measure and compare AI model performance. Examples include MMLU for general knowledge, HumanEval for coding, and ARC for reasoning. Benchmarks are important for tracking progress, but models can be tuned specifically to score well on them, so real-world performance is often the better measure.
Massive Multitask Language Understanding. A benchmark of multiple-choice questions spanning 57 subjects, from elementary mathematics to law, used to gauge a model's breadth of general knowledge.
The process of measuring how well an AI model performs on its intended task, typically by running it on held-out test data and scoring the outputs with metrics such as accuracy.
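As an illustration, here is a minimal Python sketch of one common form of evaluation: comparing a model's predictions against held-out reference labels and reporting accuracy. The prediction and label lists are toy placeholders, not output from any real model.

```python
def accuracy(predictions, labels):
    """Fraction of predictions that exactly match the reference labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy held-out set: three of the four predictions are correct.
preds = ["cat", "dog", "dog", "bird"]
gold = ["cat", "dog", "cat", "bird"]
print(f"accuracy = {accuracy(preds, gold):.2f}")  # accuracy = 0.75
```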
A mathematical function applied to a neuron's output that introduces non-linearity into the network. Common examples include ReLU, sigmoid, and tanh; without a non-linear activation, stacked layers would collapse into a single linear transformation.
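For illustration, a short NumPy sketch of two common activation functions, ReLU and sigmoid, applied element-wise to example pre-activation values; the inputs are arbitrary and not tied to any particular framework.

```python
import numpy as np

def relu(z):
    """ReLU: passes positive values through unchanged, zeroes out negatives."""
    return np.maximum(0.0, z)

def sigmoid(z):
    """Sigmoid: squashes any real value into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, -0.5, 0.0, 1.5])  # example pre-activations
print(relu(z))     # [0.  0.  0.  1.5]
print(sigmoid(z))  # roughly [0.12 0.38 0.5 0.82]
```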
An optimization algorithm (Adam, short for Adaptive Moment Estimation) that combines the strengths of two earlier methods, AdaGrad and RMSProp, adapting each parameter's learning rate using running estimates of the gradient's mean and variance.
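A rough Python sketch of a single Adam-style update is shown below. The function name, hyperparameter defaults, and the toy quadratic example are illustrative choices, not a reference implementation.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: running estimates of the gradient mean (m) and
    uncentered variance (v), with bias correction, scale each step."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate (mean)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate (variance)
    m_hat = m / (1 - beta1 ** t)              # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Toy usage: minimize f(x) = x^2 starting from x = 5.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 1001):
    grad = 2 * x                              # gradient of x^2
    x, m, v = adam_step(x, grad, m, v, t, lr=0.1)
print(round(x, 4))  # converges toward 0
```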
Artificial General Intelligence. A hypothetical form of AI able to understand, learn, and perform any intellectual task a human can, rather than excelling only at narrow, specialized tasks.
The research field focused on making sure AI systems do what humans actually want them to do.