Underfitting: When a model is too simple to capture the patterns in the data, so it performs poorly on both the training and test sets. The opposite of overfitting. Usually fixed by using a larger model, training longer, adding features, or reducing regularization. Less common in the era of massive models, but still relevant in constrained settings.
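A minimal pure-Python sketch of underfitting: a straight line fit to clearly quadratic data has high error even on the data it was trained on (the dataset and closed-form line fit here are illustrative, not from the glossary).

```python
# Toy data: a clearly non-linear (quadratic) relationship.
xs = [i / 10 - 3 for i in range(61)]   # -3.0 .. 3.0
ys = [x * x for x in xs]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n

# Ordinary least-squares straight-line fit (closed form).
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# The line underfits: mean squared error stays high on the *training* data itself,
# because a line simply cannot represent the curvature.
train_mse = sum((slope * x + intercept - y) ** 2 for x, y in zip(xs, ys)) / n
print(round(train_mse, 3))
```

The telltale sign is that the error is large on the training set, not just on held-out data, which is what distinguishes underfitting from overfitting.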
Overfitting: When a model memorizes its training data so closely that it performs poorly on new, unseen data.
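A small pure-Python sketch of overfitting via a 1-nearest-neighbour "model", which literally memorizes its training set: training accuracy is perfect, but noisy memorized labels hurt accuracy on fresh data (the data generator and noise rate are illustrative assumptions).

```python
import random

random.seed(0)

def true_label(x):
    return 1 if x > 0.5 else 0

# Noisy training data: roughly 25% of labels are flipped.
train = []
for _ in range(200):
    x = random.random()
    y = true_label(x) if random.random() > 0.25 else 1 - true_label(x)
    train.append((x, y))

def predict(x):
    # Memorization: copy the label of the closest training point.
    return min(train, key=lambda p: abs(p[0] - x))[1]

# Perfect on training data (each point's nearest neighbour is itself) ...
train_acc = sum(predict(x) == y for x, y in train) / len(train)
# ... but the memorized noise is reproduced on unseen inputs.
test_xs = [random.random() for _ in range(200)]
test_acc = sum(predict(x) == true_label(x) for x in test_xs) / len(test_xs)
print(train_acc, round(test_acc, 2))
```

The gap between training accuracy and accuracy on unseen inputs is the signature of overfitting.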
Activation function: A mathematical function applied to a neuron's output that introduces non-linearity into the network, allowing it to learn more than purely linear relationships.
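Two common activation functions, sketched in plain Python. Without a non-linearity like these, stacked linear layers collapse into a single linear map.

```python
import math

def relu(x):
    # ReLU: zero out negative inputs, pass positives through unchanged.
    return max(0.0, x)

def sigmoid(x):
    # Sigmoid: squash any real input into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(relu(-2.0), relu(3.0))   # -> 0.0 3.0
print(sigmoid(0.0))            # -> 0.5
```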
Adam: An optimization algorithm that combines the strengths of two earlier methods, AdaGrad and RMSProp.
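A minimal pure-Python sketch of the Adam update rule, minimizing the one-dimensional quadratic f(x) = (x - 2)^2. The hyperparameter values are the commonly cited defaults; the toy objective is illustrative.

```python
import math

# Adam keeps running estimates of the gradient's first and second moments.
lr, b1, b2, eps = 0.1, 0.9, 0.999, 1e-8
x, m, v = 0.0, 0.0, 0.0

for t in range(1, 501):
    g = 2.0 * (x - 2.0)                  # gradient of (x - 2)^2
    m = b1 * m + (1 - b1) * g            # first moment (momentum-like)
    v = b2 * v + (1 - b2) * g * g        # second moment (RMSProp-like)
    m_hat = m / (1 - b1 ** t)            # bias correction for zero init
    v_hat = v / (1 - b2 ** t)
    x -= lr * m_hat / (math.sqrt(v_hat) + eps)

print(round(x, 3))  # converges near the minimum at x = 2
```

The per-parameter scaling by the second moment is the RMSProp/AdaGrad-style ingredient; the first-moment average plays the role of momentum.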
AGI (Artificial General Intelligence): A hypothetical AI system able to match or exceed human performance across a wide range of cognitive tasks, rather than excelling at a single narrow one.
AI alignment: The research field focused on ensuring AI systems do what humans actually intend.
AI safety: The broad field studying how to build AI systems that are safe, reliable, and beneficial.