When a model memorizes the training data so closely that it performs poorly on new, unseen data: it learns the noise along with the signal. The telltale sign is a large gap between high training accuracy and low test accuracy. It is typically prevented with regularization, dropout, data augmentation, and early stopping.
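To make that training/test gap concrete, here is a small illustrative NumPy sketch (not from the glossary itself): a degree-9 polynomial fitted to 10 noisy points nearly memorizes them, yet does noticeably worse on fresh points drawn from the same underlying curve.

```python
import numpy as np

# Illustrative overfitting demo: a degree-9 polynomial has as many parameters
# as there are training points, so it nearly memorizes the noisy training set,
# but its error on fresh samples from the same curve is noticeably worse.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, x_train.shape)
x_test = np.linspace(0.0, 1.0, 100)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0.0, 0.2, x_test.shape)

coeffs = np.polyfit(x_train, y_train, deg=9)   # fit "learns" the noise too
train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"train MSE: {train_mse:.4f}  test MSE: {test_mse:.4f}")
```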
Techniques that help prevent a model from overfitting by adding constraints or penalties, such as a cost on large weights, during training.
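As a minimal sketch of one such constraint, the function below adds an L2 (weight-decay) penalty to a mean-squared-error loss; the function name and the strength parameter `lam` are illustrative choices, not from the glossary.

```python
import numpy as np

# Minimal sketch of L2 regularization (weight decay): the penalty grows with
# the squared size of the weights, nudging the optimizer toward smaller,
# simpler solutions. `lam` is the regularization strength (a hyperparameter).
def l2_regularized_mse(y_true, y_pred, weights, lam=0.01):
    data_loss = np.mean((y_true - y_pred) ** 2)   # how well we fit the data
    penalty = lam * np.sum(weights ** 2)          # cost for large weights
    return data_loss + penalty
```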
A regularization technique that randomly deactivates a fraction of neurons at each training step, so the network cannot rely too heavily on any single unit.
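The sketch below shows the "inverted dropout" variant commonly used in practice; the function name, the default rate `p=0.5`, and the training flag are illustrative choices.

```python
import numpy as np

# Minimal sketch of inverted dropout. During training, each activation is
# zeroed with probability `p` and the survivors are scaled by 1/(1 - p) so the
# expected output stays the same; at inference time the layer does nothing.
def dropout(activations, p=0.5, training=True):
    if not training or p == 0.0:
        return activations
    keep_mask = np.random.rand(*activations.shape) >= p   # keep with prob 1 - p
    return activations * keep_mask / (1.0 - p)
```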
A mathematical function applied to a neuron's output that introduces non-linearity into the network.
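For example, ReLU and the sigmoid are two common activation functions; the sketch below is illustrative and shows how little code each one needs.

```python
import numpy as np

# Two common activation functions, shown as a sketch. Without a non-linearity
# like these, a stack of layers collapses into a single linear transformation.
def relu(x):
    return np.maximum(0.0, x)          # passes positive values, zeroes the rest

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))    # squashes any input into (0, 1)
```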
An optimization algorithm (Adam) that combines the strengths of two earlier methods, AdaGrad and RMSProp, by keeping running estimates of the gradient's mean and variance to adapt the learning rate for every parameter.
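As a simplified sketch of the idea, the update below keeps a running mean of the gradients (first moment) and of their squares (second moment) and uses both to scale each step; the function name and calling convention are illustrative, while the default hyperparameters are the ones suggested in the Adam paper.

```python
import numpy as np

# Simplified sketch of one Adam update for a single parameter vector.
# `m` tracks a running mean of gradients (momentum) and `v` a running mean of
# squared gradients (per-parameter scaling, in the spirit of AdaGrad/RMSProp).
# `t` is the 1-based step count, used for bias correction.
def adam_step(params, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad          # update first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2     # update second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # correct startup bias
    v_hat = v / (1 - beta2 ** t)
    params = params - lr * m_hat / (np.sqrt(v_hat) + eps)
    return params, m, v
```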
Artificial General Intelligence: a hypothetical AI system able to learn and perform essentially any intellectual task a human can, rather than being specialized for a narrow domain.
The research field focused on making sure AI systems do what humans actually want them to do.