The fundamental optimization algorithm used to train neural networks. It computes the direction in which the error decreases most quickly and takes a small step in that direction. Repeated thousands or millions of times, this process gradually finds weights that make the network accurate. It is the 'learning' in machine learning.
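As a minimal sketch (not from the glossary), this fits a single weight w so that w * x matches a target y by repeatedly stepping against the gradient of the squared error:

```python
# Toy gradient descent: minimize (w*x - y)^2 for one data point.
# All names and values here are illustrative.

def gradient_descent(x, y, lr=0.1, steps=100):
    w = 0.0  # initial weight
    for _ in range(steps):
        pred = w * x
        error = pred - y
        grad = 2 * error * x  # derivative of (w*x - y)^2 with respect to w
        w -= lr * grad        # step against the gradient
    return w

w = gradient_descent(x=1.0, y=3.0)  # approaches w = 3, where the error is zero
```

Real training does exactly this, but over millions of weights at once, with gradients computed by backpropagation.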
A hyperparameter that controls the size of the step taken each time the model's weights are updated. Too small, and training is slow; too large, and training can overshoot the minimum or diverge.
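A small illustration of that trade-off, assuming the toy objective f(w) = (w - 3)^2: the same gradient step run with a modest versus an overly large learning rate:

```python
# Illustrative only: compare learning rates on f(w) = (w - 3)^2.

def run(lr, steps=50):
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)  # derivative of (w - 3)^2
        w -= lr * grad
    return w

small = run(lr=0.1)  # settles near the minimum at w = 3
large = run(lr=1.1)  # each step overshoots further; w blows up
```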
A mathematical function that measures how far the model's predictions are from the correct answers.
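Mean squared error is one common example of such a function; a minimal sketch:

```python
# Mean squared error: the average squared gap between predictions and targets.
# A sketch of one common loss; many others (cross-entropy, MAE, ...) exist.

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

mse([2.5, 0.0], [3.0, -0.5])  # small value = predictions close to the answers
```

Training consists of nudging the weights so this number goes down.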
A mathematical function applied to a neuron's output that introduces non-linearity into the network.
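Two widely used activation functions, ReLU and sigmoid, sketched in plain Python (illustrative, not an exhaustive list):

```python
import math

def relu(x):
    return max(0.0, x)  # passes positive values through, zeroes out negatives

def sigmoid(x):
    return 1 / (1 + math.exp(-x))  # squashes any input into the range (0, 1)
```

Without a non-linearity like these between layers, a stack of layers would collapse into a single linear transformation, no matter how deep.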
An optimization algorithm that combines the advantages of two earlier methods, AdaGrad and RMSProp: it adapts the learning rate for each parameter individually while keeping a momentum-like running average of past gradients.
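A minimal sketch of the update such adaptive optimizers perform, using the commonly published default hyperparameters; the toy objective f(w) = (w - 3)^2 and all variable names are assumed for illustration:

```python
import math

# Adam-style update on a single weight: a running mean of gradients (m)
# and of squared gradients (v), each bias-corrected, set the step size.

def adam(grad_fn, w=0.0, lr=0.01, b1=0.9, b2=0.999, eps=1e-8, steps=2000):
    m, v = 0.0, 0.0
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = b1 * m + (1 - b1) * g        # momentum-like average of gradients
        v = b2 * v + (1 - b2) * g * g    # average of squared gradients
        m_hat = m / (1 - b1 ** t)        # bias correction for early steps
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return w

w = adam(lambda w: 2 * (w - 3))  # gradient of (w - 3)^2; w heads toward 3
```

Because each parameter keeps its own m and v, large noisy gradients are automatically damped while small consistent ones still make progress.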
Artificial General Intelligence: a hypothetical AI system able to learn and perform any intellectual task a human can, rather than excelling only in a single narrow domain.