Momentum class
Implements Stochastic Gradient Descent (SGD) with Momentum.
This optimizer accelerates standard SGD by adding a "velocity" term, which is an exponentially decaying average of past gradients. This helps the optimizer build speed in consistent directions and dampen oscillations.
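Concretely, the velocity update typically takes the following form (a sketch of the standard convention; some implementations instead scale the gradient term by 1 − momentum, and this page does not state which variant the class uses):

v ← momentum · v + gradient
θ ← θ − learningRate · v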
It is a classic and powerful optimizer that is still widely used, especially in computer vision, where it can sometimes lead to better generalization than Adam.
Analogy 🧠
Think of a heavy ball rolling down a hill. It builds up momentum, allowing it to roll over small bumps and settle more quickly into the deepest part of a valley.
Example
Optimizer optimizer = Momentum(model.parameters, learningRate: 0.01, momentum: 0.9);
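Building on the example, a typical training loop alternates zeroGrad() and step() around the backward pass. This is a sketch only: dataset, model.forward, and loss.backward stand in for whatever data and autograd API the surrounding library provides.

for (final batch in dataset) {
  optimizer.zeroGrad();               // clear gradients left over from the last step
  final loss = model.forward(batch);  // assumed forward pass producing a scalar loss
  loss.backward();                    // assumed backprop that fills each parameter's gradient
  optimizer.step();                   // apply the momentum update to the parameters
}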
Constructors
- Momentum(List<Tensor> parameters, {double learningRate, double momentum})
  Creates a Momentum optimizer for the given parameters, as in the example above.
Properties
- hashCode → int
  The hash code for this object.
  no setter, inherited
- learningRate → double
  The step size for the gradient updates.
  final, inherited
- momentum → double
  The momentum coefficient: the exponential decay factor applied to the velocity term.
  final
- parameters → List<Tensor>
  The list of model parameters (weights and biases) that this optimizer will update.
  final, inherited
- runtimeType → Type
  A representation of the runtime type of the object.
  no setter, inherited
Methods
- noSuchMethod(Invocation invocation) → dynamic
  Invoked when a nonexistent method or property is accessed.
  inherited
- step() → void
  Performs a single optimization step using the momentum update rule (sketched after this list).
  override
- toString() → String
  A string representation of this object.
  inherited
- zeroGrad() → void
  Resets the gradients of all parameters to zero.
  inherited
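For intuition, here is a minimal sketch of what step() and zeroGrad() might look like under the update rule above. It assumes each Tensor exposes mutable data and grad fields with elementwise arithmetic, and that the optimizer keeps one velocity buffer per parameter; _velocity, data, and grad are illustrative assumptions, not this library's confirmed API.

void step() {
  for (var i = 0; i < parameters.length; i++) {
    final p = parameters[i];
    // v ← momentum · v + grad: fold the new gradient into the decaying velocity
    _velocity[i] = _velocity[i] * momentum + p.grad;
    // θ ← θ − learningRate · v: step against the smoothed gradient
    p.data = p.data - _velocity[i] * learningRate;
  }
}

void zeroGrad() {
  for (final p in parameters) {
    p.grad = p.grad * 0.0; // reset so the next backward pass starts from clean gradients
  }
}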
Operators
- operator ==(Object other) → bool
  The equality operator.
  inherited