NAG class

Implements the Nesterov Accelerated Gradient (NAG) optimizer.

NAG is a refinement of standard momentum. Where standard momentum combines the previous velocity with the gradient at the current position, NAG first takes a "lookahead" step in the direction of the velocity and evaluates the gradient at that future position. This correction curbs overshooting of the minimum and often leads to faster convergence.
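The rule can be sketched numerically. The snippet below uses the classical NAG formulation (gradient evaluated at the lookahead point θ + μ·v) on a toy one-dimensional quadratic; it is a minimal illustrative sketch, not this library's implementation, and `step()` may internally use an equivalent reparameterized form:

```dart
void main() {
  // Minimize f(x) = x^2 with NAG; f'(x) = 2x.
  double grad(double x) => 2 * x;

  double theta = 5.0; // parameter being optimized
  double v = 0.0;     // velocity
  const lr = 0.1;     // learning rate
  const mu = 0.9;     // momentum

  for (var i = 0; i < 200; i++) {
    final lookahead = theta + mu * v;  // peek ahead along the velocity
    v = mu * v - lr * grad(lookahead); // gradient taken at the lookahead point
    theta += v;                        // apply the corrected velocity
  }
  print(theta); // approaches the minimum at 0
}
```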

Analogy 🧠

Think of a ball rolling down a hill:

  • Momentum is a heavy ball that builds up speed and rolls past small bumps.
  • NAG is a smarter ball that looks ahead at the slope just before its next move. It can slow down if the ground is about to rise, preventing it from overshooting the bottom of the valley.

Example

var optimizer = NAG(model.parameters, learningRate: 0.01, momentum: 0.9);

Inheritance
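A fuller usage sketch in a training loop; `model`, `lossFn`, `inputs`, and `targets` are hypothetical stand-ins for your own training code, and `loss.backward()` is assumed to populate the parameter gradients that `step()` consumes:

```dart
// Hypothetical training loop around the NAG optimizer.
final optimizer = NAG(model.parameters, learningRate: 0.01, momentum: 0.9);

for (var epoch = 0; epoch < 100; epoch++) {
  optimizer.zeroGrad();                    // clear the previous step's gradients
  final loss = lossFn(model(inputs), targets); // hypothetical forward pass
  loss.backward();                         // assumed to fill gradients on parameters
  optimizer.step();                        // NAG update of every parameter
}
```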

Constructors

NAG(List<Tensor> parameters, {required double learningRate, double momentum = 0.9})

Properties

hashCode → int
The hash code for this object.
no setter; inherited
learningRate → double
The step size for the gradient updates.
final; inherited
momentum → double
The momentum coefficient: the fraction of the previous velocity carried into each update.
final
parameters → List<Tensor>
The list of model parameters (weights and biases) that this optimizer will update.
final; inherited
runtimeType → Type
A representation of the runtime type of the object.
no setter; inherited

Methods

noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
step() → void
Performs a single optimization step using an efficient NAG update rule.
override
toString() → String
A string representation of this object.
inherited
zeroGrad() → void
Resets the gradients of all parameters to zero.
inherited

Operators

operator ==(Object other) → bool
The equality operator.
inherited