Optimizer class (abstract)
The abstract base class for all optimization algorithms.
An Optimizer's role is to hold a model's trainable parameters and update
them according to a specific algorithm (e.g., SGD, Adam) using the
gradients computed during the backward pass.
This abstraction allows the training loop to remain generic; you can easily swap out one optimizer for another without changing the training code.
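A minimal sketch of this pattern, assuming a toy scalar Tensor and an illustrative SGD subclass (neither is necessarily this library's real definition):

```dart
// Toy stand-in for the library's Tensor: one scalar value plus its gradient.
class Tensor {
  Tensor(this.value);
  double value;
  double grad = 0.0;
}

// The abstraction: every algorithm shares this interface, so a training
// loop written against Optimizer never needs to know which one it holds.
abstract class Optimizer {
  Optimizer(this.parameters, {required this.learningRate});

  final List<Tensor> parameters; // weights and biases to update
  final double learningRate;     // step size for gradient updates

  /// Each algorithm supplies its own update rule.
  void step();

  /// Shared behavior: clear every gradient before the next iteration.
  void zeroGrad() {
    for (final p in parameters) {
      p.grad = 0.0;
    }
  }
}

// One concrete algorithm; swapping in another subclass changes nothing else.
class SGD extends Optimizer {
  SGD(super.parameters, {required super.learningRate});

  @override
  void step() {
    for (final p in parameters) {
      p.value -= learningRate * p.grad;
    }
  }
}

void main() {
  final w = Tensor(1.0)..grad = 0.5;
  final Optimizer opt = SGD([w], learningRate: 0.1);
  opt.step();     // w.value becomes 1.0 - 0.1 * 0.5 = 0.95
  opt.zeroGrad(); // w.grad becomes 0.0
}
```

Because the loop only ever sees the Optimizer interface, replacing SGD with a different subclass (Adam, say) requires no change to the loop itself.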
Training Workflow
The typical training loop sequence is:
1. loss.backward() - Compute gradients for all parameters.
2. optimizer.step() - Update all parameters using the gradients.
3. optimizer.zeroGrad() - Reset all gradients to zero for the next iteration.
Example
// 1. Collect model parameters and create an optimizer.
Optimizer optimizer = Adam(myModel.parameters, learningRate: 0.001);

// 2. Inside the training loop...
for (var sample in dataset) {
  // Forward pass; in this example the model is assumed to return the loss
  // directly (otherwise compute it from the prediction and target first).
  var loss = myModel.forward(sample.input);
  loss.backward();
  optimizer.step();
  optimizer.zeroGrad();
}
Properties
- hashCode → int
  The hash code for this object.
  no setter, inherited
- learningRate → double
  The step size for the gradient updates.
  final
- parameters → List<Tensor>
  The list of model parameters (weights and biases) that this optimizer will update.
  final
- runtimeType → Type
  A representation of the runtime type of the object.
  no setter, inherited
Methods
- noSuchMethod(Invocation invocation) → dynamic
  Invoked when a nonexistent method or property is accessed.
  inherited
- step() → void
  Performs a single optimization step (parameter update).
- toString() → String
  A string representation of this object.
  inherited
- zeroGrad() → void
  Resets the gradients of all parameters to zero.
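As a concrete illustration of what step() and zeroGrad() amount to under plain gradient descent (a sketch only; Param is a toy stand-in for the library's Tensor, and SGD's w ← w − learningRate · ∂L/∂w is just one possible update rule):

```dart
// Toy stand-in for the library's Tensor: a scalar value with its gradient.
class Param {
  Param(this.value);
  double value;
  double grad = 0.0;
}

/// Plain-SGD version of step(): w <- w - learningRate * dL/dw.
void step(List<Param> params, double learningRate) {
  for (final p in params) {
    p.value -= learningRate * p.grad;
  }
}

/// zeroGrad(): clear every gradient so the next backward pass starts fresh.
void zeroGrad(List<Param> params) {
  for (final p in params) {
    p.grad = 0.0;
  }
}

void main() {
  final w = Param(1.0)..grad = 0.5;
  step([w], 0.1); // w.value: 1.0 - 0.1 * 0.5 = 0.95
  zeroGrad([w]);  // w.grad: 0.0
}
```

Without the zeroGrad() call, gradients from one iteration would accumulate into the next, which is why the workflow above resets them after every update.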
Operators
- operator ==(Object other) → bool
  The equality operator.
  inherited