SGD class

Implements the Stochastic Gradient Descent (SGD) optimizer.

This is the most fundamental optimization algorithm. At each step, it updates every parameter by moving it in the direction of the negative gradient, scaled by the learningRate.

The update rule is: parameter = parameter - learningRate * gradient.

While simple and effective for many problems, it can be slower to converge than more advanced adaptive optimizers like Adam or RMSprop.
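As a quick numeric illustration of this rule (the parameter, gradient, and learning-rate values below are made up):

```dart
void main() {
  const learningRate = 0.1;

  // A single scalar parameter and its gradient (illustrative values only).
  double parameter = 1.0;
  const gradient = 0.5;

  // Apply the SGD update rule: parameter = parameter - learningRate * gradient.
  parameter = parameter - learningRate * gradient;

  print(parameter); // 0.95
}
```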

Example

Optimizer optimizer = SGD(model.parameters, learningRate: 0.01);
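To show the zeroGrad()/step() pattern end to end without depending on this library's Tensor API, here is a self-contained sketch that fits a single weight using hand-derived gradients; all names and values are illustrative, not part of this API:

```dart
// Sketch: fit y = w * x with plain SGD on one scalar weight.
// Gradients are computed by hand instead of via this library's Tensor type.
void main() {
  const learningRate = 0.01;
  double w = 0.0;              // the single "parameter"
  const xs = [1.0, 2.0, 3.0];
  const ys = [2.0, 4.0, 6.0];  // generated with a true weight of 2.0

  for (var epoch = 0; epoch < 200; epoch++) {
    double grad = 0.0;         // analogous to zeroGrad(): reset the gradient
    for (var i = 0; i < xs.length; i++) {
      // d/dw of 0.5 * (w*x - y)^2  is  (w*x - y) * x
      grad += (w * xs[i] - ys[i]) * xs[i];
    }
    grad /= xs.length;
    w -= learningRate * grad;  // analogous to step(): the SGD update rule
  }

  print(w); // converges close to 2.0
}
```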
Inheritance

Object → Optimizer → SGD

Constructors

SGD(List<Tensor> parameters, {required double learningRate})
Creates an SGD optimizer that updates the given parameters using the given learningRate.

Properties

hashCode → int
The hash code for this object.
no setter, inherited
learningRate → double
The step size for the gradient updates.
final, inherited
parameters → List<Tensor>
The list of model parameters (weights and biases) that this optimizer will update.
final, inherited
runtimeType → Type
A representation of the runtime type of the object.
no setter, inherited

Methods

noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
step() → void
Performs a single optimization step using the basic gradient descent rule.
override
toString() → String
A string representation of this object.
inherited
zeroGrad() → void
Resets the gradients of all parameters to zero.
inherited
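Putting the two methods together, step() and zeroGrad() can be sketched as follows. TensorSketch and its data/grad fields are assumptions made for illustration; this library's actual Tensor API may differ:

```dart
/// Minimal sketch of an SGD optimizer (not this library's implementation).
class SGDSketch {
  SGDSketch(this.parameters, {required this.learningRate});

  final List<TensorSketch> parameters;
  final double learningRate;

  /// Applies parameter = parameter - learningRate * gradient, element-wise.
  void step() {
    for (final p in parameters) {
      for (var i = 0; i < p.data.length; i++) {
        p.data[i] -= learningRate * p.grad[i];
      }
    }
  }

  /// Resets every gradient to zero before the next backward pass.
  void zeroGrad() {
    for (final p in parameters) {
      for (var i = 0; i < p.grad.length; i++) {
        p.grad[i] = 0.0;
      }
    }
  }
}

/// Stand-in for the library's Tensor; the field names are assumptions.
class TensorSketch {
  TensorSketch(this.data, this.grad);
  final List<double> data;
  final List<double> grad;
}

void main() {
  final p = TensorSketch([1.0], [0.5]);
  final opt = SGDSketch([p], learningRate: 0.1);
  opt.step();
  print(p.data[0]); // 0.95
  opt.zeroGrad();
  print(p.grad[0]); // 0.0
}
```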

Operators

operator ==(Object other) → bool
The equality operator.
inherited