SGD class

Stochastic Gradient Descent optimizer with an optional momentum term

How gradients are applied depends on the momentum value (see the sketch after this list):

  • momentum == 0:
layer.w = layer.w - gradients.scaled(learningRate);
  • momentum > 0:
velocity = velocity.scaled(momentum) - gradients.scaled(learningRate);
layer.w = layer.w + velocity;
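
Reduced to plain doubles, the two branches amount to the sketch below. This is a minimal illustration, not the library's implementation: w and g stand in for one weight and its gradient, and velocity is state the optimizer would keep between calls (the Matrix calls above become scalar arithmetic).

// Scalar sketch of the two update rules above.
double velocity = 0;

double step(double w, double g, double learningRate, double momentum) {
  if (momentum == 0) {
    // Plain SGD: move against the gradient.
    return w - g * learningRate;
  }
  // Momentum SGD: decay the accumulated velocity, subtract the
  // scaled gradient, then apply the velocity to the weight.
  velocity = velocity * momentum - g * learningRate;
  return w + velocity;
}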

Constructors

SGD({double learningRate = 0.05, double momentum = 0})
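
For example, using the documented defaults and a momentum variant:

final plain = SGD(); // learningRate 0.05, momentum 0 (plain SGD)
final withMomentum = SGD(learningRate: 0.01, momentum: 0.9);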

Properties

biasLearningRate double
The learning rate for biases.
getter/setter pair, inherited
hashCode int
The hash code for this object.
no setter, inherited
learningRate double
The learning rate.
getter/setter pair, inherited
momentum double
The momentum value.
getter/setter pair
runtimeType Type
A representation of the runtime type of the object.
no setter, inherited

Methods

applyGradients(List<List<Matrix>> gradients, List<Layer> layers, [dynamic parametr]) → void
Applies the gradients to the layers using the optimizer's update logic (a usage sketch follows the method list).
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
toString() → String
A string representation of this object.
override
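
A hypothetical call to applyGradients, assuming the gradients and layers come from a training step elsewhere: computeGradients, batch, and network below are placeholders, not part of this class, and the optional parametr argument is omitted.

final List<List<Matrix>> gradients = computeGradients(batch); // placeholder
withMomentum.applyGradients(gradients, network.layers);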

Operators

operator ==(Object other) → bool
The equality operator.
inherited