synadart library

A limited but fully documented neural network library created for educational purposes.

Classes

Sequential
A Network model in which every Layer has one input and one output tensor.
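
To illustrate what "one input and one output tensor per Layer" means in practice, here is a minimal, self-contained Dart sketch of a sequential forward pass. The DemoLayer and DemoSequential types below are hypothetical stand-ins for illustration only, not synadart's own classes.

```dart
// Illustrative sketch only: hypothetical stand-ins, not synadart's own API.
abstract class DemoLayer {
  List<double> forward(List<double> input);
}

class DoubleEachValue implements DemoLayer {
  @override
  List<double> forward(List<double> input) =>
      input.map((x) => x * 2).toList();
}

class DemoSequential {
  final List<DemoLayer> layers;
  DemoSequential(this.layers);

  // Each layer consumes exactly one input tensor and produces one output
  // tensor, which then becomes the next layer's input.
  List<double> process(List<double> input) =>
      layers.fold(input, (current, layer) => layer.forward(current));
}

void main() {
  final network = DemoSequential([DoubleEachValue(), DoubleEachValue()]);
  print(network.process([1.0, 2.0])); // [4.0, 8.0]
}
```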

Enums

ActivationAlgorithm
Algorithms which can be used for activating Neurons.

Constants

algorithms → const Map<ActivationAlgorithm, List<ActivationFunctionSignature>>
Map containing all available activation algorithms and their derivatives.
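
A hedged sketch of how this map could be consulted. The import path, the sigmoid enum value, and the assumption that each list stores the activation function at index 0 and its derivative at index 1 are mine; verify them against the package source before relying on them.

```dart
import 'package:synadart/synadart.dart';

void main() {
  // Assumption: the list pairs each function with its derivative,
  // e.g. [sigmoid, sigmoidPrime].
  final pair = algorithms[ActivationAlgorithm.sigmoid]!;
  final ActivationFunctionSignature activate = pair[0];
  final ActivationFunctionSignature differentiate = pair[1];
  print(activate(0.0));      // expected: 0.5
  print(differentiate(0.0)); // expected: 0.25
}
```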

Functions

elu(double x, [double hyperparameter = 1]) → double
Exponential Linear Unit - For inputs below 0, descends smoothly towards the negative of hyperparameter; returns x for inputs greater than or equal to 0.
eluPrime(double x, [double hyperparameter = 1]) → double
The derivative of ELU.
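
For reference, the standard ELU formula that the two entries above describe, sketched in plain Dart; this mirrors the common textbook definition and is not necessarily identical to the library's implementation.

```dart
import 'dart:math';

// Standard ELU: returns x for x >= 0, otherwise descends smoothly towards
// -hyperparameter as x decreases.
double eluSketch(double x, [double hyperparameter = 1]) =>
    x >= 0 ? x : hyperparameter * (exp(x) - 1);

// Its derivative: 1 for x >= 0, otherwise hyperparameter * e^x.
double eluPrimeSketch(double x, [double hyperparameter = 1]) =>
    x >= 0 ? 1 : hyperparameter * exp(x);
```
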
gaussian(double x) → double
Symmetrical, bell-shaped curve with a peak of 1 at x = 0 and a smooth approach to 0 on both sides of the x-axis.
gaussianPrime(double x) → double
The derivative of the Gaussian.
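
The bell curve described above corresponds to the common Gaussian activation e^(-x²), sketched below; the library's exact scaling is an assumption on my part.

```dart
import 'dart:math';

// Common Gaussian activation: peaks at 1 when x = 0 and tends to 0 as |x| grows.
double gaussianSketch(double x) => exp(-(x * x));

// Its derivative: -2x * e^(-x^2).
double gaussianPrimeSketch(double x) => -2 * x * exp(-(x * x));
```
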
lrelu(double x) → double
Leaky Rectified Linear Unit - Applies a shallow, non-zero slope to negative inputs instead of reducing them to 0 as ReLU does.
lreluPrime(double x) → double
The derivative of LReLU.
relu(double x) → double
Rectified Linear Unit - Negative inputs are adjusted to 0, while positive inputs are left untouched.
reluPrime(double x) → double
The derivative of ReLU.
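
The two rectifiers above follow the usual definitions sketched here; the leak coefficient of 0.01 is the conventional default and an assumption, since the entries do not state the value the library uses.

```dart
// Standard ReLU: clamps negative inputs to 0 and passes positive inputs through.
double reluSketch(double x) => x < 0 ? 0.0 : x;
double reluPrimeSketch(double x) => x < 0 ? 0.0 : 1.0;

// Standard Leaky ReLU: a shallow slope (assumed 0.01 here) instead of a hard 0.
double lreluSketch(double x) => x < 0 ? 0.01 * x : x;
double lreluPrimeSketch(double x) => x < 0 ? 0.01 : 1.0;
```
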
resolveActivationAlgorithm(ActivationAlgorithm activationAlgorithm) → ActivationFunction
Resolves an ActivationAlgorithm to a mathematical function in the form of an ActivationFunction.
resolveActivationDerivative(ActivationAlgorithm activationAlgorithm) → ActivationFunction
Resolves an ActivationAlgorithm to the derivative of the mathematical function in the form of an ActivationFunction.
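
A hedged usage sketch of the two resolvers. The returned ActivationFunction expects a closure supplying the weighted sum (see the ActivationFunction typedef below); the import path and the sigmoid enum value are assumptions.

```dart
import 'package:synadart/synadart.dart';

void main() {
  // Assumed enum value; substitute whichever ActivationAlgorithm you need.
  final activate = resolveActivationAlgorithm(ActivationAlgorithm.sigmoid);
  final differentiate = resolveActivationDerivative(ActivationAlgorithm.sigmoid);

  // Per the ActivationFunction typedef, the argument is a closure that
  // produces the weighted sum, not a plain double.
  double weightedSum() => 0.5;
  print(activate(weightedSum));
  print(differentiate(weightedSum));
}
```
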
selu(double x) → double
Scaled Exponential Linear Unit - Ensures a slope larger than one for positive inputs.
seluPrime(double x) → double
The derivative of the Scaled Exponential Linear Unit.
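
The "slope larger than one" comes from the standard SELU scaling factor λ ≈ 1.0507, which multiplies positive inputs; sketched below with the usual published constants, which the library may round differently.

```dart
import 'dart:math';

// Standard SELU constants; the library's exact values are an assumption here.
const double seluAlpha = 1.6732632423543772;
const double seluLambda = 1.0507009873554805;

double seluSketch(double x) =>
    seluLambda * (x >= 0 ? x : seluAlpha * (exp(x) - 1));

// Its derivative: lambda for x >= 0, otherwise lambda * alpha * e^x.
double seluPrimeSketch(double x) =>
    seluLambda * (x >= 0 ? 1.0 : seluAlpha * exp(x));
```
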
sigmoid(double x) → double
Shrinks the range of values to between 0 and 1 using exponentials. Results can be driven into saturation, which makes the sigmoid function poorly suited to deep networks with random initialisation.
sigmoidPrime(double x) → double
The derivative of the Sigmoid.
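
The logistic formula behind the two entries above, as a plain-Dart sketch. The saturation issue mentioned comes from the derivative shrinking towards 0 at both extremes.

```dart
import 'dart:math';

// Logistic sigmoid: squashes any input into the open interval (0, 1).
double sigmoidSketch(double x) => 1 / (1 + exp(-x));

// Its derivative, expressed via the sigmoid itself; it peaks at 0.25 at x = 0
// and saturates towards 0 for large |x|.
double sigmoidPrimeSketch(double x) {
  final s = sigmoidSketch(x);
  return s * (1 - s);
}
```
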
softplus(double x) → double
Similar to ReLU, but with a smooth (soft) curve as the result approaches zero along the negative x-axis. Softplus is strictly positive and monotonic.
softplusPrime(double x) → double
The derivative of the Softplus.
softsign(double x) → double
Similar to the hyperbolic tangent, but its tails are quadratic polynomials rather than exponentials, so the curve approaches its asymptotes much more slowly.
softsignPrime(double x) → double
The derivative of the Softsign.
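
The usual formulas behind the preceding two pairs of entries, sketched in plain Dart; note that the derivative of softplus is exactly the sigmoid.

```dart
import 'dart:math';

// Softplus: a smooth, strictly positive, monotonic approximation of ReLU.
double softplusSketch(double x) => log(1 + exp(x));
double softplusPrimeSketch(double x) => 1 / (1 + exp(-x)); // the sigmoid

// Softsign: like tanh, but approaches its asymptotes polynomially, hence more slowly.
double softsignSketch(double x) => x / (1 + x.abs());
double softsignPrimeSketch(double x) {
  final d = 1 + x.abs();
  return 1 / (d * d);
}
```
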
swish(double x) → double
Similar to ReLU and Softplus; negative outputs do occur, but they approach 0 as x decreases, becoming negligible by around x ≈ -10. Delivers results comparable or superior to ReLU.
swishPrime(double x) → double
The derivative of the Swish.
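
Swish as usually defined is x multiplied by its own sigmoid; the fixed coefficient of 1 used below is the common choice and an assumption about the library's version.

```dart
import 'dart:math';

// Swish with the common fixed coefficient of 1: x * sigmoid(x).
double swishSketch(double x) => x / (1 + exp(-x));

// Its derivative: sigmoid(x) + x * sigmoid(x) * (1 - sigmoid(x)).
double swishPrimeSketch(double x) {
  final s = 1 / (1 + exp(-x));
  return s + x * s * (1 - s);
}
```
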
tanh(double x) → double
Hyperbolic Tangent - Utilises exponentials to shrink a range of numbers to strictly between -1 and 1, with -1 and 1 being the function's asymptotes, towards which the curve tends.
tanhPrime(double x) → double
The derivative of the Hyperbolic Tangent.
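
The hyperbolic tangent and its derivative sketched via exponentials, since dart:math does not expose a tanh function directly.

```dart
import 'dart:math';

// Hyperbolic tangent via exponentials: (e^(2x) - 1) / (e^(2x) + 1), bounded by (-1, 1).
double tanhSketch(double x) {
  final e2x = exp(2 * x);
  return (e2x - 1) / (e2x + 1);
}

// Its derivative: 1 - tanh(x)^2.
double tanhPrimeSketch(double x) {
  final t = tanhSketch(x);
  return 1 - t * t;
}
```
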

Typedefs

ActivationFunction = double Function(double Function())
Type defining an activation function that takes as its parameter a function which obtains the weighted sum of the inputs and the weights.
ActivationFunctionSignature = double Function(double)
Type defining a bare activation function.
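
A short sketch of how the two typedefs relate: an ActivationFunction receives a closure that computes the weighted sum lazily, while an ActivationFunctionSignature operates on a plain double. The wrapper below is purely illustrative and not part of the library.

```dart
typedef BareActivation = double Function(double);
typedef LazyActivation = double Function(double Function());

// Hypothetical adapter: lifts a bare activation function into one that accepts
// a weighted-sum supplier, mirroring the relationship between the two typedefs.
LazyActivation lift(BareActivation bare) =>
    (double Function() weightedSum) => bare(weightedSum());

void main() {
  double identity(double x) => x;
  final activation = lift(identity);
  print(activation(() => 3.0)); // 3.0
}
```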