PositionalEncoding class

Injects information about the relative or absolute position of tokens in a sequence.

Since the Transformer architecture contains no recurrence, it has no inherent sense of word order. This layer adds a unique, non-trainable vector to each input embedding so the model can make use of sequence order.

It uses the standard sinusoidal formula from the "Attention Is All You Need" paper.
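
For position pos and embedding-dimension index i, the paper defines:

  PE(pos, 2i)   = sin(pos / 10000^(2i / dModel))
  PE(pos, 2i+1) = cos(pos / 10000^(2i / dModel))

Even dimensions receive a sine and odd dimensions the cosine of the same frequency, so each position maps to a distinct pattern across the embedding.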

Inheritance

Constructors

PositionalEncoding(int maxLength, int dModel)
Creates a positional-encoding layer covering sequences of up to maxLength positions, with encoding vectors of dimension dModel.
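
A minimal usage sketch; `embedded` stands in for an already-embedded input batch and is hypothetical, not part of this API:

```dart
// Hypothetical input: `embedded` is a Tensor of embedded tokens.
final posEnc = PositionalEncoding(512, 64); // up to 512 tokens, dModel = 64
final Tensor output = posEnc.call(embedded); // same shape, positions injected
```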

Properties

dModel int
The embedding dimension; the length of each encoding vector.
getter/setter pair
encodingMatrix Tensor<Matrix>
The precomputed sinusoidal encoding matrix: one row per position, one column per embedding dimension.
getter/setter pair
hashCode int
The hash code for this object.
no setter, inherited
maxLength int
The maximum sequence length the layer can encode.
getter/setter pair
name String
A user-friendly name for the layer (e.g., 'dense', 'lstm').
getter/setter pair, override-getter
parameters List<Tensor>
A list of all trainable tensors (weights and biases) in the layer. Empty here, since the positional encoding is non-trainable.
no setter, override
runtimeType Type
A representation of the runtime type of the object.
no setter, inherited

Methods

build(Tensor input) → void
Initializes the layer's parameters based on the shape of the first input.
override
call(Tensor input) → Tensor
The public, callable interface for the layer.
inherited
forward(Tensor input) → Tensor<Matrix>
The core logic of the layer's transformation: adds the positional encoding for each position to the corresponding input embedding (see the sketch after this list).
override
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
toString() → String
A string representation of this object.
inherited
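
The computation behind forward can be sketched in standalone Dart. This version uses nested lists in place of the library's Tensor<Matrix> type (an assumption made only for illustration) and applies the sinusoidal formula given above:

```dart
import 'dart:math' as math;

/// Adds a sinusoidal positional encoding to an embedded sequence,
/// mirroring what forward does conceptually. Plain nested lists stand
/// in for the library's Tensor<Matrix> type.
List<List<double>> addPositionalEncoding(List<List<double>> embeddings) {
  final seqLen = embeddings.length;
  final dModel = embeddings.first.length;
  return [
    for (var pos = 0; pos < seqLen; pos++)
      [
        for (var i = 0; i < dModel; i++)
          embeddings[pos][i] +
              (i.isEven
                  // Even index 2k: sin(pos / 10000^(2k / dModel)).
                  ? math.sin(pos / math.pow(10000, i / dModel))
                  // Odd index 2k+1: cos at the same frequency as 2k.
                  : math.cos(pos / math.pow(10000, (i - 1) / dModel)))
      ]
  ];
}

void main() {
  // Two positions, dModel = 4, all-zero embeddings: the output is the
  // raw encoding table itself. Row 0 is [0, 1, 0, 1].
  final encoded =
      addPositionalEncoding(List.generate(2, (_) => List.filled(4, 0.0)));
  print(encoded);
}
```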

Operators

operator ==(Object other) → bool
The equality operator.
inherited