QuantizeOp class

Quantizes a TensorBuffer with the given zeroPoint and scale.

Note: QuantizeOp does not cast its output to UINT8; it only performs the quantization math on the input. The data type of the output tensor is always FLOAT32, unless the op is effectively an identity op (in which case the output tensor is the same instance as the input). To connect to a quantized model, a CastOp is probably needed.

If both zeroPoint and scale are 0, the QuantizeOp is bypassed, which is equivalent to setting zeroPoint to 0 and scale to 1. This is useful when passing in quantization parameters extracted directly from a TFLite model flatbuffer: if a tensor is not quantized, both its zeroPoint and scale are read as 0.
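The arithmetic described above can be illustrated outside Dart. The following is a minimal Python sketch of the quantize formula, q = x / scale + zeroPoint, including the identity bypass; it mirrors the documented behavior and is not the library's actual implementation:

```python
def quantize(values, zero_point, scale):
    """Mimic QuantizeOp's math: q = x / scale + zero_point.

    Results stay as floats (QuantizeOp does not cast to UINT8).
    If both zero_point and scale are 0, the op is bypassed and the
    input is returned unchanged, matching the documented behavior.
    """
    if zero_point == 0 and scale == 0:
        return values  # identity op: same instance as the input
    return [v / scale + zero_point for v in values]

# A real value of 0.5 with scale = 1/255 and zeroPoint = 0
# maps to the (uncast) quantized value 127.5.
print(quantize([0.5], 0, 1 / 255))  # [127.5]
```

Note that 127.5 is not a valid UINT8 value, which is why a CastOp is typically chained after QuantizeOp before feeding a quantized model.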

Inheritance
Implemented types

Constructors

QuantizeOp(double zeroPoint, double scale)
Creates a QuantizeOp with the given zeroPoint and scale.

Properties

hashCode → int
The hash code for this object.
no setter, inherited
isIdentityOp → bool
Whether this op is effectively an identity op (and will be bypassed).
getter/setter pair, inherited
mean → List<double>
The mean values used by the underlying normalization.
getter/setter pair, inherited
numChannels → int
The number of channels the normalization parameters apply to.
getter/setter pair, inherited
runtimeType → Type
A representation of the runtime type of the object.
no setter, inherited
stddev → List<double>
The standard deviation values used by the underlying normalization.
getter/setter pair, inherited

Methods

apply(TensorBuffer input) → TensorBuffer
Applies the defined normalization on the given input tensor and returns the result.
inherited
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
toString() → String
A string representation of this object.
inherited

Operators

operator ==(Object other) → bool
The equality operator.
inherited