face_detection_tflite library

Face detection and landmark inference utilities backed by MediaPipe-style TFLite models for Flutter apps.

Classes

AlignedFace
Aligned face crop data holder for OpenCV-based processing.
AlignedRoi
Rotation-aware region of interest for cropped eye landmarks.
BoundingBox
An axis-aligned or rotated bounding box defined by four corner points.
DecodedBox
Decoded detection box and keypoints straight from the TFLite model.
Detection
Raw detection output from the face detector containing the bounding box and keypoints.
DetectionWithSegmentationResult
Result combining face detection and segmentation from parallel processing.
Eye
Comprehensive eye tracking data including iris center, iris contour, and eye mesh.
EyePair
Eye tracking data for both eyes including iris and eye mesh landmarks.
Face
FaceDetection
Runs face box detection and predicts a small set of facial keypoints (eyes, nose, mouth, tragions) on the detected face(s).
FaceDetectionTfliteDart
Flutter plugin registration stub for Dart-only initialization.
FaceDetector
A complete face detection and analysis system using TensorFlow Lite models.
FaceDetectorIsolate
A wrapper that runs the entire face detection pipeline in a background isolate.
FaceEmbedding
Generates face embeddings (identity vectors) from aligned face crops.
FaceLandmark
Predicts the full 468-point face mesh (x, y, z per point) for an aligned face crop. Coordinates are normalized before later mapping back to image space.
FaceLandmarks
Facial landmark points with convenient named access.
FaceMesh
A 468-point face mesh with optional depth information.
ImageTensor
Image tensor plus padding metadata used to undo letterboxing.
IrisLandmark
Estimates dense iris keypoints within cropped eye regions and lets callers derive a robust iris center (with fallback if inference fails).
LetterboxParams
Parameters for aspect-preserving resize with centered padding.
Mat
OpenCV image matrix (cv.Mat) used for image input, cropping, and warping.
MulticlassSegmentationMask
Extended segmentation mask with per-class probabilities.
OutputTensorInfo
Holds metadata for an output tensor (shape plus its writable buffer).
PerformanceConfig
Configuration for interpreter hardware acceleration and threading.
Point
A point with x, y, and optional z coordinates.
RectF
Axis-aligned rectangle with normalized coordinates.
SegmentationClass
Segmentation class indices in the multiclass model output. The model outputs 6 channels representing probabilities for each class.
SegmentationConfig
Configuration for segmentation operations.
SegmentationMask
A segmentation probability mask indicating foreground vs background.
SegmentationWorker
A dedicated background isolate for selfie segmentation inference.
SelfieSegmentation
Performs selfie/person segmentation using MediaPipe TFLite models.

Enums

FaceDetectionMode
Controls which detection features to compute.
FaceDetectionModel
Specifies which face detection model variant to use.
FaceLandmarkType
Identifies specific facial landmarks returned by face detection.
IsolateOutputFormat
Output format options for isolate-based segmentation to reduce transfer overhead.
PerformanceMode
Hardware acceleration mode for LiteRT inference.
PixelFormat
Pixel format for RGBA output from segmentation masks.
SegmentationError
Error codes for segmentation operations.
SegmentationModel
Selects which segmentation model variant to use.

Constants

eyeLandmarkConnections → const List<List<int>>
Connections between eye contour landmarks for rendering the visible eyeball outline.
IMREAD_COLOR → const int
Image-decoding flag requesting 3-channel BGR color output, matching OpenCV's cv::IMREAD_COLOR.
kMaxEyeLandmark → const int
Number of eye contour points that form the visible eyeball outline.
kMeshPoints → const int
The expected number of landmark points in a complete face mesh.
kMinSegmentationInputSize → const int
Minimum input image size (smaller images rejected).

Functions

allocTensorShape(List<int> shape) → Object
Allocates a nested list structure matching the given tensor shape.
bgrBytesToRgbFloat32({required Uint8List bytes, required int totalPixels, Float32List? buffer}) → Float32List
Converts BGR bytes to a flat Float32List with [0.0, 1.0] normalization.
bgrBytesToSignedFloat32({required Uint8List bytes, required int totalPixels, Float32List? buffer}) → Float32List
Converts BGR bytes to a flat Float32List with [-1.0, 1.0] normalization.
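The two BGR conversion helpers share the same loop shape; a minimal sketch of the [0.0, 1.0] variant, assuming tightly packed 3-byte BGR input (illustrative, not the library's source — the real helpers also accept an optional reuse buffer):

```dart
import 'dart:typed_data';

// Illustrative sketch: convert packed BGR bytes to a flat RGB
// Float32List scaled to [0.0, 1.0].
Float32List bgrToRgbFloat32(Uint8List bytes, int totalPixels) {
  final out = Float32List(totalPixels * 3);
  for (var i = 0; i < totalPixels; i++) {
    final b = bytes[i * 3];
    final g = bytes[i * 3 + 1];
    final r = bytes[i * 3 + 2];
    out[i * 3] = r / 255.0; // channel order swapped to RGB
    out[i * 3 + 1] = g / 255.0;
    out[i * 3 + 2] = b / 255.0;
  }
  return out;
}

void main() {
  // One pure-blue BGR pixel becomes RGB floats (0.0, 0.0, 1.0).
  final rgb = bgrToRgbFloat32(Uint8List.fromList([255, 0, 0]), 1);
  print(rgb); // [0.0, 0.0, 1.0]
}
```

The signed variant would divide by 127.5 and subtract 1.0 instead, mapping bytes into [-1.0, 1.0].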
clamp01(double v) → double
Clamps v to the range [0.0, 1.0]. Returns 0.0 for NaN inputs.
clip(double v, double lo, double hi) → double
Clamps v to the range [lo, hi].
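A sketch of the clamping behavior documented above, including the NaN case (assumed implementation):

```dart
// NaN-safe clamp to [0.0, 1.0]: every comparison with NaN is false,
// so NaN must be handled explicitly (the doc maps it to 0.0).
double clamp01(double v) {
  if (v.isNaN) return 0.0;
  return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v);
}

// General-purpose clamp to [lo, hi].
double clip(double v, double lo, double hi) =>
    v < lo ? lo : (v > hi ? hi : v);

void main() {
  print(clamp01(double.nan)); // 0.0
  print(clamp01(1.7)); // 1.0
  print(clip(-3.0, -1.0, 1.0)); // -1.0
}
```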
collectOutputTensorInfo(Interpreter itp) → Map<int, OutputTensorInfo>
Collects output tensor shapes (and their backing buffers) for an interpreter.
computeEmbeddingAlignment({required Point leftEye, required Point rightEye}) → AlignedRoi
Computes alignment parameters for extracting a face crop suitable for embedding.
computeLetterboxParams({required int srcWidth, required int srcHeight, required int targetWidth, required int targetHeight, bool roundDimensions = true}) → LetterboxParams
Computes letterbox parameters for resizing srcWidth×srcHeight to fit within targetWidth×targetHeight while preserving aspect ratio.
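The letterbox computation can be sketched as follows; the record fields here are illustrative and need not match the library's LetterboxParams exactly:

```dart
import 'dart:math' as math;

// Aspect-preserving resize with centered padding: scale by the
// smaller axis ratio, then split the leftover space symmetrically.
({int newW, int newH, int padX, int padY}) letterbox(
    int srcW, int srcH, int dstW, int dstH) {
  final scale = math.min(dstW / srcW, dstH / srcH);
  final newW = (srcW * scale).round();
  final newH = (srcH * scale).round();
  return (
    newW: newW,
    newH: newH,
    padX: (dstW - newW) ~/ 2,
    padY: (dstH - newH) ~/ 2,
  );
}

void main() {
  // 1280×720 into a 128×128 model input: scale 0.1 gives 128×72,
  // with 28 px of vertical padding top and bottom.
  final p = letterbox(1280, 720, 128, 128);
  print('${p.newW}x${p.newH} pad(${p.padX}, ${p.padY})'); // 128x72 pad(0, 28)
}
```

Recording the padding is what lets ImageTensor later undo the letterboxing when mapping detections back to image space.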
convertImageToTensor(Mat src, {required int outW, required int outH, Float32List? buffer}) → ImageTensor
Converts a cv.Mat image to a normalized tensor with letterboxing.
createNHWCTensor4D(int height, int width) → List<List<List<List<double>>>>
Creates a pre-allocated [1][height][width][3] tensor structure.
cropFromRoiMat(Mat src, RectF roi) → Mat
Crops a rectangular region from a cv.Mat using normalized coordinates.
extractAlignedSquare(Mat src, double cx, double cy, double size, double theta) → Mat?
Extracts a rotated square region from a cv.Mat using OpenCV's warpAffine.
faceDetectionToRoi(RectF boundingBox, {double expandFraction = 0.6}) → RectF
Converts a face detection bounding box to a square region of interest (ROI).
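faceDetectionToRoi follows the common MediaPipe pattern of squaring and expanding the detection box so the landmark model sees the full face; a hypothetical sketch of that pattern (the library's exact math may differ):

```dart
// Hypothetical square-ROI expansion in normalized coordinates:
// keep the box center, take the longer side, and grow it by
// expandFraction before re-centering.
({double xMin, double yMin, double xMax, double yMax}) toSquareRoi(
    double xMin, double yMin, double xMax, double yMax,
    {double expandFraction = 0.6}) {
  final cx = (xMin + xMax) / 2;
  final cy = (yMin + yMax) / 2;
  final longSide =
      (xMax - xMin) > (yMax - yMin) ? (xMax - xMin) : (yMax - yMin);
  final half = longSide * (1 + expandFraction) / 2;
  return (xMin: cx - half, yMin: cy - half, xMax: cx + half, yMax: cy + half);
}

void main() {
  // A 0.2×0.2 box centered at (0.5, 0.5) grows to ~0.32 per side.
  final roi = toSquareRoi(0.4, 0.4, 0.6, 0.6);
  print(roi.xMax - roi.xMin);
}
```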
fillNHWC4D(Float32List flat, List<List<List<List<double>>>> cache, int inH, int inW) → void
Fills an NHWC 4D tensor cache from a flat Float32List.
flattenDynamicTensor(Object? out) → Float32List
Flattens an arbitrarily nested tensor to a flat Float32List.
imdecode(Uint8List buf, int flags, {Mat? dst}) → Mat
Reads an image from a buffer in memory. If the buffer is too short or contains invalid data, returns an empty matrix. @param buf Input array or vector of bytes. @param flags The same flags as in cv::imread, see cv::ImreadModes.
sigmoid(double x) → double
Sigmoid activation function.
sigmoidClipped(double x, {double limit = 80.0}) → double
Sigmoid with input clipping to prevent overflow.
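The two activation helpers can be sketched directly from their descriptions:

```dart
import 'dart:math' as math;

// Standard logistic sigmoid.
double sigmoid(double x) => 1.0 / (1.0 + math.exp(-x));

// Clamp the logit first so math.exp cannot overflow to infinity
// for extreme raw scores (default limit mirrors the signature above).
double sigmoidClipped(double x, {double limit = 80.0}) {
  final clipped = x < -limit ? -limit : (x > limit ? limit : x);
  return sigmoid(clipped);
}

void main() {
  print(sigmoid(0.0)); // 0.5
  print(sigmoidClipped(1e9)); // 1.0
}
```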
testClip(double v, double lo, double hi) → double
Test-only access to clip for verifying value clamping behavior.
testCollectOutputTensorInfo(Interpreter itp) → Map<int, OutputTensorInfo>
Test-only access to collectOutputTensorInfo for verifying output tensor collection.
testComputeClassProbabilities(Float32List rawOutput, int width, int height) → Float32List
testComputeFaceAlignment(Detection det, double imgW, double imgH) → ({double cx, double cy, double size, double theta})
testCreateInferenceLockRunner() → Future<T> Function<T>(Future<T> fn())
testDeserializeMask(Map<String, dynamic> map) → SegmentationMask
testDetectionLetterboxRemoval(List<Detection> dets, List<double> padding) → List<Detection>
testExpectedOutputChannels(SegmentationModel model) → int
testFindIrisCenterFromPoints(List<Point> irisPoints) → Point
testInputHeightFor(SegmentationModel model) → int
testInputWidthFor(SegmentationModel model) → int
testModelFileFor(SegmentationModel model) → String
testNameFor(FaceDetectionModel m) → String
testNms(List<Detection> dets, double iouThresh, double scoreThresh) → List<Detection>
testNormalizeEmbedding(Float32List embedding) → Float32List
testOptsFor(FaceDetectionModel m) → SSDAnchorOptions
testSerializeMask(SegmentationMask mask, IsolateOutputFormat format, double binaryThreshold) → Map<String, dynamic>
testSigmoidClipped(double x, {double limit = _rawScoreLimit}) → double
testSsdGenerateAnchors(SSDAnchorOptions opts) → Float32List
testTransformIrisToAbsolute(List<List<double>> lmNorm, AlignedRoi roi, bool isRight) → List<List<double>>
testTransformMeshToAbsolute(List<List<double>> lmNorm, double cx, double cy, double size, double theta) → List<Point>
testUnpackLandmarks(Float32List flat, int inW, int inH, List<double> padding, {bool clamp = true}) → List<List<double>>
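testNms above exposes the detector's non-maximum suppression for testing. The underlying idea, sketched with a minimal stand-in detection type (not the library's Detection class):

```dart
// Minimal stand-in for a detection: normalized box plus score.
typedef Box = ({double x1, double y1, double x2, double y2, double score});

// Intersection-over-union of two axis-aligned boxes.
double _iou(Box a, Box b) {
  final ix = (a.x2 < b.x2 ? a.x2 : b.x2) - (a.x1 > b.x1 ? a.x1 : b.x1);
  final iy = (a.y2 < b.y2 ? a.y2 : b.y2) - (a.y1 > b.y1 ? a.y1 : b.y1);
  if (ix <= 0 || iy <= 0) return 0.0;
  final inter = ix * iy;
  final areaA = (a.x2 - a.x1) * (a.y2 - a.y1);
  final areaB = (b.x2 - b.x1) * (b.y2 - b.y1);
  return inter / (areaA + areaB - inter);
}

// Greedy NMS: keep the highest-scoring box, drop boxes that
// overlap a kept box above iouThresh, repeat.
List<Box> nms(List<Box> dets, double iouThresh, double scoreThresh) {
  final sorted = dets.where((d) => d.score >= scoreThresh).toList()
    ..sort((a, b) => b.score.compareTo(a.score));
  final kept = <Box>[];
  for (final d in sorted) {
    if (kept.every((k) => _iou(k, d) < iouThresh)) kept.add(d);
  }
  return kept;
}

void main() {
  final boxes = [
    (x1: 0.0, y1: 0.0, x2: 0.5, y2: 0.5, score: 0.9),
    (x1: 0.05, y1: 0.05, x2: 0.55, y2: 0.55, score: 0.8), // overlaps first
    (x1: 0.6, y1: 0.6, x2: 0.9, y2: 0.9, score: 0.7),
  ];
  print(nms(boxes, 0.5, 0.3).length); // 2
}
```

The second box (IoU ≈ 0.68 with the first) is suppressed; the disjoint third box survives.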

Exceptions / Errors

SegmentationException
Exception thrown by segmentation operations.