face_detection_tflite library

Face detection and landmark inference utilities backed by MediaPipe-style TFLite models for Flutter apps.

Classes

AlignedFace
Aligned face crop data holder for OpenCV-based processing.
AlignedRoi
Rotation-aware region of interest for cropped eye landmarks.
BackgroundImagePainter
Painter that draws a background image scaled to fill the canvas.
BoundingBox
An axis-aligned or rotated bounding box defined by four corner points.
CameraDetectionPainter
CameraFrame
A camera frame packaged for off-thread colour conversion and inference.
CompactCheckbox
Compact checkbox with an inline label, sized for dense settings panels.
CompactSlider
Compact slider with a fixed-width leading label, sized for dense settings panels. Uses 10 divisions per integer step over [min, max] and labels the thumb with a one-decimal value.
DecodedBox
Decoded detection box and keypoints straight from the TFLite model.
Detection
Raw detection output from the face detector containing the bounding box and keypoints.
DetectionsPainter
DetectionWithSegmentationResult
Result combining face detection and segmentation from parallel processing.
Eye
Comprehensive eye tracking data including iris center, iris contour, and eye mesh.
EyePair
Eye tracking data for both eyes including iris and eye mesh landmarks.
Face
Outputs for a single detected face.
FaceDetection
Runs face box detection and predicts a small set of facial keypoints (eyes, nose, mouth, tragions) on the detected face(s).
FaceDetectionCameraOverlay
A composite widget that overlays face-detection and (optional) segmentation results on top of a live camera preview.
FaceDetectionTfliteDart
Flutter plugin registration stub for Dart-only initialization.
FaceDetector
A complete face detection and analysis system using TensorFlow Lite models (a usage sketch follows this class list).
FaceEmbedding
Generates face embeddings (identity vectors) from aligned face crops.
FaceLandmark
Predicts the full 468-point face mesh (x, y, z per point) for an aligned face crop. Coordinates are normalized before later mapping back to image space.
FaceLandmarks
Facial landmark points with convenient named access.
FaceMesh
A 468-point face mesh with optional depth information.
FpsCounter
A simple 1-second rolling FPS counter for camera-preview apps.
ImageTensor
Image tensor plus padding metadata used to undo letterboxing.
IrisLandmark
Estimates dense iris keypoints within cropped eye regions and lets callers derive a robust iris center (with fallback if inference fails).
LetterboxParams
Parameters for aspect-preserving resize with centered padding.
LiveSegmentationPainter
Painter for rendering segmentation mask overlay on live camera feed.
Mat
MulticlassSegmentationMask
Extended segmentation mask with per-class probabilities.
OutputTensorInfo
Holds metadata for an output tensor (shape plus its writable buffer).
PackedYuv
A contiguous YUV buffer produced by packYuv420, ready to hand to a native colour-conversion routine.
PerformanceConfig
Configuration for interpreter hardware acceleration and threading.
Point
A point with x, y, and optional z coordinates.
RectF
Axis-aligned rectangle with normalized coordinates.
SegmentationClass
Segmentation class indices in the multiclass model output. The model outputs 6 channels representing probabilities for each class.
SegmentationConfig
Configuration for segmentation operations.
SegmentationMask
A segmentation probability mask indicating foreground vs background.
SegmentationMaskPainter
Painter for rendering a segmentation mask over a still image.
SegmentationWorker
A dedicated background isolate for selfie segmentation inference.
SelfieSegmentation
Performs selfie/person segmentation using MediaPipe TFLite models.
TimingBadge
Compact tappable badge that displays the total processing time plus a color-coded performance indicator (via performanceLevel). Tapping opens a dialog with per-stage timings (detection, mesh refinement, iris refinement) and a processed/skipped status per stage.
VirtualBackgroundOverlayPainter
Painter that draws background image only on non-person (background) areas. This creates the "virtual background" effect by covering the camera's background with the beach image while leaving the person visible. Uses soft alpha blending at edges for smooth transitions.
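A minimal usage sketch tying several of these classes together. This is not the package's confirmed API: the FaceDetector factory, the detect call, the dispose call, and the enum values shown are assumptions for illustration only; consult the individual class pages for the real members.

  import 'dart:typed_data';
  import 'package:face_detection_tflite/face_detection_tflite.dart';

  Future<void> detectOnce(Uint8List imageBytes) async {
    // Hypothetical factory; the real initializer may differ.
    final detector = await FaceDetector.create(
      model: FaceDetectionModel.backCamera, // assumed enum value
      mode: FaceDetectionMode.full,         // assumed enum value
    );

    // Hypothetical detect call returning the Face outputs listed above.
    final List<Face> faces = await detector.detect(imageBytes);
    for (final face in faces) {
      print(face);
    }

    detector.dispose(); // assumed cleanup method
  }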

Enums

CameraFrameConversion
The colour conversion a CameraFrame's bytes need before being used as a 3-channel BGR image. Detector packages map this to an OpenCV COLOR_* code at the point of decode, inside their existing detection isolate.
CameraFrameRotation
Optional rotation applied after colour conversion. Detector packages map this to an OpenCV ROTATE_* code.
FaceDetectionMode
Controls which detection features to compute.
FaceDetectionModel
Specifies which face detection model variant to use.
FaceLandmarkType
Identifies specific facial landmarks returned by face detection.
IsolateOutputFormat
Output format options for isolate-based segmentation to reduce transfer overhead.
PerformanceMode
Hardware acceleration mode for LiteRT inference.
PixelFormat
Pixel format for RGBA output from segmentation masks.
SegmentationError
Error codes for segmentation operations.
SegmentationModel
Selects which segmentation model variant to use.
YuvLayout
Memory layout of a packed YUV buffer produced by packYuv420.

Constants

eyeLandmarkConnections → const List<List<int>>
Connections between eye contour landmarks for rendering the visible eyeball outline.
IMREAD_COLOR → const int
kMaxEyeLandmark → const int
Number of eye contour points that form the visible eyeball outline.
kMeshPoints → const int
The expected number of landmark points in a complete face mesh.
kMinSegmentationInputSize → const int
Minimum input image size (smaller images rejected).
kSegmentationClassColors → const List<Color>
Default per-class colors for the multiclass segmentation overlay, aligned by index with kSegmentationClassLabels (0=BG, 1=Hair, 2=Body, 3=Face, 4=Clothes, 5=Other). Alpha is preserved so overlays composite onto the underlying camera/image.
kSegmentationClassLabels → const List<String>
Semantic labels (indexed 0–5) for classes emitted by the multiclass segmentation model: background, hair, body skin, face skin, clothes, other (a legend sketch follows this constants list).
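A short sketch (referenced from the kSegmentationClassLabels entry above) showing one way to pair the two index-aligned constants into an on-screen legend; the widget layout itself is arbitrary.

  import 'package:flutter/material.dart';
  import 'package:face_detection_tflite/face_detection_tflite.dart';

  /// Builds one legend row per segmentation class, pairing
  /// kSegmentationClassLabels[i] with kSegmentationClassColors[i].
  Widget buildSegmentationLegend() {
    return Column(
      mainAxisSize: MainAxisSize.min,
      children: [
        for (var i = 0; i < kSegmentationClassLabels.length; i++)
          Row(
            mainAxisSize: MainAxisSize.min,
            children: [
              Container(width: 12, height: 12, color: kSegmentationClassColors[i]),
              const SizedBox(width: 6),
              Text(kSegmentationClassLabels[i]),
            ],
          ),
      ],
    );
  }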

Functions

allocTensorShape(List<int> shape) Object
Allocates a nested list structure matching the given tensor shape.
barQuarterTurns(DeviceOrientation orientation) int
Quarter-turns (clockwise) to rotate a top-bar widget so it reads upright when the device is in landscape. Use with RotatedBox(quarterTurns: ...); a sketch follows the function list.
bgrBytesToRgbFloat32({required Uint8List bytes, required int totalPixels, Float32List? buffer}) Float32List
Converts BGR bytes to a flat Float32List with [0.0, 1.0] normalization.
bgrBytesToSignedFloat32({required Uint8List bytes, required int totalPixels, Float32List? buffer}) Float32List
Converts BGR bytes to a flat Float32List with [-1.0, 1.0] normalization.
boundsOf(Iterable<Offset> pts) Rect
Compute the axis-aligned bounding rect of a set of offsets.
clamp01(double v) double
Clamps v to the range [0.0, 1.0]. Returns 0.0 for NaN inputs.
clip(double v, double lo, double hi) double
Clamps v to the range [lo, hi].
collectOutputTensorInfo(Interpreter itp) Map<int, OutputTensorInfo>
Collects output tensor shapes (and their backing buffers) for an interpreter.
computeEmbeddingAlignment({required Point leftEye, required Point rightEye}) AlignedRoi
Computes alignment parameters for extracting a face crop suitable for embedding.
computeLetterboxParams({required int srcWidth, required int srcHeight, required int targetWidth, required int targetHeight, bool roundDimensions = true}) LetterboxParams
Computes letterbox parameters for resizing srcWidth × srcHeight to fit within targetWidth × targetHeight while preserving aspect ratio (see the sketch after this function list).
convertImageToTensor(Mat src, {required int outW, required int outH, Float32List? buffer}) ImageTensor
Converts a cv.Mat image to a normalized tensor with letterboxing.
coverFitScaleOffset(int sourceW, int sourceH, double viewW, double viewH) → ({double offsetX, double offsetY, double scale})
Cover-fit scale + offset for rendering a source region of size (sourceW, sourceH) into a viewport of size (viewW, viewH).
createNHWCTensor4D(int height, int width) List<List<List<List<double>>>>
Creates a pre-allocated [1][height][width][3] tensor structure.
cropFromRoiMat(Mat src, RectF roi) Mat
Crops a rectangular region from a cv.Mat using normalized coordinates.
detectionSize({required int width, required int height, required CameraFrameRotation? rotation, required int maxDim}) Size
Compute the final detection-image size used by overlay painters to map detector coordinates back onto the widget coordinate space.
drawBoundingBoxOutline({required Canvas canvas, required BoundingBox bbox, required double scaleX, required double scaleY, required double offsetX, required double offsetY, required Paint paint}) → void
Draw the axis-aligned outline of a BoundingBox transformed by a linear scale + offset. Use a stroked Paint for an outline, or a filled one to tint the interior.
drawLandmarkMarker(Canvas canvas, double x, double y, {double glowRadius = 8, double pointRadius = 5, double centerRadius = 2, Paint? glowPaint, Paint? pointPaint, Paint? centerPaint}) → void
Draw a standard "glow + point + center dot" triple-circle landmark marker at (x, y) in canvas coordinates.
drawSegmentationClassLabels(Canvas canvas, List<int> counts, List<double> sumX, List<double> sumY) → void
Draw multiclass segmentation labels at class centroids (one label per class index whose pixel count exceeds an internal threshold).
drawSkeletonConnections({required Canvas canvas, required List<Offset> scaledPoints, required List<(int, int)> connections, required Paint paint}) → void
Draw straight-line connections between pre-scaled landmark points.
extractAlignedSquare(Mat src, double cx, double cy, double size, double theta) Mat?
Extracts a rotated square region from a cv.Mat using OpenCV's warpAffine.
faceDetectionToRoi(RectF boundingBox, {double expandFraction = 0.6}) RectF
Converts a face detection bounding box to a square region of interest (ROI); a centre-expand sketch follows the function list.
fillNHWC4D(Float32List flat, List<List<List<List<double>>>> cache, int inH, int inW) → void
Fills an NHWC 4D tensor cache from a flat Float32List.
flattenDynamicTensor(Object? out) Float32List
Flattens an arbitrarily nested tensor to a flat Float32List.
imdecode(Uint8List buf, int flags, {Mat? dst}) Mat
Reads an image from a buffer in memory. If the buffer is too short or contains invalid data, the function returns an empty matrix. @param buf Input array or vector of bytes. @param flags The same flags as in cv::imread; see cv::ImreadModes.
maskValidRegion(SegmentationMask mask) → ({int x0, int x1, int y0, int y1})
Compute the valid (non-padding) region of a segmentation mask.
packYuv420({required int width, required int height, required YuvPlane y, required YuvPlane u, YuvPlane? v}) PackedYuv?
Packs a YUV420 camera frame into a single contiguous buffer suitable for native colour conversion (e.g. OpenCV's cvtColor with a COLOR_YUV2BGR_NV21 / COLOR_YUV2BGR_NV12 / COLOR_YUV2BGR_I420 code).
performanceLevel(int ms) → ({Color color, IconData icon, String label})
Classify detection-time in milliseconds into a display-friendly bucket (label, color, icon) for overlay status indicators.
prepareCameraFrame({required int width, required int height, required List<CameraPlane> planes, CameraFrameRotation? rotation, bool isBgra = true}) CameraFrame?
Prepare a CameraFrame descriptor from raw camera planes, for use with a detector package's detectFromCameraFrame(...) method.
prepareCameraFrameFromImage(Object cameraImage, {CameraFrameRotation? rotation, bool isBgra = true}) CameraFrame?
Convenience variant that accepts a camera plugin image object (e.g. CameraImage) directly and extracts its planes and dimensions.
rotationForFrame({required int width, required int height, required int sensorOrientation, required bool isFrontCamera, required DeviceOrientation deviceOrientation}) CameraFrameRotation?
Compute the rotation needed to present a camera frame upright to an on-device detection model, given the camera's sensor orientation and the device's current physical orientation.
sigmoid(double x) double
Sigmoid activation function.
sigmoidClipped(double x, {double limit = 80.0}) double
Sigmoid with input clipping to prevent overflow; a sketch follows the function list.
testClip(double v, double lo, double hi) double
Test-only access to clip for verifying value clamping behavior.
testCollectOutputTensorInfo(Interpreter itp) Map<int, OutputTensorInfo>
Test-only access to collectOutputTensorInfo for verifying output tensor collection.
testComputeClassProbabilities(Float32List rawOutput, int width, int height) Float32List
Test-only: exposes the private class-probability computation for unit tests.
testComputeFaceAlignment(Detection det, double imgW, double imgH) → ({double cx, double cy, double size, double theta})
Test-only: exposes the internal face alignment computation for unit tests.
testCreateInferenceLockRunner() Future<T> Function<T>(Future<T> fn())
Test-only: returns a fresh inference-lock run function for unit tests.
testDeserializeMask(Map<String, dynamic> map) SegmentationMask
Test-only: exposes the private mask-deserialization logic for unit tests.
testDetectionLetterboxRemoval(List<Detection> dets, List<double> padding) List<Detection>
Test-only: exposes the private letterbox-removal logic for unit tests.
testExpectedOutputChannels(SegmentationModel model) int
Test-only: exposes the private expected-output-channel mapping for unit tests.
testFindIrisCenterFromPoints(List<Point> irisPoints) Point
Test-only: exposes the internal iris-center computation for unit tests.
testInputHeightFor(SegmentationModel model) int
Test-only: exposes the private input-height mapping for unit tests.
testInputWidthFor(SegmentationModel model) int
Test-only: exposes the private input-width mapping for unit tests.
testModelFileFor(SegmentationModel model) String
Test-only: exposes the private model-file mapping for unit tests.
testNameFor(FaceDetectionModel m) String
Test-only: exposes the private model-name mapping for unit tests.
testNms(List<Detection> dets, double iouThresh, double scoreThresh) List<Detection>
Test-only: exposes the private weighted-NMS logic for unit tests.
testNormalizeEmbedding(Float32List embedding) Float32List
Test-only: exposes the private embedding-normalization logic for unit tests.
testOptsFor(FaceDetectionModel m) → SSDAnchorOptions
Test-only: exposes the private model-to-SSDAnchorOptions mapping for unit tests.
testSerializeMask(SegmentationMask mask, IsolateOutputFormat format, double binaryThreshold) Map<String, dynamic>
Test-only: exposes the private mask-serialization logic for unit tests.
testSigmoidClipped(double x, {double limit = _rawScoreLimit}) double
Test-only: exposes sigmoidClipped for unit tests.
testSsdGenerateAnchors(SSDAnchorOptions opts) Float32List
Test-only: flattens generated SSD anchors into a Float32List for unit tests.
testTransformIrisToAbsolute(List<List<double>> lmNorm, AlignedRoi roi, bool isRight) List<List<double>>
Test-only: exposes the private iris-to-absolute transform for unit tests.
testTransformMeshToAbsolute(List<List<double>> lmNorm, double cx, double cy, double size, double theta) List<Point>
Test-only: exposes the internal mesh-to-absolute transform for unit tests.
testUnpackLandmarks(Float32List flat, int inW, int inH, List<double> padding, {bool clamp = true}) List<List<double>>
Test-only: exposes the private landmark-unpacking logic for unit tests.
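barQuarterTurns (above) is documented for use with RotatedBox; a minimal sketch, assuming the package's top-level import path:

  import 'package:flutter/material.dart';
  import 'package:flutter/services.dart' show DeviceOrientation;
  import 'package:face_detection_tflite/face_detection_tflite.dart';

  /// Wraps a top-bar widget so it reads upright in the given orientation.
  Widget orientedTopBar(DeviceOrientation orientation, Widget bar) {
    return RotatedBox(
      quarterTurns: barQuarterTurns(orientation),
      child: bar,
    );
  }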
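computeLetterboxParams (above) performs the standard aspect-preserving "contain" fit with centred padding. A self-contained sketch of that arithmetic, not the package's actual implementation; the returned field names are illustrative rather than the real LetterboxParams members:

  import 'dart:math' as math;

  /// Contain-fit letterbox math: scale the source so it fits inside the
  /// target, then split the leftover space evenly on both sides.
  ({double scale, double padX, double padY}) letterboxSketch({
    required int srcWidth,
    required int srcHeight,
    required int targetWidth,
    required int targetHeight,
  }) {
    final scale = math.min(targetWidth / srcWidth, targetHeight / srcHeight);
    final padX = (targetWidth - srcWidth * scale) / 2;   // per-side horizontal padding
    final padY = (targetHeight - srcHeight * scale) / 2; // per-side vertical padding
    return (scale: scale, padX: padX, padY: padY);
  }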
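faceDetectionToRoi (above) turns a detection box into a square ROI. The sketch below shows a typical centre-expand computation under the assumption that expandFraction grows the box's longer side; the package's exact formula may differ, and RectF is approximated here with a plain record:

  /// Illustrative only: expand a normalized box to a square ROI around
  /// its centre, growing the longer side by `expandFraction`.
  ({double xMin, double yMin, double xMax, double yMax}) squareRoiSketch(
    ({double xMin, double yMin, double xMax, double yMax}) box, {
    double expandFraction = 0.6,
  }) {
    final cx = (box.xMin + box.xMax) / 2;
    final cy = (box.yMin + box.yMax) / 2;
    final w = box.xMax - box.xMin;
    final h = box.yMax - box.yMin;
    final half = (w > h ? w : h) * (1 + expandFraction) / 2;
    return (xMin: cx - half, yMin: cy - half, xMax: cx + half, yMax: cy + half);
  }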
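sigmoidClipped (above) is the logistic function with its input clamped so exp cannot overflow; a standalone sketch of the same idea, using the documented default limit of 80.0:

  import 'dart:math' as math;

  /// Logistic sigmoid with the raw input clamped to [-limit, limit] so
  /// math.exp never overflows for extreme raw scores.
  double sigmoidClippedSketch(double x, {double limit = 80.0}) {
    final clipped = x.clamp(-limit, limit).toDouble();
    return 1.0 / (1.0 + math.exp(-clipped));
  }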

Typedefs

CameraPlane = ({Uint8List bytes, int pixelStride, int rowStride})
A single camera frame plane exposed by a camera plugin (see the adapter sketch after these typedefs).
YuvPlane = ({Uint8List bytes, int pixelStride, int rowStride})
A single YUV plane exposed by a camera plugin, decoupled from any specific Flutter plugin's type (e.g. CameraImage.Plane).
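The CameraPlane record shape above maps directly onto the camera plugin's Plane type. A hedged sketch of that mapping for use with prepareCameraFrame (the prepareCameraFrameFromImage helper already exists for exactly this; the stride field mapping shown is an assumption, and bytesPerPixel can be null for some planes):

  import 'package:camera/camera.dart';
  import 'package:face_detection_tflite/face_detection_tflite.dart';

  /// Adapts a `camera` plugin CameraImage into the plane records this
  /// package expects, then builds a CameraFrame for detection.
  CameraFrame? frameFromCameraImage(
    CameraImage image, {
    CameraFrameRotation? rotation,
  }) {
    final planes = <CameraPlane>[
      for (final p in image.planes)
        (
          bytes: p.bytes,
          pixelStride: p.bytesPerPixel ?? 1, // assumed mapping
          rowStride: p.bytesPerRow,
        ),
    ];
    return prepareCameraFrame(
      width: image.width,
      height: image.height,
      planes: planes,
      rotation: rotation,
      isBgra: image.format.group == ImageFormatGroup.bgra8888,
    );
  }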

Exceptions / Errors

SegmentationException
Exception thrown by segmentation operations.