mediapipe_face_mesh 1.0.2

Flutter plugin delivering MediaPipe Face Mesh inference on Android and iOS, accepting RGBA or NV21 inputs, backed by an FFI core where C/C++ handles TFLite runtime, preprocessing, and execution.

mediapipe_face_mesh #

Flutter/FFI bindings around the MediaPipe Face Mesh CPU graph.
The plugin bundles the native binaries and a default model, so no extra setup is required,
and it exposes a simple API for running inference on single images or continuous camera streams.

  • CPU inference with the TensorFlow Lite C runtime (no GPU dependency).
  • Works with RGBA/BGRA buffers and Android NV21 camera frames.
  • ROI helpers (FaceMeshBox, NormalizedRect) to limit processing to a face.
  • Stream processor utilities to consume frames sequentially and deliver FaceMeshResult updates.

Usage #

flutter pub add mediapipe_face_mesh

create #

import 'package:mediapipe_face_mesh/mediapipe_face_mesh.dart';

final FaceMeshProcessor processor = await FaceMeshProcessor.create();

single image processing #

// Dispatch on the platform's native pixel format: NV21 camera frames on
// Android, BGRA frames on iOS. The _runFaceMeshOn... helpers wrap the
// processor call for each format.
if (Platform.isAndroid) {
  meshResult = _runFaceMeshOnAndroidNv21(
    mesh: _faceMeshProcessor,
    cameraImage: cameraImage,
    face: faces.first,
    rotationCompensationDegrees: rotationCompensation,
  );
} else if (Platform.isIOS) {
  meshResult = _runFaceMeshOnIosBgra(
    mesh: _faceMeshProcessor,
    cameraImage: cameraImage,
    face: faces.first,
    rotationCompensationDegrees: rotationCompensation,
  );
}
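For a standalone image (no camera), a single inference can be sketched as below. Note that the `process` method name and the `FaceMeshImage` constructor fields (`bytes`, `width`, `height`) are assumptions for illustration, as is the `landmarks` field on the result — consult the package's API reference for the exact signatures.

```dart
import 'dart:typed_data';

import 'package:mediapipe_face_mesh/mediapipe_face_mesh.dart';

Future<void> runOnce(Uint8List rgbaBytes, int width, int height) async {
  final FaceMeshProcessor processor = await FaceMeshProcessor.create();
  try {
    // Hypothetical constructor/method names -- check the API docs.
    final image = FaceMeshImage(bytes: rgbaBytes, width: width, height: height);
    final FaceMeshResult result = await processor.process(image);
    debugPrint('landmarks: ${result.landmarks.length}'); // assumed field
  } finally {
    await processor.close(); // release the native resources
  }
}
```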

streaming camera frames #

// Each platform gets its own frame stream; the stream processor consumes
// frames sequentially and emits a FaceMeshResult per processed frame.
if (Platform.isAndroid) {
  _nv21StreamController = StreamController<FaceMeshNv21Image>();
  _meshStreamSubscription = _faceMeshStreamProcessor
      .processNv21(
        _nv21StreamController!.stream,
        boxResolver: _resolveFaceMeshBoxForNv21,
        boxScale: 1.2,
        boxMakeSquare: true,
        rotationDegrees: rotationDegrees,
      )
      .listen(_handleMeshResult, onError: _handleMeshError);
} else if (Platform.isIOS) {
  _bgraStreamController = StreamController<FaceMeshImage>();
  _meshStreamSubscription = _faceMeshStreamProcessor
      .processImages(
        _bgraStreamController!.stream,
        boxResolver: _resolveFaceMeshBoxForBgra,
        boxScale: 1.2,
        boxMakeSquare: true,
        rotationDegrees: rotationDegrees,
      )
      .listen(_handleMeshResult, onError: _handleMeshError);
}
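The controllers above still need frames fed into them. On Android this can be sketched with the `camera` package, which delivers `CameraImage` objects via `startImageStream` (with `ImageFormatGroup.nv21` the frame arrives in a single plane). The `FaceMeshNv21Image` constructor fields shown here are assumptions — check the package's API reference for the real ones.

```dart
import 'package:camera/camera.dart';

// Sketch: forward Android camera frames into the NV21 stream controller
// created in the snippet above.
void startForwardingFrames(CameraController camera) {
  camera.startImageStream((CameraImage frame) {
    _nv21StreamController?.add(
      FaceMeshNv21Image(
        bytes: frame.planes[0].bytes, // hypothetical field names
        width: frame.width,
        height: frame.height,
      ),
    );
  });
}
```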

Example #

The example demonstrates loading an asset into a FaceMeshImage, running a single inference, and drawing the resulting landmarks.
For a camera-based example, see https://github.com/cornpip/flutter_vision_ai_demos.git, which streams camera frames instead of using an asset.

Model asset #

The plugin ships with assets/models/mediapipe_face_mesh.tflite, taken from the Face Landmark model listed in Google’s official collection: https://github.com/google-ai-edge/mediapipe/blob/master/docs/solutions/models.md.

detail #

.create parameter #

  • minDetectionConfidence: threshold for the initial face detector. Lowering it reduces missed detections but may increase false positives (default 0.5).
  • minTrackingConfidence: threshold for keeping an existing face track alive. Higher values make tracking stricter but can drop faces sooner (default 0.5).
  • threads: number of CPU threads used by TensorFlow Lite. Increase it to speed up inference on multi-core devices, keeping thermal/power trade-offs in mind (default 2).
  • enableSmoothing: toggles MediaPipe's temporal smoothing between frames. Keeping it true (default) reduces jitter but adds inertia; set false for per-frame responsiveness when you don't reuse tracking context.
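Putting the parameters above together, a tuned call to create() might look like this (parameter names are taken from the list above, assuming they are named parameters of create(); the values are illustrative):

```dart
final FaceMeshProcessor processor = await FaceMeshProcessor.create(
  minDetectionConfidence: 0.5, // initial face-detector threshold
  minTrackingConfidence: 0.5,  // threshold for keeping a face track alive
  threads: 4,                  // more threads for multi-core devices
  enableSmoothing: true,       // temporal smoothing between frames
);
```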

Always remember to call close() on the processor when you are done.

Publisher: cornpip.dev (verified)



License: unknown

Dependencies: ffi, flutter, plugin_platform_interface
