Google's ML Kit Face Detection for Flutter


A Flutter plugin to use Google's ML Kit Face Detection to detect faces in an image, identify key facial features, and get the contours of detected faces.

PLEASE READ THIS before continuing or posting a new issue:

  • Google's ML Kit was built only for mobile platforms: iOS and Android apps.

  • This plugin is not sponsored or maintained by Google. The authors are developers excited about machine learning who wanted to expose Google's native APIs to Flutter.

  • Google's ML Kit APIs are only developed natively for iOS and Android. This plugin uses Flutter Platform Channels as explained here.

    Messages are passed between the client (the app/plugin) and the host (platform) using platform channels, as illustrated in the diagram in Flutter's platform channels documentation.

    Messages and responses are passed asynchronously, to ensure the user interface remains responsive. To read more about platform channels go here.

    Because this plugin uses platform channels, no machine learning processing is done in Flutter/Dart. All calls are passed to the native platform using MethodChannel on Android and FlutterMethodChannel on iOS, and are executed by Google's native APIs. Think of this plugin as a bridge between your app and Google's native ML Kit APIs: it only forwards the call, and the processing is done by Google's API. It is important to understand this when debugging errors in your ML model and/or app. A hypothetical sketch of such a call is shown after this list.

  • Since the plugin uses platform channels, you may encounter issues with the native API. Before submitting a new issue, identify its source. Run Google's native iOS and/or Android example apps and make sure the issue is not reproducible there. If you can reproduce the issue in their apps, report it to Google; the authors of this plugin do not have access to the source code of the native APIs, so the report needs to go to them. If their example apps work fine but you still have an issue with this plugin, look through our closed and open issues. If you cannot find anything that helps, open an issue and provide enough detail. Be patient; someone from the community will eventually help you.
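
As an illustration of that flow only, here is a minimal sketch of a Dart-to-native call over a platform channel. The channel and method names below are hypothetical and are not the plugin's internal API.

import 'package:flutter/services.dart';

// Hypothetical channel name, for illustration only; this is not the
// channel used by the plugin.
const MethodChannel _channel = MethodChannel('example/face_detection');

Future<Object?> detectFacesNatively(String imagePath) {
  // Dart only sends a message; the native side (MethodChannel on Android,
  // FlutterMethodChannel on iOS) runs Google's ML Kit and replies
  // asynchronously with the result.
  return _channel.invokeMethod('processImage', {'path': imagePath});
}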



Requirements

iOS

  • Minimum iOS Deployment Target: 12.0
  • Xcode 13.2.1 or newer
  • Swift 5
  • ML Kit does not support 32-bit architectures (i386 and armv7); it only supports 64-bit architectures (x86_64 and arm64). Check this list to see if your device has the required device capabilities. More info here.

Since ML Kit does not support 32-bit architectures (i386 and armv7), you need to exclude armv7 architectures in Xcode in order to run flutter build ios or flutter build ipa. More info here.

Go to Project > Runner > Build Settings > Excluded Architectures > Any SDK > armv7

Your Podfile should look like this:

platform :ios, '12.0'  # or newer version


# add this line:
$iOSVersion = '12.0'  # or newer version

post_install do |installer|
  # add these lines:
  installer.pods_project.build_configurations.each do |config|
    config.build_settings["EXCLUDED_ARCHS[sdk=*]"] = "armv7"
    config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
  end

  installer.pods_project.targets.each do |target|
    # add these lines:
    target.build_configurations.each do |config|
      config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
    end
  end
end

Notice that the minimum IPHONEOS_DEPLOYMENT_TARGET is 12.0; you can set it to something newer, but not older.


Android

  • minSdkVersion: 21
  • targetSdkVersion: 33
  • compileSdkVersion: 33


Face Detection

Create an instance of InputImage

Create an instance of InputImage as explained here.

final InputImage inputImage;
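
For instance, if the image is already available as a file on the device, an InputImage can be created from its path. This is a minimal sketch; InputImage comes from google_mlkit_commons (re-exported by this plugin in recent versions), and other constructors such as InputImage.fromBytes for camera streams are covered in the documentation linked above. The file path below is a placeholder.

import 'dart:io';
import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';

// Minimal sketch: build an InputImage from an image file already on disk.
// `imageFile` is a placeholder; in a real app it might come from image_picker.
final File imageFile = File('/path/to/image.jpg');
final InputImage inputImage = InputImage.fromFilePath(imageFile.path);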

Create an instance of FaceDetector

final options = FaceDetectorOptions();
final faceDetector = FaceDetector(options: options);
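
The default options leave the optional features disabled. If you want the landmarks, classification probabilities, or tracking IDs used in the processing example below, enable them when constructing the options. A sketch, assuming the named parameters currently exposed by FaceDetectorOptions (enableLandmarks, enableClassification, enableTracking, performanceMode):

final options = FaceDetectorOptions(
  enableLandmarks: true,      // positions of mouth, ears, eyes, cheeks, nose
  enableClassification: true, // smiling and eyes-open probabilities
  enableTracking: true,       // stable IDs for faces across frames
  performanceMode: FaceDetectorMode.accurate,
);
final faceDetector = FaceDetector(options: options);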

Process image

final List<Face> faces = await faceDetector.processImage(inputImage);

for (Face face in faces) {
  final Rect boundingBox = face.boundingBox;

  final double? rotX = face.headEulerAngleX; // Head is tilted up and down rotX degrees
  final double? rotY = face.headEulerAngleY; // Head is rotated to the right rotY degrees
  final double? rotZ = face.headEulerAngleZ; // Head is tilted sideways rotZ degrees

  // If landmark detection was enabled with FaceDetectorOptions (mouth, ears,
  // eyes, cheeks, and nose available):
  final FaceLandmark? leftEar = face.landmarks[FaceLandmarkType.leftEar];
  if (leftEar != null) {
    final Point<int> leftEarPos = leftEar.position;
  }

  // If classification was enabled with FaceDetectorOptions:
  if (face.smilingProbability != null) {
    final double? smileProb = face.smilingProbability;
  }

  // If face tracking was enabled with FaceDetectorOptions:
  if (face.trackingId != null) {
    final int? id = face.trackingId;
  }
}
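
If contour detection was enabled with FaceDetectorOptions (enableContours: true), the contours of each detected face are exposed as lists of points. A sketch, assuming the contours map and the points list exposed by FaceContour:

for (Face face in faces) {
  // The full face outline; other FaceContourType values cover eyes, lips, etc.
  final FaceContour? faceContour = face.contours[FaceContourType.face];
  if (faceContour != null) {
    final List<Point<int>> facePoints = faceContour.points;
  }
}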

Release resources with close()
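
When the detector is no longer needed, for example in your widget's dispose method, call close() to free the underlying native resources:

faceDetector.close();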


Example app

Find the example app here.


Contributions are welcome. If you run into a problem, look at existing issues first; if you cannot find anything related to your problem, open a new issue. Create an issue before opening a pull request for non-trivial fixes. For trivial fixes, open a pull request directly.