google_mlkit_face_mesh_detection 0.1.0
A Flutter plugin to use Google's ML Kit Face Mesh Detection.
Google's ML Kit Face Mesh Detection for Flutter #
A Flutter plugin to use Google's ML Kit Face Mesh Detection. It generates, in real time, a high-accuracy face mesh of 468 3D points for selfie-like images.
Faces should be within ~2 meters (~7 feet) of the camera, so that the faces are sufficiently large for optimal face mesh recognition. In general, the larger the face, the better the face mesh recognition.
If you want to detect faces further than ~2 meters (~7 feet) away from the camera, please see google_mlkit_face_detection.
Note that the face should be facing the camera with at least half of the face visible. Any large object between the face and the camera may result in lower accuracy.
NOTE: Google's Face Mesh Detection API is still in Beta and only supports Android. Stay tuned for updates on their website.
PLEASE READ THIS before continuing or posting a new issue:
- Google's ML Kit was built only for mobile platforms: iOS and Android apps.
- This plugin is not sponsored or maintained by Google. The authors are developers excited about Machine Learning who wanted to expose Google's native APIs to Flutter.
- Google's ML Kit APIs are only developed natively for iOS and Android. This plugin uses Flutter Platform Channels as explained here. Messages are passed between the client (the app/plugin) and host (platform) using platform channels. Messages and responses are passed asynchronously, to ensure the user interface remains responsive. To read more about platform channels go here. Because this plugin uses platform channels, no Machine Learning processing is done in Flutter/Dart; all calls are passed to the native platform using `MethodChannel` on Android and `FlutterMethodChannel` on iOS, and executed using Google's native APIs. Think of this plugin as a bridge between your app and Google's native ML Kit APIs. This plugin only passes the call to the native API, and the processing is done by Google's API. It is important that you understand this concept when it comes to debugging errors for your ML model and/or app.
- Since the plugin uses platform channels, you may encounter issues with the native API. Before submitting a new issue, identify the source of the issue. You can run both iOS and/or Android native example apps by Google and make sure that the issue is not reproducible with their native examples. If you can reproduce the issue in their apps, then report the issue to Google. The authors do not have access to the source code of their native APIs, so you need to report the issue to them. If you find that their example apps are okay and you still have an issue using this plugin, then look at our closed and open issues. If you cannot find anything that can help you, then report the issue and provide enough details. Be patient; someone from the community will eventually help you.
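To illustrate the bridging mechanism described above, a platform-channel call in Dart looks roughly like the sketch below. The channel and method names here are hypothetical, chosen only for illustration; the channel names this plugin actually uses are internal implementation details.

```dart
import 'package:flutter/services.dart';

// Hypothetical channel name, for illustration only.
const _channel = MethodChannel('example/face_mesh_detector');

Future<List<dynamic>> detectMeshes(Map<String, dynamic> imageData) async {
  // The Dart side only serializes the request and awaits the reply;
  // all Machine Learning processing happens in the native implementation.
  final result = await _channel.invokeMethod('processImage', imageData);
  return result as List<dynamic>;
}
```

This is why errors raised by the detector usually originate on the native side, not in Dart.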
Requirements #
iOS #
- Minimum iOS Deployment Target: 12.0
- Xcode 13.2.1 or newer
- Swift 5
- ML Kit does not support 32-bit architectures (i386 and armv7). ML Kit does support 64-bit architectures (x86_64 and arm64). Check this list to see if your device has the required device capabilities. More info here.
Since ML Kit does not support 32-bit architectures (i386 and armv7), you need to exclude armv7 architectures in Xcode in order to run `flutter build ios` or `flutter build ipa`. More info here.
Go to Project > Runner > Building Settings > Excluded Architectures > Any SDK > armv7
Your Podfile should look like this:
```ruby
platform :ios, '12.0'  # or newer version

...

# add this line:
$iOSVersion = '12.0'  # or newer version

post_install do |installer|
  # add these lines:
  installer.pods_project.build_configurations.each do |config|
    config.build_settings["EXCLUDED_ARCHS[sdk=*]"] = "armv7"
    config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
  end

  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)

    # add these lines:
    target.build_configurations.each do |config|
      if Gem::Version.new($iOSVersion) > Gem::Version.new(config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'])
        config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
      end
    end
  end
end
```
Notice that the minimum `IPHONEOS_DEPLOYMENT_TARGET` is 12.0; you can set it to something newer, but not older.
Android #
- minSdkVersion: 21
- targetSdkVersion: 33
- compileSdkVersion: 34
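These values belong in your app's `android/app/build.gradle`. A minimal sketch (the surrounding blocks come from your own project):

```groovy
android {
    compileSdkVersion 34

    defaultConfig {
        minSdkVersion 21
        targetSdkVersion 33
    }
}
```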
Usage #
Face Mesh Detection #
Create an instance of `InputImage` as explained here.

```dart
final InputImage inputImage;
```
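For example, assuming the image comes from a file on the device (one of several `InputImage` constructors provided by `google_mlkit_commons`; the path below is a placeholder):

```dart
import 'package:google_mlkit_commons/google_mlkit_commons.dart';

// Create an InputImage from a file path; InputImage.fromFile and
// InputImage.fromBytes are alternatives for other image sources.
final inputImage = InputImage.fromFilePath('/path/to/selfie.jpg');
```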
Create an instance of `FaceMeshDetector`

```dart
final meshDetector = FaceMeshDetector(option: FaceMeshDetectorOptions.faceMesh);
```
Process image

```dart
final List<FaceMesh> meshes = await meshDetector.processImage(inputImage);

for (FaceMesh mesh in meshes) {
  final boundingBox = mesh.boundingBox;
  final points = mesh.points;
  final triangles = mesh.triangles;
  final contour = mesh.contours[FaceMeshContourType.faceOval];
}
```
Release resources with `close()`

```dart
meshDetector.close();
```
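Putting the steps above together, a minimal end-to-end sketch (the image path is a placeholder; `try`/`finally` ensures the detector is closed even if processing throws):

```dart
import 'package:google_mlkit_commons/google_mlkit_commons.dart';
import 'package:google_mlkit_face_mesh_detection/google_mlkit_face_mesh_detection.dart';

Future<void> detectFaceMeshes(String imagePath) async {
  final inputImage = InputImage.fromFilePath(imagePath);
  final meshDetector =
      FaceMeshDetector(option: FaceMeshDetectorOptions.faceMesh);
  try {
    final meshes = await meshDetector.processImage(inputImage);
    for (final mesh in meshes) {
      // Each mesh carries 468 3D points plus triangle and contour info.
      print('Found mesh with ${mesh.points.length} points '
          'inside ${mesh.boundingBox}');
    }
  } finally {
    // Always release the native detector's resources when done.
    meshDetector.close();
  }
}
```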
Example app #
Find the example app here.
Contributing #
Contributions are welcome. In case of any problems, look at existing issues; if you cannot find anything related to your problem, then open an issue. Create an issue before opening a pull request for non-trivial fixes. In case of trivial fixes, open a pull request directly.