google_mlkit_commons #
A Flutter plugin with common methods used in google_ml_kit.
PLEASE READ THIS before continuing or posting a new issue:

- Google's ML Kit was built only for mobile platforms: iOS and Android apps. Web and other platforms are not supported; you can request support for those platforms from Google in their repo.

- This plugin is not sponsored or maintained by Google. The authors are developers excited about Machine Learning who wanted to expose Google's native APIs to Flutter.

- Google's ML Kit APIs are only developed natively for iOS and Android. This plugin uses Flutter Platform Channels as explained here. Messages are passed between the client (the app/plugin) and host (platform) using platform channels. Messages and responses are passed asynchronously, to ensure the user interface remains responsive. To read more about platform channels go here. Because this plugin uses platform channels, no Machine Learning processing is done in Flutter/Dart; all calls are passed to the native platform using `MethodChannel` on Android and `FlutterMethodChannel` on iOS, and executed using Google's native APIs. Think of this plugin as a bridge between your app and Google's native ML Kit APIs: it only passes the call to the native API, and the processing is done by Google's API. It is important to understand this concept when debugging errors in your ML model and/or app. A minimal sketch of such a call follows this list.

- Since the plugin uses platform channels, you may encounter issues with the native API. Before submitting a new issue, identify the source of the issue. You can run both the iOS and Android native example apps by Google and make sure that the issue is not reproducible with their native examples. If you can reproduce the issue in their apps, then report the issue to Google; the authors do not have access to the source code of their native APIs, so you need to report it to them. If their example apps work fine but you still have an issue using this plugin, then look at our closed and open issues. If you cannot find anything that can help you, then report the issue and provide enough details. Be patient, someone from the community will eventually help you.
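To make the bridge concrete, here is a minimal sketch of how such a call crosses a platform channel. The channel and method names below are hypothetical, chosen for illustration only; they are not the identifiers this plugin actually uses:

```dart
import 'package:flutter/services.dart';

// Hypothetical channel and method names, for illustration only;
// they are not the identifiers used internally by this plugin.
const MethodChannel _channel = MethodChannel('example/ml_kit');

Future<Object?> processImage(String filePath) async {
  // The Dart side only serializes the arguments and sends them to the
  // host platform; the actual ML processing happens in native code.
  return _channel.invokeMethod<Object>(
    'processImage',
    <String, dynamic>{'path': filePath},
  );
}
```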
Requirements #
iOS #
- Minimum iOS Deployment Target: 12.0
- Xcode 15 or newer
- Swift 5
- ML Kit does not support 32-bit architectures (i386 and armv7); it supports only the 64-bit architectures x86_64 and arm64. Check this list to see if your device has the required device capabilities. More info here.

Since ML Kit does not support 32-bit architectures (i386 and armv7), you need to exclude armv7 in Xcode in order to run `flutter build ios` or `flutter build ipa`. More info here.

Go to Project > Runner > Build Settings > Excluded Architectures > Any SDK > armv7
Your Podfile should look like this:
```ruby
platform :ios, '12.0'  # or newer version

...

# add this line:
$iOSVersion = '12.0'  # or newer version

post_install do |installer|
  # add these lines:
  installer.pods_project.build_configurations.each do |config|
    config.build_settings["EXCLUDED_ARCHS[sdk=*]"] = "armv7"
    config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
  end

  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)

    # add these lines:
    target.build_configurations.each do |config|
      if Gem::Version.new($iOSVersion) > Gem::Version.new(config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'])
        config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
      end
    end
  end
end
```
Notice that the minimum `IPHONEOS_DEPLOYMENT_TARGET` is 12.0; you can set it to something newer, but not older.
Android #
- minSdkVersion: 21
- targetSdkVersion: 33
- compileSdkVersion: 34
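These values typically live in the app-level `android/app/build.gradle`. A minimal sketch showing only the relevant parts (your file will contain more configuration than this):

```groovy
// android/app/build.gradle -- only the ML Kit-relevant settings shown
android {
    compileSdkVersion 34

    defaultConfig {
        minSdkVersion 21
        targetSdkVersion 33
    }
}
```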
Usage #
Creating an InputImage #
From path:

```dart
final inputImage = InputImage.fromFilePath(filePath);
```
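For example, you could obtain a file path with the image_picker plugin (an assumption for illustration; any valid file path works):

```dart
import 'package:google_mlkit_commons/google_mlkit_commons.dart';
import 'package:image_picker/image_picker.dart';

Future<InputImage?> pickInputImage() async {
  // Let the user pick a photo from the gallery.
  final XFile? file =
      await ImagePicker().pickImage(source: ImageSource.gallery);
  if (file == null) return null;
  return InputImage.fromFilePath(file.path);
}
```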
From file:

```dart
final inputImage = InputImage.fromFile(file);
```
From bytes:

```dart
final inputImage = InputImage.fromBytes(bytes: bytes, metadata: metadata);
```
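When creating an `InputImage` from bytes you must describe the buffer yourself via `InputImageMetadata`. A minimal sketch, assuming you already have an NV21 buffer from an Android camera frame; the concrete numbers are placeholders:

```dart
import 'dart:typed_data';
import 'dart:ui' show Size;

import 'package:google_mlkit_commons/google_mlkit_commons.dart';

InputImage inputImageFromNv21(Uint8List bytes) {
  final metadata = InputImageMetadata(
    size: const Size(1280, 720), // frame size in pixels (placeholder)
    rotation: InputImageRotation.rotation90deg, // used only on Android
    format: InputImageFormat.nv21, // nv21 (Android) or bgra8888 (iOS)
    bytesPerRow: 1280, // used only on iOS
  );
  return InputImage.fromBytes(bytes: bytes, metadata: metadata);
}
```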
If you are using the Camera plugin, make sure to configure your `CameraController` to only use `ImageFormatGroup.nv21` for Android and `ImageFormatGroup.bgra8888` for iOS.
Notice that the image rotation is computed differently for iOS and Android. On Android, image rotation is used to convert the `InputImage` from Dart to Java; on iOS it is not used in the conversion from Dart to Obj-C. However, on both platforms image rotation and `camera.lensDirection` can be used to compensate x and y coordinates on a canvas.
```dart
import 'dart:io';
import 'dart:ui' show Size;

import 'package:camera/camera.dart';
import 'package:flutter/services.dart';
import 'package:google_mlkit_commons/google_mlkit_commons.dart';

late final CameraDescription camera; // your camera instance; assign before use
final controller = CameraController(
  camera,
  ResolutionPreset.max,
  enableAudio: false,
  imageFormatGroup: Platform.isAndroid
      ? ImageFormatGroup.nv21 // for Android
      : ImageFormatGroup.bgra8888, // for iOS
);

final _orientations = {
  DeviceOrientation.portraitUp: 0,
  DeviceOrientation.landscapeLeft: 90,
  DeviceOrientation.portraitDown: 180,
  DeviceOrientation.landscapeRight: 270,
};

InputImage? _inputImageFromCameraImage(CameraImage image) {
  // get image rotation
  // it is used in Android to convert the InputImage from Dart to Java
  // `rotation` is not used in iOS to convert the InputImage from Dart to Obj-C
  // in both platforms `rotation` and `camera.lensDirection` can be used to
  // compensate `x` and `y` coordinates on a canvas
  final sensorOrientation = camera.sensorOrientation;
  InputImageRotation? rotation;
  if (Platform.isIOS) {
    rotation = InputImageRotationValue.fromRawValue(sensorOrientation);
  } else if (Platform.isAndroid) {
    var rotationCompensation =
        _orientations[controller.value.deviceOrientation];
    if (rotationCompensation == null) return null;
    if (camera.lensDirection == CameraLensDirection.front) {
      // front-facing
      rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else {
      // back-facing
      rotationCompensation =
          (sensorOrientation - rotationCompensation + 360) % 360;
    }
    rotation = InputImageRotationValue.fromRawValue(rotationCompensation);
  }
  if (rotation == null) return null;

  // get image format
  final format = InputImageFormatValue.fromRawValue(image.format.raw);
  // validate format depending on platform
  // only supported formats:
  // * nv21 for Android
  // * bgra8888 for iOS
  if (format == null ||
      (Platform.isAndroid && format != InputImageFormat.nv21) ||
      (Platform.isIOS && format != InputImageFormat.bgra8888)) return null;

  // since format is constrained to nv21 or bgra8888, both have only one plane
  if (image.planes.length != 1) return null;
  final plane = image.planes.first;

  // compose InputImage using bytes
  return InputImage.fromBytes(
    bytes: plane.bytes,
    metadata: InputImageMetadata(
      size: Size(image.width.toDouble(), image.height.toDouble()),
      rotation: rotation, // used only in Android
      format: format, // used only in iOS
      bytesPerRow: plane.bytesPerRow, // used only in iOS
    ),
  );
}

late CameraImage image; // your image from the camera/controller image stream
final inputImage = _inputImageFromCameraImage(image);
```
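To produce those frames, you can listen to the camera image stream. A minimal sketch using `controller` and `_inputImageFromCameraImage` from the snippet above (error handling and frame throttling omitted):

```dart
Future<void> startLiveFeed() async {
  await controller.initialize();
  await controller.startImageStream((CameraImage image) {
    final inputImage = _inputImageFromCameraImage(image);
    if (inputImage == null) return;
    // Hand inputImage to a detector from one of the google_ml_kit
    // feature plugins (e.g. text recognition, face detection).
  });
}
```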
Example app #
Find the example app here.
Contributing #
Contributions are welcome. If you run into any problems, look at the existing issues; if you cannot find anything related to your problem, open a new issue. For non-trivial fixes, create an issue before opening a pull request; for trivial fixes, open a pull request directly.