firebase_ml_vision 0.9.2

ML Kit Vision for Firebase #


A Flutter plugin to use the ML Kit Vision for Firebase API.

For Flutter plugins for other Firebase products, see FlutterFire.md.

Note: This plugin is still under development, and some APIs might not be available yet. Feedback and Pull Requests are most welcome!

Usage #

To use this plugin, add firebase_ml_vision as a dependency in your pubspec.yaml file. You must also configure Firebase for each platform project: Android and iOS (see the example folder or https://codelabs.developers.google.com/codelabs/flutter-firebase/#4 for step-by-step details).

Android #

If you're using the on-device ImageLabeler, include the latest matching ML Kit: Image Labeling dependency in your app-level build.gradle file.

android {
    dependencies {
        // ...

        api 'com.google.firebase:firebase-ml-vision-image-label-model:17.0.2'
    }
}

If you're using the on-device Face Contour Detection, include the latest matching ML Kit: Face Detection Model dependency in your app-level build.gradle file.

android {
    dependencies {
        // ...

        api 'com.google.firebase:firebase-ml-vision-face-model:17.0.2'
    }
}

If you receive compilation errors, try an earlier version of these ML Kit dependencies.

Optional but recommended: If you use the on-device API, configure your app to automatically download the ML model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:

<application ...>
  ...
  <meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="ocr" />
  <!-- To use multiple models: android:value="ocr,label,barcode,face" -->
</application>

iOS #

Versions 0.7.0+ use the latest ML Kit for Firebase version, which requires a minimum deployment target of iOS 9.0. Add the line platform :ios, '9.0' to your iOS project's Podfile.

If you're using one of the on-device APIs, include the corresponding ML Kit library model in your Podfile. Then run pod update in a terminal within the same directory as your Podfile.

pod 'Firebase/MLVisionBarcodeModel'
pod 'Firebase/MLVisionFaceModel'
pod 'Firebase/MLVisionLabelModel'
pod 'Firebase/MLVisionTextModel'

Using an ML Vision Detector #

1. Create a FirebaseVisionImage. #

Create a FirebaseVisionImage object from your image. To create a FirebaseVisionImage from an image File object:

final File imageFile = getImageFile();
final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(imageFile);
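If you have an image path rather than a File object, the 0.9.x API also provides a fromFilePath factory (check the API reference for your installed version):

```dart
// Hypothetical path; replace with a real image location on the device.
final FirebaseVisionImage visionImage =
    FirebaseVisionImage.fromFilePath('/path/to/image.jpg');
```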

2. Create an instance of a detector. #

Get an instance of a FirebaseVisionDetector.

final BarcodeDetector barcodeDetector = FirebaseVision.instance.barcodeDetector();
final ImageLabeler cloudLabeler = FirebaseVision.instance.cloudImageLabeler();
final FaceDetector faceDetector = FirebaseVision.instance.faceDetector();
final ImageLabeler labeler = FirebaseVision.instance.imageLabeler();
final TextRecognizer textRecognizer = FirebaseVision.instance.textRecognizer();

You can also configure all detectors, except TextRecognizer, with desired options.

final ImageLabeler labeler = FirebaseVision.instance.imageLabeler(
  ImageLabelerOptions(confidenceThreshold: 0.75),
);
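The face-extraction step below assumes landmark detection, classification, and tracking were enabled. A sketch of the corresponding FaceDetectorOptions as of 0.9.x (option names taken from the API reference; verify against your installed version):

```dart
final FaceDetector faceDetector = FirebaseVision.instance.faceDetector(
  FaceDetectorOptions(
    enableLandmarks: true,      // required for face.getLandmark()
    enableClassification: true, // required for face.smilingProbability
    enableTracking: true,       // required for face.trackingId
    mode: FaceDetectorMode.accurate,
  ),
);
```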

3. Call detectInImage() or processImage() with visionImage. #

final List<Barcode> barcodes = await barcodeDetector.detectInImage(visionImage);
final List<ImageLabel> cloudLabels = await cloudLabeler.processImage(visionImage);
final List<Face> faces = await faceDetector.processImage(visionImage);
final List<ImageLabel> labels = await labeler.processImage(visionImage);
final VisionText visionText = await textRecognizer.processImage(visionImage);

4. Extract data. #

a. Extract barcodes.

for (Barcode barcode in barcodes) {
  final Rectangle<int> boundingBox = barcode.boundingBox;
  final List<Point<int>> cornerPoints = barcode.cornerPoints;

  final String rawValue = barcode.rawValue;

  final BarcodeValueType valueType = barcode.valueType;

  // See API reference for complete list of supported types
  switch (valueType) {
    case BarcodeValueType.wifi:
      final String ssid = barcode.wifi.ssid;
      final String password = barcode.wifi.password;
      final BarcodeWiFiEncryptionType type = barcode.wifi.encryptionType;
      break;
    case BarcodeValueType.url:
      final String title = barcode.url.title;
      final String url = barcode.url.url;
      break;
  }
}

b. Extract faces.

for (Face face in faces) {
  final Rectangle<int> boundingBox = face.boundingBox;

  final double rotY = face.headEulerAngleY; // Head is rotated to the right rotY degrees
  final double rotZ = face.headEulerAngleZ; // Head is tilted sideways rotZ degrees

  // If landmark detection was enabled with FaceDetectorOptions (mouth, ears,
  // eyes, cheeks, and nose available):
  final FaceLandmark leftEar = face.getLandmark(FaceLandmarkType.leftEar);
  if (leftEar != null) {
    final Point<double> leftEarPos = leftEar.position;
  }

  // If classification was enabled with FaceDetectorOptions:
  if (face.smilingProbability != null) {
    final double smileProb = face.smilingProbability;
  }

  // If face tracking was enabled with FaceDetectorOptions:
  if (face.trackingId != null) {
    final int id = face.trackingId;
  }
}

c. Extract labels.

for (ImageLabel label in labels) {
  final String text = label.text;
  final String entityId = label.entityId;
  final double confidence = label.confidence;
}

d. Extract text.

String text = visionText.text;
for (TextBlock block in visionText.blocks) {
  final Rect boundingBox = block.boundingBox;
  final List<Offset> cornerPoints = block.cornerPoints;
  final String text = block.text;
  final List<RecognizedLanguage> languages = block.recognizedLanguages;

  for (TextLine line in block.lines) {
    // Same getters as TextBlock
    for (TextElement element in line.elements) {
      // Same getters as TextBlock
    }
  }
}

5. Release resources with close(). #

barcodeDetector.close();
cloudLabeler.close();
faceDetector.close();
labeler.close();
textRecognizer.close();
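Putting steps 1–5 together, a minimal end-to-end text recognition flow might look like this (a sketch with error handling omitted; it assumes Firebase is already configured for the app):

```dart
import 'dart:io';

import 'package:firebase_ml_vision/firebase_ml_vision.dart';

Future<String> recognizeText(File imageFile) async {
  // 1. Wrap the image file.
  final FirebaseVisionImage visionImage =
      FirebaseVisionImage.fromFile(imageFile);

  // 2. Create a detector.
  final TextRecognizer textRecognizer =
      FirebaseVision.instance.textRecognizer();

  try {
    // 3. Process the image.
    final VisionText visionText =
        await textRecognizer.processImage(visionImage);

    // 4. Extract the recognized text.
    return visionText.text;
  } finally {
    // 5. Release native resources held by the detector.
    textRecognizer.close();
  }
}
```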

Getting Started #

See the example directory for a complete sample app using ML Kit Vision for Firebase.

0.9.2 #

  • Add detection of FaceContours when using the FaceDetector. See README.md for more information.

0.9.1+1 #

  • Update google-services Android gradle plugin to 4.3.0 in documentation and examples.

0.9.1 #

  • Add support for cloud text recognizer.

0.9.0+3 #

  • Automatically use version from pubspec.yaml when reporting usage to Firebase.

0.9.0+2 #

  • Fix bug causing memory leak with iOS images.

0.9.0+1 #

  • Update example app Podfile to match latest Flutter template and support new Xcode build system.

0.9.0 #

  • Breaking Change Add capability to release resources held by detectors with close() method. You should now call detector.close() when a detector will no longer be used.

0.8.0+3 #

  • Add missing template type parameter to invokeMethod calls.
  • Bump minimum Flutter version to 1.5.0.
  • Replace invokeMethod with invokeMapMethod wherever necessary.

0.8.0+2 #

  • Fix crash when passing contact info from barcode.

0.8.0+1 #

0.8.0 #

  • Update Android dependencies to latest.

0.7.0+2 #

  • Fix analyzer warnings about const Rect in tests.

0.7.0+1 #

  • Update README to match latest version.

0.7.0 #

  • Breaking Change Unified and enhanced on-device and cloud image-labeling API. iOS now requires a minimum deployment target of 9.0; add platform :ios, '9.0' to your Podfile. Updated to the latest version of Firebase/MLVision on iOS; please run pod update in the directory containing your iOS project Podfile.
      • Label renamed to ImageLabel.
      • LabelDetector renamed to ImageLabeler.
      • Removed CloudLabelDetector and replaced it with a cloud ImageLabeler.

0.6.0+2 #

  • Update README.md
  • Fix crash when receiving barcode urls on iOS.

0.6.0+1 #

  • Log messages about automatic configuration of the default app are now less confusing.

0.6.0 #

  • Breaking Change Removed on-device model dependencies from plugin. Android now requires adding the on-device label detector dependency manually. iOS now requires adding the on-device barcode/face/label/text detector dependencies manually. See the README.md for more details. https://pub.dartlang.org/packages/firebase_ml_vision#-readme-tab-

0.5.1+2 #

  • Fixes bug where image file needs to be rotated.

0.5.1+1 #

  • Remove categories.

0.5.1 #

  • iOS now handles non-planar buffers from FirebaseVisionImage.fromBytes().

0.5.0+1 #

  • Fixes FIRAnalyticsVersionMismatch compilation error on iOS. Please run pod update in directory containing Podfile.

0.5.0 #

  • Breaking Change Change Rectangle<int> to Rect in Text/Face/Barcode results.

  • Breaking Change Change Point<int>/Point<double> to Offset in Text/Face/Barcode results.

  • Fixed bug where there were no corner points for VisionText or Barcode on iOS.

0.4.0+1 #

  • Log a more detailed warning at build time about the previous AndroidX migration.

0.4.0 #

  • Breaking Change Removal of base detector class FirebaseVisionDetector.
  • Breaking Change Removal of TextRecognizer.detectInImage(). Please use TextRecognizer.processImage().
  • Breaking Change Changed FaceDetector.detectInImage() to FaceDetector.processImage().

0.3.0 #

  • Breaking change. Migrate from the deprecated original Android Support Library to AndroidX. This shouldn't result in any functional changes, but it requires any Android apps using this plugin to also migrate if they're using the original support library.

0.2.1 #

  • Add capability to create image from bytes.

0.2.0+2 #

  • Fix bug with empty text object.
  • Fix bug with crash from passing nil to map.

0.2.0+1 #

  • Bump Android dependencies to latest.

0.2.0 #

  • Breaking Change Update TextDetector to TextRecognizer for android mlkit '17.0.0' and firebase-ios-sdk '5.6.0'.
  • Added CloudLabelDetector.

0.1.2 #

  • Fix example imports so that publishing will be warning-free.

0.1.1 #

  • Set pod version of Firebase/MLVision to avoid breaking changes.

0.1.0 #

  • Breaking Change Add Barcode, Face, and Label on-device detectors.
  • Remove close method.

0.0.2 #

  • Bump Android and Firebase dependency versions.

0.0.1 #

  • Initial release with text detector.

example/README.md

firebase_ml_vision_example #

Demonstrates how to use the firebase_ml_vision plugin.

Usage #

Important: If using on-device detectors on iOS, see the plugin README.md for instructions on including the ML model pods in the example project.

This example uses the image_picker plugin to get images from the device gallery. If using an iOS device, you will have to configure your project with the correct permissions, as described under iOS configuration here.

Getting Started #

For help getting started with Flutter, view our online documentation.

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:


dependencies:
  firebase_ml_vision: ^0.9.2

2. Install it

You can install packages from the command line:

with Flutter:


$ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

3. Import it

Now in your Dart code, you can use:


import 'package:firebase_ml_vision/firebase_ml_vision.dart';
  
Scores

  • Popularity (how popular the package is relative to other packages): 96
  • Health (code health derived from static analysis): 100
  • Maintenance (how tidy and up-to-date the package is): 100
  • Overall (weighted score of the above): 98

We analyzed this package on Aug 21, 2019, and provided a score, details, and suggestions below. Analysis was completed using:

  • Dart: 2.4.0
  • pana: 0.12.19
  • Flutter: 1.7.8+hotfix.4

Platforms

Detected platforms: Flutter

References Flutter, and has no conflicting libraries.

Dependencies

| Package | Constraint | Resolved | Available |
|---|---|---|---|
| **Direct dependencies** | | | |
| Dart SDK | >=2.0.0-dev.28.0 <3.0.0 | | |
| flutter | | 0.0.0 | |
| **Transitive dependencies** | | | |
| collection | | 1.14.11 | 1.14.12 |
| meta | | 1.1.6 | 1.1.7 |
| sky_engine | | 0.0.99 | |
| typed_data | | 1.1.6 | |
| vector_math | | 2.0.8 | |
| **Dev dependencies** | | | |
| firebase_core | ^0.4.0 | | |
| flutter_driver | | | |
| flutter_test | | | |
| image_picker | ^0.5.0 | | |
| path | ^1.6.2 | | |
| path_provider | ^0.5.0+1 | | |
| test | any | | |