Google ML Kit for Firebase

A Flutter plugin to use the Google ML Kit for Firebase API.

For Flutter plugins for other Firebase products, see the FlutterFire repository.

Note: This plugin is still under development, and some APIs might not be available yet. Feedback and Pull Requests are most welcome!


To use this plugin, add firebase_ml_vision as a dependency in your pubspec.yaml file. You must also configure Firebase for each platform project: Android and iOS (see the example folder for step-by-step details).
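For example, the dependency entry in pubspec.yaml might look like the following (the version constraint shown is illustrative; check pub.dev for the current release):

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Hypothetical version constraint; use the latest published version.
  firebase_ml_vision: ^0.9.0
```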


Optional but recommended: If you use the on-device API, configure your app to automatically download the ML model to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:

<application ...>
  ...
  <meta-data
      android:name="com.google.firebase.ml.vision.DEPENDENCIES"
      android:value="ocr" />
  <!-- To use multiple models: android:value="ocr,model2,model3" -->
</application>

On-device Text Recognition

To use the on-device text recognition model, run the text detector as described below:

  1. Create a FirebaseVisionImage object from your image.

To create a FirebaseVisionImage from an image File object:

final File imageFile = getImageFile();
final FirebaseVisionImage visionImage = FirebaseVisionImage.fromFile(imageFile);
  2. Get an instance of TextDetector and pass visionImage to detectInImage().

final TextDetector detector = FirebaseVision.instance.getTextDetector();
final List<TextBlock> blocks = await detector.detectInImage(visionImage);

  3. Extract text and text locations from blocks of recognized text.

for (TextBlock block in blocks) {
  final Rectangle<num> boundingBox = block.boundingBox;
  final List<Point<num>> cornerPoints = block.cornerPoints;
  final String text = block.text;

  for (TextLine line in block.lines) {
    // ...

    for (TextElement element in line.elements) {
      // ...
    }
  }
}

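Putting the steps above together, an end-to-end sketch might look like the following. getImageFile() is a hypothetical helper (e.g. backed by image_picker); the text, boundingBox, and lines/elements accessors are those shown in the steps above.

```dart
import 'dart:io';

import 'package:firebase_ml_vision/firebase_ml_vision.dart';

// Hypothetical helper: returns the image to analyze, e.g. a file
// picked with the image_picker plugin.
File getImageFile() => File('path/to/image.jpg');

Future<void> recognizeText() async {
  // 1. Wrap the image file for the ML Kit APIs.
  final FirebaseVisionImage visionImage =
      FirebaseVisionImage.fromFile(getImageFile());

  // 2. Run the on-device text detector.
  final TextDetector detector = FirebaseVision.instance.getTextDetector();
  final List<TextBlock> blocks = await detector.detectInImage(visionImage);

  // 3. Walk the recognized blocks, lines, and elements.
  for (TextBlock block in blocks) {
    print('Block "${block.text}" at ${block.boundingBox}');
    for (TextLine line in block.lines) {
      for (TextElement element in line.elements) {
        print('  Element "${element.text}" at ${element.boundingBox}');
      }
    }
  }
}
```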
Getting Started

See the example directory for a complete sample app using Google ML Kit for Firebase.