tflite 1.1.1


tflite #

A Flutter plugin for accessing the TensorFlow Lite API. Supports image classification, object detection (SSD and YOLO), Pix2Pix, Deeplab and PoseNet on both iOS and Android.

Breaking changes #

Since 1.1.0:

  1. iOS TensorFlow Lite library is upgraded from TensorFlowLite 1.x to TensorFlowLiteObjC 2.x. Changes to native code are denoted with TFLITE2.

Since 1.0.0:

  1. Updated to TensorFlow Lite API v1.12.0.
  2. No longer accepts the parameters inputSize and numChannels; they are retrieved from the input tensor.
  3. numThreads is moved to Tflite.loadModel (see the sketch below).
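
For example, code that previously passed numThreads to one of the run calls now passes it when loading the model (a minimal sketch; the asset names are placeholders):

String res = await Tflite.loadModel(
  model: "assets/mobilenet_v1_1.0_224.tflite",
  labels: "assets/labels.txt",
  numThreads: 2, // moved here from the run* methods
);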

Installation #

Add tflite as a dependency in your pubspec.yaml file.

Android #

In android/app/build.gradle, add the following settings in the android block so that the model files are not compressed:

    aaptOptions {
        noCompress 'tflite'
        noCompress 'lite'
    }

iOS #

Solutions to build errors on iOS:

  • 'vector' file not found

    Open ios/Runner.xcworkspace in Xcode, select Runner > Targets > Runner > Build Settings, search for Compile Sources As, and change the value to Objective-C++.

  • 'tensorflow/lite/kernels/register.h' file not found

    The plugin assumes the TensorFlow header files are located at "tensorflow/lite/kernels".

    However, for earlier versions of TensorFlow the header path is "tensorflow/contrib/lite/kernels".

    Use CONTRIB_PATH to toggle the path: uncomment //#define CONTRIB_PATH here: https://github.com/shaqian/flutter_tflite/blob/master/ios/Classes/TflitePlugin.mm#L1

Usage #

  1. Create an assets folder and place your label file and model file in it. In pubspec.yaml add:
  assets:
   - assets/labels.txt
   - assets/mobilenet_v1_1.0_224.tflite
  2. Import the library:
import 'package:tflite/tflite.dart';
  3. Load the model and labels:
String res = await Tflite.loadModel(
  model: "assets/mobilenet_v1_1.0_224.tflite",
  labels: "assets/labels.txt",
  numThreads: 1, // defaults to 1
  isAsset: true, // defaults to true, set to false to load resources outside assets
  useGpuDelegate: false // defaults to false, set to true to use GPU delegate
);
  4. See the section for the respective model below. A minimal end-to-end flow is sketched after these steps.

  5. Release resources:

await Tflite.close();
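
Putting the steps together, a minimal end-to-end flow looks like the sketch below (it uses the same MobileNet assets as above; error handling is omitted):

import 'package:tflite/tflite.dart';

Future<void> classifyImage(String imagePath) async {
  // 1. Load the model and labels once (e.g. in initState).
  await Tflite.loadModel(
    model: "assets/mobilenet_v1_1.0_224.tflite",
    labels: "assets/labels.txt",
  );

  // 2. Run inference on an image file.
  var recognitions = await Tflite.runModelOnImage(path: imagePath);
  print(recognitions);

  // 3. Release native resources when done.
  await Tflite.close();
}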

GPU Delegate #

When using the GPU delegate, refer to the TensorFlow Lite GPU delegate documentation for the release-mode settings needed to get the best performance, and enable the delegate in Tflite.loadModel as sketched below.
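
A minimal sketch of enabling the delegate when loading a model (same MobileNet assets as above):

String res = await Tflite.loadModel(
  model: "assets/mobilenet_v1_1.0_224.tflite",
  labels: "assets/labels.txt",
  useGpuDelegate: true, // off by default; enables GPU acceleration
);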

Image Classification #

  • Output format (a rendering sketch appears at the end of this section):
{
  index: 0,
  label: "person",
  confidence: 0.629
}
  • Run on image:
var recognitions = await Tflite.runModelOnImage(
  path: filepath,   // required
  imageMean: 0.0,   // defaults to 117.0
  imageStd: 255.0,  // defaults to 1.0
  numResults: 2,    // defaults to 5
  threshold: 0.2,   // defaults to 0.1
  asynch: true      // defaults to true
);
  • Run on binary:
var recognitions = await Tflite.runModelOnBinary(
  binary: imageToByteListFloat32(image, 224, 127.5, 127.5),// required
  numResults: 6,    // defaults to 5
  threshold: 0.05,  // defaults to 0.1
  asynch: true      // defaults to true
);

Uint8List imageToByteListFloat32(
    img.Image image, int inputSize, double mean, double std) {
  var convertedBytes = Float32List(1 * inputSize * inputSize * 3);
  var buffer = Float32List.view(convertedBytes.buffer);
  int pixelIndex = 0;
  for (var i = 0; i < inputSize; i++) {
    for (var j = 0; j < inputSize; j++) {
      var pixel = image.getPixel(j, i);
      buffer[pixelIndex++] = (img.getRed(pixel) - mean) / std;
      buffer[pixelIndex++] = (img.getGreen(pixel) - mean) / std;
      buffer[pixelIndex++] = (img.getBlue(pixel) - mean) / std;
    }
  }
  return convertedBytes.buffer.asUint8List();
}

Uint8List imageToByteListUint8(img.Image image, int inputSize) {
  var convertedBytes = Uint8List(1 * inputSize * inputSize * 3);
  var buffer = Uint8List.view(convertedBytes.buffer);
  int pixelIndex = 0;
  for (var i = 0; i < inputSize; i++) {
    for (var j = 0; j < inputSize; j++) {
      var pixel = image.getPixel(j, i);
      buffer[pixelIndex++] = img.getRed(pixel);
      buffer[pixelIndex++] = img.getGreen(pixel);
      buffer[pixelIndex++] = img.getBlue(pixel);
    }
  }
  return convertedBytes.buffer.asUint8List();
}
  • Run on image stream (video frame):

Works with camera plugin 4.0.0. Video format: (iOS) kCVPixelFormatType_32BGRA, (Android) YUV_420_888.

var recognitions = await Tflite.runModelOnFrame(
  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),// required
  imageHeight: img.height,
  imageWidth: img.width,
  imageMean: 127.5,   // defaults to 127.5
  imageStd: 127.5,    // defaults to 127.5
  rotation: 90,       // defaults to 90, Android only
  numResults: 2,      // defaults to 5
  threshold: 0.1,     // defaults to 0.1
  asynch: true        // defaults to true
);
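
Each recognition is a map with index, label and confidence fields, so the result of any of the calls above can be turned into widgets with a sketch like this (flutter/material is assumed to be imported):

var recognitions = await Tflite.runModelOnImage(path: filepath);
// Each entry looks like {index: 0, label: "person", confidence: 0.629}.
List<Widget> results = recognitions.map<Widget>((res) {
  return Text(
    "${res["label"]}: ${(res["confidence"] * 100).toStringAsFixed(1)}%",
  );
}).toList();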

Object Detection #

  • Output format:

x, y, w, h are in the range [0, 1]. Scale x, w by the width and y, h by the height of the image; a sketch that maps these values to widget coordinates appears at the end of this section.

{
  detectedClass: "hot dog",
  confidenceInClass: 0.123,
  rect: {
    x: 0.15,
    y: 0.33,
    w: 0.80,
    h: 0.27
  }
}

SSD MobileNet:

  • Run on image:
var recognitions = await Tflite.detectObjectOnImage(
  path: filepath,       // required
  model: "SSDMobileNet",
  imageMean: 127.5,     
  imageStd: 127.5,      
  threshold: 0.4,       // defaults to 0.1
  numResultsPerClass: 2,// defaults to 5
  asynch: true          // defaults to true
);
  • Run on binary:
var recognitions = await Tflite.detectObjectOnBinary(
  binary: imageToByteListUint8(resizedImage, 300), // required
  model: "SSDMobileNet",  
  threshold: 0.4,                                  // defaults to 0.1
  numResultsPerClass: 2,                           // defaults to 5
  asynch: true                                     // defaults to true
);
  • Run on image stream (video frame):

Works with camera plugin 4.0.0. Video format: (iOS) kCVPixelFormatType_32BGRA, (Android) YUV_420_888.

var recognitions = await Tflite.detectObjectOnFrame(
  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),// required
  model: "SSDMobileNet",  
  imageHeight: img.height,
  imageWidth: img.width,
  imageMean: 127.5,   // defaults to 127.5
  imageStd: 127.5,    // defaults to 127.5
  rotation: 90,       // defaults to 90, Android only
  numResults: 2,      // defaults to 5
  threshold: 0.1,     // defaults to 0.1
  asynch: true        // defaults to true
);

Tiny YOLOv2:

  • Run on image:
var recognitions = await Tflite.detectObjectOnImage(
  path: filepath,       // required
  model: "YOLO",      
  imageMean: 0.0,       
  imageStd: 255.0,      
  threshold: 0.3,       // defaults to 0.1
  numResultsPerClass: 2,// defaults to 5
  anchors: anchors,     // defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]
  blockSize: 32,        // defaults to 32
  numBoxesPerBlock: 5,  // defaults to 5
  asynch: true          // defaults to true
);
  • Run on binary:
var recognitions = await Tflite.detectObjectOnBinary(
  binary: imageToByteListFloat32(resizedImage, 416, 0.0, 255.0), // required
  model: "YOLO",  
  threshold: 0.3,       // defaults to 0.1
  numResultsPerClass: 2,// defaults to 5
  anchors: anchors,     // defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]
  blockSize: 32,        // defaults to 32
  numBoxesPerBlock: 5,  // defaults to 5
  asynch: true          // defaults to true
);
  • Run on image stream (video frame):

Works with camera plugin 4.0.0. Video format: (iOS) kCVPixelFormatType_32BGRA, (Android) YUV_420_888.

var recognitions = await Tflite.detectObjectOnFrame(
  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),// required
  model: "YOLO",  
  imageHeight: img.height,
  imageWidth: img.width,
  imageMean: 0,         // defaults to 127.5
  imageStd: 255.0,      // defaults to 127.5
  numResults: 2,        // defaults to 5
  threshold: 0.1,       // defaults to 0.1
  numResultsPerClass: 2,// defaults to 5
  anchors: anchors,     // defaults to [0.57273,0.677385,1.87446,2.06253,3.33843,5.47434,7.88282,3.52778,9.77052,9.16828]
  blockSize: 32,        // defaults to 32
  numBoxesPerBlock: 5,  // defaults to 5
  asynch: true          // defaults to true
);
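
To draw the boxes, the normalized rect values can be scaled to the size of the displayed image, as in this sketch (displayWidth and displayHeight are assumed to be the rendered image size; recognitions comes from one of the detect calls above):

List<Widget> boxes = recognitions.map<Widget>((re) {
  return Positioned(
    left: re["rect"]["x"] * displayWidth,
    top: re["rect"]["y"] * displayHeight,
    width: re["rect"]["w"] * displayWidth,
    height: re["rect"]["h"] * displayHeight,
    child: Text(
      "${re["detectedClass"]} ${(re["confidenceInClass"] * 100).toStringAsFixed(0)}%",
    ),
  );
}).toList();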

Pix2Pix #

Thanks to RP from Green Appers

  • Output format:

    The output of Pix2Pix inference is of type Uint8List. Depending on the outputType used, the output is:

    • (if outputType is png) the byte array of a PNG image

    • (otherwise) the byte array of the raw output

    A sketch that displays the PNG output appears at the end of this section.

  • Run on image:

var result = await runPix2PixOnImage(
  path: filepath,       // required
  imageMean: 0.0,       // defaults to 0.0
  imageStd: 255.0,      // defaults to 255.0
  asynch: true      // defaults to true
);
  • Run on binary:
var result = await runPix2PixOnBinary(
  binary: binary,       // required
  asynch: true      // defaults to true
);
  • Run on image stream (video frame):
var result = await runPix2PixOnFrame(
  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),// required
  imageHeight: img.height, // defaults to 1280
  imageWidth: img.width,   // defaults to 720
  imageMean: 127.5,   // defaults to 0.0
  imageStd: 127.5,    // defaults to 255.0
  rotation: 90,       // defaults to 90, Android only
  asynch: true        // defaults to true
);
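
Because the result is a Uint8List, the default "png" output can be displayed directly with Image.memory (a sketch; the fully qualified Tflite.runPix2PixOnImage call and a flutter/material import are assumed):

var result = await Tflite.runPix2PixOnImage(path: filepath);
// With outputType "png" the bytes form a complete PNG image.
Widget output = Image.memory(result);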

Deeplab #

Thanks to RP from see-- for Android implementation.

  • Output format:

    The output of Deeplab inference is of type Uint8List. Depending on the outputType used, the output is:

    • (if outputType is png) the byte array of a PNG image

    • (otherwise) the byte array of the r, g, b, a values of the pixels

    A sketch that overlays the PNG output on the original image appears at the end of this section.

  • Run on image:

var result = await runSegmentationOnImage(
  path: filepath,     // required
  imageMean: 0.0,     // defaults to 0.0
  imageStd: 255.0,    // defaults to 255.0
  labelColors: [...], // defaults to https://github.com/shaqian/flutter_tflite/blob/master/lib/tflite.dart#L219
  outputType: "png",  // defaults to "png"
  asynch: true        // defaults to true
);
  • Run on binary:
var result = await runSegmentationOnBinary(
  binary: binary,     // required
  labelColors: [...], // defaults to https://github.com/shaqian/flutter_tflite/blob/master/lib/tflite.dart#L219
  outputType: "png",  // defaults to "png"
  asynch: true        // defaults to true
);
  • Run on image stream (video frame):
var result = await runSegmentationOnFrame(
  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),// required
  imageHeight: img.height, // defaults to 1280
  imageWidth: img.width,   // defaults to 720
  imageMean: 127.5,        // defaults to 0.0
  imageStd: 127.5,         // defaults to 255.0
  rotation: 90,            // defaults to 90, Android only
  labelColors: [...],      // defaults to https://github.com/shaqian/flutter_tflite/blob/master/lib/tflite.dart#L219
  outputType: "png",       // defaults to "png"
  asynch: true             // defaults to true
);
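
With the default "png" output, the segmentation mask can be overlaid on the original photo, mirroring the example app further below (a sketch; dart:io and flutter/material imports are assumed, and filepath is the same image passed to the call):

var result = await Tflite.runSegmentationOnImage(path: filepath);
// Show the mask on top of the original image at reduced opacity.
Widget overlay = Container(
  decoration: BoxDecoration(
    image: DecorationImage(image: MemoryImage(result), fit: BoxFit.fill),
  ),
  child: Opacity(opacity: 0.3, child: Image.file(File(filepath))),
);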

PoseNet #

Model is from StackOverflow thread.

  • Output format:

x, y are in the range [0, 1]. Scale x by the width and y by the height of the image; a sketch that renders the keypoints appears at the end of this section.

[ // array of poses/persons
  { // pose #1
    score: 0.6324902,
    keypoints: {
      0: {
        x: 0.250,
        y: 0.125,
        part: nose,
        score: 0.9971070
      },
      1: {
        x: 0.230,
        y: 0.105,
        part: leftEye,
        score: 0.9978438
      }
      ......
    }
  },
  { // pose #2
    score: 0.32534285,
    keypoints: {
      0: {
        x: 0.402,
        y: 0.538,
        part: nose,
        score: 0.8798978
      },
      1: {
        x: 0.380,
        y: 0.513,
        part: leftEye,
        score: 0.7090239
      }
      ......
    }
  },
  ......
]
  • Run on image:
var result = await runPoseNetOnImage(
  path: filepath,     // required
  imageMean: 125.0,   // defaults to 125.0
  imageStd: 125.0,    // defaults to 125.0
  numResults: 2,      // defaults to 5
  threshold: 0.7,     // defaults to 0.5
  nmsRadius: 10,      // defaults to 20
  asynch: true        // defaults to true
);
  • Run on binary:
var result = await runPoseNetOnBinary(
  binary: binary,     // required
  numResults: 2,      // defaults to 5
  threshold: 0.7,     // defaults to 0.5
  nmsRadius: 10,      // defaults to 20
  asynch: true        // defaults to true
);
  • Run on image stream (video frame):
var result = await runPoseNetOnFrame(
  bytesList: img.planes.map((plane) {return plane.bytes;}).toList(),// required
  imageHeight: img.height, // defaults to 1280
  imageWidth: img.width,   // defaults to 720
  imageMean: 125.0,        // defaults to 125.0
  imageStd: 125.0,         // defaults to 125.0
  rotation: 90,            // defaults to 90, Android only
  numResults: 2,           // defaults to 5
  threshold: 0.7,          // defaults to 0.5
  nmsRadius: 10,           // defaults to 20
  asynch: true             // defaults to true
);
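
Each entry in the result is a pose with a score and a keypoints map; the normalized coordinates can be scaled and rendered like this sketch (based on renderKeypoints in the example app below; displayWidth and displayHeight are assumed to be the rendered image size):

var result = await Tflite.runPoseNetOnImage(path: filepath);
List<Widget> markers = [];
for (var pose in result) {
  pose["keypoints"].values.forEach((k) {
    markers.add(Positioned(
      left: k["x"] * displayWidth - 6,
      top: k["y"] * displayHeight - 6,
      child: Text("● ${k["part"]}"),
    ));
  });
}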

Example #

Prediction in Static Images #

Refer to the example.

Real-time detection #

Refer to flutter_realtime_detection.

1.1.1 #

  • Fix error: ';' expected on Android

1.1.0 #

  • Upgrade to TensorFlowLiteObjC 2.2.
  • Add support for GPU delegate.
  • Fix label size for YOLO.

1.0.6 #

  • Add support for resources outside packaged assets.
  • Upgrade version of Flutter SDK and Android Studio.
  • Set minimum SDK version to 2.1.0.

1.0.5 #

  • Set compileSdkVersion to 28, fixing build error "Execution failed for task ':tflite:verifyReleaseResources'."
  • Add notes about CONTRIB_PATH.
  • Update pubspec.yaml for Flutter 1.10.0 and later.
  • Update example app to use image plugin 2.1.4.

1.0.4 #

  • Add PoseNet support

1.0.3 #

  • Add an asynch option to offload the TfLite run from the UI thread
  • Add Deeplab support

1.0.2 #

  • Add pix2pix support
  • Make number of detections dynamic in Android

1.0.1 #

  • Add detectObjectOnBinary
  • Add runModelOnFrame
  • Add detectObjectOnFrame

1.0.0 #

  • Support Object Detection with SSD MobileNet and Tiny Yolov2.
  • Updated to TensorFlow Lite API v1.12.0.
  • No longer accepts parameter inputSize and numChannels. They will be retrieved from input tensor.
  • numThreads is moved to Tflite.loadModel.

0.0.5 #

  • Support byte list: runModelOnBinary

0.0.4 #

  • Support Swift based project

0.0.3 #

  • Pass error message in channel in Android.
  • Use non hard coded label size in iOS.

0.0.2 #

  • Fixed link.

0.0.1 #

  • Initial release.

example/lib/main.dart

import 'dart:async';
import 'dart:io';
import 'dart:math';
import 'dart:typed_data';
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:image/image.dart' as img;

import 'package:tflite/tflite.dart';
import 'package:image_picker/image_picker.dart';

void main() => runApp(new App());

const String mobile = "MobileNet";
const String ssd = "SSD MobileNet";
const String yolo = "Tiny YOLOv2";
const String deeplab = "DeepLab";
const String posenet = "PoseNet";

class App extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: MyApp(),
    );
  }
}

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => new _MyAppState();
}

class _MyAppState extends State<MyApp> {
  File _image;
  List _recognitions;
  String _model = mobile;
  double _imageHeight;
  double _imageWidth;
  bool _busy = false;

  Future predictImagePicker() async {
    var image = await ImagePicker.pickImage(source: ImageSource.gallery);
    if (image == null) return;
    setState(() {
      _busy = true;
    });
    predictImage(image);
  }

  Future predictImage(File image) async {
    if (image == null) return;

    switch (_model) {
      case yolo:
        await yolov2Tiny(image);
        break;
      case ssd:
        await ssdMobileNet(image);
        break;
      case deeplab:
        await segmentMobileNet(image);
        break;
      case posenet:
        await poseNet(image);
        break;
      default:
        await recognizeImage(image);
      // await recognizeImageBinary(image);
    }

    new FileImage(image)
        .resolve(new ImageConfiguration())
        .addListener(ImageStreamListener((ImageInfo info, bool _) {
      setState(() {
        _imageHeight = info.image.height.toDouble();
        _imageWidth = info.image.width.toDouble();
      });
    }));

    setState(() {
      _image = image;
      _busy = false;
    });
  }

  @override
  void initState() {
    super.initState();

    _busy = true;

    loadModel().then((val) {
      setState(() {
        _busy = false;
      });
    });
  }

  Future loadModel() async {
    Tflite.close();
    try {
      String res;
      switch (_model) {
        case yolo:
          res = await Tflite.loadModel(
            model: "assets/yolov2_tiny.tflite",
            labels: "assets/yolov2_tiny.txt",
            // useGpuDelegate: true,
          );
          break;
        case ssd:
          res = await Tflite.loadModel(
            model: "assets/ssd_mobilenet.tflite",
            labels: "assets/ssd_mobilenet.txt",
            // useGpuDelegate: true,
          );
          break;
        case deeplab:
          res = await Tflite.loadModel(
            model: "assets/deeplabv3_257_mv_gpu.tflite",
            labels: "assets/deeplabv3_257_mv_gpu.txt",
            // useGpuDelegate: true,
          );
          break;
        case posenet:
          res = await Tflite.loadModel(
            model: "assets/posenet_mv1_075_float_from_checkpoints.tflite",
            // useGpuDelegate: true,
          );
          break;
        default:
          res = await Tflite.loadModel(
            model: "assets/mobilenet_v1_1.0_224.tflite",
            labels: "assets/mobilenet_v1_1.0_224.txt",
            // useGpuDelegate: true,
          );
      }
      print(res);
    } on PlatformException {
      print('Failed to load model.');
    }
  }

  Uint8List imageToByteListFloat32(
      img.Image image, int inputSize, double mean, double std) {
    var convertedBytes = Float32List(1 * inputSize * inputSize * 3);
    var buffer = Float32List.view(convertedBytes.buffer);
    int pixelIndex = 0;
    for (var i = 0; i < inputSize; i++) {
      for (var j = 0; j < inputSize; j++) {
        var pixel = image.getPixel(j, i);
        buffer[pixelIndex++] = (img.getRed(pixel) - mean) / std;
        buffer[pixelIndex++] = (img.getGreen(pixel) - mean) / std;
        buffer[pixelIndex++] = (img.getBlue(pixel) - mean) / std;
      }
    }
    return convertedBytes.buffer.asUint8List();
  }

  Uint8List imageToByteListUint8(img.Image image, int inputSize) {
    var convertedBytes = Uint8List(1 * inputSize * inputSize * 3);
    var buffer = Uint8List.view(convertedBytes.buffer);
    int pixelIndex = 0;
    for (var i = 0; i < inputSize; i++) {
      for (var j = 0; j < inputSize; j++) {
        var pixel = image.getPixel(j, i);
        buffer[pixelIndex++] = img.getRed(pixel);
        buffer[pixelIndex++] = img.getGreen(pixel);
        buffer[pixelIndex++] = img.getBlue(pixel);
      }
    }
    return convertedBytes.buffer.asUint8List();
  }

  Future recognizeImage(File image) async {
    int startTime = new DateTime.now().millisecondsSinceEpoch;
    var recognitions = await Tflite.runModelOnImage(
      path: image.path,
      numResults: 6,
      threshold: 0.05,
      imageMean: 127.5,
      imageStd: 127.5,
    );
    setState(() {
      _recognitions = recognitions;
    });
    int endTime = new DateTime.now().millisecondsSinceEpoch;
    print("Inference took ${endTime - startTime}ms");
  }

  Future recognizeImageBinary(File image) async {
    int startTime = new DateTime.now().millisecondsSinceEpoch;
    var imageBytes = (await rootBundle.load(image.path)).buffer;
    img.Image oriImage = img.decodeJpg(imageBytes.asUint8List());
    img.Image resizedImage = img.copyResize(oriImage, height: 224, width: 224);
    var recognitions = await Tflite.runModelOnBinary(
      binary: imageToByteListFloat32(resizedImage, 224, 127.5, 127.5),
      numResults: 6,
      threshold: 0.05,
    );
    setState(() {
      _recognitions = recognitions;
    });
    int endTime = new DateTime.now().millisecondsSinceEpoch;
    print("Inference took ${endTime - startTime}ms");
  }

  Future yolov2Tiny(File image) async {
    int startTime = new DateTime.now().millisecondsSinceEpoch;
    var recognitions = await Tflite.detectObjectOnImage(
      path: image.path,
      model: "YOLO",
      threshold: 0.3,
      imageMean: 0.0,
      imageStd: 255.0,
      numResultsPerClass: 1,
    );
    // var imageBytes = (await rootBundle.load(image.path)).buffer;
    // img.Image oriImage = img.decodeJpg(imageBytes.asUint8List());
    // img.Image resizedImage = img.copyResize(oriImage, 416, 416);
    // var recognitions = await Tflite.detectObjectOnBinary(
    //   binary: imageToByteListFloat32(resizedImage, 416, 0.0, 255.0),
    //   model: "YOLO",
    //   threshold: 0.3,
    //   numResultsPerClass: 1,
    // );
    setState(() {
      _recognitions = recognitions;
    });
    int endTime = new DateTime.now().millisecondsSinceEpoch;
    print("Inference took ${endTime - startTime}ms");
  }

  Future ssdMobileNet(File image) async {
    int startTime = new DateTime.now().millisecondsSinceEpoch;
    var recognitions = await Tflite.detectObjectOnImage(
      path: image.path,
      numResultsPerClass: 1,
    );
    // var imageBytes = (await rootBundle.load(image.path)).buffer;
    // img.Image oriImage = img.decodeJpg(imageBytes.asUint8List());
    // img.Image resizedImage = img.copyResize(oriImage, 300, 300);
    // var recognitions = await Tflite.detectObjectOnBinary(
    //   binary: imageToByteListUint8(resizedImage, 300),
    //   numResultsPerClass: 1,
    // );
    setState(() {
      _recognitions = recognitions;
    });
    int endTime = new DateTime.now().millisecondsSinceEpoch;
    print("Inference took ${endTime - startTime}ms");
  }

  Future segmentMobileNet(File image) async {
    int startTime = new DateTime.now().millisecondsSinceEpoch;
    var recognitions = await Tflite.runSegmentationOnImage(
      path: image.path,
      imageMean: 127.5,
      imageStd: 127.5,
    );

    setState(() {
      _recognitions = recognitions;
    });
    int endTime = new DateTime.now().millisecondsSinceEpoch;
    print("Inference took ${endTime - startTime}");
  }

  Future poseNet(File image) async {
    int startTime = new DateTime.now().millisecondsSinceEpoch;
    var recognitions = await Tflite.runPoseNetOnImage(
      path: image.path,
      numResults: 2,
    );

    print(recognitions);

    setState(() {
      _recognitions = recognitions;
    });
    int endTime = new DateTime.now().millisecondsSinceEpoch;
    print("Inference took ${endTime - startTime}ms");
  }

  onSelect(model) async {
    setState(() {
      _busy = true;
      _model = model;
      _recognitions = null;
    });
    await loadModel();

    if (_image != null)
      predictImage(_image);
    else
      setState(() {
        _busy = false;
      });
  }

  List<Widget> renderBoxes(Size screen) {
    if (_recognitions == null) return [];
    if (_imageHeight == null || _imageWidth == null) return [];

    double factorX = screen.width;
    double factorY = _imageHeight / _imageWidth * screen.width;
    Color blue = Color.fromRGBO(37, 213, 253, 1.0);
    return _recognitions.map((re) {
      return Positioned(
        left: re["rect"]["x"] * factorX,
        top: re["rect"]["y"] * factorY,
        width: re["rect"]["w"] * factorX,
        height: re["rect"]["h"] * factorY,
        child: Container(
          decoration: BoxDecoration(
            borderRadius: BorderRadius.all(Radius.circular(8.0)),
            border: Border.all(
              color: blue,
              width: 2,
            ),
          ),
          child: Text(
            "${re["detectedClass"]} ${(re["confidenceInClass"] * 100).toStringAsFixed(0)}%",
            style: TextStyle(
              background: Paint()..color = blue,
              color: Colors.white,
              fontSize: 12.0,
            ),
          ),
        ),
      );
    }).toList();
  }

  List<Widget> renderKeypoints(Size screen) {
    if (_recognitions == null) return [];
    if (_imageHeight == null || _imageWidth == null) return [];

    double factorX = screen.width;
    double factorY = _imageHeight / _imageWidth * screen.width;

    var lists = <Widget>[];
    _recognitions.forEach((re) {
      var color = Color((Random().nextDouble() * 0xFFFFFF).toInt() << 0)
          .withOpacity(1.0);
      var list = re["keypoints"].values.map<Widget>((k) {
        return Positioned(
          left: k["x"] * factorX - 6,
          top: k["y"] * factorY - 6,
          width: 100,
          height: 12,
          child: Text(
            "● ${k["part"]}",
            style: TextStyle(
              color: color,
              fontSize: 12.0,
            ),
          ),
        );
      }).toList();

      lists..addAll(list);
    });

    return lists;
  }

  @override
  Widget build(BuildContext context) {
    Size size = MediaQuery.of(context).size;
    List<Widget> stackChildren = [];

    if (_model == deeplab && _recognitions != null) {
      stackChildren.add(Positioned(
        top: 0.0,
        left: 0.0,
        width: size.width,
        child: _image == null
            ? Text('No image selected.')
            : Container(
                decoration: BoxDecoration(
                    image: DecorationImage(
                        alignment: Alignment.topCenter,
                        image: MemoryImage(_recognitions),
                        fit: BoxFit.fill)),
                child: Opacity(opacity: 0.3, child: Image.file(_image))),
      ));
    } else {
      stackChildren.add(Positioned(
        top: 0.0,
        left: 0.0,
        width: size.width,
        child: _image == null ? Text('No image selected.') : Image.file(_image),
      ));
    }

    if (_model == mobile) {
      stackChildren.add(Center(
        child: Column(
          children: _recognitions != null
              ? _recognitions.map((res) {
                  return Text(
                    "${res["index"]} - ${res["label"]}: ${res["confidence"].toStringAsFixed(3)}",
                    style: TextStyle(
                      color: Colors.black,
                      fontSize: 20.0,
                      background: Paint()..color = Colors.white,
                    ),
                  );
                }).toList()
              : [],
        ),
      ));
    } else if (_model == ssd || _model == yolo) {
      stackChildren.addAll(renderBoxes(size));
    } else if (_model == posenet) {
      stackChildren.addAll(renderKeypoints(size));
    }

    if (_busy) {
      stackChildren.add(const Opacity(
        child: ModalBarrier(dismissible: false, color: Colors.grey),
        opacity: 0.3,
      ));
      stackChildren.add(const Center(child: CircularProgressIndicator()));
    }

    return Scaffold(
      appBar: AppBar(
        title: const Text('tflite example app'),
        actions: <Widget>[
          PopupMenuButton<String>(
            onSelected: onSelect,
            itemBuilder: (context) {
              List<PopupMenuEntry<String>> menuEntries = [
                const PopupMenuItem<String>(
                  child: Text(mobile),
                  value: mobile,
                ),
                const PopupMenuItem<String>(
                  child: Text(ssd),
                  value: ssd,
                ),
                const PopupMenuItem<String>(
                  child: Text(yolo),
                  value: yolo,
                ),
                const PopupMenuItem<String>(
                  child: Text(deeplab),
                  value: deeplab,
                ),
                const PopupMenuItem<String>(
                  child: Text(posenet),
                  value: posenet,
                )
              ];
              return menuEntries;
            },
          )
        ],
      ),
      body: Stack(
        children: stackChildren,
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: predictImagePicker,
        tooltip: 'Pick Image',
        child: Icon(Icons.image),
      ),
    );
  }
}

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:


dependencies:
  tflite: ^1.1.1

2. Install it

You can install packages from the command line:

with Flutter:


$ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

3. Import it

Now in your Dart code, you can use:


import 'package:tflite/tflite.dart';
  

Dependencies

Direct dependencies:

  Package     Constraint        Resolved   Available
  Dart SDK    >=2.1.0 <3.0.0
  flutter                       0.0.0
  meta        ^1.1.8            1.1.8      1.2.2

Transitive dependencies:

  Package       Resolved   Available
  collection    1.14.12    1.14.13
  sky_engine    0.0.99
  typed_data    1.1.6      1.2.0
  vector_math   2.0.8      2.1.0-nullsafety