flutter_vision #

A Flutter plugin for managing both a YOLOv5 model and Tesseract v4, accessed through TensorFlow Lite 2.x. It supports object detection and OCR on both iOS and Android.

Installation #

Add flutter_vision as a dependency in your pubspec.yaml file.
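For example, pinning the release this page describes (adjust the version to the latest available):

dependencies:
  flutter_vision: ^1.0.0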

Android #

In android/app/build.gradle, add the following setting inside the android block.

    android{
        aaptOptions {
            noCompress 'tflite'
            noCompress 'lite'
        }
    }

iOS #

Coming soon...

Usage #

For the OCR model #

  1. Create an assets folder and place your labels file and model file in it. In pubspec.yaml add:
  assets:
   - assets/labels.txt
   - assets/yolov5.tflite
  2. Add the trained data and the trained data config file to your assets directory. You can find additional language trained data files here: Trained language files.

Add a tessdata folder under the assets folder, and add a tessdata_config.json file under the assets folder:

{
    "files": [
      "spa.traineddata"
    ]
}
  3. Import the library:
import 'package:flutter_vision/flutter_vision.dart';
  4. Initialize the flutter_vision library:
 FlutterVision vision = FlutterVision();
  5. Load the model and labels:
final responseHandler = await vision.loadOcrModel(
    labels: 'assets/labels.txt',
    modelPath: 'assets/yolov5.tflite',
    args: {
      'psm': '11',
      'oem': '1',
      'preserve_interword_spaces': '1',
    },
    language: 'spa',
    numThreads: 1,
    useGpu: false);
  6. Make your first detection and extract text from it if you want:

Make use of the camera plugin; a camera-stream sketch appears after this list.

The classIsText parameter sets the indices of the classes you want to extract text from, based on each class's position in the labels.txt file.

final responseHandler = await vision.ocrOnFrame(
    bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
    imageHeight: cameraImage.height,
    imageWidth: cameraImage.width,
    classIsText: [0],
    iouThreshold: 0.6,
    confThreshold: 0.6);
  7. Release resources:
await vision.closeOcrModel();
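Putting the pieces together, a minimal sketch of feeding the camera plugin's image stream into ocrOnFrame could look like this. The controller setup, the busy flag, and the startOcrStream function name are illustrative assumptions; only the ocrOnFrame call itself comes from this package, and it assumes the OCR model was already loaded as in step 5.

import 'package:camera/camera.dart';
import 'package:flutter_vision/flutter_vision.dart';

final FlutterVision vision = FlutterVision();

Future<void> startOcrStream() async {
  // Pick the first available camera; choose whichever camera suits your app.
  final cameras = await availableCameras();
  final controller = CameraController(cameras.first, ResolutionPreset.medium);
  await controller.initialize();

  bool busy = false; // drop frames while a previous frame is still processing
  await controller.startImageStream((CameraImage cameraImage) async {
    if (busy) return;
    busy = true;
    final responseHandler = await vision.ocrOnFrame(
        bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
        imageHeight: cameraImage.height,
        imageWidth: cameraImage.width,
        classIsText: [0],
        iouThreshold: 0.6,
        confThreshold: 0.6);
    // Inspect responseHandler here (see "About the responseHandler object" below).
    busy = false;
  });
}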

For the YOLOv5 model #

  1. Create an assets folder and place your labels file and model file in it. In pubspec.yaml add:
  assets:
   - assets/labels.txt
   - assets/yolov5.tflite
  2. Import the library:
import 'package:flutter_vision/flutter_vision.dart';
  3. Initialize the flutter_vision library:
 FlutterVision vision = FlutterVision();
  4. Load the model and labels:
final responseHandler = await vision.loadYoloModel(
    labels: 'assets/labels.txt',
    modelPath: 'assets/yolov5.tflite',
    numThreads: 1,
    useGpu: false);
  5. Make your first detection:

Make use of the camera plugin, as in the OCR example above; a widget-lifecycle sketch appears after this list.

final responseHandler = await vision.yoloOnFrame(
    bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
    imageHeight: cameraImage.height,
    imageWidth: cameraImage.width,
    iouThreshold: 0.6,
    confThreshold: 0.6);
  6. Release resources:
await vision.closeYoloModel();
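In a full app the model usually follows a widget's lifecycle: load it once when the screen is created and release it when the screen is disposed. A minimal sketch, assuming a hypothetical YoloDetectorPage widget (only the loadYoloModel and closeYoloModel calls come from this package):

import 'package:flutter/material.dart';
import 'package:flutter_vision/flutter_vision.dart';

class YoloDetectorPage extends StatefulWidget {
  const YoloDetectorPage({Key? key}) : super(key: key);

  @override
  State<YoloDetectorPage> createState() => _YoloDetectorPageState();
}

class _YoloDetectorPageState extends State<YoloDetectorPage> {
  final FlutterVision vision = FlutterVision();
  bool modelLoaded = false;

  @override
  void initState() {
    super.initState();
    _loadModel();
  }

  Future<void> _loadModel() async {
    // Load the YOLOv5 model once, when the page is created.
    await vision.loadYoloModel(
        labels: 'assets/labels.txt',
        modelPath: 'assets/yolov5.tflite',
        numThreads: 1,
        useGpu: false);
    setState(() => modelLoaded = true);
  }

  @override
  void dispose() {
    // Release native resources when the page goes away.
    vision.closeYoloModel();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Center(
        child: Text(modelLoaded ? 'Model ready' : 'Loading model...'),
      ),
    );
  }
}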

About the responseHandler object #

  • Parameters
    • type: 'success' or 'error'.
    • message: 'ok' if type is 'success'; otherwise it contains information about the error.
    • stackTrace: the stack trace of the error.
    • data: a List<Map<String, dynamic>> for ocrOnFrame; otherwise null.

data contains information about the detected objects, such as the detection confidence, the coordinates of the detected box (x1, y1, x2, y2), the detected box image, the text extracted from the detected box, and the tag of the detected class.

class ResponseHandler {
  String type;
  String message;
  StackTrace? stackTrace;
  List<Map<String, dynamic>> data;
}
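A small sketch of consuming the result, assuming the ResponseHandler class above is exported by the package. The per-detection key names are not documented on this page, so each detection map is printed as a whole:

import 'package:flutter_vision/flutter_vision.dart';

void handleDetections(ResponseHandler responseHandler) {
  if (responseHandler.type == 'success') {
    // Each map describes one detection: confidence, box coordinates,
    // cropped box image, extracted text and class tag.
    for (final detection in responseHandler.data) {
      print(detection);
    }
  } else {
    print('Detection failed: ${responseHandler.message}');
    print(responseHandler.stackTrace);
  }
}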

Example #

(Screenshots of the dni_scanner_example app.)
