flutter_vision #
A Flutter plugin for managing both a Yolov5 model and Tesseract v4, accessed through TensorFlow Lite 2.x. Supports object detection and OCR on both iOS and Android.
Installation #
Add flutter_vision as a dependency in your pubspec.yaml file.
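For example, pinning the 1.0.0 release of this package:

dependencies:
  flutter_vision: ^1.0.0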
Android #
In android/app/build.gradle, add the following setting inside the android block:
android {
    aaptOptions {
        noCompress 'tflite'
        noCompress 'lite'
    }
}
iOS #
Coming soon ...
Usage #
For OCR MODEL #
- Create an assets folder and place your labels file and model file in it. In pubspec.yaml add:

assets:
  - assets/labels.txt
  - assets/yolov5.tflite
- You must add the trained data and the trained data config file to your assets directory. You can find additional language trained data files here: Trained language files. Add a tessdata folder under the assets folder, and add a tessdata_config.json file under the assets folder:

{
  "files": [
    "spa.traineddata"
  ]
}
- Import the library:
import 'package:flutter_vision/flutter_vision.dart';
- Initialize the flutter_vision library:
FlutterVision vision = FlutterVision();
- Load the model and labels:
final responseHandler = await vision.loadOcrModel(
    labels: 'assets/labels.txt',
    modelPath: 'assets/yolov5.tflite',
    // Tesseract arguments: 'psm' is the page segmentation mode, 'oem' is the
    // OCR engine mode, and 'preserve_interword_spaces' keeps word spacing intact.
    args: {
      'psm': '11',
      'oem': '1',
      'preserve_interword_spaces': '1',
    },
    language: 'spa',
    numThreads: 1,
    useGpu: false);
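Before moving on, it may be worth checking the result; the type and message fields are described under "About responseHandler object" below:

if (responseHandler.type == 'error') {
  // message carries details about what went wrong.
  print(responseHandler.message);
}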
- Make your first detection and, if you want, extract text from it:
Use the camera plugin to get camera frames (a wiring sketch follows the snippet below). The classIsText parameter takes the indices of the classes whose detections should be run through OCR, based on each class's position in the labels.txt file.
final responseHandler = await vision.ocrOnFrame(
    bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
    imageHeight: cameraImage.height,
    imageWidth: cameraImage.width,
    classIsText: [0],
    iouThreshold: 0.6,
    confThreshold: 0.6);
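A rough sketch of how the camera plugin's image stream can feed ocrOnFrame; the controller setup and the helper name are illustrative assumptions, not part of this package:

import 'package:camera/camera.dart';
import 'package:flutter_vision/flutter_vision.dart';

final FlutterVision vision = FlutterVision();

// Hypothetical helper: open the first camera and run OCR on every frame.
// Assumes loadOcrModel has already completed successfully.
Future<CameraController> startOcrStream() async {
  final cameras = await availableCameras();
  final controller = CameraController(cameras.first, ResolutionPreset.medium);
  await controller.initialize();
  controller.startImageStream((CameraImage cameraImage) async {
    final responseHandler = await vision.ocrOnFrame(
        bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
        imageHeight: cameraImage.height,
        imageWidth: cameraImage.width,
        classIsText: [0],
        iouThreshold: 0.6,
        confThreshold: 0.6);
    // responseHandler.data holds the detections; see "About responseHandler object".
    print(responseHandler.data);
  });
  return controller;
}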
- Release resources:
await vision.closeOcrModel();
For YoloV5 MODEL #
- Create an assets folder and place your labels file and model file in it. In pubspec.yaml add:

assets:
  - assets/labels.txt
  - assets/yolov5.tflite
- Import the library:
import 'package:flutter_vision/flutter_vision.dart';
- Initialize the flutter_vision library:
FlutterVision vision = FlutterVision();
- Load the model and labels:
final responseHandler = await vision.loadYoloModel(
    labels: 'assets/labels.txt',
    modelPath: 'assets/yolov5.tflite',
    numThreads: 1,
    useGpu: false);
- Make your first detection:
Use the camera plugin to get camera frames (a frame-throttling sketch follows the snippet below).
final responseHandler = await vision.yoloOnFrame(
    bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
    imageHeight: cameraImage.height,
    imageWidth: cameraImage.width,
    iouThreshold: 0.6,
    confThreshold: 0.6);
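yoloOnFrame plugs into the same camera image stream shown in the OCR section. Camera frames usually arrive faster than inference finishes, so a simple busy flag (an illustrative pattern, not a package requirement) keeps frames from piling up:

import 'package:camera/camera.dart';
import 'package:flutter_vision/flutter_vision.dart';

bool isDetecting = false;

// Hypothetical helper: run detection on at most one frame at a time.
void listenForDetections(CameraController controller, FlutterVision vision) {
  controller.startImageStream((CameraImage cameraImage) async {
    if (isDetecting) return; // drop frames while a detection is still running
    isDetecting = true;
    try {
      final responseHandler = await vision.yoloOnFrame(
          bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
          imageHeight: cameraImage.height,
          imageWidth: cameraImage.width,
          iouThreshold: 0.6,
          confThreshold: 0.6);
      print(responseHandler.data);
    } finally {
      isDetecting = false;
    }
  });
}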
- Release resources:
await vision.closeYoloModel();
About responseHandler object #
- Parameters
  - type: success or error.
  - message: ok if type is success; otherwise it carries information about the error.
  - stackTrace: the StackTrace of the error, if any.
  - data: a List<Map<String, dynamic>> on ocrOnFrame, otherwise null. It contains information about the detected objects, such as the confidence of the detection, the coordinates of the detected box (x1, y1, x2, y2), the detected box image, the text read from the detected box, and the tag of the detected class.
class ResponseHandler {
  String type;
  String message;
  StackTrace? stackTrace;
  List<Map<String, dynamic>> data;
}
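A hypothetical sketch of consuming a result; the map keys used below ('tag', 'box') are assumptions for illustration, since the exact key names are not documented here:

if (responseHandler.type == 'success') {
  for (final detection in responseHandler.data) {
    // 'tag' and 'box' are assumed key names, not a documented API.
    print("tag: ${detection['tag']}, box: ${detection['box']}");
  }
} else {
  print(responseHandler.message);
}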
Example #