flutter_vision #
A Flutter plugin for running YOLOv5, YOLOv8, and Tesseract v5 models with TensorFlow Lite 2.x. Supports object detection, segmentation, and OCR on Android. iOS is not updated yet; work in progress.
Installation #
Add flutter_vision as a dependency in your pubspec.yaml file.
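For example, in pubspec.yaml (the version shown matches this release; check pub.dev for the latest):

dependencies:
  flutter_vision: ^1.1.3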
Android #
In android/app/build.gradle, add the following setting inside the android block.
android {
    aaptOptions {
        noCompress 'tflite'
        noCompress 'lite'
    }
}
iOS #
Coming soon ...
Usage #
For YOLOv5 and YOLOv8 models #
- Create an assets folder and place your labels file and model file in it. In pubspec.yaml add:
assets:
  - assets/labels.txt
  - assets/yolovx.tflite
- Import the library:
import 'package:flutter_vision/flutter_vision.dart';
- Initialize the flutter_vision library:
FlutterVision vision = FlutterVision();
- Load the model and labels. modelVersion must be one of yolov5, yolov8, or yolov8seg:
await vision.loadYoloModel(
    labels: 'assets/labels.txt',
    modelPath: 'assets/yolov5n.tflite',
    modelVersion: "yolov5",
    quantization: false,
    numThreads: 1,
    useGpu: false);
For camera live feed #
- Make your first detection. confThreshold is only used with yolov5; for other model versions it is omitted. Make use of the camera plugin to obtain frames (see the sketch after the snippet below):
final result = await vision.yoloOnFrame(
    bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
    imageHeight: cameraImage.height,
    imageWidth: cameraImage.width,
    iouThreshold: 0.4,
    confThreshold: 0.4,
    classThreshold: 0.5);
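A minimal sketch of wiring this into a live feed, assuming the camera package (not part of flutter_vision) and a model already loaded as above; the busy flag and function name are illustrative:

import 'package:camera/camera.dart';
import 'package:flutter_vision/flutter_vision.dart';

bool busy = false; // skip frames while a detection is still running

Future<void> startLiveDetection(FlutterVision vision) async {
  final cameras = await availableCameras();
  final controller = CameraController(cameras.first, ResolutionPreset.medium);
  await controller.initialize();
  await controller.startImageStream((CameraImage cameraImage) async {
    if (busy) return;
    busy = true;
    final result = await vision.yoloOnFrame(
        bytesList: cameraImage.planes.map((plane) => plane.bytes).toList(),
        imageHeight: cameraImage.height,
        imageWidth: cameraImage.width,
        iouThreshold: 0.4,
        confThreshold: 0.4,
        classThreshold: 0.5);
    // Update your UI with `result` here.
    busy = false;
  });
}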
For static image #
- Make your first detection or segmentation (a sketch of how to obtain byte and the image dimensions follows the snippet):
final result = await vision.yoloOnImage(
    bytesList: byte,
    imageHeight: image.height,
    imageWidth: image.width,
    iouThreshold: 0.8,
    confThreshold: 0.4,
    classThreshold: 0.7);
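A minimal sketch of how byte and the image dimensions above can be obtained, assuming the image_picker and image packages (neither is part of flutter_vision) and the vision instance created earlier:

import 'package:image/image.dart' as img;
import 'package:image_picker/image_picker.dart';

final XFile? photo = await ImagePicker().pickImage(source: ImageSource.gallery);
if (photo != null) {
  final byte = await photo.readAsBytes();
  final decoded = img.decodeImage(byte); // gives access to width/height
  if (decoded != null) {
    final result = await vision.yoloOnImage(
        bytesList: byte,
        imageHeight: decoded.height,
        imageWidth: decoded.width,
        iouThreshold: 0.8,
        confThreshold: 0.4,
        classThreshold: 0.7);
  }
}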
- Release resources:
await vision.closeYoloModel();
For the Tesseract 5.0.0 model #
- Create an assets folder, then create a tessdata directory and a tessdata_config.json file and place them into it. Download trained data for Tesseract from here and place it into the tessdata directory. Then, modify tessdata_config.json as follows.
{
    "files": [
        "spa.traineddata"
    ]
}
- In pubspec.yaml add:
assets:
  - assets/
  - assets/tessdata/
- Import the library:
import 'package:flutter_vision/flutter_vision.dart';
- Initialize the flutter_vision library:
FlutterVision vision = FlutterVision();
- Load the model. The args map passes Tesseract parameters: psm is the page segmentation mode and oem the OCR engine mode:
await vision.loadTesseractModel(
  args: {
    'psm': '11',
    'oem': '1',
    'preserve_interword_spaces': '1',
  },
  language: 'spa',
);
For static image #
- Get text from a static image:
final XFile? photo = await picker.pickImage(source: ImageSource.gallery);
if (photo != null) {
  final result = await vision.tesseractOnImage(bytesList: await photo.readAsBytes());
}
- Release resources:
await vision.closeTesseractModel();
About results #
For YOLOv5 or YOLOv8 in a detection task #
result is a List<Map<String, dynamic>> where each Map has the following keys:
Map<String, dynamic>: {
    "box": [x1:left, y1:top, x2:right, y2:bottom, class_confidence],
    "tag": String: detected class
}
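For example, a minimal sketch of iterating over the detections returned by yoloOnFrame or yoloOnImage (the print is just illustrative):

for (final detection in result) {
  final box = detection["box"]; // [left, top, right, bottom, confidence]
  print('${detection["tag"]} (${box[4]}): '
      '(${box[0]}, ${box[1]}) - (${box[2]}, ${box[3]})');
}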
For YOLOv8 in a segmentation task #
result is a List<Map<String, dynamic>> where each Map has the following keys:
Map<String, dynamic>: {
    "box": [x1:left, y1:top, x2:right, y2:bottom, class_confidence],
    "tag": String: detected class,
    "polygons": List<Map<String, double>>: [{x: coordx, y: coordy}]
}
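A minimal sketch of turning the polygons entry into Flutter Offset points, e.g. for drawing the mask outline with a CustomPainter; the helper name is illustrative:

import 'dart:ui';

List<Offset> maskOutline(Map<String, dynamic> detection) {
  final polygons = detection["polygons"] as List;
  return polygons
      .map((p) => Offset((p["x"] as num).toDouble(), (p["y"] as num).toDouble()))
      .toList();
}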
For Tesseract #
result is a List<Map<String, dynamic>> where each Map has the following keys:
Map<String, dynamic>: {
    "text": String,
    "word_conf": List<int>,
    "mean_conf": int
}
Example #
Contact #
For flutter_vision bug reports and feature requests, please visit GitHub Issues.