tflite_plus 1.0.3
A comprehensive Flutter plugin for Google AI's LiteRT (TensorFlow Lite) with advanced machine learning capabilities for both Android and iOS platforms.
🚀 TensorFlow Lite Plus Examples #
Comprehensive examples showcasing the power of TensorFlow Lite in Flutter
Ready-to-run examples for AI-powered Flutter apps 🤖
📋 Table of Contents #
- 🎯 Overview
- 🔧 What TensorFlow Lite Plus Can Do
- 📱 Available Examples
- 🚀 Quick Start
- 🏗️ Setup Instructions
- 💡 Usage Examples
- 🌐 Platform Support
- 📚 Learning Resources
- 🤝 Contributing
🎯 Overview #
This folder contains 15+ comprehensive examples demonstrating how to use the tflite_plus plugin for various AI/ML tasks in Flutter applications. Each example is a complete, runnable app that showcases different aspects of machine learning on mobile devices.
The TensorFlow Lite Plus plugin brings Google AI's LiteRT (TensorFlow Lite) to Flutter with advanced capabilities, hardware acceleration, and cross-platform support.
🔧 What TensorFlow Lite Plus Can Do #
| | Capability | Description |
|---|---|---|
| 🖼️ | Image Classification | Classify images using pre-trained models like MobileNet, EfficientNet, or your custom models |
| 🎯 | Object Detection | Detect and locate multiple objects with bounding boxes using SSD MobileNet |
| 🏃‍♂️ | Pose Estimation | Real-time human pose detection and keypoint tracking using PoseNet |
| 🎨 | Image Segmentation | Pixel-level semantic segmentation for detailed image understanding |
| 🎵 | Audio Classification | Classify audio events and sounds using YAMNet and other audio models |
| 📝 | Text Classification | Sentiment analysis and text categorization with NLP models |
| 🎭 | Style Transfer | Apply artistic styles to images using neural style transfer |
| 🔍 | Super Resolution | Enhance image quality with ESRGAN super-resolution models |
| 🤖 | BERT Q&A | Question answering using BERT models for natural language understanding |
| ✋ | Gesture Recognition | Recognize hand gestures and finger movements in real time |
| 🔢 | Digit Classification | Handwritten digit recognition using CNN models |
| 🎮 | Reinforcement Learning | Interactive AI agents using reinforcement learning models |
🚀 Key Features #
- Hardware Acceleration: GPU, NNAPI, Metal, and CoreML delegate support
- Cross-Platform: Works on Android, iOS, Linux, macOS, and Windows
- Real-time Processing: Live camera stream analysis and processing
- Multiple Input Types: Support for images, audio, text, and binary data
- Asynchronous Operations: Non-blocking inference with async/await
- Custom Model Support: Load your own trained TensorFlow Lite models
- Performance Optimized: Efficient memory usage and fast inference times
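As a sketch of how delegate-based acceleration might be wired up, assuming `tflite_plus` exposes the `InterpreterOptions`, `GpuDelegate`, and `Interpreter.fromAsset` API shown in the snippets later in this README (the `loadAccelerated` helper name is illustrative):

```dart
import 'dart:io' show Platform;

import 'package:tflite_plus/tflite_plus.dart';

/// Hypothetical helper: attach a GPU delegate on mobile platforms and
/// fall back to plain CPU inference elsewhere.
Future<Interpreter> loadAccelerated(String assetPath) async {
  final options = InterpreterOptions();

  if (Platform.isAndroid || Platform.isIOS) {
    // GPU delegate (Metal-backed on iOS); unsupported ops fall back to CPU.
    options.addDelegate(GpuDelegate());
  }

  return Interpreter.fromAsset(assetPath, options: options);
}
```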
📱 Available Examples #
🖼️ Computer Vision #
| Example | Description | Platforms | Live Stream |
|---|---|---|---|
| Image Classification MobileNet | Classify objects in images using MobileNet | Android, iOS, Desktop | ✅ |
| Object Detection SSD MobileNet | Detect multiple objects with bounding boxes | Android, iOS, Desktop | ✅ |
| Object Detection SSD MobileNet V2 | Enhanced object detection with improved accuracy | Android, iOS, Desktop | ✅ |
| Live Object Detection | Real-time object detection from camera feed | Android, iOS | ✅ |
| Pose Estimation | Human pose detection and keypoint tracking | Android, iOS | ✅ |
| Image Segmentation | Pixel-level semantic segmentation | Android, iOS, Desktop | ❌ |
| Style Transfer | Apply artistic styles to images | Android, iOS, Desktop | ❌ |
| Super Resolution ESRGAN | Enhance image quality with AI upscaling | Android, iOS, Desktop | ❌ |
| Gesture Classification | Hand gesture recognition | Android, iOS, Desktop | ✅ |
| Digit Classification | Handwritten digit recognition | Android, iOS, Desktop | ❌ |
🎵 Audio Processing #
| Example | Description | Platforms | Live Stream |
|---|---|---|---|
| Audio Classification YAMNet | Real-time audio event classification | Android, iOS | ✅ |
📝 Natural Language Processing #
| Example | Description | Platforms | Live Stream |
|---|---|---|---|
| Text Classification | Sentiment analysis and text categorization | All Platforms | ❌ |
| BERT Q&A | Question answering using BERT models | Android, iOS, Desktop | ❌ |
🎮 Advanced AI #
| Example | Description | Platforms | Live Stream |
|---|---|---|---|
| Reinforcement Learning | Interactive AI agents and game playing | Android, iOS, Desktop | ❌ |
🚀 Quick Start #
1. Prerequisites #
- Flutter SDK (>=3.3.0)
- Dart SDK (>=3.9.2)
- Android Studio / Xcode for mobile development
- Git for cloning repositories
2. Clone and Setup #
```bash
# Clone the repository
git clone https://github.com/shakilofficial0/tflite_plus.git
cd tflite_plus/example

# Choose an example (e.g., image classification)
cd image_classification_mobilenet

# Install dependencies
flutter pub get

# Download required models and labels
sh ./scripts/download_model.sh   # On Unix systems
# or
.\scripts\download_model.bat     # On Windows
```
3. Run the Example #
```bash
# Run on a connected device or emulator
flutter run

# Or run on a specific platform
flutter run -d android
flutter run -d ios
flutter run -d windows
flutter run -d linux
flutter run -d macos
```
🏗️ Setup Instructions #
Common Setup Steps #
- Add Dependency: Each example already includes the `tflite_plus` dependency in `pubspec.yaml`:
```yaml
dependencies:
  tflite_plus: ^1.0.3
```
- Download Models: Most examples require downloading pre-trained models:
```bash
# Navigate to the example folder
cd example/[example_name]

# Run the download script
sh ./scripts/download_model.sh   # Unix/macOS
.\scripts\download_model.bat     # Windows
```
- Platform Configuration: Some examples may require platform-specific setup (automatically handled by the plugin)
Android Setup (Optional) #
```gradle
// android/app/build.gradle (optional override)
android {
    defaultConfig {
        minSdkVersion 21 // Minimum required
    }
}
```
iOS Setup #
```ruby
# ios/Podfile
platform :ios, '12.0' # Minimum required
```
💡 Usage Examples #
Basic Image Classification #
```dart
import 'dart:typed_data';

import 'package:tflite_plus/tflite_plus.dart';

class ImageClassifier {
  late Interpreter interpreter;

  Future<void> loadModel() async {
    // Load the model from assets
    interpreter = await Interpreter.fromAsset(
      'assets/models/mobilenet_v1_1.0_224.tflite',
    );
  }

  Future<List<double>> classifyImage(Uint8List imageBytes) async {
    // Preprocess the image to the model's input format
    final input = preprocessImage(imageBytes);

    // Prepare the output buffer (1001 ImageNet classes for MobileNet v1)
    final output = List.filled(1001, 0.0);

    // Run inference
    interpreter.run(input, output);

    return output;
  }

  Float32List preprocessImage(Uint8List imageBytes) {
    // Convert the image to the required format (224x224x3 for MobileNet),
    // normalize pixel values to [-1, 1] or [0, 1], and return a Float32List.
    throw UnimplementedError('Implement preprocessing for your model');
  }
}
```
Real-time Object Detection #
```dart
import 'package:camera/camera.dart';
import 'package:tflite_plus/tflite_plus.dart';

class ObjectDetector {
  late Interpreter interpreter;

  Future<void> initializeDetection() async {
    interpreter = await Interpreter.fromAsset(
      'assets/models/ssd_mobilenet.tflite',
    );
  }

  Future<List<Detection>> detectObjects(CameraImage image) async {
    // Convert the camera image to the model's input format.
    // (convertCameraImage, parseDetections, and Detection are app-defined.)
    final input = convertCameraImage(image);

    // Prepare output tensors for SSD MobileNet
    final locations = List.filled(1 * 10 * 4, 0.0); // Bounding boxes
    final classes = List.filled(1 * 10, 0.0);       // Class IDs
    final scores = List.filled(1 * 10, 0.0);        // Confidence scores
    final numDetections = List.filled(1, 0.0);      // Number of detections

    // Run inference with multiple output tensors
    interpreter.runForMultipleInputsOutputs(
      [input],
      {
        0: locations,
        1: classes,
        2: scores,
        3: numDetections,
      },
    );

    return parseDetections(locations, classes, scores, numDetections[0]);
  }
}
```
Audio Classification #
```dart
import 'dart:async';
import 'dart:typed_data';

import 'package:record/record.dart';
import 'package:tflite_plus/tflite_plus.dart';

class AudioClassifier {
  late Interpreter interpreter;
  final record = Record();

  Future<void> startAudioClassification() async {
    interpreter = await Interpreter.fromAsset('assets/models/yamnet.tflite');

    // Start recording
    await record.start(
      encoder: AudioEncoder.wav,
      samplingRate: 16000,
    );

    // Process audio in chunks.
    // (getAudioChunk and handlePredictions are app-defined helpers.)
    Timer.periodic(const Duration(milliseconds: 500), (timer) async {
      final audioData = await getAudioChunk();
      final predictions = await classifyAudio(audioData);
      handlePredictions(predictions);
    });
  }

  Future<List<double>> classifyAudio(Float32List audioData) async {
    final output = List.filled(521, 0.0); // YAMNet's 521 audio classes
    interpreter.run(audioData, output);
    return output;
  }
}
```
Text Classification #
```dart
import 'dart:typed_data';

import 'package:tflite_plus/tflite_plus.dart';

class TextClassifier {
  late Interpreter interpreter;

  Future<void> loadModel() async {
    interpreter = await Interpreter.fromAsset(
      'assets/models/text_classification.tflite',
    );
  }

  Future<Map<String, double>> classifyText(String text) async {
    // Tokenize and encode the text
    final input = tokenizeText(text);

    // Run inference (binary classification: negative / positive)
    final output = List.filled(2, 0.0);
    interpreter.run(input, output);

    return {
      'positive': output[1],
      'negative': output[0],
    };
  }

  Int32List tokenizeText(String text) {
    // Convert the text to token IDs with your model's tokenizer and
    // return an Int32List matching the model's input shape.
    throw UnimplementedError('Implement tokenization for your model');
  }
}
```
🌐 Platform Support #
| Platform | Status | Notes |
|---|---|---|
| Android | ✅ Full Support | Minimum API 21, NNAPI acceleration available |
| iOS | ✅ Full Support | iOS 12.0+, Metal and CoreML acceleration |
| Windows | ✅ Desktop Support | Limited camera support for live stream |
| macOS | ✅ Desktop Support | Limited camera support for live stream |
| Linux | ✅ Desktop Support | Limited camera support for live stream |
| Web | ❌ Not Supported | TensorFlow Lite FFI limitations |
Feature Support Matrix #
| Feature | Android | iOS | Desktop |
|---|---|---|---|
| File-based Inference | ✅ | ✅ | ✅ |
| Live Camera Stream | ✅ | ✅ | 🚧 Limited |
| Hardware Acceleration | ✅ NNAPI | ✅ Metal/CoreML | ❌ |
| Background Processing | ✅ | ✅ | ✅ |
| Custom Models | ✅ | ✅ | ✅ |
📚 Learning Resources #
Example-Specific Guides #
Each example folder contains:
- README.md: Detailed setup and usage instructions
- screenshots/: Visual examples of the app in action
- scripts/: Helper scripts for model downloading
- lib/: Complete, documented source code
- assets/: Required model files and test data
Key Concepts #
- Model Loading: Learn how to load TensorFlow Lite models from assets or files
- Input Preprocessing: Understand how to prepare data for different model types
- Inference Execution: Master synchronous and asynchronous inference patterns
- Output Postprocessing: Parse and utilize model predictions effectively
- Performance Optimization: Implement efficient memory management and threading
Best Practices #
- Always run inference in background isolates for smooth UI
- Preprocess inputs to match exact model requirements
- Handle model loading errors gracefully
- Use appropriate data types (Float32List, Int32List, etc.)
- Implement proper resource disposal to prevent memory leaks
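The isolate and disposal practices above can be sketched as follows. This is a minimal illustration using Dart's `Isolate.run`; the `interpreter.close()` call assumes a disposal method as is common in TensorFlow Lite bindings, and depending on how `tflite_plus` loads assets you may need to pass model bytes or a file path into the isolate instead:

```dart
import 'dart:isolate';
import 'dart:typed_data';

import 'package:tflite_plus/tflite_plus.dart';

/// Run one inference in a background isolate so the UI thread stays
/// responsive. Native interpreter handles generally cannot be sent across
/// isolates, so the isolate creates (and disposes) its own interpreter.
Future<List<double>> classifyInBackground(Float32List input) {
  return Isolate.run(() async {
    final interpreter = await Interpreter.fromAsset(
      'assets/models/mobilenet_v1_1.0_224.tflite',
    );
    try {
      final output = List.filled(1001, 0.0);
      interpreter.run(input, output);
      return output;
    } finally {
      // Release native resources to prevent memory leaks
      // (assumes a close() disposal method).
      interpreter.close();
    }
  });
}
```

For repeated inference (e.g., a live camera stream), prefer a long-lived worker isolate that loads the model once rather than re-loading it per frame.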
🛠️ Troubleshooting #
Common Issues #
Model Loading Fails
Ensure the model file is bundled and `pubspec.yaml` declares the assets:
```yaml
flutter:
  assets:
    - assets/models/
```
Input Shape Mismatch
```dart
// Check the model's input requirements
final inputDetails = interpreter.getInputTensors();
print('Expected shape: ${inputDetails[0].shape}');
print('Expected type: ${inputDetails[0].type}');
```
Performance Issues
```dart
// Use hardware acceleration when available
final interpreterOptions = InterpreterOptions()
  ..addDelegate(GpuDelegate());

final interpreter = await Interpreter.fromAsset(
  'model.tflite',
  options: interpreterOptions,
);
```
Getting Help #
- Check individual example READMEs for specific guidance
- Review the main tflite_plus documentation
- Open issues on GitHub
- Join discussions in the Flutter community
🤝 Contributing #
We welcome contributions! Here's how you can help:
Adding New Examples #
- Fork the repository
- Create a new example folder following the existing structure
- Include comprehensive documentation and screenshots
- Add model download scripts
- Test on multiple platforms
- Submit a pull request
Improving Existing Examples #
- Enhance documentation and comments
- Add new features or use cases
- Optimize performance
- Fix bugs and improve error handling
- Add support for additional platforms
Example Structure Template #
```text
new_example/
├── README.md        # Comprehensive documentation
├── pubspec.yaml     # Dependencies and configuration
├── lib/             # Source code
│   ├── main.dart
│   └── ...
├── assets/          # Models and test data
├── scripts/         # Download and setup scripts
├── screenshots/     # Visual examples
└── test/            # Unit and widget tests
```
📄 License #
This project is licensed under the MIT License - see the LICENSE file for details.
🎉 Get Started Today! #
Choose an example that matches your use case and start building AI-powered Flutter apps in minutes:
- 🚀 Beginners: Start with Image Classification
- 🎯 Computer Vision: Try Object Detection
- 🎵 Audio AI: Explore Audio Classification
- 📝 Text AI: Check out Text Classification
- 🤖 Advanced: Dive into BERT Q&A or Reinforcement Learning
Happy coding! 🎉
Made with ❤️ by the TensorFlow Lite Plus community