executorch_flutter library
ExecuTorch Flutter Plugin - On-device ML inference with ExecuTorch
This package provides Flutter developers with the ability to run ExecuTorch machine learning models on Android, iOS, macOS, Linux, and Windows with high performance and low latency.
Key Features
- High Performance: Native FFI bindings for minimal latency
- Cross Platform: Identical APIs across all supported platforms
- User-Controlled Resources: Explicit model lifecycle with load/dispose
- Easy Integration: Simple API for loading models and running inference
- Backend Query: Check available hardware acceleration backends
Quick Start
```dart
import 'package:executorch_flutter/executorch_flutter.dart';

// Load a model from the asset bundle
final model = await ExecuTorchModel.loadFromAsset('assets/models/model.pte');

// Prepare input data
final inputTensor = TensorData(
  shape: [1, 3, 224, 224],
  dataType: TensorType.float32,
  data: imageBytes,
  name: 'input',
);

// Run inference
final outputs = await model.forward([inputTensor]);

// Process outputs (List<TensorData>)
for (var output in outputs) {
  print('Output shape: ${output.shape}');
}

// Clean up
await model.dispose();
```
Main Classes
- ExecuTorchModel: Main API for loading models and running inference
- TensorData: Tensor data representation
- Backend: Hardware acceleration backend enumeration
- ExecuTorchVersion: Library version information
Processors
- ExecuTorchPreprocessor: Base class for input preprocessing
- ExecuTorchPostprocessor: Base class for output postprocessing
- ExecuTorchProcessor: Combined preprocessing and postprocessing
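The processor base classes above can be specialized for a model's input format. A minimal sketch, assuming a `preprocess` method on `ExecuTorchPreprocessor` — the method name is an assumption, not the confirmed abstract interface; check the ExecuTorchPreprocessor class documentation for the actual signature. The `TensorData` constructor follows the Quick Start example:

```dart
import 'dart:typed_data';

import 'package:executorch_flutter/executorch_flutter.dart';

/// Hypothetical preprocessor that scales raw RGB bytes into a
/// float32 tensor in [0, 1]. The `preprocess` override is
/// illustrative; see the class docs for the real interface.
class ImagePreprocessor extends ExecuTorchPreprocessor<Uint8List> {
  @override
  TensorData preprocess(Uint8List rgbBytes) {
    final floats = Float32List(rgbBytes.length);
    for (var i = 0; i < rgbBytes.length; i++) {
      floats[i] = rgbBytes[i] / 255.0;
    }
    return TensorData(
      shape: [1, 3, 224, 224],
      dataType: TensorType.float32,
      data: floats.buffer.asUint8List(),
      name: 'input',
    );
  }
}
```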
Platform Support
- Android: API 23+ (Android 6.0+), arm64-v8a architecture
- iOS: iOS 13.0+, arm64 (device only)
- macOS: macOS 11.0+, arm64 (Apple Silicon)
- Linux: x64 architecture
- Windows: x64 architecture
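Since hardware acceleration differs per platform, it can be useful to check which backends are present before loading a model. A hedged sketch: `Backend.values` is standard Dart enum access, but the `isAvailable` method name on `BackendQuery` is a hypothetical placeholder — consult the BackendQuery class documentation for the actual query functions:

```dart
import 'package:executorch_flutter/executorch_flutter.dart';

Future<void> logAvailableBackends() async {
  for (final backend in Backend.values) {
    // `isAvailable` is a hypothetical name; BackendQuery's
    // documentation lists the real availability functions.
    final available = await BackendQuery.isAvailable(backend);
    print('$backend available: $available');
  }
}
```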
For detailed documentation and examples, see the class documentation.
Classes
- BackendQuery
- Query functions for hardware acceleration backend availability.
- ExecutorchManager
- High-level manager for ExecuTorch inference operations
- ExecuTorchModel
- High-level wrapper for an ExecuTorch model instance
- ExecuTorchPostprocessor&lt;R&gt;
- Abstract base class for output postprocessing
- ExecuTorchPreprocessor&lt;T&gt;
- Abstract base class for input preprocessing
- ExecuTorchProcessor&lt;T, R&gt;
- Abstract base class combining preprocessing and postprocessing
- ExecuTorchVersion
- Library version information.
- ModelLoadResult
- Model loading result.
- ProcessorTensorUtils
- Utility class for tensor operations in processors
- TensorData
- Tensor data for input/output.
- TensorUtils
- Utility class for working with ExecuTorch tensors
Enums
- Backend
- Hardware acceleration backends supported by ExecuTorch.
- ExtendedTensorType
- Extended tensor types supported by the FFI layer.
- TensorType
- Tensor data type enumeration.
Extensions
- TensorTypeExtension on TensorType
- Extension on TensorType for extended type conversions.
- TensorTypeFFI on TensorType
- Extension on TensorType for FFI conversions.
Constants
- executorchVersion → const String
- ExecuTorch version this plugin is built against.
Functions
- setNativeDebugLogging(bool enabled) → void
- Set debug logging on or off.
Exceptions / Errors
- ExecuTorchException
- Base exception class for all ExecuTorch-related errors.
- ExecuTorchInferenceException
- Inference execution errors.
- ExecuTorchIOException
- Network and file I/O errors.
- ExecuTorchMemoryException
- Memory and resource errors.
- ExecuTorchModelException
- Model loading and lifecycle errors.
- ExecuTorchPlatformException
- Platform-specific integration errors.
- ExecuTorchValidationException
- Tensor validation and data errors.
- GenericProcessorException
- Generic processor exception implementation.
- InvalidInputException
- Exception thrown for invalid processor input.
- InvalidOutputException
- Exception thrown for invalid processor output.
- PostprocessingException
- Exception thrown during postprocessing operations.
- PreprocessingException
- Exception thrown during preprocessing operations.
- ProcessorException
- Base exception class for processor-related errors.