tflite_flutter_plus library
TensorFlow Lite for Flutter
Classes
- ByteConversionUtils
- Delegate
- GpuDelegate
  - Metal delegate for iOS.
- GpuDelegateOptions
  - Metal delegate options.
- GpuDelegateOptionsV2
  - GPU delegate options for Android.
- GpuDelegateV2
  - GPU delegate for Android.
- Interpreter
  - TensorFlowLite interpreter for running inference on a model.
- InterpreterOptions
  - TensorFlowLite interpreter options.
- QuantizationParams
- Tensor
  - TensorFlowLite tensor.
- TFLGpuDelegateOptions
  - Wraps GPU delegate options for the iOS Metal delegate.
- TfLiteCoreMlDelegateOptions
- TfLiteDelegate
  - Wraps a TfLiteDelegate.
- TfLiteGpuDelegateOptionsV2
  - Wraps TfLiteGpuDelegateOptionsV2 for the Android GPU delegate.
- TfLiteInterpreter
  - Wraps a model interpreter.
- TfLiteInterpreterOptions
  - Wraps customized interpreter configuration options.
- TfLiteModel
  - Wraps a loaded TensorFlowLite model.
- TfLiteQuantizationParams
  - Wraps quantization parameters.
- TfLiteStatus
  - Status of a TensorFlowLite function call.
- TfLiteTensor
  - Wraps data associated with a graph tensor.
- TfLiteXNNPackDelegateOptions
  - Wraps TfLiteXNNPackDelegateOptions.
- XNNPackDelegate
  - XNNPack delegate.
- XNNPackDelegateOptions
  - XNNPackDelegate options.
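To show how the core classes above fit together, here is a minimal usage sketch: an Interpreter is created with InterpreterOptions (optionally carrying a GpuDelegateV2 for Android) and then run on input/output buffers. The asset path and tensor shapes below are illustrative assumptions, not part of the library's API.

```dart
import 'package:tflite_flutter_plus/tflite_flutter_plus.dart';

Future<void> runInference() async {
  // Attach a GPU delegate on Android; InterpreterOptions also supports
  // thread count and other settings.
  final options = InterpreterOptions()..addDelegate(GpuDelegateV2());

  // 'assets/model.tflite' is a hypothetical asset name; bundle your own
  // model and reference it here.
  final interpreter =
      await Interpreter.fromAsset('assets/model.tflite', options: options);

  // Shapes are hypothetical: a [1, 4] float input and a [1, 2] output.
  final input = [List<double>.filled(4, 0.0)];
  final output = [List<double>.filled(2, 0.0)];

  // Run inference; results are written into `output`.
  interpreter.run(input, output);

  // Release native resources when done.
  interpreter.close();
}
```

On iOS, GpuDelegate (the Metal delegate) would be added instead of GpuDelegateV2.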
Enums
- TFLGpuDelegateWaitType
  - iOS Metal delegate wait types.
- TfLiteCoreMlDelegateEnabledDevices
- TfLiteGpuExperimentalFlags
  - Used to toggle experimental flags in the delegate. Note that this is a bitmask, so the values should be 1, 2, 4, 8, etc.
- TfLiteGpuInferencePriority
- TfLiteGpuInferenceUsage
  - Encapsulated compilation/runtime tradeoffs.
- TfLiteType
  - Types supported by a tensor.