tflite library
Classes
- CoreMlDelegate
- iOS
- Delegate
- GpuDelegate
- iOS
- GpuDelegateV2
- Android
- Interpreter
- InterpreterOptions
- Model
- QuantizationParams
- Tensor
- TFLGpuDelegateWaitType
- From TensorFlow's metal_delegate.h.
- TFLite
- TfLiteCoreMlDelegateEnabledDevices
- From TensorFlow's coreml_delegate.h.
- TfLiteGpuExperimentalFlags
- Used to toggle experimental flags in the delegate. Note that this is a bitmask, so the values should be powers of two: 1, 2, 4, 8, and so on.
- TfLiteGpuInferencePriority
- TfLiteGpuInferenceUsage
- Encapsulates compilation/runtime tradeoffs. (From TensorFlow's gpu_delegate.h.)
- TfLiteStatus
- Note that new error status values may be added in the future to indicate more fine-grained internal states; applications should therefore not rely on status values being members of the enum. (From TensorFlow's c_api_types.h.)
- TfLiteType
- Data types supported by a tensor.
- XNNPackDelegate
- Android/iOS
- XNNPackDelegateWeightsCache
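The classes above combine in a typical inference flow: an InterpreterOptions configures threads and delegates, and an Interpreter loads a model and runs it. A minimal sketch, assuming the tflite_flutter package API (the asset path and tensor shapes below are illustrative, not prescribed by this index):

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

Future<void> runInference() async {
  // Configure the interpreter. GpuDelegateV2 is the Android GPU delegate;
  // GpuDelegate and CoreMlDelegate are the iOS counterparts.
  final options = InterpreterOptions()
    ..threads = 2
    ..addDelegate(XNNPackDelegate());

  // Load the model from a bundled asset (path is illustrative).
  final interpreter = await Interpreter.fromAsset(
    'assets/model.tflite',
    options: options,
  );

  // Shapes here are illustrative; query the real ones via
  // interpreter.getInputTensor(0).shape and getOutputTensor(0).shape.
  final input =
      List.filled(1 * 224 * 224 * 3, 0.0).reshape([1, 224, 224, 3]);
  final output = List.filled(1 * 1000, 0.0).reshape([1, 1000]);

  interpreter.run(input, output);
  interpreter.close();
}
```

Delegates that are unavailable on the current platform (for example GpuDelegateV2 on iOS) should simply not be added to the options.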
Extensions
- InterpreterRunner on Interpreter
- ListShape on List<E>
- TensorCopyFrom on Tensor
- TensorCopyTo on Tensor
- TensorShape on Tensor
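These extensions add shape helpers to plain Dart lists and to Tensor. A small sketch, assuming the tflite_flutter ListShape extension's shape getter and reshape method:

```dart
import 'package:tflite_flutter/tflite_flutter.dart';

void main() {
  // ListShape: compute the shape of a nested list...
  final nested = [
    [1.0, 2.0, 3.0],
    [4.0, 5.0, 6.0],
  ];
  print(nested.shape);

  // ...and reshape a flat list into a nested one with the given dims.
  final flat = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0];
  final grid = flat.reshape([2, 3]);
  print(grid);
}
```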
Constants
- TFLITE_XNNPACK_DELEGATE_FLAG_QS8 → const int
- Enable XNNPACK acceleration for signed quantized 8-bit inference. This includes operators with channel-wise quantized weights. (From TensorFlow's xnnpack_delegate.h.)
- TFLITE_XNNPACK_DELEGATE_FLAG_QU8 → const int
- Enable XNNPACK acceleration for unsigned quantized 8-bit inference.
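These constants are bit flags, so enabling both quantized paths means OR-ing them together. A hedged sketch of the bitmask arithmetic (the numeric values 1 and 2 mirror the underlying C header but are assumptions here, not stated by this index):

```dart
// XNNPACK delegate flags form a bitmask: combine with bitwise OR,
// test membership with bitwise AND.
const int flagQS8 = 1; // TFLITE_XNNPACK_DELEGATE_FLAG_QS8 (assumed value)
const int flagQU8 = 2; // TFLITE_XNNPACK_DELEGATE_FLAG_QU8 (assumed value)

void main() {
  final flags = flagQS8 | flagQU8;

  // Check which acceleration paths are enabled.
  print(flags & flagQS8 != 0); // signed quantized 8-bit path
  print(flags & flagQU8 != 0); // unsigned quantized 8-bit path
}
```

In practice the library's own constants (TFLITE_XNNPACK_DELEGATE_FLAG_QS8 and TFLITE_XNNPACK_DELEGATE_FLAG_QU8) should be used rather than literal values.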