appendDefaultProviders method
Automatically selects and appends the best available execution provider.
Priority order:
- NPU/Accelerators: CoreML (Apple) > NNAPI (Android) > QNN (Qualcomm)
- GPU: CUDA (NVIDIA) > DirectML (Windows) > ROCm (AMD)
- Optimized CPU: DNNL (Intel) > XNNPACK (cross-platform)
- Fallback: Standard CPU
This method tries each provider in order and uses the first one that succeeds. The standard CPU provider is always appended last as a fallback, so models can run even when no accelerator is available.
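The first-success fallback chain can be pictured as a small, self-contained sketch. The names `ProviderAppender` and `selectProvider` are illustrative only and not part of the package API; the real method calls the append*Provider functions shown in the implementation below.

```dart
typedef ProviderAppender = bool Function();

/// Runs each appender in priority order and stops at the first one that
/// succeeds. Appenders may throw on unsupported platforms; a throw is
/// treated the same as a `false` return: skip it and try the next one.
String selectProvider(Map<String, ProviderAppender> appenders) {
  for (final entry in appenders.entries) {
    try {
      if (entry.value()) return entry.key;
    } catch (_) {
      // Provider not available on this platform; fall through.
    }
  }
  return 'none';
}

void main() {
  final chosen = selectProvider({
    'CoreML': () => throw UnsupportedError('not on Apple hardware'),
    'CUDA': () => false, // present but failed to initialize
    'CPU': () => true,   // always succeeds
  });
  print(chosen); // CPU
}
```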
Usage:
final options = OrtSessionOptions();
await options.appendDefaultProviders(); // Auto-selects best available
final session = OrtSession.fromBuffer(modelBytes, options);
Note: This method is asynchronous. Make sure to await it before creating your session; otherwise the session may be created before any providers have been appended.
Implementation
Future<void> appendDefaultProviders() async {
  var hasProvider = false;

  // Try mobile/NPU accelerators first.
  // CoreML for Apple devices (Neural Engine).
  if (!hasProvider) {
    try {
      if (appendCoreMLProvider(CoreMLFlags.useNone)) {
        hasProvider = true;
      }
    } catch (e) {
      // CoreML not available, continue.
    }
  }

  // NNAPI for Android (Google's acceleration).
  if (!hasProvider) {
    try {
      if (appendNnapiProvider(NnapiFlags.useNone)) {
        hasProvider = true;
      }
    } catch (e) {
      // NNAPI not available, continue.
    }
  }

  // QNN for Qualcomm chips.
  if (!hasProvider) {
    try {
      if (appendQnnProvider()) {
        hasProvider = true;
      }
    } catch (e) {
      // QNN not available, continue.
    }
  }

  // Then try desktop GPU providers (best performance for most models).
  // CUDA for NVIDIA GPUs.
  if (!hasProvider) {
    try {
      if (appendCudaProvider(CUDAFlags.useArena)) {
        hasProvider = true;
      }
    } catch (e) {
      // CUDA not available, continue.
    }
  }

  // DirectML for Windows (AMD/Intel/NVIDIA).
  if (!hasProvider) {
    try {
      if (appendDirectMLProvider()) {
        hasProvider = true;
      }
    } catch (e) {
      // DirectML not available, continue.
    }
  }

  // ROCm for AMD GPUs on Linux.
  if (!hasProvider) {
    try {
      if (appendRocmProvider(ROCmFlags.useArena)) {
        hasProvider = true;
      }
    } catch (e) {
      // ROCm not available, continue.
    }
  }

  // Try optimized CPU providers.
  // DNNL for Intel CPUs.
  if (!hasProvider) {
    try {
      if (appendDNNLProvider(DNNLFlags.useArena)) {
        hasProvider = true;
      }
    } catch (e) {
      // DNNL not available, continue.
    }
  }

  // XNNPACK for cross-platform CPU optimization.
  if (!hasProvider) {
    try {
      if (appendXnnpackProvider()) {
        hasProvider = true;
      }
    } catch (e) {
      // XNNPACK not available, continue.
    }
  }

  // Always append the CPU provider as a fallback.
  // This ensures the model can run even if no accelerators are available.
  appendCPUProvider(CPUFlags.useArena);
}
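If the default priority does not suit a particular deployment (for example, CUDA should be tried before CoreML on a desktop workstation), the same try-and-fall-through pattern can be written by hand against the individual append*Provider methods. A minimal sketch, assuming the same OrtSessionOptions API used in the implementation above:

```dart
final options = OrtSessionOptions();
var accelerated = false;

// Prefer the NVIDIA GPU when present, then Apple's Neural Engine.
try {
  accelerated = options.appendCudaProvider(CUDAFlags.useArena);
} catch (_) {
  // CUDA not available on this machine.
}
if (!accelerated) {
  try {
    accelerated = options.appendCoreMLProvider(CoreMLFlags.useNone);
  } catch (_) {
    // CoreML not available either.
  }
}

// Always keep the plain CPU provider as the guaranteed fallback.
options.appendCPUProvider(CPUFlags.useArena);
```

The CPU provider is appended unconditionally, mirroring the built-in method, so a model still loads even when every accelerator check fails.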