llm_dart library
LLM Dart Library - A modular Dart library for AI provider interactions.
This library provides a unified interface for interacting with different AI providers, including OpenAI, Anthropic, Google, Groq, DeepSeek, Ollama, xAI, ElevenLabs, and Phind. It is designed to be modular and extensible.
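A minimal quick-start sketch using the documented `createOpenAIProvider` factory. The `ChatMessage.user` constructor, the `chat()` method, and the `text` accessor are assumptions inferred from the `ChatCapability`, `ChatMessage`, and `ChatResponse` descriptions below; check their API pages for the exact member names.

```dart
import 'package:llm_dart/llm_dart.dart';

Future<void> main() async {
  // Signature documented under Functions: createOpenAIProvider.
  final provider = createOpenAIProvider(
    apiKey: 'sk-...', // your OpenAI API key
    systemPrompt: 'You are a helpful assistant.',
    temperature: 0.7,
  );

  // ChatCapability is the core interface most providers implement.
  // chat() and response.text are assumed member names.
  final response = await provider.chat([
    ChatMessage.user('Hello!'),
  ]);
  print(response.text);
}
```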
Classes
- AIModel
- Represents an AI model with its metadata
- AnthropicChat
- Anthropic Chat capability implementation
- AnthropicChatResponse
- Anthropic chat response implementation
- AnthropicClient
- Core Anthropic HTTP client shared across all capability modules
- AnthropicConfig
- Anthropic provider configuration
- AnthropicFile
- Anthropic-specific file object
- AnthropicFileListQuery
- Anthropic file list query parameters
- AnthropicFileListResponse
- Anthropic file list response
- AnthropicFiles
- Anthropic Files API implementation
- AnthropicFileUploadRequest
- Anthropic file upload request
- AnthropicMCPServer
- Anthropic MCP server configuration for the MCP connector feature
- AnthropicMCPToolConfiguration
- Tool configuration for Anthropic MCP servers
- AnthropicMCPToolResult
- Anthropic MCP Tool Result content block
- AnthropicMCPToolUse
- Anthropic MCP Tool Use content block
- AnthropicProvider
- Anthropic provider implementation
- AnyToolChoice
- Model can use any tool, but it must use at least one. This is useful when you want to force the model to use tools.
- Assistant
- Represents an assistant that can call the model and use tools.
- AssistantCapability
- Assistant management capability
- AssistantFunctionTool
- Function tool for assistants
- AssistantResponseFormat
- Response format for assistants
- AssistantTool
- Base class for assistant tools
- AudioAlignment
- Character-level timing alignment for TTS (ElevenLabs specific)
- AudioCapability
- Unified audio processing capability interface
- AudioConfig
- Audio configuration builder for LLM providers
- AudioDataEvent
- Audio data chunk event
- AudioErrorEvent
- Audio error event
- AudioMetadataEvent
- Audio metadata event
- AudioProviderFactory&lt;T extends ChatCapability&gt;
- Specialized base factory for audio-only providers
- AudioStreamEvent
- Audio stream event for streaming TTS
- AudioTimingEvent
- Audio timing event for character-level alignment
- AudioTranslationRequest
- Audio translation request (OpenAI specific)
- AutoToolChoice
- Model can use any tool, and may elect to use none. This is the default behavior and gives the model flexibility.
- BaseAudioCapability
- Base implementation of AudioCapability with convenience methods
- BaseHttpProvider
- Base class for HTTP-based LLM providers
- BaseProviderFactory&lt;T extends ChatCapability&gt;
- Base factory class that provides common functionality for all provider factories
- BasicLLMProvider
- Basic LLM provider with just chat capability
- CapabilityUtils
- Utility class for capability checking and safe execution. Provides multiple approaches for different user levels and use cases.
- CapabilityValidationReport
- Validation report for provider capabilities
- ChatCapability
- Core chat capability interface that most LLM providers implement
- ChatMessage
- A single message in a chat conversation.
- ChatResponse
- Response from a chat provider
- ChatStreamEvent
- Stream event for streaming chat responses
- CodeInterpreterResources
- Code interpreter resources
- CodeInterpreterTool
- Code interpreter tool for assistants
- CompletionCapability
- Capability interface for text completion (non-chat)
- CompletionEvent
- Completion event
- CompletionRequest
- Completion request for text completion providers
- CompletionResponse
- Completion response from text completion providers
- ConfigTransformer&lt;T&gt;
- Abstract interface for transforming unified config to provider-specific config
- ConfigUtils
- Utility class for common configuration transformations
- CreateAssistantRequest
- Request for creating an assistant
- DeepSeekChat
- DeepSeek Chat capability implementation
- DeepSeekChatResponse
- DeepSeek chat response implementation
- DeepSeekClient
- Core DeepSeek HTTP client shared across all capability modules
- DeepSeekConfig
- DeepSeek provider configuration
- DeepSeekErrorHandler
- DeepSeek-specific error handler
- DeepSeekModels
- DeepSeek Models capability implementation
- DeepSeekProvider
- DeepSeek provider implementation
- DeleteAssistantResponse
- Response for deleting an assistant
- DioErrorHandler
- Dio error handler utility for consistent error handling across providers
- ElevenLabsAudio
- ElevenLabs Audio capability implementation
- ElevenLabsClient
- ElevenLabs HTTP client implementation
- ElevenLabsConfig
- ElevenLabs provider configuration
- ElevenLabsModels
- ElevenLabs Models capability implementation
- ElevenLabsProvider
- ElevenLabs Provider implementation
- ElevenLabsSTTResponse
- ElevenLabs response for STT
- ElevenLabsTTSResponse
- ElevenLabs response for TTS
- EmbeddingCapability
- Capability interface for vector embeddings
- EmbeddingLLMProvider
- LLM provider with chat and embedding capabilities
- EnhancedChatCapability
- Enhanced chat capability with advanced tool and output control
- EnhancedWordTiming
- Enhanced word timing with speaker information (ElevenLabs specific)
- ErrorEvent
- Error event
- FileDeleteResponse
- File deletion response that works across providers
- FileListQuery
- File list query parameters that work across providers
- FileListResponse
- File list response that works across providers
- FileManagementCapability
- File management capability for uploading and managing files
- FileMessage
- File message for documents, audio, video, etc.
- FileMime
- General MIME type for files
- FileObject
- File object that works across different providers
- FileSearchResources
- File search resources
- FileSearchTool
- File search tool for assistants
- FileUploadRequest
- File upload request that works across providers
- FullLLMProvider
- Full-featured LLM provider with all common capabilities
- FunctionCall
- FunctionCall contains details about which function to call and with what arguments.
- FunctionObject
- Represents a function object for assistants (similar to FunctionTool but with optional parameters)
- FunctionTool
- Represents a function definition for a tool
- GeneratedImage
- Generated image information
- GoogleChat
- Google Chat capability implementation
- GoogleChatResponse
- Google chat response implementation
- GoogleClient
- Core Google HTTP client shared across all capability modules
- GoogleConfig
- Google (Gemini) provider configuration
- GoogleEmbeddings
- Google Embeddings capability implementation
- GoogleFile
- Google file upload response
- GoogleLLMBuilder
- Google-specific LLM builder with provider-specific configuration methods
- GoogleMultiSpeakerVoiceConfig
- Google multi-speaker voice configuration
- GooglePrebuiltVoiceConfig
- Google prebuilt voice configuration
- GoogleProvider
- Google provider implementation
- GoogleSpeakerVoiceConfig
- Google speaker voice configuration for multi-speaker TTS
- GoogleTTS
- Google TTS implementation
- GoogleTTSAudioDataEvent
- Google TTS audio data event
- GoogleTTSCapability
- Google-specific TTS capability interface
- GoogleTTSCompletionEvent
- Google TTS completion event
- GoogleTTSErrorEvent
- Google TTS error event
- GoogleTTSMetadataEvent
- Google TTS metadata event
- GoogleTTSRequest
- Google TTS request configuration
- GoogleTTSResponse
- Google TTS response
- GoogleTTSStreamEvent
- Google TTS stream events
- GoogleVoiceConfig
- Google voice configuration for single speaker
- GoogleVoiceInfo
- Google voice information
- GroqChat
- Groq Chat capability implementation
- GroqChatResponse
- Groq chat response implementation
- GroqClient
- Core Groq HTTP client shared across all capability modules
- GroqConfig
- Groq provider configuration
- GroqProvider
- Groq provider implementation
- HeadersTransformer
- Abstract interface for transforming headers for provider-specific requirements
- HttpConfig
- HTTP configuration builder for LLM providers
- HttpConfigUtils
- HTTP configuration utilities for unified Dio setup across providers
- HttpErrorMapper
- HTTP error mapper utility
- ImageConfig
- Image generation configuration builder
- ImageDimensions
- Image dimensions
- ImageEditRequest
- Image edit request model
- ImageGenerationCapability
- Capability interface for image generation
- ImageGenerationRequest
- Image generation request configuration
- ImageGenerationResponse
- Image generation response with metadata
- ImageInput
- Image input for editing and variation requests
- ImageMessage
- An image message
- ImageSize
- Common image sizes for generation
- ImageUrlMessage
- An image URL message
- ImageVariationRequest
- Image variation request model
- LanguageInfo
- Language information for STT
- ListAssistantsQuery
- Query parameters for listing assistants
- ListAssistantsResponse
- Response for listing assistants
- LLMBuilder
- Builder for configuring and instantiating LLM providers
- LLMConfig
- Unified configuration class for all LLM providers
- LLMProviderFactory&lt;T extends ChatCapability&gt;
- Factory interface for creating LLM provider instances
- LLMProviderRegistry
- Registry for managing LLM provider factories
- LocalProviderFactory&lt;T extends ChatCapability&gt;
- Specialized base factory for providers that don't require API keys
- MessageType
- The type of a message in a chat conversation.
- ModelCapabilityConfig
- Model-specific capability configuration
- ModelListingCapability
- Capability interface for model listing
- ModerationAnalysis
- Extended moderation analysis result
- ModerationCapability
- Content moderation capability
- ModerationCategories
- Categories of content that may be flagged by moderation
- ModerationCategoryScores
- Confidence scores for each category
- ModerationRequest
- Request for content moderation
- ModerationResponse
- Response from the moderation API
- ModerationResult
- A single moderation result
- ModerationStats
- Moderation statistics for a batch of texts
- ModifyAssistantRequest
- Request for modifying an assistant
- NoneToolChoice
- Explicitly disables the use of tools. The model will not use any tools even if they are provided.
- OllamaChat
- Ollama Chat capability implementation
- OllamaChatResponse
- Ollama chat response implementation
- OllamaClient
- Core Ollama HTTP client shared across all capability modules
- OllamaCompletion
- Ollama Completion capability implementation
- OllamaConfig
- Ollama provider configuration
- OllamaEmbeddings
- Ollama Embeddings capability implementation
- OllamaModels
- Ollama Models capability implementation
- OllamaProvider
- Ollama provider implementation
- OpenAIAssistants
- OpenAI Assistant Management capability implementation
- OpenAIAudio
- OpenAI Audio capabilities implementation
- OpenAIBuiltInTool
- Base class for OpenAI built-in tools
- OpenAIBuiltInTools
- Convenience factory methods for creating built-in tools
- OpenAIChat
- OpenAI Chat capability implementation
- OpenAIChatResponse
- OpenAI chat response implementation
- OpenAIClient
- Core OpenAI HTTP client shared across all capability modules
- OpenAICompatibleBaseFactory&lt;T extends ChatCapability&gt;
- Specialized base factory for OpenAI-compatible providers. Provides additional functionality for providers that use OpenAI's API format.
- OpenAICompatibleConfigs
- Pre-configured OpenAI-compatible provider configurations
- OpenAICompatibleProviderConfig
- OpenAI-compatible provider configuration
- OpenAICompletion
- OpenAI Text Completion capability implementation
- OpenAIComputerUseTool
- Computer use built-in tool
- OpenAIConfig
- OpenAI provider configuration
- OpenAIEmbeddings
- OpenAI Embeddings capability implementation
- OpenAIFiles
- OpenAI File Management capability implementation
- OpenAIFileSearchTool
- File search built-in tool
- OpenAIImages
- OpenAI Image Generation capability implementation
- OpenAIModels
- OpenAI Model Listing capability implementation
- OpenAIModeration
- OpenAI Content Moderation capability implementation
- OpenAIProvider
- OpenAI Provider implementation
- OpenAIResponses
- OpenAI Responses API capability implementation
- OpenAIResponsesResponse
- OpenAI Responses API response implementation
- OpenAIWebSearchTool
- Web search built-in tool
- ParallelToolConfig
- Parallel tool execution configuration
- ParameterProperty
- Represents a parameter in a function tool
- ParametersSchema
- Represents the parameters schema for a function tool
- PhindChat
- Phind Chat capability implementation
- PhindChatResponse
- Phind chat response implementation for parsed streaming responses
- PhindClient
- Phind HTTP client implementation
- PhindConfig
- Phind provider configuration
- PhindProvider
- Phind Provider implementation
- ProviderCapabilities
- Provider capability declaration interface
- ProviderConfig
- Provider-specific configuration builder
- ProviderInfo
- Information about a registered provider
- ProviderRegistry
- Enterprise-grade provider registry for managing multiple providers and their capabilities. Useful for applications that work with multiple LLM providers or need dynamic provider selection.
- RealtimeAudioConfig
- Configuration for real-time audio sessions
- RealtimeAudioEvent
- Events from real-time audio sessions
- RealtimeAudioResponseEvent
- Real-time audio response event
- RealtimeAudioSession
- A stateful real-time audio session
- RealtimeErrorEvent
- Real-time error event
- RealtimeSessionStatusEvent
- Real-time session status event
- RealtimeTranscriptionEvent
- Real-time transcription event
- RegistryProviderInfo
- Information about a registered provider in the registry
- RegistryStats
- Registry statistics
- RequestBodyTransformer
- Abstract interface for transforming request body for provider-specific parameters
- RequestMetadata
- Request metadata for tracking and analytics
- SafetySetting
- Google AI safety setting
- SearchParameters
- Search parameters for LLM providers that support search functionality
- SearchSource
- Search source configuration for search parameters
- SpecificToolChoice
- Model must use the specified tool and only the specified tool. The string parameter is the name of the required tool. This is useful when you want the model to call a specific function.
- StructuredOutputFormat
- Defines rules for structured output responses based on OpenAI's structured output requirements.
- STTRequest
- Speech-to-Text request configuration
- STTResponse
- Speech-to-Text response with metadata
- TextDeltaEvent
- Text delta event
- TextMessage
- A text message
- ThinkingDeltaEvent
- Thinking/reasoning delta event for reasoning models
- Tool
- Represents a tool that can be used in chat
- ToolCall
- Tool call represents a function call that an LLM wants to make.
- ToolCallDeltaEvent
- Tool call delta event
- ToolChoice
- Tool choice determines how the LLM uses available tools. The behavior is standardized across different LLM providers.
- ToolExecutionCapability
- Tool execution capability for providers that support client-side tool execution
- ToolResources
- Tool resources for assistants
- ToolResult
- Tool execution result that can be returned to the model
- ToolResultMessage
- Tool result message
- ToolUseMessage
- A tool use message
- ToolValidator
- Tool validation utility for ensuring tool calls and parameters are valid
- TranscriptionSegment
- Transcription segment information (OpenAI specific)
- TTSRequest
- Text-to-Speech request configuration
- TTSResponse
- Text-to-Speech response with metadata
- UsageInfo
- Usage information for API calls
- Utf8StreamDecoder
- A UTF-8 stream decoder that handles incomplete byte sequences gracefully.
- VectorStoreRequest
- Vector store request for creating vector stores
- VoiceInfo
- Voice information
- VoiceLLMProvider
- LLM provider with voice capabilities
- WebSearchConfig
- Unified web search configuration
- WebSearchLocation
- Geographic location for search localization
- Word
- Word with timing information from ElevenLabs STT
- WordTiming
- Word timing information for STT
- XAIChat
- xAI Chat capability implementation
- XAIChatResponse
- xAI chat response implementation
- XAIClient
- Core xAI HTTP client shared across all capability modules
- XAIConfig
- xAI provider configuration
- XAIEmbedding
- xAI Embedding capability implementation
- XAIEmbeddingData
- Embedding data from xAI API
- XAIEmbeddingResponse
- Embedding response from xAI API
- XAIProvider
- xAI Provider implementation
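The four ToolChoice variants listed above (AnyToolChoice, AutoToolChoice, NoneToolChoice, SpecificToolChoice) standardize tool-selection behavior across providers. A hedged sketch of how they relate; the constructor shapes (no-arg constructors, a positional tool name for SpecificToolChoice) are assumptions based on the class descriptions:

```dart
import 'package:llm_dart/llm_dart.dart';

void main() {
  // AutoToolChoice: model may use tools or answer directly (the default).
  // AnyToolChoice: model must use at least one tool.
  // NoneToolChoice: tools are disabled even if provided.
  // SpecificToolChoice: model must call exactly the named tool; the string
  // parameter is documented as the name of the required tool.
  final choices = <ToolChoice>[
    AutoToolChoice(),
    AnyToolChoice(),
    NoneToolChoice(),
    SpecificToolChoice('get_weather'),
  ];
  print(choices.length);
}
```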
Enums
- AssistantToolType
- Assistant tool types
- AudioFeature
- Audio features that providers can support
- AudioFormat
- Audio format enumeration for better type safety
- AudioProcessingMode
- Audio processing mode for different use cases
- AudioQuality
- Audio quality settings
- ChatRole
- Role of a participant in a chat conversation.
- CompletionUseCase
- Use cases for completion optimization
- FilePurpose
- Universal file purpose enumeration supporting multiple providers
- FileStatus
- Universal file status enumeration supporting multiple providers
- HarmBlockThreshold
- Google AI harm block thresholds
- HarmCategory
- Google AI harm categories
- ImageMime
- The supported MIME type of an image.
- ImageQuality
- Image quality options
- ImageStyle
- Image style options for generation
- LLMCapability
- Enumeration of LLM capabilities that providers can support
- OpenAIBuiltInToolType
- OpenAI built-in tool types
- ReasoningEffort
- Reasoning effort levels for models that support reasoning
- ServiceTier
- Service tier levels for API requests
- TextNormalization
- Text normalization mode for TTS
- TimestampGranularity
- Timestamp granularity for audio processing
- WebSearchContextSize
- Search context size for providers that support it
- WebSearchStrategy
- Web search implementation strategy
- WebSearchType
- Types of web search
Extensions
- AudioFormatExtension on AudioFormat
- ImageMimeExtension on ImageMime
- Utf8StreamDecoderExtension on Stream&lt;List&lt;int&gt;&gt; - Extension to make it easier to use Utf8StreamDecoder with streams
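Utf8StreamDecoder matters for streaming responses: a multi-byte UTF-8 character can be split across two network chunks, and decoding each chunk independently would fail. A sketch of the idea; the `decode()` method name is an assumption, see the class docs for the actual API:

```dart
import 'package:llm_dart/llm_dart.dart';

void main() {
  // '€' is the 3-byte sequence 0xE2 0x82 0xAC. Here it arrives split
  // across two chunks, as can happen with SSE streaming.
  final decoder = Utf8StreamDecoder();

  // The incomplete trailing sequence is buffered rather than throwing...
  final part1 = decoder.decode([0x68, 0x69, 0xE2, 0x82]); // 'hi' + partial '€'
  // ...and emitted once the remaining bytes arrive.
  final part2 = decoder.decode([0xAC]); // completes '€'
  print(part1 + part2);
}
```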
Properties
- globalProviderRegistry → ProviderRegistry
- Singleton instance for the global provider registry (final)
Functions
- ai() → LLMBuilder
- Create a new LLM builder instance
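`ai()` returns an LLMBuilder for fluent configuration, as an alternative to the per-provider factory functions below. The specific chain methods (`.openai()`, `.apiKey()`, `.model()`, `.temperature()`, `.build()`) are assumptions based on the builder pattern; consult the LLMBuilder page for the exact names:

```dart
import 'package:llm_dart/llm_dart.dart';

Future<void> main() async {
  // Hypothetical builder chain: select a provider, configure it, build.
  final provider = await ai()
      .openai()
      .apiKey('sk-...')
      .model('gpt-4o-mini')
      .temperature(0.2)
      .build();
  print(provider.runtimeType);
}
```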
- createAnthropicChatProvider({required String apiKey, String model = 'claude-sonnet-4-20250514', String? systemPrompt, double? temperature, int? maxTokens}) → AnthropicProvider
- Create an Anthropic provider for chat
- createAnthropicProvider({required String apiKey, String? model, String? baseUrl, int? maxTokens, double? temperature, String? systemPrompt, Duration? timeout, bool? stream, double? topP, int? topK, bool? reasoning, int? thinkingBudgetTokens, bool? interleavedThinking}) → AnthropicProvider
- Create an Anthropic provider with default configuration
- createAnthropicReasoningProvider({required String apiKey, String model = 'claude-sonnet-4-20250514', String? systemPrompt, int? thinkingBudgetTokens, bool interleavedThinking = false}) → AnthropicProvider
- Create an Anthropic provider for reasoning tasks
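The reasoning factory's signature is documented above; a hedged usage sketch. The `chat()` call and the `thinking`/`text` accessors on the response are assumptions based on the ChatCapability and ThinkingDeltaEvent descriptions:

```dart
import 'package:llm_dart/llm_dart.dart';

Future<void> main() async {
  final provider = createAnthropicReasoningProvider(
    apiKey: 'sk-ant-...',
    thinkingBudgetTokens: 4096, // token budget for extended thinking
    interleavedThinking: true,  // interleave thinking with other content
  );

  final response = await provider.chat([
    ChatMessage.user('Walk through this step by step: is 1009 prime?'),
  ]);
  // Accessor names are assumptions; reasoning models may expose a
  // separate thinking trace alongside the final text.
  print(response.thinking);
  print(response.text);
}
```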
- createAzureOpenAIProvider({required String apiKey, required String endpoint, required String deploymentName, String apiVersion = '2024-02-15-preview', double? temperature, int? maxTokens, String? systemPrompt}) → OpenAIProvider
- Create an OpenAI provider for Azure OpenAI
- createCopilotProvider({required String apiKey, String model = ProviderDefaults.githubCopilotDefaultModel, double? temperature, int? maxTokens, String? systemPrompt}) → OpenAIProvider
- Create an OpenAI provider for GitHub Copilot
- createDeepSeekChatProvider({required String apiKey, String model = 'deepseek-chat', String? systemPrompt, double? temperature, int? maxTokens}) → DeepSeekProvider
- Create a DeepSeek provider for chat
- createDeepSeekProvider({required String apiKey, String? model, String? baseUrl, int? maxTokens, double? temperature, String? systemPrompt, Duration? timeout, bool? stream, double? topP, int? topK}) → DeepSeekProvider
- Create a DeepSeek provider with default configuration
- createDeepSeekReasoningProvider({required String apiKey, String model = 'deepseek-reasoner', String? systemPrompt, double? temperature, int? maxTokens}) → DeepSeekProvider
- Create a DeepSeek provider for reasoning tasks. Uses the deepseek-reasoner model, which supports reasoning/thinking.
- createElevenLabsCustomVoiceProvider({required String apiKey, required String voiceId, String model = ProviderDefaults.elevenLabsDefaultTTSModel, double stability = 0.5, double similarityBoost = 0.75, double style = 0.0, bool useSpeakerBoost = true}) → ElevenLabsProvider
- Create an ElevenLabs provider with custom voice settings
- createElevenLabsProvider({required String apiKey, String baseUrl = ProviderDefaults.elevenLabsBaseUrl, String? voiceId, String? model, Duration? timeout, double? stability, double? similarityBoost, double? style, bool? useSpeakerBoost}) → ElevenLabsProvider
- Create an ElevenLabs provider with default settings
- createElevenLabsStreamingProvider({required String apiKey, String voiceId = ProviderDefaults.elevenLabsDefaultVoiceId, String model = 'eleven_turbo_v2', double stability = 0.5, double similarityBoost = 0.75}) → ElevenLabsProvider
- Create an ElevenLabs provider for real-time streaming
- createElevenLabsSTTProvider({required String apiKey, String model = ProviderDefaults.elevenLabsDefaultSTTModel}) → ElevenLabsProvider
- Create an ElevenLabs provider optimized for STT
- createElevenLabsTTSProvider({required String apiKey, String voiceId = ProviderDefaults.elevenLabsDefaultVoiceId, String model = ProviderDefaults.elevenLabsDefaultTTSModel, double stability = 0.5, double similarityBoost = 0.75, double style = 0.0, bool useSpeakerBoost = true}) → ElevenLabsProvider
- Create an ElevenLabs provider optimized for high-quality TTS
- createGoogleChatProvider({required String apiKey, String model = 'gemini-1.5-flash', String? systemPrompt, double? temperature, int? maxTokens}) → GoogleProvider
- Create a Google provider for chat
- createGoogleEmbeddingProvider({required String apiKey, String model = 'text-embedding-004', String? embeddingTaskType, String? embeddingTitle, int? embeddingDimensions}) → GoogleProvider
- Create a Google provider for embeddings
- createGoogleImageGenerationProvider({required String apiKey, String model = 'gemini-1.5-pro', List&lt;String&gt;? responseModalities}) → GoogleProvider
- Create a Google provider for image generation
- createGoogleProvider({required String apiKey, String? model, String? baseUrl, int? maxTokens, double? temperature, String? systemPrompt, Duration? timeout, bool? stream, double? topP, int? topK, ReasoningEffort? reasoningEffort, int? thinkingBudgetTokens, bool? includeThoughts, bool? enableImageGeneration, List&lt;String&gt;? responseModalities, List&lt;SafetySetting&gt;? safetySettings, int? maxInlineDataSize, int? candidateCount, List&lt;String&gt;? stopSequences, String? embeddingTaskType, String? embeddingTitle, int? embeddingDimensions}) → GoogleProvider
- Create a Google provider with default configuration
- createGoogleReasoningProvider({required String apiKey, String model = 'gemini-2.0-flash-thinking-exp', String? systemPrompt, int? thinkingBudgetTokens, bool includeThoughts = true}) → GoogleProvider
- Create a Google provider for reasoning tasks
- createGoogleVisionProvider({required String apiKey, String model = 'gemini-1.5-pro', String? systemPrompt, double? temperature, int? maxTokens}) → GoogleProvider
- Create a Google provider for vision tasks
- createGrokVisionProvider({required String apiKey, String model = 'grok-vision-beta', double? temperature, int? maxTokens, String? systemPrompt}) → XAIProvider
- Create an xAI provider for Grok Vision
- createGroqChatProvider({required String apiKey, String model = 'llama-3.3-70b-versatile', String? systemPrompt, double? temperature, int? maxTokens}) → GroqProvider
- Create a Groq provider for chat
- createGroqCodeProvider({required String apiKey, String model = 'llama-3.1-70b-versatile', String? systemPrompt, double? temperature, int? maxTokens}) → GroqProvider
- Create a Groq provider for code generation
- createGroqFastProvider({required String apiKey, String model = 'llama-3.1-8b-instant', String? systemPrompt, double? temperature, int? maxTokens}) → GroqProvider
- Create a Groq provider for fast inference
- createGroqProvider({required String apiKey, String? model, String? baseUrl, int? maxTokens, double? temperature, String? systemPrompt, Duration? timeout, bool? stream, double? topP, int? topK, List&lt;Tool&gt;? tools, ToolChoice? toolChoice}) → GroqProvider
- Create a Groq provider with default configuration
- createGroqVisionProvider({required String apiKey, String model = 'llava-v1.5-7b-4096-preview', String? systemPrompt, double? temperature, int? maxTokens}) → GroqProvider
- Create a Groq provider for vision tasks
- createOllamaChatProvider({String baseUrl = 'http://localhost:11434', String model = 'llama3.2', String? systemPrompt, double? temperature, int? maxTokens}) → OllamaProvider
- Create an Ollama provider for chat
- createOllamaCodeProvider({String baseUrl = 'http://localhost:11434', String model = 'codellama', String? systemPrompt, double? temperature, int? maxTokens}) → OllamaProvider
- Create an Ollama provider for code generation
- createOllamaCompletionProvider({String baseUrl = 'http://localhost:11434', String model = 'llama3.2', double? temperature, int? maxTokens}) → OllamaProvider
- Create an Ollama provider for completion tasks
- createOllamaEmbeddingProvider({String baseUrl = 'http://localhost:11434', String model = 'nomic-embed-text'}) → OllamaProvider
- Create an Ollama provider for embeddings
- createOllamaProvider({String? baseUrl, String? apiKey, String? model, int? maxTokens, double? temperature, String? systemPrompt, Duration? timeout, double? topP, int? topK, List&lt;Tool&gt;? tools, StructuredOutputFormat? jsonSchema, int? numCtx, int? numGpu, int? numThread, bool? numa, int? numBatch, String? keepAlive, bool? raw}) → OllamaProvider
- Create an Ollama provider with default configuration
- createOllamaVisionProvider({String baseUrl = 'http://localhost:11434', String model = 'llava', String? systemPrompt, double? temperature, int? maxTokens}) → OllamaProvider
- Create an Ollama provider for vision tasks
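The Ollama factories are the simplest to try because Ollama runs locally and needs no API key; the defaults above point at http://localhost:11434. A hedged sketch; the `embeddings()` method name comes from the EmbeddingCapability description and is an assumption:

```dart
import 'package:llm_dart/llm_dart.dart';

Future<void> main() async {
  // Chat against a local llama3.2 model (documented defaults).
  final chat = createOllamaChatProvider(model: 'llama3.2');

  // A separate provider instance configured for embeddings.
  final embedder = createOllamaEmbeddingProvider(model: 'nomic-embed-text');

  // Assumed EmbeddingCapability call: one vector per input string.
  final vectors = await embedder.embeddings(['hello world']);
  print(vectors.first.length); // embedding dimensionality
  print(chat.runtimeType);
}
```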
- createOpenAIProvider({required String apiKey, String model = ProviderDefaults.openaiDefaultModel, String baseUrl = ProviderDefaults.openaiBaseUrl, double? temperature, int? maxTokens, String? systemPrompt}) → OpenAIProvider
- Create an OpenAI provider with default settings
- createOpenRouterProvider({required String apiKey, String model = ProviderDefaults.openRouterDefaultModel, double? temperature, int? maxTokens, String? systemPrompt}) → OpenAIProvider
- Create an OpenAI provider for OpenRouter
- createPhindCodeProvider({required String apiKey, String model = 'Phind-70B', double? temperature = 0.1, int? maxTokens = 4000, String? systemPrompt = 'You are an expert programmer. Provide clear, well-commented code solutions.'}) → PhindProvider
- Create a Phind provider optimized for code generation
- createPhindExplainerProvider({required String apiKey, String model = 'Phind-70B', double? temperature = 0.3, int? maxTokens = 2000, String? systemPrompt = 'You are a coding tutor. Explain code concepts clearly and provide examples.'}) → PhindProvider
- Create a Phind provider optimized for code explanation
- createPhindProvider({required String apiKey, String model = 'Phind-70B', String baseUrl = 'https://https.extension.phind.com/agent/', double? temperature, int? maxTokens, String? systemPrompt}) → PhindProvider
- Create a Phind provider with default settings
- createProvider({required String providerId, required String apiKey, required String model, String? baseUrl, double? temperature, int? maxTokens, String? systemPrompt, Duration? timeout, bool stream = false, double? topP, int? topK, Map&lt;String, dynamic&gt;? extensions}) → Future&lt;ChatCapability&gt;
- Create a provider with the given configuration
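Unlike the typed factories above, `createProvider` takes a string `providerId` and returns `Future<ChatCapability>`, which suits runtime provider selection via LLMProviderRegistry. A sketch; the 'openai' id value is an assumption, since registered ids are defined by each provider factory:

```dart
import 'package:llm_dart/llm_dart.dart';

// Build a provider from app configuration decided at runtime.
// providerId resolution is assumed to go through LLMProviderRegistry.
Future<ChatCapability> fromConfig(Map<String, String> cfg) {
  return createProvider(
    providerId: cfg['provider'] ?? 'openai', // assumed registry id
    apiKey: cfg['apiKey']!,
    model: cfg['model'] ?? 'gpt-4o-mini',
    stream: false, // documented default
  );
}
```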
- createTogetherProvider({required String apiKey, String model = ProviderDefaults.togetherAIDefaultModel, double? temperature, int? maxTokens, String? systemPrompt}) → OpenAIProvider
- Create an OpenAI provider for Together AI
- createXAILiveSearchProvider({required String apiKey, String model = 'grok-3', double? temperature, int? maxTokens, String? systemPrompt, int? maxSearchResults, List&lt;String&gt;? excludedWebsites}) → XAIProvider
- Create an xAI provider with Live Search enabled
- createXAIProvider({required String apiKey, String model = 'grok-3', String baseUrl = 'https://api.x.ai/v1/', double? temperature, int? maxTokens, String? systemPrompt, SearchParameters? searchParameters, bool? liveSearch}) → XAIProvider
- Create an xAI provider with default settings
- createXAISearchProvider({required String apiKey, String model = 'grok-3', double? temperature, int? maxTokens, String? systemPrompt, String searchMode = 'auto', List&lt;SearchSource&gt;? sources, int? maxSearchResults, String? fromDate, String? toDate}) → XAIProvider
- Create an xAI provider with search capabilities
Exceptions / Errors
- AuthError
- Authentication and authorization errors
- CapabilityError
- Error thrown when a required capability is not supported
- ContentFilterError
- Content filter error
- GenericError
- Generic error
- HttpError
- HTTP request/response errors
- InvalidRequestError
- Invalid request parameters or format
- JsonError
- JSON serialization/deserialization errors
- LLMError
- Error types that can occur when interacting with LLM providers.
- ModelNotAvailableError
- Model not available error
- NotFoundError
- Resource not found error (404)
- OpenAIResponsesError
- OpenAI Responses API specific error
- ProviderError
- Errors returned by the LLM provider
- QuotaExceededError
- Quota exceeded error
- RateLimitError
- Rate limit exceeded error
- ResponseFormatError
- API response parsing or format error
- ServerError
- Server error (5xx status codes)
- StructuredOutputError
- Structured output validation error
- TimeoutError
- Timeout error for request timeouts
- ToolConfigError
- Tool configuration error
- ToolExecutionError
- Tool execution error
- ToolValidationError
- Tool validation error
- UnsupportedCapabilityError
- Unsupported capability error
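A sketch of layered error handling using the error types above. That these classes share LLMError as a common base, and that they are thrown rather than returned, is an assumption based on the descriptions ("Error types that can occur when interacting with LLM providers"); `response.text` is likewise an assumed accessor:

```dart
import 'package:llm_dart/llm_dart.dart';

Future<String> safeChat(
  ChatCapability provider,
  List<ChatMessage> messages,
) async {
  try {
    final response = await provider.chat(messages);
    return response.text ?? ''; // assumed nullable text accessor
  } on RateLimitError {
    // Retryable: back off and try again later.
    rethrow;
  } on AuthError {
    // Not retryable: invalid or expired API key.
    rethrow;
  } on LLMError catch (e) {
    // Assumed common base type for all remaining provider errors.
    throw StateError('LLM call failed: $e');
  }
}
```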