dartantic_ai library

Compatibility layer for language models, chat models, and embeddings.

Exports the main abstractions for use with various providers.

Classes

Agent
An agent that manages chat models and provides tool execution and message collection capabilities.
AnthropicChatModel
Wrapper around Anthropic Messages API (aka Claude API).
AnthropicChatOptions
Options to pass into the Anthropic Chat Model.
AnthropicMediaGenerationModel
Media generation model backed by the Anthropic code execution tool.
AnthropicMediaGenerationModelOptions
Options for configuring Anthropic media generation runs.
AnthropicProvider
Provider for Anthropic Claude native API.
AnthropicServerToolConfig
Configuration for enabling Anthropic server-side tools.
AnthropicToolChoice
Determines how Claude should select server-side tools.
BatchEmbeddingsResult
Result for batch embeddings operations.
Chat
A chat session with an agent.
ChatAudioConfig
Configuration for audio output in chat completions.
ChatGoogleGenerativeAISafetySetting
Safety setting, affecting the safety-blocking behavior. Passing a safety setting for a category changes the allowed probability that content is blocked.
ChatMessage
A chat message.
ChatModel<TOptions extends ChatModelOptions>
Chat model base class.
ChatModelOptions
Generation options to pass into the Chat Model.
ChatResult<T extends Object>
Result returned by the Chat Model.
CodeInterpreterConfig
Configuration for the OpenAI Responses code_interpreter tool.
CohereEmbeddingsModel
Cohere embeddings model implementation.
CohereEmbeddingsModelOptions
Options for Cohere embeddings models.
CohereProvider
Provider for Cohere OpenAI-compatible API.
ContainerFileData
Resolved data for a downloaded container file, including metadata hints.
DataPart
A data part containing binary data (e.g., images).
DefaultStreamingOrchestrator
Default implementation of the streaming orchestrator.
EmbeddingsModel<TOptions extends EmbeddingsModelOptions>
Embeddings model base class.
EmbeddingsModelOptions
Base class for embeddings model options.
EmbeddingsResult
Result returned by embeddings providers.
FileSearchConfig
Configuration for the OpenAI Responses file_search tool.
GoogleChatModel
Wrapper around Google AI for Developers API (aka Gemini API).
GoogleChatModelOptions
Options to pass into the Google Generative AI Chat Model.
GoogleEmbeddingsModel
Google AI embeddings model implementation.
GoogleEmbeddingsModelOptions
Google AI-specific embeddings model options.
GoogleMediaGenerationModel
Media generation model for Google Gemini.
GoogleMediaGenerationModelOptions
Options for configuring Google Gemini media generation.
GoogleProvider
Provider for Google Gemini native API.
ImageGenerationConfig
Configuration for the image_generation server-side tool.
LanguageModelResult<TOutput extends Object>
Result returned by the model.
LanguageModelUsage
Usage stats for the generation.
LinkPart
A link part referencing external content.
LoggingOptions
Configuration options for logging in the dartantic_ai package.
McpClient
Configuration for connecting to an MCP (Model Context Protocol) server.
MediaGenerationModel<TOptions extends MediaGenerationModelOptions>
Base class for media generation models.
MediaGenerationModelOptions
Base class for media generation model options.
MediaGenerationResult
Streaming chunk returned by a media generation model.
MistralChatModel
Wrapper around Mistral AI Chat Completions API.
MistralChatModelOptions
Options to pass into Mistral AI.
MistralEmbeddingsModel
Mistral AI embeddings model implementation.
MistralEmbeddingsModelOptions
Options for Mistral embeddings models.
MistralProvider
Provider for Mistral AI (OpenAI-compatible).
ModelInfo
Model metadata for provider model listing.
ModelStringParser
Parses a model string into a provider name, chat model name, and embeddings model name.
OllamaChatModel
Wrapper around the Ollama Chat API that enables interacting with LLMs in a chat-like fashion.
OllamaChatOptions
Options to pass into Ollama.
OllamaEmbeddingsModel
Ollama embeddings model implementation.
OllamaEmbeddingsModelOptions
Options for Ollama embeddings models.
OllamaProvider
Provider for native Ollama API (local, not OpenAI-compatible).
OpenAIChatModel
Wrapper around OpenAI Chat API.
OpenAIChatOptions
Generation options to pass into the Chat Model.
OpenAIEmbeddingsModel
OpenAI embeddings model implementation.
OpenAIEmbeddingsModelOptions
OpenAI-specific embeddings model options.
OpenAIProvider
Provider for OpenAI-compatible APIs (OpenAI, Cohere, Together, etc.). Handles API key, base URL, and model configuration.
OpenAIRequestParameters
Request-level parameters for the OpenAI Responses API.
OpenAIResponsesChatModel
Chat model backed by the OpenAI Responses API.
OpenAIResponsesChatModelOptions
Options for configuring the OpenAI Responses chat model.
OpenAIResponsesEventMapper
Maps OpenAI Responses streaming events into dartantic chat results.
OpenAIResponsesHistorySegment
Represents the mapped segment of history sent to the Responses API.
OpenAIResponsesInvocation
Represents a fully-constructed OpenAI Responses API invocation.
OpenAIResponsesInvocationBuilder
Builds OpenAI Responses API invocations from Dartantic messages and options.
OpenAIResponsesMediaGenerationModel
Media generation model built on top of the OpenAI Responses API.
OpenAIResponsesMediaGenerationModelOptions
Options for configuring OpenAI Responses media generation runs.
OpenAIResponsesMessageMapper
Converts between dartantic chat messages and OpenAI Responses API payloads.
OpenAIResponsesOptionsMapper
Utilities for mapping Dartantic options to OpenAI Responses API types.
OpenAIResponsesProvider
Provider for the OpenAI Responses API.
OpenAIServerSideToolContext
Server-side tool configuration context.
PartHelpers
Helper utilities for Part-related operations.
Prediction
Predicted output for faster responses.
Provider<TChatOptions extends ChatModelOptions, TEmbeddingsOptions extends EmbeddingsModelOptions, TMediaOptions extends MediaGenerationModelOptions>
Provides a unified interface for accessing all major LLM, chat, and embedding providers in dartantic_ai.
StandardPart
Base class for parts that have become the de facto standard for AI messages.
StreamingIterationResult
Result from a single streaming iteration.
StreamingOrchestrator
Orchestrates the streaming process, coordinating between model calls, tool execution, and message accumulation.
StreamOptions
Options for streaming responses.
TextPart
A text part of a message.
ThinkingConfig
Configuration for extended thinking mode.
ThinkingDisabled
Disables extended thinking.
ThinkingEnabled
Enables extended thinking with a token budget.
ThinkingPart
A "thinking" part of a message, used by some models to show reasoning.
Tool<TInput extends Object>
A tool that can be called by the LLM.
ToolPart
A tool interaction part of a message.
WebSearchConfig
Configuration for the OpenAI Responses web_search tool.
WebSearchLocation
Approximate geographic hints for web search personalization.
WebSearchOptions
Web search options for the Chat Completions API.
XAICodeInterpreterConfig
Configuration for the xAI code_interpreter tool.
XAIFileSearchConfig
Configuration for the xAI file_search tool.
XAIMcpToolConfig
Configuration for a remote MCP server tool.
XAIProvider
Provider for xAI Grok via the OpenAI-compatible chat completions API.
XAIResponsesChatModel
Chat model backed by the xAI Responses API.
XAIResponsesChatModelOptions
Options for configuring the xAI Responses chat model.
XAIResponsesEventMapper
Event mapper for xAI Responses streams.
XAIResponsesMediaGenerationModel
Media generation model built on top of xAI Images API endpoints.
XAIResponsesMediaGenerationModelOptions
Options for configuring xAI Responses media generation runs.
XAIResponsesProvider
Provider for xAI Grok via the Responses API.
XAIWebSearchConfig
Configuration for the xAI web_search tool.
XAIWebSearchLocation
Approximate user location for web search.
XAIXSearchConfig
Configuration for the xAI x_search tool.
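
Taken together, Agent, Chat, and ChatMessage cover the typical request flow. The sketch below is a minimal, unverified example: it assumes an `Agent(model)` constructor that accepts a 'provider:model' string (the form ModelStringParser parses) and a `send` method whose result exposes an `output` field; check the package documentation for the exact signatures.

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

Future<void> main() async {
  // Model strings take the form 'provider:model' (see ModelStringParser).
  final agent = Agent('openai:gpt-4o');

  // Send a prompt; the result carries the model's text output.
  final result = await agent.send('What is the capital of France?');
  print(result.output);
}
```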

Enums

AnthropicServerSideTool
Enumerates Anthropic server-side tools with convenience helpers.
AnthropicToolChoiceType
Modes for Anthropic server-side tool selection.
ChatGoogleGenerativeAISafetySettingCategory
Safety setting categories.
ChatGoogleGenerativeAISafetySettingThreshold
Controls the probability threshold at which harm is blocked.
ChatMessageRole
The role of a message author.
ChatModality
Output modality for chat completions.
ChatOpenAIServiceTier
Specifies the latency tier to use for processing the request. This is relevant for customers subscribed to the scale tier service.
FinishReason
The reason the model stopped generating tokens.
GoogleFunctionCallingMode
Controls how the model decides when to call functions.
GoogleServerSideTool
Server-side tools available for Google (Gemini) models.
ImageGenerationQuality
Quality levels for image generation.
ImageGenerationSize
Size options for generated images.
McpServerKind
Represents the type of MCP server connection.
ModelKind
The kind of model supported by a provider.
OpenAIReasoningEffort
Reasoning effort levels for OpenAI Responses models that support thinking.
OpenAIReasoningSummary
Reasoning summary verbosity preference for OpenAI Responses.
OpenAIServerSideTool
OpenAI-provided server-side tools that can be enabled for a Responses call.
PromptCacheRetention
The retention policy for prompt cache entries.
ReasoningEffort
Reasoning effort level for reasoning models.
SearchContentType
The type of content to search for in web search.
ToolPartKind
The kind of tool interaction.
Verbosity
Verbosity level for output.
WebSearchContextSize
Controls how much context is gathered during server-side web search.
XAIImageDetail
Preferred detail level for image inputs.
XAIReasoningEffort
Reasoning effort levels for xAI Responses models.
XAIReasoningSummary
Reasoning summary verbosity preference for xAI Responses.
XAIServerSideTool
xAI server-side tools for Responses calls.
XAIWebSearchContextSize
Context size hint for web search.

Extension Types

Schema
A JSON Schema object defining any kind of property.

Extensions

AnthropicServerSideToolX on AnthropicServerSideTool
Convenience extensions for AnthropicServerSideTool.
MessagePartHelpers on Iterable<Part>
Static helper methods for extracting specific types of parts from a list.

Functions

betaFeaturesForAnthropicTools({List<AnthropicServerToolConfig>? manualConfigs, Set<AnthropicServerSideTool>? serverSideTools}) List<String>
Computes required beta headers for a set of tools.
mapGoogleMediaFinishReason(Candidate_FinishReason? reason) FinishReason
Maps Google finish reasons to Dartantic finish reasons.
mapGoogleModalities(List<String>? modalities) List<GenerationConfig_Modality>
Validates and maps response modalities to Google enums.
mergeAnthropicServerToolConfigs({List<AnthropicServerToolConfig>? manualConfigs, Set<AnthropicServerSideTool>? serverSideTools}) List<AnthropicServerToolConfig>
Merges explicit tool configs and shorthand server-side tools.
resolveGoogleMediaMimeType(List<String> requested, String? overrideMime) String
Resolves the best MIME type for Google media generation.

Typedefs

CohereChatModel = OpenAIChatModel
Cohere OpenAI-compatible model.
CohereChatOptions = OpenAIChatOptions
Cohere OpenAI-compatible options.
ContainerFileLoader = Future<ContainerFileData> Function(String containerId, String fileId)
Loads a container file by identifier and returns its resolved data.
Part = StandardPart
Alias for StandardPart for API convenience.
S = Schema
A shortcut typedef so that Schema.object, etc. can be used as S.object.
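As the S typedef entry notes, Schema.object and friends can be written as S.object for brevity. A hedged sketch of defining a JSON Schema with the shorthand, assuming `object` accepts `properties` and `required` named parameters and that `string` is a sibling factory (the field names below are illustrative, not from the package):

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

// S is an alias for Schema, so nested definitions stay compact.
final citySchema = S.object(
  properties: {
    'city': S.string(),
    'country': S.string(),
  },
  required: ['city', 'country'],
);
```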

Exceptions / Errors

OpenAIRefusalException
Exception thrown when OpenAI Structured Outputs API returns a refusal.