dartantic_ai library
Compatibility layer for language models, chat models, and embeddings.
Exports the main abstractions for use with various providers.
Classes
- Agent
- An agent that manages chat models and provides tool execution and message collection capabilities (see the example sketch after this class list).
- AnthropicChatModel
- Wrapper around Anthropic Messages API (aka Claude API).
- AnthropicChatOptions
- Options to pass into the Anthropic Chat Model.
- AnthropicMediaGenerationModel
- Media generation model backed by the Anthropic code execution tool.
- AnthropicMediaGenerationModelOptions
- Options for configuring Anthropic media generation runs.
- AnthropicProvider
- Provider for Anthropic Claude native API.
- AnthropicServerToolConfig
- Configuration for enabling Anthropic server-side tools.
- AnthropicToolChoice
- Determines how Claude should select server-side tools.
- BatchEmbeddingsResult
- Result for batch embeddings operations.
- Chat
- A chat session with an agent.
- ChatGoogleGenerativeAISafetySetting
- Safety setting, affecting the safety-blocking behavior. Passing a safety setting for a category changes the allowed probability that content is blocked.
- ChatMessage
- A message in a conversation between a user and a model.
- ChatModel<TOptions extends ChatModelOptions>
- Chat model base class.
- ChatModelOptions
- Generation options to pass into the Chat Model.
- ChatResult<T extends Object>
- Result returned by the Chat Model.
- CodeInterpreterConfig
- Configuration for the OpenAI Responses code_interpreter tool.
- CohereEmbeddingsModel
- Cohere embeddings model implementation.
- CohereEmbeddingsModelOptions
- Options for Cohere embeddings models.
- CohereProvider
- Provider for Cohere OpenAI-compatible API.
- ContainerFileData
- Resolved data for a downloaded container file, including metadata hints.
- DataPart
- A data part containing binary data (e.g., images).
- DefaultStreamingOrchestrator
- Default implementation of the streaming orchestrator.
- EmbeddingsModel<TOptions extends EmbeddingsModelOptions>
- Embeddings model base class.
- EmbeddingsModelOptions
- Base class for embeddings model options.
- EmbeddingsResult
- Result returned by embeddings providers.
- FileSearchConfig
- Configuration for the OpenAI Responses file_search tool.
- GoogleChatModel
- Wrapper around Google AI for Developers API (aka Gemini API).
- GoogleChatModelOptions
- Options to pass into the Google Generative AI Chat Model.
- GoogleEmbeddingsModel
- Google AI embeddings model implementation.
- GoogleEmbeddingsModelOptions
- Google AI-specific embeddings model options.
- GoogleMediaGenerationModel
- Media generation model for Google Gemini.
- GoogleMediaGenerationModelOptions
- Options for configuring Google Gemini media generation.
- GoogleProvider
- Provider for Google Gemini native API.
- ImageGenerationConfig
- Configuration for the image_generation server-side tool.
- LanguageModelResult<TOutput extends Object>
- Result returned by the model.
- LanguageModelUsage
- Usage stats for the generation.
- LinkPart
- A link part referencing external content.
- LoggingOptions
- Configuration options for logging in the dartantic_ai package.
- McpClient
- Configuration for connecting to an MCP (Model Context Protocol) server.
- MediaGenerationModel<TOptions extends MediaGenerationModelOptions>
- Base class for media generation models.
- MediaGenerationModelOptions
- Base class for media generation model options.
- MediaGenerationResult
- Streaming chunk returned by a media generation model.
- MistralChatModel
- Wrapper around Mistral AI Chat Completions API.
- MistralChatModelOptions
- Options to pass into MistralAI.
- MistralEmbeddingsModel
- Mistral AI embeddings model implementation.
- MistralEmbeddingsModelOptions
- Options for Mistral embeddings models.
- MistralProvider
- Provider for Mistral AI (OpenAI-compatible).
- ModelInfo
- Model metadata for provider model listing.
- ModelStringParser
- Parses a model string into a provider name, chat model name, and embeddings model name.
- OllamaChatModel
- Wrapper around the Ollama Chat API that enables interacting with LLMs in a chat-like fashion.
- OllamaChatOptions
- Options to pass into Ollama.
- OllamaProvider
- Provider for native Ollama API (local, not OpenAI-compatible).
- OpenAIChatModel
- Wrapper around OpenAI Chat API.
- OpenAIChatOptions
- Generation options to pass into the Chat Model.
- OpenAIEmbeddingsModel
- OpenAI embeddings model implementation.
- OpenAIEmbeddingsModelOptions
- OpenAI-specific embeddings model options.
- OpenAIProvider
- Provider for OpenAI-compatible APIs (OpenAI, Cohere, Together, etc.). Handles API key, base URL, and model configuration.
- OpenAIRequestParameters
- Request-level parameters for the OpenAI Responses API.
- OpenAIResponsesChatModel
- Chat model backed by the OpenAI Responses API.
- OpenAIResponsesChatModelOptions
- Options for configuring the OpenAI Responses chat model.
- OpenAIResponsesEventMapper
- Maps OpenAI Responses streaming events into dartantic chat results.
- OpenAIResponsesHistorySegment
- Represents the mapped segment of history sent to the Responses API.
- OpenAIResponsesInvocation
- Represents a fully-constructed OpenAI Responses API invocation.
- OpenAIResponsesInvocationBuilder
- Builds OpenAI Responses API invocations from Dartantic messages and options.
- OpenAIResponsesMediaGenerationModel
- Media generation model built on top of the OpenAI Responses API.
- OpenAIResponsesMediaGenerationModelOptions
- Options for configuring OpenAI Responses media generation runs.
- OpenAIResponsesMessageMapper
- Converts between dartantic chat messages and OpenAI Responses API payloads.
- OpenAIResponsesOptionsMapper
- Utilities for mapping Dartantic options to OpenAI Responses API types.
- OpenAIResponsesProvider
- Provider for the OpenAI Responses API.
- OpenAIServerSideToolContext
- Server-side tool configuration context.
- Part
- Base class for message content parts.
- Provider<TChatOptions extends ChatModelOptions, TEmbeddingsOptions extends EmbeddingsModelOptions, TMediaOptions extends MediaGenerationModelOptions>
- Provides a unified interface for accessing all major LLM, chat, and embedding providers in dartantic_ai.
- StreamingIterationResult
- Result from a single streaming iteration.
- StreamingOrchestrator
- Orchestrates the streaming process, coordinating between model calls, tool execution, and message accumulation.
- TextPart
- A text part of a message.
- ThinkingConfig
- Configuration for enabling Claude's extended thinking. When enabled, responses include thinking content blocks showing Claude's thinking process before the final answer. Requires a minimum budget of 1,024 tokens and counts towards your max_tokens limit.
- Tool<TInput extends Object>
- A tool that can be called by the LLM.
- ToolPart
- A tool interaction part of a message.
- WebSearchConfig
- Configuration for the OpenAI Responses web_search tool.
- WebSearchLocation
- Approximate geographic hints for web search personalisation.
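The classes above fit together roughly as follows. This is a minimal sketch, assuming the Agent constructor accepts a 'provider:model' string (the format handled by ModelStringParser), that Agent exposes a send method whose result carries an output field, and that the provider API key is read from the environment (e.g. OPENAI_API_KEY); check the individual class pages for the exact signatures.

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

Future<void> main() async {
  // Model string in 'provider:model' form, parsed by ModelStringParser.
  final agent = Agent('openai:gpt-4o');

  // Single-shot prompt; the result's `output` is assumed to hold the
  // model's reply text.
  final result = await agent.send('Write one sentence about Dart streams.');
  print(result.output);
}
```

For multi-turn use, the Chat class listed above wraps an agent in a session so the conversation's messages are collected across calls.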
Enums
- AnthropicServerSideTool
- Enumerates Anthropic server-side tools with convenience helpers.
- AnthropicToolChoiceType
- Modes for Anthropic server-side tool selection.
- ChatGoogleGenerativeAISafetySettingCategory
- Safety setting categories.
- ChatGoogleGenerativeAISafetySettingThreshold
- Controls the probability threshold at which harm is blocked.
- ChatMessageRole
- The role of a message author.
- ChatOpenAIServiceTier
- Specifies the latency tier to use for processing the request. This is relevant for customers subscribed to the scale tier service.
- FinishReason
- The reason the model stopped generating tokens.
- GoogleFunctionCallingMode
- Controls how the model decides when to call functions.
- GoogleServerSideTool
- Server-side tools available for Google (Gemini) models.
- ImageGenerationQuality
- Quality levels for image generation.
- ImageGenerationSize
- Size options for generated images.
- McpServerKind
- Represents the type of MCP server connection.
- ModelKind
- The kind of model supported by a provider.
- OpenAIReasoningEffort
- Reasoning effort levels for OpenAI Responses models that support thinking.
- OpenAIReasoningSummary
- Reasoning summary verbosity preference for OpenAI Responses.
- OpenAIServerSideTool
- OpenAI-provided server-side tools that can be enabled for a Responses call.
- ThinkingConfigEnabledType
- The type of thinking configuration.
- ToolPartKind
- The kind of tool interaction.
- WebSearchContextSize
- Controls how much context is gathered during server-side web search.
Extensions
- AnthropicServerSideToolX on AnthropicServerSideTool
- Convenience extensions for AnthropicServerSideTool.
- MessagePartHelpers on Iterable<Part>
- Static helper methods for extracting specific types of parts from a list.
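As an illustration of the kind of extraction MessagePartHelpers supports, here is a minimal sketch written against plain Dart collection methods; the `text` field on TextPart is an assumption, and MessagePartHelpers itself may already expose an equivalent getter.

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

/// Concatenates the plain-text content of a message's parts.
/// (`TextPart.text` is assumed here; see MessagePartHelpers for the
/// ready-made helpers exported by the library.)
String plainTextOf(Iterable<Part> parts) =>
    parts.whereType<TextPart>().map((part) => part.text).join();
```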
Functions
- betaFeaturesForAnthropicTools({List<AnthropicServerToolConfig>? manualConfigs, Set<AnthropicServerSideTool>? serverSideTools}) → List<String>
- Computes required beta headers for a set of tools.
- mapGoogleMediaFinishReason(Candidate_FinishReason? reason) → FinishReason
- Maps Google finish reasons to Dartantic finish reasons.
- mapGoogleModalities(List<String>? modalities) → List<GenerationConfig_Modality>
- Validates and maps response modalities to Google enums.
- mergeAnthropicServerToolConfigs({List<AnthropicServerToolConfig>? manualConfigs, Set<AnthropicServerSideTool>? serverSideTools}) → List<AnthropicServerToolConfig>
- Merges explicit tool configs and shorthand server-side tools (see the sketch after this list).
- resolveGoogleMediaMimeType(List<String> requested, String? overrideMime) → String
- Resolves the best MIME type for Google media generation.
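A hedged sketch of how the two Anthropic helper functions above might be used together when enabling server-side tools. The signatures come from this page; the enum member AnthropicServerSideTool.webSearch is an assumption used only for illustration.

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

void main() {
  // Assumed enum member; substitute whichever AnthropicServerSideTool
  // values your setup actually supports.
  const tools = {AnthropicServerSideTool.webSearch};

  // Expand the shorthand set into full server-tool configurations.
  final configs = mergeAnthropicServerToolConfigs(serverSideTools: tools);

  // Compute the beta headers Anthropic requires for those tools.
  final betaHeaders = betaFeaturesForAnthropicTools(serverSideTools: tools);

  print('configs: $configs');
  print('anthropic-beta: ${betaHeaders.join(",")}');
}
```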
Typedefs
- CohereChatModel = OpenAIChatModel
- Cohere OpenAI-compatible model.
- CohereChatOptions = OpenAIChatOptions
- Cohere OpenAI-compatible options.
- ContainerFileLoader = Future<ContainerFileData> Function(String containerId, String fileId)
- Loads a container file by identifier and returns its resolved data.
Exceptions / Errors
- OpenAIRefusalException
- Exception thrown when OpenAI Structured Outputs API returns a refusal.
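A minimal sketch of handling the refusal case; it reuses the Agent/send assumptions from the earlier class-list example.

```dart
import 'package:dartantic_ai/dartantic_ai.dart';

Future<void> main() async {
  final agent = Agent('openai:gpt-4o');
  try {
    final result = await agent.send('Respond with the requested JSON.');
    print(result.output);
  } on OpenAIRefusalException catch (e) {
    // Structured Outputs returned a refusal instead of a value.
    print('Model refused: $e');
  }
}
```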