openai_dart 4.3.0
openai_dart: ^4.3.0
Dart client for the OpenAI REST APIs and realtime WebSocket/WebRTC workflows with type-safe access to GPT, image, audio, and Responses APIs.
4.3.0 #
Adds support for GPT Image 2 (gpt-image-2), exposing the full GPT-image parameter set on ImageGenerationRequest and ImageEditRequest (background, moderation, output format/compression, streaming, input fidelity), expanded ImageQuality and ImageSize enums, token-based usage metadata on ImageResponse, and a new ImageModels constants class. Also expands the ReasoningEffort enum with none, minimal, and xhigh to match the latest OpenAI spec and the per-model support matrix for gpt-5.1, gpt-5-pro, and models after gpt-5.1-codex-max.
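A minimal sketch of the new image-generation surface described above. The parameter and constant names (`ImageModels.gptImage2`, `background`, `outputFormat`, `ImageQuality.high`) are taken or inferred from this changelog entry, not verified signatures; check the package's API docs before relying on them.

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient.fromEnvironment(); // reads OPENAI_API_KEY

  final response = await client.images.generate(
    ImageGenerationRequest(
      model: ImageModels.gptImage2, // assumed constant name
      prompt: 'A watercolor lighthouse at dusk',
      size: ImageSize.size1024x1024,
      quality: ImageQuality.high, // expanded enum, per 4.3.0
      background: 'transparent',  // new in 4.3.0; value format assumed
      outputFormat: 'png',        // new in 4.3.0; parameter name assumed
    ),
  );

  // 4.3.0 adds token-based usage metadata on ImageResponse.
  print(response.usage);
}
```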
4.2.0 #
Re-introduces the detail field on InputFileContent via a new FileInputDetail enum (high/low) for controlling how thoroughly the model processes file inputs, following the same pattern as the existing ImageDetail enum. Also refreshes the OpenAPI spec to the latest upstream version.
4.1.0 #
Adds a phase property to ConversationMessageItem to match the latest OpenAI spec. The field labels assistant messages as either commentary (intermediate thinking) or final_answer, which prevents performance degradation when resending conversation history to models like gpt-5.3-codex.
4.0.1 #
4.0.0 #
Caution
This release has breaking changes. See the Migration Guide for upgrade instructions.
Adds FileContentPart and RefusalContentPart to the Chat Completions API — completing the content part union with support for sending PDFs/documents and representing model refusals. The InputFileContent.data() and InputContent.fileData() factories now require a mediaType parameter and construct proper data URL format (raw base64 was rejected by the API). Also adds ToolChoiceAllowedTools and ToolChoiceCustom variants for constraining tool selection, and narrows InputTokensResource.count toolChoice from Object? to ResponseToolChoice?.
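The data-URL change above can be sketched as follows. The argument shapes (positional base64 payload, named `mediaType`) are assumptions based on this entry, not verified signatures:

```dart
import 'dart:convert';
import 'dart:io';

import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final bytes = await File('report.pdf').readAsBytes();

  // Before 4.0.0 the factory accepted raw base64, which the API rejected.
  // Since 4.0.0 a mediaType is required, and the client constructs a
  // proper data URL (e.g. "data:application/pdf;base64,<payload>").
  final part = InputFileContent.data(
    base64Encode(bytes),
    mediaType: 'application/pdf', // required since 4.0.0
  );
  print(part);
}
```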
- BREAKING FEAT: Add FileContentPart and RefusalContentPart (#152). (821eea60)
- FEAT: Add ToolChoiceAllowedTools and ToolChoiceCustom variants (#161). (f0940801)
- DOCS: Improve toolkit and skills from PR #152 lessons (#153). (f55de1ec)
- DOCS: Overhaul root README and add semver bullet to all packages (#151). (e6af33dd)
3.0.0 #
Caution
This release has breaking changes. See the Migration Guide for upgrade instructions.
Replaces the ServiceTier enum with an extensible class to preserve provider-specific tier values on round-trip serialization. Adds missing type and param fields to ResponseError and makes code nullable. Adds custom tool call support to the Responses API, modifier keys for computer use actions, and expands the Video API with edit, extend, and character endpoints. Removes FileInputDetail enum and detail parameter from InputFileContent. Also fixes toolkit verification warnings, adds docs coverage for 14 resources with new example files, standardizes equality helper locations, and adds llms.txt ecosystem files.
- BREAKING FEAT: Improve spec compliance for ServiceTier and ResponseError (#133). (487231d2)
- BREAKING FEAT: Update OpenAPI spec with custom tools, video expansion, and computer action keys (#143). (b08ba7b9)
- FIX: Resolve toolkit verification warnings across packages (#122). (634bdda2)
- REFACTOR: Standardize equality helpers location across packages (#123). (34086102)
- DOCS: Un-exclude 14 resources from docs verification (#121). (2a2966ad)
- DOCS: Overhaul READMEs and add llms.txt ecosystem (#149). (98f11483)
- TEST: Add OpenRouter integration test (#114). (46a75724)
2.0.0 #
Caution
This release has breaking changes. See the Migration Guide for upgrade instructions.
Made non-streaming response parsing robust for third-party OpenAI-compatible providers (AWS Bedrock proxies, Ollama, vLLM, TogetherAI, OpenRouter, etc.) by relaxing strict casts in fromJson while keeping constructor invariants strict. Added ResponseStreamExtensions for convenient stream event filtering/mapping, copyWith methods to all ResponseStreamEvent subtypes and ChatMessage/ContentPart, and updated RunStep, Message, Tool, and EmbeddingRequest models with additional fields from the latest API spec.
1.4.0 #
This release improves streaming error handling by detecting and surfacing errors embedded in chat and other streaming responses. It also updates model references to the latest gpt-realtime-1.5 and gpt-audio-1.5 models, and documents WebRTC support for the Realtime API.
- FEAT: Detect inline streaming errors (#91). (9f0eaf37)
- FEAT: Update model references to gpt-realtime-1.5 and gpt-audio-1.5 (#83). (30d27274)
- FIX: Detect and throw errors embedded in chat streaming data (#87). (7bdeaaa5)
- DOCS: Improve READMEs with badges, sponsor section, and vertex_ai deprecation (#90). (5741f2f3)
- DOCS: Document WebRTC support in Realtime API (#84). (2f385378)
1.3.0 #
1.2.0 #
Added support for GPT-5.4 and the new Responses API agent capabilities released alongside it — tool search (deferred tool loading at runtime), built-in computer use, and 1M-token context with message phases. Also added multi-modal moderation, fine-tune management methods, and missing ChatCompletionCreateRequest fields. Fixed handling of unknown streaming event types.
- FEAT: GPT-5.4, tool search, computer use & message phase support (#69). (3dab848f)
- FEAT: Add missing ChatCompletionCreateRequest fields (#73). (0b06b159)
- FEAT: Add multi-modal moderation, fine-tune methods & missing fields (#76). (8b54049c)
- FIX: Handle unknown streaming event types gracefully (#72). (28a49804)
- REFACTOR: Migrate API skills to the shared api-toolkit CLI (#74). (923cc83e)
- DOCS: Update README with SOTA models and Responses API (#68). (ff6d2774)
- CHORE: Add .pubignore to exclude .agents/ and specs/ from publishing (#78). (0ff199bf)
1.1.0 #
Added baseUrl and defaultHeaders parameters to the withApiKey constructors, aligned Responses API models with the latest OpenAI spec, fixed null index handling in ToolCallDelta.fromJson, and improved hashCode for list fields.
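A sketch of the new `withApiKey` parameters, useful for pointing the client at an OpenAI-compatible endpoint. The parameter names come from this entry; the positional key argument, the local Ollama URL, and the header are illustrative assumptions:

```dart
import 'package:openai_dart/openai_dart.dart';

void main() {
  final client = OpenAIClient.withApiKey(
    'sk-...', // placeholder key
    baseUrl: 'http://localhost:11434/v1',        // example: a local Ollama server
    defaultHeaders: {'X-Example-Header': 'demo'}, // sent with every request
  );
  // Then use the resource-based surface as usual, e.g.
  // client.chat.completions.create(...).
}
```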
- FEAT: Add baseUrl and defaultHeaders to withApiKey constructors (#57). (f0dd0caa)
- FIX: Align Responses API models with current OpenAI spec (#59). (a55a67b7)
- FIX: Handle null index in ToolCallDelta.fromJson (#64). (9b3df8a4)
- FIX: Use Object.hashAll() for list fields in hashCode (#65). (4b19abd9)
- REFACTOR: Unify equality_helpers.dart across packages (#67). (ec2897f8)
1.0.1 #
1.0.0 #
Caution
This release has breaking changes. See the Migration Guide for upgrade instructions.
TL;DR: Complete reimplementation with a new architecture, minimal dependencies, resource-based API, and improved developer experience. Hand-crafted models (no code generation), interceptor-driven architecture, comprehensive error handling, full OpenAI API coverage, and alignment with the latest OpenAI OpenAPI (2026-02-19).
What's new #
- Resource-based API organization:
  - `client.chat.completions` — Chat completion creation, streaming
  - `client.responses` — Responses API (recommended unified API)
  - `client.conversations` — Conversation management
  - `client.embeddings` — Text embeddings
  - `client.audio.speech` / `audio.transcriptions` / `audio.translations` — Audio APIs
  - `client.images` — Image generation, editing, variations
  - `client.files` / `client.uploads` — File and large upload management
  - `client.batches` — Batch processing
  - `client.models` — Model listing and retrieval
  - `client.moderations` — Content moderation
  - `client.fineTuning.jobs` — Fine-tuning job management
  - `client.beta.assistants` / `beta.threads` / `beta.vectorStores` — Assistants API (Beta)
  - `client.videos` — Sora video generation
  - `client.containers` — Code execution containers
  - `client.chatkit` — ChatKit sessions and threads (Beta)
  - `client.evals` — Model evaluation
  - `client.realtime` — WebSocket-based Realtime API
  - `client.completions` — Legacy text completions
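A minimal usage sketch of the resource-based surface listed above. Request field names (`model:`, `messages:`) are assumptions based on the class names in this release note:

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient.fromEnvironment(); // reads OPENAI_API_KEY

  final completion = await client.chat.completions.create(
    ChatCompletionCreateRequest(
      model: 'gpt-4o',
      messages: [ChatMessage.user('Say hello in one word.')],
    ),
  );

  // .text is one of the response helpers in the DX section of this entry.
  print(completion.text);
}
```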
- Architecture:
  - Interceptor chain (Auth → Logging → Error → Transport with Retry wrapper).
  - Authentication: API key, organization+key, or Azure via the `AuthProvider` interface (`ApiKeyProvider`, `OrganizationApiKeyProvider`, `AzureApiKeyProvider`).
  - Retry with exponential backoff + jitter (only for idempotent methods on 429, 5xx, timeouts).
  - Abortable requests via the `abortTrigger` parameter.
  - SSE streaming parser for real-time responses.
  - WebSocket support for the Realtime API.
  - Central `OpenAIConfig` (timeouts, retry policy, log level, baseUrl, auth).
- Hand-crafted models:
  - No code generation dependencies (no freezed, json_serializable).
  - Minimal runtime dependencies (`http`, `logging`, `meta`, `web_socket` only).
  - Immutable models with `copyWith` using the sentinel pattern.
  - Full type safety with a sealed exception hierarchy.
- Improved DX:
  - Simplified message creation (e.g., `ChatMessage.user()`, `ChatMessage.system()`).
  - Explicit streaming methods (`createStream()` vs `create()`).
  - Response helpers (`.text`, `.hasToolCalls`, `.allToolCalls`).
  - `ChatStreamAccumulator` and extension methods (`collectText()`, `textDeltas()`, `accumulate()`).
  - Rich logging with field redaction for sensitive data.
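The streaming helpers above can be sketched like this; request field names are assumptions, while `createStream()` and `collectText()` are named in this entry:

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient.fromEnvironment();

  final stream = client.chat.completions.createStream(
    ChatCompletionCreateRequest(
      model: 'gpt-4o',
      messages: [ChatMessage.user('Write a haiku about Dart.')],
    ),
  );

  // collectText() is one of the extension methods listed above; it drains
  // the stream and concatenates the text deltas.
  final text = await stream.collectText();
  print(text);
}
```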
- Full API coverage:
- Chat completions with tool calling, vision, structured outputs, audio, and predicted outputs.
- Responses API with built-in tool output types (web search, file search, code interpreter, image generation, MCP).
- Videos API (Sora) for video generation and remixing.
- Conversations API for multi-turn conversation management.
- Containers API for isolated code execution environments.
- ChatKit API for session and thread management (Beta).
- Evals API with multiple grader types and data source configurations.
- Realtime API for WebSocket-based audio conversations.
- Full Assistants, Threads, Messages, Runs, and Vector Stores API (Beta, separate import).
Breaking Changes #
- Resource-based API: Methods reorganized under strongly-typed resources:
  - `client.createChatCompletion()` → `client.chat.completions.create()`
  - `client.createChatCompletionStream()` → `client.chat.completions.createStream()`
  - `client.createEmbedding()` → `client.embeddings.create()`
  - `client.createImage()` → `client.images.generate()`
  - `client.createSpeech()` → `client.audio.speech.create()`
  - `client.createTranscription()` → `client.audio.transcriptions.create()`
  - `client.createFineTuningJob()` → `client.fineTuning.jobs.create()`
  - `client.uploadFile()` → `client.files.upload()`
  - `client.createBatch()` → `client.batches.create()`
- Model class renames:
  - `CreateChatCompletionRequest` → `ChatCompletionCreateRequest`
  - `ChatCompletionMessage.user(content: ChatCompletionUserMessageContent.string('...'))` → `ChatMessage.user('...')`
  - `ChatCompletionMessage.system(content: '...')` → `ChatMessage.system('...')`
  - `ChatCompletionTool(type: ..., function: FunctionObject(...))` → `Tool.function(...)`
  - `ChatCompletionModel.modelId('gpt-4o')` → `'gpt-4o'` (plain string)
  - `EmbeddingInput.string('...')` → `EmbeddingInput.text('...')`
  - `CreateImageRequest` → `ImageGenerationRequest`
  - `ImageSize.v1024x1024` → `ImageSize.size1024x1024`
- Import structure: Assistants and Realtime APIs moved to separate entry points:
  - `import 'package:openai_dart/openai_dart_assistants.dart'` for Assistants, Threads, Messages, Runs, Vector Stores
  - `import 'package:openai_dart/openai_dart_realtime.dart'` for the Realtime API
- Configuration: New `OpenAIConfig` with the `AuthProvider` pattern:
  - `OpenAIClient(apiKey: 'KEY')` → `OpenAIClient(config: OpenAIConfig(authProvider: ApiKeyProvider('KEY')))`
  - Or use `OpenAIClient.fromEnvironment()` to read `OPENAI_API_KEY`.
  - Or use `OpenAIClient.withApiKey('KEY')` for quick setup.
- Exceptions: Replaced `OpenAIClientException` with a typed hierarchy: `ApiException`, `AuthenticationException`, `RateLimitException`, `NotFoundException`, `RequestTimeoutException`, `AbortedException`, `ConnectionException`, `ParseException`, `StreamException`.
- Streaming: Use convenience getters and extension methods:
  - `event.choices.first.delta.content` → `event.textDelta`
  - `.map()` callbacks → Dart 3 switch expressions or `is` type checks.
- Nullable fields: `Model.created`, `Model.ownedBy`, and `ChatCompletion.created` are now nullable for OpenAI-compatible provider support.
- Session cleanup: `endSession()` → `close()`.
- Dependencies: Removed `freezed` and `json_serializable`; now minimal (`http`, `logging`, `meta`, `web_socket`).
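Error handling against the typed hierarchy can be sketched as below. The exception names come from this entry; `client.models.retrieve` is inferred from the "Model listing and retrieval" resource and may differ:

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient.fromEnvironment();
  try {
    final model = await client.models.retrieve('gpt-4o'); // method name assumed
    print(model);
  } on AuthenticationException {
    // invalid or missing API key
    print('Check OPENAI_API_KEY');
  } on RateLimitException {
    // 429: idempotent calls are retried with backoff first, then surfaced
    print('Rate limited');
  } on ApiException catch (e) {
    // catch-all for other typed API errors; keep last if it is the base type
    print('API error: $e');
  }
}
```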
See MIGRATION.md for step-by-step examples and mapping tables.
Commits #
- BREAKING FEAT: Complete v1.0.0 reimplementation (#24). (ed68e31b)
- BREAKING FEAT: Add type-safe ResponseInput and convenience factories (#29). (015307ea)
- BREAKING FIX: Make created and ownedBy nullable for provider compatibility (#30). (5c56f005)
- FEAT: Add Skills API, response compaction, JSON image editing, and batch endpoints (#34). (98128ade)
- FIX: Pre-release documentation and code fixes (#41). (5616f8f3)
- REFACTOR: Align client package architecture across SDK packages (#37). (cf741ee1)
- REFACTOR: Align API surface across all SDK packages (#36). (ed969cc7)
- DOCS: Refactors repository URLs to new location. (76835268)
0.6.1 #
- FEAT: Add image streaming and new GPT image models (#827). (1218d8c3)
- FEAT: Add ImageGenStreamEvent schema for streaming (#834). (eb640052)
- FEAT: Add ImageGenUsage schema for image generation (#833). (aecf79a9)
- FEAT: Add metadata fields to ImagesResponse (#831). (bd94b4c6)
- FEAT: Add prompt_tokens_details to CompletionUsage (#830). (ede649d1)
- FEAT: Add fine-tuning method parameter and schemas (#828). (99d77425)
- FEAT: Add Batch model and usage fields (#826). (b2933f50)
- FEAT: Add OpenRouter-specific sampling parameters (#825). (3dd9075c)
- FIX: Remove default value from image stream parameter (#829). (d94c7063)
- FIX: Fix OpenRouter reasoning type enum parsing (#810) (#824). (44ab2841)
0.6.0 #
Caution
This release has breaking changes. See the Migration Guide for upgrade instructions.
- FIX: Correct text content serialization in CreateMessageRequest (#805). (e4569c96)
- FIX: Handle optional space after colon in SSE parser (#779). (9defa827)
- FEAT: Add OpenRouter provider routing support (#794). (6d306bc1)
- FEAT: Add OpenAI-compatible vendor reasoning content support (#793). (e0712c38)
- FEAT: Upgrade to http v1.5.0 (#785). (f7c87790)
- BREAKING BUILD: Require Dart >=3.8.0 (#792). (b887f5c6)
0.5.4+1 #
0.5.4 #
0.5.3 #
0.5.2 #
0.5.1 #
0.5.0 #
- BREAKING FEAT: Align OpenAI API changes (#706). (b8b04ca6)
- FEAT: Add support for web search, gpt-image-1 and list chat completions (#716). (269dea03)
- FEAT: Update OpenAI model catalog (#714). (68df4558)
- FEAT: Change the default value of 'reasoning_effort' from medium to null (#713). (f224572e)
- FEAT: Update dependencies (requires Dart 3.6.0) (#709). (9e3467f7)
- REFACTOR: Remove fetch_client dependency in favor of http v1.3.0 (#659). (0e0a685c)
- REFACTOR: Fix linter issues (#708). (652e7c64)
- DOCS: Fix TruncationObject docs typo. (ee5ed4fd)
- DOCS: Document Azure Assistants API base url (#626). (c3459eea)
0.4.5 #
- FEAT: Support Predicted Outputs (#613). (315fe0fd)
- FEAT: Support streaming audio responses in chat completions (#615). (6da756a8)
- FEAT: Add gpt-4o-2024-11-20 to model catalog (#614). (bf333081)
- FIX: Default store field to null to support Azure and Groq APIs (#608). (21332960)
- FIX: Make first_id and last_id nullable in list endpoints (#607). (7cfc4ddf)
- DOCS: Update OpenAI endpoints descriptions (#612). (10c66888)
- REFACTOR: Add new lint rules and fix issues (#621). (60b10e00)
- REFACTOR: Upgrade api clients generator version (#610). (0c8750e8)
0.4.3 #
- FEAT: Add support for audio in chat completions (#577). (0fb058cd)
- FEAT: Add support for storing outputs for model distillation and metadata (#578). (c9b8bdf4)
- FEAT: Support multi-modal moderations (#576). (45b9f423)
- FIX: submitThreadToolOutputsToRunStream not returning any events (#574). (00803ac7)
- DOCS: Add xAI to list of OpenAI-compatible APIs (#582). (017cb74f)
- DOCS: Fix assistants API outdated documentation (#579). (624c4128)
0.4.2+1 #
- DOCS: Add note about the new openai_realtime_dart client. (44672f0a)
0.4.2 #
0.4.1 #
0.4.0 #
- FEAT: Add support for disabling parallel tool calls (#492). (a91e0719)
- FEAT: Add GPT-4o-mini to model catalog (#497). (faa23aee)
- FEAT: Support chunking strategy in file_search tool (#496). (cfa974a9)
- FEAT: Add support for overrides in the file search tool (#491). (89605638)
- FEAT: Allow to customize OpenAI-Beta header (#502). (5fed8dbb)
- FEAT: Add support for service tier (#494). (0838e4b9)
0.3.3 #
0.3.2+1 #
0.3.2 #
0.3.1 #
0.3.0 #
Caution
This release has breaking changes. See the Migration Guide for upgrade instructions. If you are using the Assistants API v1, please refer to the OpenAI docs to see how to migrate to v2.
0.2.1 #
- FEAT: Support for Batch API (#383). (6b89f4a2)
- FEAT: Streaming support for Assistant API (#379). (6ef68196)
- FEAT: Option to specify tool choice in Assistant API (#382). (97d7977a)
- FEAT: JSON mode in Assistant API (#381). (a864dae3)
- FEAT: Max tokens and truncation strategy in Assistant API (#380). (7153167b)
- FEAT: Updated models catalog with GPT-4 Turbo with Vision (#378). (88537540)
- FEAT: Weights & Biases integration for fine-tuning and seed options (#377). (a5fff1bf)
- FEAT: Support for checkpoints in fine-tuning jobs (#376). (69f8e2f9)
0.2.0 #
0.1.7 #
0.1.6 #
0.1.5 #
0.1.4 #
0.1.0+1 #
0.1.0 #
Caution
This release has breaking changes. See the migration guides (new factories, multi-modal content) for upgrade instructions.
0.0.2+2 #
0.0.2 #
- FEAT: Support new models API functionality (#203). (33ebe746)
- FEAT: Support new images API functionality (#202). (fcf21daf)
- FEAT: Support new fine-tuning API functionality (#201). (f5f44ad8)
- FEAT: Support new embeddings API functionality (#200). (9b43d85b)
- FEAT: Support new completion API functionality (#199). (f12f6f57)
- FEAT: Support new chat completion API functionality (#198). (01820d69)
- FIX: Handle nullable function call fields when streaming (#191). (8f23cf16)
0.0.1 #
0.0.1-dev.1 #
- Bootstrap project