open_responses 0.3.0
Dart client for the OpenResponses API. Provides a unified, type-safe interface for interacting with multiple LLM providers through the OpenResponses specification.
OpenResponses Dart Client #
Dart client for the OpenResponses specification with streaming, tool calling, structured output, and multi-turn responses across multiple providers. It gives Dart and Flutter applications a pure Dart, type-safe client across iOS, Android, macOS, Windows, Linux, Web, and server-side Dart.
> **Tip**
> Coding agents: start with `llms.txt`. It links to the package docs, examples, and optional references in a compact format.
Features #
Core response workflows #
- Single typed responses endpoint for text, multimodal input, and reasoning models
- SSE streaming with incremental text and event handling
- Tool calling, MCP tools, and structured outputs with JSON schema
- Multi-turn conversations through `previousResponseId`
Provider portability #
- Works with OpenAI-compatible services, local runtimes, and custom gateways
- Pure Dart client surface for backends, CLIs, and Flutter apps
- Interceptors, retries, and typed metadata across providers
Supported providers #
open_responses works with any service that implements the OpenResponses or OpenAI-compatible response shape.
| Provider | Base URL | Auth | Typical use case |
|---|---|---|---|
| OpenAI | `https://api.openai.com/v1` | Bearer token | Hosted production models |
| Ollama | `http://localhost:11434/v1` | None by default | Local models and offline development |
| Hugging Face Spaces | Custom Space URL | Bearer token | Hosted open models |
| OpenRouter | `https://openrouter.ai/api/v1` | Bearer token | Multi-provider routing |
| Vercel AI Gateway | `https://ai-gateway.vercel.sh/v1` | Bearer token | Edge-deployed AI gateway |
| Databricks | `https://<host>.databricks.com/serving-endpoints` | Bearer token | Enterprise model serving |
| vLLM | `http://localhost:8000/v1` | None by default | High-throughput local inference |
| LM Studio | `http://localhost:1234/v1` | None by default | Desktop local inference |
Why choose this client? #
- Pure Dart with no Flutter dependency — works in mobile apps, backends, and CLIs.
- Type-safe request and response models with minimal dependencies (`http`, `logging`, `meta`).
- Streaming, retries, interceptors, and error handling built into the client.
- One request format works across providers, reducing migration cost and vendor lock-in.
- Strict semver versioning so downstream packages can depend on stable, predictable version ranges.
Quickstart #
Add the package to your `pubspec.yaml`:

```yaml
dependencies:
  open_responses: ^0.3.0
```
```dart
import 'package:open_responses/open_responses.dart';

Future<void> main() async {
  final client = OpenResponsesClient(
    config: OpenResponsesConfig(
      baseUrl: 'https://api.openai.com/v1',
      authProvider: BearerTokenProvider('YOUR_API_KEY'),
    ),
  );
  try {
    final response = await client.responses.create(
      CreateResponseRequest.text(
        model: 'gpt-4o',
        input: 'What is the capital of France?',
      ),
    );
    print(response.outputText);
  } finally {
    client.close();
  }
}
```
Configuration #
Configure provider URLs, auth, and retries
Use OpenResponsesClient.fromEnvironment() for OpenAI-style defaults, or provide OpenResponsesConfig directly when you target another provider. This keeps provider switching explicit and easy to test.
```dart
import 'package:open_responses/open_responses.dart';

Future<void> main() async {
  final client = OpenResponsesClient(
    config: OpenResponsesConfig(
      baseUrl: 'http://localhost:11434/v1',
      timeout: const Duration(minutes: 5),
      retryPolicy: RetryPolicy(
        maxRetries: 3,
        initialDelay: Duration(seconds: 1),
      ),
    ),
  );
  client.close();
}
```
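The `RetryPolicy` above waits between attempts, starting from `initialDelay`. As a rough illustration only (assuming exponential doubling, a common default that this README does not confirm for the package), the schedule for `maxRetries: 3` and a one-second initial delay can be sketched in plain Dart:

```dart
// Illustration only: computes exponential-backoff delays for a policy like
// RetryPolicy(maxRetries: 3, initialDelay: Duration(seconds: 1)), assuming
// each retry doubles the previous delay. The package's actual schedule may differ.
Duration delayForAttempt(int attempt, Duration initialDelay) =>
    initialDelay * (1 << attempt); // attempt 0 -> 1s, attempt 1 -> 2s, attempt 2 -> 4s

void main() {
  const initial = Duration(seconds: 1);
  for (var attempt = 0; attempt < 3; attempt++) {
    print('retry $attempt after ${delayForAttempt(attempt, initial).inSeconds}s');
  }
}
```

Capping the delay (and adding jitter) is worth considering for long retry chains, but the built-in `RetryPolicy` already handles retries for you; this sketch only shows the arithmetic.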
Environment variables:

- `OPENAI_API_KEY`
- `OPENAI_BASE_URL`
Use explicit configuration on web builds where runtime environment variables are not available.
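For example, before running a CLI that calls `OpenResponsesClient.fromEnvironment()` (the key value and script path below are placeholders):

```shell
# Point the client at OpenAI-style defaults via the environment.
export OPENAI_API_KEY="YOUR_API_KEY"
export OPENAI_BASE_URL="https://api.openai.com/v1"  # optional override
dart run bin/main.dart
```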
Usage #
How do I create a response? #
The CreateResponseRequest.text(...) helper keeps simple requests short, while the full request type remains available for advanced configurations. response.outputText is the fastest path for text-first integrations.
```dart
import 'package:open_responses/open_responses.dart';

Future<void> main() async {
  final client = OpenResponsesClient(
    config: OpenResponsesConfig(
      baseUrl: 'https://api.openai.com/v1',
      authProvider: BearerTokenProvider('YOUR_API_KEY'),
    ),
  );
  try {
    final response = await client.responses.create(
      CreateResponseRequest.text(
        model: 'gpt-4o',
        input: 'Summarize why provider portability matters.',
      ),
    );
    print(response.outputText);
  } finally {
    client.close();
  }
}
```
How do I stream events? #
Streaming works through a runner helper or manual event iteration. That makes it easy to bind partial text updates to a CLI, server-sent event bridge, or Flutter state notifier.
```dart
import 'dart:io';

import 'package:open_responses/open_responses.dart';

Future<void> main() async {
  final client = OpenResponsesClient.fromEnvironment();
  try {
    final runner = client.responses.stream(
      CreateResponseRequest.text(
        model: 'gpt-4o',
        input: 'Write a short note about Flutter desktop.',
      ),
    )..onTextDelta(stdout.write);
    await runner.finalResponse;
  } finally {
    client.close();
  }
}
```
How do I use tool calling? #
Tool calling is declared in the request, which keeps the provider-neutral contract intact. This is useful when you want one agent loop that can run against hosted or local providers.
```dart
import 'package:open_responses/open_responses.dart';

Future<void> main() async {
  final client = OpenResponsesClient.fromEnvironment();
  try {
    final response = await client.responses.create(
      CreateResponseRequest(
        model: 'gpt-4o',
        input: const ResponseTextInput('What is the weather in Berlin?'),
        tools: [
          FunctionTool(
            name: 'get_weather',
            description: 'Get the current weather',
            parameters: const {
              'type': 'object',
              'properties': {
                'location': {'type': 'string'},
              },
              'required': ['location'],
            },
          ),
        ],
      ),
    );
    print(response.functionCalls.length);
  } finally {
    client.close();
  }
}
```
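The example above stops at counting the model's function calls; a real agent loop would execute each call and return the result to the model. The dispatch side of that loop can be sketched in plain Dart. Note the `name`/`arguments` shapes below are stand-ins for illustration, not confirmed fields of the package's `FunctionCall` type, and `runTool` is a hypothetical helper:

```dart
// Sketch of the tool-dispatch half of an agent loop. The call shapes here
// are assumptions for illustration, not confirmed open_responses API.
import 'dart:convert';

String runTool(String name, Map<String, dynamic> args) {
  switch (name) {
    case 'get_weather':
      // A real implementation would call a weather service here.
      return jsonEncode({'location': args['location'], 'temp_c': 18});
    default:
      throw ArgumentError('unknown tool: $name');
  }
}

void main() {
  // Stand-in for the function calls returned by the response above.
  final calls = [
    {'name': 'get_weather', 'arguments': '{"location": "Berlin"}'},
  ];
  for (final call in calls) {
    final args = jsonDecode(call['arguments']!) as Map<String, dynamic>;
    print(runTool(call['name']!, args));
  }
}
```

See `tool_calling_example.dart` in the `example/` directory for the package's own end-to-end version.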
How do I keep a multi-turn conversation? #
OpenResponses keeps follow-up turns provider-neutral through previousResponseId. That is the main portability feature when you need stateful assistants without rewriting request history handling for each backend.
```dart
import 'package:open_responses/open_responses.dart';

Future<void> main() async {
  final client = OpenResponsesClient.fromEnvironment();
  try {
    final first = await client.responses.create(
      CreateResponseRequest.text(
        model: 'gpt-4o',
        input: 'My name is Alice.',
      ),
    );
    final second = await client.responses.create(
      CreateResponseRequest(
        model: 'gpt-4o',
        input: const ResponseTextInput('What is my name?'),
        previousResponseId: first.id,
      ),
    );
    print(second.outputText);
  } finally {
    client.close();
  }
}
```
How do I request structured output? #
Structured output uses the same response surface and adds a JSON schema under text.format. That keeps extraction workflows consistent even when you swap providers behind the same Dart code.
```dart
import 'package:open_responses/open_responses.dart';

Future<void> main() async {
  final client = OpenResponsesClient.fromEnvironment();
  try {
    final response = await client.responses.create(
      CreateResponseRequest(
        model: 'gpt-4o',
        input: const ResponseTextInput('List three fruits and their colors.'),
        text: TextConfig(
          format: JsonSchemaFormat(
            name: 'fruits',
            schema: const {
              'type': 'object',
              'properties': {
                'fruits': {
                  'type': 'array',
                  'items': {
                    'type': 'object',
                    'properties': {
                      'name': {'type': 'string'},
                      'color': {'type': 'string'},
                    },
                    'required': ['name', 'color'],
                  },
                },
              },
              'required': ['fruits'],
            },
          ),
        ),
      ),
    );
    print(response.outputText);
  } finally {
    client.close();
  }
}
```
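Because `text.format` constrains the model to the schema, the returned `outputText` should be JSON in that shape, so it can be decoded directly with `dart:convert`. A minimal sketch, using a hard-coded sample payload in place of a live response:

```dart
import 'dart:convert';

/// Decodes schema-constrained output text into a list of {name, color} maps.
/// Matches the 'fruits' schema shown in the request above.
List<Map<String, dynamic>> parseFruits(String outputText) {
  final decoded = jsonDecode(outputText) as Map<String, dynamic>;
  return (decoded['fruits'] as List).cast<Map<String, dynamic>>();
}

void main() {
  // Sample payload standing in for response.outputText.
  const sample =
      '{"fruits": [{"name": "apple", "color": "red"}, {"name": "kiwi", "color": "green"}]}';
  for (final fruit in parseFruits(sample)) {
    print('${fruit['name']}: ${fruit['color']}');
  }
}
```

In production code, mapping the decoded JSON into a typed Dart class (or validating it) guards against providers that enforce the schema less strictly.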
How do I switch providers without changing my app code? #
The provider switch happens in OpenResponsesConfig, not in the request body. That keeps your higher-level application logic independent from the underlying deployment target.
```dart
import 'package:open_responses/open_responses.dart';

Future<void> main() async {
  final client = OpenResponsesClient(
    config: const OpenResponsesConfig(
      baseUrl: 'http://localhost:11434/v1',
    ),
  );
  try {
    final response = await client.responses.create(
      CreateResponseRequest.text(
        model: 'llama3.2',
        input: 'Explain what hot reload means.',
      ),
    );
    print(response.outputText);
  } finally {
    client.close();
  }
}
```
Error Handling #
Handle provider errors, retries, and validation failures
open_responses throws typed exceptions so provider differences do not collapse into raw HTTP status handling. Catch specific subtypes such as RateLimitException first, then ApiException for other provider errors, and fall back to OpenResponsesException for transport or client-side failures.
```dart
import 'dart:io';

import 'package:open_responses/open_responses.dart';

Future<void> main() async {
  final client = OpenResponsesClient.fromEnvironment();
  try {
    await client.responses.create(
      CreateResponseRequest.text(
        model: 'gpt-4o',
        input: 'Ping',
      ),
    );
  } on RateLimitException catch (error) {
    stderr.writeln('Retry after: ${error.retryAfter}');
  } on ApiException catch (error) {
    stderr.writeln('Provider API error ${error.statusCode}: ${error.message}');
  } on OpenResponsesException catch (error) {
    stderr.writeln('OpenResponses client error: $error');
  } finally {
    client.close();
  }
}
```
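The client's built-in `RetryPolicy` already covers most retry needs, but for one-off scripts it can be useful to see the wait-and-retry shape spelled out. The helper below is a generic pure-Dart sketch (it retries any `Future`-returning function); with open_responses you would catch `RateLimitException` and derive the wait from its `retryAfter` hint inside `delayFor`:

```dart
import 'dart:async';

/// Retries [action] up to [maxAttempts] times, waiting between attempts.
/// [delayFor] maps the caught error to a wait; with open_responses it could
/// return RateLimitException.retryAfter. Types here are kept generic.
Future<T> retrying<T>(
  Future<T> Function() action, {
  int maxAttempts = 3,
  Duration Function(Object error)? delayFor,
}) async {
  for (var attempt = 1; ; attempt++) {
    try {
      return await action();
    } catch (error) {
      if (attempt >= maxAttempts) rethrow;
      await Future<void>.delayed(
          delayFor?.call(error) ?? const Duration(seconds: 1));
    }
  }
}

Future<void> main() async {
  var failures = 0;
  // Simulate a call that fails twice before succeeding.
  final result = await retrying(() async {
    if (failures++ < 2) throw StateError('simulated 429');
    return 'ok';
  }, delayFor: (_) => Duration.zero);
  print(result);
}
```

Prefer the client's own retry configuration when it fits; a hand-rolled loop like this is mainly for cases where you need custom scheduling across multiple calls.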
Examples #
See the `example/` directory for complete examples:

| Example | Description |
|---|---|
| `create_response_example.dart` | Basic response creation |
| `streaming_example.dart` | Streaming events |
| `tool_calling_example.dart` | Tool calling |
| `multi_turn_example.dart` | Multi-turn conversations |
| `structured_output_example.dart` | Structured output with JSON schema |
| `provider_switch_example.dart` | Switching providers |
| `mcp_tools_example.dart` | MCP tool integration |
| `reasoning_example.dart` | Reasoning models |
| `error_handling_example.dart` | Exception handling patterns |
API Coverage #
| API | Status |
|---|---|
| Responses | ✅ Full |
Official Documentation #
Sponsor #
If these packages are useful to you or your company, please consider sponsoring the project. Development and maintenance are provided to the community for free, but integration tests against real APIs and the tooling required to build and verify releases still have real costs. Your support, at any level, helps keep these packages maintained and free for the Dart & Flutter community.
License #
This package is licensed under the MIT License.
This is a community-maintained package and is not affiliated with or endorsed by the OpenResponses project.