openrouter_api 1.0.2
A type-safe way of interacting with the OpenRouter API.
OpenRouter API Dart Package #
A comprehensive Dart package that provides a simple and robust wrapper for the OpenRouter.ai REST API. It enables easy integration for both Dart and Flutter applications, with full support for standard and streaming completions, key management, multi-modal content, and more.
Installation #
To use this package, add openrouter_api as a dependency in your pubspec.yaml file.
From the command line: #
- For Dart projects:
dart pub add openrouter_api
- For Flutter projects:
flutter pub add openrouter_api
Or manually: #
Add this to your pubspec.yaml file:
dependencies:
openrouter_api: ^<latest_version>
Then, run dart pub get or flutter pub get to install the package.
Usage #
1. Initialization #
All interactions with the OpenRouter API are handled through the OpenRouter client. You can get your API key from the OpenRouter Keys page.
The package provides two types of clients depending on your needs:
Inference Client
This is the standard client for making API requests like chat completions.
import 'package:openrouter_api/openrouter_api.dart';
// Initialize the client for making inference requests.
final client = OpenRouter.inference(
key: "YOUR_OPENROUTER_API_KEY",
);
You can also provide optional appId and appTitle parameters, which are sent as the HTTP-Referer and X-Title headers respectively.
final clientWithHeaders = OpenRouter.inference(
key: "YOUR_OPENROUTER_API_KEY",
appId: "https://your-app-url.com",
appTitle: "Your App Title",
);
Provisioner Client
This client is used specifically with a provisioning key to manage other API keys.
import 'package:openrouter_api/openrouter_api.dart';
// Initialize the client for managing keys.
final provisioner = OpenRouter.provisioner(
key: "YOUR_PROVISIONING_KEY",
);
2. Making Requests #
Once the client is initialized, you can start making requests.
Standard (Future-based) Completion
For simple use cases where you want to wait for the full response, use getCompletion.
final client = OpenRouter.inference(key: "YOUR_OPENROUTER_API_KEY");
// Define the messages for the request.
final messages = [
LlmMessage.user(content: LlmMessageContent.text("What is the capital of France?")),
];
// You can also use the `basic` constructor, which creates a plain text message:
//
// final message = LlmMessage.basic("What is the capital of France?");
try {
final LlmResponse response = await client.getCompletion(
modelId: "openai/gpt-3.5-turbo",
messages: messages,
);
print(response.choices.first.content);
} on OpenRouterError catch (e) {
print("API Error: ${e.message} (Code: ${e.code})");
}
Streaming Completion
For real-time applications, use streamCompletion
. The last response delivered from the stream contains the aggregated usage and detailed generation data.
final client = OpenRouter.inference(key: "YOUR_OPENROUTER_API_KEY");
final messages = [
LlmMessage.user(content: LlmMessageContent.text("Write a short story about a brave knight.")),
];
try {
final Stream<LlmResponse> responseStream = client.streamCompletion(
modelId: "google/gemini-pro-2.5",
messages: messages,
);
  await for (final LlmResponse response in responseStream) {
    // Access the content directly.
    print(response.choices.first.content);
  }
} on OpenRouterError catch (e) {
  print("API Error during stream: ${e.message}");
}
Example output:
// there
// was once
// a boy
// ...
// ...
// ...
// the end.
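Since each streamed response carries an incremental chunk, you can accumulate the chunks into the full reply as they arrive. A minimal sketch using only the streamCompletion API shown above (it assumes each chunk's content is non-null text):

```dart
import 'package:openrouter_api/openrouter_api.dart';

Future<void> main() async {
  final client = OpenRouter.inference(key: "YOUR_OPENROUTER_API_KEY");
  final buffer = StringBuffer();

  final stream = client.streamCompletion(
    modelId: "google/gemini-pro-2.5",
    messages: [
      LlmMessage.user(content: LlmMessageContent.text("Write a haiku about spring.")),
    ],
  );

  await for (final response in stream) {
    // Append each incremental chunk as it arrives.
    buffer.write(response.choices.first.content);
  }

  // The accumulated text is the complete reply.
  print(buffer.toString());
}
```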
3. Understanding the Response #
The LlmResponse object contains the API's reply.

- id (String): A unique identifier for the request.
- model (String): The model used for the response.
- choices (List<LlmResponseChoice>): The main content is in response.choices.first.content.
- cost (double): The total cost of the request.
- generation (LlmGeneration?): Detailed generation metadata, populated when withGenerationDetails: true is set in a getCompletion call.
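Putting the fields together, a minimal sketch of inspecting a response (the field names and the withGenerationDetails parameter follow the descriptions above):

```dart
import 'package:openrouter_api/openrouter_api.dart';

Future<void> main() async {
  final client = OpenRouter.inference(key: "YOUR_OPENROUTER_API_KEY");

  // Request detailed generation metadata along with the completion.
  final LlmResponse response = await client.getCompletion(
    modelId: "openai/gpt-4o",
    messages: [LlmMessage.user(content: LlmMessageContent.text("Hello!"))],
    withGenerationDetails: true,
  );

  print(response.id);         // Unique request identifier.
  print(response.model);      // Model that produced the response.
  print(response.cost);       // Total cost of the request.
  print(response.generation); // Populated because withGenerationDetails is true.
}
```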
4. Constructing Messages #
The package uses different constructors for each message role, with the user role having special multi-modal capabilities.
System and Assistant Messages
The system and assistant roles are for providing instructions and conversation history. Their constructors accept a simple String for the content.
LlmMessage.system(content: "You are a helpful assistant that speaks in pirate slang.");
LlmMessage.assistant(content: "Ahoy! The capital o' France be Paris, matey!");
User Messages (Text and Multi-modal)
The user role is unique because it accepts an LlmMessageContent object, which allows for rich, multi-modal content. Only the user role can send images, files, or a mix of content types.
For simple text, use LlmMessageContent.text():
LlmMessage.user(content: LlmMessageContent.text("What is it known for?"));
To send multiple parts, such as text and an image for a vision model, use LlmMessageContent.parts():
final messages = [
LlmMessage.user(
content: LlmMessageContent.parts([
LlmMessageContent.text("What is in this picture?"),
LlmMessageContent.imageUrl("https://i.imgur.com/kQ1c2d6.png"),
]),
),
];
// Use a vision-capable model
final response = await client.getCompletion(
modelId: "google/gemini-pro-vision",
messages: messages,
);
Example: Full Conversation
Here is how you would structure a complete, multi-turn conversation:
final messages = [
LlmMessage.system(content: "You are a helpful assistant."),
LlmMessage.user(content: LlmMessageContent.text("What is the capital of France?")),
LlmMessage.assistant(content: "The capital of France is Paris."),
LlmMessage.user(content: LlmMessageContent.text("What is it known for?")),
];
final response = await client.getCompletion(
modelId: "openai/gpt-4o",
messages: messages,
);
print(response.choices.first.content);
Advanced Usage #
Error Handling #
API-specific errors should be handled by catching the OpenRouterError exception.
try {
await client.getCompletion(
modelId: "invalid/model",
messages: [LlmMessage.user(content: LlmMessageContent.text("test"))]
);
} on OpenRouterError catch (e) {
print("Caught an API Error!");
print("Code: ${e.code}");
print("Message: ${e.message}");
print("Metadata: ${e.metaData}");
} catch (e) {
print("Caught a general error: $e");
}
Listing Models, Endpoints, and Providers #
You can programmatically fetch the lists of available models, endpoints, and providers.
List<LlmModel> models = await client.listModels();
print("Found ${models.length} models.");
List<LlmEndpoint> endpoints = await client.listEndpoints(modelId: "qwen/qwen3-next-80b-a3b-instruct");
print("Found ${endpoints.length} endpoints.");
List<LlmProvider> providers = await client.listProviders();
print("Found ${providers.length} providers.");
Key Management (Provisioning) #
Using a provisioning key, the OpenRouter.provisioner client can manage API keys.
final provisioner = OpenRouter.provisioner(key: "YOUR_PROVISIONING_KEY");
try {
// List associated keys
final keys = await provisioner.listKey();
print(keys);
// Create a new key
final (String literalKey, OpenRouterKey keyDetails) = await provisioner.createKey(
name: "New Test Key",
limit: 10.0, // Optional: Set a $10 spending limit
);
print("Created new key: $literalKey with hash: ${keyDetails.hash}");
// Clean up
await provisioner.deleteKey(hash: keyDetails.hash);
print("Deleted key.");
} on OpenRouterError catch (e) {
print("Key management error: ${e.message}");
}
API Reference #
For a complete list of all available methods, classes, and parameters, please refer to the full API Reference.
OpenRouter documentation #
For more details on the OpenRouter API, check their documentation.
License #
This package is licensed under the MIT License.