ChatOpenAI class

Wrapper around the OpenAI Chat Completions API.

Example:

final chatModel = ChatOpenAI(apiKey: '...');
final messages = [
  ChatMessage.system('You are a helpful assistant that translates English to French.'),
  ChatMessage.humanText('I love programming.'),
];
final prompt = PromptValue.chat(messages);
final res = await chatModel.invoke(prompt);

You can also use this wrapper to consume OpenAI-compatible APIs like Anyscale, Together AI, OpenRouter, One API, etc.
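For example, a minimal sketch pointing the client at OpenRouter's OpenAI-compatible endpoint (the base URL and model id below are assumptions; check your provider's documentation for the correct values):

final chatModel = ChatOpenAI(
  baseUrl: 'https://openrouter.ai/api/v1', // the provider's OpenAI-compatible endpoint
  apiKey: openRouterApiKey, // hypothetical variable holding your provider's key
  defaultOptions: const ChatOpenAIOptions(
    model: 'mistralai/mistral-7b-instruct', // example provider-specific model id
  ),
);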

Call options

You can configure the parameters that will be used when calling the chat completions API in several ways:

Default options:

Use the defaultOptions parameter to set the default options. These options will be used unless you override them when generating completions.

final chatModel = ChatOpenAI(
  apiKey: openaiApiKey,
  defaultOptions: const ChatOpenAIOptions(
    temperature: 0.9,
    maxTokens: 100,
  ),
);

Call options:

You can override the default options when invoking the model:

final res = await chatModel.invoke(
  prompt,
  options: const ChatOpenAIOptions(seed: 9999),
);

Bind:

You can also change the options in a Runnable pipeline using the bind method.

In this example, we use a different model for each question:

final chatModel = ChatOpenAI(apiKey: openaiApiKey);
const outputParser = StringOutputParser();
final prompt1 = PromptTemplate.fromTemplate('How are you {name}?');
final prompt2 = PromptTemplate.fromTemplate('How old are you {name}?');
final chain = Runnable.fromMap({
  'q1': prompt1 | chatModel.bind(const ChatOpenAIOptions(model: 'gpt-4')) | outputParser,
  'q2': prompt2 | chatModel.bind(const ChatOpenAIOptions(model: 'gpt-3.5-turbo')) | outputParser,
});
final res = await chain.invoke({'name': 'David'});

Authentication

The OpenAI API uses API keys for authentication. Visit your API Keys page to retrieve the API key you'll use in your requests.
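A common pattern is to read the key from an environment variable instead of hard-coding it (a sketch; assumes you have exported OPENAI_API_KEY in your shell):

import 'dart:io';

final apiKey = Platform.environment['OPENAI_API_KEY']; // null if not set
final chatModel = ChatOpenAI(apiKey: apiKey);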

Organization (optional)

For users who belong to multiple organizations, you can specify which organization is used for an API request. Usage from these API requests will count against the specified organization's subscription quota.

final client = ChatOpenAI(
  apiKey: 'OPENAI_API_KEY',
  organization: 'org-dtDDtkEGoFccn5xaP5W1p3Rr',
);

Advanced

Azure OpenAI Service

OpenAI's models are also available as an Azure service.

Although the Azure OpenAI API is similar to the official OpenAI API, there are subtle differences between them. This client is intended to be used with the official OpenAI API, but most of the functionality should work with the Azure OpenAI API as well.

If you want to use this client with the Azure OpenAI API (at your own risk), you can do so by instantiating the client as follows:

final client = ChatOpenAI(
  baseUrl: 'https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME',
  headers: { 'api-key': 'YOUR_API_KEY' },
  queryParams: { 'api-version': 'API_VERSION' },
);
  • YOUR_RESOURCE_NAME: This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal.
  • YOUR_DEPLOYMENT_NAME: This value will correspond to the custom name you chose for your deployment when you deployed a model. This value can be found under Resource Management > Deployments in the Azure portal.
  • YOUR_API_KEY: This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal.
  • API_VERSION: The Azure OpenAI API version to use (e.g. 2023-05-15). Try to use the latest version available, as it will probably be the closest to the official OpenAI API.

Custom HTTP client

You can always provide your own implementation of http.Client for further customization:

final client = ChatOpenAI(
  apiKey: 'OPENAI_API_KEY',
  client: MyHttpClient(),
);
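MyHttpClient above stands for any http.Client implementation. A minimal sketch of what such a client could look like, logging every request before delegating to a default client:

import 'package:http/http.dart' as http;

class MyHttpClient extends http.BaseClient {
  MyHttpClient([http.Client? inner]) : _inner = inner ?? http.Client();

  final http.Client _inner;

  @override
  Future<http.StreamedResponse> send(http.BaseRequest request) {
    // Log the outgoing request, then delegate to the wrapped client.
    print('${request.method} ${request.url}');
    return _inner.send(request);
  }

  @override
  void close() => _inner.close();
}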

Using a proxy

HTTP proxy

You can use your own HTTP proxy by overriding the baseUrl and providing your required headers:

final client = ChatOpenAI(
  baseUrl: 'https://my-proxy.com',
  headers: {'x-my-proxy-header': 'value'},
);

If you need further customization, you can always provide your own http.Client.

SOCKS5 proxy

To use a SOCKS5 proxy, you can use the socks5_proxy package and a custom http.Client.
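A minimal sketch, assuming the socks5_proxy package exposes SocksTCPClient.assignToHttpClient and that a proxy is listening on localhost:1080 (IOClient from package:http adapts the dart:io HttpClient to an http.Client):

import 'dart:io';

import 'package:http/io_client.dart';
import 'package:socks5_proxy/socks_client.dart';

ChatOpenAI createProxiedChatModel(String apiKey) {
  final httpClient = HttpClient();
  // Route this client's TCP connections through the SOCKS5 proxy
  // (assumed socks5_proxy API; check the package documentation).
  SocksTCPClient.assignToHttpClient(httpClient, [
    ProxySettings(InternetAddress.loopbackIPv4, 1080),
  ]);
  return ChatOpenAI(
    apiKey: apiKey,
    client: IOClient(httpClient), // adapts dart:io HttpClient to http.Client
  );
}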

Constructors

ChatOpenAI({String? apiKey, String? organization, String baseUrl = 'https://api.openai.com/v1', Map<String, String>? headers, Map<String, dynamic>? queryParams, Client? client, ChatOpenAIOptions defaultOptions = const ChatOpenAIOptions(model: 'gpt-3.5-turbo'), String? encoding})
Create a new ChatOpenAI instance.

Properties

apiKey ↔ String
Get the API key.
getter/setter pair
defaultOptions → ChatOpenAIOptions
The default options to use when invoking the Runnable.
final, inherited
encoding ↔ String?
The encoding used by tiktoken when tokenize is called.
getter/setter pair
hashCode → int
The hash code for this object.
no setter, inherited
modelType → String
Return type of language model.
no setter
runtimeType → Type
A representation of the runtime type of the object.
no setter, inherited

Methods

batch(List<PromptValue> inputs, {List<ChatOpenAIOptions>? options}) → Future<List<ChatResult>>
Batches the invocation of the Runnable on the given inputs.
inherited
bind(ChatOpenAIOptions options) → RunnableBinding<PromptValue, ChatOpenAIOptions, ChatResult>
Binds the Runnable to the given options.
inherited
call(List<ChatMessage> messages, {ChatOpenAIOptions? options}) → Future<AIChatMessage>
Runs the chat model on the given messages and returns a chat message.
inherited
close() → void
Cleans up any resources associated with the Runnable.
countTokens(PromptValue promptValue, {ChatOpenAIOptions? options}) → Future<int>
Returns the number of tokens resulting from tokenizing the given prompt.
getCompatibleOptions(RunnableOptions? options) → ChatOpenAIOptions?
Returns the given options if they are compatible with the Runnable, otherwise returns null.
inherited
invoke(PromptValue input, {ChatOpenAIOptions? options}) → Future<ChatResult>
Invokes the Runnable on the given input.
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
pipe<NewRunOutput extends Object?, NewCallOptions extends RunnableOptions>(Runnable<ChatResult, NewCallOptions, NewRunOutput> next) → RunnableSequence<PromptValue, NewRunOutput>
Pipes the output of this Runnable into another Runnable using a RunnableSequence.
inherited
stream(PromptValue input, {ChatOpenAIOptions? options}) → Stream<ChatResult>
Streams the output of invoking the Runnable on the given input (see the example after this list).
streamFromInputStream(Stream<PromptValue> inputStream, {ChatOpenAIOptions? options}) → Stream<ChatResult>
Streams the output of invoking the Runnable on the given inputStream.
inherited
throwNullModelError() → Never
Throws an error if the model id is not specified.
inherited
tokenize(PromptValue promptValue, {ChatOpenAIOptions? options}) → Future<List<int>>
Tokenizes the given prompt using tiktoken with the encoding used by the model. If an encoding is specified in the encoding field, that encoding is used instead.
toString() → String
A string representation of this object.
inherited
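A minimal streaming sketch (assuming the same setup as the examples above, and that each emitted ChatResult exposes the partial message text via output.content):

final chatModel = ChatOpenAI(apiKey: openaiApiKey);
final stream = chatModel.stream(
  PromptValue.string('Tell me a joke about llamas'),
);
await for (final chunk in stream) {
  // Each chunk carries the next fragment of the assistant's reply.
  print(chunk.output.content);
}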

Operators

operator ==(Object other) → bool
The equality operator.
inherited