OpenAI class
Wrapper around the OpenAI Completions API.
Example:
final llm = OpenAI(apiKey: '...');
final prompt = PromptValue.string('Tell me a joke');
final res = await llm.invoke(prompt);
Call options
You can configure the parameters that will be used when calling the completions API in several ways:
Default options:
Use the defaultOptions parameter to set the default options. These options will be used unless you override them when generating completions.
final llm = OpenAI(
apiKey: openaiApiKey,
defaultOptions: const OpenAIOptions(
temperature: 0.9,
maxTokens: 100,
),
);
final prompt = PromptValue.string('Hello world!');
final result = await llm.invoke(prompt);
Call options:
You can override the default options when invoking the model:
final res = await llm.invoke(
prompt,
options: const OpenAIOptions(seed: 9999),
);
Bind:
You can also change the options in a Runnable pipeline using the bind method.
In this example, we use a different model for each question:
final llm = OpenAI(apiKey: openaiApiKey);
const outputParser = StringOutputParser();
final prompt1 = PromptTemplate.fromTemplate('How are you {name}?');
final prompt2 = PromptTemplate.fromTemplate('How old are you {name}?');
final chain = Runnable.fromMap({
'q1': prompt1 | llm.bind(const OpenAIOptions(model: 'gpt-3.5-turbo-instruct')) | outputParser,
'q2': prompt2 | llm.bind(const OpenAIOptions(model: 'text-davinci-003')) | outputParser,
});
final res = await chain.invoke({'name': 'David'});
Authentication
The OpenAI API uses API keys for authentication. Visit your API Keys page to retrieve the API key you'll use in your requests.
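Rather than hard-coding the key, you can read it from an environment variable. The following is a minimal sketch; the variable name OPENAI_API_KEY is a common convention (not a client requirement), and the import assumes the langchain_openai package:

```dart
import 'dart:io';

import 'package:langchain_openai/langchain_openai.dart';

void main() {
  // Read the API key from the environment instead of embedding it in code.
  final apiKey = Platform.environment['OPENAI_API_KEY'];
  if (apiKey == null) {
    stderr.writeln('Missing OPENAI_API_KEY environment variable');
    exit(1);
  }
  final llm = OpenAI(apiKey: apiKey);
  // ... use `llm` as usual ...
}
```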
Organization (optional)
For users who belong to multiple organizations, you can specify which organization is used for an API request. Usage from these API requests will count against the specified organization's subscription quota.
final client = OpenAI(
apiKey: 'OPENAI_API_KEY',
organization: 'org-dtDDtkEGoFccn5xaP5W1p3Rr',
);
Advanced
Azure OpenAI Service
OpenAI's models are also available as an Azure service.
Although the Azure OpenAI API is similar to the official OpenAI API, there are subtle differences between them. This client is intended to be used with the official OpenAI API, but most of the functionality should work with the Azure OpenAI API as well.
If you want to use this client with the Azure OpenAI API (at your own risk), you can do so by instantiating the client as follows:
final client = OpenAI(
baseUrl: 'https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME',
headers: { 'api-key': 'YOUR_API_KEY' },
queryParams: { 'api-version': 'API_VERSION' },
);
YOUR_RESOURCE_NAME: This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal.
YOUR_DEPLOYMENT_NAME: This value corresponds to the custom name you chose for your deployment when you deployed a model. It can be found under Resource Management > Deployments in the Azure portal.
YOUR_API_KEY: This value can be found in the Keys & Endpoint section when examining your resource from the Azure portal.
API_VERSION: The Azure OpenAI API version to use (e.g. 2023-05-15). Use the latest version available; it will probably be the closest to the official OpenAI API.
Custom HTTP client
You can always provide your own implementation of http.Client for further customization:
final client = OpenAI(
apiKey: 'OPENAI_API_KEY',
client: MyHttpClient(),
);
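MyHttpClient in the example above is a placeholder. As a sketch, a custom client can be built on http.BaseClient from package:http; this hypothetical variant logs every request before delegating to an inner client (retries, timeouts, or tracing could be added the same way):

```dart
import 'package:http/http.dart' as http;

/// A hypothetical custom client that logs each request before
/// delegating to an inner http.Client.
class MyHttpClient extends http.BaseClient {
  MyHttpClient([http.Client? inner]) : _inner = inner ?? http.Client();

  final http.Client _inner;

  @override
  Future<http.StreamedResponse> send(http.BaseRequest request) {
    // Log the method and URL of every outgoing request.
    print('${request.method} ${request.url}');
    return _inner.send(request);
  }

  @override
  void close() => _inner.close();
}
```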
Using a proxy
HTTP proxy
You can use your own HTTP proxy by overriding the baseUrl and providing your required headers:
final client = OpenAI(
baseUrl: 'https://my-proxy.com',
headers: {'x-my-proxy-header': 'value'},
);
If you need further customization, you can always provide your own http.Client.
SOCKS5 proxy
To use a SOCKS5 proxy, you can use the socks5_proxy package and a custom http.Client.
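A minimal sketch of this setup follows, assuming a SOCKS5 proxy listening on localhost port 1080; check the socks5_proxy package documentation for the exact API:

```dart
import 'dart:io';

import 'package:http/io_client.dart';
import 'package:socks5_proxy/socks_client.dart';

void main() {
  // Route the underlying dart:io HttpClient through the SOCKS5 proxy.
  final baseHttpClient = HttpClient();
  SocksTCPClient.assignToHttpClient(baseHttpClient, [
    ProxySettings(InternetAddress.loopbackIPv4, 1080),
  ]);

  // Wrap it in an http.Client and pass it to the OpenAI wrapper.
  final httpClient = IOClient(baseHttpClient);
  final client = OpenAI(
    apiKey: 'OPENAI_API_KEY',
    client: httpClient,
  );
}
```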
Constructors
Properties
- apiKey ↔ String
  Get the API key. (getter/setter pair)
- defaultOptions → OpenAIOptions
  The default options to use when invoking the Runnable. (final, inherited)
- encoding ↔ String?
  The encoding to use by tiktoken when tokenize is called. (getter/setter pair)
- hashCode → int
  The hash code for this object. (no setter, inherited)
- modelType → String
  Return type of language model. (no setter)
- runtimeType → Type
  A representation of the runtime type of the object. (no setter, inherited)
Methods
- batch(List<PromptValue> inputs, {List<OpenAIOptions>? options}) → Future<List<LLMResult>>
  Batches the invocation of the Runnable on the given inputs.
- bind(OpenAIOptions options) → RunnableBinding<PromptValue, OpenAIOptions, LLMResult>
  Binds the Runnable to the given options. (inherited)
- call(String prompt, {OpenAIOptions? options}) → Future<String>
  Runs the LLM on the given String prompt and returns a String with the generated text. (inherited)
- close() → void
  Cleans up any resources associated with the Runnable.
- countTokens(PromptValue promptValue, {OpenAIOptions? options}) → Future<int>
  Returns the number of tokens resulting from tokenizing the given prompt. (inherited)
- getCompatibleOptions(RunnableOptions? options) → OpenAIOptions?
  Returns the given options if they are compatible with the Runnable, otherwise returns null. (inherited)
- invoke(PromptValue input, {OpenAIOptions? options}) → Future<LLMResult>
  Invokes the Runnable on the given input.
- noSuchMethod(Invocation invocation) → dynamic
  Invoked when a nonexistent method or property is accessed. (inherited)
- pipe<NewRunOutput extends Object?, NewCallOptions extends RunnableOptions>(Runnable<LLMResult, NewCallOptions, NewRunOutput> next) → RunnableSequence<PromptValue, NewRunOutput>
  Pipes the output of this Runnable into another Runnable using a RunnableSequence. (inherited)
- stream(PromptValue input, {OpenAIOptions? options}) → Stream<LLMResult>
  Streams the output of invoking the Runnable on the given input.
- streamFromInputStream(Stream<PromptValue> inputStream, {OpenAIOptions? options}) → Stream<LLMResult>
  Streams the output of invoking the Runnable on the given inputStream. (inherited)
- throwNullModelError() → Never
  Throws an error if the model id is not specified. (inherited)
- tokenize(PromptValue promptValue, {OpenAIOptions? options}) → Future<List<int>>
  Tokenizes the given prompt using tiktoken with the encoding used by the model. If an encoding model is specified in the encoding field, that encoding is used instead.
- toString() → String
  A string representation of this object. (inherited)
Operators
- operator ==(Object other) → bool
  The equality operator. (inherited)