dart_openai 1.3.0
dart_openai: ^1.3.0
Dart SDK for the OpenAI APIs (GPT-3 & DALL-E). Easily integrate the power of OpenAI's state-of-the-art AI models into your Dart applications.
Dart Client For OpenAI (GPT-3 & DALL-E..) #
An open-source Client package that allows developers to easily integrate the power of OpenAI's state-of-the-art AI models into their Dart/Flutter applications.
This library provides simple and intuitive methods for making requests to OpenAI's various APIs, including the GPT-3 language model, DALL-E image generation, and more.
The package is designed to be lightweight and easy to use, so you can focus on building your application, rather than worrying about the complexities and errors caused by dealing with HTTP requests.
Unofficial: OpenAI does not have any official Dart library.
Note: #
Please note that this client SDK connects directly to OpenAI APIs using HTTP requests.
Key Features #
- Easy-to-use methods that mirror the OpenAI documentation exactly, with additional functionality that makes them more convenient to use from the Dart programming language.
- Authorize just once, then use it anywhere and at any time in your application (see the quick sketch after this list).
- Developer-friendly.
- Stream functionality for the completions API.
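For example, a minimal sketch of the "authorize once, use it anywhere" flow (the import path is assumed for this version and may differ in yours):
import 'package:dart_openai/openai.dart';

void main() async {
  // Set the API key once...
  OpenAI.apiKey = "YOUR API KEY HERE";

  // ...then call any of the APIs from anywhere through the OpenAI.instance singleton.
  final models = await OpenAI.instance.model.list();
  print(models.first.id);
}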
Code Progress (100 %) #
- ✅ Authentication
- ✅ Models
- ✅ Completions
  - ✅ With Stream responses
- ✅ Edits
- ✅ Images
- ✅ Embeddings
- ✅ Files
- ✅ Fine-tunes
  - ❌ With events Stream responses
- ✅ Moderation
- ❌ ChatGPT (as soon as possible when it's released)
Testing Progress (100 %) #
- ✅ Authentication
- ✅ Models
- ✅ Completions
- ✅ Edits
- ✅ Images
- ✅ Embeddings
- ✅ Files
- ✅ Fine-tunes
- ✅ Moderation
Full Documentation: #
For the full documentation of all the members this library offers, check here.
Usage #
Authentication #
API key #
The OpenAI API uses API keys for authentication. You can get your account's API key by visiting the API keys page of your account.
We highly recommend loading your secret key at runtime from a .env file; you can use the dotenv package for Dart applications, or [flutter_dotenv](https://pub.dev/packages/flutter_dotenv) for Flutter ones.
import 'package:dart_openai/openai.dart'; // Package entry file; the file name may differ between versions.
import 'package:dotenv/dotenv.dart';

void main() {
  DotEnv env = DotEnv()..load([".env"]); // Loads our .env file.
  OpenAI.apiKey = env['OPEN_AI_API_KEY']; // Initialize the package with that API key.
  // ..
}
If no apiKey is set and you try to access OpenAI.instance, a MissingApiKeyException will be thrown even before making the actual request.
If the apiKey is set but it is invalid when making requests, a RequestFailedException will be thrown in your app; check the Error Handling section for more info.
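As a minimal sketch of the behavior described above, using only the exception type named in this section:
try {
  // Accessing the singleton before OpenAI.apiKey is set throws immediately,
  // before any HTTP request is made.
  final openAI = OpenAI.instance;
  print(openAI);
} on MissingApiKeyException catch (e) {
  print(e); // Handle the missing key, e.g. ask the user to configure one.
}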
Setting an organization #
If you belong to a specific organization, you can pass its id to OpenAI.organization like this:
OpenAI.organization = "ORGANIZATION ID";
If you don't actually belong to any organization, you can just ignore this section, or set it to null.
Models #
List Models #
Lists the currently available models, and provides basic information about each one such as the owner and availability.
List<OpenAIModelModel> models = await OpenAI.instance.model.list();
OpenAIModelModel firstModel = models.first;
print(firstModel.id); // ...
Retrieve model #
Retrieves a single model by its id and gets additional pieces of information about it.
OpenAIModelModel model = await OpenAI.instance.model.retrieve("text-davinci-003");
print(model.id);
If the model id does not exist, a RequestFailedException will be thrown; check the Error Handling section.
Completions #
Create completion #
Creates a completion based on the provided model, prompt and other properties.
OpenAICompletionModel completion = await OpenAI.instance.completion.create(
model: "text-davinci-003",
prompt: "Dart is a progr",
maxTokens: 20,
temperature: 0.5,
n: 1,
stop: ["\n"],
echo: true,
);
If the request fails (for example, if you pass an invalid model id), a RequestFailedException will be thrown; check the Error Handling section.
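To read the generated text back, a short hedged sketch; the choices and text field names are assumed to mirror the stream example below:
// Field names assumed from the stream example (event.choices.first.text).
print(completion.choices.first.text);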
Create Completion Stream #
In addition to calling OpenAI.instance.completion.create(), which returns a Future and will not produce an actual value until the completion has finished, you can get a Stream of values as they are generated:
Stream<OpenAIStreamCompletionModel> completionStream = OpenAI.instance.completion.createStream(
model: "text-davinci-003",
prompt: "Github is ",
maxTokens: 100,
temperature: 0.5,
topP: 1,
);
completionStream.listen((event) {
final firstCompletionChoice = event.choices.first;
print(firstCompletionChoice.text); // ...
});
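Since this is a regular Dart Stream, it can equally be consumed with await for inside an async function:
// Consume the same stream sequentially inside an async function.
await for (final event in completionStream) {
  print(event.choices.first.text);
}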
Edits #
Creates an edited version of the given prompt based on the used model.
Create edit #
OpenAIEditModel edit = await OpenAI.instance.edit.create(
model: "text-davinci-edit-001";
instruction: "remote all '!'from input text",
input: "Hello!!, I! need to be ! somethi!ng"
n: 1,
temperature: 0.8,
);
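To read the edited text back, a hedged sketch; the choices and text field names are an assumption based on this package's naming conventions:
// Field names assumed; check the full documentation for the exact model shape.
print(edit.choices.first.text);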
Images #
Create image #
Generates a new image based on a prompt given.
OpenAIImageModel image = await OpenAI.instance.image.create(
prompt: 'an astronaut on the sea',
n: 1,
size: OpenAIImageSize.size1024,
responseFormat: OpenAIResponseFormat.url,
);
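To read the result, a hedged sketch; since responseFormat is set to OpenAIResponseFormat.url, each generated item is expected to expose a URL, but the data and url field names are assumptions:
// Field names assumed; prints the URL of the first generated image.
print(image.data.first.url);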
Create image edit #
Creates an edited or extended image given an original image and a prompt.
OpenAiImageEditModel imageEdit = await OpenAI.instance.image.edit(
file: File(/* IMAGE PATH HERE */),
mask: File(/* MASK PATH HERE */),
prompt: "mask the image with a dinosaur",
n: 1,
size: OpenAIImageSize.size1024,
responseFormat: OpenAIResponseFormat.url,
);
Create image variation #
Creates a variation of a given image.
OpenAIImageVariationModel imageVariation = await OpenAI.instance.image.variation(
image: File(/* IMAGE PATH HERE */),
n: 1,
size: OpenAIImageSize.size1024,
responseFormat: OpenAIResponseFormat.url,
);
Embeddings #
Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
Create embeddings #
OpenAIEmbeddingsModel embeddings = await OpenAI.instance.embedding.create(
model: "text-embedding-ada-002",
input: "This is a text input just to test",
);
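To access the generated vector, a hedged sketch; the data and embeddings field names are assumptions based on this package's naming conventions:
// Field names assumed; each data item should carry its vector of doubles.
final vector = embeddings.data.first.embeddings;
print(vector.length); // Dimensionality of the embedding.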
Files #
Files are used to upload documents that can be used with features like Fine-tuning.
List files #
Get a list of all the files uploaded to your OpenAI account.
List<OpenAIFileModel> files = await OpenAI.instance.file.list();
print(files.first.fileName); // ...
print(files.first.id); // ...
Upload file #
Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB. Contact OpenAI if you need to increase the storage limit.
OpenAIFileModel uploadedFile = await OpenAI.instance.file.upload(
file: File("/* FILE PATH HERE */"),
purpose: "fine-tuning",
);
print(uploadedFile.id); // ...
Delete file #
Deletes an existing file by its id.
bool isFileDeleted = await OpenAI.instance.file.delete("/* FILE ID */");
print(isFileDeleted);
Retrieve file #
Fetches a single file by its id and returns information about it.
OpenAIFileModel file = await OpenAI.instance.file.retrieve("FILE ID");
print(file.id);
Retrieve file content #
Fetches the content of a single file by its id.
dynamic fileContent = await OpenAI.instance.file.retrieveContent("FILE ID");
print(fileContent);
Fine Tunes #
Create fine-tune #
Creates a job that fine-tunes a specified model from a given dataset, and returns a fine-tune object describing the enqueued job.
OpenAIFineTuneModel fineTune = await OpenAI.instance.fineTune.create(
trainingFile: "FILE ID",
);
print(fineTune.status); // ...
List fine-tunes #
List your organization's fine-tuning jobs.
List<OpenAIFineTuneModel> fineTunes = await OpenAI.instance.fineTune.list();
print(fineTunes.first); // ...
Retrieve fine-tune #
Retrieves a fine-tune by its id.
OpenAIFineTuneModel fineTune = await OpenAI.instance.fineTune.retrieve("FINE TUNE ID");
print(fineTune.id); // ...
Cancel fine-tune #
Cancels a fine-tune job by its id, and returns it.
OpenAIFineTuneModel cancelledFineTune = await OpenAI.instance.fineTune.cancel("FINE TUNE ID");
print(cancelledFineTune.status); // ...
List fine-tune events #
Lists the progress events of a single fine-tune job by its id.
List<OpenAIFineTuneEventModel> events = await OpenAI.instance.fineTune.listEvents("FINE TUNE ID");
print(events.first.message); // ...
Delete fine-tune #
Deletes a fine-tune job by its id.
bool deleted = await OpenAI.instance.fineTune.delete("FINE TUNE ID");
print(deleted); // ...
Moderations #
Create moderation #
Classifies whether text violates OpenAI's Content Policy.
OpenAIModerationModel moderation = await OpenAI.instance.moderation.create(
input: "I want to kill him",
);
print(moderation.results); // ...
print(moderation.results.first.categories.hate); // ...
Error Handling #
Any time an error happens on the OpenAI API's end (for example, when you try to create an image variation from a non-image file), a RequestFailedException will be thrown automatically inside your Flutter / Dart app. You can use a try-catch to catch that error and take an action based on it:
try {
// This will throw an error.
final errorVariation = await OpenAI.instance.image.variation(
image: File(/*PATH OF NON-IMAGE FILE*/),
);
} on RequestFailedException catch(e) {
print(e.message);
print(e.statusCode);
}