dart_openai 1.5.5

dart_openai: ^1.5.5

Dart SDK for OpenAI APIs (GPT-3 & DALL-E): easily integrate the power of OpenAI's state-of-the-art AI models into your Dart applications.

Dart library for OpenAI APIs (GPT-3, DALL-E, and more) #

NEW: ChatGPT API is added to the library and can be used directly. #




Help this grow and get discovered by other devs with a ⭐ star.



An open-source Client package that allows developers to easily integrate the power of OpenAI's state-of-the-art AI models into their Dart/Flutter applications.

This library provides simple and intuitive methods for making requests to OpenAI's various APIs, including the GPT-3 language model, DALL-E image generation, and more.

The package is designed to be lightweight and easy to use, so you can focus on building your application, rather than worrying about the complexities and errors caused by dealing with HTTP requests.



Unofficial
OpenAI does not have any official Dart library.

Note: #

Please note that this client SDK connects directly to OpenAI APIs using HTTP requests.

✨ Key Features #

  • Easy-to-use methods that mirror the OpenAI documentation, with additional functionality that makes them more convenient to use from Dart.
  • Authorize just once, use it anywhere and at any time in your application.
  • Developer-friendly.
  • Stream functionality for completions API & fine-tune events API.
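As a quick sketch of the authorize-once flow described above (the import path and member names follow this README's own examples; verify them against the version you install):

```dart
import 'package:dart_openai/openai.dart';

void main() async {
  // Set the key once; every later call through OpenAI.instance reuses it.
  OpenAI.apiKey = "YOUR_API_KEY";

  // Any endpoint is then available from the shared singleton:
  final models = await OpenAI.instance.model.list();
  print(models.first.id);
}
```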

👑 Code Progress (90 %) #

💫 Testing Progress (90 %) #


📜 Full Documentation: #

For the full documentation about all members this library offers, check here.


🟢 Usage #

Authentication #

API key #

The OpenAI API uses API keys for authentication. You can get your account's API key by visiting the API keys page of your account.

We highly recommend loading your secret key at runtime from a .env file; you can use the envied package for this.

// .env
OPEN_AI_API_KEY=<REPLACE WITH YOUR API KEY>
// lib/env/env.dart
import 'package:envied/envied.dart';
part 'env.g.dart';

@Envied(path: ".env")
abstract class Env {
  @EnviedField(varName: 'OPEN_AI_API_KEY') // the .env variable.
  static const apiKey = _Env.apiKey;
}
// lib/main.dart
void main() {
 OpenAI.apiKey = Env.apiKey; // Initializes the package with that API key
 // ..
}

If no apiKey is set and you try to access OpenAI.instance, a MissingApiKeyException will be thrown, even before making the actual request.

If the apiKey is set but invalid when making requests, a RequestFailedException will be thrown in your app; check the Error Handling section for more info.

Setting an organization #

If you belong to a specific organization, you can pass its id to OpenAI.organization like this:

 OpenAI.organization = "ORGANIZATION ID";

If you don't actually belong to any organization, you can just ignore this section, or set it to null.

Learn More From Here.


Models #

List Models #

Lists the currently available models, and provides basic information about each one such as the owner and availability.

 List<OpenAIModelModel> models = await OpenAI.instance.model.list();
 OpenAIModelModel firstModel = models.first;

 print(firstModel.id); // ...

Retrieve model #

Retrieves a single model by its id and gets additional pieces of information about it.

 OpenAIModelModel model = await OpenAI.instance.model.retrieve("text-davinci-003");
 print(model.id); // ...

If the model id does not exist, a RequestFailedException will be thrown; check the Error Handling section.

Learn More From Here.


Completions #

Create completion #

Creates a completion based on the provided model, prompt, and other properties.

OpenAICompletionModel completion = await OpenAI.instance.completion.create(
  model: "text-davinci-003",
  prompt: "Dart is a progr",
  maxTokens: 20,
  temperature: 0.5,
  n: 1,
  stop: ["\n"],
  echo: true,
);

If the request fails (for example, if you pass an invalid model id), a RequestFailedException will be thrown; check the Error Handling section.

Create Completion Stream #

In addition to calling OpenAI.instance.completion.create(), which returns a Future and will not resolve until the completion has finished, you can get a Stream of values as they are generated:

Stream<OpenAIStreamCompletionModel> completionStream = OpenAI.instance.completion.createStream(
  model: "text-davinci-003",
  prompt: "Github is ",
  maxTokens: 100,
  temperature: 0.5,
  topP: 1,
 );

completionStream.listen((event) {
 final firstCompletionChoice = event.choices.first;
 print(firstCompletionChoice.text); // ...
});
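Since each streamed event carries only a partial chunk of text, a common pattern is to accumulate the chunks into the final completion (a sketch continuing from the `completionStream` declared above):

```dart
// Accumulate the streamed chunks into the complete completion text.
final buffer = StringBuffer();
await for (final event in completionStream) {
  buffer.write(event.choices.first.text);
}
print(buffer.toString());
```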

Learn More From Here.


Chat (ChatGPT) #

Creates a completion for the chat messages; note that each message needs to be an OpenAIChatCompletionChoiceMessageModel object.

OpenAIChatCompletionModel chatCompletion = await OpenAI.instance.chat.create(
    model: "gpt-3.5-turbo",
    messages: [
      OpenAIChatCompletionChoiceMessageModel(content: "hello, what is Flutter and Dart ?", role: "user"),
    ],
);
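Reading the assistant's reply from the response should look roughly like the sketch below; the field names are assumptions mirroring the request model's naming, so check them against the installed version:

```dart
// Read the assistant's reply from the first returned choice.
final reply = chatCompletion.choices.first.message;
print(reply.role);    // e.g. "assistant"
print(reply.content); // the model's answer text
```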

Edits #

Create edit #

Creates an edited version of the given prompt based on the used model.

OpenAIEditModel edit = await OpenAI.instance.edit.create(
 model: "text-davinci-edit-001",
 instruction: "remove all '!' from input text",
 input: "Hello!!, I! need to be ! somethi!ng",
 n: 1,
 temperature: 0.8,
);

Learn More From Here.


Images #

Create image #

Generates a new image based on a prompt given.

 OpenAIImageModel image = await OpenAI.instance.image.create(
  prompt: 'an astronaut on the sea',
  n: 1,
  size: OpenAIImageSize.size1024,
  responseFormat: OpenAIResponseFormat.url,
);
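Since responseFormat is set to OpenAIResponseFormat.url, each generated image should be reachable through a hosted URL on the response. The `data` field name below is an assumption mirroring OpenAI's JSON response shape:

```dart
// With responseFormat: url, every generated item carries a hosted image URL.
for (final item in image.data) {
  print(item.url);
}
```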

Create image edit #

Creates an edited or extended image given an original image and a prompt.

OpenAiImageEditModel imageEdit = await OpenAI.instance.image.edit(
 file: File(/* IMAGE PATH HERE */),
 mask: File(/* MASK PATH HERE */),
 prompt: "mask the image with a dinosaur",
 n: 1,
 size: OpenAIImageSize.size1024,
 responseFormat: OpenAIResponseFormat.url,
);

Create image variation #

Creates a variation of a given image.

OpenAIImageVariationModel imageVariation = await OpenAI.instance.image.variation(
 image: File(/* IMAGE PATH HERE */),
 n: 1,
 size: OpenAIImageSize.size1024,
 responseFormat: OpenAIResponseFormat.url,
);

Learn More From Here.


Embeddings #

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Create embeddings #

OpenAIEmbeddingsModel embeddings = await OpenAI.instance.embedding.create(
 model: "text-embedding-ada-002",
 input: "This is a text input just to test",
);
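Embedding vectors are typically compared with cosine similarity. The helper below is plain Dart and independent of this package; how you extract the `List<double>` from the `embeddings` result depends on the model fields of the version you use:

```dart
import 'dart:math';

// Cosine similarity between two embedding vectors of equal length.
double cosineSimilarity(List<double> a, List<double> b) {
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (sqrt(normA) * sqrt(normB));
}
```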

Learn More From Here.


Files #

Files are used to upload documents that can be used with features like Fine-tuning.

List files #

Get a list of all the files uploaded to your OpenAI account.

List<OpenAIFileModel> files = await OpenAI.instance.file.list();

print(files.first.fileName); // ...
print(files.first.id); // ...

Upload file #

Upload a file that contains document(s) to be used across various endpoints/features. Currently, the size of all the files uploaded by one organization can be up to 1 GB; contact OpenAI if you need to increase the storage limit.

OpenAIFileModel uploadedFile = await OpenAI.instance.file.upload(
 file: File("/* FILE PATH HERE */"),
 purpose: "fine-tuning",
);

print(uploadedFile.id); // ...

Delete file #

Deletes an existing file by its id.

bool isFileDeleted = await OpenAI.instance.file.delete("/* FILE ID */");

print(isFileDeleted);

Retrieve file #

Fetches a single file by its id and returns information about it.

OpenAIFileModel file = await OpenAI.instance.file.retrieve("FILE ID");
print(file.id);

Retrieve file content #

Fetches a single file's content by its id.

dynamic fileContent  = await OpenAI.instance.file.retrieveContent("FILE ID");

print(fileContent);

Learn More From Here.


Fine Tunes #

Create fine-tune #

Creates a job that fine-tunes a specified model from a given dataset, and returns a fine-tuned object about the enqueued job.

OpenAIFineTuneModel fineTune = await OpenAI.instance.fineTune.create(
 trainingFile: "FILE ID",
);

print(fineTune.status); // ...

List fine-tunes #

List your organization's fine-tuning jobs.

List<OpenAIFineTuneModel> fineTunes = await OpenAI.instance.fineTune.list();

print(fineTunes.first); // ...

Retrieve fine-tune #

Retrieves a fine-tune by its id.

OpenAIFineTuneModel fineTune = await OpenAI.instance.fineTune.retrieve("FINE TUNE ID");

print(fineTune.id); // ...

Cancel fine-tune #

Cancels a fine-tune job by its id, and returns it.

OpenAIFineTuneModel cancelledFineTune = await OpenAI.instance.fineTune.cancel("FINE TUNE ID");

print(cancelledFineTune.status); // ...

List fine-tune events #

Lists a single fine-tune job's progress events by its id.

 List<OpenAIFineTuneEventModel> events = await OpenAI.instance.fineTune.listEvents("FINE TUNE ID");

 print(events.first.message); // ...

Listen to fine-tune events Stream #

Streams all events of a fine-tune job by its id, as they happen.

This is a long-running operation that will not return until the fine-tune job is terminated.

The stream will emit an event every time a new event is available.

Stream<OpenAIFineTuneEventStreamModel> eventsStream = OpenAI.instance.fineTune.listEventsStream("FINE TUNE ID");

eventsStream.listen((event) {
 print(event.message);
});

Delete fine-tune #

Deletes a fine-tune job by its id.

 bool deleted = await OpenAI.instance.fineTune.delete("FINE TUNE ID");

print(deleted); // ...

Learn More From Here.


Moderations #

Create moderation #

Classifies whether text violates OpenAI's Content Policy.

OpenAIModerationModel moderation = await OpenAI.instance.moderation.create(
  input: "I want to kill him",
);

print(moderation.results); // ...
print(moderation.results.first.categories.hate); // ...
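A common follow-up is to gate user input on the moderation verdict. The `flagged` field name below follows OpenAI's moderation response schema and is an assumption about this package's model class:

```dart
// Block the input when the moderation endpoint flags it.
final result = moderation.results.first;
if (result.flagged) {
  print("Input violates the content policy.");
}
```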

Learn More From Here.


Error Handling #

Any time an error happens on the OpenAI API's end (for example, when you try to create an image variation from a non-image file), a RequestFailedException will be thrown automatically inside your Flutter/Dart app. You can use a try-catch to catch that error and act on it:

try {
 // This will throw an error.
 final errorVariation = await OpenAI.instance.image.variation(
  image: File(/* PATH OF NON-IMAGE FILE */),
 );
} on RequestFailedException catch (e) {
 print(e.message);
 print(e.statusCode);
}