dart_openai 6.1.0

A Dart SDK for the OpenAI APIs (GPT, DALL·E, and more) that lets developers easily integrate OpenAI's state-of-the-art AI models into their Dart applications.

🚀 Dart OpenAI #


A comprehensive Dart/Flutter client for OpenAI's powerful AI models

Quick Start · Documentation · Examples · API Coverage · Contributing


✨ Overview #

Dart OpenAI is an unofficial but comprehensive client package that allows developers to easily integrate OpenAI's state-of-the-art AI models into their Dart/Flutter applications. The package provides simple, intuitive methods for making requests to OpenAI's various APIs, including GPT models, DALL-E image generation, Whisper audio processing, and more.

⚠️ Note: This is an unofficial package. OpenAI does not have an official Dart library.

🎯 Key Features #

  • 🚀 Easy Integration - Simple, intuitive API that mirrors OpenAI's documentation
  • 🔐 Secure Authentication - One-time setup, use anywhere in your application
  • 📡 Streaming Support - Real-time streaming for completions, chat, and fine-tune events
  • 🛠️ Developer Friendly - Comprehensive error handling and logging
  • 📚 Rich Examples - Ready-to-use examples for every implemented feature
  • 🎨 Modern UI Support - Optimized for Flutter applications
  • 🔄 Custom APIs - Additional custom endpoints for enhanced functionality

🚀 Quick Start #

Installation #

Add the package to your pubspec.yaml:

dependencies:
  dart_openai: ^6.1.0

Basic Setup #

import 'package:dart_openai/dart_openai.dart';

void main() {
  // Set your API key
  OpenAI.apiKey = "your-api-key-here";
  
  // Optional: Set organization ID
  OpenAI.organization = "your-org-id";
  
  // Optional: Configure timeout
  OpenAI.requestsTimeOut = Duration(seconds: 60);
  
  // Optional: Enable logging
  OpenAI.showLogs = true;
  
  runApp(MyApp());
}

Your First API Call #

// Simple chat completion
final chatCompletion = await OpenAI.instance.chat.create(
  model: "gpt-3.5-turbo",
  messages: [
    OpenAIChatCompletionChoiceMessageModel(
      role: OpenAIChatMessageRole.user,
      content: "Hello, how are you?",
    ),
  ],
);

print(chatCompletion.choices.first.message.content);

📊 API Coverage (2025) #

| API Feature | Status | Details | Last Updated |
|---|---|---|---|
| 📋 Responses | ✅ Complete | All | 11-08-2025 17:33:39 |
| 💭 Conversations | ✅ Complete | All | 11-08-2025 17:38:56 |
| 🎵 Audio | ✅ Complete | All | 11-08-2025 17:42:54 |
| 🎬 Videos | 🗓️ Planned | - | - |
| 🎨 Images | ✅ Complete | All | 11-08-2025 17:53:45 |
| 🎨 Images Streaming | 🗓️ Planned | - | - |
| 📊 Embeddings | ✅ Complete | All | 11-08-2025 17:56:30 |
| ⚖️ Evals | ✅ Complete | All | 11-08-2025 21:04:36 |
| 🔧 Fine-tuning | 🧩 70% Complete | Missing newer endpoints | - |
| 📊 Graders | ✅ Complete | All | 11-08-2025 21:46:48 |
| 📦 Batch | 🗓️ Planned | - | - |
| 📁 Files | ✅ Complete | All | 11-08-2025 21:51:34 |
| 📤 Uploads | 🗓️ Planned | - | - |
| 🤖 Models | ✅ Complete | All | 11-08-2025 21:53:13 |
| 🛡️ Moderation | ✅ Complete | All | 11-08-2025 21:54:01 |
| 🗃️ Vector Stores | ✅ Complete | All | 11-09-2025 17:39:26 |
| 💬 ChatKit | ❌ Not planned | Beta feature | - |
| 📦 Containers | 🗓️ Planned | - | - |
| 🕛 Real-time | 🗓️ Planned | - | - |
| 💬 Chat Completions | ✅ Complete | Excluding stream functionality | - |
| 🤖 Assistants | ❌ Not planned | Beta feature | - |
| 🤖 Administration | 🗓️ Planned | - | - |
| 📝 Completions (Legacy) | ✅ Complete | All | - |
| ✏️ Edits (Legacy) | ✅ Complete | All | - |

📚 Documentation #

Core APIs #

📋 Responses

// Create response
OpenAiResponse response = await OpenAI.instance.responses.create(
  input: "Your input text here",  
  model: "gpt-4",
);

// Get response
OpenAiResponse response = await OpenAI.instance.responses.get(
  responseId: "response-id-here",
  startingAfter: 0, 
);

// Delete response
await OpenAI.instance.responses.delete(
  responseId: "response-id-here",
);

// Cancel response
OpenAiResponse response = await OpenAI.instance.responses.cancel(
  responseId: "response-id-here",
);

// List input items
OpenAiResponseInputItemsList response = await OpenAI.instance.responses.listInputItems(
  responseId: "response-id-here",
  limit: 10, 
);


// Get input token counts
int inputTokens = await OpenAI.instance.responses.getInputTokenCounts(
  model: "gpt-5",
  input: "Your input text here",
);

💭 Conversations

// Create conversation
OpenAIConversation conversation = await OpenAI.instance.conversations.create(
  items: [{
    "type": "message",
    "role": "user",
    "content": "Hello!",
  }],
  metadata: {
    "key": "value",
    "another_key": "another_value",
  },
);


// Get conversation
OpenAIConversation conversation = await OpenAI.instance.conversations.get(
  conversationId: "conversation-id-here",
);

// Update conversation
OpenAIConversation updatedConversation = await OpenAI.instance.conversations.update(
  conversationId: "conversation-id",
  metadata: {
    "key": "new_value",
  },
);

// Delete conversation
await OpenAI.instance.conversations.delete(
  conversationId: "conversation-id-here",
);

// List items
OpenAIConversationItemsResponse itemsList = await OpenAI.instance.conversations.listItems(
  conversationId: "conversation-id-here",
  limit: 10, 
);

// Create item
OpenAIConversationItem item = await OpenAI.instance.conversations.createItems(
  conversationId: "conversation-id-here",
  items: [
    // ...
  ],
);

// Get item
OpenAIConversationItem item = await OpenAI.instance.conversations.getItem(
  conversationId: "conversation-id-here",
  itemId: "item-id-here",
);

// Delete item
await OpenAI.instance.conversations.deleteItem(
  conversationId: "conversation-id-here",
  itemId: "item-id-here",
);

🎵 Audio

// Create speech
File speechFile = await OpenAI.instance.audio.createSpeech(
  model: "tts-1",
  input: "Text to convert to speech",
  voice: OpenAIAudioVoice.fable,
  responseFormat: OpenAIAudioSpeechResponseFormat.mp3,
  outputDirectory: "/path/to/output/directory",
  outputFileName: "output_speech.mp3",
);


// Transcribe audio
OpenAITranscriptionGeneralModel transcription = await OpenAI.instance.audio.createTranscription(
  model: "whisper-1",
  file: File("path/to/audio.mp3"),
  include: ["logprobs"],
  responseFormat: OpenAIAudioResponseFormat.verbose_json,
  language: "en",
  prompt: "This is a sample prompt to guide transcription",
);
// Handling different transcription response formats
if (transcription is OpenAITranscriptionModel) {
  print(transcription.logprobs);
  print(transcription.text);
  print(transcription.usage);
} else if (transcription is OpenAITranscriptionVerboseModel) {
  // print the transcription.
  print(transcription.text);
  print(transcription.segments?.map((e) => e.end));
}

// Create Translation
final translationText = await OpenAI.instance.audio.createTranslation(
  file: File("path/to/audio.mp3"),
  model: "whisper-1",
  prompt: "use unusual english words",
  responseFormat: OpenAIAudioResponseFormat.json,
);


🎬 Videos

// (To be implemented)

🎨 Images

// Generate image
OpenAIImageModel image = await OpenAI.instance.image.create(
  model: "dall-e-3",
  prompt: "image of a cat in a spaceship",
  responseFormat: OpenAIImageResponseFormat.url,
  size: OpenAIImageSize.size1024,
  quality: OpenAIImageQuality.standard,
  style: OpenAIImageStyle.vivid,
);

// Edit image
OpenAIImageModel imageEdit = await OpenAI.instance.image.edit(
  prompt: 'A fantasy landscape with mountains and a river',
  image: File("path/to/image.png"),
  size: OpenAIImageSize.size1024,
  responseFormat: OpenAIImageResponseFormat.b64Json,
);

// Create variation
List<OpenAIImageModel> imageVariation = await OpenAI.instance.image.variation(
  model: "dall-e-2",
  image: File("path/to/image.png"),
  size: OpenAIImageSize.size512,
  responseFormat: OpenAIImageResponseFormat.url,
);

🎨 Images Streaming

// (To be implemented)

📊 Embeddings

OpenAIEmbeddingsModel embedding = await OpenAI.instance.embedding.create(
  model: "text-embedding-ada-002",
  input: "This is a sample text",
);
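Embedding vectors are typically compared with cosine similarity. Below is a minimal sketch in plain Dart, assuming you have extracted two equal-length `List<double>` vectors from the embeddings response (the exact accessor for the vector data may vary by package version):

```dart
import 'dart:math';

/// Cosine similarity between two equal-length embedding vectors.
/// Returns a value in [-1, 1]; closer to 1 means more similar.
double cosineSimilarity(List<double> a, List<double> b) {
  assert(a.length == b.length, 'Vectors must have the same length');
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (sqrt(normA) * sqrt(normB));
}

void main() {
  // Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
  print(cosineSimilarity([1.0, 0.0, 0.0], [2.0, 0.0, 0.0])); // 1.0
  print(cosineSimilarity([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])); // 0.0
}
```
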

⚖️ Evals

// Create eval
OpenAIEval eval = await OpenAI.instance.evals.create(
  dataSourceConfig: RequestDatatSourceConfig.logs(),
);

// Get eval
OpenAIEval eval = await OpenAI.instance.evals.get(
  evalId: "eval-id-here",
);

// Update eval
OpenAIEval updatedEval = await OpenAI.instance.evals.update(
  evalId: "eval-id-here",
  metadata: {
    "key": "new_value",
  },
);

// Delete eval
await OpenAI.instance.evals.delete(
  evalId: "eval-id-here",
);

// List evals
OpenAIEvalsList evalsList = await OpenAI.instance.evals.list(
  limit: 10, 
);

// Get eval runs.
OpenAIEvalRunsList evalRuns = await OpenAI.instance.evals.getRuns(
  evalId: "eval-id-here",
  limit: 3, 
);

// Get Eval run
OpenAIEvalRun evalRun = await OpenAI.instance.evals.getRun(
  evalId: "eval-id-here",
  runId: "run-id-here",
);

// Create run
OpenAIEvalRun createdRun = await OpenAI.instance.evals.createRun(
  evalId: "eval-id-here",
  dataSource: EvalRunDataSource.jsonl(),
);

// Cancel run
OpenAIEvalRun canceledRun = await OpenAI.instance.evals.cancel(
  evalId: "eval-id-here",
  runId: "run-id-here",
);

// Delete run
await OpenAI.instance.evals.deleteRun(
  evalId: "eval-id-here",
  runId: "run-id-here",
);

// Get output item of eval run.
OpenAIEvalRunOutputItem outputItem = await OpenAI.instance.evals.getEvalRunOutputItem(
  evalId: "eval-id-here",
  runId: "run-id-here",
  outputItemIdn: "item-id-here",
);

// Get eval run output items.
OpenAIEvalRunOutputItemsList outputItems = await OpenAI.instance.evals.getEvalRunOutputItems(
  evalId: "eval-id-here",
  runId: "run-id-here",
  limit: 10,
);

🔧 Fine-tuning

// (To be implemented)

📊 Graders

// Graders
final grader = OpenAIGraders.stringCheckGrader(...);
final grader2 = OpenAIGraders.textSimilarityGrader(...);
final grader3 = OpenAIGraders.scoreModelGrader(...);
final grader4 = OpenAIGraders.labelModelGrader(...);
final grader5 = OpenAIGraders.pythonGrader(...);
final grader6 = OpenAIGraders.multiGrader(...);

// Run grader
final graderResult = await OpenAI.instance.graders.runGrader(
  grader: grader,
  modelSample: "The model output to be graded",
);

// Validate Grader
final isValid = OpenAI.instance.graders.validateGrader(
  grader: grader
);

📦 Batch

// (To be implemented)

📁 Files

// Upload file
OpenAIFileModel file = await OpenAI.instance.files.upload(
  file: File("path/to/file.jsonl"),
  purpose: "assistants",
);

// List files
List<OpenAIFileModel> files = await OpenAI.instance.files.list(
  limit: 10,
);

// Retrieve file
OpenAIFileModel file = await OpenAI.instance.files.retrieve("file_id");

// Delete file
await OpenAI.instance.files.delete("file-id-here");

// Retrieve file content
final content = await OpenAI.instance.files.retrieveContent("file_id");

📤 Uploads

// (To be implemented)

🤖 Models

// List all available models
List<OpenAIModelModel> models = await OpenAI.instance.model.list();

// Retrieve specific model
OpenAIModelModel model = await OpenAI.instance.model.retrieve("gpt-3.5-turbo");

// Delete fine-tuned model
bool deleted = await OpenAI.instance.model.delete("fine-tuned-model-id");

🛡️ Moderation

OpenAIModerationModel moderation = await OpenAI.instance.moderation.create(
  input: ["Text to classify for moderation"],
  model: "omni-moderation-latest",
);

🗃️ Vector Stores

// Create vector store
OpenAIVectorStoreModel vectorStore = await OpenAI.instance.vectorStores.vectorStores.create(
  name: "example_vector_store",
  chunkingStrategy: OpenAIVectorStoreChunkingStrategy.static(
    chunkOverlapTokens: 300,
    maxChunkSizeTokens: 750,
  ),
  expiresAfter: OpenAIVectorStoreExpiresAfter(
    anchor: "last_active_at",
    days: 1,
  ),
);

// List vector stores
OpenAIVectorStoreListModel allVectorStores = await OpenAI.instance.vectorStores.vectorStores.list(limit: 30);

// Get vector store
final firstVectorStore = await OpenAI.instance.vectorStores.vectorStores.get(
  vectorStoreId: "vector_store_id",
);

// Modify vector store
final updatedVectorStore = await OpenAI.instance.vectorStores.vectorStores.modify(
  vectorStoreId: "vector_store_id",
  name: "updated_vector_store_name",
);

// Delete vector store
await OpenAI.instance.vectorStores.vectorStores.delete(
  vectorStoreId: "vector_store_id",
);

// Search in vector store
final searchVectorStoreResult = await OpenAI.instance.vectorStores.vectorStores.search(
  vectorStoreId: updatedVectorStore.id,
  query: "example",
  maxNumResults: 10,
  filters: OpenAIVectorStoresSearchFilter.comparison(
    type: "eq",
    key: "metadata.example_key",
    value: "example_value",
  ),
  rankingOptions: OpenAIVectorStoresRankingOptions(
    ranker: "none",
    scoreThreshold: 0,
  ),
);

Vector store files

// Create vector store file
final createdVectorStoreFile = await OpenAI.instance.vectorStores.vectorStoresFiles.create(
  vectorStoreId: "vector_store_id",
  fileId: "file_id",
  attributes: {
    "chapter": "Chapter 1",
  },
  chunckingStrategy: OpenAIVectorStoreChunkingStrategy.static(
    chunkOverlapTokens: 300,
    maxChunkSizeTokens: 750,
  ),
);

// List vector store files
final vectorStoreFiles = await OpenAI.instance.vectorStores.vectorStoresFiles.list(
  vectoreStoreId: "vector_store_id",
  limit: 60,
);

// Get vector store file
final vectorStoreFile = await OpenAI.instance.vectorStores.vectorStoresFiles.get(
  fileId: "file_id",
  vectorStoreId: "vector_store_id",
);

// Get vector store file content
final vectorStoreFileContent = await OpenAI.instance.vectorStores.vectorStoresFiles.getContent(
  fileId: "file_id",
  vectorStoreId: "vector_store_id",
);

// Update vector store file
final updatedVectorStoreFile = await OpenAI.instance.vectorStores.vectorStoresFiles.update(
  fileId: "file_id",
  vectorStoreId: "vector_store_id",
  attributes: {
    "chapter": "Updated Chapter 1",
  },
);

// Delete vector store file
await OpenAI.instance.vectorStores.vectorStoresFiles.delete(
  fileId: "file_id",
  vectorStoreId: "vector_store_id",
);

Vector store file batches

// Create vector store file batch
final vectorStoreFileBatch = await OpenAI.instance.vectorStores.vectorStoreFileBatch.create(
  vectorStoreId: "vector_store_id",
  chunkingStrategy: OpenAIVectorStoreChunkingStrategy.static(
    chunkOverlapTokens: 200,
    maxChunkSizeTokens: 550,
  ),
  attributes: {
    "batch_name": "My First Batch",
  },
  fileIds: ["file-abc123", "file-def456"],
);

// Get vector store file batch
final batch = await OpenAI.instance.vectorStores.vectorStoreFileBatch.get(
  batchId: "batch_id",
  vectorStoreId: "vector_store_id",
);

// Cancel vector store file batch
final cancelledBatch = await OpenAI.instance.vectorStores.vectorStoreFileBatch.cancel(
  batchId: "batch_id",
  vectorStoreId: "vector_store_id",
);

// List vector store files in a batch
final vectorStoreBatchFiles = await OpenAI.instance.vectorStores.vectorStoreFileBatch.list(
  vectorStoreId: "vector_store_id",
batchId: "batch_id",
);


📦 Containers

// (To be implemented)

🕛 Real-time

// (To be implemented)

💬 Chat Completions

// Basic chat completion
OpenAIChatCompletionModel chat = await OpenAI.instance.chat.create(
  model: "gpt-3.5-turbo",
  messages: [
    OpenAIChatCompletionChoiceMessageModel(
      role: OpenAIChatMessageRole.user,
      content: "Hello, how can you help me?",
    ),
  ],
  temperature: 0.7,
  maxTokens: 150,
);

// Streaming chat completion
Stream<OpenAIStreamChatCompletionModel> chatStream = OpenAI.instance.chat.createStream(
  model: "gpt-3.5-turbo",
  messages: [
    OpenAIChatCompletionChoiceMessageModel(
      role: OpenAIChatMessageRole.user,
      content: "Tell me a story",
    ),
  ],
);

chatStream.listen((event) {
  print(event.choices.first.delta.content);
});
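The streamed deltas can be stitched into one full reply with a `StringBuffer`. A minimal, library-free sketch of the accumulation logic: in real use the chunks would come from `createStream` as shown above; here a plain `Stream<String>` stands in for them:

```dart
import 'dart:async';

/// Collects a stream of content deltas into one final message string,
/// mirroring how chat-completion stream chunks are folded together.
Future<String> collectDeltas(Stream<String> deltas) async {
  final buffer = StringBuffer();
  await for (final delta in deltas) {
    buffer.write(delta); // Append each chunk as it arrives.
  }
  return buffer.toString();
}

Future<void> main() async {
  final fakeDeltas = Stream.fromIterable(['Once ', 'upon ', 'a ', 'time.']);
  final full = await collectDeltas(fakeDeltas);
  print(full); // Once upon a time.
}
```

The same pattern works inside the `listen` callback above: write each non-null `delta.content` into a buffer and read it out when the stream is done.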

🤖 Administration

// (To be implemented)


🔧 Configuration #

Environment Variables #

// Using envied package
@Envied(path: ".env")
abstract class Env {
  @EnviedField(varName: 'OPEN_AI_API_KEY')
  static const apiKey = _Env.apiKey;
}

void main() {
  OpenAI.apiKey = Env.apiKey;
  runApp(MyApp());
}

Custom Configuration #

void main() {
  // Set API key
  OpenAI.apiKey = "your-api-key";
  
  // Set organization (optional)
  OpenAI.organization = "your-org-id";
  
  // Set custom base URL (optional)
  OpenAI.baseUrl = "https://api.openai.com/v1";
  
  // Set request timeout (optional)
  OpenAI.requestsTimeOut = Duration(seconds: 60);
  
  // Enable logging (optional)
  OpenAI.showLogs = true;
  OpenAI.showResponsesLogs = true;
  
  runApp(MyApp());
}

🚨 Error Handling #

try {
  final chat = await OpenAI.instance.chat.create(
    model: "gpt-3.5-turbo",
    messages: [
      OpenAIChatCompletionChoiceMessageModel(
        role: OpenAIChatMessageRole.user,
        content: "Hello",
      ),
    ],
  );
} on RequestFailedException catch (e) {
  print("Request failed: ${e.message}");
  print("Status code: ${e.statusCode}");
} on MissingApiKeyException catch (e) {
  print("API key not set: ${e.message}");
} on UnexpectedException catch (e) {
  print("Unexpected error: ${e.message}");
}
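For transient failures such as rate limits or timeouts, a small retry wrapper with exponential backoff is often useful. The sketch below uses a hypothetical `withRetry` helper in plain Dart; in real use you would catch the package's `RequestFailedException` rather than the generic `Exception`:

```dart
import 'dart:async';

/// Runs [action], retrying up to [maxAttempts] times with exponential
/// backoff. Rethrows the last error once attempts are exhausted.
Future<T> withRetry<T>(
  Future<T> Function() action, {
  int maxAttempts = 3,
  Duration initialDelay = const Duration(milliseconds: 200),
}) async {
  var delay = initialDelay;
  for (var attempt = 1; ; attempt++) {
    try {
      return await action();
    } on Exception {
      if (attempt >= maxAttempts) rethrow;
      await Future.delayed(delay);
      delay *= 2; // Double the wait before the next attempt.
    }
  }
}

Future<void> main() async {
  var calls = 0;
  // Fails twice, then succeeds on the third attempt.
  final result = await withRetry(() async {
    calls++;
    if (calls < 3) throw Exception('transient failure');
    return 'ok';
  });
  print('$result after $calls calls'); // ok after 3 calls
}
```

Wrapping an API call is then just `await withRetry(() => OpenAI.instance.chat.create(...))`.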

🤝 Contributing #

We welcome contributions! Here's how you can help:

🐛 Bug Reports #

  • Use GitHub Issues to report bugs
  • Include reproduction steps and environment details

💡 Feature Requests #

  • Suggest new features via GitHub Issues
  • Check existing issues before creating new ones

🔧 Code Contributions #

  • Fork the repository
  • Create a feature branch
  • Make your changes
  • Add tests if applicable
  • Submit a pull request

📚 Documentation #

  • Help improve documentation
  • Add examples for missing features
  • Fix typos and improve clarity

💰 Sponsoring #


📜 License #

This project is licensed under the MIT License - see the LICENSE file for details.


🙏 Acknowledgments #

  • OpenAI for providing the amazing AI models and APIs
  • Contributors who help maintain and improve this package
  • Sponsors who support the project financially
  • Community for feedback and suggestions

📞 Support #


Made with ❤️ by the Dart OpenAI community

⭐ Star this repo · 🐛 Report Bug · 💡 Request Feature · 📖 Documentation
