chat_gpt_sdk 3.0.8
Create chat bots and other bots with the ChatGPT SDK. Supports GPT-4 and GPT-3.5, plus SSE prompt generation (streaming).
ChatGPT Application with Flutter #
ChatGPT is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3.5 family of large language models and is fine-tuned with both supervised and reinforcement learning techniques.
Unofficial #
"community-maintained” library.
A Powerful OpenAI Library with GPT-4 Support #
Features #
- ✅ Install Package
- ✅ Create OpenAI Instance
- ✅ Change Access Token
- ✅ Complete Text
- ✅ Chat Complete GPT-4
- ✅ Assistants API
- Threads
- Messages
- Runs
- ✅ Error Handle
- ✅ Example Q&A
- ✅ Generate Image With Prompt
- ✅ Editing
- ✅ Cancel Generate
- ✅ File
- ✅ Audio
- ✅ Embedding
- ✅ Fine-Tune
- Create Fine Tune
- Fine Tune List
- Fine Tune List Stream (SSE)
- Fine Tune Get by Id
- Cancel Fine Tune
- Delete Fine Tune
- Fine-Tune Deprecate
- New Fine-Tune Job
- ✅ Moderations
- ✅ Model And Engine
- ✅ Translate Example
- ✅ Video Tutorial
- ✅ Docs
Install Package #
chat_gpt_sdk: 3.0.8
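Add the dependency to the dependencies section of your pubspec.yaml (standard pub.dev setup; the caret constraint matches the version above) and run flutter pub get:
dependencies:
  chat_gpt_sdk: ^3.0.8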
Create OpenAI Instance #
- Parameter
- Token
- Your secret API keys are listed below. Please note that we do not display your secret API keys again after you generate them.
- Do not share your API key with others, or expose it in the browser or other client-side code. In order to protect the security of your account, OpenAI may also automatically rotate any API key that we've found has leaked publicly.
- https://beta.openai.com/account/api-keys
- OrgId
- Identifier for this organization sometimes used in API requests
- https://beta.openai.com/account/org-settings
final openAI = OpenAI.instance.build(
    token: token,
    baseOption: HttpSetup(receiveTimeout: const Duration(seconds: 5)),
    enableLog: true);
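Because the key should not be hard-coded or shipped in client-side code (see the note above), one option is to inject it at build time with --dart-define. This is only a sketch; the variable name OPENAI_API_KEY is an assumption, not part of the SDK:
///run with: flutter run --dart-define=OPENAI_API_KEY=sk-xxxx
const token = String.fromEnvironment('OPENAI_API_KEY');

final openAI = OpenAI.instance.build(
    token: token,
    baseOption: HttpSetup(receiveTimeout: const Duration(seconds: 20)),
    enableLog: true);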
Change Access Token #
openAI.setToken('new-access-token');
///get token
openAI.token;
Complete Text #
-
Text Complete API
- Translate Method
- translateEngToThai
- translateThaiToEng
- translateToJapanese
- Model
- kTranslateModelV3
- kTranslateModelV2
- kCodeTranslateModelV2
- Translate natural language to SQL queries.
- Create code to call the Stripe API using natural language.
- Find the time complexity of a function.
- https://beta.openai.com/examples
-
Complete with Feature #
void _translateEngToThai() async {
  final request = CompleteText(
      prompt: translateEngToThai(word: _txtWord.text.toString()),
      maxTokens: 200,
      model: TextDavinci3Model());

  final response = await openAI.onCompletion(request: request);

  ///cancel request
  openAI.cancelAIGenerate();
  print(response);
}
- Complete with FutureBuilder
Future<CTResponse?>? _translateFuture;

_translateFuture = openAI.onCompletion(request: request);

///ui code
FutureBuilder<CTResponse?>(
    future: _translateFuture,
    builder: (context, snapshot) {
      final data = snapshot.data;
      if (snapshot.connectionState == ConnectionState.waiting) {
        return const CircularProgressIndicator();
      }
      if (snapshot.connectionState == ConnectionState.done) {
        return Text(data?.choices.last.text ?? '');
      }
      return const SizedBox.shrink();
    })
-
GPT 3 with SSE #
void completeWithSSE() {
  final request = CompleteText(
      prompt: "Hello world", maxTokens: 200, model: TextDavinci3Model());

  openAI.onCompletionSSE(request: request).listen((it) {
    debugPrint(it.choices.last.text);
  });
}
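Because the stream emits partial completions, a common pattern is to accumulate the chunks as they arrive, for example into a StringBuffer. This is a minimal sketch built on the same call as above; error handling is covered in the Error Handle section below:
void completeWithSSEBuffered() {
  final request = CompleteText(
      prompt: "Hello world", maxTokens: 200, model: TextDavinci3Model());

  final buffer = StringBuffer();
  openAI.onCompletionSSE(request: request).listen(
    (it) => buffer.write(it.choices.last.text),
    onDone: () => debugPrint(buffer.toString()),
  );
}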
Chat Complete (GPT-4 and GPT-3.5) #
-
Chat Complete #
void chatComplete() async {
  final request = ChatCompleteText(messages: [
    Map.of({"role": "user", "content": 'Hello!'})
  ], maxToken: 200, model: Gpt4ChatModel());

  final response = await openAI.onChatCompletion(request: request);
  for (var element in response!.choices) {
    print("data -> ${element.message?.content}");
  }
}
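For a multi-turn conversation, keep a message list and append the previous assistant reply before sending the follow-up request. This is only a sketch using the same Map-based message format as above; the history variable, the follow-up prompt, and reading response from the example above are illustrative:
final history = <Map<String, dynamic>>[
  Map.of({"role": "user", "content": 'Hello!'}),
];

///after a response arrives, keep the assistant reply in the history
history.add(Map.of({
  "role": "assistant",
  "content": response?.choices.last.message?.content ?? ''
}));

///then send a follow-up question together with the full history
history.add(Map.of({"role": "user", "content": 'Can you elaborate?'}));
final followUp = ChatCompleteText(
    messages: history, maxToken: 200, model: Gpt4ChatModel());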
-
GPT 4 with SSE #
void chatCompleteWithSSE() {
  final request = ChatCompleteText(messages: [
    Map.of({"role": "user", "content": 'Hello!'})
  ], maxToken: 200, model: Gpt4ChatModel());

  openAI.onChatCompletionSSE(request: request).listen((it) {
    debugPrint(it.choices.last.message?.content);
  });
}
- Supports SSE (Server-Sent Events)
- GPT-3.5 Turbo
void chatCompleteWithSSE() {
  final request = ChatCompleteText(messages: [
    Map.of({"role": "user", "content": 'Hello!'})
  ], maxToken: 200, model: GptTurboChatModel());

  openAI.onChatCompletionSSE(request: request).listen((it) {
    debugPrint(it.choices.last.message?.content);
  });
}
- Chat Complete
void chatComplete() async {
  final request = ChatCompleteText(messages: [
    Map.of({"role": "user", "content": 'Hello!'})
  ], maxToken: 200, model: Gpt41106PreviewChatModel());

  final response = await openAI.onChatCompletion(request: request);
  for (var element in response!.choices) {
    print("data -> ${element.message?.content}");
  }
}
-
Chat Complete Function Calling #
void gptFunctionCalling() async {
  final request = ChatCompleteText(
    messages: [
      Messages(
          role: Role.user,
          content: "What is the weather like in Boston?",
          name: "get_current_weather"),
    ],
    maxToken: 200,
    model: Gpt41106PreviewChatModel(),
    tools: [
      {
        "type": "function",
        "function": {
          "name": "get_current_weather",
          "description": "Get the current weather in a given location",
          "parameters": {
            "type": "object",
            "properties": {
              "location": {
                "type": "string",
                "description": "The city and state, e.g. San Francisco, CA"
              },
              "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"]
              }
            },
            "required": ["location"]
          }
        }
      }
    ],
    toolChoice: 'auto',
  );

  ChatCTResponse? response = await openAI.onChatCompletion(request: request);
}
-
Chat Complete Image Input #
void imageInput() async {
  final request = ChatCompleteText(
    messages: [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "What’s in this image?"},
          {
            "type": "image_url",
            "image_url": {"url": "image-url"}
          }
        ]
      }
    ],
    maxToken: 200,
    model: Gpt4VisionPreviewChatModel(),
  );

  ChatCTResponse? response = await openAI.onChatCompletion(request: request);
  debugPrint("$response");
}
Assistants #
-
Create Assistant #
void createAssistant() async {
  final assistant = Assistant(
    model: Gpt4AModel(),
    name: 'Math Tutor',
    instructions:
        'You are a personal math tutor. When asked a question, write and run Python code to answer the question.',
    tools: [
      {
        "type": "code_interpreter",
      }
    ],
  );
  await openAI.assistant.create(assistant: assistant);
}
-
Create Assistant File #
void createAssistantFile() async {
await openAI.assistant.createFile(assistantId: '',fileId: '',);
}
-
List assistants #
void listAssistant() async {
final assistants = await openAI.assistant.list();
assistants.map((e) => e.toJson()).forEach(print);
}
-
List assistants files #
void listAssistantFile() async {
final assistants = await openAI.assistant.listFile(assistantId: '');
assistants.data.map((e) => e.toJson()).forEach(print);
}
-
Retrieve assistant #
void retrieveAssistant() async {
final assistants = await openAI.assistant.retrieves(assistantId: '');
}
-
Retrieve assistant file #
void retrieveAssistantFiles() async {
final assistants = await openAI.assistant.retrievesFile(assistantId: '',fileId: '');
}
-
Modify assistant #
void modifyAssistant() async {
  final assistant = Assistant(
    model: Gpt4AModel(),
    instructions:
        'You are an HR bot, and you have access to files to answer employee questions about company policies. Always respond with info from either of the files.',
    tools: [
      {
        "type": "retrieval",
      }
    ],
    fileIds: [
      "file-abc123",
      "file-abc456",
    ],
  );
  await openAI.assistant.modifies(assistantId: '', assistant: assistant);
}
-
Delete assistant #
void deleteAssistant() async {
await openAI.assistant.delete(assistantId: '');
}
-
Delete assistant file #
void deleteAssistantFile() async {
await openAI.assistant.deleteFile(assistantId: '',fileId: '');
}
Threads #
-
Create threads #
///empty body
void createThreads() async {
  await openAI.threads.createThread(request: ThreadRequest());
}

///with message
void createThreads() async {
  final request = ThreadRequest(messages: [
    {
      "role": "user",
      "content": "Hello, what is AI?",
      "file_ids": ["file-abc123"]
    },
    {
      "role": "user",
      "content": "How does AI work? Explain it in simple terms."
    },
  ]);
  await openAI.threads.createThread(request: request);
}
-
Retrieve thread #
void retrieveThread()async {
final mThread = await openAI.threads.retrieveThread(threadId: 'threadId');
}
-
Modify thread #
void modifyThread() async {
  await openAI.threads.modifyThread(threadId: 'threadId', metadata: {
    "metadata": {
      "modified": "true",
      "user": "abc123",
    },
  });
}
-
Delete thread #
void deleteThread() async {
await openAI.threads.deleteThread(threadId: 'threadId');
}
Messages #
-
Create Message #
void createMessage() async {
final request = CreateMessage(
role: 'user',
content: 'How does AI work? Explain it in simple terms.',
);
await openAI.threads.messages.createMessage(
threadId: 'threadId',
request: request,
);
}
-
List messages #
void listMessage()async {
final mMessages = await openAI.threads.messages.listMessage(threadId: 'threadId');
}
-
List message files #
void listMessageFile() async {
final mMessagesFile = await openAI.threads.messages.listMessageFile(
threadId: 'threadId',
messageId: '',
);
}
-
Retrieve message #
void retrieveMessage() async {
final mMessage = await openAI.threads.messages.retrieveMessage(
threadId: 'threadId',
messageId: '',
);
}
-
Retrieve message file #
void retrieveMessageFile() async {
final mMessageFile = await openAI.threads.messages.retrieveMessageFile(
threadId: 'threadId',
messageId: '',
fileId: '',
);
}
-
Modify message #
void modifyMessage() async {
  await openAI.threads.messages.modifyMessage(
    threadId: 'threadId',
    messageId: 'messageId',
    metadata: {
      "metadata": {"modified": "true", "user": "abc123"},
    },
  );
}
Runs #
-
Create run #
void createRun() async {
final request = CreateRun(assistantId: 'assistantId');
await openAI.threads.runs.createRun(threadId: 'threadId', request: request);
}
-
Create thread and run #
void createThreadAndRun() async {
  final request = CreateThreadAndRun(assistantId: 'assistantId', thread: {
    "messages": [
      {"role": "user", "content": "Explain deep learning to a 5 year old."}
    ],
  });
  await openAI.threads.runs.createThreadAndRun(request: request);
}
-
List runs #
void listRuns() async {
final mRuns = await openAI.threads.runs.listRuns(threadId: 'threadId');
}
-
List run steps #
void listRunSteps() async {
final mRunSteps = await openAI.threads.runs.listRunSteps(threadId: 'threadId',runId: '',);
}
-
Retrieve run #
void retrieveRun() async {
final mRun = await openAI.threads.runs.retrieveRun(threadId: 'threadId',runId: '',);
}
-
Retrieve run step #
void retrieveRunStep() async {
final mRun = await openAI.threads.runs.retrieveRunStep(threadId: 'threadId',runId: '',stepId: '');
}
-
Modify run #
void modifyRun() async {
  await openAI.threads.runs.modifyRun(
    threadId: 'threadId',
    runId: '',
    metadata: {
      "metadata": {"user_id": "user_abc123"},
    },
  );
}
-
Submit tool outputs to run #
void submitToolOutputsToRun() async {
  await openAI.threads.runs.submitToolOutputsToRun(
    threadId: 'threadId',
    runId: '',
    toolOutputs: [
      {
        "tool_call_id": "call_abc123",
        "output": "28C",
      },
    ],
  );
}
-
Cancel a run #
void cancelRun() async {
await openAI.threads.runs.cancelRun(
threadId: 'threadId',
runId: '',
);
}
Error Handle #
///using catchError
openAI.onCompletion(request: request).catchError((err) {
  if (err is OpenAIAuthError) {
    print('OpenAIAuthError error ${err.data?.error?.toMap()}');
  }
  if (err is OpenAIRateLimitError) {
    print('OpenAIRateLimitError error ${err.data?.error?.toMap()}');
  }
  if (err is OpenAIServerError) {
    print('OpenAIServerError error ${err.data?.error?.toMap()}');
  }
});

///using try catch
try {
  await openAI.onCompletion(request: request);
} on OpenAIRateLimitError catch (err) {
  print('catch error ->${err.data?.error?.toMap()}');
}

///with stream
openAI
    .onCompletionSSE(request: request)
    .transform(StreamTransformer.fromHandlers(
        handleError: (error, stackTrace, sink) {
      if (error is OpenAIRateLimitError) {
        print('OpenAIRateLimitError error ->${error.data?.message}');
      }
    }))
    .listen((event) {
  print("success");
});
Q&A #
- Example Q&A
- Answer questions based on existing knowledge.
final request = CompleteText(
    prompt: 'What is human life expectancy in the United States?',
    model: TextDavinci3Model(),
    maxTokens: 200);
final response = await openAI.onCompletion(request: request);
- Request
Q: What is human life expectancy in the United States?
- Response
A: Human life expectancy in the United States is 78 years.
Generate Image With Prompt #
-
Generate Image
- prompt
- A text description of the desired image(s). The maximum length is 1000 characters.
- n
- The number of images to generate. Must be between 1 and 10.
- size
- The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
- response_format
- The format in which the generated images are returned. Must be one of url or b64_json.
- user
- A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
-
Generate with feature #
void _generateImage() async {
  const prompt = "cat eating snake blue red.";
  final request = GenerateImage(
      prompt, 1,
      model: DallE2(),
      size: ImageSize.size256,
      responseFormat: Format.url);

  final response = await openAI.generateImage(request);
  print("img url : ${response.data?.last?.url}");
}
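Since responseFormat: Format.url returns a hosted URL, the result can be shown directly in a Flutter widget tree. This is only a usage sketch, not part of the SDK:
///somewhere in a widget build method, using the response from above
final url = response.data?.last?.url;
final preview = url == null ? const SizedBox.shrink() : Image.network(url);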
Edit #
-
Edit Prompt #
void editPrompt() async {
final response = await openAI.editor.prompt(EditRequest(
model: CodeEditModel(),
input: 'What day of the wek is it?',
instruction: 'Fix the spelling mistakes'));
print(response.choices.last.text);
}
-
Edit Image #
void editImage() async {
  final response = await openAI.editor.editImage(
      EditImageRequest(
          image: FileInfo("${image?.path}", '${image?.name}'),
          mask: FileInfo('file path', 'file name'),
          size: ImageSize.size1024,
          prompt: 'King Snake'),
      model: DallE3());
  print(response.data?.last?.url);
}
-
Variations #
void variation() async {
  final request = Variation(
      model: DallE2(), image: FileInfo('${image?.path}', '${image?.name}'));
  final response = await openAI.editor.variation(request);
  print(response.data?.last?.url);
}
Cancel Generate #
-
Stop Generate Prompt #
_openAI
.onChatCompletionSSE(request: request, onCancel: onCancel);
///CancelData
CancelData? mCancel;
void onCancel(CancelData cancelData) {
mCancel = cancelData;
}
mCancel?.cancelToken.cancel("canceled ");
-
Stop Edit #
- image
- prompt
openAI.editor.editImage(request, onCancel: onCancel);
///CancelData
CancelData? mCancel;
void onCancel(CancelData cancelData) {
mCancel = cancelData;
}
mCancel?.cancelToken.cancel("canceled edit image");
-
Stop Embedding #
openAI.embed.embedding(request,onCancel: onCancel);
///CancelData
CancelData? mCancel;
void onCancel(CancelData cancelData) {
mCancel = cancelData;
}
mCancel?.cancelToken.cancel("canceled embedding");
- Stop Audio
- translate
- transcript
openAI.audio.transcribes(request,onCancel: onCancel);
///CancelData
CancelData? mCancel;
void onCancel(CancelData cancelData) {
mCancel = cancelData;
}
mCancel?.cancelToken.cancel("canceled audio transcribes");
- Stop File
- upload file
- get file
- delete file
openAI.file.uploadFile(request,onCancel: onCancel);
///CancelData
CancelData? mCancel;
void onCancel(CancelData cancelData) {
mCancel = cancelData;
}
mCancel?.cancelToken.cancel("canceled uploadFile");
File #
-
Get File #
void getFile() async {
final response = await openAI.file.get();
print(response.data);
}
-
Upload File #
void uploadFile() async {
final request = UploadFile(file: FileInfo('file-path', 'file-name'),purpose: 'fine-tune');
final response = await openAI.file.uploadFile(request);
print(response);
}
-
Delete File #
void delete() async {
final response = await openAI.file.delete("file-Id");
print(response);
}
-
Retrieve File #
void retrieve() async {
final response = await openAI.file.retrieve("file-Id");
print(response);
}
-
Retrieve Content File #
void retrieveContent() async {
final response = await openAI.file.retrieveContent("file-Id");
print(response);
}
Audio #
-
Audio Translate #
void audioTranslate() async {
final mAudio = File('mp3-path');
final request =
AudioRequest(file: FileInfo(mAudio.path, 'name'), prompt: '...');
final response = await openAI.audio.translate(request);
}
-
Audio Transcribe #
void audioTranscribe() async {
final mAudio = File('mp3-path');
final request =
AudioRequest(file: FileInfo(mAudio.path, 'name'), prompt: '...');
final response = await openAI.audio.transcribes(request);
}
-
Create speech #
void createSpeech() async {
final request = SpeechRequest(
model: 'tts-1', input: 'The quick brown fox jumped over the lazy dog.');
final List<int> response = await openAI.audio
.createSpeech(request: request);
}
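The call returns the raw audio bytes, so they can be written to a file with dart:io; the output path below is only an example:
///continuing inside createSpeech() above (requires: import 'dart:io';)
final speechFile = File('speech.mp3');
await speechFile.writeAsBytes(response);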
Embedding #
- Embedding
void embedding() async {
final request = EmbedRequest(
model: TextSearchAdaDoc001EmbedModel(),
input: 'The food was delicious and the waiter');
final response = await openAI.embed.embedding(request);
print(response.data.last.embedding);
}
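Embedding vectors are usually compared with cosine similarity. The helper below is a minimal sketch and assumes the embedding printed above is a List&lt;double&gt;; it is not part of the SDK:
import 'dart:math' as math;

///cosine similarity between two embedding vectors
double cosineSimilarity(List<double> a, List<double> b) {
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (math.sqrt(normA) * math.sqrt(normB));
}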
Fine Tune #
-
Create Fine Tune #
void createFineTune() async {
final request = CreateFineTuneJob(trainingFile: 'The ID of an uploaded file');
final response = await openAI.fineTune.createFineTuneJob(request);
}
-
Fine Tune List #
void fineTuneList() async {
final response = await openAI.fineTune.listFineTuneJob();
}
-
Fine Tune List Stream #
void fineTuneListStream() {
openAI.fineTune.listFineTuneJobStream('fineTuneId').listen((it) {
///handled data
});
}
-
Fine Tune Get by Id #
void fineTuneById() async {
final response = await openAI.fineTune.retrieveFineTuneJob('fineTuneId');
}
-
Cancel Fine Tune #
void fineTuneCancel() async {
final response = await openAI.fineTune.cancel('fineTuneId');
}
-
Delete Fine Tune #
void deleteFineTune() async {
final response = await openAI.fineTune.delete('model');
}
Moderations #
-
Create Moderation #
void createModeration() async {
final response = await openAI.moderation
.create(input: 'input', model: TextLastModerationModel());
}
Model & Engine #
- Model List
- List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.
- https://beta.openai.com/docs/api-reference/models
final models = await openAI.listModel();
- Engine List
- Lists the currently available (non-finetuned) models, and provides basic information about each one such as the owner and availability.
- https://beta.openai.com/docs/api-reference/engines
final engines = await openAI.listEngine();