chat_gpt_sdk 2.2.6

Create chatbots and other bots with the ChatGPT SDK. Supports GPT-4, GPT-3.5, and SSE prompt generation (stream).

ChatGPT Application with Flutter #

ChatGPT is a chatbot launched by OpenAI in November 2022. It is built on top of OpenAI's GPT-3.5 family of large language models and is fine-tuned with both supervised and reinforcement learning techniques.

Unofficial #

"community-maintained” library.

Powerful OpenAI Library with GPT-4 Support #




Features #

Install Package #

chat_gpt_sdk: ^2.2.6
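After adding the dependency and running flutter pub get, the package is typically imported with a single import in your Dart code:

import 'package:chat_gpt_sdk/chat_gpt_sdk.dart';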

Create OpenAI Instance #

  • Parameter
    • Token
      • Your secret API key is generated from your OpenAI account. Note that OpenAI does not display a secret key again after it is generated.
      • Do not share your API key with others or expose it in the browser or other client-side code. To protect the security of your account, OpenAI may automatically rotate any API key that is found to have leaked publicly.
      • https://beta.openai.com/account/api-keys
  • OrgId
final openAI = OpenAI.instance.build(
  token: token,
  baseOption: HttpSetup(receiveTimeout: const Duration(seconds: 5)),
  enableLog: true,
);
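To keep the key out of source control, one option (not specific to this SDK) is a compile-time define; a minimal sketch, where the define name OPENAI_API_KEY is only an example:

// pass the key at build time:
//   flutter run --dart-define=OPENAI_API_KEY=sk-...
const token = String.fromEnvironment('OPENAI_API_KEY');

final openAI = OpenAI.instance.build(
  token: token,
  baseOption: HttpSetup(receiveTimeout: const Duration(seconds: 5)),
  enableLog: true,
);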

Change Access Token #

openAI.setToken('new-access-token');
///get token
openAI.token;

Complete Text #

  • Text Complete API

    • Translate Method
      • translateEngToThai
      • translateThaiToEng
      • translateToJapanese
    • Model
      • kTranslateModelV3
      • kTranslateModelV2
      • kCodeTranslateModelV2
        • Translate natural language to SQL queries.
        • Create code to call the Stripe API using natural language.
        • Find the time complexity of a function.
    • https://beta.openai.com/examples
  • Complete with Feature

void _translateEngToThai() async {
  final request = CompleteText(
      prompt: translateEngToThai(word: _txtWord.text.toString()),
      maxToken: 200,
      model: TextDavinci3Model());

  final response = await openAI.onCompletion(request: request);

  ///cancel request
  openAI.cancelAIGenerate();
  print(response);
}
  • Complete with FutureBuilder
Future<CTResponse?>? _translateFuture;

_translateFuture = openAI.onCompletion(request: request);

///ui code
FutureBuilder<CTResponse?>(
  future: _translateFuture,
  builder: (context, snapshot) {
    final data = snapshot.data;
    if (snapshot.connectionState == ConnectionState.done) {
      // render the completed text, e.g. the last choice
      return Text(data?.choices.last.text ?? '');
    }
    if (snapshot.connectionState == ConnectionState.waiting) {
      return const CircularProgressIndicator();
    }
    return const SizedBox.shrink();
  },
)
  • GPT-3 with SSE
 void completeWithSSE() {
  final request = CompleteText(
          prompt: "Hello world", maxTokens: 200, model: TextDavinci3Model());
  openAI.onCompletionSSE(request: request).listen((it) {
    debugPrint(it.choices.last.text);
  });
}
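onCompletionSSE returns a regular Dart Stream, so the subscription can also be kept and cancelled like any other stream; a minimal sketch (independent of the SDK's onCancel mechanism shown later):

// requires: import 'dart:async';
StreamSubscription? _subscription;

void startStream() {
  final request = CompleteText(
      prompt: "Hello world", maxTokens: 200, model: TextDavinci3Model());

  // keep the subscription so it can be cancelled later
  _subscription = openAI.onCompletionSSE(request: request).listen((it) {
    debugPrint(it.choices.last.text);
  });
}

void stopStream() {
  _subscription?.cancel();
}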

Chat Complete (GPT-4 and GPT-3.5) #

  • GPT-4
  void chatComplete() async {
    final request = ChatCompleteText(messages: [
      Map.of({"role": "user", "content": 'Hello!'})
    ], maxToken: 200, model: Gpt4ChatModel());

    final response = await openAI.onChatCompletion(request: request);
    for (var element in response!.choices) {
      print("data -> ${element.message?.content}");
    }
  }
  • GPT-4 with SSE (Server-Sent Events)
 void chatCompleteWithSSE() {
  final request = ChatCompleteText(messages: [
    Map.of({"role": "user", "content": 'Hello!'})
  ], maxToken: 200, model: Gpt4ChatModel());

  openAI.onChatCompletionSSE(request: request).listen((it) {
    debugPrint(it.choices.last.message?.content);
  });
}
  • Support SSE (Server-Sent Events)
    • GPT-3.5 Turbo
 void chatCompleteWithSSE() {
  final request = ChatCompleteText(messages: [
    Map.of({"role": "user", "content": 'Hello!'})
  ], maxToken: 200, model: GptTurboChatModel());

  openAI.onChatCompletionSSE(request: request).listen((it) {
    debugPrint(it.choices.last.message?.content);
  });
}
  • Chat Complete
  void chatComplete() async {
    final request = ChatCompleteText(messages: [
      Map.of({"role": "user", "content": 'Hello!'})
    ], maxToken: 200, model: GptTurbo0301ChatModel());

    final response = await openAI.onChatCompletion(request: request);
    for (var element in response!.choices) {
      print("data -> ${element.message?.content}");
    }
  }
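onChatCompletion is stateless: the model only sees the messages included in the request. A minimal sketch of keeping a running conversation (the _history list and ask helper are illustrative names, not part of the SDK):

final List<Map<String, String>> _history = [];

Future<void> ask(String userText) async {
  _history.add({"role": "user", "content": userText});

  final request = ChatCompleteText(
      messages: _history, maxToken: 200, model: GptTurboChatModel());

  final response = await openAI.onChatCompletion(request: request);
  final reply = response?.choices.last.message?.content ?? '';

  // feed the assistant's reply back in on the next turn
  _history.add({"role": "assistant", "content": reply});
  print(reply);
}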
  • Chat Complete Function Calling
/// works only with gpt-3.5-turbo-0613 and gpt-4-0613
  void gptFunctionCalling() async {
  final request = ChatCompleteText(
          messages: [
            Messages(
                    role: Role.user, content: "What is the weather like in Boston?",name: "get_current_weather"),
          ],
          maxToken: 200,
          model: GptTurbo0631Model(),
          functions: [
            FunctionData(
                    name: "get_current_weather",
                    description: "Get the current weather in a given location",
                    parameters: {
                      "type": "object",
                      "properties": {
                        "location": {
                          "type": "string",
                          "description": "The city and state, e.g. San Francisco, CA"
                        },
                        "unit": {
                          "type": "string",
                          "enum": ["celsius", "fahrenheit"]
                        }
                      },
                      "required": ["location"]
                    })
          ],
          functionCall: FunctionCall.auto);

  ChatCTResponse? response = await openAI.onChatCompletion(request: request);
}
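When the model chooses to call the function, the assistant message in the ChatCTResponse carries the function name and a JSON string of arguments (the exact accessors depend on the SDK's response model). A sketch of dispatching such a call once those two strings are extracted; getCurrentWeather is a hypothetical local handler:

import 'dart:convert';

// hypothetical local implementation the model's call is routed to
String getCurrentWeather(String location, {String unit = 'celsius'}) {
  return '22 degrees $unit in $location';
}

// name and argumentsJson come from the assistant message's function call
// in the response; accessor names vary by SDK version.
String dispatchFunctionCall(String name, String argumentsJson) {
  final args = jsonDecode(argumentsJson) as Map<String, dynamic>;
  switch (name) {
    case 'get_current_weather':
      return getCurrentWeather(
        args['location'] as String,
        unit: (args['unit'] as String?) ?? 'celsius',
      );
    default:
      return 'unknown function: $name';
  }
}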

Error Handle #

///using catchError
openAI.onCompletion(request: request).catchError((err) {
  if (err is OpenAIAuthError) {
    print('OpenAIAuthError error ${err.data?.error?.toMap()}');
  }
  if (err is OpenAIRateLimitError) {
    print('OpenAIRateLimitError error ${err.data?.error?.toMap()}');
  }
  if (err is OpenAIServerError) {
    print('OpenAIServerError error ${err.data?.error?.toMap()}');
  }
});

///using try catch
 try {
   await openAI.onCompletion(request: request);
 } on OpenAIRateLimitError catch (err) {
   print('catch error ->${err.data?.error?.toMap()}');
 }

///with stream
openAI
    .onCompletionSSE(request: request)
    .transform(StreamTransformer.fromHandlers(
      handleError: (error, stackTrace, sink) {
        if (error is OpenAIRateLimitError) {
          print('OpenAIRateLimitError error ->${error.data?.message}');
        }
      },
    ))
    .listen((event) {
      print("success");
    });
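A simple retry wrapper built on the try/catch form above, backing off when OpenAIRateLimitError is thrown (the attempt count and delays are arbitrary):

Future<CTResponse?> completeWithRetry(CompleteText request,
    {int maxAttempts = 3}) async {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await openAI.onCompletion(request: request);
    } on OpenAIRateLimitError catch (err) {
      if (attempt == maxAttempts) rethrow;
      print('rate limited, retrying -> ${err.data?.error?.toMap()}');
      await Future.delayed(Duration(seconds: attempt * 2));
    }
  }
  return null;
}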

Q&A #

  • Example Q&A
    • Answer questions based on existing knowledge.
final request = CompleteText(
    prompt: 'What is human life expectancy in the United States?',
    model: TextDavinci3Model(),
    maxTokens: 200);

final response = await openAI.onCompletion(request: request);
  • Request
Q: What is human life expectancy in the United States?
  • Response
A: Human life expectancy in the United States is 78 years.

Generate Image With Prompt #

  • Generate Image

    • prompt
      • A text description of the desired image(s). The maximum length is 1000 characters.
    • n
      • The number of images to generate. Must be between 1 and 10.
    • size
      • The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024.
    • response_format
      • The format in which the generated images are returned. Must be one of url or b64_json.
    • user
      • A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
  • Generate with feature

void _generateImage() async {
  const prompt = "cat eating snake blue red.";

  final request = GenerateImage(prompt, 1,
      size: ImageSize.size256, responseFormat: Format.url);
  final response = await openAI.generateImage(request);
  print("img url :${response.data?.last?.url}");
}
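When responseFormat is Format.url, the returned URL can be shown with a standard Flutter Image.network widget; a minimal sketch (buildGeneratedImage is just an illustrative helper):

// requires: import 'package:flutter/material.dart';
Widget buildGeneratedImage(String? url) {
  // url comes from response.data?.last?.url as in the snippet above
  if (url == null) return const SizedBox.shrink();
  return Image.network(url);
}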

Edit #

  • Edit Prompt
void editPrompt() async {
    final response = await openAI.editor.prompt(EditRequest(
        model: CodeEditModel(),
        input: 'What day of the wek is it?',
        instruction: 'Fix the spelling mistakes'));

    print(response.choices.last.text);
  }
  • Edit Image
 void editImage() async {
  final response = await openAI.editor.editImage(EditImageRequest(
          image: EditFile("${image?.path}", '${image?.name}'),
          mask: EditFile('file path', 'file name'),
          size: ImageSize.size1024,
          prompt: 'King Snake'));

  print(response.data?.last?.url);
}
  • Variations
  void variation() async {
  final request =
  Variation(image: EditFile('${image?.path}', '${image?.name}'));
  final response = await openAI.editor.variation(request);

  print(response.data?.last?.url);
}
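The image variable used in editImage and variation above is any picked file whose path and name can be passed to EditFile. One common way to obtain it is the separate image_picker package (not part of this SDK); a minimal sketch:

// requires the image_picker package
XFile? image;

Future<void> pickImage() async {
  image = await ImagePicker().pickImage(source: ImageSource.gallery);
}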

Cancel Generate #

  • Stop Generate Prompt
openAI.onChatCompletionSSE(request: request, onCancel: onCancel);

///CancelData
CancelData? mCancel;
void onCancel(CancelData cancelData) {
  mCancel = cancelData;
}

mCancel?.cancelToken.cancel("canceled ");
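In a Flutter UI the captured CancelData is typically cancelled from a button; a minimal sketch:

// requires: import 'package:flutter/material.dart';
ElevatedButton(
  onPressed: () => mCancel?.cancelToken.cancel("canceled by user"),
  child: const Text('Stop'),
)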
  • Stop Edit
    • image
    • prompt
openAI.editor.editImage(request, onCancel: onCancel);

///CancelData
CancelData? mCancel;
void onCancel(CancelData cancelData) {
  mCancel = cancelData;
}

mCancel?.cancelToken.cancel("canceled edit image");
  • Stop Embedding
openAI.embed.embedding(request,onCancel: onCancel);

///CancelData
CancelData? mCancel;
void onCancel(CancelData cancelData) {
  mCancel = cancelData;
}

mCancel?.cancelToken.cancel("canceled embedding");
  • Stop Audio
    • translate
    • transcript
openAI.audio.transcribes(request,onCancel: onCancel);

///CancelData
CancelData? mCancel;
void onCancel(CancelData cancelData) {
  mCancel = cancelData;
}

mCancel?.cancelToken.cancel("canceled audio transcribes");
  • Stop File
    • upload file
    • get file
    • delete file
openAI.file.uploadFile(request,onCancel: onCancel);

///CancelData
CancelData? mCancel;
void onCancel(CancelData cancelData) {
  mCancel = cancelData;
}

mCancel?.cancelToken.cancel("canceled uploadFile");

File #

  • Get File
void getFile() async {
  final response = await openAI.file.get();
  print(response.data);
}
  • Upload File
void uploadFile() async {
  final request = UploadFile(file: EditFile('file-path', 'file-name'),purpose: 'fine-tune');
  final response = await openAI.file.uploadFile(request);
  print(response);
}
  • Delete File
  void delete() async {
  final response = await openAI.file.delete("file-Id");
  print(response);
}
  • Retrieve File
  void retrieve() async {
  final response = await openAI.file.retrieve("file-Id");
  print(response);
}
  • Retrieve Content File
  void retrieveContent() async {
  final response = await openAI.file.retrieveContent("file-Id");
  print(response);
}
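Fine-tune training data is uploaded as a JSONL file. A sketch that writes one locally with dart:io and uploads it through the file API shown above (the file path and contents are only examples):

import 'dart:io';

Future<void> uploadTrainingData() async {
  // one JSON object per line (JSONL)
  final jsonl = File('training_data.jsonl');
  await jsonl.writeAsString(
      '{"prompt": "hello ->", "completion": " world"}\n');

  final request = UploadFile(
      file: EditFile(jsonl.path, 'training_data.jsonl'),
      purpose: 'fine-tune');
  final response = await openAI.file.uploadFile(request);
  print(response);
}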

Audio #

  • Audio Translate
void audioTranslate() async {
  final mAudio = File('mp3-path');
  final request =
  AudioRequest(file: EditFile(mAudio.path, 'name'), prompt: '...');

  final response = await openAI.audio.translate(request);
}
  • Audio Transcribe
void audioTranscribe() async {
  final mAudio = File('mp3-path');
  final request =
  AudioRequest(file: EditFile(mAudio.path, 'name'), prompt: '...');

  final response = await openAI.audio.transcribes(request);
}

Embedding #

  • Embedding
void embedding() async {
  final request = EmbedRequest(
          model: TextSearchAdaDoc001EmbedModel(),
          input: 'The food was delicious and the waiter');

  final response = await openAI.embed.embedding(request);

  print(response.data.last.embedding);
}
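Embedding vectors are typically compared with cosine similarity; a small helper in plain Dart that works on the vectors returned above (assuming the embedding field is a List<double>):

import 'dart:math' as math;

double cosineSimilarity(List<double> a, List<double> b) {
  assert(a.length == b.length);
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (math.sqrt(normA) * math.sqrt(normB));
}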

Fine Tune #

  • Create Fine Tune
void createFineTune() async {
  final request = CreateFineTuneJob(trainingFile: 'The ID of an uploaded file');
  final response = await openAI.fineTune.createFineTuneJob(request);
}
  • Fine Tune List
 void fineTuneList() async {
    final response = await openAI.fineTune.listFineTuneJob();
  }
  • Fine Tune List Stream (SSE)
 void fineTuneListStream() {
    openAI.fineTune.listFineTuneJobStream('fineTuneId').listen((it) {
      ///handled data
    });
  }
  • Fine Tune Get by Id
void fineTuneById() async {
    final response = await openAI.fineTune.retrieveFineTuneJob('fineTuneId');
  }
  • Cancel Fine Tune
  void fineTuneCancel() async {
    final response = await openAI.fineTune.cancel('fineTuneId');
  }
  • Delete Fine Tune
 void deleteFineTune() async {
    final response = await openAI.fineTune.delete('model');
  }

Moderations #

  • Create Moderation
  void createModeration() async {
  final response = await openAI.moderation
          .create(input: 'input', model: TextLastModerationModel());
}

Model&Engine #

final models = await openAI.listModel();
final engines = await openAI.listEngine();

Translate App #

ChatGPT Demo App #

Google Play

Video Tutorials #

Docs (Thai supported) #

ChatGPT Part 1, ChatGPT Part 2


Homepage
Repository (GitHub)
View/report issues

License

unknown

Dependencies

dio, http_parser
