openai_dart library

Dart client for the OpenAI API. Supports chat (GPT-4o and others), completions, embeddings, images (DALL·E 3), assistants (threads, runs, vector stores, etc.), batches, and fine-tuning.
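A minimal usage sketch, following the call shape documented for the package's chat API (the model ID and API key are placeholders):

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  // The API key is a placeholder; read it from the environment in real code.
  final client = OpenAIClient(apiKey: 'YOUR_OPENAI_API_KEY');

  final res = await client.createChatCompletion(
    request: CreateChatCompletionRequest(
      model: ChatCompletionModel.modelId('gpt-4o'),
      messages: [
        ChatCompletionMessage.system(content: 'You are a helpful assistant.'),
        ChatCompletionMessage.user(
          content: ChatCompletionUserMessageContent.string('Hello!'),
        ),
      ],
    ),
  );

  // The first generated choice contains the assistant's reply.
  print(res.choices.first.message.content);
}
```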

Classes

AssistantModel
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
AssistantModelEnumeration
AssistantModelString
AssistantObject
Represents an assistant that can call the model and use tools.
AssistantObjectResponseFormat
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
AssistantObjectResponseFormatEnumeration
AssistantObjectResponseFormatResponseFormat
AssistantsFunctionCallOption
Specifies the function to be called by the model.
AssistantsNamedToolChoice
Specifies a tool the model should use. Use to force the model to call a specific tool.
AssistantStreamEvent
Represents an event emitted when streaming a Run.
AssistantTools
A tool that can be used by an assistant.
AssistantToolsCodeInterpreter
AssistantToolsFileSearch
AssistantToolsFileSearchFileSearch
Overrides for the file search tool.
AssistantToolsFunction
AutoChunkingStrategyRequestParam
Batch
Represents a batch of requests.
BatchErrors
The errors that occurred while processing the batch, if any.
BatchErrorsDataInner
An individual error that occurred during the batch.
BatchRequestCounts
The request counts for different statuses within the batch.
ChatCompletionAssistantMessage
ChatCompletionAssistantMessageAudio
If the audio output modality is requested, this object contains data about the audio response from the model. Learn more.
ChatCompletionAudioOptions
Parameters for audio output. Required when audio output is requested with modalities: ["audio"]. Learn more.
ChatCompletionFunctionCall
Deprecated in favor of tool_choice.
ChatCompletionFunctionCallChatCompletionFunctionCallOption
ChatCompletionFunctionCallEnumeration
ChatCompletionFunctionCallOption
Forces the model to call the specified function.
ChatCompletionFunctionMessage
ChatCompletionLogprobs
Log probability information for the choice.
ChatCompletionMessage
A message in a chat conversation.
ChatCompletionMessageContentPart
A content part of a user message.
ChatCompletionMessageContentPartAudio
ChatCompletionMessageContentPartImage
ChatCompletionMessageContentPartRefusal
ChatCompletionMessageContentParts
ChatCompletionMessageContentPartText
ChatCompletionMessageFunctionCall
The name and arguments of a function that should be called, as generated by the model.
ChatCompletionMessageImageUrl
The URL of the image.
ChatCompletionMessageInputAudio
The audio input.
ChatCompletionMessageToolCall
A tool call generated by the model, such as a function call.
ChatCompletionModel
ID of the model to use. See the model endpoint compatibility table for details on which models work with the Chat API.
ChatCompletionModelEnumeration
ChatCompletionModelString
ChatCompletionNamedToolChoice
Specifies a tool the model should use. Use to force the model to call a specific function.
ChatCompletionResponseChoice
A choice the model generated for the input prompt.
ChatCompletionStop
Up to 4 sequences where the API will stop generating further tokens.
ChatCompletionStopListString
ChatCompletionStopString
ChatCompletionStreamMessageFunctionCall
The name and arguments of a function that should be called, as generated by the model.
ChatCompletionStreamMessageToolCallChunk
The tool that should be called, as generated by the model.
ChatCompletionStreamOptions
Options for streaming response. Only set this when you set stream: true.
ChatCompletionStreamResponseChoice
A choice the model generated for the input prompt.
ChatCompletionStreamResponseChoiceLogprobs
Log probability information for the choice.
ChatCompletionStreamResponseDelta
A chat completion delta generated by streamed model responses.
ChatCompletionStreamResponseDeltaAudio
If the audio output modality is requested, this object contains data about the audio response from the model. Learn more.
ChatCompletionSystemMessage
ChatCompletionTokenLogprob
Log probability information for a token.
ChatCompletionTokenTopLogprob
Most likely tokens and their log probability, at this token position.
ChatCompletionTool
A tool the model may use.
ChatCompletionToolChoiceOption
Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
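The modes described above map onto this class's union constructors. A sketch under the assumption that the union factories follow the package's naming convention visible in the class names listed here (`my_function` is a hypothetical function name):

```dart
import 'package:openai_dart/openai_dart.dart';

void main() {
  // Let the model pick between a message and one or more tool calls.
  final auto = ChatCompletionToolChoiceOption.mode(
    ChatCompletionToolChoiceMode.auto,
  );

  // Force the model to call one specific (hypothetical) function,
  // equivalent to {"type": "function", "function": {"name": "my_function"}}.
  final forced = ChatCompletionToolChoiceOption.tool(
    ChatCompletionNamedToolChoice(
      type: ChatCompletionNamedToolChoiceType.function,
      function: ChatCompletionFunctionCallOption(name: 'my_function'),
    ),
  );

  print(auto.toJson());
  print(forced.toJson());
}
```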
ChatCompletionToolChoiceOptionChatCompletionNamedToolChoice
ChatCompletionToolChoiceOptionEnumeration
ChatCompletionToolMessage
ChatCompletionUserMessage
ChatCompletionUserMessageContent
The contents of the user message.
ChatCompletionUserMessageContentString
ChunkingStrategyRequestParam
The chunking strategy used to chunk the file(s). If not set, the auto strategy will be used.
ChunkingStrategyResponseParam
The chunking strategy used to chunk the file(s).
CompletionChoice
A choice the model generated for the input prompt.
CompletionLogprobs
The probabilities of the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response.
CompletionModel
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
CompletionModelEnumeration
CompletionModelString
CompletionPrompt
The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
CompletionPromptListInt
CompletionPromptListListInt
CompletionPromptListString
CompletionPromptString
CompletionStop
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
CompletionStopListString
CompletionStopString
CompletionTokensDetails
Breakdown of tokens used in a completion.
CompletionUsage
Usage statistics for the completion request.
CreateAssistantRequest
Request object for the Create assistant endpoint.
CreateAssistantRequestResponseFormat
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
CreateAssistantRequestResponseFormatEnumeration
CreateAssistantRequestResponseFormatResponseFormat
CreateBatchRequest
Represents a request to create a new batch.
CreateChatCompletionRequest
Request object for the Create chat completion endpoint.
CreateChatCompletionResponse
Represents a chat completion response returned by the model, based on the provided input.
CreateChatCompletionStreamResponse
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
CreateCompletionRequest
Request object for the Create completion endpoint.
CreateCompletionResponse
Represents a completion response from the API. Note: both the streamed and non-streamed response objects share the same shape (unlike the chat endpoint).
CreateEmbeddingRequest
Request object for the Create embedding endpoint.
CreateEmbeddingResponse
Represents the response returned by the Create embedding endpoint, containing the embedding vectors.
CreateFineTuningJobRequest
Request object for the Create fine-tuning job endpoint.
CreateImageRequest
Request object for the Create image endpoint.
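A sketch of an image-generation request, assuming the model union exposes a `.model` factory over the `ImageModels` enum listed below (the prompt and key are placeholders):

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient(apiKey: 'YOUR_OPENAI_API_KEY');

  final res = await client.createImage(
    request: CreateImageRequest(
      model: CreateImageRequestModel.model(ImageModels.dallE3),
      prompt: 'A cute baby sea otter wearing a beret',
    ),
  );

  // Each returned Image carries either a URL or base64-encoded content,
  // depending on the requested ImageResponseFormat.
  print(res.data.first.url);
}
```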
CreateImageRequestModel
The model to use for image generation.
CreateImageRequestModelEnumeration
CreateImageRequestModelString
CreateMessageRequest
Request object for the Create message endpoint.
CreateMessageRequestContent
The content of the message.
CreateMessageRequestContentListMessageContent
CreateMessageRequestContentString
CreateModerationRequest
Request object for the Create moderation endpoint.
CreateModerationResponse
Indicates whether a given text input is potentially harmful.
CreateRunRequest
Request object for the Create run endpoint.
CreateRunRequestModel
The ID of the Model to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
CreateRunRequestModelEnumeration
CreateRunRequestModelString
CreateRunRequestResponseFormat
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
CreateRunRequestResponseFormatEnumeration
CreateRunRequestResponseFormatResponseFormat
CreateRunRequestToolChoice
Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
CreateRunRequestToolChoiceAssistantsNamedToolChoice
CreateRunRequestToolChoiceEnumeration
CreateThreadAndRunRequest
Request object for the Create thread and run endpoint.
CreateThreadAndRunRequestResponseFormat
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
CreateThreadAndRunRequestResponseFormatEnumeration
CreateThreadAndRunRequestResponseFormatResponseFormat
CreateThreadAndRunRequestToolChoice
Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
CreateThreadAndRunRequestToolChoiceAssistantsNamedToolChoice
CreateThreadAndRunRequestToolChoiceEnumeration
CreateThreadRequest
Request object for the Create thread endpoint.
CreateVectorStoreFileBatchRequest
Request object for the Create vector store file batch endpoint.
CreateVectorStoreFileRequest
Request object for the Create vector store file endpoint.
CreateVectorStoreRequest
Request object for the Create vector store endpoint.
DeleteAssistantResponse
Represents a deleted response returned by the Delete assistant endpoint.
DeleteMessageResponse
Represents a deleted response returned by the Delete message endpoint.
DeleteModelResponse
Represents a deleted response returned by the Delete model endpoint.
DeleteThreadResponse
Represents a deleted response returned by the Delete thread endpoint.
DeleteVectorStoreFileResponse
Response object for the Delete vector store file endpoint.
DeleteVectorStoreResponse
Response object for the Delete vector store endpoint.
DoneEvent
Embedding
Represents an embedding vector returned by the embedding endpoint.
EmbeddingInput
Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for text-embedding-ada-002), cannot be an empty string, and any array must be 2048 dimensions or less. Example Python code for counting tokens.
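A sketch of an embedding request using the union factories implied by the subclasses listed below (`EmbeddingInput.string`, `EmbeddingInput.listString`); the model ID and key are placeholders:

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient(apiKey: 'YOUR_OPENAI_API_KEY');

  final res = await client.createEmbedding(
    request: CreateEmbeddingRequest(
      model: EmbeddingModel.modelId('text-embedding-3-small'),
      // To embed multiple inputs in one request, use
      // EmbeddingInput.listString(['first text', 'second text']) instead.
      input: EmbeddingInput.string('The food was delicious'),
    ),
  );

  // Each Embedding in res.data carries an EmbeddingVector union
  // (a list of doubles, or a base64 string depending on encoding_format).
  print(res.data.first.embedding);
}
```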
EmbeddingInputListInt
EmbeddingInputListListInt
EmbeddingInputListString
EmbeddingInputString
EmbeddingModel
ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
EmbeddingModelEnumeration
EmbeddingModelString
EmbeddingUsage
The usage information for the request.
EmbeddingVector
The embedding vector, which is a list of floats. The length of vector depends on the model as listed in the embedding guide.
EmbeddingVectorListDouble
EmbeddingVectorString
Error
Represents an error that occurred during an API request.
ErrorEvent
FileSearchRankingOptions
The ranking options for the file search. If not specified, the file search tool will use the auto ranker and a score_threshold of 0.
FineTuningIntegration
A fine-tuning integration to enable for a fine-tuning job.
FineTuningIntegrationWandb
The settings for your integration with Weights and Biases. This payload specifies the project that metrics will be sent to. Optionally, you can set an explicit display name for your run, add tags to your run, and set a default entity (team, username, etc.) to be associated with your run.
FineTuningJob
The fine_tuning.job object represents a fine-tuning job that has been created through the API.
FineTuningJobCheckpoint
The fine_tuning.job.checkpoint object represents a model checkpoint for a fine-tuning job that is ready to use.
FineTuningJobCheckpointMetrics
Metrics at the step number during the fine-tuning job.
FineTuningJobError
For fine-tuning jobs that have failed, this will contain more information on the cause of the failure.
FineTuningJobEvent
Fine-tuning job event object.
FineTuningJobHyperparameters
The hyperparameters used for the fine-tuning job. See the fine-tuning guide for more details.
FineTuningModel
The name of the model to fine-tune. You can select one of the supported models.
FineTuningModelEnumeration
FineTuningModelString
FineTuningNEpochs
The number of epochs to train the model for. An epoch refers to one full cycle through the training dataset.
FineTuningNEpochsEnumeration
FineTuningNEpochsInt
FunctionObject
A function that the model may call.
Image
Represents the URL or the content of an image generated by the OpenAI API.
ImagesResponse
Represents a generated image returned by the images endpoint.
JsonSchemaObject
A JSON Schema object.
ListAssistantsResponse
Represents a list of assistants returned by the List assistants endpoint.
ListBatchesResponse
Represents a list of batches returned by the List batches endpoint.
ListFineTuningJobCheckpointsResponse
Represents a list of fine-tuning job checkpoints.
ListFineTuningJobEventsResponse
Represents a list of fine-tuning job events.
ListMessagesResponse
Represents a list of messages returned by the List messages endpoint.
ListModelsResponse
Represents a list of models returned by the List models endpoint.
ListPaginatedFineTuningJobsResponse
Represents a list of fine-tuning jobs.
ListRunsResponse
Represents a list of runs returned by the List runs endpoint.
ListRunStepsResponse
Represents a list of run steps returned by the List run steps endpoint.
ListThreadsResponse
Represents a list of threads returned by the List threads endpoint.
ListVectorStoreFilesResponse
Represents a list of vector store files returned by the List vector store files endpoint.
ListVectorStoresResponse
Represents a list of vector stores returned by the List vector stores endpoint.
MessageAttachment
An attachment to a message.
MessageContent
The content of a message.
MessageContentImageFile
The image file that is part of a message.
MessageContentImageFileObject
MessageContentImageUrl
The image URL part of a message.
MessageContentImageUrlObject
MessageContentRefusalObject
MessageContentText
The text content that is part of a message.
MessageContentTextAnnotations
An annotation within the message that points to a specific quote from a specific File associated with the assistant or the message.
MessageContentTextAnnotationsFileCitation
A citation within the message that points to a specific quote from a specific File associated with the assistant or the message.
MessageContentTextAnnotationsFileCitationObject
MessageContentTextAnnotationsFilePath
A file path annotation, generated when the assistant uses the code_interpreter tool to generate a file.
MessageContentTextAnnotationsFilePathObject
MessageContentTextObject
MessageDelta
The delta containing the fields that have changed on the Message.
MessageDeltaContent
The content of a message delta.
MessageDeltaContentImageFileObject
MessageDeltaContentImageUrlObject
MessageDeltaContentRefusalObject
MessageDeltaContentText
The text content that is part of a message.
MessageDeltaContentTextAnnotations
An annotation within the message that points to a specific quote from a specific File associated with the assistant or the message.
MessageDeltaContentTextAnnotationsFileCitation
A citation within the message that points to a specific quote from a specific File associated with the assistant or the message.
MessageDeltaContentTextAnnotationsFileCitationObject
MessageDeltaContentTextAnnotationsFilePathObject
MessageDeltaContentTextAnnotationsFilePathObjectFilePath
Details of the file that was generated.
MessageDeltaContentTextObject
MessageDeltaObject
Represents a message delta, i.e. any changed fields on a message during streaming.
MessageObject
Represents a message within a thread.
MessageObjectIncompleteDetails
On an incomplete message, details about why the message is incomplete.
MessageRequestContentTextObject
The text content that is part of a message.
MessageStreamDeltaEvent
MessageStreamEvent
Model
Describes an OpenAI model offering that can be used with the API.
Moderation
Represents a policy compliance report from OpenAI's content moderation model for a given input.
ModerationCategories
A list of the categories, and whether they are flagged or not.
ModerationCategoriesAppliedInputTypes
A list of the categories along with the input type(s) that the score applies to.
ModerationCategoriesScores
A list of the categories along with their scores as predicted by the model.
ModerationInput
Input (or inputs) to classify. Can be a single string, an array of strings, or an array of multi-modal input objects similar to other models.
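A sketch of a moderation request, assuming the union factories follow the convention implied by the subclasses listed below (`ModerationInput.string`, `ModerationInput.listString`); the model ID and key are placeholders:

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient(apiKey: 'YOUR_OPENAI_API_KEY');

  final res = await client.createModeration(
    request: CreateModerationRequest(
      model: ModerationModel.modelId('omni-moderation-latest'),
      // Pass ModerationInput.listString([...]) to classify several
      // inputs at once, or multi-modal ModerationInputObject items.
      input: ModerationInput.string('Some text to classify.'),
    ),
  );

  final result = res.results.first;
  print(result.flagged);
}
```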
ModerationInputListModerationInputObject
ModerationInputListString
ModerationInputObject
A multi-modal input to the moderation model.
ModerationInputObjectImageUrl
ModerationInputObjectImageUrlImageUrl
Contains either an image URL or a data URL for a base64 encoded image.
ModerationInputObjectText
ModerationInputString
ModerationModel
The content moderation model you would like to use. Learn more in the moderation guide, and learn about available models here.
ModerationModelEnumeration
ModerationModelString
ModifyAssistantRequest
Request object for the Modify assistant endpoint.
ModifyAssistantRequestResponseFormat
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
ModifyAssistantRequestResponseFormatEnumeration
ModifyAssistantRequestResponseFormatResponseFormat
ModifyMessageRequest
Request object for the Modify message endpoint.
ModifyRunRequest
Request object for the Modify run endpoint.
ModifyThreadRequest
Request object for the Modify thread endpoint.
OpenAIClient
Client for OpenAI API.
OtherChunkingStrategyResponseParam
PredictionContent
Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time. This is most common when you are regenerating a file with only minor changes to most of the content.
PredictionContentContent
The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly.
PredictionContentContentListChatCompletionMessageContentPartText
PredictionContentContentString
ResponseFormat
An object specifying the format that the model must output. Compatible with GPT-4o, GPT-4o mini, GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.
ResponseFormatJsonObject
ResponseFormatJsonSchema
ResponseFormatText
RunCompletionUsage
Usage statistics related to the run. This value will be null if the run is not in a terminal state (i.e. in_progress, queued, etc.).
RunLastError
The last error associated with this run. Will be null if there are no errors.
RunObject
Represents an execution run on a thread.
RunObjectIncompleteDetails
Details on why the run is incomplete. Will be null if the run is not incomplete.
RunObjectResponseFormat
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
RunObjectResponseFormatEnumeration
RunObjectResponseFormatResponseFormat
RunObjectToolChoice
Controls which (if any) tool is called by the model. none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
RunObjectToolChoiceAssistantsNamedToolChoice
RunObjectToolChoiceEnumeration
RunRequiredAction
Details on the action required to continue the run. Will be null if no action is required.
RunStepCompletionUsage
Usage statistics related to the run step. This value will be null while the run step's status is in_progress.
RunStepDelta
The delta containing the fields that have changed on the run step.
RunStepDeltaDetails
The details of the run step.
RunStepDeltaObject
Represents a run step delta, i.e. any changed fields on a run step during streaming.
RunStepDeltaStepDetailsMessageCreation
Details of the message creation by the run step.
RunStepDeltaStepDetailsMessageCreationObject
RunStepDeltaStepDetailsToolCalls
Tool calls the run step was involved in.
RunStepDeltaStepDetailsToolCallsCodeObject
RunStepDeltaStepDetailsToolCallsCodeObjectCodeInterpreter
The Code Interpreter tool call definition.
RunStepDeltaStepDetailsToolCallsCodeOutput
The output of the Code Interpreter tool call.
RunStepDeltaStepDetailsToolCallsCodeOutputImage
Code interpreter image output.
RunStepDeltaStepDetailsToolCallsCodeOutputImageObject
RunStepDeltaStepDetailsToolCallsCodeOutputLogsObject
RunStepDeltaStepDetailsToolCallsFileSearchObject
RunStepDeltaStepDetailsToolCallsFunction
The definition of the function that was called.
RunStepDeltaStepDetailsToolCallsFunctionObject
RunStepDeltaStepDetailsToolCallsObject
RunStepDetails
The details of the run step.
RunStepDetailsMessageCreation
Details of the message creation by the run step.
RunStepDetailsMessageCreationObject
RunStepDetailsToolCalls
Tool calls the run step was involved in.
RunStepDetailsToolCallsCodeObject
RunStepDetailsToolCallsCodeObjectCodeInterpreter
The Code Interpreter tool call definition.
RunStepDetailsToolCallsCodeOutput
The output of the Code Interpreter tool call.
RunStepDetailsToolCallsCodeOutputImage
Code interpreter image output.
RunStepDetailsToolCallsCodeOutputImageObject
RunStepDetailsToolCallsCodeOutputLogsObject
RunStepDetailsToolCallsFileSearch
The definition of the file search that was called.
RunStepDetailsToolCallsFileSearchObject
RunStepDetailsToolCallsFileSearchRankingOptionsObject
The ranking options for the file search.
RunStepDetailsToolCallsFileSearchResultContent
The content of the result that was found.
RunStepDetailsToolCallsFileSearchResultObject
A result instance of the file search.
RunStepDetailsToolCallsFunction
The definition of the function that was called.
RunStepDetailsToolCallsFunctionObject
RunStepDetailsToolCallsObject
RunStepLastError
The last error associated with this run step. Will be null if there are no errors.
RunStepObject
Represents a step in execution of a run.
RunStepStreamDeltaEvent
RunStepStreamEvent
RunStreamEvent
RunSubmitToolOutput
Output of a tool.
RunSubmitToolOutputs
Details on the tool outputs needed for this run to continue.
RunToolCallFunction
The function definition.
RunToolCallObject
Tool call object.
StaticChunkingStrategy
Static chunking strategy.
StaticChunkingStrategyRequestParam
StaticChunkingStrategyResponseParam
SubmitToolOutputsRunRequest
Request object for the Submit tool outputs to run endpoint.
ThreadAndRunModel
The ID of the Model to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
ThreadAndRunModelEnumeration
ThreadAndRunModelString
ThreadObject
Represents a thread that contains messages.
ThreadStreamEvent
ToolResources
A set of resources that are made available to the assistant's tools in this thread. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
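A sketch of constructing these resources; the field names (`fileIds`, `vectorStoreIds`) are assumptions derived from the API fields the description mentions, and the IDs are placeholders:

```dart
import 'package:openai_dart/openai_dart.dart';

void main() {
  // code_interpreter takes a list of file IDs, while file_search takes
  // a list of vector store IDs, matching the description above.
  final resources = ToolResources(
    codeInterpreter: ToolResourcesCodeInterpreter(fileIds: ['file-abc123']),
    fileSearch: ToolResourcesFileSearch(vectorStoreIds: ['vs_abc123']),
  );

  // These resources can then be passed to e.g. a CreateThreadRequest
  // or CreateAssistantRequest.
  print(resources.toJson());
}
```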
ToolResourcesCodeInterpreter
Resources available to the code_interpreter tool.
ToolResourcesFileSearch
Resources available to the file_search tool.
ToolResourcesFileSearchVectorStore
A helper to create a vector store with file_ids and attach it to this thread.
TruncationObject
Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.
Uint8ListConverter
UpdateVectorStoreRequest
Request object for the Update vector store endpoint.
VectorStoreExpirationAfter
The expiration policy for a vector store.
VectorStoreFileBatchObject
A batch of files attached to a vector store.
VectorStoreFileBatchObjectFileCounts
The number of files per status.
VectorStoreFileObject
A file attached to a vector store.
VectorStoreFileObjectLastError
The last error associated with this vector store file. Will be null if there are no errors.
VectorStoreObject
A vector store is a collection of processed files that can be used by the file_search tool.
VectorStoreObjectFileCounts
The number of files in the vector store.

Enums

AssistantModels
Available assistant models. Note that the list may not be exhaustive or up-to-date.
AssistantObjectObject
The object type, which is always assistant.
AssistantResponseFormatMode
auto is the default value.
AssistantsToolType
The type of the tool. If type is function, the function name must be set.
AssistantToolsEnumType
BatchCompletionWindow
The time frame within which the batch should be processed. Currently only 24h is supported.
BatchEndpoint
The endpoint to be used for all requests in the batch. Currently /v1/chat/completions, /v1/embeddings, and /v1/completions are supported. Note that /v1/embeddings batches are also restricted to a maximum of 50,000 embedding inputs across all requests in the batch.
BatchObject
The object type, which is always batch.
BatchStatus
The current status of the batch.
ChatCompletionAudioFormat
Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16.
ChatCompletionAudioVoice
The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, sage, shimmer, and verse.
ChatCompletionFinishReason
The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.
ChatCompletionFunctionCallMode
none means the model will not call a function and instead generates a message. auto means the model can pick between generating a message or calling a function.
ChatCompletionMessageContentPartEnumType
ChatCompletionMessageContentPartType
The type of the content part.
ChatCompletionMessageEnumType
ChatCompletionMessageImageDetail
Specifies the detail level of the image. Learn more in the Vision guide.
ChatCompletionMessageInputAudioFormat
The format of the encoded audio data. Currently supports "wav" and "mp3".
ChatCompletionMessageRole
The role of the messages author. One of system, user, assistant, or tool (function is deprecated).
ChatCompletionMessageToolCallType
The type of the tool. Currently, only function is supported.
ChatCompletionModality
Output types that you would like the model to generate for this request.
ChatCompletionModels
Available chat completion models. Note that the list may not be exhaustive or up-to-date.
ChatCompletionNamedToolChoiceType
The type of the tool. Currently, only function is supported.
ChatCompletionStreamMessageToolCallChunkType
The type of the tool. Currently, only function is supported.
ChatCompletionToolChoiceMode
none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools.
ChatCompletionToolType
The type of the tool. Currently, only function is supported.
ChunkingStrategyRequestParamEnumType
ChunkingStrategyResponseParamEnumType
CompletionFinishReason
The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, or content_filter if content was omitted due to a flag from our content filters.
CompletionModels
Available completion models. Note that the list may not be exhaustive or up-to-date.
CreateAssistantResponseFormatMode
auto is the default value.
CreateChatCompletionRequestServiceTier
Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service.
CreateCompletionResponseObject
The object type, which is always "text_completion".
CreateEmbeddingResponseObject
The object type, which is always "list".
CreateRunRequestResponseFormatMode
auto is the default value.
CreateRunRequestToolChoiceMode
none means the model will not call any tools and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user.
CreateThreadAndRunRequestResponseFormatMode
auto is the default value.
CreateThreadAndRunRequestToolChoiceMode
none means the model will not call any tools and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user.
DeleteAssistantResponseObject
The object type, which is always assistant.deleted.
DeleteMessageResponseObject
The object type, which is always thread.message.deleted.
DeleteThreadResponseObject
The object type, which is always thread.deleted.
EmbeddingEncodingFormat
The format to return the embeddings in. Can be either float or base64.
EmbeddingModels
Available embedding models. Note that the list may not be exhaustive or up-to-date.
EmbeddingObject
The object type, which is always "embedding".
EventType
The type of the event.
FileSearchRanker
The ranker to use for the file search. If not specified, the auto ranker will be used.
FineTuningIntegrationType
The type of integration to enable. Currently, only "wandb" (Weights and Biases) is supported.
FineTuningJobCheckpointObject
The object type, which is always "fine_tuning.job.checkpoint".
FineTuningJobEventLevel
The log level of the event.
FineTuningJobEventObject
The object type, which is always "fine_tuning.job.event".
FineTuningJobObject
The object type, which is always "fine_tuning.job".
FineTuningJobStatus
The current status of the fine-tuning job, which can be either validating_files, queued, running, succeeded, failed, or cancelled.
FineTuningModels
Available fine-tuning models. Note that the list may not be exhaustive or up-to-date.
FineTuningNEpochsOptions
The mode for the number of epochs.
ImageModels
Available models for image generation. Note that the list may not be exhaustive or up-to-date.
ImageQuality
The quality of the image that will be generated. hd creates images with finer details and greater consistency across the image. This param is only supported for dall-e-3.
ImageResponseFormat
The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated.
ImageSize
The size of the generated images. Must be one of 256x256, 512x512, or 1024x1024 for dall-e-2. Must be one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models.
ImageStyle
The style of the generated images. Must be one of vivid or natural. Vivid causes the model to lean towards generating hyper-real and dramatic images. Natural causes the model to produce more natural, less hyper-real looking images. This param is only supported for dall-e-3.
ListBatchesResponseObject
The object type, which is always list.
ListFineTuningJobCheckpointsResponseObject
The object type, which is always "list".
ListFineTuningJobEventsResponseObject
The object type, which is always "list".
ListModelsResponseObject
The object type, which is always "list".
ListPaginatedFineTuningJobsResponseObject
The object type, which is always "list".
MessageContentEnumType
MessageContentImageDetail
Specifies the detail level of the image, if specified by the user. low uses fewer tokens; you can opt in to high-resolution processing with high.
MessageDeltaContentEnumType
MessageDeltaObjectObject
The object type, which is always thread.message.delta.
MessageObjectIncompleteDetailsReason
The reason the message is incomplete.
MessageObjectObject
The object type, which is always thread.message.
MessageObjectStatus
The status of the message, which can be either in_progress, incomplete, or completed.
MessageRole
The entity that produced the message. Either user or assistant.
ModelObject
The object type, which is always "model".
ModerationInputObjectEnumType
ModerationInputObjectType
The type of the input object.
ModerationModels
Available moderation models. Note that the list may not be exhaustive or up-to-date.
ModifyAssistantResponseFormatMode
auto is the default value.
ResponseFormatEnumType
ResponseFormatType
The type of response format being defined.
RunLastErrorCode
One of server_error, rate_limit_exceeded, or invalid_prompt.
RunModels
Available models. Note that the list may not be exhaustive or up-to-date.
RunObjectIncompleteDetailsReason
The reason why the run is incomplete. This will point to which specific token limit was reached over the course of the run.
RunObjectObject
The object type, which is always thread.run.
RunObjectResponseFormatMode
auto is the default value.
RunObjectToolChoiceMode
none means the model will not call any tools and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user.
RunRequiredActionType
For now, this is always submit_tool_outputs.
RunStatus
The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired.
RunStepDeltaObjectObject
The object type, which is always thread.run.step.delta.
RunStepLastErrorCode
One of server_error or rate_limit_exceeded.
RunStepObjectObject
The object type, which is always thread.run.step.
RunStepStatus
The status of the run step, which can be either in_progress, cancelled, failed, completed, or expired.
RunStepType
The type of run step, which can be either message_creation or tool_calls.
RunToolCallObjectType
The type of tool call the output is required for. For now, this is always function.
ServiceTier
The service tier used for processing the request. This field is only included if the service_tier parameter is specified in the request.
ThreadAndRunModels
Available models. Note that the list may not be exhaustive or up-to-date.
ThreadObjectObject
The object type, which is always thread.
TruncationObjectType
The truncation strategy to use for the thread. The default is auto. If set to last_messages, the thread will be truncated to the n most recent messages in the thread. When set to auto, messages in the middle of the thread will be dropped to fit the context length of the model, max_prompt_tokens.
VectorStoreExpirationAfterAnchor
Anchor timestamp after which the expiration policy applies. Supported anchors: last_active_at.
VectorStoreFileBatchObjectStatus
The status of the vector store files batch, which can be either in_progress, completed, cancelled, or failed.
VectorStoreFileObjectLastErrorCode
One of server_error or rate_limit_exceeded.
VectorStoreFileStatus
The status of the vector store file, which can be either in_progress, completed, cancelled, or failed. The status completed indicates that the vector store file is ready for use.
VectorStoreObjectStatus
The status of the vector store, which can be either expired, in_progress, or completed. A status of completed indicates that the vector store is ready for use.

Extensions

EmbeddingX on Embedding
Extension methods for Embedding.
MessageContentX on MessageContent
Extension methods for MessageContent.

Typedefs

ChatCompletionMessageToolCalls = List<ChatCompletionMessageToolCall>
The tool calls generated by the model, such as function calls.
FunctionParameters = Map<String, dynamic>
The parameters the functions accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.
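As an illustration, a FunctionParameters value is simply a Dart map holding a JSON Schema document (the typedef is an alias for Map<String, dynamic>). The function's property names below (location, unit) are hypothetical, not part of the library:

```dart
// Hypothetical JSON Schema describing the arguments of a
// weather-lookup function. FunctionParameters = Map<String, dynamic>.
final Map<String, dynamic> parameters = {
  'type': 'object',
  'properties': {
    'location': {
      'type': 'string',
      'description': 'City and country, e.g. "Paris, France"',
    },
    'unit': {
      'type': 'string',
      'enum': ['celsius', 'fahrenheit'],
    },
  },
  'required': ['location'],
};
```

Because the typedef is a plain map, any JSON Schema object that the API accepts can be expressed this way without extra wrapper types.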

Exceptions / Errors

OpenAIClientException
The exception thrown by OpenAIClient when an HTTP request fails.
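A minimal sketch of catching this exception around a client call. The createChatCompletion request shape and the exception's message and code fields are assumptions based on the class names listed on this page, not confirmed signatures; check the generated API docs for the exact shape:

```dart
import 'package:openai_dart/openai_dart.dart';

Future<void> main() async {
  final client = OpenAIClient(apiKey: 'sk-...');
  try {
    final res = await client.createChatCompletion(
      request: const CreateChatCompletionRequest(
        model: ChatCompletionModel.modelId('gpt-4o'),
        messages: [
          ChatCompletionMessage.user(
            content: ChatCompletionUserMessageContent.string('Hello!'),
          ),
        ],
      ),
    );
    print(res.choices.first.message.content);
  } on OpenAIClientException catch (e) {
    // Assumed fields; inspect the class for the actual members.
    print('OpenAI request failed: ${e.message} (code: ${e.code})');
  }
}
```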