OpenAIChatOptions class
Generation options to pass into the Chat Model.
- Inheritance
-
- Object
- ChatModelOptions
- OpenAIChatOptions
- Annotations
-
- @immutable
Constructors
- OpenAIChatOptions({double? frequencyPenalty, Map&lt;String, int&gt;? logitBias, int? maxTokens, int? n, double? presencePenalty, dynamic responseFormat, int? seed, List&lt;String&gt;? stop, double? temperature, double? topP, bool? parallelToolCalls, ChatOpenAIServiceTier? serviceTier, String? user, StreamOptions? streamOptions, bool? logprobs, int? topLogprobs, ReasoningEffort? reasoningEffort, Verbosity? verbosity, Prediction? prediction, List&lt;ChatModality&gt;? modalities, ChatAudioConfig? audio, WebSearchOptions? webSearchOptions, bool? store, Map&lt;String, dynamic&gt;? metadata, String? promptCacheKey, PromptCacheRetention? promptCacheRetention, String? safetyIdentifier})
-
Creates a new OpenAIChatOptions instance.
const
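As a minimal sketch, the options can be created with the const constructor; all parameters are optional, and unset values fall back to the API defaults (the import path is assumed from the LangChain.dart package layout):

```dart
import 'package:langchain_openai/langchain_openai.dart';

void main() {
  const options = OpenAIChatOptions(
    temperature: 0.2, // lower temperature → more deterministic sampling
    maxTokens: 256,   // cap the length of the completion
    seed: 42,         // best-effort deterministic sampling (Beta)
    stop: ['\n\n'],   // stop generating at the first blank line
  );
  print(options.temperature); // 0.2
}
```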
Properties
- audio → ChatAudioConfig?
-
Audio output configuration (voice and format). Required when
modalities includes ChatModality.audio.
final
- frequencyPenalty → double?
-
Number between -2.0 and 2.0. Positive values penalize new tokens based on
their existing frequency in the text so far, decreasing the model's
likelihood to repeat the same line verbatim.
final
- hashCode → int
-
The hash code for this object.
no setter, inherited
- logitBias → Map&lt;String, int&gt;?
-
Modify the likelihood of specified tokens appearing in the completion.
final
- logprobs → bool?
-
Whether to return log probabilities of the output tokens or not.
final
- maxTokens → int?
-
The maximum number of tokens to generate in the chat completion. Defaults
to infinity (no explicit limit).
final
- metadata → Map&lt;String, dynamic&gt;?
-
Custom metadata to attach to the request.
final
- modalities → List&lt;ChatModality&gt;?
-
Output modalities to request (e.g. text, audio).
final
- n → int?
-
How many chat completion choices to generate for each input message.
final
- parallelToolCalls → bool?
-
Whether to enable parallel tool calling during tool use. By default, it is
enabled.
final
- prediction → Prediction?
-
Predicted output, enabling faster responses when much of the response
content is known in advance. Works best with gpt-4o and gpt-4.1 models.
final
- presencePenalty → double?
-
Number between -2.0 and 2.0. Positive values penalize new tokens based on
whether they appear in the text so far, increasing the model's likelihood
to talk about new topics.
final
- promptCacheKey → String?
-
Prompt cache key for optimizing cache hit rates.
final
- promptCacheRetention → PromptCacheRetention?
-
Retention policy for the prompt cache.
final
- reasoningEffort → ReasoningEffort?
-
Controls reasoning effort for reasoning models (o1, o3, o4-mini).
final
- responseFormat → dynamic
-
An object specifying the format that the model must output.
final
- runtimeType → Type
-
A representation of the runtime type of the object.
no setter, inherited
- safetyIdentifier → String?
-
A stable identifier for detecting usage policy violations.
final
- seed → int?
-
This feature is in Beta. If specified, our system will make a best effort
to sample deterministically, such that repeated requests with the same
seed and parameters should return the same result. Determinism is not
guaranteed, and you should refer to the system_fingerprint response
parameter to monitor changes in the backend.
final
- serviceTier → ChatOpenAIServiceTier?
-
Specifies the latency tier to use for processing the request. This is
relevant for customers subscribed to the scale tier service.
final
-
stop
→ List<
String> ? -
Up to 4 sequences where the API will stop generating further tokens.
final
- store → bool?
-
Whether to store this completion for model improvements.
final
- streamOptions → StreamOptions?
-
Stream options for OpenAI chat completions.
final
- temperature → double?
-
What sampling temperature to use, between 0 and 2.
final
- topLogprobs → int?
-
An integer between 0 and 20 specifying the number of most likely tokens to
return at each token position.
final
- topP → double?
-
An alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with top_p probability
mass.
final
- user → String?
-
A unique identifier representing your end-user, which can help OpenAI
monitor and detect abuse.
final
- verbosity → Verbosity?
-
Controls the verbosity of the model's response.
final
- webSearchOptions → WebSearchOptions?
-
Web search options for including web results in the response.
final
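The options above are typically supplied to a chat model rather than used on their own. A sketch, assuming the `ChatOpenAI` model class, its `apiKey` and `defaultOptions` parameters, and the `PromptValue` helper from LangChain.dart:

```dart
import 'package:langchain_openai/langchain_openai.dart';

Future<void> main() async {
  final chatModel = ChatOpenAI(
    apiKey: 'sk-...', // your OpenAI API key
    defaultOptions: const OpenAIChatOptions(
      temperature: 0.7,
      n: 1,                 // a single completion choice per input
      presencePenalty: 0.5, // nudge the model toward new topics
    ),
  );
  final result = await chatModel.invoke(
    PromptValue.string('Tell me a joke.'),
  );
  print(result.output.content);
}
```

Per-call options can also override the defaults where the model's invocation API accepts an `options` argument.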
Methods
- noSuchMethod(Invocation invocation) → dynamic
-
Invoked when a nonexistent method or property is accessed.
inherited
- toString() → String
-
A string representation of this object.
inherited
Operators
- operator ==(Object other) → bool
-
The equality operator.
inherited