GptCompletionRequest class
- Annotations
-
- @JsonSerializable(includeIfNull: false)
Properties
-
bestOf
↔ int?
-
Generates best_of completions server-side and returns the "best"
(the one with the highest log probability per token).
Results cannot be streamed.
getter/setter pair
-
echo
↔ bool?
-
Echo back the prompt in addition to the completion.
getter/setter pair
-
frequencyPenalty
↔ double?
-
Number between -2.0 and 2.0. Positive values penalize new tokens
based on their existing frequency in the text so far, decreasing
the model's likelihood to repeat the same line verbatim.
getter/setter pair
-
hashCode
→ int
-
The hash code for this object.
no setter; inherited
-
logprobs
↔ int?
-
Include the log probabilities on the logprobs most likely tokens,
as well as the chosen tokens. For example, if logprobs is 5,
the API will return a list of the 5 most likely tokens.
The API will always return the logprob of the sampled token,
so there may be up to logprobs+1 elements in the response.
getter/setter pair
-
maxTokens
↔ int?
-
The maximum number of tokens to generate in the completion.
getter/setter pair
-
model
↔ String
-
ID of the model to use. You can use the List models API to see all
of your available models, or see our Model overview for
descriptions of them.
getter/setter pair
-
n
↔ int?
-
How many completions to generate for each prompt.
getter/setter pair
-
presencePenalty
↔ double?
-
Number between -2.0 and 2.0. Positive values penalize new tokens based
on whether they appear in the text so far, increasing the model's
likelihood to talk about new topics.
getter/setter pair
-
prompt
↔ List<String>
-
The prompt(s) to generate completions for. The underlying API accepts
a string, an array of strings, an array of tokens, or an array of token
arrays; this class encodes prompts as a list of strings.
getter/setter pair
-
runtimeType
→ Type
-
A representation of the runtime type of the object.
no setter; inherited
-
stop
↔ List<String>?
-
Up to 4 sequences where the API will stop generating further tokens.
The returned text will not contain the stop sequence.
getter/setter pair
-
suffix
↔ String?
-
The suffix that comes after a completion of inserted text.
getter/setter pair
-
temperature
↔ double?
-
What sampling temperature to use, between 0 and 2.
Higher values like 0.8 will make the output more random,
while lower values like 0.2 will make it more focused and deterministic.
getter/setter pair
-
topP
↔ double?
-
An alternative to sampling with temperature, called nucleus sampling,
where the model considers the results of the tokens with
top_p probability mass. So 0.1 means only the tokens comprising
the top 10% probability mass are considered.
getter/setter pair
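OpenAI's API reference generally recommends altering temperature or top_p, but not both. A minimal sketch of the two styles, assuming a generative constructor whose named parameters match the documented property names and that the sampling fields accept the fractional values their descriptions imply:

```dart
// Hypothetical usage sketch; the constructor signature is assumed,
// not taken from this documentation.
final focused = GptCompletionRequest(
  model: 'text-davinci-003',
  prompt: ['Summarize the release notes:'],
  temperature: 0.2, // lower temperature: more focused and deterministic
);

final nucleus = GptCompletionRequest(
  model: 'text-davinci-003',
  prompt: ['Summarize the release notes:'],
  topP: 0.1, // consider only tokens in the top 10% probability mass
);
```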
-
user
↔ String?
-
A unique identifier representing your end-user,
which can help OpenAI to monitor and detect abuse.
getter/setter pair
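Putting the properties together: a hypothetical end-to-end sketch, assuming a generative constructor with named parameters matching the documented property names and a json_serializable-generated toJson method (the usual companion of @JsonSerializable). Because the class is annotated with includeIfNull: false, optional fields left null should be omitted from the serialized payload.

```dart
import 'dart:convert';

final request = GptCompletionRequest(
  model: 'text-davinci-003',
  prompt: ['Say hello'],
  maxTokens: 16,
  n: 1,
  stop: ['\n'],       // stop generating at the first newline
  user: 'user-1234',  // end-user identifier for abuse monitoring
);

// Fields left null (bestOf, echo, suffix, ...) are dropped from the
// JSON map, so only the populated parameters are sent to the API.
final payload = jsonEncode(request.toJson());
```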