GroqChatSettings constructor
temperature: controls randomness of responses.
maxTokens: the maximum number of tokens that can be generated in the chat completion.
topP: a method of text generation where the model considers only the most probable next tokens that together make up the probability p.
stream: use server-sent events to send the completion in small deltas rather than in a single batch after all processing has finished.
choicesCount: how many chat completion choices to generate for each input message.
stop: a stop sequence is a predefined or user-specified text string that signals the AI to stop generating content.
maxConversationalMemoryLength: conversational memory length; the number of previous messages to include in the model's context. A higher value results in more context-aware responses.
Throws an assertion error if the temperature is not between 0.0 and 2.0, maxTokens is less than or equal to 0, topP is not between 0.0 and 1.0, or choicesCount is less than or equal to 0.
Default values: temperature: 1.0, maxTokens: 8192, topP: 1.0, stream: false, choicesCount: 1, stop: null, maxConversationalMemoryLength: 1024
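A minimal sketch of constructing the settings, assuming the `GroqChatSettings` constructor and parameter names documented above:

```dart
// Hypothetical usage sketch; assumes the GroqChatSettings class with the
// constructor parameters and defaults documented above.
final settings = GroqChatSettings(
  temperature: 0.7, // slightly more predictable than the default 1.0
  maxTokens: 2048, // cap the completion length
  topP: 0.9, // consider the top 90% of likelihood-weighted tokens
  stream: false, // return the completion in a single batch
  choicesCount: 1, // one completion per input message
  stop: null, // no custom stop sequence
  maxConversationalMemoryLength: 1024, // previous messages kept in context
);
```

Values outside the asserted ranges (for example, temperature above 2.0 or a non-positive maxTokens) would trip the constructor's assertions in a debug build.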
Conversational memory length.
The number of previous messages to include in the model's context.
A higher value will result in more context-aware responses.
Example:
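A sketch of tuning the memory length (assumes the `GroqChatSettings` constructor described above; other parameters keep their defaults):

```dart
// Keep more prior messages in context for more context-aware replies.
// Hypothetical usage; constructor and parameter names as documented above.
final longMemory = GroqChatSettings(maxConversationalMemoryLength: 4096);

// A chat created with these settings would include up to 4096 previous
// messages when building the model's context.
```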
The maximum number of tokens that can be generated in the chat completion.
The total length of input tokens and generated tokens is limited by the model's context length.
Default: 8192
A stop sequence is a predefined or user-specified text string that signals an AI to stop generating content,
ensuring its responses remain focused and concise.
Default: null
Controls randomness of responses.
A lower temperature leads to more predictable outputs, while a higher temperature results
in more varied and sometimes more creative outputs.
Default: 1.0
A method of text generation where the model considers only the most probable
next tokens that together make up the cumulative probability p. A value of 0.5 means
half of all likelihood-weighted options are considered.
Default: 1.0
Returns a copy of the current GroqChatSettings object with the new values.
temperature: controls randomness of responses.
maxTokens: the maximum number of tokens that can be generated in the chat completion.
topP: a method of text generation where the model considers only the most probable next tokens that together make up the probability p.
stream: use server-sent events to send the completion in small deltas rather than in a single batch after all processing has finished.
choicesCount: how many chat completion choices to generate for each input message.
stop: a stop sequence is a predefined or user-specified text string that signals the AI to stop generating content.
maxConversationalMemoryLength: conversational memory length; the number of previous messages to include in the model's context. A higher value results in more context-aware responses.
Example:
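A sketch of deriving new settings via copyWith (assumes it returns a new `GroqChatSettings` in which only the supplied values are changed):

```dart
// Start from the defaults, then derive a lower-temperature streaming variant.
// Hypothetical usage; copyWith behavior as documented above.
final base = GroqChatSettings(); // all default values
final streaming = base.copyWith(stream: true, temperature: 0.2);

// base is unchanged; streaming shares every other value with base.
```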