countTokens method
Returns the number of tokens resulting from tokenizing the given prompt.
Knowing how many tokens are in a text string can tell you:
- Whether the string is too long for a text model to process.
- How much the API call may cost (usage is usually priced per token).
For message-based models, the exact way tokens are counted from messages may vary from model to model. Treat the result of this method as an estimate, not a guarantee.
promptValue
The prompt to tokenize.
Note: subclasses can override this method to provide a more accurate implementation.
Implementation
Future<int> countTokens(
  final PromptValue promptValue, {
  final Options? options,
}) async {
  // Default implementation: tokenize the prompt and count the resulting tokens.
  final tokens = await tokenize(promptValue, options: options);
  return tokens.length;
}
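A minimal sketch of how the default implementation behaves, using a toy model that tokenizes on whitespace. All names below (`PromptValue`, `WhitespaceModel`) are simplified stand-ins for illustration, not the library's real types; real models use model-specific tokenizers (e.g. BPE), so their counts will differ.

```dart
// Toy stand-in for a prompt value wrapping plain text.
class PromptValue {
  final String text;
  PromptValue(this.text);
}

// Toy model whose countTokens delegates to tokenize, mirroring the
// default implementation shown above.
class WhitespaceModel {
  // Hypothetical tokenizer: splits on runs of whitespace.
  Future<List<String>> tokenize(PromptValue prompt) async =>
      prompt.text.trim().split(RegExp(r'\s+'));

  Future<int> countTokens(PromptValue prompt) async {
    final tokens = await tokenize(prompt);
    return tokens.length;
  }
}

Future<void> main() async {
  final model = WhitespaceModel();
  final count = await model.countTokens(PromptValue('Hello brave new world'));
  print(count); // 4
}
```

Because the base implementation simply counts the output of `tokenize`, a subclass that overrides `tokenize` with a model-accurate tokenizer gets an accurate `countTokens` for free.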