tokenize method

@override
Future<List<int>> tokenize(
  PromptValue promptValue, {
  ChatOllamaOptions? options,
})

Tokenizes the given prompt using tiktoken.

Ollama does not currently provide a tokenizer for the models it supports, so this method uses tiktoken with a default encoding model to approximate the token count. Note that the resulting tokens can differ substantially from the ones the actual Ollama model would produce.

If an encoding model is specified in the encoding field, that encoding is used instead.

  • promptValue: The prompt to tokenize.
  • options: Not used by the current implementation.
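
A minimal usage sketch follows. The import paths and model name are assumptions; adjust them to your setup.

import 'package:langchain/langchain.dart';
import 'package:langchain_ollama/langchain_ollama.dart';

Future<void> main() async {
  // Model name is an assumption; use whichever model you have pulled locally.
  final chatModel = ChatOllama(
    defaultOptions: const ChatOllamaOptions(model: 'llama3'),
  );

  final prompt = PromptValue.string('Why is the sky blue?');

  // Returns tiktoken token ids: an approximation, not Ollama's real tokens.
  final tokens = await chatModel.tokenize(prompt);
  print('Approximate token count: ${tokens.length}');
}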

Implementation

@override
Future<List<int>> tokenize(
  final PromptValue promptValue, {
  final ChatOllamaOptions? options,
}) async {
  // Resolve the tiktoken encoding configured on this model's encoding field.
  final encoding = getEncoding(this.encoding);
  // Render the prompt to a string and encode it into approximate token ids.
  return encoding.encode(promptValue.toString());
}