
⏳ langchain_tiktoken #

langchain_tiktoken is a BPE tokeniser for use with OpenAI's models.

Splitting text strings into tokens is useful because GPT models see text in the form of tokens. Knowing how many tokens are in a text string can tell you a) whether the string is too long for a text model to process and b) how much an OpenAI API call costs (as usage is priced by token). Different models use different encodings.
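To make the length and cost point concrete, here is a minimal sketch that counts the tokens in a prompt and estimates a price. The $0.03 per 1K input tokens figure is purely illustrative (pricing changes), and the snippet assumes the encodingForModel API shown in the Usage section below.

import 'package:langchain_tiktoken/tiktoken.dart';

void main() {
  final enc = encodingForModel("gpt-4");
  const prompt = "How many tokens does this prompt use?";

  // Number of tokens the model will actually see for this string.
  final numTokens = enc.encode(prompt).length;

  // Illustrative price of $0.03 per 1K input tokens; check current pricing.
  final estimatedCost = numTokens / 1000 * 0.03;

  print("$numTokens tokens, ~\$${estimatedCost.toStringAsFixed(5)}");
}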

Features #

The main Tiktoken class exposes APIs that allow you to process text using tokens, which are common sequences of characters found in text. Some of the things you can do with the tiktoken package are:

  • Encode text into tokens
  • Decode tokens into text
  • Compare different encodings (see the sketch after this list)
  • Count tokens for chat API calls (see the sketch at the end of the Usage section)
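Comparing encodings amounts to tokenising the same string with each one and looking at the results. The sketch below assumes the port ships the standard r50k_base, p50k_base and cl100k_base vocabularies under those names.

import 'package:langchain_tiktoken/tiktoken.dart';

void main() {
  // The same string tokenises to different lengths under different encodings.
  const sample = "antidisestablishmentarianism";
  for (final name in ["r50k_base", "p50k_base", "cl100k_base"]) {
    final enc = getEncoding(name);
    print("$name: ${enc.encode(sample).length} tokens");
  }
}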

Usage #

For more examples, see the /example folder.

Initialise the tokeniser data once before encoding (for example, at app startup):

import 'package:flutter/widgets.dart';
import 'package:langchain_tiktoken/tiktoken.dart';

void main() async {
  // Ensure Flutter bindings exist, then load the tokeniser data.
  WidgetsFlutterBinding.ensureInitialized();
  await TiktokenDataProcessCenter().initData();
}
import 'package:langchain_tiktoken/tiktoken.dart';

void main() {
  // Load the encoding used by a specific model
  final encoding = encodingForModel("gpt-4");

  // Tokenize text
  print(encoding.encode("tiktoken is great!")); // [83, 1609, 5963, 374, 2294, 0]

  // Decode tokens back into text
  print(encoding.decode([83, 1609, 5963, 374, 2294, 0])); // "tiktoken is great!"

  // Alternatively, get the tokeniser by specifying the encoding name:
  final enc = getEncoding("cl100k_base");
  assert(enc.decode(enc.encode("hello world")) == "hello world");
}
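Counting tokens for chat API calls is not shown above. The sketch below uses the widely cited heuristic for gpt-3.5-turbo/gpt-4 style chat formats (roughly 3 tokens of wrapper overhead per message plus 3 tokens to prime the reply). The countChatTokens helper and those constants are illustrative assumptions, not part of this package's API, so treat the result as an estimate rather than an exact bill.

import 'package:langchain_tiktoken/tiktoken.dart';

/// Rough token count for a chat completion request (estimate only).
int countChatTokens(List<Map<String, String>> messages) {
  final enc = encodingForModel("gpt-4");
  var numTokens = 0;
  for (final message in messages) {
    numTokens += 3; // assumed per-message wrapper: <|im_start|>{role}\n...<|im_end|>\n
    for (final value in message.values) {
      numTokens += enc.encode(value).length;
    }
  }
  return numTokens + 3; // assumed priming of the reply with <|im_start|>assistant
}

void main() {
  final messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How many tokens is this conversation?"},
  ];
  print(countChatTokens(messages));
}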

Extending tiktoken #

You may wish to extend Tiktoken to support new encodings. You can do this by constructing a new Tiktoken instance that reuses the parameters of an existing encoding:

import 'package:langchain_tiktoken/tiktoken.dart';

// Start from an existing encoding as the base
final cl100kBase = getEncoding("cl100k_base");

// Instantiate a new encoding and extend the base params
final encoding = Tiktoken(
  name: "cl100k_im",
  patStr: cl100kBase.patStr,
  mergeableRanks: cl100kBase.mergeableRanks,
  specialTokens: {
    ...cl100kBase.specialTokens,
    "<|im_start|>": 100264,
    "<|im_end|>": 100265,
  },
);

Additional information #

This package is a Dart port of the original tiktoken library, which is written in Rust/Python.
