tokencost 0.2.9

Client-side token counting + price estimation for LLM apps and AI agents, ported to Dart from the Python package at https://github.com/AgentOps-AI/tokencost.

tokencost #


Overview #

Client-side token counting + price estimation for LLM apps and AI agents. tokencost helps calculate the USD cost of using major Large Language Model (LLM) APIs by estimating the cost of prompts and completions.

Ported from the Python tokencost package; see AgentOps-AI/tokencost.

Features #

  • LLM Price Tracking: Major LLM providers frequently add new models and update pricing. This package tracks the latest price changes.
  • Token counting: Accurately count prompt tokens before sending OpenAI requests.
  • Easy integration: Get the cost of a prompt or completion with a single function call.

Example usage: #

import 'package:tokencost/tokencost.dart';

void main() {
  const model = 'gpt-3.5-turbo';
  const prompt = [
    {
      'role': 'user',
      'content': 'Hello world',
    },
  ];
  const completion = 'How may I assist you today?';

  final promptCost = calculatePromptCost(prompt, model);
  final completionCost = calculateCompletionCost(completion, model);

  print('$promptCost + $completionCost = ${promptCost + completionCost}');
  // $0.00001350 + $0.00001400 = $0.00002750
}
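
Because the returned costs support addition (as the example output above shows), keeping a running total across a multi-turn session is straightforward. A minimal sketch, assuming only the calculatePromptCost and calculateCompletionCost functions shown above and Dart 3 records; the exchanges transcript is hypothetical:

import 'package:tokencost/tokencost.dart';

void main() {
  const model = 'gpt-3.5-turbo';

  // Hypothetical transcript: (prompt messages, completion text) per turn.
  const exchanges = [
    (
      [
        {'role': 'user', 'content': 'Hello world'},
      ],
      'How may I assist you today?',
    ),
    (
      [
        {'role': 'user', 'content': 'What else can you do?'},
      ],
      'I can count tokens and estimate costs.',
    ),
  ];

  // Sum the per-turn costs with the same `+` used in the example above.
  final total = exchanges
      .map((e) =>
          calculatePromptCost(e.$1, model) +
          calculateCompletionCost(e.$2, model))
      .reduce((a, b) => a + b);

  print('Session cost so far: $total');
}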

Installation 💻 #

โ— In order to start using tokencost you must have the Dart SDK installed on your machine.

Install via dart pub add:

dart pub add tokencost
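
Alternatively, add the dependency to your pubspec.yaml (matching the version shown above) and run dart pub get:

dependencies:
  tokencost: ^0.2.9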

Usage #

Cost estimates #

Calculating the cost of prompts and completions from OpenAI requests:

import 'dart:io';

import 'package:dart_openai/dart_openai.dart';
import 'package:tokencost/tokencost.dart';

Future<void> main() async {
  // dart_openai needs an API key before any request.
  OpenAI.apiKey = Platform.environment['OPENAI_API_KEY']!;

  const model = 'gpt-3.5-turbo';
  const prompt = [
    {'role': 'user', 'content': 'Say this is a test'},
  ];

  final chatCompletion = await OpenAI.instance.chat.create(
    model: model,
    messages: [
      OpenAIChatCompletionChoiceMessageModel(
        role: OpenAIChatMessageRole.user,
        content: [
          OpenAIChatCompletionChoiceMessageContentItemModel.text(
            'Say this is a test',
          ),
        ],
      ),
    ],
  );

  final completion = chatCompletion.choices.first.message.content!.first.text!;
  // This is a test.

  final promptCost = calculatePromptCost(prompt, model);
  final completionCost = calculateCompletionCost(completion, model);
  print('$promptCost + $completionCost = ${promptCost + completionCost}');
  // $0.00001800 + $0.00001000 = $0.00002800

  print('Cost USD: ${promptCost + completionCost}');
  // Cost USD: $0.00002800
}

Calculating cost using string prompts instead of messages:

const promptString = 'Hello world';
const response = 'How may I assist you today?';
const model = 'gpt-3.5-turbo';

final promptCost = calculatePromptCost(promptString, model);
print('Prompt cost: $promptCost');
// Prompt cost: $0.00000300

final completionCost = calculateCompletionCost(response, model);
print('Completion cost: $completionCost');
// Completion cost: $0.00001400

Counting tokens #

import 'package:tokencost/tokencost.dart';

const messagePrompt = [{'role': 'user', 'content': 'Hello world'}];
// Counting tokens in prompts formatted as message lists
print(countMessageTokens(messagePrompt, 'gpt-3.5-turbo'));
// 9

// Alternatively, counting tokens in string prompts
print(countStringTokens('Hello world', 'gpt-3.5-turbo'));
// 2
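
Counting tokens before a request also lets you guard against oversized prompts before paying for an API call. A minimal sketch using countMessageTokens as shown above; maxPromptTokens is an illustrative budget, not a tokencost constant:

import 'package:tokencost/tokencost.dart';

void main() {
  const model = 'gpt-3.5-turbo';
  const maxPromptTokens = 4096; // illustrative budget, not part of tokencost
  const prompt = [
    {'role': 'user', 'content': 'Summarize this document for me...'},
  ];

  final tokens = countMessageTokens(prompt, model);
  if (tokens > maxPromptTokens) {
    print('Prompt is $tokens tokens; trim it before sending.');
  } else {
    print('Prompt fits the budget at $tokens tokens.');
  }
}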

Continuous Integration 🤖 #

tokencost comes with a built-in GitHub Actions workflow powered by Very Good Workflows, but you can also add your preferred CI/CD solution.

Out of the box, on each pull request and push, the CI formats, lints, and tests the code. This ensures the code remains consistent and behaves correctly as you add functionality or make changes. The project uses Very Good Analysis for a strict set of analysis options. Code coverage is enforced using Very Good Workflows.


Running Tests 🧪 #

To run all unit tests:

dart pub global activate coverage 1.2.0
dart test --coverage=coverage
dart pub global run coverage:format_coverage --lcov --in=coverage --out=coverage/lcov.info

To view the generated coverage report you can use lcov:

# Generate Coverage Report
genhtml coverage/lcov.info -o coverage/

# Open Coverage Report
open coverage/index.html

Contributing #

Contributions to TokenCost are welcome! Feel free to create an issue for any bug reports, complaints, or feature suggestions.

License #

TokenCost is released under the MIT License.
