llama_cpp_dart 0.0.8

Dart binding for llama.cpp --- high-level wrappers for both Dart and Flutter

LLAMA.CPP DART #

A high-performance Dart binding for llama.cpp, enabling advanced text generation capabilities in both Dart and Flutter applications with flexible integration options.

Overview #

This library provides three levels of abstraction for integrating llama.cpp into your Dart/Flutter projects, allowing you to choose the right balance between control and convenience:

  1. Low-Level FFI Bindings: Direct access to llama.cpp functions
  2. High-Level Wrapper: Simplified, object-oriented API
  3. Managed Isolate: Flutter-friendly, non-blocking implementation

Features #

  • Asynchronous text generation using Dart isolates
  • Flexible configuration through customizable parameters
  • Multiple integration approaches to suit different needs
  • Real-time text generation with stream-based output
  • Support for different prompt formats (ChatML, Alpaca)
  • Comprehensive parameter control for model, context, and sampling
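
The prompt format is selected by passing a format object when loading a model. A minimal sketch: `ChatMLFormat` appears in this package's examples below, while `AlpacaFormat` is an assumed name here, by analogy, for the Alpaca-style formatter.

```dart
import 'package:llama_cpp_dart/llama_cpp_dart.dart';

void main() {
  // Sketch only: swap the format object to change how prompts are templated.
  // ChatMLFormat is taken from this package's examples; AlpacaFormat is an
  // assumed name for the Alpaca-style formatter.
  final load = LlamaLoad(
    path: "path/to/model.gguf",
    modelParams: ModelParams(),
    contextParams: ContextParams(),
    samplingParams: SamplerParams(),
    format: ChatMLFormat(), // or AlpacaFormat()
  );
}
```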

Usage Examples #

Low-Level FFI Bindings #

Direct llama.cpp integration with maximum control:

import 'dart:ffi';

import 'package:llama_cpp_dart/src/llama_cpp.dart';

void main() {
  // Load the shared library (use libllama.so on Linux, llama.dll on Windows)
  final lib = llama_cpp(DynamicLibrary.open("libllama.dylib"));
  // Initialize model, context, and sampling parameters
  // See examples/low_level.dart for a complete example
}

High-Level Wrapper #

Simplified API for common use cases:

import 'package:llama_cpp_dart/llama_cpp_dart.dart';

void main() {
  Llama.libraryPath = "libllama.dylib";
  final llama = Llama("path/to/model.gguf");
  
  llama.setPrompt("2 * 2 = ?");
  while (true) {
    var (token, done) = llama.getNext();
    print(token);
    if (done) break;
  }
  llama.dispose();
}

Managed Isolate #

Perfect for Flutter applications:

import 'package:llama_cpp_dart/llama_cpp_dart.dart';

void main() async {
  final loadCommand = LlamaLoad(
    path: "path/to/model.gguf",
    modelParams: ModelParams(),
    contextParams: ContextParams(),
    samplingParams: SamplerParams(),
    format: ChatMLFormat(),
  );

  final llamaParent = LlamaParent(loadCommand);
  await llamaParent.init();

  llamaParent.stream.listen((response) => print(response));
  llamaParent.sendPrompt("2 * 2 = ?");
}

Getting Started #

Prerequisites #

  • Dart SDK (for console applications)
  • Flutter SDK (for Flutter applications)
  • Compiled llama.cpp shared library

Building llama.cpp Library #

  1. Clone the llama.cpp repository:
git clone https://github.com/ggerganov/llama.cpp
  2. Compile it into a shared library:
  • Windows: outputs .dll
  • Linux: outputs .so
  • macOS: outputs .dylib
  3. Place the compiled library in a directory accessible to your project
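
On Linux or macOS, the steps above can be sketched with CMake as follows; the flags and output paths are illustrative, so check the llama.cpp build documentation for the current options on your platform.

```shell
# Clone llama.cpp and build it as a shared library (illustrative sketch;
# exact flags may differ between llama.cpp versions).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DBUILD_SHARED_LIBS=ON
cmake --build build --config Release
# The resulting library (libllama.so / libllama.dylib / llama.dll)
# is produced under the build/ directory.
```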

Installation #

Add to your pubspec.yaml:

dependencies:
  llama_cpp_dart: ^0.0.8

License #

This project is licensed under the MIT License - see the LICENSE.md file for details.

Dependencies #

ffi, flutter, typed_isolate
