
FCllama #

A Flutter binding for llama.cpp, communicating over platform channels.

llama.cpp: Inference of LLaMA model in pure C/C++

Installation #

Flutter #

flutter pub add fcllama

iOS

Please run pod install or pod update in your iOS project.

Android

You need to install CMake 3.31.0, Android SDK 35, and NDK 28.0.12433566. No additional steps are required.

OpenHarmonyOS/HarmonyOS #

This is the fastest and recommended way to add HLlama to your project.

ohpm install hllama

Or, you can add it to your project manually.

  • Add the following lines to the oh-package.json5 of your app module.
"dependencies": {
  "hllama": "^0.0.1",
}
  • Then run
  ohpm install

How to use #

Flutter #

  1. Initializing Llama
import 'package:fcllama/fllama.dart';

FCllama.instance()?.initContext("model path", emitLoadProgress: true)
        .then((context) {
  modelContextId = context?["contextId"].toString() ?? "";
  if (modelContextId.isNotEmpty) {
    // A contextId greater than 0 means initialization succeeded.
  }
});
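
If you prefer async/await, the same call can be wrapped in a small helper. This is a minimal sketch; initLlama and its error handling are ordinary Dart added for illustration, and only initContext comes from fcllama:

import 'package:fcllama/fllama.dart';

String modelContextId = "";

Future<bool> initLlama(String modelPath) async {
  try {
    final context = await FCllama.instance()
        ?.initContext(modelPath, emitLoadProgress: true);
    modelContextId = context?["contextId"].toString() ?? "";
    // A non-empty id means the native context was created.
    return modelContextId.isNotEmpty;
  } catch (e) {
    // e.g. missing/corrupt model file or unsupported platform
    return false;
  }
}
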
  2. Bench model on device
import 'package:fcllama/fllama.dart';

FCllama.instance()?.bench(double.parse(modelContextId), pp: 8, tg: 4, pl: 2, nr: 1).then((res){
  Get.log("[FCllama] Bench Res $res");
});
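
The bench parameters are not documented here; they appear to mirror llama.cpp's llama-bench tool, so treat the meanings in this sketch as an assumption rather than documented FCllama behavior:

import 'package:fcllama/fllama.dart';

Future<void> benchModel() async {
  // Assumed, following llama-bench conventions:
  //   pp: prompt tokens to process   tg: tokens to generate
  //   pl: parallel sequences         nr: number of repetitions
  final res = await FCllama.instance()
      ?.bench(double.parse(modelContextId), pp: 8, tg: 4, pl: 2, nr: 1);
  Get.log("[FCllama] Bench Res $res");
}
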
  3. Tokenize and Detokenize
import 'package:fcllama/fllama.dart';

FCllama.instance()?.tokenize(double.parse(modelContextId), text: "What can you do?").then((res){
  Get.log("[FCllama] Tokenize Res $res");
  FCllama.instance()?.detokenize(double.parse(modelContextId), tokens: res?['tokens']).then((res){
    Get.log("[FCllama] Detokenize Res $res");
  });
});
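
The same round trip reads more naturally with async/await. A sketch that relies only on the 'tokens' key shown above; roundTrip is a name introduced here for illustration:

import 'package:fcllama/fllama.dart';

Future<void> roundTrip(String text) async {
  final id = double.parse(modelContextId);
  final tok = await FCllama.instance()?.tokenize(id, text: text);
  Get.log("[FCllama] tokens: ${tok?['tokens']}");
  // Feed the token list straight back to reconstruct the text.
  final detok =
      await FCllama.instance()?.detokenize(id, tokens: tok?['tokens']);
  Get.log("[FCllama] detokenized: $detok");
}
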
  4. Monitor the token stream
import 'package:fcllama/fllama.dart';

FCllama.instance()?.onTokenStream?.listen((data) {
  if (data['function'] == "loadProgress") {
    Get.log("[FCllama] loadProgress=${data['result']}");
  } else if (data['function'] == "completion") {
    Get.log("[FCllama] completion=${data['result']}");
    final tempRes = data["result"]["token"];
    // tempRes is the newly generated token text
  }
});
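
In practice you usually accumulate the streamed tokens into the full answer. This sketch relies only on the event shape shown above; the buffer and the onProgress callback are plain Dart, not part of the plugin API:

import 'package:fcllama/fllama.dart';

final answer = StringBuffer();

void listenForTokens({void Function(Object? progress)? onProgress}) {
  FCllama.instance()?.onTokenStream?.listen((data) {
    if (data['function'] == "loadProgress") {
      onProgress?.call(data['result']); // emitted while the model loads
    } else if (data['function'] == "completion") {
      answer.write(data["result"]["token"]); // append each streamed token
    }
  });
}
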
  5. Stop a completion or release contexts
import 'package:fcllama/fllama.dart';

FCllama.instance()?.stopCompletion(contextId: double.parse(modelContextId)); // stop one completion
FCllama.instance()?.releaseContext(double.parse(modelContextId)); // release one context
FCllama.instance()?.releaseAllContexts(); // release all contexts
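
A common place for these calls is a State object's dispose method, so the native context is freed when the screen goes away. ChatScreen and its State are hypothetical names for this sketch; only the FCllama calls come from the plugin:

import 'package:fcllama/fllama.dart';
import 'package:flutter/widgets.dart';

class ChatScreen extends StatefulWidget {
  const ChatScreen({super.key});
  @override
  State<ChatScreen> createState() => ChatScreenState();
}

class ChatScreenState extends State<ChatScreen> {
  @override
  void dispose() {
    final id = double.parse(modelContextId);
    FCllama.instance()?.stopCompletion(contextId: id); // abort any running generation
    FCllama.instance()?.releaseContext(id);            // free the native context
    super.dispose();
  }

  @override
  Widget build(BuildContext context) => const SizedBox.shrink();
}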

OpenHarmonyOS/HarmonyOS #

For OpenHarmonyOS/HarmonyOS usage, see the example file in the repository.

Support System #

| System | Min SDK | Arch | Other |
| --- | --- | --- | --- |
| Android | 23 | arm64-v8a, x86_64, armeabi-v7a | Supports additional optimizations for certain CPUs |
| iOS | 14 | arm64 | Supports Metal |
| OpenHarmonyOS/HarmonyOS | 12 | arm64-v8a, x86_64 | No additional optimizations for certain CPUs |

Obtain the model #

You can search Hugging Face for available models (keyword: GGUF).

To get a GGUF model or quantize one manually, see the Prepare and Quantize section in llama.cpp.
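
As a sketch of putting the pieces together: initialize from a GGUF file you have bundled or downloaded into the app's documents directory. path_provider and the file name are assumptions for illustration, not requirements of fcllama:

import 'dart:io';
import 'package:fcllama/fllama.dart';
import 'package:path_provider/path_provider.dart'; // assumed helper dependency

Future<void> loadLocalModel() async {
  final dir = await getApplicationDocumentsDirectory();
  final modelPath = "${dir.path}/model-q4_k_m.gguf"; // hypothetical file name
  if (!File(modelPath).existsSync()) return; // download or bundle it first
  final context = await FCllama.instance()
      ?.initContext(modelPath, emitLoadProgress: true);
  modelContextId = context?["contextId"].toString() ?? "";
}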

NOTE #

iOS:

  • Enabling the Extended Virtual Addressing capability is recommended for your iOS project.
  • Metal:
    • Our testing shows some devices cannot use Metal ('params.n_gpu_layers > 0') because llama.cpp uses SIMD-scoped operations; you can check whether your device is supported in the Metal feature set tables. An Apple7 GPU is the minimum requirement.
    • Metal is also not supported in the iOS simulator due to this limitation: more than 14 constant buffers are used.

Android:

  • Currently only the arm64-v8a / x86_64 / armeabi-v7a platforms are supported, which means you can't initialize a context on other platforms. The 64-bit platforms are recommended because they can allocate more memory for the model.
  • No GPU backend has been integrated yet.

License #

MIT
