llamadart 0.1.0

A Dart/Flutter plugin for llama.cpp - run LLM inference on any platform using GGUF models
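
To try the release described below, add the dependency line from the top of this page to your app's pubspec.yaml:

```yaml
dependencies:
  llamadart: ^0.1.0
```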

0.1.0 #

  • WASM Support: Full support for running Flutter apps and LLM inference as WASM on the web.
  • Performance Improvements: Optimized memory usage and model loading times on the web.
  • Enhanced Web Interop: Improved wllama integration with better error handling and progress reporting.
  • Bug Fixes: Resolved minor UI issues on mobile and web layouts.

0.0.1 #

  • Initial release.
  • Supported platforms: iOS, macOS, Android, Linux, Windows, Web.
  • Features:
    • Text generation with the llama.cpp backend (see the sketch after this list).
    • GGUF model support.
    • Hardware acceleration (Metal, Vulkan, CUDA).
    • Flutter Chat Example.
    • CLI Basic Example.
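
As a rough illustration of the text-generation feature above, here is a minimal sketch of what a CLI-style call could look like. The `Llama` class and the `load`, `generate`, and `dispose` names are hypothetical placeholders, not the package's confirmed API; consult the API reference linked below for the actual surface.

```dart
import 'dart:io';

// The package name is real, but its exports are not shown on this page.
import 'package:llamadart/llamadart.dart';

Future<void> main() async {
  // Hypothetical API: load a local GGUF model file.
  final llama = await Llama.load('models/model-q4_k_m.gguf');

  // Hypothetical API: stream generated tokens for a prompt.
  await for (final token in llama.generate('Why is the sky blue?')) {
    stdout.write(token);
  }

  llama.dispose(); // release native llama.cpp resources
}
```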

Publisher

leehack.com (verified publisher)


Repository (GitHub)
View/report issues
Contributing

Topics

#llama #llm #ai #inference #gguf

Documentation

API reference

License

MIT

Dependencies

ffi, flutter, flutter_web_plugins, http, meta, path, web
