
An implementation of MediaPipe generative AI-based Tasks.

MediaPipe GenAI for Flutter #


A Flutter plugin to use the MediaPipe GenAI API, which contains multiple generative AI-based MediaPipe tasks.

To learn more about MediaPipe, please visit the MediaPipe website.

Getting Started #

To get started with MediaPipe, please see the documentation.

Supported Tasks #

| Task      | Android | iOS | Web | Windows | macOS | Linux |
| --------- | ------- | --- | --- | ------- | ----- | ----- |
| Inference | ✅      | ✅  |     |         | ✅    |       |

Selecting a device #

On-device inference is a demanding task, and as such is recommended to run on a Pixel 7 or newer (or other comparable Android devices), or an iPhone 13 or newer on iOS.

Mobile emulators are not supported.

For desktop, macOS is supported; Windows and Linux support is coming soon.

Usage #

To get started with this plugin, you must first be on Flutter's master channel. Second, you will need to opt in to the native-assets experiment by passing the --enable-experiment=native-assets flag whenever you run commands with the $ dart command-line tool.

To enable this globally in Flutter, run:

$ flutter config --enable-native-assets

To disable this globally in Flutter, run:

$ flutter config --no-enable-native-assets
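
If you prefer not to enable the experiment globally, the same flag can be passed per command when invoking the Dart tool directly, for example:

$ dart --enable-experiment=native-assets run
$ dart --enable-experiment=native-assets test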

Add dependencies #

Add mediapipe_genai and mediapipe_core to your pubspec.yaml file:

dependencies:
  flutter:
    sdk: flutter
  mediapipe_core: ^0.0.1 # or the latest version on pub.dev
  mediapipe_genai: ^0.0.1
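
Alternatively, add both packages from the command line:

$ flutter pub add mediapipe_core mediapipe_genai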

Add tflite models #

Unlike other MediaPipe task flavors (text, vision, and audio), generative AI models must be downloaded at runtime from a URL hosted by the developer. To acquire these models, you must:

1. Create a Kaggle account and accept the Terms of Service.
2. Download whichever models you want to use in your app.
3. Self-host those models at a location of your choosing.
4. Configure your app to download them at runtime, as sketched below.
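
For illustration, here is a minimal sketch of a runtime download helper. The URL and function name are hypothetical; it assumes you use package:http (already a dependency of this package) and package:path_provider (not a dependency of this package; add it yourself) to find a writable directory:

import 'dart:io';

import 'package:http/http.dart' as http;
import 'package:path/path.dart' as p;
import 'package:path_provider/path_provider.dart';

/// Hypothetical helper: downloads the model once, caches it on disk,
/// and returns the local file path.
Future<String> fetchModelFile() async {
  // Replace with the URL where you self-host the model.
  const modelUrl = 'https://example.com/models/your-model.bin';
  final directory = await getApplicationDocumentsDirectory();
  final file = File(p.join(directory.path, 'your-model.bin'));
  if (!file.existsSync()) {
    // Note: these models are large; in production, prefer a streamed
    // download (e.g. http.Client().send) over buffering in memory.
    final response = await http.get(Uri.parse(modelUrl));
    await file.writeAsBytes(response.bodyBytes);
  }
  return file.path;
}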

See the example directory for a working implementation and directions on how to get MediaPipe's inference task working.

CPU vs GPU models #

Inference tasks can either run on the CPU or GPU, and each model is compiled once for each strategy. When you choose which model(s) to use from Kaggle, note their CPU vs GPU variants and be sure to invoke the appropriate options constructor.

Initialize your engine #

Inference example:

import 'package:mediapipe_genai/mediapipe_genai.dart';

// Location where you downloaded the file at runtime, or
// placed the model yourself in advance (using `adb push`
// or similar)
final String modelPath = getModelPath();

// Select the CPU or GPU runtime, based on your model.
// See the example for suitable values to pass to the rest
// of the `LlmInferenceOptions` class's parameters.
bool isGpu = yourValueHere;
final options = switch (isGpu) {
  true => LlmInferenceOptions.gpu(
    modelPath: modelPath,
    // ...remaining GPU options
  ),
  false => LlmInferenceOptions.cpu(
    modelPath: modelPath,
    // ...remaining CPU options
  ),
};

// Create an inference engine
final engine = LlmInferenceEngine(options);

// Stream results from the engine
final Stream<String> responseStream = engine.generateResponse('Hello, world!');
await for (final String responseChunk in responseStream) {
  print('the LLM said: $responseChunk');
}
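
If you want the complete response rather than incremental chunks, you can collect the stream with Dart's built-in Stream.join, for example:

// Waits for the stream to finish and concatenates all chunks.
final String fullResponse =
    await engine.generateResponse('Hello, world!').join();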

Issues and feedback #

Please file Flutter-MediaPipe-specific issues, bugs, or feature requests in our issue tracker.

Issues that are specific to Flutter can be filed in the Flutter issue tracker.

To contribute a change to this plugin, please review our contribution guide and open a pull request.
