
gemini_nano_android #


A Flutter plugin to access Gemini Nano, Google's most efficient on-device AI model, directly through Android AICore.

This package enables offline-first, low-latency, and privacy-centric generative AI features by bridging Flutter with the native Android AICore service (via ML Kit).

✨ Features #

  • 100% Offline Inference: Process data without an internet connection.
  • Privacy Focused: Data never leaves the device.
  • Low Latency: No network round-trips; inference runs on the device's NPU (Pixel Tensor, Snapdragon).
  • Cost Efficient: Save on cloud API tokens by offloading simple tasks to the device.

📱 Supported Devices #

Gemini Nano via AI Core is currently available on select flagship Android devices, including but not limited to:

  • Google: Pixel 9 series, Pixel 10 series (e.g., Pixel 10 Pro XL).
  • Samsung: Galaxy S24 series (and newer).

Note: The device must have the Android AICore app installed and up to date via the Google Play Store.

🛠 Installation #

Add the dependency to your pubspec.yaml:

dependencies:
  gemini_nano_android: ^0.0.1

⚙️ Android Configuration #

Ensure your android/app/build.gradle defines a minimum SDK version of at least API 26; actual Gemini Nano support is checked at runtime via isAvailable():

android {
    defaultConfig {
        minSdkVersion 26
    }
}
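
If your project uses the Kotlin DSL instead (android/app/build.gradle.kts), the equivalent setting is:

android {
    defaultConfig {
        minSdk = 26
    }
}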

🚀 Usage #

1. Basic Text Generation #

The simplest way to generate text.

import 'package:gemini_nano_android/gemini_nano_android.dart';

void generateText() async {
  try {
    // Check if the device supports Gemini Nano
    bool isAvailable = await GeminiNanoAndroid.isAvailable();
    
    if (isAvailable) {
      String result = await GeminiNanoAndroid.generate("Explain quantum physics in 10 words.");
      print("Gemini Nano says: $result");
    } else {
      print("Gemini Nano is not supported or not installed on this device.");
    }
  } catch (e) {
    print("Error generating text: $e");
  }
}

2. Fallback Strategy (Hybrid AI) #

Since on-device models have limitations (limited context size, no multimodal input) and the model might not be downloaded yet, it is best practice to use a fallback strategy.

This example attempts Gemini Nano first (fast, free, private) and falls back to Firebase Vertex AI (cloud) if local inference fails.

import 'package:gemini_nano_android/gemini_nano_android.dart';
// import 'package:firebase_vertexai/firebase_vertexai.dart'; // Uncomment if using Firebase

Future<String> smartProcess(String prompt) async {
  try {
    // 1. Try On-Device (Fast, Free, Private)
    print("Attempting local inference...");
    final result = await GeminiNanoAndroid.generate(prompt);
    return result;
    
  } catch (e) {
    // 2. Fallback to Cloud (More powerful, costs money, requires internet)
    print("Local inference failed ($e). Switching to Cloud...");
    
    // final model = FirebaseVertexAI.instance.generativeModel(model: 'gemini-1.5-flash');
    // final response = await model.generateContent([Content.text(prompt)]);
    // return response.text ?? "Error in cloud generation";
    
    return "Fallback placeholder: Cloud generation would happen here.";
  }
}
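
You can then call smartProcess wherever you would otherwise call a cloud model directly:

void main() async {
  final summary = await smartProcess(
    "Summarize in one sentence: Flutter builds natively compiled apps from a single codebase.",
  );
  print(summary);
}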

3. Structured Output (JSON) for Receipt Processing #

Gemini Nano is great at cleaning up OCR text.

Future<void> processReceiptText(String ocrRawText) async {
  final prompt = """
    Extract data from this receipt text. 
    Return ONLY a JSON object with keys: 'total', 'date', 'merchant'.
    Text: $ocrRawText
  """;

  final jsonResult = await GeminiNanoAndroid.generate(prompt);
  // Parse jsonResult...
}
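
The parsing step (the // Parse jsonResult... placeholder above) could look like the sketch below. tryParseReceipt is a hypothetical helper, not part of this package; since on-device models sometimes wrap JSON in prose or Markdown fences, it extracts the outermost {...} span before decoding:

import 'dart:convert';

/// Hypothetical helper: extracts and decodes the JSON object from a raw
/// model response that may include surrounding prose or code fences.
Map<String, dynamic>? tryParseReceipt(String raw) {
  final start = raw.indexOf('{');
  final end = raw.lastIndexOf('}');
  if (start == -1 || end <= start) return null;
  try {
    return jsonDecode(raw.substring(start, end + 1)) as Map<String, dynamic>;
  } on FormatException {
    return null; // Malformed JSON; consider re-prompting.
  }
}

// Usage:
// final receipt = tryParseReceipt(jsonResult);
// print('Total: ${receipt?['total']}');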

⚠️ Troubleshooting & Limitations #

Model Not Downloaded: The first time an app requests the model, Android AI Core needs to download it (approx. 1GB+). This happens in the background. If you get a Model not found error, ensure the device is on Wi-Fi and charging, then try again later.
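
A pragmatic pattern (a sketch built only on the isAvailable() check shown earlier, with an arbitrary retry schedule) is to poll availability with a delay instead of retrying a failing generate() call in a tight loop:

/// Waits for the on-device model to become available, checking a few
/// times with a delay. Returns false if it never shows up.
Future<bool> waitForModel({
  int attempts = 3,
  Duration delay = const Duration(minutes: 1),
}) async {
  for (var i = 0; i < attempts; i++) {
    if (await GeminiNanoAndroid.isAvailable()) return true;
    await Future.delayed(delay); // Model may still be downloading in AICore.
  }
  return false;
}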

Multimodal Support: Currently, the Android AICore integration for third-party apps is text-to-text only. To process images (such as receipts or scanned PDFs), use google_mlkit_text_recognition to extract the text first, then pass that text to GeminiNanoAndroid.
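
For example, an image-to-summary pipeline could look like this sketch (assumes google_mlkit_text_recognition is added to pubspec.yaml; summarizeReceiptImage is a hypothetical helper):

import 'package:gemini_nano_android/gemini_nano_android.dart';
import 'package:google_mlkit_text_recognition/google_mlkit_text_recognition.dart';

Future<String> summarizeReceiptImage(String imagePath) async {
  // 1. OCR the image locally with ML Kit.
  final recognizer = TextRecognizer(script: TextRecognitionScript.latin);
  final recognized =
      await recognizer.processImage(InputImage.fromFilePath(imagePath));
  await recognizer.close();

  // 2. Feed the extracted text to Gemini Nano.
  return GeminiNanoAndroid.generate(
    "Summarize this receipt in one line: ${recognized.text}",
  );
}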

Context Window: On-device models have smaller context windows than cloud models. Keep prompts concise.
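
A simple input guard (sketch only; the 2,000-character cap is an arbitrary assumption, as this package does not document a token limit):

/// Truncates overly long input so the prompt stays within a conservative,
/// assumed budget. Tune maxChars to your observed device limits.
String capPrompt(String text, {int maxChars = 2000}) =>
    text.length <= maxChars ? text : text.substring(0, maxChars);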

📄 License #

Distributed under the MIT License. See LICENSE for more information.
