flutter_gemma library

Classes

CancelToken
Token for cancelling model downloads
CorruptionDetectionResult
Result of corruption detection
DiagnosticReport
Diagnostic information for troubleshooting
DocumentWithEmbedding
Document with embedding for HNSW rebuild
DownloadProgress
Progress information for model downloads
EmbeddingInstallation
Result of embedding model installation
EmbeddingInstallationBuilder
Fluent builder for embedding model installation
EmbeddingModel
Represents an embedding model instance.
EmbeddingModelSpec
Specification for embedding models (model.bin + tokenizer.json)
ErrorHandlingResult
Result of error handling
EstimatedDimensions
Estimated image dimensions
FlutterGemma
Modern API facade for Flutter Gemma
FlutterGemmaDesktop
Desktop implementation of FlutterGemma plugin
FlutterGemmaPlugin
Interface for the FlutterGemma plugin.
FunctionCallParser
Facade for backward compatibility. Delegates to model-specific FunctionCallFormat implementations.
FunctionCallResponse
A single function call in a model response.
Gemma3Specs
Gemma 3 SigLIP vision encoder specifications
GeneralVisionSpecs
General vision encoder specifications
ImageErrorHandler
Comprehensive error handling and debugging utilities for AI image processing to prevent corruption that causes repeating text patterns in model responses.
ImageProcessor
Comprehensive image processing utilities to prevent AI image corruption and ensure proper vision encoder compatibility.
ImageTokenizer
Handles proper image tokenization for multimodal AI models to prevent "Prompt contained 0 image tokens but received 1 images" errors and corruption that causes repeating text patterns.
InferenceChat
InferenceInstallation
Result of inference model installation
InferenceInstallationBuilder
Fluent builder for inference model installation
InferenceModel
Represents an LLM model instance.
InferenceModelSession
Session managing response generation from the model.
InferenceModelSpec
Specification for inference models (main model + optional LoRA)
LegacyPreferencesMigrator
Migrates legacy API preference keys to the modern ModelRepository
Message
MigrationResult
Result of migration operation
ModelFile
Represents a single file that belongs to a model
ModelFileManager
ModelResponse
Base interface for model responses from InferenceChat. Can be either TextResponse, FunctionCallResponse, or ThinkingResponse.
ModelSpec
Base specification for any model (inference or embedding)
MultimodalImageHandler
Main integration class for handling multimodal image processing in Flutter Gemma to prevent AI image corruption and repeating text pattern issues.
MultimodalImageResult
Result of multimodal image processing
OrphanedFileInfo
Information about a potentially orphaned file
ParallelFunctionCallResponse
Multiple function calls in a single model response.
PlatformService
ProcessedImage
Represents a processed image ready for AI model consumption
ResponseValidationResult
Result of response validation
RetrievalResult
StopTokenFilter
Filters stop tokens from model response stream. For .litertlm on iOS, MediaPipe doesn't handle <end_of_turn> — this filter detects and terminates the stream at the stop token, with buffering for partial tag matches.
StorageStats
Storage statistics
TextResponse
Text token during streaming
ThinkingResponse
Thinking process content from the model
Tool
ValidationResult
Result of image validation
VectorStoreStats
VisionEncoderValidator
Validates images for compatibility with AI vision encoders to prevent corruption that causes models to interpret images as repeating text patterns.
VisionSpecs
Base class for vision encoder specifications
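As a rough illustration of how the core classes above relate, a typical flow creates an InferenceModel, opens an InferenceChat, and streams ModelResponse objects. This is a hypothetical sketch: method and parameter names beyond the class names listed above are assumptions, not verified against the package API.

```dart
// Hypothetical sketch — only the class/enum names come from this page;
// the method names and signatures are assumptions.
final gemma = FlutterGemmaPlugin.instance;

// InferenceModel represents an LLM instance; InferenceChat drives the
// conversation and streams ModelResponse objects.
final InferenceModel model = await gemma.createModel(
  modelType: ModelType.gemmaIt,           // ModelType enum (see Enums)
  preferredBackend: PreferredBackend.gpu, // hardware backend for inference
);
final InferenceChat chat = await model.createChat();

await chat.addQueryChunk(Message.text(text: 'Hello!', isUser: true));

// A ModelResponse is a TextResponse, FunctionCallResponse, or
// ThinkingResponse (see ModelResponse above).
await for (final ModelResponse response in chat.generateChatResponseAsync()) {
  if (response is TextResponse) {
    // Streamed text token.
  } else if (response is FunctionCallResponse) {
    // The model requested a tool call — dispatch to a Tool handler.
  } else if (response is ThinkingResponse) {
    // The model's thinking/reasoning content.
  }
}
```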

Enums

CorruptionAction
Actions to take when corruption is detected
ErrorType
Types of errors that can occur in image processing
MessageType
ModelFileType
ModelManagementType
Base enumeration for different model management types
ModelReplacePolicy
Policy for handling old models when switching to new ones
ModelType
PreferredBackend
Hardware backend for model inference.
ResponseAction
Actions to take for corrupted responses
TaskType
Task type for embedding generation, following Google RAG SDK convention.
ToolChoice
Controls whether the model should call tools.
VisionEncoderType
Vision encoder types with their specifications
WebStorageMode
Storage mode for web platform models

Mixins

RawSdkResponseSession
Mixin for sessions that surface the SDK's structured raw JSON response (LiteRT-LM Gemma 4 path with tool_calls). Allows InferenceChat to read the structured tool calls without a hard dependency on a concrete session type, and lets non-FFI sessions opt out by simply not implementing this mixin.
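The opt-in pattern described above can be sketched as a type check: InferenceChat tests for the mixin rather than depending on a concrete session type. The member name `rawSdkResponse` is a hypothetical placeholder.

```dart
// Hypothetical sketch of the opt-in pattern described above; the
// rawSdkResponse member is an assumption, not a documented API.
void handleSession(InferenceModelSession session) {
  if (session is RawSdkResponseSession) {
    // FFI session surfacing the SDK's structured raw JSON response
    // (LiteRT-LM path with tool_calls) — read the structured data here.
  } else {
    // Non-FFI session opted out by not mixing this in; fall back to
    // parsing the plain text stream.
  }
}
```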

Constants

defaultMaxFunctionBufferLength → const int
Default maximum length for function call buffer before flushing as text. Must accommodate verbose formats (DeepSeek tags, parallel calls).
supportedLoraRanks → const List<int>

Properties

isDesktop → bool
Whether the current platform is desktop
no setter

Exceptions / Errors

DownloadCancelledException
Exception thrown when a download is cancelled
ImageProcessingException
Exception thrown when image processing fails
ImageTokenizationException
Exception thrown when image tokenization fails
MigrationException
Exception thrown during migration
ModelStorageException
Exception thrown when model storage operations fail
VisionEncoderValidationException
Exception thrown when validation fails
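The exception types above suggest the following error-handling shape around model installation. This is a hypothetical sketch: `installModel` stands in for whatever installation routine the package exposes, and only the exception types come from this page.

```dart
// Hypothetical sketch — installModel() is a placeholder; only the
// exception types listed above are from the package.
try {
  await installModel();
} on DownloadCancelledException {
  // The download was cancelled via a CancelToken — usually not an error.
} on ModelStorageException catch (e) {
  // A model storage operation failed (e.g. disk full, missing file).
  rethrow;
} on MigrationException catch (e) {
  // Legacy-preference migration failed; see MigrationResult for details.
  rethrow;
}
```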