Flutter AI Agent SDK

A high-performance, extensible AI Agent SDK for Flutter with voice interaction, LLM integration, and tool execution capabilities.

✨ Features

  • ๐ŸŽ™๏ธ Voice Interaction: Built-in STT/TTS with native platform support
  • ๐Ÿค– Multiple LLM Providers: OpenAI, Anthropic, and custom providers
  • ๐Ÿ› ๏ธ Tool/Function Calling: Execute custom functions from AI responses
  • ๐Ÿ’พ Conversation Memory: Short-term and long-term memory management
  • ๐Ÿ”„ Streaming Support: Real-time response streaming
  • ๐ŸŽฏ Turn Detection: VAD, push-to-talk, and hybrid modes
  • ๐Ÿ“ฆ Pure Dart: No platform-specific code required
  • โšก High Performance: Optimized for mobile devices

🚀 Installation

Add to your pubspec.yaml:

dependencies:
  flutter_ai_agent_sdk:
    path: ../flutter_ai_agent_sdk

📖 Quick Start

1. Create an LLM Provider

import 'package:flutter_ai_agent_sdk/flutter_ai_agent_sdk.dart';

final llmProvider = OpenAIProvider(
  apiKey: 'your-api-key',
  model: 'gpt-4-turbo-preview',
);

2. Configure Your Agent

final config = AgentConfig(
  name: 'My Assistant',
  instructions: 'You are a helpful AI assistant.',
  llmProvider: llmProvider,
  sttService: NativeSTTService(),
  ttsService: NativeTTSService(),
  turnDetection: TurnDetectionConfig(
    mode: TurnDetectionMode.vad,
    silenceThreshold: Duration(milliseconds: 700),
  ),
  tools: [
    // Add your custom tools here
  ],
);

3. Create and Use the Agent

final agent = VoiceAgent(config: config);
final session = await agent.createSession();

// Send text message
await session.sendMessage('Hello!');

// Start voice interaction
await session.startListening();

// Listen to events
session.events.listen((event) {
  if (event is MessageReceivedEvent) {
    print('Assistant: ${event.message.content}');
  }
});

// Listen to state changes
session.state.listen((status) {
  print('State: ${status.state}');
});

๐Ÿ› ๏ธ Creating Custom Tools

final weatherTool = FunctionTool(
  name: 'get_weather',
  description: 'Get current weather for a location',
  parameters: {
    'type': 'object',
    'properties': {
      'location': {'type': 'string', 'description': 'City name'},
      'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']},
    },
    'required': ['location'],
  },
  function: (args) async {
    final location = args['location'];
    final unit = args['unit'] ?? 'celsius';
    
    // Your weather API call here
    return {
      'temperature': 22,
      'condition': 'sunny',
      'location': location,
      'unit': unit,
    };
  },
);
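A tool defined this way is wired in through the `tools` list of `AgentConfig` shown in the Quick Start. A minimal sketch of the round trip, assuming the session resolves model tool calls against registered tools automatically (the `llmProvider` variable is the one from step 1):

```dart
// Register the tool with the agent (same config shape as Quick Start).
final config = AgentConfig(
  name: 'Weather Assistant',
  instructions: 'Answer weather questions using the get_weather tool.',
  llmProvider: llmProvider,
  tools: [weatherTool],
);

final agent = VoiceAgent(config: config);
final session = await agent.createSession();

// The model can now emit a get_weather call; the tool's function runs
// and its result is fed back into the conversation.
await session.sendMessage('What is the weather in Paris?');
```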

๐Ÿ—๏ธ Architecture

flutter_ai_agent_sdk/
├── lib/
│   ├── src/
│   │   ├── core/           # Core agent logic
│   │   │   ├── agents/     # VoiceAgent, config
│   │   │   ├── sessions/   # Session management
│   │   │   ├── events/     # Event system
│   │   │   └── models/     # Data models
│   │   ├── voice/          # Voice processing
│   │   │   ├── stt/        # Speech-to-text
│   │   │   ├── tts/        # Text-to-speech
│   │   │   ├── vad/        # Voice activity detection
│   │   │   └── audio/      # Audio utilities
│   │   ├── llm/            # LLM integration
│   │   │   ├── providers/  # Provider implementations
│   │   │   ├── chat/       # Chat context
│   │   │   └── streaming/  # Stream processing
│   │   ├── tools/          # Tool execution
│   │   ├── memory/         # Memory management
│   │   └── utils/          # Utilities
│   └── flutter_ai_agent_sdk.dart
├── example/                # Example app
└── test/                   # Tests

🔌 Supported LLM Providers

OpenAI

OpenAIProvider(
  apiKey: 'sk-...',
  model: 'gpt-4-turbo-preview',
)

Anthropic

AnthropicProvider(
  apiKey: 'sk-ant-...',
  model: 'claude-3-sonnet-20240229',
)

Custom Provider

class MyCustomProvider extends LLMProvider {
  @override
  String get name => 'MyProvider';
  
  @override
  Future<LLMResponse> generate({
    required List<Message> messages,
    List<Tool>? tools,
    Map<String, dynamic>? parameters,
  }) async {
    // Your implementation: call your backend and map the reply
    // to an LLMResponse. Placeholder so the class compiles:
    throw UnimplementedError();
  }
  
  @override
  Stream<LLMResponse> generateStream({
    required List<Message> messages,
    List<Tool>? tools,
    Map<String, dynamic>? parameters,
  }) async* {
    // Your streaming implementation
  }
}

🎯 Turn Detection Modes

  • VAD (Voice Activity Detection): Automatic speech detection
  • Push-to-Talk: Manual button control
  • Server VAD: Server-side detection (e.g., OpenAI Realtime)
  • Hybrid: Combined VAD + silence detection
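For example, push-to-talk hands turn boundaries to the app instead of the detector. A sketch, assuming a `TurnDetectionMode.pushToTalk` enum value and a `session.stopListening()` counterpart to `startListening()` (only `TurnDetectionMode.vad`, `silenceThreshold`, and `startListening` appear in the Quick Start; the rest is hypothetical):

```dart
// Push-to-talk: the app, not the VAD, decides when a turn starts and ends.
final config = AgentConfig(
  name: 'PTT Assistant',
  instructions: 'You are a helpful AI assistant.',
  llmProvider: llmProvider,
  turnDetection: TurnDetectionConfig(
    mode: TurnDetectionMode.pushToTalk, // assumed enum value
  ),
);

// Wire the two calls to a button: capture while pressed...
await session.startListening();
// ...and commit the user's turn on release (assumed API).
await session.stopListening();
```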

📱 Platform Support

  • ✅ iOS
  • ✅ Android
  • ✅ Web (limited voice features)
  • ✅ macOS
  • ✅ Windows
  • ✅ Linux

🧪 Testing

flutter test

📄 License

MIT License

๐Ÿค Contributing

Contributions welcome! Please read CONTRIBUTING.md first.

📞 Support