flutter_voice_commands

A Flutter plugin for voice command recognition and processing across multiple platforms.

Features

  • 🎤 Voice Recognition: Convert speech to text with high accuracy
  • 🔧 Command Processing: Built-in command parsing and processing
  • 📱 Cross-Platform: Support for Android, iOS, Web, Windows, macOS, and Linux
  • 🔐 Permission Handling: Automatic microphone permission management
  • ⚡ Real-time Processing: Stream-based voice input processing
  • 🎯 Customizable: Easy to configure and extend for your needs

Getting Started

Installation

Add this to your package's pubspec.yaml file:

dependencies:
  flutter_voice_commands: ^0.0.1
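
Then fetch the dependency:

flutter pub get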

Usage

Basic Voice Recognition

import 'package:flutter/material.dart';
import 'package:flutter_voice_commands/flutter_voice_commands.dart';

class VoiceRecognitionExample extends StatefulWidget {
  @override
  _VoiceRecognitionExampleState createState() => _VoiceRecognitionExampleState();
}

class _VoiceRecognitionExampleState extends State<VoiceRecognitionExample> {
  final FlutterVoiceCommands _voiceCommands = FlutterVoiceCommands();
  bool _isListening = false;
  String _recognizedText = '';

  @override
  void initState() {
    super.initState();
    _initializeVoiceCommands();
  }

  Future<void> _initializeVoiceCommands() async {
    await _voiceCommands.initialize();
  }

  Future<void> _startListening() async {
    if (await _voiceCommands.hasPermission()) {
      setState(() => _isListening = true);
      
      _voiceCommands.listen(
        onResult: (text) {
          setState(() => _recognizedText = text);
        },
        onError: (error) {
          print('Error: $error');
        },
      );
    } else {
      // Only request permission here; the user can tap the button again
      // to start listening once permission has been granted.
      await _voiceCommands.requestPermission();
    }
  }

  Future<void> _stopListening() async {
    await _voiceCommands.stop();
    setState(() => _isListening = false);
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text('Voice Commands Example')),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            Text(
              'Recognized Text:',
              style: TextStyle(fontSize: 18, fontWeight: FontWeight.bold),
            ),
            SizedBox(height: 20),
            Container(
              padding: EdgeInsets.all(16),
              decoration: BoxDecoration(
                border: Border.all(color: Colors.grey),
                borderRadius: BorderRadius.circular(8),
              ),
              child: Text(
                _recognizedText.isEmpty ? 'No text recognized yet' : _recognizedText,
                style: TextStyle(fontSize: 16),
              ),
            ),
            SizedBox(height: 30),
            ElevatedButton(
              onPressed: _isListening ? _stopListening : _startListening,
              child: Text(_isListening ? 'Stop Listening' : 'Start Listening'),
            ),
          ],
        ),
      ),
    );
  }

  @override
  void dispose() {
    _voiceCommands.dispose();
    super.dispose();
  }
}

Command Processing

// Define custom commands
final commands = [
  VoiceCommand(
    pattern: 'open (\\w+)',
    action: (matches) => print('Opening: ${matches[1]}'),
  ),
  VoiceCommand(
    pattern: 'close (\\w+)',
    action: (matches) => print('Closing: ${matches[1]}'),
  ),
];

// Register commands
_voiceCommands.registerCommands(commands);

// Start listening with command processing
_voiceCommands.listenWithCommands(
  onCommand: (command, matches) {
    command.action(matches);
  },
  onUnknown: (text) {
    print('Unknown command: $text');
  },
);
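
The command patterns above are written as regular expressions with capture groups. Assuming they behave like standard Dart RegExp (an assumption, not something the plugin docs spell out), matches[0] would be the full recognized phrase and matches[1] the first capture group. A quick standalone illustration:

void main() {
  // Illustration only: 'open (\w+)' applied to the phrase "open settings".
  final match = RegExp('open (\\w+)').firstMatch('open settings');
  print(match?.group(0)); // "open settings" (full match)
  print(match?.group(1)); // "settings"      (first capture group)
}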

Platform Support

| Platform | Status | Notes |
|----------|--------|-------|
| Android | ✅ Supported | Requires microphone permission |
| iOS | ✅ Supported | Requires microphone permission |
| Web | ✅ Supported | Uses Web Speech API |
| Windows | ✅ Supported | Requires microphone access |
| macOS | ✅ Supported | Requires microphone permission |
| Linux | ✅ Supported | Requires PulseAudio |

Permissions

Android

Add the following permission to your android/app/src/main/AndroidManifest.xml:

<uses-permission android:name="android.permission.RECORD_AUDIO" />

iOS

Add the following permissions to your ios/Runner/Info.plist:

<key>NSMicrophoneUsageDescription</key>
<string>This app needs access to microphone for voice commands.</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>This app needs access to speech recognition for voice commands.</string>
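
macOS

If you also target macOS, you will likely need the microphone entitlement in addition to a usage description. Assuming the standard Flutter macOS runner layout, add the following to macos/Runner/DebugProfile.entitlements and macos/Runner/Release.entitlements:

<key>com.apple.security.device.audio-input</key>
<true/>

and a usage description to macos/Runner/Info.plist:

<key>NSMicrophoneUsageDescription</key>
<string>This app needs access to microphone for voice commands.</string>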

API Reference

FlutterVoiceCommands

The main class for voice command recognition.

Methods

  • initialize(): Initialize the voice recognition system
  • hasPermission(): Check if microphone permission is granted
  • requestPermission(): Request microphone permission
  • listen(): Start listening for voice input
  • listenWithCommands(): Start listening with command processing
  • stop(): Stop listening
  • dispose(): Clean up resources

Properties

  • isListening: Whether the system is currently listening
  • isAvailable: Whether voice recognition is available on the device
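
A minimal sketch of a typical lifecycle combining the methods and properties above (treat the exact signatures as illustrative; they are inferred from the examples in this README):

import 'package:flutter_voice_commands/flutter_voice_commands.dart';

final voiceCommands = FlutterVoiceCommands();

Future<void> runVoiceSession() async {
  await voiceCommands.initialize();

  // Skip if speech recognition is not available on this device.
  if (!voiceCommands.isAvailable) return;

  // Make sure the microphone permission has been granted.
  if (!await voiceCommands.hasPermission()) {
    await voiceCommands.requestPermission();
  }

  voiceCommands.listen(
    onResult: (text) => print('Heard: $text'),
    onError: (error) => print('Error: $error'),
  );

  // ... later, when the session is over:
  await voiceCommands.stop();
  voiceCommands.dispose();
}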

VoiceCommand

Represents a voice command pattern and action.

class VoiceCommand {
  final String pattern;
  final Function(List<String>) action;
  
  VoiceCommand({
    required this.pattern,
    required this.action,
  });
}

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Development Setup

  1. Clone the repository
  2. Run flutter pub get
  3. Run flutter analyze to check for issues
  4. Run flutter test to run tests
  5. Make your changes and ensure tests pass

Testing

Run the test suite:

flutter test

Run with coverage:

flutter test --coverage

License

This project is licensed under the MIT License - see the LICENSE file for details.

Issues and Feedback

Please file issues and feature requests on the GitHub repository.

Changelog

See CHANGELOG.md for a list of changes and version history.