# flutter_davoice 0.0.1

Flutter plugin for Davoice speech recognition, text-to-speech, audio playback, license, and native voice framework APIs.
This package is the Flutter equivalent of the React Native bridge in
`../TTSNPM/speech/index.ts`. The native iOS and Android libraries are vendored
inside this plugin so a Flutter app can call the same Davoice STT/TTS APIs from
Dart.
## Native Libraries

The package expects the native artifacts in these locations:

- iOS framework: `ios/Frameworks/DavoiceTTS.xcframework`
- iOS static libraries: `ios/Libraries/libphonemes.a`, `ios/Libraries/libucd.a`
- iOS phoneme module map/header: `ios/Libraries/phonemes/`
- Android AAR Maven layout: `android/libs/com/davoice/tts/1.0.0/`
Run `./update_lib_dirs.sh` after replacing the Android AAR or POM. It refreshes
the Maven checksum files.
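The checksum refresh itself is mechanical. As an illustration only (a sketch, not the contents of `update_lib_dirs.sh`), regenerating the `.sha1`/`.md5` sidecar files that Gradle validates for a local Maven repository looks like:

```shell
# Hypothetical sketch: regenerate Gradle-validated checksum sidecars for
# every artifact in a local Maven directory. Uses a temp dir for demo.
set -e
dir=$(mktemp -d)
printf 'dummy aar bytes' > "$dir/tts-1.0.0.aar"
printf 'dummy pom'       > "$dir/tts-1.0.0.pom"

for f in "$dir"/*.aar "$dir"/*.pom; do
  sha1sum "$f" | awk '{print $1}' > "$f.sha1"  # 40-hex-char SHA-1
  md5sum  "$f" | awk '{print $1}' > "$f.md5"   # 32-hex-char MD5
done

ls "$dir" | wc -l   # 2 artifacts + 4 checksum sidecars = 6 files
```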
## Installation

Add the package to your Flutter app. For local development:

```yaml
dependencies:
  flutter_davoice:
    path: ../FlutterPubDavoicePrivate
```
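For a published release, the usual pub.dev dependency form applies instead (version as shown at the top of this page):

```yaml
dependencies:
  flutter_davoice: ^0.0.1
```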
## Android

The Davoice Android AAR is stored as a local Maven artifact. Add the plugin's
`android/libs` directory to the host app's repositories.

For an app next to this repo, `android/build.gradle.kts` can include:

```kotlin
allprojects {
    repositories {
        google()
        mavenCentral()
        maven {
            url = uri(rootProject.file("../../FlutterPubDavoicePrivate/android/libs"))
        }
    }
}
```
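If the host app declares repositories centrally in `settings.gradle.kts` (the default in newer Android templates), the same local Maven path can be registered there instead. A sketch, assuming the standard `dependencyResolutionManagement` block:

```kotlin
// Sketch: register the plugin's local Maven layout centrally.
// The relative path assumes the same project layout as the
// allprojects example above.
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        maven {
            url = uri("../../FlutterPubDavoicePrivate/android/libs")
        }
    }
}
```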
The plugin requires Android minSdk 29 or newer.

Add app permissions to `AndroidManifest.xml` as needed:

```xml
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.INTERNET" />
```
## iOS

The podspec links the vendored framework and static libraries automatically. Add the usage descriptions to the host app's `Info.plist`:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone for voice features.</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>This app uses speech recognition to convert speech to text.</string>
```
The plugin supports iOS 13.0 or newer.
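The iOS permission helpers listed under the API surface can gate voice features at runtime. A minimal sketch, assuming each helper returns a `Future<bool>`:

```dart
import 'package:flutter_davoice/flutter_davoice.dart';

// Sketch: ensure both iOS permissions before starting recognition.
// Assumption: the has*/request* helpers return Future<bool>.
Future<bool> ensureIOSVoicePermissions(FlutterDavoice speech) async {
  var mic = await speech.hasIOSMicPermissions();
  if (!mic) mic = await speech.requestIOSMicPermissions();

  var recog = await speech.hasIOSSpeechRecognitionPermissions();
  if (!recog) recog = await speech.requestIOSSpeechRecognitionPermissions();

  return mic && recog;
}
```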
## Basic Usage

```dart
import 'package:flutter_davoice/flutter_davoice.dart';

final speech = FlutterDavoice();

await speech.setLicense('YOUR_LICENSE_KEY');

await speech.initAll(
  const DavoiceInitAllOptions(
    locale: 'en-US',
    model: 'assets/models/model_ex_ariana_fast.dm',
    timeoutMs: 30000,
  ),
);

await speech.speak(
  'Hello from Davoice.',
  speakerId: 0,
  speed: 0.88,
);

await speech.start(
  'en-US',
  options: const DavoiceStartOptions(extraPartialResults: true),
);
```
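Stopping mirrors starting. A sketch using the stop/cancel calls listed under the API surface (assuming `isRecognizing` returns a `Future<bool>`):

```dart
import 'package:flutter_davoice/flutter_davoice.dart';

// Sketch: wind down recognition and playback.
Future<void> stopVoice(FlutterDavoice speech) async {
  if (await speech.isRecognizing()) {
    await speech.stop();       // stop and deliver final results
    // await speech.cancel();  // or discard pending results instead
  }
  await speech.stopSpeaking(); // halt any in-progress TTS
}
```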
## Events

Native callbacks are exposed as Dart streams:

```dart
final subscriptions = [
  speech.onSpeechStart.listen((_) {
    // Recognition started.
  }),
  speech.onSpeechPartialResults.listen((event) {
    print(event.value.join(' '));
  }),
  speech.onSpeechResults.listen((event) {
    print(event.value.join(' '));
  }),
  speech.onSpeechError.listen((event) {
    print(event.message);
  }),
  speech.onFinishedSpeaking.listen((_) {
    // TTS or marked audio playback finished.
  }),
];
```
Available event streams:

- `onSpeechStart`
- `onSpeechRecognized`
- `onSpeechEnd`
- `onSpeechError`
- `onSpeechResults`
- `onSpeechPartialResults`
- `onSpeechVolumeChanged`
- `onNewSpeechWAV`
- `onFinishedSpeaking`
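When the hosting widget goes away, the subscriptions should be cancelled and the native engines released. A sketch of a `State.dispose`, assuming `destroyAll` is the teardown counterpart of `initAll`:

```dart
@override
void dispose() {
  // Cancel every stream subscription created in the Events example.
  for (final sub in subscriptions) {
    sub.cancel();
  }
  // Release the native Davoice engines (assumption: destroyAll
  // tears down both STT and TTS).
  speech.destroyAll();
  super.dispose();
}
```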
## API Surface

Speech recognition:

`initAll`, `destroyAll`, `initWithoutModel`, `destroyWithoutModel`, `destroyWihtouModel`, `start`, `startWithSVOnboardingJson`, `stop`, `cancel`, `isAvailable`, `isRecognizing`, `pauseSpeechRecognition`, `unPauseSpeechRecognition`, `pauseMicrophone`, `unPauseMicrophone`, `hasIOSMicPermissions`, `requestIOSMicPermissions`, `hasIOSSpeechRecognitionPermissions`, `requestIOSSpeechRecognitionPermissions`
Text-to-speech and playback:

`initTTS`, `speak`, `stopSpeaking`, `playWav`, `playPCM`, `playBuffer`
License and audio:

`setLicense`, `isLicenseValid`, `setAECEnabled`
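Together these allow a fail-fast startup. A sketch, assuming `isLicenseValid` returns a `Future<bool>`:

```dart
import 'package:flutter_davoice/flutter_davoice.dart';

// Sketch: apply the license, verify it, then enable echo cancellation.
Future<void> configureDavoice(FlutterDavoice speech) async {
  await speech.setLicense('YOUR_LICENSE_KEY');
  if (!await speech.isLicenseValid()) {
    throw StateError('Davoice license was rejected');
  }
  await speech.setAECEnabled(true); // acoustic echo cancellation
}
```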
Android remote STT helpers:

`initAllRemoteSTT`, `initAllRemoteSTTAndTTS`
## Model Assets

Model paths are passed as strings to the native bridge. In a Flutter app, add the
models to the app's `pubspec.yaml`:

```yaml
flutter:
  assets:
    - assets/models/model_ex_ariana_fast.dm
    - assets/models/model_ex_rich_fast.dm
```

Then pass the same asset key to `initAll` or `initTTS`.
## Speaker Verification

Speaker verification enrollment is produced by the companion wake-word package and can be supplied to Davoice. Use a saved enrollment JSON file path when you want the STT and TTS flows to be speaker verified:

```dart
await speech.initAll(
  DavoiceInitAllOptions(
    locale: 'en-US',
    model: 'assets/models/model_ex_ariana_fast.dm',
    onboardingJsonPath: enrollmentJson,
  ),
);
```
The standalone example app demonstrates this flow with flutter_wake_word.
## Example App

The runnable Flutter example app is intentionally kept outside this package repository because it carries large voice, wake-word, and speaker-verification model assets that may require Git LFS.

Use the standalone example app repo:

https://github.com/frymanofer/Flutter_DaVoice

That app mirrors the React Native demo flow:

- voice model selection
- optional license entry
- speaker verification onboarding
- saved speaker signature reuse
- real-time speaker verification before voice activation
- wake-word detection through `flutter_wake_word`
- Davoice STT/TTS initialization with speaker-verification enrollment
- manual TTS and speech echo testing
- full AI chat with Gemini-style turn taking
## Validation

Common validation commands:

```shell
flutter analyze
flutter test
cd example && flutter test
cd example && flutter build apk --debug
cd example && flutter build ios --no-codesign
```

The repo also includes `./update_library.sh`, which refreshes the Android checksums and runs the package checks.