A Flutter package for Picovoice's end-to-end voice platform.
Picovoice SDK for Flutter #
Picovoice #
Made in Vancouver, Canada by Picovoice
Picovoice is an end-to-end platform for building voice products on your terms. It enables creating voice experiences similar to Alexa and Google, but it runs entirely on-device.
Picovoice is:
- Private: Everything is processed offline. Intrinsically HIPAA and GDPR compliant.
- Reliable: Runs without needing constant connectivity.
- Zero Latency: Edge-first architecture eliminates unpredictable network delay.
- Accurate: Resilient to noise and reverberation. It outperforms cloud-based alternatives by wide margins.
- Cross-Platform: Design once, deploy anywhere. Build using familiar languages and frameworks.
Compatibility #
This binding is for running Picovoice on Flutter 1.20.0+ on the following platforms:
- Android 4.1+ (API 16+)
- iOS 9.0+
Installation #
To start, you must have the Flutter SDK installed on your system. Once installed, you can run flutter doctor to determine any other missing requirements.
To add the Picovoice package to your app project, you can reference it in your pubspec.yaml:
dependencies:
  picovoice: ^<version>
If you prefer to clone the repo and use it locally, you can reference the local binding location:
dependencies:
  picovoice:
    path: /path/to/picovoice/flutter/binding
NOTE: When archiving for release on iOS, you may have to change the build settings of your project in order to prevent stripping of the Picovoice library. To do this, open the Runner project in Xcode and change the build setting Deployment -> Strip Style to 'Non-Global Symbols'.
Permissions #
To enable recording with the hardware's microphone, you must first ensure that you have enabled the proper permission on both iOS and Android.
On iOS, open your Info.plist and add the following line:
<key>NSMicrophoneUsageDescription</key>
<string>[Permission explanation]</string>
On Android, open your AndroidManifest.xml and add the following line:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
Usage #
The module provides you with two levels of API to choose from depending on your needs.
High-Level API
PicovoiceManager provides a high-level API that takes care of audio recording. This class is the quickest way to get started.
The constructor PicovoiceManager.create will create an instance of the PicovoiceManager using the Porcupine keyword and Rhino context files that you pass to it.
import 'package:picovoice/picovoice_manager.dart';
import 'package:picovoice/picovoice_error.dart';
_picovoiceManager = PicovoiceManager.create(
    "/path/to/keyword/file.ppn",
    _wakeWordCallback,
    "/path/to/context/file.rhn",
    _inferenceCallback);
The wakeWordCallback and inferenceCallback parameters are functions that you want to execute when a wake word is detected and when an inference is made.
void _wakeWordCallback() {
  // wake word detected
}

void _inferenceCallback(Map<String, dynamic> inference) {
  if (inference['isUnderstood']) {
    String intent = inference['intent'];
    Map<String, String> slots = inference['slots'];
    // add code to take action based on inferred intent and slot values
  } else {
    // add code to handle unsupported commands
  }
}
You can override the default model files and sensitivities. There is also an optional errorCallback that is invoked if a problem is encountered while processing audio. These optional parameters can be passed in like so:
void createPicovoiceManager() {
  double porcupineSensitivity = 0.7;
  double rhinoSensitivity = 0.6;
  _picovoiceManager = PicovoiceManager.create(
      "/path/to/keyword/file.ppn",
      _wakeWordCallback,
      "/path/to/context/file.rhn",
      _inferenceCallback,
      porcupineSensitivity: porcupineSensitivity,
      rhinoSensitivity: rhinoSensitivity,
      porcupineModelPath: "/path/to/porcupine/model.pv",
      rhinoModelPath: "/path/to/rhino/model.pv",
      errorCallback: _errorCallback);
}
void _errorCallback(PvError error) {
  // handle error
}
Once you have instantiated a PicovoiceManager, you can start audio capture and processing by calling:
try {
  await _picovoiceManager.start();
} on PvAudioException catch (ex) {
  // deal with audio exception
} on PvError catch (ex) {
  // deal with Picovoice init error
}
And then stop it by calling:
await _picovoiceManager.stop();
PicovoiceManager uses our flutter_voice_processor Flutter plugin to capture frames of audio and automatically pass them to the Picovoice platform.
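Putting it together, a complete high-level lifecycle might look like the following sketch. The class name, method names, paths, and callback bodies here are illustrative placeholders, not part of the API:
import 'package:picovoice/picovoice_manager.dart';
import 'package:picovoice/picovoice_error.dart';

class VoiceController {
  PicovoiceManager _picovoiceManager;

  void _wakeWordCallback() {
    // wake word detected; e.g. update the UI to show a listening state
  }

  void _inferenceCallback(Map<String, dynamic> inference) {
    // act on the inferred intent and slot values
  }

  Future<void> startListening() async {
    _picovoiceManager = PicovoiceManager.create(
        "/path/to/keyword/file.ppn",
        _wakeWordCallback,
        "/path/to/context/file.rhn",
        _inferenceCallback);
    try {
      // begins audio capture via flutter_voice_processor
      await _picovoiceManager.start();
    } on PvError {
      // handle start-up or audio error
    }
  }

  Future<void> stopListening() async {
    await _picovoiceManager.stop();
  }
}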
Low-Level API
Picovoice provides low-level access to the Picovoice platform for those who want to incorporate it into an existing audio processing pipeline.
Picovoice is created by passing a Porcupine keyword file and Rhino context file to the create static constructor. Sensitivity and model files are optional.
import 'package:picovoice/picovoice.dart';
import 'package:picovoice/picovoice_error.dart';
void createPicovoice() async {
  double porcupineSensitivity = 0.7;
  double rhinoSensitivity = 0.6;
  try {
    _picovoice = await Picovoice.create(
        "/path/to/keyword/file.ppn",
        wakeWordCallback,
        "/path/to/context/file.rhn",
        inferenceCallback,
        porcupineSensitivity,
        rhinoSensitivity,
        "/path/to/porcupine/model.pv",
        "/path/to/rhino/model.pv");
  } on PvError catch (err) {
    // handle Picovoice init error
  }
}
void wakeWordCallback() {
  // wake word detected
}

void inferenceCallback(Map<String, dynamic> inference) {
  if (inference['isUnderstood']) {
    String intent = inference['intent'];
    Map<String, String> slots = inference['slots'];
    // add code to take action based on inferred intent and slot values
  } else {
    // add code to handle unsupported commands
  }
}
To use Picovoice, you must pass frames of audio to the process function. The callbacks will automatically trigger when the wake word is detected and then when the follow-on command is detected.
List<int> buffer = getAudioFrame();
try {
  _picovoice.process(buffer);
} on PvError catch (error) {
  // handle error
}
For process to work correctly, the audio data must be in the audio format required by Picovoice.
The required audio format is found by calling .sampleRate to get the required sample rate and .frameLength to get the required frame size. Audio must be single-channel and 16-bit linearly-encoded.
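For example, if your pipeline delivers single-channel 16-bit samples at the correct sample rate, you might buffer them until a full frame is available before calling process. This is a minimal sketch: onAudioSamples and the buffering strategy are assumptions about your own pipeline, not part of the Picovoice API.
// `onAudioSamples` is a hypothetical hook into your own audio pipeline,
// delivering single-channel, 16-bit PCM at _picovoice.sampleRate.
List<int> _frameBuffer = [];

void onAudioSamples(List<int> samples) {
  _frameBuffer.addAll(samples);
  // hand exactly frameLength samples at a time to process
  while (_frameBuffer.length >= _picovoice.frameLength) {
    List<int> frame = _frameBuffer.sublist(0, _picovoice.frameLength);
    _frameBuffer = _frameBuffer.sublist(_picovoice.frameLength);
    try {
      _picovoice.process(frame);
    } on PvError catch (error) {
      // handle error
    }
  }
}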
Finally, once you no longer need the Picovoice instance, be sure to explicitly release the resources allocated to it:
_picovoice.delete();
Custom Model Integration #
To add custom models to your Flutter application, first add them to an assets folder in your project directory. Then add them to your pubspec.yaml:
flutter:
  assets:
    - assets/keyword.ppn
    - assets/context.rhn
In your Flutter code, using the path_provider plugin, extract the asset files to your device like so:
import 'dart:io';
import 'dart:typed_data';
import 'package:flutter/services.dart';
import 'package:path_provider/path_provider.dart';

String keywordAsset = "assets/keyword.ppn";
String extractedKeywordPath = await _extractAsset(keywordAsset);
String contextAsset = "assets/context.rhn";
String extractedContextPath = await _extractAsset(contextAsset);
// create Picovoice
// ...

Future<String> _extractAsset(String resourcePath) async {
  // extraction destination
  String resourceDirectory = (await getApplicationDocumentsDirectory()).path;
  String outputPath = '$resourceDirectory/$resourcePath';
  File outputFile = File(outputPath);

  // read the bundled asset and write it out to the destination file
  ByteData data = await rootBundle.load(resourcePath);
  final buffer = data.buffer;
  await outputFile.create(recursive: true);
  await outputFile.writeAsBytes(
      buffer.asUint8List(data.offsetInBytes, data.lengthInBytes));
  return outputPath;
}
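Once extracted, the returned paths can be passed straight to the create constructor; for example, continuing with the high-level API from above:
_picovoiceManager = PicovoiceManager.create(
    extractedKeywordPath,
    _wakeWordCallback,
    extractedContextPath,
    _inferenceCallback);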
Non-English Models #
In order to detect wake words and run inference in other languages, you need to use the corresponding model files. The model files for all supported languages are available in the Porcupine and Rhino GitHub repositories.
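For example, to run Picovoice in another language, pass the matching model files through the optional model path parameters shown earlier. The German file names below are illustrative placeholders:
// keyword, context, and model files must all be for the same language
_picovoiceManager = PicovoiceManager.create(
    "/path/to/german/keyword.ppn",
    _wakeWordCallback,
    "/path/to/german/context.rhn",
    _inferenceCallback,
    porcupineModelPath: "/path/to/porcupine_params_de.pv",
    rhinoModelPath: "/path/to/rhino_params_de.pv");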
Demo App #
Check out the Picovoice Flutter demo to see what it looks like to use Picovoice in a cross-platform app!