azure_speech_recognition 0.8.0

  • Readme
  • Changelog
  • Example
  • Installing

AzureSpeechRecognition #

Demonstrates how to use the AzureSpeechRecognition plugin.

Getting Started #

This project is a starting point for using the Azure Speech Recognition service.

To use this plugin you must have already created an account on the Azure Cognitive Services page.

Installation #

To install the latest version, add the package to your pubspec.yaml:

azure_speech_recognition: ^0.8.0

Usage #

import 'package:azure_speech_recognition/azure_speech_recognition.dart';

Initialize #

There are two types of initializers:

Simple initializer #

Use this initializer in every case other than intent recognition. The default language is "en-EN", but you can pass any supported locale.

AzureSpeechRecognition.initialize("your_subscription_key", "your_server_region", lang: "it-IT");

Intent initializer #

Use this initializer only for intent recognition. The default language is "en-EN", but you can pass any supported locale.

AzureSpeechRecognition.initializeLanguageUnderstading("your_language_subscription_key", "your_language_server_region", "your_language_appId", lang: "it-IT");

Types of recognition #

Simple voice recognition #

The response is given at the end of the recognition.


AzureSpeechRecognition _speechAzure;
String subKey = "your_key";
String region = "your_server_region";
String lang = "it-IT";

void activateSpeechRecognizer() {
  // MANDATORY INITIALIZATION
  AzureSpeechRecognition.initialize(subKey, region, lang: lang);

  _speechAzure.setFinalTranscription((text) {
    // Do what you want with the final transcription.
  });

  _speechAzure.setRecognitionStartedHandler(() {
    // Called when recognition starts (optional).
  });
}

@override
void initState() {
  _speechAzure = AzureSpeechRecognition();

  activateSpeechRecognizer();

  super.initState();
}

Future recognizeVoice() async {
  try {
    AzureSpeechRecognition.simpleVoiceRecognition();
  } on PlatformException catch (e) {
    print("Failed to start recognition: '${e.message}'.");
  }
}

Voice recognition with microphone streaming #

Partial phrases are returned through the handler registered with setRecognitionResultHandler while you speak; the final transcription is delivered at the end through the handler registered with setFinalTranscription.


void activateSpeechRecognizer() {
  // MANDATORY INITIALIZATION
  AzureSpeechRecognition.initialize(subKey, region, lang: lang);

  _speechAzure.setFinalTranscription((text) {
    // Do what you want with the final transcription.
  });

  _speechAzure.setRecognitionResultHandler((text) {
    // Do what you want with the partial transcription (called every time a word is recognized).
    // If you display the text somewhere, you can call setState() here to update it with the partial result.
  });

  _speechAzure.setRecognitionStartedHandler(() {
    // Called when recognition starts (optional).
  });
}

Future recognizeVoiceMicStreaming() async {
  try {
    AzureSpeechRecognition.micStream();
  } on PlatformException catch (e) {
    print("Failed to start recognition: '${e.message}'.");
  }
}


Continuous voice recognition: CURRENTLY NOT WORKING #

Partial phrases are returned through the handler registered with setRecognitionResultHandler while you speak; the final transcription is delivered through the handler registered with setFinalTranscription when the method is called again, which stops the recognition.
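
Once this feature is fixed, usage would presumably mirror the other recognizers. Below is a minimal sketch only; the method name continuousRecording() is an assumption for illustration, not an API confirmed by this README:

// HYPOTHETICAL sketch: continuousRecording() is an assumed method name,
// not confirmed by this README.
bool _isListening = false;

Future toggleContinuousRecognition() async {
  try {
    // Assumed behavior: the first call starts continuous recognition and
    // a second call stops it, triggering the final transcription handler.
    AzureSpeechRecognition.continuousRecording();
    _isListening = !_isListening;
  } on PlatformException catch (e) {
    print("Failed to toggle recognition: '${e.message}'.");
  }
}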

Voice intent recognition #

Partial phrases are returned through the handler registered with setRecognitionResultHandler while you speak; the final transcription is delivered at the end through the handler registered with setFinalTranscription.


void activateSpeechRecognizer() {
  // MANDATORY INITIALIZATION
  AzureSpeechRecognition.initializeLanguageUnderstading(subKey, region, appId, lang: lang);

  _speechAzure.setFinalTranscription((text) {
    // Do what you want with the final transcription.
  });

  _speechAzure.setRecognitionResultHandler((text) {
    // Do what you want with the partial transcription (called every time a word is recognized).
    // If you display the text somewhere, you can call setState() here to update it with the partial result.
  });

  _speechAzure.setRecognitionStartedHandler(() {
    // Called when recognition starts (optional).
  });
}

Future speechIntentRecognizer() async {
  try {
    AzureSpeechRecognition.intentRecognizer();
  } on PlatformException catch (e) {
    print("Failed to start recognition: '${e.message}'.");
  }
}

Voice recognition with keyword: CURRENTLY NOT WORKING #

This method requires the keyword file to be placed in the assets folder; the name of that file is the mandatory parameter. Partial phrases are returned through the handler registered with setRecognitionResultHandler, and the final transcription is delivered at the end through the handler registered with setFinalTranscription. The keyword file would be declared as a Flutter asset, as sketched below.
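
A minimal pubspec.yaml sketch for bundling the keyword file; the file name keyword.table is a placeholder for illustration, not a name this README specifies:

flutter:
  assets:
    # HYPOTHETICAL placeholder name: use your actual keyword model file.
    - assets/keyword.table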

Contributing #

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

Please make sure to update tests as appropriate.

0.8.0 #

Breaking changes: #

New method to initialize the speech recognition plugin. See the README for details.

New release: #

  • Support for asynchronous recognition for simple voice recognition.
  • Support for microphone streaming, to get text while dictating.
  • New method to initialize the AzureSpeechRecognition plugin.

0.0.1 #

First release: #

  • It supports only Android.
  • It supports only voice recognition, with the result returned at the end of the speech.

TODO:

  • Add support for iOS.
  • Add support for microphone streaming.

example/lib/main.dart

import 'package:flutter/material.dart';
import 'dart:async';

import 'package:flutter/services.dart';
import 'package:azure_speech_recognition/azure_speech_recognition.dart';

void main() => runApp(MyApp());

class MyApp extends StatefulWidget {
  @override
  _MyAppState createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  String _centerText = 'Unknown';
  AzureSpeechRecognition _speechAzure;
  String subKey = "your_key";
  String region = "your_server_region";
  String lang = "it-IT";
  bool isRecording = false;

  void activateSpeechRecognizer() {
    // MANDATORY INITIALIZATION
    AzureSpeechRecognition.initialize(subKey, region, lang: lang);

    _speechAzure.setFinalTranscription((text) {
      // Do what you want with the final transcription.
      setState(() {
        _centerText = text;
        isRecording = false;
      });
    });

    _speechAzure.setRecognitionStartedHandler(() {
      // Called when recognition starts (optional).
      setState(() => isRecording = true);
    });
  }
  @override
  void initState() {
    _speechAzure = AzureSpeechRecognition();

    activateSpeechRecognizer();

    super.initState();
  }

  Future _recognizeVoice() async {
    try {
      AzureSpeechRecognition.simpleVoiceRecognition();
    } on PlatformException catch (e) {
      print("Failed to get text: '${e.message}'.");
    }
  }



  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(
          title: const Text('Plugin example app'),
        ),
        body: Center(
          child: Column(
            children: <Widget>[
              Text('TEXT RECOGNIZED : $_centerText\n'),
              FloatingActionButton(
                onPressed: () {
                  if (!isRecording) _recognizeVoice();
                },
                child: Icon(Icons.mic),
              ),
            ],
          ),
        ),
      ),
    );
  }
}

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:


dependencies:
  azure_speech_recognition: ^0.8.0

2. Install it

You can install packages from the command line:

with Flutter:


$ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

3. Import it

Now in your Dart code, you can use:


import 'package:azure_speech_recognition/azure_speech_recognition.dart';
  
  • Popularity: 29 (how popular the package is relative to other packages)
  • Health: 100 (code health derived from static analysis)
  • Maintenance: 100 (how tidy and up-to-date the package is)
  • Overall: 65 (weighted score of the above)

We analyzed this package on Jul 2, 2020. The analysis completed successfully using:

  • Dart: 2.8.4
  • pana: 0.13.13
  • Flutter: 1.17.5

Analysis suggestions

Package does not support the Flutter platforms ios, linux, macos, web, or windows

Because of import path [package:azure_speech_recognition/azure_speech_recognition.dart], which declares support only for the platform: android.

Package not compatible with SDK dart

Because of import path [azure_speech_recognition] that is in a package requiring null.

Health suggestions

Format lib/azure_speech_recognition.dart.

Run flutter format to format lib/azure_speech_recognition.dart.

Dependencies

Package          Constraint        Resolved   Available

Direct dependencies:
  Dart SDK       >=2.1.0 <3.0.0
  flutter                          0.0.0

Transitive dependencies:
  collection                       1.14.12    1.14.13
  meta                             1.1.8
  sky_engine                       0.0.99
  typed_data                       1.1.6      1.2.0
  vector_math                      2.0.8

Dev dependencies:
  flutter_test