Google Speech

This package allows the use of Google Speech Api with grpc as a pure Dart implementation. With the support of grpc it is also possible to use the streaming transcription of the Google Speech Api with this package.

Demo recognize

[Demo animation: transcribing a file with recognize]

Demo Streaming

[Demo animation: streaming transcription]

Before we get started

To use the Google Speech API, you must first create a Google Cloud account and enable the Speech API. The best way to do this is to follow the first section of this documentation.

Once you have created a service account and downloaded the JSON file with the necessary credentials, you can start using this package.

The examples in this README authenticate via a service account, so create one and have its JSON credentials ready. (Since version 3.0.0, token-based alternatives are also available; see Authentication below.)

Getting Started

Authentication

As of version 3.0.0 there are three different ways to authenticate. In addition to the existing viaServiceAccount method, you can now also authenticate via a token or a ThirdPartyAuthenticator.

Authentication via ThirdPartyAuthenticator (version 3.0.0 and above)

    final speechToText = SpeechToText.viaThirdPartyAuthenticator(
      ThirdPartyAuthenticator(
        obtainCredentialsFromThirdParty: () async {
          // Request your own backend to obtain a token.
          final json = await requestCredentialFromMyApi();
          return AccessCredentials.fromJson(json);
        },
      ),
    );

Authentication via token (version 3.0.0 and above)

Creates a SpeechToText instance using a token. You are responsible for refreshing the token when it expires.

    final speechToText = SpeechToText.viaToken(
      'Bearer',
      '<token-here>',
    );

Authentication via a service account

There are two ways to authenticate using a service account. Option one is passing the JSON file directly. Make sure the file actually exists at the path you pass and that it has a .json extension.

    import 'package:google_speech/speech_client_authenticator.dart';
    
    final serviceAccount = ServiceAccount.fromFile(File('PATH_TO_FILE'));

    final speechToText = SpeechToText.viaServiceAccount(serviceAccount);

Option two is passing the JSON data directly as a string. This can be used, for example, to load the credentials from an external service instead of bundling them with the app.

    final serviceAccount = ServiceAccount.fromString(r'''{YOUR_JSON_STRING}''');
    
    /// OR load the data from assets
    
    final serviceAccount = ServiceAccount.fromString(
        await rootBundle.loadString('assets/test_service_account.json'));

    final speechToText = SpeechToText.viaServiceAccount(serviceAccount);

Once the ServiceAccount is set up, you can start using the API.

Transcribing a file using recognize

Define a RecognitionConfig
    final config = RecognitionConfig(
                         encoding: AudioEncoding.LINEAR16,
                         model: RecognitionModel.basic,
                         enableAutomaticPunctuation: true,
                         sampleRateHertz: 16000,
                         languageCode: 'en-US');
Get the contents of the audio file
     Future<List<int>> _getAudioContent(String name) async {
       final directory = await getApplicationDocumentsDirectory();
       final path = '${directory.path}/$name';
       return (await File(path).readAsBytes()).toList();
     }
    
    final audio = await _getAudioContent('test.wav');
And finally send the request
    final response = await speechToText.recognize(config, audio);
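The response contains a list of results, each with one or more alternatives. A minimal sketch of printing the transcript (field names follow the generated cloud_speech protobuf; treat this as illustrative):

```dart
// Join the top alternative of every result into one transcript string.
final transcript = response.results
    .map((result) => result.alternatives.first.transcript)
    .join('\n');
print(transcript);
```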

Transcribing a file using streamRecognize

Define a StreamingRecognitionConfig
    final streamingConfig = StreamingRecognitionConfig(config: config, interimResults: true);
Get the contents of the audio file as a stream, or use an audio stream coming directly from a microphone input
     Future<Stream<List<int>>> _getAudioStream(String name) async {
       final directory = await getApplicationDocumentsDirectory();
       final path = '${directory.path}/$name';
       return File(path).openRead();
     }
    
    final audio = await _getAudioStream('test.wav');
And finally send the request
    final responseStream = speechToText.streamingRecognize(streamingConfig, audio);
    responseStream.listen((data) {
        // listen for response 
    });
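With interimResults enabled, each streamed response may carry interim results as well as final ones. A minimal sketch of a listener that distinguishes them (field names follow the generated StreamingRecognizeResponse protobuf; treat this as illustrative):

```dart
responseStream.listen((data) {
  if (data.results.isEmpty) return;
  final result = data.results.first;
  final transcript = result.alternatives.first.transcript;
  if (result.isFinal) {
    print('Final: $transcript');
  } else {
    print('Interim: $transcript');
  }
}, onDone: () => print('Streaming finished.'));
```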

More information can be found in the official Google Cloud Speech documentation.

Getting empty response (Issue #25)

If google_speech returns an empty response, the cause is most likely the recorded audio file itself.

You can find more information here: https://cloud.google.com/speech-to-text/docs/troubleshooting#returns_an_empty_response

Use Google Speech Beta

Since version 1.1.0, google_speech also supports features available in the Google Speech Beta API. To use them, simply use SpeechToTextBeta instead of SpeechToText.
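The beta client is used the same way as the stable one; only the class name changes. A minimal sketch, assuming a serviceAccount, config, and audio set up as in the sections above:

```dart
final speechToText = SpeechToTextBeta.viaServiceAccount(serviceAccount);
// recognize and streamingRecognize work as in the stable API,
// with additional beta-only configuration options available.
final response = await speechToText.recognize(config, audio);
```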

Use Google Speech Version 2

Since version 4.0.0, google_speech also supports features available in Google Speech V2. To use them, use SpeechToTextV2 instead of SpeechToText. An example can be found in audio_file_example_v2.
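A minimal sketch of creating a V2 client (the projectId parameter is an assumption based on the V2 API's project-scoped recognizers; verify against audio_file_example_v2 for your package version):

```dart
final speechToText = SpeechToTextV2.viaServiceAccount(
  serviceAccount,
  projectId: 'your-project-id',
);
```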

Endless stream support

Since version 5.0.0, google_speech also supports endless streaming. To use it, use EndlessStreamingService instead of SpeechToText.

    final serviceAccount = ServiceAccount.fromString(r'''{YOUR_JSON_STRING}''');
    
    /// OR load the data from assets
    
    final serviceAccount = ServiceAccount.fromString(
        await rootBundle.loadString('assets/test_service_account.json'));

    final speechToText = EndlessStreamingService.viaServiceAccount(serviceAccount);

    final responseStream = speechToText.endlessStream;
    
    speechToText.endlessStreamingRecognize(
        StreamingRecognitionConfig(config: config, interimResults: true),
        _audioStream!);
    
    responseStream.listen((data) {...});
