google_speech 1.0.2


Google Speech #

This package is a pure Dart implementation of the Google Speech API over gRPC. Thanks to the gRPC support, the streaming transcription of the Google Speech API can also be used with this package.

Demo with recognize

Demo with streaming

Before we get started #

To use the Google Speech API, you first need a Google Cloud account with the Speech API activated. The best way to set this up is to follow the first section of this documentation.

After you have created a service account and downloaded the JSON file with the necessary credentials, you can start using this package.

At this time this package only supports authentication via a service account. It is therefore absolutely necessary to create a service account and have its JSON credentials ready.
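For reference, a service account key file downloaded from the Google Cloud Console generally has the following shape (values shortened here; the exact set of fields may vary, but `client_email` and `private_key` are the ones used for authentication):

```json
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "abc123",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "speech-client@your-project-id.iam.gserviceaccount.com",
  "client_id": "1234567890",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```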

Getting Started #

Authentication via a service account

There are two ways to authenticate with a service account. Option one is to pass the JSON file directly. Make sure that the file actually exists at the path you pass and that it has a .json extension.

    import 'package:google_speech/speech_client_authenticator.dart';
    
    final serviceAccount = ServiceAccount.fromFile(File('PATH_TO_FILE'));

Option two is to pass the JSON data directly as a string. This is useful, for example, when the credentials are loaded from an external service first instead of being bundled in the app.

    final serviceAccount = ServiceAccount.fromString(r'''{YOUR_JSON_STRING}''');
    
    /// OR load the data from assets
    
    final serviceAccount = ServiceAccount.fromString(
        await rootBundle.loadString('assets/test_service_account.json'));

After you have successfully created the ServiceAccount, you can start using the API.

Initialize SpeechToText

    import 'package:google_speech/google_speech.dart';
    
    final speechToText = SpeechToText.viaServiceAccount(serviceAccount);

Transcribing a file using recognize

Define a RecognitionConfig

    final config = RecognitionConfig(
        encoding: AudioEncoding.LINEAR16,
        model: RecognitionModel.basic,
        enableAutomaticPunctuation: true,
        sampleRateHertz: 16000,
        languageCode: 'en-US');
Get the contents of the audio file

    Future<List<int>> _getAudioContent(String name) async {
      final directory = await getApplicationDocumentsDirectory();
      final path = directory.path + '/$name';
      return File(path).readAsBytesSync().toList();
    }
    
    final audio = await _getAudioContent('test.wav');
And finally send the request

    final response = await speechToText.recognize(config, audio);
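The response contains a list of results, each with one or more alternatives. The most likely transcript can be joined into a single string like this (the same pattern is used in the example project below):

```dart
// Join the top alternative of every result into one transcript.
final transcript = response.results
    .map((result) => result.alternatives.first.transcript)
    .join('\n');
print(transcript);
```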

Transcribing a file using streamRecognize

Define a StreamingRecognitionConfig

    final streamingConfig = StreamingRecognitionConfig(config: config, interimResults: true);
Get the contents of the audio file as a stream, or get an audio stream directly from a microphone input

    Future<Stream<List<int>>> _getAudioStream(String name) async {
      final directory = await getApplicationDocumentsDirectory();
      final path = directory.path + '/$name';
      return File(path).openRead();
    }
    
    final audio = await _getAudioStream('test.wav');
And finally send the request

    final responseStream = speechToText.streamingRecognize(streamingConfig, audio);
    responseStream.listen((data) {
        // listen for response 
    });
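Because interimResults is enabled, the stream delivers partial hypotheses before each final one. A sketch of telling them apart using the isFinal flag that the Google Speech API sets on each streaming result:

```dart
responseStream.listen((data) {
  for (final result in data.results) {
    if (result.isFinal) {
      // Final result: the API will not revise this segment anymore.
      print('Final: ${result.alternatives.first.transcript}');
    } else {
      // Interim result: a live guess that may still change.
      print('Interim: ${result.alternatives.first.transcript}');
    }
  }
}, onDone: () => print('Stream closed'));
```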

More information can be found in the official Google Cloud Speech documentation.

TODO #

  • [x] Seeking example in Example project
  • [x] Add streamingRecognize support
  • [ ] Add longRunningRecognize support
  • [ ] Add infinity stream support
  • [ ] Add more tests

[1.0.2] - Added an example with microphone input. #

  • Added an example project which shows the use of google_speech with a microphone input.

[1.0.1] - Provide a Readme file in the Example folder. #

  • Added a readme file to the example folder, to follow the package layout conventions.

[1.0.0] - Initial release on pub.dev. #

  • Added a function to use the Google Speech Api via request.
  • Added a function to use the Google Speech Api via a stream.
  • Added a sample project.

example/README.md

Google Speech Examples #

Audio File Example #

To run this example project, you must place a service account JSON file in the assets folder of the project, named 'test_service_account.json'.

Mic Stream Example #

To run this example project, you must place a service account JSON file in the assets folder of the project, named 'test_service_account.json'.

```dart

import 'dart:io';

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:google_speech/google_speech.dart';
import 'package:path_provider/path_provider.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Audio File Example',
      theme: ThemeData(
        primarySwatch: Colors.blue,
        visualDensity: VisualDensity.adaptivePlatformDensity,
      ),
      home: AudioRecognize(),
    );
  }
}

class AudioRecognize extends StatefulWidget {
  @override
  State<StatefulWidget> createState() => _AudioRecognizeState();
}

class _AudioRecognizeState extends State<AudioRecognize> {
  bool recognizing = false;
  bool recognizeFinished = false;
  String text = '';

  void recognize() async {
    setState(() {
      recognizing = true;
    });
    final serviceAccount = ServiceAccount.fromString(
        await rootBundle.loadString('assets/test_service_account.json'));
    final speechToText = SpeechToText.viaServiceAccount(serviceAccount);
    final config = _getConfig();
    final audio = await _getAudioContent('test.wav');

    await speechToText.recognize(config, audio).then((value) {
      setState(() {
        text = value.results
            .map((e) => e.alternatives.first.transcript)
            .join('\n');
      });
    }).whenComplete(() => setState(() {
          recognizeFinished = true;
          recognizing = false;
        }));
  }

  void streamingRecognize() async {
    setState(() {
      recognizing = true;
    });
    final serviceAccount = ServiceAccount.fromString(
        await rootBundle.loadString('assets/test_service_account.json'));
    final speechToText = SpeechToText.viaServiceAccount(serviceAccount);
    final config = _getConfig();

    final responseStream = speechToText.streamingRecognize(
        StreamingRecognitionConfig(config: config, interimResults: true),
        await _getAudioStream('test.wav'));

    responseStream.listen((data) {
      setState(() {
        text =
            data.results.map((e) => e.alternatives.first.transcript).join('\n');
        recognizeFinished = true;
      });
    }, onDone: () {
      setState(() {
        recognizing = false;
      });
    });
  }

  RecognitionConfig _getConfig() => RecognitionConfig(
      encoding: AudioEncoding.LINEAR16,
      model: RecognitionModel.basic,
      enableAutomaticPunctuation: true,
      sampleRateHertz: 16000,
      languageCode: 'en-US');

  Future<void> _copyFileFromAssets(String name) async {
    var data = await rootBundle.load('assets/$name');
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path + '/$name';
    await File(path).writeAsBytes(
        data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes));
  }

  Future<List<int>> _getAudioContent(String name) async {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path + '/$name';
    if (!File(path).existsSync()) {
      await _copyFileFromAssets(name);
    }
    return File(path).readAsBytesSync().toList();
  }

  Future<Stream<List<int>>> _getAudioStream(String name) async {
    final directory = await getApplicationDocumentsDirectory();
    final path = directory.path + '/$name';
    if (!File(path).existsSync()) {
      await _copyFileFromAssets(name);
    }
    return File(path).openRead();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text('Audio File Example'),
      ),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.spaceAround,
          children: <Widget>[
            if (recognizeFinished)
              _RecognizeContent(
                text: text,
              ),
            RaisedButton(
              onPressed: recognizing ? () {} : recognize,
              child: recognizing
                  ? CircularProgressIndicator()
                  : Text('Test with recognize'),
            ),
            SizedBox(
              height: 10.0,
            ),
            RaisedButton(
              onPressed: recognizing ? () {} : streamingRecognize,
              child: recognizing
                  ? CircularProgressIndicator()
                  : Text('Test with streaming recognize'),
            ),
          ],
        ),
      ), // This trailing comma makes auto-formatting nicer for build methods.
    );
  }
}

class _RecognizeContent extends StatelessWidget {
  final String text;

  const _RecognizeContent({Key key, this.text}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return Padding(
      padding: const EdgeInsets.all(16.0),
      child: Column(
        children: <Widget>[
          Text(
            'The text recognized by the Google Speech Api:',
          ),
          SizedBox(
            height: 16.0,
          ),
          Text(
            text,
            style: Theme.of(context).textTheme.bodyText1,
          ),
        ],
      ),
    );
  }
}
```

Use this package as a library

1. Depend on it

Add this to your package's pubspec.yaml file:


    dependencies:
      google_speech: ^1.0.2

2. Install it

You can install packages from the command line:

with Flutter:


    $ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

3. Import it

Now in your Dart code, you can use:


    import 'package:google_speech/google_speech.dart';
  

We analyzed this package on Jul 9, 2020, and provided a score, details, and suggestions below. The analysis completed successfully using:

  • Dart: 2.8.4
  • pana: 0.13.14
  • Flutter: 1.17.5

Analysis suggestions

Package not compatible with SDK dart

Because:

  • google_speech that is a package requiring null.

Package not compatible with runtime flutter-web on web

Because:

  • package:google_speech/google_speech.dart that imports:
  • package:google_speech/speech_to_text.dart that imports:
  • package:grpc/grpc.dart that imports:
  • package:grpc/src/shared/streams.dart that imports:
  • package:http2/transport.dart that imports:
  • package:http2/src/hpack/hpack.dart that imports:
  • package:http2/src/hpack/huffman_table.dart that imports:
  • package:http2/src/hpack/huffman.dart that imports:
  • dart:io

Health suggestions

Fix lib/generated/google/protobuf/any.pb.dart. (-0.50 points)

Analysis of lib/generated/google/protobuf/any.pb.dart reported 1 hint:

line 12 col 8: Don't import implementation files from another package.

Fix lib/generated/google/protobuf/duration.pb.dart. (-0.50 points)

Analysis of lib/generated/google/protobuf/duration.pb.dart reported 1 hint:

line 13 col 8: Don't import implementation files from another package.

Fix lib/generated/google/protobuf/struct.pb.dart. (-0.50 points)

Analysis of lib/generated/google/protobuf/struct.pb.dart reported 1 hint:

line 12 col 8: Don't import implementation files from another package.

Fix lib/generated/google/protobuf/timestamp.pb.dart. (-0.50 points)

Analysis of lib/generated/google/protobuf/timestamp.pb.dart reported 1 hint:

line 13 col 8: Don't import implementation files from another package.

Dependencies

| Package | Constraint | Resolved | Available |
| --- | --- | --- | --- |
| **Direct dependencies** | | | |
| Dart SDK | >=2.7.0 <3.0.0 | | |
| fixnum | ^0.10.11 | 0.10.11 | |
| flutter | | 0.0.0 | |
| grpc | ^2.1.3 | 2.2.0 | |
| meta | ^1.1.8 | 1.1.8 | 1.2.2 |
| path_provider | ^1.6.8 | 1.6.11 | |
| protobuf | ^1.0.1 | 1.0.1 | |
| **Transitive dependencies** | | | |
| async | | 2.4.2 | |
| charcode | | 1.1.3 | |
| collection | | 1.14.12 | 1.14.13 |
| convert | | 2.1.1 | |
| crypto | | 2.1.5 | |
| file | | 5.2.1 | |
| googleapis_auth | | 0.2.12 | |
| http | | 0.12.1 | |
| http2 | | 1.0.0 | |
| http_parser | | 3.1.4 | |
| intl | | 0.16.1 | |
| path | | 1.7.0 | |
| path_provider_linux | | 0.0.1+2 | |
| path_provider_macos | | 0.0.4+3 | |
| path_provider_platform_interface | | 1.0.2 | |
| platform | | 2.2.1 | |
| plugin_platform_interface | | 1.0.2 | |
| process | | 3.0.13 | |
| sky_engine | | 0.0.99 | |
| source_span | | 1.7.0 | |
| string_scanner | | 1.0.5 | |
| term_glyph | | 1.1.0 | |
| typed_data | | 1.1.6 | 1.2.0 |
| vector_math | | 2.0.8 | 2.1.0-nullsafety |
| xdg_directories | | 0.1.0 | |
| **Dev dependencies** | | | |
| flutter_test | | | |
| mockito | ^4.1.1 | | |
| pedantic | ^1.9.0 | 1.9.0 | 1.9.2 |