create static method

Future<Rhino> create(
  String accessKey,
  String contextPath, {
  String? modelPath,
  String? device,
  double sensitivity = 0.5,
  double endpointDurationSec = 1.0,
  bool requireEndpoint = true,
})

Static creator for initializing Rhino.

accessKey AccessKey obtained from Picovoice Console (https://console.picovoice.ai/).

contextPath Absolute path to the Rhino context file (.rhn).

modelPath (Optional) Path to the file containing model parameters. If not set it will be set to the default location.

device (Optional) String representation of the device (e.g., CPU or GPU) to use. If set to best, the most suitable device is selected automatically. If set to gpu, the engine uses the first available GPU device. To select a specific GPU device, set this argument to gpu:${GPU_INDEX}, where ${GPU_INDEX} is the index of the target GPU. If set to cpu, the engine runs on the CPU with the default number of threads. To specify the number of threads, set this argument to cpu:${NUM_THREADS}, where ${NUM_THREADS} is the desired number of threads.

sensitivity (Optional) Inference sensitivity. A higher sensitivity value results in fewer misses at the cost of (potentially) increasing the erroneous inference rate. Sensitivity should be a floating-point number within [0, 1].

endpointDurationSec (Optional) Endpoint duration in seconds. An endpoint is a chunk of silence at the end of an utterance that marks the end of a spoken command. It should be a positive number within [0.5, 5]. A lower endpoint duration reduces delay and improves responsiveness. A higher endpoint duration ensures Rhino doesn't return inference preemptively in case the user pauses before finishing the request.

requireEndpoint (Optional) If set to true, Rhino requires an endpoint (a chunk of silence) after the spoken command. If set to false, Rhino tries to detect silence, but if it cannot, it will still provide inference regardless. Set to false only when operating in an environment with overlapping speech (e.g., people talking in the background).

Throws a RhinoException if not initialized correctly.

Returns an instance of the speech-to-intent engine.
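A minimal usage sketch is shown below. The import path assumes the rhino_flutter package, and the AccessKey and context path are placeholders to be replaced with your own values:

```dart
import 'package:rhino_flutter/rhino.dart';
import 'package:rhino_flutter/rhino_error.dart';

void main() async {
  try {
    // Placeholder values -- substitute your own AccessKey from
    // Picovoice Console and the path to your bundled .rhn context.
    Rhino rhino = await Rhino.create(
      '{YOUR_ACCESS_KEY}',
      'assets/contexts/my_context.rhn',
      sensitivity: 0.7,
    );
    // ... feed audio frames of rhino.frameLength samples at
    // rhino.sampleRate to rhino.process(...) ...
  } on RhinoException catch (err) {
    // Initialization failed (e.g., invalid AccessKey or missing context file).
    print(err.message);
  }
}
```

Because create is asynchronous and may throw, it should always be awaited inside a try/catch that handles RhinoException.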

Implementation

static Future<Rhino> create(String accessKey, String contextPath,
    {String? modelPath,
    String? device,
    double sensitivity = 0.5,
    double endpointDurationSec = 1.0,
    bool requireEndpoint = true}) async {
  // If the model or context was bundled as a Flutter asset, extract it
  // to the filesystem so the native layer can read it.
  if (modelPath != null) {
    modelPath = await _tryExtractFlutterAsset(modelPath);
  }

  contextPath = await _tryExtractFlutterAsset(contextPath);

  try {
    Map<String, dynamic> result =
        Map<String, dynamic>.from(await _channel.invokeMethod(_NativeFunctions.CREATE.name, {
      'accessKey': accessKey,
      'contextPath': contextPath,
      'modelPath': modelPath,
      'device': device,
      'sensitivity': sensitivity,
      'endpointDurationSec': endpointDurationSec,
      'requireEndpoint': requireEndpoint
    }));

    return Rhino._(
        result['handle'],
        result['contextInfo'],
        result['frameLength'],
        result['sampleRate'],
        result['version']);
  } on PlatformException catch (error) {
    throw rhinoStatusToException(error.code, error.message);
  } on Exception catch (error) {
    throw RhinoException(error.toString());
  }
}