Huawei ML Kit Flutter Plugin

Contents

  1. Introduction
  2. Installation Guide
  3. API Reference

1. Introduction

This plugin enables communication between the HUAWEI ML Kit SDK and the Flutter platform. HUAWEI ML Kit provides the following services:

  • Text Related Services: These services allow you to recognize the text in images, documents, cards and forms.
  • Language/Voice Related Services: These services provide text to speech, speech to text, translation and language detection capabilities.
  • Face/Body Related Services: These services provide capabilities like face, skeleton and hand detection.
  • Image Related Services: These services provide capabilities like object classification, landmark recognition and image super-resolution.

2. Installation Guide

Before you get started, you must register as a HUAWEI Developer and complete identity verification on the HUAWEI Developer website. For details, please refer to Register a HUAWEI ID.

Creating a Project in AppGallery Connect

Creating an app in AppGallery Connect is required in order to communicate with the Huawei services. To create an app, perform the following steps:

Step 1. Sign in to AppGallery Connect and select My Projects.

Step 2. Select your project from the project list or create a new one by clicking the Add Project button.

Step 3. Go to Project Setting > General Information and click Add App.

If an app exists in the project and you need to add a new one, expand the app selection area on the top of the page and click Add App.

Step 4. On the Add App page, enter the app information and click OK.

Configuring the Signing Certificate Fingerprint

A signing certificate fingerprint is used to verify the authenticity of an app when it attempts to access HMS Core (APK) through the HMS SDK. Before using HMS Core (APK), you must locally generate a signing certificate fingerprint and configure it in AppGallery Connect. You can refer to steps 3 and 4 of the Generating a Signing Certificate codelab tutorial for certificate generation. Perform the following steps after you have generated the certificate.

Integrating Flutter ML Plugin

Step 1. Sign in to AppGallery Connect and select your project from My Projects. Then go to the Manage APIs tab on the Project Settings page and make sure ML Kit is enabled.

Step 2. Go to the Project Setting > General Information page. Under the App information field, click agconnect-services.json to download the configuration file.

Step 3. Copy the agconnect-services.json file to the android/app directory of your project.

Step 4. Open the build.gradle file in the android directory of your project.

  • Navigate to the buildscript section and configure the Maven repository address and agconnect plugin for the HMS SDK.

      buildscript {
        repositories {
            google()
            jcenter()
            maven { url 'https://developer.huawei.com/repo/' }
        }
    
        dependencies {
            /*
             * <Other dependencies>
             */
            classpath 'com.huawei.agconnect:agcp:1.4.2.301'
        }
      }
    
  • Go to allprojects and configure the Maven repository address for the HMS SDK.

      allprojects {
        repositories {
            google()
            jcenter()
            maven { url 'https://developer.huawei.com/repo/' }
        }
      }
    

Step 5. Open the build.gradle file in the android/app/ directory.

  • Add the apply plugin: 'com.huawei.agconnect' line after the other apply entries.

      apply plugin: 'com.android.application'
      apply from: "$flutterRoot/packages/flutter_tools/gradle/flutter.gradle"
      apply plugin: 'com.huawei.agconnect'
    
  • Set your package name in defaultConfig > applicationId and set minSdkVersion to 19 or higher. The package name must match the package_name entry in the agconnect-services.json file.

  • Set multiDexEnabled to true; the ML Plugin contains many APIs, and enabling multidex keeps the app from exceeding the method count limit and crashing.

      defaultConfig {
          applicationId "<package_name>"
          minSdkVersion 19
          multiDexEnabled true
          /*
          * <Other configurations>
          */
      }
    

Step 6. Create a file <app_dir>/android/key.properties that contains a reference to the keystore you generated in the previous step (Generating a Signing Certificate). Add the following lines to the key.properties file and change the values according to the keystore you have generated.

storePassword=<your_keystore_password>
keyPassword=<your_key_password>
keyAlias=key
storeFile=<location of the keystore file, for example: D:\\Users\\<user_name>\\key.jks>

Warning: Keep this file private and do not include it in public source control.

Step 7. Add the following code to build.gradle, before the android block, to read the key.properties file:

def keystoreProperties = new Properties()
def keystorePropertiesFile = rootProject.file('key.properties')
if (keystorePropertiesFile.exists()) {
    keystoreProperties.load(new FileInputStream(keystorePropertiesFile))
}

android {
    ...
}

Step 8. Edit buildTypes as follows and add signingConfigs below:

signingConfigs {
    config {
        keyAlias keystoreProperties['keyAlias']
        keyPassword keystoreProperties['keyPassword']
        storeFile keystoreProperties['storeFile'] ? file(keystoreProperties['storeFile']) : null
        storePassword keystoreProperties['storePassword']
    }
}
buildTypes {
    debug {
        signingConfig signingConfigs.config
    }
    release {
        signingConfig signingConfigs.config
    }
}

Step 9. In your Flutter project directory, find and open your pubspec.yaml file and add the huawei_ml library to dependencies. For more details, please refer to the Using packages document.

  • To download the package from pub.dev:

          dependencies:
            huawei_ml: {library version}
    

    or

    If you downloaded the package from the HUAWEI Developer website, specify the library path on your local device.

          dependencies:
            huawei_ml:
                # Replace {library path} with actual library path of Huawei ML Kit Plugin for Flutter.
                path: {library path}
    
    • Replace {library path} with the actual library path of Flutter ML Plugin. The following are examples:
      • Relative path example: path: ../huawei_ml
      • Absolute path example: path: D:\Projects\Libraries\huawei_ml

Step 10. Run the following command to update package info.

    [project_path]> flutter pub get

Step 11. Import the library to access services and methods.

    import 'package:huawei_ml/huawei_ml.dart';

Step 12. Run the following command to start the app.

    [project_path]> flutter run

3. API Reference

MLAftEngine

Converts an audio file into a text file.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| void | startRecognition(MLAftSetting setting) | Starts the audio file transcription. |
| void | setAftListener(MLAftListener listener) | Sets the listener for transcription. |
| Future<void> | startTask(String taskId) | Resumes a long audio transcription task on the cloud. |
| Future<void> | pauseTask(String taskId) | Pauses a long audio transcription task on the cloud. |
| Future<void> | destroyTask(String taskId) | Destroys a long audio transcription task on the cloud. |
| Future<void> | getLongAftResult(String taskId) | Obtains the long audio transcription result from the cloud. |
| Future<bool> | closeAftEngine() | Closes the AFT engine. |

Methods

startRecognition(MLAftSetting setting)

Starts audio file transcription.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLAftSetting | Configurations for recognition. |

Return Type

| Type | Description |
| --- | --- |
| void | No return value. |

Call Example

MLAftEngine aftEngine = new MLAftEngine();
MLAftSetting setting = new MLAftSetting();
setting.path = "audio file path";

aftEngine.startRecognition(setting: setting);

setAftListener(MLAftListener listener)

Sets the listener for transcription.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| listener | MLAftListener | Listener for MLAftEngine. |

Call Example

MLAftEngine aftEngine = new MLAftEngine();

aftEngine.setAftListener((event, taskId, {errorCode, eventId, result, uploadProgress}) {
  // Your implementation here
});

startTask(String taskId)

Resumes a long audio transcription task on the cloud.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| taskId | String | ID of the audio transcription task. |

Return Type

| Type | Description |
| --- | --- |
| Future<void> | Future result of an execution that returns no value. |

Call Example

await aftEngine.startTask(taskId: "task id");

pauseTask(String taskId)

Pauses a long audio transcription task on the cloud.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| taskId | String | ID of the audio transcription task. |

Return Type

| Type | Description |
| --- | --- |
| Future<void> | Future result of an execution that returns no value. |

Call Example

await aftEngine.pauseTask(taskId: "task id");

destroyTask(String taskId)

Destroys a long audio transcription task on the cloud. If the task is destroyed after the audio file is successfully uploaded, the transcription has started and charging cannot be canceled.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| taskId | String | ID of the audio transcription task. |

Return Type

| Type | Description |
| --- | --- |
| Future<void> | Future result of an execution that returns no value. |

Call Example

await aftEngine.destroyTask(taskId: "task id");

getLongAftResult(String taskId)

Obtains the long audio transcription result from the cloud.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| taskId | String | ID of the audio transcription task. |

Return Type

| Type | Description |
| --- | --- |
| Future<void> | Future result of an execution that returns no value. |

Call Example

await aftEngine.getLongAftResult(taskId: "task id");

closeAftEngine()

Disables the audio transcription engine to release engine resources.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

await aftEngine.closeAftEngine();

Data Types

MLAftListener

A function type defined for listening to audio file transcription events.

| Definition | Description |
| --- | --- |
| void MLAftListener(MLAftEvent event, String taskId, {int eventId, MLAftResult result, int errorCode, double uploadProgress}) | Audio file transcription listener. |

Parameters

| Name | Type | Description |
| --- | --- | --- |
| event | MLAftEvent | Audio file transcription event. |
| taskId | String | Transcription task ID. |
| eventId | int | Transcription event ID. |
| result | MLAftResult | Transcription result. |
| errorCode | int | Error code on failure. |
| uploadProgress | double | Audio file upload progress. |

enum MLAftEvent

Enumerated object that represents the events of audio file transcription.

| Value | Description |
| --- | --- |
| onResult | Called when the audio transcription result is returned from the cloud. |
| onError | Called if an audio transcription error occurs. |
| onInitComplete | Reserved. |
| onUploadProgress | Reserved. |
| onEvent | Reserved. |
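
Call Example

A minimal handling sketch based on the listener signature above; the handler bodies are illustrative only, and no field access on MLAftResult is assumed.

aftEngine.setAftListener((event, taskId, {errorCode, eventId, result, uploadProgress}) {
  switch (event) {
    case MLAftEvent.onResult:
      // Transcription result returned from the cloud.
      print("Task $taskId finished: $result");
      break;
    case MLAftEvent.onError:
      // Transcription failed; inspect the error code.
      print("Task $taskId failed with error code $errorCode");
      break;
    default:
      // onInitComplete, onUploadProgress and onEvent are reserved.
      break;
  }
});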

MLAsrRecognizer

Automatically recognizes speech.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<String> | startRecognizing(MLAsrSetting setting) | Starts recognition without the pickup UI. |
| Future<String> | startRecognizingWithUi(MLAsrSetting setting) | Starts recognition with the pickup UI. |
| Future<bool> | stopRecognition() | Stops the recognition. |
| void | setListener(MLAsrListener listener) | Sets the listener callback function of the recognizer to receive the recognition result. |

Methods

startRecognizing(MLAsrSetting setting)

Starts recognition without the pickup UI.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLAsrSetting | Configurations for recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<String> | Returns the recognition result on a successful operation. |

Call Example

MLAsrRecognizer recognizer = new MLAsrRecognizer();
final setting = new MLAsrSetting();
setting.language = MLAsrSetting.LAN_EN_US;
setting.feature = MLAsrSetting.FEATURE_WORD_FLUX;

String result = await recognizer.startRecognizing(setting);

startRecognizingWithUi(MLAsrSetting setting)

Starts recognition with the pickup UI.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLAsrSetting | Configurations for recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<String> | Returns the recognition result on a successful operation. |

Call Example

MLAsrRecognizer recognizer = new MLAsrRecognizer();
final setting = new MLAsrSetting();
setting.language = MLAsrSetting.LAN_EN_US;
setting.feature = MLAsrSetting.FEATURE_WORD_FLUX;

String result = await recognizer.startRecognizingWithUi(setting);

stopRecognition()

Stops the speech recognition.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

await recognizer.stopRecognition();

setListener(MLAsrListener listener)

Sets the listener callback function of the recognizer to receive the recognition result.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| listener | MLAsrListener | Listener for speech recognition. |

Call Example

recognizer.setListener((event, info) {
  // Your implementation here
});

Data Types

MLAsrListener

A function type defined for listening to speech recognition events.

| Definition | Description |
| --- | --- |
| void MLAsrListener(MLAsrEvent event, dynamic info) | Listener for speech recognition. |

Parameters

| Name | Type | Description |
| --- | --- | --- |
| event | MLAsrEvent | ASR event. |
| info | dynamic | All event information. |

enum MLAsrEvent

Enumerated object that represents the events of speech recognition.

| Value | Description |
| --- | --- |
| onState | Called when the recognizer status changes. |
| onStartListening | Called when the recorder starts to receive speech. |
| onStartingOfSpeech | Called when a user starts to speak, that is, the speech recognizer detects that the user starts to speak. |
| onVoiceDataReceived | Returns the original PCM stream and audio power to the user. |
| onRecognizingResults | When the speech recognition mode is set to MLAsrSetting.FEATURE_WORD_FLUX, the speech recognizer continuously returns the speech recognition result through this event. |
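
Call Example

A minimal handling sketch based on the listener signature above; the handler bodies are illustrative only.

recognizer.setListener((event, info) {
  switch (event) {
    case MLAsrEvent.onStartListening:
      // The recorder has started to receive speech.
      break;
    case MLAsrEvent.onRecognizingResults:
      // Continuous results arrive here when FEATURE_WORD_FLUX is set.
      print("Partial result: $info");
      break;
    default:
      break;
  }
});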

MLBankcardAnalyzer

Contains bank card recognition plug-in APIs.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<MLBankcard> | analyzeBankcard(MLBankcardSettings settings) | Recognizes a bankcard from a local image. |
| Future<MLBankcard> | captureBankcard(MLBankcardSettings settings) | Recognizes a bankcard with a capture activity. |
| Future<bool> | stopBankcardAnalyzer() | Stops bankcard recognition. |

Methods

analyzeBankcard(MLBankcardSettings settings)

Recognizes a bankcard from a local image.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| settings | MLBankcardSettings | Configurations for recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLBankcard> | Returns the object on a successful operation, throws PlatformException otherwise. |

Call Example

MLBankcardAnalyzer analyzer = new MLBankcardAnalyzer();
MLBankcardSettings settings = new MLBankcardSettings();
settings.path = "local image path";

MLBankcard card = await analyzer.analyzeBankcard(settings);

captureBankcard(MLBankcardSettings settings)

Recognizes a bankcard with a capture activity.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| settings | MLBankcardSettings | Configurations for recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLBankcard> | Returns the object on a successful operation. |

Call Example

MLBankcardAnalyzer analyzer = new MLBankcardAnalyzer();
MLBankcardSettings settings = new MLBankcardSettings();
settings.orientation = MLBankcardSettings.ORIENTATION_AUTO;

MLBankcard card = await analyzer.captureBankcard(settings: settings);

stopBankcardAnalyzer()

Stops the bankcard recognition.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

await analyzer.stopBankcardAnalyzer();

MLClassificationAnalyzer

This package represents the image classification SDK. It contains image classification classes and APIs.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<List<MLImageClassification>> | asyncAnalyzeFrame(MLClassificationAnalyzerSetting setting) | Analyzes asynchronously. |
| Future<List<MLImageClassification>> | analyzeFrame(MLClassificationAnalyzerSetting setting) | Analyzes synchronously. |
| Future<int> | getAnalyzerType() | Gets the analyzer type. |
| Future<bool> | stopClassification() | Stops the classification. |

Methods

asyncAnalyzeFrame(MLClassificationAnalyzerSetting setting)

Does classification asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLClassificationAnalyzerSetting | Configurations for classification. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLImageClassification>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLClassificationAnalyzer analyzer = new MLClassificationAnalyzer();
MLClassificationAnalyzerSetting setting = new MLClassificationAnalyzerSetting();
setting.path = "local image path";

List<MLImageClassification> list = await analyzer.asyncAnalyzeFrame(setting);

analyzeFrame(MLClassificationAnalyzerSetting setting)

Does classification synchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLClassificationAnalyzerSetting | Configurations for classification. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLImageClassification>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLClassificationAnalyzer analyzer = new MLClassificationAnalyzer();
MLClassificationAnalyzerSetting setting = new MLClassificationAnalyzerSetting();
setting.path = "local image path";

List<MLImageClassification> list = await analyzer.analyzeFrame(setting);

getAnalyzerType()

Gets the analyzer type.

Return Type

| Type | Description |
| --- | --- |
| Future<int> | Returns the type on success, throws PlatformException otherwise. |

Call Example

int result = await analyzer.getAnalyzerType();

stopClassification()

Stops the classification.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stopClassification();

MLCustomModel

Allows executing a custom AI model.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<bool> | prepareCustomModel(MLCustomModelSetting setting) | Prepares the custom model executor. |
| Future<Map<dynamic, dynamic>> | executeCustomModel() | Performs inference using input and output configurations and content. |
| Future<int> | getOutputIndex(String name) | Obtains the channel index based on the output channel name. |
| Future<bool> | stopExecutor() | Stops an inference task to release resources. |

Methods

prepareCustomModel(MLCustomModelSetting setting)

Prepares the custom model executor.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLCustomModelSetting | Configurations for custom model execution. |

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

MLCustomModel customModel = new MLCustomModel();
MLCustomModelSetting setting = new MLCustomModelSetting();

setting.path = "local image path";
setting.modelName = "custom model name";
setting.labelFileName = "label file name";
setting.assetPathFile = "path of the custom model";

bool result = await customModel.prepareCustomModel(setting: setting);

executeCustomModel()

Performs inference using input and output configurations and content.

Return Type

| Type | Description |
| --- | --- |
| Future<Map<dynamic, dynamic>> | Returns the execution result on success, throws PlatformException otherwise. |

Call Example

Map<dynamic, dynamic> result = await customModel.executeCustomModel();

getOutputIndex(String name)

Obtains the channel index based on the output channel name.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| name | String | Output channel name. |

Return Type

| Type | Description |
| --- | --- |
| Future<int> | Returns the output index on success, throws PlatformException otherwise. |

Call Example

int result = await customModel.getOutputIndex("channel name");

stopExecutor()

Stops an inference task to release resources.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await customModel.stopExecutor();

Data Types

MLCustomModelSetting

Constants

| Constant | Type | Description |
| --- | --- | --- |
| FLOAT32 | int | Model data type. |
| INT32 | int | Model data type. |
| REGION_DR_CHINA | int | China. |
| REGION_DR_AFILA | int | Africa, America. |
| REGION_DR_EUROPE | int | Europe. |
| REGION_DR_RUSSIA | int | Russia. |

Properties

| Name | Type | Description |
| --- | --- | --- |
| imagePath | String | Local image path. null by default. |
| modelName | String | Custom model name. null by default. |
| modelDataType | int | Model data type. 1 by default. |
| assetPathFile | String | Custom model path in the assets. null by default. |
| localFullPathFile | String | Custom model path on the local device. null by default. |
| isFromAsset | bool | Creates the executor depending on the custom model file path. true by default. |
| labelFileName | String | Label file name. null by default. |
| bitmapSize | int | Sets the bitmap size used during execution. 224 by default. |
| channelSize | int | Sets the channel size used during execution. 3 by default. |
| outputLength | int | Number of categories supported by your model. 1001 by default. |
| region | int | Region. 1002 by default. |
| needWifi | bool | Set to true if Wi-Fi is required for downloading the model. true by default. |
| needCharging | bool | Set to true if charging is required for downloading the model. false by default. |
| needDeviceIdle | bool | Set to true if the device must be idle for downloading the model. false by default. |
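
Call Example

A configuration sketch based on the properties and constants above; the model, asset and label file names are placeholders.

MLCustomModelSetting setting = new MLCustomModelSetting();
setting.modelName = "my_model";          // placeholder model name
setting.assetPathFile = "my_model.ms";   // placeholder asset path
setting.labelFileName = "labels.txt";    // placeholder label file
setting.isFromAsset = true;              // load the model from the assets
setting.bitmapSize = 224;                // input bitmap size
setting.channelSize = 3;                 // RGB channels
setting.outputLength = 1001;             // categories supported by the model
setting.region = MLCustomModelSetting.REGION_DR_EUROPE;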

MLDocumentAnalyzer

Provides a document recognition component that recognizes text from images of documents.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<MLDocument> | asyncAnalyzeFrame(MLDocumentAnalyzerSetting setting) | Analyzes the image asynchronously. |
| Future<bool> | closeDocumentAnalyzer() | Closes the document analyzer. |
| Future<bool> | stopDocumentAnalyzer() | Stops the document analyzer. |

Methods

asyncAnalyzeFrame(MLDocumentAnalyzerSetting setting)

Analyzes the image asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLDocumentAnalyzerSetting | Configurations for document recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLDocument> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLDocumentAnalyzerSetting setting = new MLDocumentAnalyzerSetting();
MLDocumentAnalyzer analyzer = new MLDocumentAnalyzer();

setting.path = "local image path";

MLDocument document = await analyzer.asyncAnalyzeFrame(setting);

closeDocumentAnalyzer()

Closes the document analyzer.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.closeDocumentAnalyzer();

stopDocumentAnalyzer()

Stops the document analyzer.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stopDocumentAnalyzer();

MLDocumentSkewCorrectionAnalyzer

Allows detecting and correcting document skew in images.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<MLDocumentSkewDetectResult> | asyncDocumentSkewDetect(String imagePath) | Detects document skew asynchronously. |
| Future<MLDocumentSkewCorrectionResult> | asyncDocumentSkewResult() | Gets the corrected document result asynchronously. |
| Future<MLDocumentSkewDetectResult> | syncDocumentSkewDetect(String imagePath) | Detects document skew synchronously. |
| Future<MLDocumentSkewCorrectionResult> | syncDocumentSkewResult() | Gets the corrected document result synchronously. |
| Future<bool> | stopDocumentSkewCorrection() | Stops the document skew detection & correction. |

Methods

asyncDocumentSkewDetect(String imagePath)

Detects document skew asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| imagePath | String | Local image path. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLDocumentSkewDetectResult> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLDocumentSkewCorrectionAnalyzer analyzer = new MLDocumentSkewCorrectionAnalyzer();

MLDocumentSkewDetectResult detectionResult = await analyzer.asyncDocumentSkewDetect("local image path");

asyncDocumentSkewResult()

Gets the corrected document result asynchronously.

Return Type

| Type | Description |
| --- | --- |
| Future<MLDocumentSkewCorrectionResult> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLDocumentSkewCorrectionResult corrected = await analyzer.asyncDocumentSkewResult();

syncDocumentSkewDetect(String imagePath)

Detects document skew synchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| imagePath | String | Local image path. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLDocumentSkewDetectResult> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLDocumentSkewCorrectionAnalyzer analyzer = new MLDocumentSkewCorrectionAnalyzer();

MLDocumentSkewDetectResult detectionResult = await analyzer.syncDocumentSkewDetect("local image path");

syncDocumentSkewResult()

Gets the corrected document result synchronously.

Return Type

| Type | Description |
| --- | --- |
| Future<MLDocumentSkewCorrectionResult> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLDocumentSkewCorrectionResult corrected = await analyzer.syncDocumentSkewResult();

stopDocumentSkewCorrection()

Stops the document skew detection & correction.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws Exception otherwise. |

Call Example

bool result = await analyzer.stopDocumentSkewCorrection();

MLFaceAnalyzer

Serves as the face detection SDK. It contains face detection classes and APIs.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<List<MLFace>> | asyncAnalyzeFrame(MLFaceAnalyzerSetting setting) | Recognizes the face asynchronously. |
| Future<List<MLFace>> | analyzeFrame(MLFaceAnalyzerSetting setting) | Recognizes the face synchronously. |
| Future<bool> | isAvailable() | Checks whether the face analyzer is available. |
| Future<bool> | stop() | Stops the face analyzer. |

Methods

asyncAnalyzeFrame(MLFaceAnalyzerSetting setting)

Recognizes the face asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLFaceAnalyzerSetting | Configurations for face recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLFace>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLFaceAnalyzer analyzer = new MLFaceAnalyzer();
MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting();

setting.path = "local image path";

List<MLFace> faces = await analyzer.asyncAnalyzeFrame(setting);

analyzeFrame(MLFaceAnalyzerSetting setting)

Recognizes the face synchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLFaceAnalyzerSetting | Configurations for face recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLFace>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLFaceAnalyzer analyzer = new MLFaceAnalyzer();
MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting();

setting.path = "local image path";

List<MLFace> faces = await analyzer.analyzeFrame(setting);

isAvailable()

Checks whether the face analyzer is available.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws Exception otherwise. |

Call Example

bool result = await analyzer.isAvailable();

stop()

Stops the face analyzer.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stop();

Data Types

MLFaceAnalyzerSetting

Configurations for the face analyzer.

Constants

| Constant | Type | Description |
| --- | --- | --- |
| TYPE_FEATURES | int | Detects all facial features and expressions. |
| TYPE_FEATURE_AGE | int | Detects the age. |
| TYPE_FEATURE_BEARD | int | Detects whether a person has a beard. |
| TYPE_FEATURE_EMOTION | int | Detects facial expressions. |
| TYPE_FEATURE_EYEGLASS | int | Detects whether a person wears glasses. |
| TYPE_FEATURE_GENDER | int | Detects the gender. |
| TYPE_FEATURE_HAT | int | Detects whether a person wears a hat. |
| TYPE_FEATURE_OPEN_CLOSE_EYE | int | Detects eye opening and eye closing. |
| TYPE_UNSUPPORTED_FEATURES | int | Detects only basic data, including contours, key points, and three-dimensional rotation angles; does not detect facial features or expressions. |
| TYPE_KEY_POINTS | int | Detects key face points. |
| TYPE_UNSUPPORTED_KEY_POINTS | int | Does not detect key face points. |
| TYPE_PRECISION | int | Precision preference mode. This mode will detect more faces and be more precise in detecting key points and contours, but will run slower. |
| TYPE_SPEED | int | Speed preference mode. This mode will detect fewer faces and be less precise in detecting key points and contours, but will run faster. |
| TYPE_SHAPES | int | Detects facial contours. |
| TYPE_UNSUPPORTED_SHAPES | int | Does not detect facial contours. |
| MODE_TRACING_ROBUST | int | Common tracking mode. In this mode, initial detection is fast, but the performance of detection during tracking will be affected by face re-detection every several frames. The detection result in this mode is stable. |
| MODE_TRACING_FAST | int | Fast tracking mode. In this mode, detection and tracking are performed at the same time. Initial detection has a delay, but the detection during tracking is fast. When used together with the speed preference mode, this mode can make the greatest improvements to the detection performance. |

Properties

| Name | Type | Description |
| --- | --- | --- |
| path | String | Local image path. null by default. |
| frameType | MLFrameType | Recognition frame type. MLFrameType.fromBitmap by default. You are advised to use it this way. |
| property | MLFrameProperty | Recognition property type. |
| featureType | int | Sets the mode for an analyzer to detect facial features and expressions. 1 by default. |
| keyPointType | int | Sets the mode for an analyzer to detect key face points. 1 by default. |
| maxSizeFaceOnly | bool | Sets whether to detect only the largest face in an image. true by default. |
| minFaceProportion | double | Sets the smallest proportion (range: 0.0-1.0) of a face in an image. 0.5 by default. |
| performanceType | int | Sets the preference mode of an analyzer. 1 by default. |
| poseDisabled | bool | Sets whether to disable pose detection. false by default. |
| shapeType | int | Sets the mode for an analyzer to detect facial contours. 2 by default. |
| tracingAllowed | bool | Sets whether to enable face tracking. false by default. |
| tracingMode | int | Sets the tracing mode. 2 by default. |
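
Call Example

A configuration sketch based on the constants and properties above.

MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting();
setting.path = "local image path";
setting.featureType = MLFaceAnalyzerSetting.TYPE_FEATURES;      // all features and expressions
setting.keyPointType = MLFaceAnalyzerSetting.TYPE_KEY_POINTS;   // detect key face points
setting.shapeType = MLFaceAnalyzerSetting.TYPE_SHAPES;          // detect facial contours
setting.performanceType = MLFaceAnalyzerSetting.TYPE_PRECISION; // precision preference
setting.maxSizeFaceOnly = false;  // detect all faces, not only the largest
setting.minFaceProportion = 0.2;  // smallest face proportion to detect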

ML3DFaceAnalyzer

Serves as the 3D face detection SDK. It contains face detection classes and APIs.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<List<ML3DFace>> | asyncAnalyzeFrame(ML3DFaceAnalyzerSetting setting) | Recognizes the face asynchronously. |
| Future<List<ML3DFace>> | analyzeFrame(ML3DFaceAnalyzerSetting setting) | Recognizes the face synchronously. |
| Future<bool> | isAvailable() | Checks whether the face analyzer is available. |
| Future<bool> | stop() | Stops the 3D face analyzer. |

Methods

asyncAnalyzeFrame(ML3DFaceAnalyzerSetting setting)

Recognizes the face asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | ML3DFaceAnalyzerSetting | Configurations for recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<ML3DFace>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

ML3DFaceAnalyzer analyzer = new ML3DFaceAnalyzer();
ML3DFaceAnalyzerSetting setting = new ML3DFaceAnalyzerSetting();

setting.path = "local image path";

List<ML3DFace> faces = await analyzer.asyncAnalyzeFrame(setting);

analyzeFrame(ML3DFaceAnalyzerSetting setting)

Recognizes the face synchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | ML3DFaceAnalyzerSetting | Configurations for recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<ML3DFace>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

ML3DFaceAnalyzer analyzer = new ML3DFaceAnalyzer();
ML3DFaceAnalyzerSetting setting = new ML3DFaceAnalyzerSetting();

setting.path = "local image path";

List<ML3DFace> faces = await analyzer.analyzeFrame(setting);

isAvailable()

Checks whether the face analyzer is available.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.isAvailable();

stop()

Stops the 3D face analyzer.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stop();

MLFormRecognitionAnalyzer

Allows recognizing text in forms.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<MLTable> | asyncFormDetection(String imagePath) | Recognizes the form content asynchronously. |
| Future<MLTable> | syncFormDetection(String imagePath) | Recognizes the form content synchronously. |
| Future<bool> | stopFormRecognition() | Stops the form recognition. |

Methods

asyncFormDetection(String imagePath)

Recognizes the form content asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| imagePath | String | Local image path. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLTable> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLFormRecognitionAnalyzer analyzer = new MLFormRecognitionAnalyzer();

MLTable table = await analyzer.asyncFormDetection("local image path");

syncFormDetection(String imagePath)

Recognizes the form content synchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| imagePath | String | Local image path. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLTable> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLFormRecognitionAnalyzer analyzer = new MLFormRecognitionAnalyzer();

MLTable table = await analyzer.syncFormDetection("local image path");

stopFormRecognition()

Stops the form recognition.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stopFormRecognition();

MLGeneralCardAnalyzer

Provides on-device APIs for general card recognition.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<MLGeneralCard> | capturePreview(MLGeneralCardAnalyzerSetting setting) | Enables the plug-in for recognizing general cards in camera streams. |
| Future<MLGeneralCard> | capturePhoto(MLGeneralCardAnalyzerSetting setting) | Enables the plug-in for taking a photo of a general card and recognizing the general card on the photo. |
| Future<MLGeneralCard> | captureImage(MLGeneralCardAnalyzerSetting setting) | Enables the plug-in for recognizing static images of general cards. |

Methods

capturePreview(MLGeneralCardAnalyzerSetting setting)

Enables the plug-in for recognizing general cards in camera streams.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLGeneralCardAnalyzerSetting | Configurations for general card recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLGeneralCard> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLGeneralCardAnalyzer analyzer = new MLGeneralCardAnalyzer();
MLGeneralCardAnalyzerSetting setting = new MLGeneralCardAnalyzerSetting();

setting.scanBoxCornerColor = Colors.greenAccent;
setting.tipTextColor = Colors.black;
setting.tipText = "Hold still...";

MLGeneralCard card = await analyzer.capturePreview(setting);

capturePhoto(MLGeneralCardAnalyzerSetting setting)

Enables the plug-in for taking a photo of a general card and recognizing the general card on the photo.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLGeneralCardAnalyzerSetting | Configurations for general card recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLGeneralCard> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLGeneralCardAnalyzer analyzer = new MLGeneralCardAnalyzer();
MLGeneralCardAnalyzerSetting setting = new MLGeneralCardAnalyzerSetting();

setting.scanBoxCornerColor = Colors.greenAccent;
setting.tipTextColor = Colors.black;
setting.tipText = "Hold still...";

MLGeneralCard card = await analyzer.capturePhoto(setting);

captureImage(MLGeneralCardAnalyzerSetting setting)

Enables the plug-in for recognizing static images of general cards.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLGeneralCardAnalyzerSetting | Configurations for general card recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLGeneralCard> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLGeneralCardAnalyzer analyzer = new MLGeneralCardAnalyzer();
MLGeneralCardAnalyzerSetting setting = new MLGeneralCardAnalyzerSetting();

setting.path = "local image path";

MLGeneralCard card = await analyzer.captureImage(setting);

Data Types

MLGeneralCardAnalyzerSetting

Configuration class for general card recognition service.

Properties

| Name | Type | Description |
| --- | --- | --- |
| path | String | Local image path. null by default. |
| language | String | Recognition language. "zh" by default. |
| scanBoxCornerColor | Color | Scan box border color. Green by default. |
| tipTextColor | Color | Tip text color. White by default. |
| tipText | String | Tip text for the scanning process. "Recognizing.." by default. |
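
Call Example

A configuration sketch based on the properties above; the Color values come from the Flutter material library, and the tip text is a placeholder.

MLGeneralCardAnalyzerSetting setting = new MLGeneralCardAnalyzerSetting();
setting.language = "en";                   // recognition language
setting.scanBoxCornerColor = Colors.blue;  // scan box border color
setting.tipTextColor = Colors.white;       // tip text color
setting.tipText = "Align the card inside the box"; // placeholder tip text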

MLHandKeypointAnalyzer

Serves as the hand keypoint detection SDK, which contains hand keypoint detection classes and APIs.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<List<MLHandKeypoints>> | asyncHandDetection(MLHandKeypointAnalyzerSetting setting) | Recognizes the hand keypoints asynchronously. |
| Future<List<MLHandKeypoints>> | syncHandDetection(MLHandKeypointAnalyzerSetting setting) | Recognizes the hand keypoints synchronously. |
| Future<bool> | stopHandDetection() | Stops the hand keypoint analyzer. |

Methods

asyncHandDetection(MLHandKeypointAnalyzerSetting setting)

Recognizes the hand keypoints asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLHandKeypointAnalyzerSetting | Configurations for hand keypoint recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLHandKeypoints>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLHandKeypointAnalyzer analyzer = new MLHandKeypointAnalyzer();
MLHandKeypointAnalyzerSetting setting = new MLHandKeypointAnalyzerSetting();

setting.path = "local image path";

List<MLHandKeypoints> list = await analyzer.asyncHandDetection(setting);

syncHandDetection(MLHandKeypointAnalyzerSetting setting)

Recognizes the hand keypoints synchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLHandKeypointAnalyzerSetting | Configurations for hand keypoint recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLHandKeypoints>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLHandKeypointAnalyzer analyzer = new MLHandKeypointAnalyzer();
MLHandKeypointAnalyzerSetting setting = new MLHandKeypointAnalyzerSetting();

setting.path = "local image path";

List<MLHandKeypoints> list = await analyzer.syncHandDetection(setting);

stopHandDetection()

Stops the hand keypoint analyzer.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stopHandDetection();

MLImageSuperResolutionAnalyzer

This package represents the image super-resolution SDK. It contains image super-resolution classes and APIs.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<MLImageSuperResolutionResult> | asyncImageResolution(MLImageSuperResolutionAnalyzerSetting setting) | Performs super-resolution processing on the source image using the asynchronous method. |
| Future<List<MLImageSuperResolutionResult>> | syncImageResolution(MLImageSuperResolutionAnalyzerSetting setting) | Performs super-resolution processing on the source image using the synchronous method. |
| Future<bool> | stopImageSuperResolution() | Releases resources used by an analyzer. |

Methods

asyncImageResolution(MLImageSuperResolutionAnalyzerSetting setting)

Performs super-resolution processing on the source image using the asynchronous method.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLImageSuperResolutionAnalyzerSetting | Configurations for the recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLImageSuperResolutionResult> | Returns the resolution result on success, throws PlatformException otherwise. |

Call Example

MLImageSuperResolutionAnalyzer analyzer = new MLImageSuperResolutionAnalyzer();
MLImageSuperResolutionAnalyzerSetting setting = new MLImageSuperResolutionAnalyzerSetting();

setting.path = "local image path";
setting.scale = MLImageSuperResolutionAnalyzerSetting.ISR_SCALE_1X;

MLImageSuperResolutionResult result = await analyzer.asyncImageResolution(setting);

syncImageResolution(MLImageSuperResolutionAnalyzerSetting setting)

Performs super-resolution processing on the source image using the synchronous method.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLImageSuperResolutionAnalyzerSetting | Configurations for the recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLImageSuperResolutionResult>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLImageSuperResolutionAnalyzer analyzer = new MLImageSuperResolutionAnalyzer();
MLImageSuperResolutionAnalyzerSetting setting = new MLImageSuperResolutionAnalyzerSetting();

setting.path = "local image path";
setting.scale = MLImageSuperResolutionAnalyzerSetting.ISR_SCALE_1X;

List<MLImageSuperResolutionResult> result = await analyzer.syncImageResolution(setting);

stopImageSuperResolution()

Releases resources used by an analyzer.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stopImageSuperResolution();

MLImageSegmentationAnalyzer

Provides the image segmentation SDK. It contains image segmentation classes and APIs.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<MLImageSegmentation> | asyncAnalyzeFrame(MLImageSegmentationAnalyzerSetting setting) | Implements image segmentation in asynchronous mode. |
| Future<List<MLImageSegmentation>> | analyzeFrame(MLImageSegmentationAnalyzerSetting setting) | Implements image segmentation in synchronous mode. |
| Future<bool> | stopSegmentation() | Releases resources, including input and output streams and loaded model files. |

Methods

asyncAnalyzeFrame(MLImageSegmentationAnalyzerSetting setting)

Implements image segmentation in asynchronous mode.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLImageSegmentationAnalyzerSetting | Configurations for image segmentation. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLImageSegmentation> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLImageSegmentationAnalyzer analyzer = new MLImageSegmentationAnalyzer();
MLImageSegmentationAnalyzerSetting setting = new MLImageSegmentationAnalyzerSetting();

setting.path = "local image path";
setting.analyzerType = MLImageSegmentationAnalyzerSetting.BODY_SEG;
setting.scene = MLImageSegmentationAnalyzerSetting.ALL;

MLImageSegmentation segmentation = await analyzer.asyncAnalyzeFrame(setting);

analyzeFrame(MLImageSegmentationAnalyzerSetting setting)

Implements image segmentation in synchronous mode.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLImageSegmentationAnalyzerSetting | Configurations for image segmentation. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLImageSegmentation>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLImageSegmentationAnalyzer analyzer = new MLImageSegmentationAnalyzer();
MLImageSegmentationAnalyzerSetting setting = new MLImageSegmentationAnalyzerSetting();

setting.path = "local image path";
setting.analyzerType = MLImageSegmentationAnalyzerSetting.BODY_SEG;
setting.scene = MLImageSegmentationAnalyzerSetting.ALL;

List<MLImageSegmentation> list = await analyzer.analyzeFrame(setting);

stopSegmentation()

Releases resources, including input and output streams and loaded model files.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stopSegmentation();

Data Types

MLImageSegmentationAnalyzerSetting

Configuration class for image segmentation service.

Properties

| Name | Type | Description |
| --- | --- | --- |
| path | String | Local image path. null by default. |
| frameType | MLFrameType | Recognition frame type. MLFrameType.fromBitmap by default. You are advised to use it this way. |
| property | MLFrameProperty | Recognition property type. |
| analyzerType | int | Sets the classification mode. For static image segmentation, MLImageSegmentationAnalyzerSetting.BODY_SEG (only the human body and background) and MLImageSegmentationAnalyzerSetting.IMAGE_SEG (11 categories, including the human body) can be set. |
| scene | int | Sets the type of the returned result. This setting takes effect only in MLImageSegmentationAnalyzerSetting.BODY_SEG mode. In MLImageSegmentationAnalyzerSetting.IMAGE_SEG mode, only the pixel-level label information is returned. The options are as follows: MLImageSegmentationAnalyzerSetting.ALL (return all segmentation results, including the pixel-level label information, the human body image with a transparent background, and the gray-scale image with a white human body and black background), MLImageSegmentationAnalyzerSetting.MASK_ONLY (return only the pixel-level label information), MLImageSegmentationAnalyzerSetting.FOREGROUND_ONLY (return only the human body image with a transparent background), and MLImageSegmentationAnalyzerSetting.GRAYSCALE_ONLY (return only the gray-scale image with a white human body and black background). |
| exactMode | bool | Determines whether to support fine detection. true by default. |
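
Call Example

A configuration sketch based on the properties above.

MLImageSegmentationAnalyzerSetting setting = new MLImageSegmentationAnalyzerSetting();
setting.path = "local image path";
setting.analyzerType = MLImageSegmentationAnalyzerSetting.BODY_SEG;
// Return only the human body image with a transparent background.
setting.scene = MLImageSegmentationAnalyzerSetting.FOREGROUND_ONLY;
setting.exactMode = true; // fine detection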

MLLandmarkAnalyzer

Implements image landmark detection of HUAWEI ML Kit.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<List<MLLandmark>> | asyncAnalyzeFrame(MLLandmarkAnalyzerSetting setting) | Detects landmarks in a supplied image. |
| Future<bool> | stopLandmarkDetection() | Releases resources, including input and output streams. |

Methods

asyncAnalyzeFrame(MLLandmarkAnalyzerSetting setting)

Detects landmarks in a supplied image asynchronously.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLLandmarkAnalyzerSetting | Configurations for landmark recognition. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLLandmark>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLLandmarkAnalyzer analyzer = new MLLandmarkAnalyzer();
MLLandmarkAnalyzerSetting setting = new MLLandmarkAnalyzerSetting();

setting.path = "local image path";
setting.largestNumberOfReturns = 8;

List<MLLandmark> list = await analyzer.asyncAnalyzeFrame(setting);

stopLandmarkDetection()

Releases resources, including input and output streams.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stopLandmarkDetection();

MLLangDetector

Detects languages.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<String> | firstBestDetect(MLLangDetectorSetting setting) | Returns the language detection result with the highest confidence based on the supplied text. |
| Future<String> | syncFirstBestDetect(MLLangDetectorSetting setting) | Synchronously returns the language detection result with the highest confidence based on the supplied text. |
| Future<List<MLDetectedLang>> | probabilityDetect(MLLangDetectorSetting setting) | Returns multi-language detection results based on the supplied text. |
| Future<List<MLDetectedLang>> | syncProbabilityDetect(MLLangDetectorSetting setting) | Synchronously returns multi-language detection results based on the supplied text. |
| Future<bool> | stop() | Releases resources, including input and output streams. |

Methods

firstBestDetect(MLLangDetectorSetting setting)

Returns the language detection result with the highest confidence based on the supplied text.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLLangDetectorSetting | Configurations for the language detection service. |

Return Type

| Type | Description |
| --- | --- |
| Future<String> | Returns the language code on success, throws PlatformException otherwise. |

Call Example

MLLangDetector detector = new MLLangDetector();
MLLangDetectorSetting setting = new MLLangDetectorSetting();

setting.sourceText = "source text";
setting.isRemote = true;

String result = await detector.firstBestDetect(setting: setting);

syncFirstBestDetect(MLLangDetectorSetting setting)

Synchronously returns the language detection result with the highest confidence based on the supplied text.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLLangDetectorSetting | Configurations for the language detection service. |

Return Type

| Type | Description |
| --- | --- |
| Future<String> | Returns the language code on success, throws PlatformException otherwise. |

Call Example

MLLangDetector detector = new MLLangDetector();
MLLangDetectorSetting setting = new MLLangDetectorSetting();

setting.sourceText = "source text";
setting.isRemote = true;

String result = await detector.syncFirstBestDetect(setting: setting);

probabilityDetect(MLLangDetectorSetting setting)

Returns multi-language detection results based on the supplied text.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLLangDetectorSetting | Configurations for the language detection service. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLDetectedLang>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLLangDetector detector = new MLLangDetector();
MLLangDetectorSetting setting = new MLLangDetectorSetting();

setting.sourceText = "source text";
setting.isRemote = true;

List<MLDetectedLang> list = await detector.probabilityDetect(setting: setting);

syncProbabilityDetect(MLLangDetectorSetting setting)

Synchronously returns multi-language detection results based on the supplied text.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| setting | MLLangDetectorSetting | Configurations for the language detection service. |

Return Type

| Type | Description |
| --- | --- |
| Future<List<MLDetectedLang>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLLangDetector detector = new MLLangDetector();
MLLangDetectorSetting setting = new MLLangDetectorSetting();

setting.sourceText = "source text";
setting.isRemote = true;

List<MLDetectedLang> list = await detector.syncProbabilityDetect(setting: setting);

stop()

Releases resources, including input and output streams.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await detector.stop();

LensEngine

A class that encapsulates camera initialization, frame obtaining, and logic control functions.

Constructor Summary

| Constructor | Description |
| --- | --- |
| LensEngine(LensViewController controller) | Requires a controller to initialize a texture for camera preview and build the lens engine with required parameters. |

Constructors

LensEngine(LensViewController controller)

Requires a controller to initialize a texture for camera preview and build the lens engine with required parameters.

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<void> | initLens() | Initializes the surface texture for the camera preview. |
| Future<void> | run() | Runs the lens engine and starts live detection. |
| Future<bool> | release() | Releases resources occupied by LensEngine. |
| Future<String> | photograph() | Captures an image during live detection. |
| Future<void> | zoom(double z) | Adjusts the focal length of the camera based on the scaling coefficient (digital zoom). |
| Future<bool> | getLens() | Checks if the engine has a usable camera instance. |
| Future<int> | getLensType() | Obtains the lens type that is being used during live detection. |
| Future<Size> | getDisplayDimension() | Obtains the size of the preview image of a camera. |
| Future<void> | switchCamera() | Switches between the front and back lenses. |
| void | setTransactor(LensTransactor transactor) | Sets a listener for detection events. |

Methods

initLens()

Initializes the surface texture for the camera preview.

Return Type

| Type | Description |
| --- | --- |
| Future<void> | Future result of an execution that returns no value. |

Call Example

final LensViewController controller = new LensViewController(
    lensType: LensViewController.BACK_LENS,
    analyzerType: LensEngineAnalyzerOptions.FACE
);

LensEngine lensEngine = new LensEngine(controller: controller);

await lensEngine.initLens();
setState(() {});

run()

Starts the LensEngine and uses SurfaceTexture as the frame preview panel. A frame preview panel is used to preview images and display detection results.

Return Type

| Type | Description |
| --- | --- |
| Future<void> | Future result of an execution that returns no value. |

Call Example

await lensEngine.run();

release()

Releases resources occupied by LensEngine.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await lensEngine.release();

photograph()

Captures an image during live detection.

Return Type

| Type | Description |
| --- | --- |
| Future<String> | Returns the captured image path on success. |

Call Example

String result = await lensEngine.photograph();

zoom(double z)

Adjusts the focal length of the camera based on the scaling coefficient (digital zoom).

Parameters

| Name | Type | Description |
| --- | --- | --- |
| z | double | Scaling coefficient. If the scaling coefficient is greater than 1.0, the focal length is calculated as follows: maximum focal length supported by the camera x 1/10 x scaling coefficient. For example, with a maximum focal length of 10 and z = 1.5, the resulting focal length is 10 x 1/10 x 1.5 = 1.5. If the scaling coefficient is 1.0, the focal length does not change. If the scaling coefficient is less than 1.0, the focal length equals the current focal length multiplied by the scaling coefficient. |

Return Type

| Type | Description |
| --- | --- |
| Future<void> | Future result of an execution that returns no value. |

Call Example

await lensEngine.zoom(1.5);

getLens()

Checks if the engine has a usable camera instance.

Return Type

| Type | Description |
| --- | --- |
| Future<bool> | Returns true or false depending on the obtained camera instance. |

Call Example

bool result = await lensEngine.getLens();

getLensType()

Obtains the lens type that is being used during live detection.

Return Type

| Type | Description |
| --- | --- |
| Future<int> | Returns the lens type on success, throws PlatformException otherwise. |

Call Example

int type = await lensEngine.getLensType();

getDisplayDimension()

Obtains the size of the preview image of a camera.

Return Type

| Type | Description |
| --- | --- |
| Future<Size> | Returns the display dimension on success, throws PlatformException otherwise. |

Call Example

Size size = await lensEngine.getDisplayDimension();

switchCamera()

Switches between front and back lenses.

Return Type

| Type | Description |
| --- | --- |
| Future<void> | Future result of an execution that returns no value. |

Call Example

await lensEngine.switchCamera();

setTransactor(LensTransactor transactor)

Sets a listener for detection events.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| transactor | LensTransactor | Listener for live detections. |

Call Example

lensEngine.setTransactor(({isAnalyzerAvailable, result}) {
    // Your implementation here
});

Data Types

LensViewController

Configuration class for live detections.

Constants

| Constant | Type | Description |
| --- | --- | --- |
| BACK_LENS | int | Back lens. |
| FRONT_LENS | int | Front lens. |
| FLASH_MODE_OFF | String | Turns off the flash. |
| FLASH_MODE_AUTO | String | Automatically determines whether to turn on the flash. |
| FLASH_MODE_ON | String | Turns on the flash. |
| FOCUS_MODE_CONTINUOUS_VIDEO | String | Continuous video focus mode. |
| FOCUS_MODE_CONTINUOUS_PICTURE | String | Continuous image focus mode. |

Properties

| Name | Type | Description |
| --- | --- | --- |
| lensType | int | Sets the lens type. 0 by default. |
| analyzerType | LensEngineAnalyzerOptions | Sets the analyzer type that will be used during live detection. |
| applyFps | double | Sets the preview frame rate (FPS) of a camera. The preview frame rate of a camera depends on the firmware capability of the camera. 30.0 by default. |
| dimensionWidth | int | Sets the width of the preview image of a camera. 1440 by default. |
| dimensionHeight | int | Sets the height of the preview image of a camera. 1080 by default. |
| flashMode | String | Sets the flash mode for a camera. "auto" by default. |
| focusMode | String | Sets the focus mode for a camera. "continuous-video" by default. |
| automaticFocus | bool | Enables or disables the automatic focus function for a camera. true by default. |
| maxFrameLostCount | int | Sets the maximum number of frames for determining that a face disappears. This option is only used with LensEngineAnalyzerOptions.MAX_SIZE_FACE. |
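
Call Example

A configuration sketch based on the properties above, assuming they are all named constructor parameters like lensType and analyzerType in the earlier LensEngine example.

final LensViewController controller = new LensViewController(
    lensType: LensViewController.BACK_LENS,
    analyzerType: LensEngineAnalyzerOptions.SKELETON,
    applyFps: 20.0,
    dimensionWidth: 1280,
    dimensionHeight: 720,
    flashMode: LensViewController.FLASH_MODE_AUTO,
    focusMode: LensViewController.FOCUS_MODE_CONTINUOUS_VIDEO,
);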

LensTransactor

A function type defined for listening to live detection events.

| Definition | Description |
| --- | --- |
| void LensTransactor({dynamic result, bool isAnalyzerAvailable}) | Live detection event listener. |

Parameters

| Name | Type | Description |
| --- | --- | --- |
| result | dynamic | Live detection result. Varies with different types of analysis. |
| isAnalyzerAvailable | bool | Obtains the status of the given analyzer type. |

enum LensEngineAnalyzerOptions

Enumerated object that represents the analyzer type that will be used during live detection.

| Value | Description |
| --- | --- |
| FACE | Lens engine will detect with the face analyzer. |
| FACE_3D | Lens engine will detect with the 3D face analyzer. |
| MAX_SIZE_FACE | Lens engine will detect with the max size face transactor. |
| HAND | Lens engine will detect with the hand keypoint analyzer. |
| SKELETON | Lens engine will detect with the skeleton analyzer. |
| CLASSIFICATION | Lens engine will detect with the classification analyzer. |
| TEXT | Lens engine will detect with the text analyzer. |
| OBJECT | Lens engine will detect with the object analyzer. |
| SCENE | Lens engine will detect with the scene analyzer. |

LensView

A special widget that allows carrying out live detections.

Constructor Summary

| Constructor | Description |
| --- | --- |
| LensView(LensViewController controller, double width, double height) | Requires a controller which has a texture ID that will be used for the camera preview. |

Constructors

LensView(LensViewController controller, double width, double height)

Requires a controller which has a texture ID that will be used for the camera preview. It also takes a width and a height so the view has a configurable size.
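
Call Example

A usage sketch, assuming named parameters matching the constructor summary above; the surrounding widget tree is illustrative.

@override
Widget build(BuildContext context) {
  return Scaffold(
    body: LensView(
      controller: controller, // LensViewController used with initLens()
      width: MediaQuery.of(context).size.width,
      height: MediaQuery.of(context).size.height,
    ),
  );
}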

MLLivenessCapture

Constants

| Constant | Type | Description |
| --- | --- | --- |
| CAMERA_NO_PERMISSION | int | The camera permission is not obtained. |
| CAMERA_START_FAILED | int | Failed to start the camera. |
| DETECT_FACE_TIME_OUT | int | The face detection module times out. (The duration does not exceed 2 minutes.) |
| USER_CANCEL | int | The operation is canceled by the user. |
| DETECT_MASK | int | Sets whether to detect the mask. |
| MASK_WAS_DETECTED | int | A mask is detected. |
| NO_FACE_WAS_DETECTED | int | No face is detected. |

Method Summary

| Return Type | Method | Description |
| --- | --- | --- |
| Future<MLLivenessCaptureResult> | startLivenessDetection(bool detectMask) | Starts a liveness detection activity. |

Methods

startLivenessDetection(bool detectMask)

Starts a liveness detection activity.

Parameters

| Name | Type | Description |
| --- | --- | --- |
| detectMask | bool | An optional parameter. true by default. The service considers the mask in detection when true. |

Return Type

| Type | Description |
| --- | --- |
| Future<MLLivenessCaptureResult> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLLivenessCapture livenessCapture = new MLLivenessCapture();

MLLivenessCaptureResult result = await livenessCapture.startLivenessDetection(detectMask: true);

MLApplication

An app information class used to store basic information about apps with the HMS Core ML SDK integrated and to complete the initialization of ML Kit. When using the cloud services of ML Kit, you need to set the apiKey of your app.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<void> | setApiKey(String apiKey) | Sets the API key for cloud services. |
| Future<void> | setAccessToken(String accessToken) | Sets the access token for cloud services. |
| Future<void> | enableLogger() | Enables the HMS plugin method analytics. |
| Future<void> | disableLogger() | Disables the HMS plugin method analytics. |

Methods

setApiKey(String apiKey)

Sets the API key for cloud services.

Parameters

| Name | Type | Description |
|------|------|-------------|
| apiKey | String | API key of an app. |

Return Type

| Type | Description |
|------|-------------|
| Future<void> | Future result of an execution that returns no value. |

Call Example

await MLApplication().setApiKey(apiKey: "your api key");

setAccessToken(String accessToken)

Sets the access token for cloud services.

Parameters

| Name | Type | Description |
|------|------|-------------|
| accessToken | String | Access token of an app. |

Return Type

| Type | Description |
|------|-------------|
| Future<void> | Future result of an execution that returns no value. |

Call Example

await MLApplication().setAccessToken(accessToken: "your access token");

enableLogger()

Enables HMS Plugin Method Analytics, which sends usage analytics of ML Kit SDK's methods to improve the service quality.

Return Type

| Type | Description |
|------|-------------|
| Future<void> | Future result of an execution that returns no value. |

Call Example

await MLApplication().enableLogger();

disableLogger()

Disables HMS Plugin Method Analytics, which sends usage analytics of ML Kit SDK's methods to improve the service quality.

Return Type

| Type | Description |
|------|-------------|
| Future<void> | Future result of an execution that returns no value. |

Call Example

await MLApplication().disableLogger();

MLFrame

A class that encapsulates video frame or static image data sourced from a camera as well as related data processing logic.

Constants

| Constant | Type | Description |
|----------|------|-------------|
| SCREEN_FIRST_QUADRANT | int | Landscape. |
| SCREEN_SECOND_QUADRANT | int | Portrait, which is 90 degrees clockwise from SCREEN_FIRST_QUADRANT. |
| SCREEN_THIRD_QUADRANT | int | Reverse landscape, which is 90 degrees clockwise from SCREEN_SECOND_QUADRANT. |
| SCREEN_FOURTH_QUADRANT | int | Reverse portrait, which is 90 degrees clockwise from SCREEN_THIRD_QUADRANT. |

Constructor Summary

| Constructor | Description |
|-------------|-------------|
| MLFrame(MLFrameProperty property) | Configures the request for image-related APIs. |

Constructors

MLFrame(MLFrameProperty property)

The property object holds some configurable request options that can be used with image-related services. By default this object is null in the image-related analyzer setting classes. You are advised not to use MLFrameProperty in image-related requests.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<String> | getPreviewBitmap() | Obtains the image from the last image-related analysis. |
| Future<String> | readBitmap() | Obtains the image from the last image-related analysis. |
| Future<String> | rotate(String path, int quadrant) | Rotates the given image and returns the result. |

Methods

getPreviewBitmap()

Obtains the image from the last image-related analysis.

Return Type

| Type | Description |
|------|-------------|
| Future<String> | Returns the image path on success, throws PlatformException otherwise. |

Call Example

String result = await MLFrame().getPreviewBitmap();

readBitmap()

Obtains the image from the last image-related analysis.

Return Type

| Type | Description |
|------|-------------|
| Future<String> | Returns the image path on success, throws PlatformException otherwise. |

Call Example

String result = await MLFrame().readBitmap();

rotate(String path, int quadrant)

Rotates the given image and returns the result.

Parameters

| Name | Type | Description |
|------|------|-------------|
| path | String | Local image path. |
| quadrant | int | Indicates the rotation degree. |

Return Type

| Type | Description |
|------|-------------|
| Future<String> | Returns the rotated image path on success, throws PlatformException otherwise. |

Call Example

String result = await MLFrame().rotate("local image path", MLFrame.SCREEN_SECOND_QUADRANT);

MLObjectAnalyzer

This package implements object detection and tracking of HUAWEI ML Kit.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<List<MLObject>> | asyncAnalyzeFrame(MLObjectAnalyzerSetting setting) | Analyzes the object asynchronously. |
| Future<List<MLObject>> | analyzeFrame(MLObjectAnalyzerSetting setting) | Analyzes the object synchronously. |
| Future<bool> | stopObjectDetection() | Stops object detection. |

Methods

asyncAnalyzeFrame(MLObjectAnalyzerSetting setting)

Analyzes the object asynchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLObjectAnalyzerSetting | Configurations for object recognition. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<MLObject>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLObjectAnalyzer analyzer = new MLObjectAnalyzer();
MLObjectAnalyzerSetting setting = new MLObjectAnalyzerSetting();

setting.path = "local image path";

List<MLObject> list = await analyzer.asyncAnalyzeFrame(setting);

analyzeFrame(MLObjectAnalyzerSetting setting)

Analyzes the object synchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLObjectAnalyzerSetting | Configurations for object recognition. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<MLObject>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLObjectAnalyzer analyzer = new MLObjectAnalyzer();
MLObjectAnalyzerSetting setting = new MLObjectAnalyzerSetting();

setting.path = "local image path";

List<MLObject> list = await analyzer.analyzeFrame(setting);

stopObjectDetection()

Stops object detection.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stopObjectDetection();

MLProductVisionSearchAnalyzer

Represents the image-based product detection API of HUAWEI ML Kit.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<List<MlProductVisualSearch>> | searchProduct(MLProductVisionSearchAnalyzerSetting setting) | Recognizes the product. |
| Future<List<MLProductCaptureResult>> | searchProductWithPlugin(MLProductVisionSearchAnalyzerSetting setting) | Recognizes the product with the plugin. |
| Future<bool> | stopProductAnalyzer() | Stops the product analyzer. |

Methods

searchProduct(MLProductVisionSearchAnalyzerSetting setting)

Recognizes the product with a local image.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLProductVisionSearchAnalyzerSetting | Configuration for the product search service. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<MlProductVisualSearch>> | Returns the product list on success, throws PlatformException otherwise. |

Call Example

MLProductVisionSearchAnalyzer analyzer = new MLProductVisionSearchAnalyzer();
MLProductVisionSearchAnalyzerSetting setting = new MLProductVisionSearchAnalyzerSetting();

setting.path = "local image path";
setting.largestNumberOfReturns = 10;
setting.productSetId = "bags";
setting.region = MLProductVisionSearchAnalyzerSetting.REGION_DR_CHINA;

List<MlProductVisualSearch> visionSearch = await analyzer.searchProduct(setting);

searchProductWithPlugin(MLProductVisionSearchAnalyzerSetting setting)

Recognizes the product with the plugin.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLProductVisionSearchAnalyzerSetting | Configuration for the product search service. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<MLProductCaptureResult>> | Returns the product list on success, throws PlatformException otherwise. |

Call Example

MLProductVisionSearchAnalyzer analyzer = new MLProductVisionSearchAnalyzer();
MLProductVisionSearchAnalyzerSetting setting = new MLProductVisionSearchAnalyzerSetting();

setting.largestNumberOfReturns = 10;
setting.productSetId = "bags";
setting.region = MLProductVisionSearchAnalyzerSetting.REGION_DR_CHINA;

List<MLProductCaptureResult> list = await analyzer.searchProductWithPlugin(setting);

stopProductAnalyzer()

Stops the product analyzer.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool result = await analyzer.stopProductAnalyzer();

Data Types

MLProductVisionSearchAnalyzerSetting

Constants

| Constant | Type | Description |
|----------|------|-------------|
| REGION_DR_SINGAPORE | int | Singapore. |
| REGION_DR_CHINA | int | China. |
| REGION_DR_GERMAN | int | Germany. |
| REGION_DR_RUSSIA | int | Russia. |
| REGION_DR_EUROPE | int | Europe. |
| REGION_DR_AFILA | int | Asia, America. |
| REGION_DR_UNKNOWN | int | Unknown region. |

Properties

| Name | Type | Description |
|------|------|-------------|
| path | String | Local image path. null by default. |
| productSetId | String | Sets the product set id. "vmall" by default. |
| largestNumberOfReturns | int | Sets the maximum result count. 20 by default. |
| region | int | Sets the region. 1002 by default. |

MLSpeechRealTimeTranscription

Converts speech into text in real time.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<void> | startRecognizing(MLSpeechRealTimeTranscriptionConfig config) | Starts the real time transcription service. |
| Future<bool> | destroyRealTimeTranscription() | Stops the real time transcription service. |
| void | setListener(RttListener listener) | Sets a listener for the real time transcription service. |

Methods

startRecognizing(MLSpeechRealTimeTranscriptionConfig config)

Starts the real time transcription service.

Parameters

| Name | Type | Description |
|------|------|-------------|
| config | MLSpeechRealTimeTranscriptionConfig | Configurations for real time transcription. |

Return Type

| Type | Description |
|------|-------------|
| Future<void> | No return value. |

Call Example

MLSpeechRealTimeTranscription client = new MLSpeechRealTimeTranscription();
MLSpeechRealTimeTranscriptionConfig config = new MLSpeechRealTimeTranscriptionConfig();

config.language = MLSpeechRealTimeTranscriptionConfig.LAN_EN_US;

await client.startRecognizing(config: config);

destroyRealTimeTranscription()

Stops the real time transcription service.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await client.destroyRealTimeTranscription();

setListener(RttListener listener)

Sets a listener for real time transcription service.

Parameters

| Name | Type | Description |
|------|------|-------------|
| listener | RttListener | Listens to transcription events. |

Call Example

client.setListener((partialResult, {recognizedResult}) {
    // Your implementation here
});

Data Types

RttListener

A function type defined for listening to real time transcription events.

| Definition | Description |
|------------|-------------|
| void RttListener(dynamic partialResult, {String recognizedResult}) | Transcription event listener. |

Parameters

| Name | Type | Description |
|------|------|-------------|
| partialResult | dynamic | Transcription information. |
| recognizedResult | String | Obtained text from speech. |
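
As a minimal sketch, a named function matching this signature can be passed to setListener; the function name and the handling inside are illustrative:

void onRttEvent(dynamic partialResult, {String recognizedResult}) {
  if (recognizedResult != null) {
    // Final text obtained from speech.
    print("Recognized text: $recognizedResult");
  } else {
    // Intermediate transcription information.
    print("Partial result: $partialResult");
  }
}

client.setListener(onRttEvent);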

MLSpeechRealTimeTranscriptionConfig

Constants

| Constant | Type | Description |
|----------|------|-------------|
| LAN_ZH_CN | String | Chinese. |
| LAN_EN_US | String | English. |
| LAN_FR_FR | String | French. |
| SCENES_SHOPPING | String | Shopping scenario. |

Properties

| Name | Type | Description |
|------|------|-------------|
| language | String | Sets the language. "en-US" by default. |
| scene | String | Sets the scenario. Only available with Chinese. |
| punctuationEnabled | bool | Indicates whether punctuation is required in the transcription result. By default, punctuation is required. |
| sentenceTimeOffsetEnabled | bool | Sets whether the sentence offset is required in the transcription result. By default, the sentence offset is not required. |
| wordTimeOffsetEnabled | bool | Sets whether the word offset is required in the transcription result. By default, the word offset is not required. |
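
A minimal configuration sketch combining these properties; the chosen values are illustrative:

MLSpeechRealTimeTranscriptionConfig config = new MLSpeechRealTimeTranscriptionConfig();

config.language = MLSpeechRealTimeTranscriptionConfig.LAN_ZH_CN;
// Scenarios are only available with Chinese.
config.scene = MLSpeechRealTimeTranscriptionConfig.SCENES_SHOPPING;
config.punctuationEnabled = true;
config.sentenceTimeOffsetEnabled = false;
config.wordTimeOffsetEnabled = false;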

MLSceneDetectionAnalyzer

Detects scenes.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<List<MLSceneDetection>> | asyncSceneDetection(MLSceneDetectionAnalyzerSetting setting) | Analyzes the scene asynchronously. |
| Future<List<MLSceneDetection>> | syncSceneDetection(MLSceneDetectionAnalyzerSetting setting) | Analyzes the scene synchronously. |
| Future<bool> | stopSceneDetection() | Stops the scene detection. |

Methods

asyncSceneDetection(MLSceneDetectionAnalyzerSetting setting)

Analyzes the scene asynchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLSceneDetectionAnalyzerSetting | Configurations for scene detection. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<MLSceneDetection>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLSceneDetectionAnalyzer analyzer = new MLSceneDetectionAnalyzer();
MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting();

setting.path = "local image path";

List<MLSceneDetection> list = await analyzer.asyncSceneDetection(setting);

syncSceneDetection(MLSceneDetectionAnalyzerSetting setting)

Analyzes the scene synchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLSceneDetectionAnalyzerSetting | Configurations for scene detection. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<MLSceneDetection>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLSceneDetectionAnalyzer analyzer = new MLSceneDetectionAnalyzer();
MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting();

setting.path = "local image path";

List<MLSceneDetection> list = await analyzer.syncSceneDetection(setting);

stopSceneDetection()

Stops the scene detection.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await analyzer.stopSceneDetection();

MLSkeletonAnalyzer

Detects skeleton points.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<List<MLSkeleton>> | asyncSkeletonDetection(MLSkeletonAnalyzerSetting setting) | Recognizes the skeleton points asynchronously. |
| Future<List<MLSkeleton>> | syncSkeletonDetection(MLSkeletonAnalyzerSetting setting) | Recognizes the skeleton points synchronously. |
| Future<double> | calculateSimilarity(List<MLSkeleton> list1, List<MLSkeleton> list2) | Calculates the similarity between two lists of MLSkeleton objects. |
| Future<bool> | stopSkeletonDetection() | Stops the skeleton detection. |

Methods

asyncSkeletonDetection(MLSkeletonAnalyzerSetting setting)

Recognizes the skeleton points asynchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLSkeletonAnalyzerSetting | Configurations for skeleton detection. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<MLSkeleton>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLSkeletonAnalyzer analyzer = new MLSkeletonAnalyzer();
MLSkeletonAnalyzerSetting setting = new MLSkeletonAnalyzerSetting();

setting.path = "local image path";

List<MLSkeleton> list = await analyzer.asyncSkeletonDetection(setting);

syncSkeletonDetection(MLSkeletonAnalyzerSetting setting)

Recognizes the skeleton points synchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLSkeletonAnalyzerSetting | Configurations for skeleton detection. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<MLSkeleton>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLSkeletonAnalyzer analyzer = new MLSkeletonAnalyzer();
MLSkeletonAnalyzerSetting setting = new MLSkeletonAnalyzerSetting();

setting.path = "local image path";

List<MLSkeleton> list = await analyzer.syncSkeletonDetection(setting);

calculateSimilarity(List<MLSkeleton> list1, List<MLSkeleton> list2)

Calculates the similarity between two lists of MLSkeleton objects.

Parameters

| Name | Type | Description |
|------|------|-------------|
| list1 | List<MLSkeleton> | A list of MLSkeleton objects. |
| list2 | List<MLSkeleton> | A list of MLSkeleton objects. |

Return Type

| Type | Description |
|------|-------------|
| Future<double> | Returns the similarity on success, throws PlatformException otherwise. |

Call Example

final List<MLSkeleton> list1 = [MLSkeleton(..), MLSkeleton(..)];

final List<MLSkeleton> list2 = [MLSkeleton(..), MLSkeleton(..)];

double res = await analyzer.calculateSimilarity(list1, list2);

stopSkeletonDetection()

Stops the skeleton detection.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await analyzer.stopSkeletonDetection();

MLSoundDetector

Automatically detects sound.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<int> | startSoundDetector() | Starts listening for sounds. |
| Future<bool> | stopSoundDetector() | Stops the sound detector. |
| Future<bool> | destroySoundDetector() | Destroys the sound detector. |

Methods

startSoundDetector()

Starts listening for sounds.

Return Type

| Type | Description |
|------|-------------|
| Future<int> | Returns the sound detection result on success. |

Call Example

MLSoundDetector detector = new MLSoundDetector();

int res = await detector.startSoundDetector();

stopSoundDetector()

Stops the sound detector.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await detector.stopSoundDetector();

destroySoundDetector()

Destroys the sound detector.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await detector.destroySoundDetector();

MLTextAnalyzer

Serves as a text recognition component that recognizes text in images.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<MLText> | asyncAnalyzeFrame(MLTextAnalyzerSetting setting) | Recognizes text in an image asynchronously. |
| Future<List<Blocks>> | analyzeFrame(MLTextAnalyzerSetting setting) | Recognizes text blocks in an image synchronously. |
| Future<int> | getAnalyzeType() | Obtains the analyzer type. |
| Future<bool> | isTextAnalyzerAvailable() | Checks whether the analyzer is available. |
| Future<bool> | stopTextAnalyzer() | Stops the text analyzer. |

Methods

asyncAnalyzeFrame(MLTextAnalyzerSetting setting)

Recognizes text in an image asynchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLTextAnalyzerSetting | Configurations for text recognition. |

Return Type

| Type | Description |
|------|-------------|
| Future<MLText> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLTextAnalyzer analyzer = new MLTextAnalyzer();
MLTextAnalyzerSetting setting = new MLTextAnalyzerSetting();

setting.path = "local image path";
setting.isRemote = true;
setting.language = "en";

MLText text = await analyzer.asyncAnalyzeFrame(setting);

analyzeFrame(MLTextAnalyzerSetting setting)

Recognizes text blocks in an image synchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLTextAnalyzerSetting | Configurations for text recognition. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<Blocks>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLTextAnalyzer analyzer = new MLTextAnalyzer();
MLTextAnalyzerSetting setting = new MLTextAnalyzerSetting();

setting.path = "local image path";
setting.isRemote = true;
setting.language = "en";

List<Blocks> list = await analyzer.analyzeFrame(setting);

getAnalyzeType()

Obtains the analyzer type.

Return Type

| Type | Description |
|------|-------------|
| Future<int> | Returns the type on success, throws PlatformException otherwise. |

Call Example

int res = await analyzer.getAnalyzeType();

isTextAnalyzerAvailable()

Checks whether the analyzer is available.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await analyzer.isTextAnalyzerAvailable();

stopTextAnalyzer()

Stops the text analyzer.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await analyzer.stopTextAnalyzer();

MLTextEmbeddingAnalyzer

Text embedding component.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<bool> | createTextEmbeddingAnalyzer(MLTextEmbeddingAnalyzerSetting setting) | Creates the text embedding analyzer. |
| Future<List<dynamic>> | analyzeSentenceVector(String sentence) | Queries the sentence vector asynchronously. |
| Future<double> | analyseSentencesSimilarity(String sentence1, String sentence2) | Asynchronously queries the similarity between two sentences. The similarity ranges from -1 to 1. |
| Future<List<dynamic>> | analyseWordVector(String word) | Queries the word vector asynchronously. |
| Future<double> | analyseWordsSimilarity(String word1, String word2) | Asynchronously queries the similarity between two words. The similarity ranges from -1 to 1. |
| Future<List<dynamic>> | analyseSimilarWords(String word, int number) | Asynchronously queries a specified number of similar words. |
| Future<MLVocabularyVersion> | getVocabularyVersion() | Asynchronously queries dictionary version information. |
| Future<dynamic> | analyseWordVectorBatch(List<String> words) | Asynchronously queries word vectors in batches. (The number of words ranges from 1 to 500.) |

Methods

createTextEmbeddingAnalyzer(MLTextEmbeddingAnalyzerSetting setting)

Creates the text embedding analyzer.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLTextEmbeddingAnalyzerSetting | Configurations for the text embedding service. |

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

MLTextEmbeddingAnalyzer analyzer = new MLTextEmbeddingAnalyzer();
MLTextEmbeddingAnalyzerSetting setting = new MLTextEmbeddingAnalyzerSetting();

setting.language = MLTextEmbeddingAnalyzerSetting.LANGUAGE_EN;

bool res = await analyzer.createTextEmbeddingAnalyzer(setting: setting);

analyzeSentenceVector(String sentence)

Queries the sentence vector asynchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| sentence | String | Sentence. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<dynamic>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

List<dynamic> list = await analyzer.analyzeSentenceVector(sentence: "your sentence");

analyseSentencesSimilarity(String sentence1, String sentence2)

Asynchronously queries the similarity between two sentences. The similarity ranges from -1 to 1.

Parameters

| Name | Type | Description |
|------|------|-------------|
| sentence1 | String | Sentence. |
| sentence2 | String | Sentence. |

Return Type

| Type | Description |
|------|-------------|
| Future<double> | Returns the result on success, throws PlatformException otherwise. |

Call Example

double res = await analyzer.analyseSentencesSimilarity(sentence1: "sentence 1", sentence2: "sentence 2");

analyseWordVector(String word)

Queries the word vector asynchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| word | String | Word. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<dynamic>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

List<dynamic> list = await analyzer.analyseWordVector(word: "your word");

analyseWordsSimilarity(String word1, String word2)

Asynchronously queries the similarity between two words. The similarity ranges from -1 to 1.

Parameters

| Name | Type | Description |
|------|------|-------------|
| word1 | String | Word. |
| word2 | String | Word. |

Return Type

| Type | Description |
|------|-------------|
| Future<double> | Returns the result on success, throws PlatformException otherwise. |

Call Example

double res = await analyzer.analyseWordsSimilarity(word1: "word 1", word2: "word 2");

analyseSimilarWords(String word, int number)

Asynchronously queries a specified number of similar words.

Parameters

| Name | Type | Description |
|------|------|-------------|
| word | String | Word. |
| number | int | Result count. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<dynamic>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

List<dynamic> list = await analyzer.analyseSimilarWords(word: "word", number: 8);

getVocabularyVersion()

Asynchronously queries dictionary version information.

Return Type

| Type | Description |
|------|-------------|
| Future<MLVocabularyVersion> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLVocabularyVersion version = await analyzer.getVocabularyVersion();

analyseWordVectorBatch(List<String> words)

Asynchronously queries word vectors in batches. (The number of words ranges from 1 to 500.)

Parameters

| Name | Type | Description |
|------|------|-------------|
| words | List<String> | Words list. |

Return Type

| Type | Description |
|------|-------------|
| Future<dynamic> | Returns the result on success, throws PlatformException otherwise. |

Call Example

List<dynamic> list = await analyzer.analyseWordVectorBatch(["one", "two", "three"]);

MLTextImageSuperResolutionAnalyzer

This package represents the text image super-resolution SDK. It contains text image super-resolution classes and APIs.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<MLTextImageSuperResolution> | asyncAnalyzeFrame(String imagePath) | Performs text image super-resolution asynchronously. |
| Future<List<MLTextImageSuperResolution>> | analyzeFrame(String imagePath) | Performs text image super-resolution synchronously. |
| Future<bool> | stopTextResolution() | Stops the text image super-resolution service. |

Methods

asyncAnalyzeFrame(String imagePath)

Performs text image super-resolution asynchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| imagePath | String | Local image path. |

Return Type

| Type | Description |
|------|-------------|
| Future<MLTextImageSuperResolution> | Returns the object on success, throws PlatformException otherwise. |

Call Example

MLTextImageSuperResolutionAnalyzer analyzer = new MLTextImageSuperResolutionAnalyzer();

MLTextImageSuperResolution result = await analyzer.asyncAnalyzeFrame("image path");

analyzeFrame(String imagePath)

Performs text image super-resolution synchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| imagePath | String | Local image path. |

Return Type

| Type | Description |
|------|-------------|
| Future<List<MLTextImageSuperResolution>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

MLTextImageSuperResolutionAnalyzer analyzer = new MLTextImageSuperResolutionAnalyzer();

List<MLTextImageSuperResolution> list = await analyzer.analyzeFrame("image path");

stopTextResolution()

Stops the text image super-resolution service.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await analyzer.stopTextResolution();

MLLocalTranslator

Translates text on the device.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<List<dynamic>> | getLocalAllLanguages() | Obtains the languages supported by local translation asynchronously. |
| Future<List<dynamic>> | syncGetLocalAllLanguages() | Obtains the languages supported by local translation synchronously. |
| Future<bool> | prepareModel(MLTranslateSetting setting) | Prepares the local model for translation. |
| Future<bool> | deleteModel(String langCode) | Deletes the model downloaded for local translation. |
| Future<String> | asyncTranslate(String sourceText) | Translates on device asynchronously. |
| Future<String> | syncTranslate(String sourceText) | Translates on device synchronously. |
| Future<bool> | stopTranslate() | Stops the local translator. |

Methods

getLocalAllLanguages()

Obtains the languages supported by local translation asynchronously.

Return Type

| Type | Description |
|------|-------------|
| Future<List<dynamic>> | Returns the list of supported languages. |

Call Example

MLLocalTranslator translator = new MLLocalTranslator();

List<dynamic> list = await translator.getLocalAllLanguages();

syncGetLocalAllLanguages()

Obtains the languages supported by local translation synchronously.

Return Type

| Type | Description |
|------|-------------|
| Future<List<dynamic>> | Returns the list of supported languages. |

Call Example

MLLocalTranslator translator = new MLLocalTranslator();

List<dynamic> list = await translator.syncGetLocalAllLanguages();

prepareModel(MLTranslateSetting setting)

Prepares the local model for translation.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLTranslateSetting | Configurations for translation. |

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on successful model download, throws PlatformException otherwise. |

Call Example

MLTranslateSetting setting = new MLTranslateSetting();

setting.sourceLangCode = "es";
setting.targetLangCode = "en";

bool res = await translator.prepareModel(setting: setting);

deleteModel(String langCode)

Deletes the model downloaded for local translation.

Parameters

| Name | Type | Description |
|------|------|-------------|
| langCode | String | Language code. |

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await translator.deleteModel("es");

asyncTranslate(String sourceText)

Translates on device asynchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| sourceText | String | Source text. |

Return Type

| Type | Description |
|------|-------------|
| Future<String> | Returns the translation result on success, throws PlatformException otherwise. |

Call Example

String res = await translator.asyncTranslate(sourceText: "Cómo te sientes hoy");

syncTranslate(String sourceText)

Translates on device synchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| sourceText | String | Source text. |

Return Type

| Type | Description |
|------|-------------|
| Future<String> | Returns the translation result on success, throws PlatformException otherwise. |

Call Example

String res = await translator.syncTranslate(sourceText: "Cómo te sientes hoy");

stopTranslate()

Stops on device translation.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await translator.stopTranslate();
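
Putting the methods above together, a minimal end-to-end sketch of the local translation flow (model download, translation, cleanup) could look like this; the language pair and sample text are illustrative:

MLLocalTranslator translator = new MLLocalTranslator();
MLTranslateSetting setting = new MLTranslateSetting();

setting.sourceLangCode = "es";
setting.targetLangCode = "en";

// Download the model for the language pair before translating.
bool isModelReady = await translator.prepareModel(setting: setting);

if (isModelReady) {
  String translated = await translator.asyncTranslate(sourceText: "Cómo te sientes hoy");
  print(translated);
}

// Release resources once translation is no longer needed.
await translator.stopTranslate();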

MLRemoteTranslator

Translates text on the cloud.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<List<dynamic>> | getCloudAllLanguages() | Obtains the languages supported by cloud translation asynchronously. |
| Future<List<dynamic>> | syncGetCloudAllLanguages() | Obtains the languages supported by cloud translation synchronously. |
| Future<String> | asyncTranslate(MLTranslateSetting setting) | Translates on cloud asynchronously. |
| Future<String> | syncTranslate(MLTranslateSetting setting) | Translates on cloud synchronously. |
| Future<bool> | stopTranslate() | Stops on cloud translation. |

Methods

getCloudAllLanguages()

Obtains the languages supported by cloud translation asynchronously.

Return Type

| Type | Description |
|------|-------------|
| Future<List<dynamic>> | Returns the list of supported languages. |

Call Example

MLRemoteTranslator translator = new MLRemoteTranslator();

List<dynamic> list = await translator.getCloudAllLanguages();

syncGetCloudAllLanguages()

Obtains the languages supported by cloud translation synchronously.

Return Type

| Type | Description |
|------|-------------|
| Future<List<dynamic>> | Returns the list of supported languages. |

Call Example

MLRemoteTranslator translator = new MLRemoteTranslator();

List<dynamic> list = await translator.syncGetCloudAllLanguages();

asyncTranslate(MLTranslateSetting setting)

Translates on cloud asynchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLTranslateSetting | Configurations for translation. |

Return Type

| Type | Description |
|------|-------------|
| Future<String> | Returns the translation result on success, throws PlatformException otherwise. |

Call Example

MLTranslateSetting setting = new MLTranslateSetting();

setting.sourceLangCode = "en";
setting.targetLangCode = "es";
setting.sourceTextOnRemote = "how are you feeling today";

String res = await translator.asyncTranslate(setting: setting);

syncTranslate(MLTranslateSetting setting)

Translates on cloud synchronously.

Parameters

| Name | Type | Description |
|------|------|-------------|
| setting | MLTranslateSetting | Configurations for translation. |

Return Type

| Type | Description |
|------|-------------|
| Future<String> | Returns the translation result on success, throws PlatformException otherwise. |

Call Example

MLTranslateSetting setting = new MLTranslateSetting();

setting.sourceLangCode = "en";
setting.targetLangCode = "es";
setting.sourceTextOnRemote = "how are you feeling today";

String res = await translator.syncTranslate(setting: setting);

stopTranslate()

Stops on cloud translation.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

await translator.stopTranslate();

MLTtsEngine

Provides the text to speech (TTS) service of ML Kit.

Method Summary

| Return Type | Method | Description |
|-------------|--------|-------------|
| Future<bool> | init() | Initializes the TTS engine. |
| Future<List<dynamic>> | getLanguages() | Obtains the supported languages. |
| Future<bool> | isLanguageAvailable(String lang) | Checks whether the language is available. |
| Future<List<MLTtsSpeaker>> | getSpeaker(String language) | Obtains speakers for a specific language. |
| Future<List<MLTtsSpeaker>> | getSpeakers() | Obtains all speakers. |
| Future<void> | speakOnCloud(MLTtsConfig config) | Starts speech on cloud. |
| Future<void> | speakOnDevice(MLTtsConfig config) | Starts speech on device. |
| Future<bool> | pauseSpeech() | Pauses the speech. |
| Future<bool> | resumeSpeech() | Resumes the speech. |
| Future<bool> | stopTextToSpeech() | Stops the speech. |
| Future<bool> | shutdownTextToSpeech() | Destroys the TTS engine. |
| void | setTtsCallback(TtsCallback callback) | Sets a listener for TTS events. |

Methods

init()

Initializes the TTS engine.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on a successful operation. |

Call Example

MLTtsEngine engine = new MLTtsEngine();

bool res = await engine.init();

getLanguages()

Obtains the supported languages.

Return Type

| Type | Description |
|------|-------------|
| Future<List<dynamic>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

List<dynamic> list = await engine.getLanguages();

isLanguageAvailable(String lang)

Checks whether the language is available.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await engine.isLanguageAvailable("en-US");

getSpeaker(String language)

Obtains speakers for a specific language.

Return Type

| Type | Description |
|------|-------------|
| Future<List<MLTtsSpeaker>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

List<MLTtsSpeaker> list = await engine.getSpeaker("en-US");

getSpeakers()

Obtains all speakers for text to speech recognition.

Return Type

| Type | Description |
|------|-------------|
| Future<List<MLTtsSpeaker>> | Returns the list on success, throws PlatformException otherwise. |

Call Example

List<MLTtsSpeaker> list = await engine.getSpeakers();

speakOnCloud(MLTtsConfig config)

Starts speech on cloud.

Parameters

| Name | Type | Description |
|------|------|-------------|
| config | MLTtsConfig | Configurations for text to speech recognition. |

Return Type

| Type | Description |
|------|-------------|
| Future<void> | Future result of an execution that returns no value. |

Call Example

MLTtsConfig config = new MLTtsConfig();

config.text = text;
config.person = MLTtsConfig.TTS_SPEAKER_FEMALE_EN;
config.language = MLTtsConfig.TTS_EN_US;
config.synthesizeMode = MLTtsConfig.TTS_ONLINE_MODE;

await engine.speakOnCloud(config);

speakOnDevice(MLTtsConfig config)

Starts speech on device.

Parameters

| Name | Type | Description |
|------|------|-------------|
| config | MLTtsConfig | Configurations for text to speech recognition. |

Return Type

| Type | Description |
|------|-------------|
| Future<void> | Future result of an execution that returns no value. |

Call Example

MLTtsConfig config = new MLTtsConfig();

config.text = text;
config.person = MLTtsConfig.TTS_SPEAKER_OFFLINE_EN_US_FEMALE_BOLT;
config.language = MLTtsConfig.TTS_EN_US;
config.synthesizeMode = MLTtsConfig.TTS_OFFLINE_MODE;

await engine.speakOnDevice(config);

pauseSpeech()

Pauses the speech.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await engine.pauseSpeech();

resumeSpeech()

Resumes the speech.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await engine.resumeSpeech();

stopTextToSpeech()

Stops the speech.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await engine.stopTextToSpeech();

shutdownTextToSpeech()

Destroys the TTS engine.

Return Type

| Type | Description |
|------|-------------|
| Future<bool> | Returns true on success, throws PlatformException otherwise. |

Call Example

bool res = await engine.shutdownTextToSpeech();

setTtsCallback(TtsCallback callback)

Sets a listener for text to speech events.

Parameters

| Name | Type | Description |
|------|------|-------------|
| callback | TtsCallback | Listener for TTS events. |

Call Example

engine.setTtsCallback((event, details, {errorCode}) {
    // Your implementation here
});

Data Types

TtsCallback

| Definition | Description |
|------------|-------------|
| void TtsCallback(MLTtsEvent event, dynamic details, {int errorCode}) | TTS event listener. |

Parameters

| Name | Type | Description |
|------|------|-------------|
| event | MLTtsEvent | Text to speech event. |
| details | dynamic | All event information. |
| errorCode | int | Error code on failure. |

enum MLTtsEvent

Enumerated object that represents the events of an audio synthesis (TTS) task.

| Value | Description |
|-------|-------------|
| onError | Error event callback function. This method is used to listen to error events when an error occurs in an audio synthesis task. |
| onWarn | Alarm event callback function. |
| onRangeStart | The TTS engine splits the text input by the audio synthesis task. This callback function can be used to listen to the playback start event of the split text. |
| onAudioAvailable | Audio stream callback API, which is used to return the synthesized audio data to the app. |
| onEvent | Audio synthesis task callback extension method. |
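
For illustration, a callback that distinguishes these events could look like the following sketch; the handling in each branch is illustrative:

engine.setTtsCallback((MLTtsEvent event, dynamic details, {int errorCode}) {
  switch (event) {
    case MLTtsEvent.onError:
      // An error occurred in the audio synthesis task.
      print("Synthesis error, code: $errorCode");
      break;
    case MLTtsEvent.onWarn:
      print("Warning: $details");
      break;
    case MLTtsEvent.onRangeStart:
      // Playback of a split text segment started.
      print("Segment playback started: $details");
      break;
    case MLTtsEvent.onAudioAvailable:
      // details carries the synthesized audio data returned to the app.
      break;
    case MLTtsEvent.onEvent:
      print("Task event: $details");
      break;
  }
});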

4. Configuration Description

No additional configuration is needed for this plugin.

5. Preparing For Release

Before building a release version of your app, you may need to customize the proguard-rules.pro obfuscation configuration file to prevent the HMS Core SDK from being obfuscated. Add the configurations below to exclude the HMS Core SDK from obfuscation. For more information on this topic, refer to this Android developer guide.

<flutter_project>/android/app/proguard-rules.pro

-ignorewarnings
-keepattributes *Annotation*
-keepattributes Exceptions
-keepattributes InnerClasses
-keepattributes Signature
-keep class com.huawei.updatesdk.**{*;}
-keep class com.huawei.hms.**{*;}
-keep class com.hianalytics.android.**{*;}
-keep class com.huawei.hms.flutter.** { *; }

# Flutter wrapper
-keep class io.flutter.app.** { *; }
-keep class io.flutter.plugin.**  { *; }
-keep class io.flutter.util.**  { *; }
-keep class io.flutter.view.**  { *; }
-keep class io.flutter.**  { *; }
-keep class io.flutter.plugins.**  { *; }
-dontwarn io.flutter.embedding.**
-repackageclasses

<flutter_project>/android/app/build.gradle

buildTypes {
    debug {
        signingConfig signingConfigs.config
    }
    release {
        signingConfig signingConfigs.config
        // Enables code shrinking, obfuscation and optimization for release builds
        minifyEnabled true
        // Unused resources will be removed.
        shrinkResources true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
    }
}

6. Sample Project

This plugin includes a demo project in the example folder, where you can find more usage examples.

7. Questions or Issues

If you have questions about how to use HMS samples, try the following options:

  • Stack Overflow is the best place for any programming questions. Be sure to tag your question with huawei-mobile-services.
  • GitHub is the official repository for these plugins; you can open an issue or submit your ideas there.
  • Huawei Developer Forum HMS Core Module is great for general questions, or for seeking recommendations and opinions.
  • Huawei Developer Docs is the place for official documentation of all HMS Core Kits; you can find detailed documentation there.

If you run into a bug in our samples, please submit an issue to the GitHub repository.

8. Licensing and Terms

Huawei ML Kit Flutter Plugin is licensed under the Apache 2.0 license.
