Rekognition class

This is the Amazon Rekognition API reference.

Constructors

Rekognition({required String region, AwsClientCredentials? credentials, AwsClientCredentialsProvider? credentialsProvider, Client? client, String? endpointUrl})
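
A minimal construction sketch, assuming the generated aws_client package layout (the import path and the AwsClientCredentials field names are assumptions):

  // Assumed import path for the generated client library.
  import 'package:aws_client/rekognition_2016_06_27.dart';

  final rekognition = Rekognition(
    region: 'us-east-1',
    // Placeholder credentials; a credentialsProvider can be supplied instead.
    credentials: AwsClientCredentials(
      accessKey: '<ACCESS_KEY>',
      secretKey: '<SECRET_KEY>',
    ),
  );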

Properties

hashCode → int
The hash code for this object.
no setter, inherited
runtimeType → Type
A representation of the runtime type of the object.
no setter, inherited

Methods

close() → void
Closes the internal HTTP client if none was provided at creation. If a client was passed as a constructor argument, this method is a no-op.
compareFaces({required Image sourceImage, required Image targetImage, QualityFilter? qualityFilter, double? similarityThreshold}) → Future<CompareFacesResponse>
Compares a face in the source input image with each of the 100 largest faces detected in the target input image. You pass the input and target images either as base64-encoded image bytes or as references to images in an Amazon S3 bucket. If you use the AWS CLI to call Amazon Rekognition operations, passing image bytes isn't supported. The image must be formatted as a PNG or JPEG file.
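A hedged sketch of a compareFaces call against S3-hosted images; the Image/S3Object shapes and the faceMatches/similarity response fields are assumed to mirror the AWS API:

  final response = await rekognition.compareFaces(
    sourceImage: Image(s3Object: S3Object(bucket: 'my-bucket', name: 'source.jpg')),
    targetImage: Image(s3Object: S3Object(bucket: 'my-bucket', name: 'target.jpg')),
    similarityThreshold: 80, // report only matches at 80% similarity or higher
  );
  for (final match in response.faceMatches ?? <CompareFacesMatch>[]) {
    print('Match with similarity ${match.similarity}');
  }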
createCollection({required String collectionId}) → Future<CreateCollectionResponse>
Creates a collection in an AWS Region. You can add faces to the collection using the IndexFaces operation.
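For example (the collectionArn and statusCode response fields are assumptions mirroring the AWS API):

  final created = await rekognition.createCollection(collectionId: 'my-faces');
  print('Created ${created.collectionArn} (HTTP ${created.statusCode})');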
createProject({required String projectName}) → Future<CreateProjectResponse>
Creates a new Amazon Rekognition Custom Labels project. A project is a logical grouping of resources (images, Labels, models) and operations (training, evaluation and detection).
createProjectVersion({required OutputConfig outputConfig, required String projectArn, required TestingData testingData, required TrainingData trainingData, required String versionName}) → Future<CreateProjectVersionResponse>
Creates a new version of a model and begins training. Models are managed as part of an Amazon Rekognition Custom Labels project. You can specify one training dataset and one testing dataset. The response from CreateProjectVersion is an Amazon Resource Name (ARN) for the version of the model.
createStreamProcessor({required StreamProcessorInput input, required String name, required StreamProcessorOutput output, required String roleArn, required StreamProcessorSettings settings}) → Future<CreateStreamProcessorResponse>
Creates an Amazon Rekognition stream processor that you can use to detect and recognize faces in a streaming video.
deleteCollection({required String collectionId}) → Future<DeleteCollectionResponse>
Deletes the specified collection. Note that this operation removes all faces in the collection. For an example, see Deleting a Collection in the Amazon Rekognition Developer Guide.
deleteFaces({required String collectionId, required List<String> faceIds}) → Future<DeleteFacesResponse>
Deletes faces from a collection. You specify a collection ID and an array of face IDs to remove from the collection.
deleteProject({required String projectArn}) → Future<DeleteProjectResponse>
Deletes an Amazon Rekognition Custom Labels project. To delete a project you must first delete all models associated with the project. To delete a model, see DeleteProjectVersion.
deleteProjectVersion({required String projectVersionArn}) → Future<DeleteProjectVersionResponse>
Deletes an Amazon Rekognition Custom Labels model.
deleteStreamProcessor({required String name}) Future<void>
Deletes the stream processor identified by Name. You assign the value for Name when you create the stream processor with CreateStreamProcessor. You might not be able to use the same name for a stream processor for a few seconds after calling DeleteStreamProcessor.
describeCollection({required String collectionId}) → Future<DescribeCollectionResponse>
Describes the specified collection. You can use DescribeCollection to get information, such as the number of faces indexed into a collection and the version of the model used by the collection for face detection.
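A short sketch; the faceCount and faceModelVersion response fields are assumed to mirror the AWS API:

  final info = await rekognition.describeCollection(collectionId: 'my-faces');
  print('Faces indexed: ${info.faceCount}, face model: ${info.faceModelVersion}');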
describeProjects({int? maxResults, String? nextToken}) → Future<DescribeProjectsResponse>
Lists and gets information about your Amazon Rekognition Custom Labels projects.
describeProjectVersions({required String projectArn, int? maxResults, String? nextToken, List<String>? versionNames}) → Future<DescribeProjectVersionsResponse>
Lists and describes the models in an Amazon Rekognition Custom Labels project. You can specify up to 10 model versions in versionNames. If you don't specify a value, descriptions for all models are returned.
describeStreamProcessor({required String name}) → Future<DescribeStreamProcessorResponse>
Provides information about a stream processor created by CreateStreamProcessor. You can get information about the input and output streams, the input parameters for the face recognition being performed, and the current status of the stream processor.
detectCustomLabels({required Image image, required String projectVersionArn, int? maxResults, double? minConfidence}) → Future<DetectCustomLabelsResponse>
Detects custom labels in a supplied image by using an Amazon Rekognition Custom Labels model.
detectFaces({required Image image, List<Attribute>? attributes}) → Future<DetectFacesResponse>
Detects faces within an image that is provided as input.
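A sketch requesting the full set of facial attributes; the Attribute.all enum value name and the faceDetails response field are assumptions:

  final faces = await rekognition.detectFaces(
    image: Image(s3Object: S3Object(bucket: 'my-bucket', name: 'group.jpg')),
    attributes: [Attribute.all], // enum value name is an assumption
  );
  for (final face in faces.faceDetails ?? <FaceDetail>[]) {
    print('Detected a face with confidence ${face.confidence}');
  }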
detectLabels({required Image image, int? maxLabels, double? minConfidence}) → Future<DetectLabelsResponse>
Detects instances of real-world entities within an image (JPEG or PNG) provided as input. This includes objects like flower, tree, and table; events like wedding, graduation, and birthday party; and concepts like landscape, evening, and nature.
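A minimal sketch passing raw image bytes (the labels response field is assumed to mirror the AWS API):

  // Requires: import 'dart:io';
  final result = await rekognition.detectLabels(
    image: Image(bytes: await File('scene.jpg').readAsBytes()),
    maxLabels: 10,     // cap the number of returned labels
    minConfidence: 75, // drop labels below 75% confidence
  );
  for (final label in result.labels ?? <Label>[]) {
    print('${label.name}: ${label.confidence}');
  }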
detectModerationLabels({required Image image, HumanLoopConfig? humanLoopConfig, double? minConfidence}) → Future<DetectModerationLabelsResponse>
Detects unsafe content in a specified JPEG or PNG format image. Use DetectModerationLabels to moderate images depending on your requirements. For example, you might want to filter images that contain nudity, but not images containing suggestive content.
detectProtectiveEquipment({required Image image, ProtectiveEquipmentSummarizationAttributes? summarizationAttributes}) → Future<DetectProtectiveEquipmentResponse>
Detects Personal Protective Equipment (PPE) worn by people detected in an image. Amazon Rekognition can detect the following types of PPE: face covers, hand covers, and head covers.
detectText({required Image image, DetectTextFilters? filters}) → Future<DetectTextResponse>
Detects text in the input image and converts it into machine-readable text.
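For example (the textDetections, type, and detectedText response fields are assumed to mirror the AWS API):

  final text = await rekognition.detectText(
    image: Image(s3Object: S3Object(bucket: 'my-bucket', name: 'sign.jpg')),
  );
  for (final detection in text.textDetections ?? <TextDetection>[]) {
    // LINE detections are returned alongside their child WORD detections.
    print('${detection.type}: ${detection.detectedText}');
  }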
getCelebrityInfo({required String id}) → Future<GetCelebrityInfoResponse>
Gets the name and additional information about a celebrity based on his or her Amazon Rekognition ID. The additional information is returned as an array of URLs. If there is no additional information about the celebrity, this list is empty.
getCelebrityRecognition({required String jobId, int? maxResults, String? nextToken, CelebrityRecognitionSortBy? sortBy}) → Future<GetCelebrityRecognitionResponse>
Gets the celebrity recognition results for an Amazon Rekognition Video analysis started by StartCelebrityRecognition.
getContentModeration({required String jobId, int? maxResults, String? nextToken, ContentModerationSortBy? sortBy}) → Future<GetContentModerationResponse>
Gets the unsafe content analysis results for an Amazon Rekognition Video analysis started by StartContentModeration.
getFaceDetection({required String jobId, int? maxResults, String? nextToken}) → Future<GetFaceDetectionResponse>
Gets face detection results for an Amazon Rekognition Video analysis started by StartFaceDetection.
getFaceSearch({required String jobId, int? maxResults, String? nextToken, FaceSearchSortBy? sortBy}) → Future<GetFaceSearchResponse>
Gets the face search results for Amazon Rekognition Video face search started by StartFaceSearch. The search returns faces in a collection that match the faces of persons detected in a video. It also includes the time(s) that faces are matched in the video.
getLabelDetection({required String jobId, int? maxResults, String? nextToken, LabelDetectionSortBy? sortBy}) → Future<GetLabelDetectionResponse>
Gets the label detection results of an Amazon Rekognition Video analysis started by StartLabelDetection.
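The Start*/Get* operations share one asynchronous pattern: start a job on a stored video, then poll the matching Get call with the returned job ID. A sketch using label detection (the Video/S3Object shapes and the VideoJobStatus enum value name are assumptions):

  final start = await rekognition.startLabelDetection(
    video: Video(s3Object: S3Object(bucket: 'my-bucket', name: 'clip.mp4')),
  );
  GetLabelDetectionResponse result;
  do {
    // Simple polling loop; prefer the SNS notificationChannel in production.
    await Future.delayed(const Duration(seconds: 5));
    result = await rekognition.getLabelDetection(jobId: start.jobId!);
  } while (result.jobStatus == VideoJobStatus.inProgress); // enum value name assumed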
getPersonTracking({required String jobId, int? maxResults, String? nextToken, PersonTrackingSortBy? sortBy}) → Future<GetPersonTrackingResponse>
Gets the path tracking results of an Amazon Rekognition Video analysis started by StartPersonTracking.
getSegmentDetection({required String jobId, int? maxResults, String? nextToken}) → Future<GetSegmentDetectionResponse>
Gets the segment detection results of an Amazon Rekognition Video analysis started by StartSegmentDetection.
getTextDetection({required String jobId, int? maxResults, String? nextToken}) → Future<GetTextDetectionResponse>
Gets the text detection results of an Amazon Rekognition Video analysis started by StartTextDetection.
indexFaces({required String collectionId, required Image image, List<Attribute>? detectionAttributes, String? externalImageId, int? maxFaces, QualityFilter? qualityFilter}) → Future<IndexFacesResponse>
Detects faces in the input image and adds them to the specified collection.
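A hedged sketch; externalImageId ties indexed faces back to your own identifier, and the faceRecords/faceId response fields are assumed to mirror the AWS API:

  final indexed = await rekognition.indexFaces(
    collectionId: 'my-faces',
    image: Image(s3Object: S3Object(bucket: 'my-bucket', name: 'employee.jpg')),
    externalImageId: 'employee-42', // your own identifier for this image
    maxFaces: 1, // index only the largest detected face
  );
  for (final record in indexed.faceRecords ?? <FaceRecord>[]) {
    print('Indexed face ${record.face?.faceId}');
  }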
listCollections({int? maxResults, String? nextToken}) → Future<ListCollectionsResponse>
Returns a list of the collection IDs in your account. If the result is truncated, the response also provides a NextToken that you can use in a subsequent request to fetch the next set of collection IDs.
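A pagination sketch using nextToken (the collectionIds response field is assumed to mirror the AWS API):

  String? token;
  do {
    final page = await rekognition.listCollections(maxResults: 50, nextToken: token);
    page.collectionIds?.forEach(print);
    token = page.nextToken;
  } while (token != null);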
listFaces({required String collectionId, int? maxResults, String? nextToken}) → Future<ListFacesResponse>
Returns metadata for faces in the specified collection. This metadata includes information such as the bounding box coordinates, the confidence (that the bounding box contains a face), and face ID. For an example, see Listing Faces in a Collection in the Amazon Rekognition Developer Guide.
listStreamProcessors({int? maxResults, String? nextToken}) → Future<ListStreamProcessorsResponse>
Gets a list of stream processors that you have created with CreateStreamProcessor.
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
recognizeCelebrities({required Image image}) → Future<RecognizeCelebritiesResponse>
Returns an array of celebrities recognized in the input image. For more information, see Recognizing Celebrities in the Amazon Rekognition Developer Guide.
searchFaces({required String collectionId, required String faceId, double? faceMatchThreshold, int? maxFaces}) → Future<SearchFacesResponse>
For a given input face ID, searches for matching faces in the collection the face belongs to. You get a face ID when you add a face to the collection using the IndexFaces operation. The operation compares the features of the input face with faces in the specified collection. The operation response returns an array of faces that match, ordered by similarity score with the highest similarity first. More specifically, it is an array of metadata for each face match that is found. Along with the metadata, the response also includes a confidence value for each face match, indicating the confidence that the specific face matches the input face.
searchFacesByImage({required String collectionId, required Image image, double? faceMatchThreshold, int? maxFaces, QualityFilter? qualityFilter}) → Future<SearchFacesByImageResponse>
For a given input image, first detects the largest face in the image, and then searches the specified collection for matching faces. The operation compares the features of the input face with faces in the specified collection.
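A hedged sketch; the faceMatches, similarity, and faceId response fields are assumed to mirror the AWS API:

  final matches = await rekognition.searchFacesByImage(
    collectionId: 'my-faces',
    image: Image(s3Object: S3Object(bucket: 'my-bucket', name: 'visitor.jpg')),
    faceMatchThreshold: 90, // only return matches at 90% confidence or higher
    maxFaces: 5,
  );
  for (final match in matches.faceMatches ?? <FaceMatch>[]) {
    print('Face ${match.face?.faceId} matched at ${match.similarity}');
  }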
startCelebrityRecognition({required Video video, String? clientRequestToken, String? jobTag, NotificationChannel? notificationChannel}) → Future<StartCelebrityRecognitionResponse>
Starts asynchronous recognition of celebrities in a stored video.
startContentModeration({required Video video, String? clientRequestToken, String? jobTag, double? minConfidence, NotificationChannel? notificationChannel}) → Future<StartContentModerationResponse>
Starts asynchronous detection of unsafe content in a stored video.
startFaceDetection({required Video video, String? clientRequestToken, FaceAttributes? faceAttributes, String? jobTag, NotificationChannel? notificationChannel}) → Future<StartFaceDetectionResponse>
Starts asynchronous detection of faces in a stored video.
startFaceSearch({required String collectionId, required Video video, String? clientRequestToken, double? faceMatchThreshold, String? jobTag, NotificationChannel? notificationChannel}) → Future<StartFaceSearchResponse>
Starts the asynchronous search for faces in a collection that match the faces of persons detected in a stored video.
startLabelDetection({required Video video, String? clientRequestToken, String? jobTag, double? minConfidence, NotificationChannel? notificationChannel}) → Future<StartLabelDetectionResponse>
Starts asynchronous detection of labels in a stored video.
startPersonTracking({required Video video, String? clientRequestToken, String? jobTag, NotificationChannel? notificationChannel}) → Future<StartPersonTrackingResponse>
Starts the asynchronous tracking of a person's path in a stored video.
startProjectVersion({required int minInferenceUnits, required String projectVersionArn}) → Future<StartProjectVersionResponse>
Starts running a version of a model. Starting a model takes a while to complete. To check the current state of the model, use DescribeProjectVersions.
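For example (the ARN is a placeholder and the status response field is an assumption):

  final started = await rekognition.startProjectVersion(
    projectVersionArn: '<PROJECT_VERSION_ARN>',
    minInferenceUnits: 1, // minimum capacity; more units raise throughput and cost
  );
  print('Model status: ${started.status}'); // poll describeProjectVersions until the model is running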
startSegmentDetection({required List<SegmentType> segmentTypes, required Video video, String? clientRequestToken, StartSegmentDetectionFilters? filters, String? jobTag, NotificationChannel? notificationChannel}) → Future<StartSegmentDetectionResponse>
Starts asynchronous segment detection in a stored video.
startStreamProcessor({required String name}) Future<void>
Starts processing a stream processor. You create a stream processor by calling CreateStreamProcessor. To tell StartStreamProcessor which stream processor to start, use the value of the Name field specified in the call to CreateStreamProcessor.
startTextDetection({required Video video, String? clientRequestToken, StartTextDetectionFilters? filters, String? jobTag, NotificationChannel? notificationChannel}) → Future<StartTextDetectionResponse>
Starts asynchronous detection of text in a stored video.
stopProjectVersion({required String projectVersionArn}) → Future<StopProjectVersionResponse>
Stops a running model. The operation might take a while to complete. To check the current status, call DescribeProjectVersions.
stopStreamProcessor({required String name}) Future<void>
Stops a running stream processor that was created by CreateStreamProcessor.
toString() → String
A string representation of this object.
inherited

Operators

operator ==(Object other) → bool
The equality operator.
inherited