rekognition-2016-06-27 library

Classes

AgeRange
Structure containing the estimated age range, in years, for a face.
Asset
Assets are the images that you use to train and evaluate a model version. Assets can also contain validation information that you use to debug a failed model training.
AudioMetadata
Metadata information about an audio stream. An array of AudioMetadata objects for the audio streams found in a stored video is returned by GetSegmentDetection.
AwsClientCredentials
AWS credentials.
Beard
Indicates whether or not the face has a beard, and the confidence level in the determination.
BoundingBox
Identifies the bounding box around the label, face, text, or personal protective equipment. The left (x-coordinate) and top (y-coordinate) values represent the left and top sides of the bounding box. Note that the upper-left corner of the image is the origin (0,0).
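Because the values are ratios of the overall image size, converting a detection to pixels is a multiplication against the image dimensions. A minimal Dart sketch; the field names (left, top, width, height) mirror the AWS shape and are assumptions about this generated class:

    // Convert ratio-based BoundingBox values to pixel coordinates.
    // Fields are treated as nullable doubles, as in the AWS shape (assumed).
    void printPixelBox(BoundingBox box, int imageWidth, int imageHeight) {
      final x = ((box.left ?? 0) * imageWidth).round();
      final y = ((box.top ?? 0) * imageHeight).round();
      final w = ((box.width ?? 0) * imageWidth).round();
      final h = ((box.height ?? 0) * imageHeight).round();
      print('upper-left: ($x, $y), size: ${w}x$h');
    }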
Celebrity
Provides information about a celebrity recognized by the RecognizeCelebrities operation.
CelebrityDetail
Information about a recognized celebrity.
CelebrityRecognition
Information about a detected celebrity and the time the celebrity was detected in a stored video. For more information, see GetCelebrityRecognition in the Amazon Rekognition Developer Guide.
ComparedFace
Provides face metadata for target image faces that are analyzed by CompareFaces and RecognizeCelebrities.
ComparedSourceImageFace
Type that describes the face Amazon Rekognition chose to compare with the faces in the target. This contains a bounding box for the selected face and confidence level that the bounding box contains a face. Note that Amazon Rekognition selects the largest face in the source image for this comparison.
CompareFacesMatch
Provides information about a face in a target image that matches the source image face analyzed by CompareFaces. The Face property contains the bounding box of the face in the target image. The Similarity property is the confidence that the source image face matches the face in the bounding box.
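A hedged sketch of reading these matches from a CompareFaces result; the property names (faceMatches, similarity, face, boundingBox) follow the AWS shapes in lowerCamelCase and are assumptions about this generated library:

    // Print each target-image match with its similarity score.
    void printMatches(CompareFacesResponse response) {
      for (final match in response.faceMatches ?? <CompareFacesMatch>[]) {
        print('similarity: ${match.similarity}, '
            'box: ${match.face?.boundingBox}');
      }
    }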
CompareFacesResponse
ContentModerationDetection
Information about an unsafe content label detection in a stored video.
CoversBodyPart
Information about an item of Personal Protective Equipment covering a corresponding body part. For more information, see DetectProtectiveEquipment.
CreateCollectionResponse
CreateProjectResponse
CreateProjectVersionResponse
CreateStreamProcessorResponse
CustomLabel
A custom label detected in an image by a call to DetectCustomLabels.
DeleteCollectionResponse
DeleteFacesResponse
DeleteProjectResponse
DeleteProjectVersionResponse
DeleteStreamProcessorResponse
DescribeCollectionResponse
DescribeProjectsResponse
DescribeProjectVersionsResponse
DescribeStreamProcessorResponse
DetectCustomLabelsResponse
DetectFacesResponse
DetectionFilter
A set of parameters that let you filter out certain detections from your results.
DetectLabelsResponse
DetectModerationLabelsResponse
DetectProtectiveEquipmentResponse
DetectTextFilters
A set of optional parameters that you can use to set the criteria that the text must meet to be included in your response. WordFilter looks at a word’s height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the image to look for text in.
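A hedged sketch of building such a filter; the parameter names mirror the AWS shapes (WordFilter, RegionsOfInterest, MinBoundingBoxHeight, MinConfidence) in lowerCamelCase and are assumptions about this generated library:

    // Keep only words at least 5% of the image height, detected with at
    // least 90% confidence, inside the given region.
    final filters = DetectTextFilters(
      wordFilter: DetectionFilter(
        minBoundingBoxHeight: 0.05,
        minConfidence: 90,
      ),
      regionsOfInterest: [
        RegionOfInterest(
          boundingBox:
              BoundingBox(left: 0.1, top: 0.1, width: 0.5, height: 0.3),
        ),
      ],
    );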
DetectTextResponse
Emotion
The emotions that appear to be expressed on the face, and the confidence level in the determination. The API is only making a determination of the physical appearance of a person's face. It is not a determination of the person’s internal emotional state and should not be used in such a way. For example, a person pretending to have a sad face might not be sad emotionally.
EquipmentDetection
Information about an item of Personal Protective Equipment (PPE) detected by DetectProtectiveEquipment. For more information, see DetectProtectiveEquipment.
EvaluationResult
The evaluation results for the training of a model.
Eyeglasses
Indicates whether or not the face is wearing eyeglasses, and the confidence level in the determination.
EyeOpen
Indicates whether or not the eyes on the face are open, and the confidence level in the determination.
Face
Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.
FaceDetail
Structure containing attributes of the face that the algorithm detected.
FaceDetection
Information about a face detected in a video analysis request and the time the face was detected in the video.
FaceMatch
Provides face metadata, along with the confidence that this face matches the input face.
FaceRecord
Object containing both the face metadata (stored in the backend database), and facial attributes that are detected but aren't stored in the database.
FaceSearchSettings
Input face recognition parameters for an Amazon Rekognition stream processor. FaceSearchSettings is a request parameter for CreateStreamProcessor.
Gender
The predicted gender of a detected face.
Geometry
Information about where an object (DetectCustomLabels) or text (DetectText) is located on an image.
GetCelebrityInfoResponse
GetCelebrityRecognitionResponse
GetContentModerationResponse
GetFaceDetectionResponse
GetFaceSearchResponse
GetLabelDetectionResponse
GetPersonTrackingResponse
GetSegmentDetectionResponse
GetTextDetectionResponse
GroundTruthManifest
The S3 bucket that contains an Amazon SageMaker Ground Truth format manifest file.
HumanLoopActivationOutput
Shows the results of the human in the loop evaluation. If there is no HumanLoopArn, the input did not trigger human review.
HumanLoopConfig
Sets up the flow definition the image will be sent to if one of the conditions is met. You can also set certain attributes of the image before review.
HumanLoopDataAttributes
Allows you to set attributes of the image. Currently, you can declare an image as free of personally identifiable information.
Image
Provides the input image either as bytes or an S3 object.
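A hedged sketch of both options; the named parameters (bytes, s3Object, bucket, name) mirror the AWS shapes and are assumptions about this generated library, and the file path and bucket are hypothetical:

    import 'dart:io';

    // Supply the image as raw bytes read from a local file...
    Image imageFromFile(String path) =>
        Image(bytes: File(path).readAsBytesSync());

    // ...or reference an object already stored in S3.
    Image imageFromS3(String bucket, String key) =>
        Image(s3Object: S3Object(bucket: bucket, name: key));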
ImageQuality
Identifies face image brightness and sharpness.
IndexFacesResponse
Instance
An instance of a label returned by Amazon Rekognition Image (DetectLabels) or by Amazon Rekognition Video (GetLabelDetection).
KinesisDataStream
The Kinesis data stream to which Amazon Rekognition streams the analysis results of a stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
KinesisVideoStream
The Kinesis video stream that provides the source streaming video for an Amazon Rekognition Video stream processor. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
Label
Structure containing details about the detected label, including the name, detected instances, parent labels, and level of confidence.
LabelDetection
Information about a label detected in a video analysis request and the time the label was detected in the video.
Landmark
Indicates the location of the landmark on the face.
ListCollectionsResponse
ListFacesResponse
ListStreamProcessorsResponse
ModerationLabel
Provides information about a single type of unsafe content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
MouthOpen
Indicates whether or not the mouth on the face is open, and the confidence level in the determination.
Mustache
Indicates whether or not the face has a mustache, and the confidence level in the determination.
NotificationChannel
The Amazon Simple Notification Service topic to which Amazon Rekognition publishes the completion status of a video analysis operation. For more information, see the Amazon Rekognition Developer Guide.
OutputConfig
The S3 bucket and folder location where training output is placed.
Parent
A parent label for a label. A label can have 0, 1, or more parents.
PersonDetail
Details about a person detected in a video analysis request.
PersonDetection
Details and path tracking information for a single time a person's path is tracked in a video. Amazon Rekognition operations that track people's paths return an array of PersonDetection objects with elements for each time a person's path is tracked in a video.
PersonMatch
Information about a person whose face matches one or more faces in an Amazon Rekognition collection. Includes information about the faces in the Amazon Rekognition collection (FaceMatch), information about the person (PersonDetail), and the time stamp for when the person was detected in a video. An array of PersonMatch objects is returned by GetFaceSearch.
Point
The X and Y coordinates of a point on an image. The X and Y values returned are ratios of the overall image size. For example, if the input image is 700x200 and the operation returns X=0.5 and Y=0.25, then the point is at the (350,50) pixel coordinate on the image.
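The arithmetic from that example, as a runnable Dart check:

    void main() {
      // A 700x200 image with X=0.5 and Y=0.25 maps to pixel (350, 50).
      const imageWidth = 700, imageHeight = 200;
      const x = 0.5, y = 0.25;
      print('(${(x * imageWidth).round()}, ${(y * imageHeight).round()})');
    }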
Pose
Indicates the pose of the face as determined by its pitch, roll, and yaw.
ProjectDescription
A description of an Amazon Rekognition Custom Labels project.
ProjectVersionDescription
The description of a version of a model.
ProtectiveEquipmentBodyPart
Information about a body part detected by DetectProtectiveEquipment that contains PPE. An array of ProtectiveEquipmentBodyPart objects is returned for each person detected by DetectProtectiveEquipment.
ProtectiveEquipmentPerson
A person detected by a call to DetectProtectiveEquipment. The API returns all persons detected in the input image in an array of ProtectiveEquipmentPerson objects.
ProtectiveEquipmentSummarizationAttributes
Specifies summary attributes to return from a call to DetectProtectiveEquipment. You can specify which types of PPE to summarize. You can also specify a minimum confidence value for detections. Summary information is returned in the Summary (ProtectiveEquipmentSummary) field of the response from DetectProtectiveEquipment. The summary includes which persons in an image were detected wearing the requested types of personal protective equipment (PPE), which persons were detected as not wearing PPE, and the persons for whom a determination could not be made. For more information, see ProtectiveEquipmentSummary.
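A hedged sketch of requesting a summary for face covers at 80% minimum confidence; the parameter names and the enum value follow the AWS shape (MinConfidence, RequiredEquipmentTypes, FACE_COVER) in Dart naming and are assumptions about this generated library:

    // Summarize which persons wear, don't wear, or can't be assessed for
    // face covers detected with at least 80% confidence.
    final summarization = ProtectiveEquipmentSummarizationAttributes(
      minConfidence: 80,
      requiredEquipmentTypes: [ProtectiveEquipmentType.faceCover],
    );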
ProtectiveEquipmentSummary
Summary information for required items of personal protective equipment (PPE) detected on persons by a call to DetectProtectiveEquipment. You specify the required type of PPE in the SummarizationAttributes (ProtectiveEquipmentSummarizationAttributes) input parameter. The summary includes which persons were detected wearing the required PPE (PersonsWithRequiredEquipment), which persons were detected as not wearing the required PPE (PersonsWithoutRequiredEquipment), and the persons for whom a determination could not be made (PersonsIndeterminate).
RecognizeCelebritiesResponse
RegionOfInterest
Specifies a location within the frame that Rekognition checks for text. Uses a BoundingBox object to set a region of the screen.
Rekognition
This is the Amazon Rekognition API reference.
S3Object
Provides the S3 bucket name and object name.
SearchFacesByImageResponse
SearchFacesResponse
SegmentDetection
A technical cue or shot detection segment detected in a video. An array of SegmentDetection objects containing all segments detected in a stored video is returned by GetSegmentDetection.
SegmentTypeInfo
Information about the type of a segment requested in a call to StartSegmentDetection. An array of SegmentTypeInfo objects is returned by the response from GetSegmentDetection.
ShotSegment
Information about a shot detection segment detected in a video. For more information, see SegmentDetection.
Smile
Indicates whether or not the face is smiling, and the confidence level in the determination.
StartCelebrityRecognitionResponse
StartContentModerationResponse
StartFaceDetectionResponse
StartFaceSearchResponse
StartLabelDetectionResponse
StartPersonTrackingResponse
StartProjectVersionResponse
StartSegmentDetectionFilters
Filters applied to the technical cue or shot detection segments. For more information, see StartSegmentDetection.
StartSegmentDetectionResponse
StartShotDetectionFilter
Filters for the shot detection segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
StartStreamProcessorResponse
StartTechnicalCueDetectionFilter
Filters for the technical segments returned by GetSegmentDetection. For more information, see StartSegmentDetectionFilters.
StartTextDetectionFilters
A set of optional parameters that specify the criteria text must meet to be included in your response. WordFilter looks at a word's height, width, and minimum confidence. RegionOfInterest lets you set a specific region of the screen to look for text in.
StartTextDetectionResponse
StopProjectVersionResponse
StopStreamProcessorResponse
StreamProcessor
An object that recognizes faces in a streaming video. An Amazon Rekognition stream processor is created by a call to CreateStreamProcessor. The request parameters for CreateStreamProcessor describe the Kinesis video stream source for the streaming video, face recognition parameters, and where to stream the analysis results.
StreamProcessorInput
Information about the source streaming video.
StreamProcessorOutput
Information about the Amazon Kinesis Data Streams stream to which an Amazon Rekognition Video stream processor streams the results of a video analysis. For more information, see CreateStreamProcessor in the Amazon Rekognition Developer Guide.
StreamProcessorSettings
Input parameters used to recognize faces in a streaming video analyzed by an Amazon Rekognition stream processor.
Summary
The S3 bucket that contains the training summary. The training summary includes aggregated evaluation metrics for the entire testing dataset and metrics for each individual label.
Sunglasses
Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
TechnicalCueSegment
Information about a technical cue segment. For more information, see SegmentDetection.
TestingData
The dataset used for testing. Optionally, if AutoCreate is set, Amazon Rekognition Custom Labels creates a testing dataset using an 80/20 split of the training dataset.
TestingDataResult
SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during testing.
TextDetection
Information about a word or line of text detected by DetectText.
TextDetectionResult
Information about text detected in a video. Includes the detected text, the time in milliseconds from the start of the video that the text was detected, and where it was detected on the screen.
TrainingData
The dataset used for training.
TrainingDataResult
SageMaker Ground Truth format manifest files for the input, output, and validation datasets that are used and created during training.
UnindexedFace
A face that IndexFaces detected, but didn't index. Use the Reasons response attribute to determine why a face wasn't indexed.
ValidationData
Contains the Amazon S3 bucket location of the validation data for a model training job.
Video
Video file stored in an Amazon S3 bucket. Amazon Rekognition Video start operations such as StartLabelDetection use Video to specify a video for analysis. The supported file formats are .mp4, .mov, and .avi.
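A hedged sketch of specifying a stored video; the bucket and key are hypothetical, and the named parameters mirror the AWS shapes (assumptions about this generated library):

    // Point a start operation such as StartLabelDetection at a stored video.
    final video = Video(
      s3Object: S3Object(bucket: 'my-bucket', name: 'videos/clip.mp4'),
    );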
VideoMetadata
Information about a video that Amazon Rekognition analyzed. VideoMetadata is returned in every page of paginated responses from an Amazon Rekognition Video operation.

Extensions

AttributeFromString on String
AttributeValueExtension on Attribute
BodyPartFromString on String
BodyPartValueExtension on BodyPart
CelebrityRecognitionSortByFromString on String
CelebrityRecognitionSortByValueExtension on CelebrityRecognitionSortBy
ContentClassifierFromString on String
ContentClassifierValueExtension on ContentClassifier
ContentModerationSortByFromString on String
ContentModerationSortByValueExtension on ContentModerationSortBy
EmotionNameFromString on String
EmotionNameValueExtension on EmotionName
FaceAttributesFromString on String
FaceAttributesValueExtension on FaceAttributes
FaceSearchSortByFromString on String
FaceSearchSortByValueExtension on FaceSearchSortBy
GenderTypeFromString on String
GenderTypeValueExtension on GenderType
LabelDetectionSortByFromString on String
LabelDetectionSortByValueExtension on LabelDetectionSortBy
LandmarkTypeFromString on String
LandmarkTypeValueExtension on LandmarkType
OrientationCorrectionFromString on String
OrientationCorrectionValueExtension on OrientationCorrection
PersonTrackingSortByFromString on String
PersonTrackingSortByValueExtension on PersonTrackingSortBy
ProjectStatusFromString on String
ProjectStatusValueExtension on ProjectStatus
ProjectVersionStatusFromString on String
ProjectVersionStatusValueExtension on ProjectVersionStatus
ProtectiveEquipmentTypeFromString on String
ProtectiveEquipmentTypeValueExtension on ProtectiveEquipmentType
QualityFilterFromString on String
QualityFilterValueExtension on QualityFilter
ReasonFromString on String
ReasonValueExtension on Reason
SegmentTypeFromString on String
SegmentTypeValueExtension on SegmentType
StreamProcessorStatusFromString on String
StreamProcessorStatusValueExtension on StreamProcessorStatus
TechnicalCueTypeFromString on String
TechnicalCueTypeValueExtension on TechnicalCueType
TextTypesFromString on String
TextTypesValueExtension on TextTypes
VideoJobStatusFromString on String
VideoJobStatusValueExtension on VideoJobStatus
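These extensions convert between the raw strings used on the wire and the library's enum values. A hedged sketch of the round trip; the method names (toEmotionName, toValue) and the enum value follow the generator's usual naming and are assumptions:

    // Parse a wire value into the enum, then serialize it back.
    final emotion = 'HAPPY'.toEmotionName(); // assumed: EmotionName.happy
    final raw = emotion.toValue();           // back to 'HAPPY'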