startContentModeration method
- required Video video,
- String? clientRequestToken,
- String? jobTag,
- double? minConfidence,
- NotificationChannel? notificationChannel,
Starts asynchronous detection of unsafe content in a stored video.
Amazon Rekognition Video can moderate content in a video stored in an
Amazon S3 bucket. Use Video to specify the bucket name and the filename
of the video. StartContentModeration returns a job identifier (JobId)
which you use to get the results of the analysis. When unsafe content
analysis is finished, Amazon Rekognition Video publishes a completion
status to the Amazon Simple Notification Service topic that you specify
in NotificationChannel.
To get the results of the unsafe content analysis, first check that the
status value published to the Amazon SNS topic is SUCCEEDED. If so, call
GetContentModeration and pass the job identifier (JobId) from the
initial call to StartContentModeration.
For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
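The start-then-fetch workflow described above can be sketched as follows. This is a minimal illustration rather than production code: the bucket name, object key, and client variable (rekognition) are placeholders, and it assumes the Video, S3Object, and getContentModeration definitions from this same client library.

```dart
// Sketch: start moderation on a stored video, then fetch results by JobId.
// 'my-bucket' and 'movie.mp4' are placeholder values.
final start = await rekognition.startContentModeration(
  video: Video(
    s3Object: S3Object(bucket: 'my-bucket', name: 'movie.mp4'),
  ),
  minConfidence: 60,
);

// After the SUCCEEDED status is published to the SNS topic, pass the
// JobId from the initial call to GetContentModeration.
final results = await rekognition.getContentModeration(jobId: start.jobId!);
for (final detection in results.moderationLabels ?? []) {
  print('${detection.timestamp} ms: ${detection.moderationLabel?.name}');
}
```

In practice the wait for SUCCEEDED happens by subscribing to the SNS topic given in NotificationChannel, not by polling in a loop.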
May throw AccessDeniedException.
May throw IdempotentParameterMismatchException.
May throw InvalidParameterException.
May throw InvalidS3ObjectException.
May throw InternalServerError.
May throw VideoTooLargeException.
May throw ProvisionedThroughputExceededException.
May throw LimitExceededException.
May throw ThrottlingException.
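A hedged sketch of guarding the call against some of the more common failures; it assumes these exception classes are exported by this client under the names listed above, and that rekognition and video are already set up:

```dart
try {
  final start = await rekognition.startContentModeration(video: video);
  print('Started moderation job ${start.jobId}');
} on InvalidS3ObjectException {
  // The Video's S3 bucket/key doesn't resolve to a readable object.
  print('Check the bucket name, object key, and bucket permissions.');
} on VideoTooLargeException {
  // The stored video exceeds the service's size limit for analysis.
  print('The video is too large to analyze.');
} on ThrottlingException {
  // The service is rate-limiting this account; retry with backoff.
  print('Request throttled; retry later.');
}
```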
Parameter video:
The video in which you want to detect unsafe content. The video must be
stored in an Amazon S3 bucket.
Parameter clientRequestToken:
Idempotent token used to identify the start request. If you use the same
token with multiple StartContentModeration requests, the same JobId is
returned. Use ClientRequestToken to prevent the same job from being
accidentally started more than once.
Parameter jobTag:
An identifier you specify that's returned in the completion notification
that's published to your Amazon Simple Notification Service topic. For
example, you can use JobTag to group related jobs and identify them in
the completion notification.
Parameter minConfidence:
Specifies the minimum confidence that Amazon Rekognition must have in
order to return a moderated content label. Confidence represents how
certain Amazon Rekognition is that the moderated content is correctly
identified. 0 is the lowest confidence; 100 is the highest. Amazon
Rekognition doesn't return any moderated content labels with a
confidence level lower than this specified value. If you don't specify
MinConfidence, GetContentModeration returns labels with confidence
values greater than or equal to 50 percent.
Parameter notificationChannel:
The Amazon SNS topic ARN to which you want Amazon Rekognition Video to
publish the completion status of the unsafe content analysis.
Implementation
Future<StartContentModerationResponse> startContentModeration({
  required Video video,
  String? clientRequestToken,
  String? jobTag,
  double? minConfidence,
  NotificationChannel? notificationChannel,
}) async {
  ArgumentError.checkNotNull(video, 'video');
  _s.validateStringLength(
    'clientRequestToken',
    clientRequestToken,
    1,
    64,
  );
  _s.validateStringLength(
    'jobTag',
    jobTag,
    1,
    256,
  );
  _s.validateNumRange(
    'minConfidence',
    minConfidence,
    0,
    100,
  );
  final headers = <String, String>{
    'Content-Type': 'application/x-amz-json-1.1',
    'X-Amz-Target': 'RekognitionService.StartContentModeration',
  };
  final jsonResponse = await _protocol.send(
    method: 'POST',
    requestUri: '/',
    exceptionFnMap: _exceptionFns,
    // TODO queryParams
    headers: headers,
    payload: {
      'Video': video,
      if (clientRequestToken != null)
        'ClientRequestToken': clientRequestToken,
      if (jobTag != null) 'JobTag': jobTag,
      if (minConfidence != null) 'MinConfidence': minConfidence,
      if (notificationChannel != null)
        'NotificationChannel': notificationChannel,
    },
  );
  return StartContentModerationResponse.fromJson(jsonResponse.body);
}