ModerationLabel class
Provides information about a single type of unsafe content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Detecting Unsafe Content in the Amazon Rekognition Developer Guide.
Constructors
- ModerationLabel({double? confidence, String? name, String? parentName})
- ModerationLabel.fromJson(Map<String, dynamic> json)
  factory
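To illustrate how the two constructors above fit together, here is a minimal sketch. The `ModerationLabel` stand-in below is hypothetical, written only so the example is self-contained; the JSON field casing (`Confidence`, `Name`, `ParentName`) is an assumption based on the Amazon Rekognition API's wire format, and the real generated `fromJson` may differ.

```dart
import 'dart:convert';

// Hypothetical stand-in mirroring the documented ModerationLabel members,
// so this sketch runs without the generated package.
class ModerationLabel {
  final double? confidence;
  final String? name;
  final String? parentName;

  ModerationLabel({this.confidence, this.name, this.parentName});

  // Assumed mapping from Rekognition-style JSON keys to the Dart fields.
  factory ModerationLabel.fromJson(Map<String, dynamic> json) =>
      ModerationLabel(
        confidence: (json['Confidence'] as num?)?.toDouble(),
        name: json['Name'] as String?,
        parentName: json['ParentName'] as String?,
      );
}

void main() {
  // A fragment shaped like one entry of a DetectModerationLabels response.
  final json = jsonDecode(
    '{"Confidence": 99.2, "Name": "Graphic Violence", '
    '"ParentName": "Violence"}',
  ) as Map<String, dynamic>;

  final label = ModerationLabel.fromJson(json);
  print('${label.name} (parent: ${label.parentName}), '
      'confidence ${label.confidence}');
}
```

A top-level label (no parent in the taxonomy) would arrive with `ParentName` set to the empty string, matching the `parentName` property's documentation below.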
Properties
- confidence → double?
  Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.
  final
- hashCode → int
  The hash code for this object.
  no setter, inherited
- name → String?
  The label name for the type of unsafe content detected in the image.
  final
- parentName → String?
  The name of the parent label. Labels at the top level of the hierarchy have the parent label "".
  final
- runtimeType → Type
  A representation of the runtime type of the object.
  no setter, inherited
Methods
- noSuchMethod(Invocation invocation) → dynamic
  Invoked when a nonexistent method or property is accessed.
  inherited
- toString() → String
  A string representation of this object.
  inherited
Operators
- operator ==(Object other) → bool
  The equality operator.
  inherited