OpenAIModeration class

OpenAI Content Moderation capability implementation.

This class handles content moderation functionality for OpenAI providers.

Implemented types

Constructors

OpenAIModeration.new(OpenAIClient client, OpenAIConfig config)
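
A minimal construction sketch. How OpenAIConfig and OpenAIClient themselves are created is not shown on this page, so the apiKey parameter and the OpenAIClient(config) call below are assumptions; only the OpenAIModeration(client, config) constructor comes from the signature above.

  // Assumed setup: config/client constructor parameters are illustrative only.
  final config = OpenAIConfig(apiKey: 'YOUR_API_KEY');
  final client = OpenAIClient(config);
  final moderation = OpenAIModeration(client, config);
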

Properties

client → OpenAIClient
final
config → OpenAIConfig
final
hashCode → int
The hash code for this object.
no setter, inherited
runtimeType → Type
A representation of the runtime type of the object.
no setter, inherited

Methods

analyzeContent(String text, {String? model}) → Future<ModerationAnalysis>
Get detailed moderation analysis
analyzeMultipleContents(List<String> texts, {String? model}) → Future<List<ModerationAnalysis>>
Batch analyze multiple texts
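
A rough usage sketch for the two analysis methods above, run from an async context; the fields of ModerationAnalysis are not listed on this page, so the results are only printed.

  // Detailed analysis of a single text.
  final analysis = await moderation.analyzeContent('some user-submitted text');
  print(analysis);

  // Batch analysis: one ModerationAnalysis per input text.
  final analyses = await moderation.analyzeMultipleContents(
    ['first comment', 'second comment'],
  );
  print('Analyzed ${analyses.length} texts');
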
filterSafeContent(List<String> texts, {String? model}) → Future<List<String>>
Filter out unsafe content from a list
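
A sketch of filtering a batch down to the texts that pass moderation, assuming (per the description above) the returned list contains only the inputs that were not flagged.

  final candidates = ['a harmless comment', 'a possibly unsafe comment'];
  final safe = await moderation.filterSafeContent(candidates);
  print('${safe.length} of ${candidates.length} texts passed moderation');
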
getModerationStats(List<String> texts, {String? model}) → Future<ModerationStats>
Get moderation statistics for a batch of texts
hasUnsafeContent(List<String> texts, {String? model}) → Future<bool>
Check if any text in the list is unsafe
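
A sketch combining the two batch helpers above; ModerationStats fields are not documented on this page, so the stats object is only printed.

  final batch = ['comment one', 'comment two', 'comment three'];
  if (await moderation.hasUnsafeContent(batch)) {
    final stats = await moderation.getModerationStats(batch);
    print('Batch contains flagged content: $stats');
  }
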
isTextSafe(String text, {String? model}) → Future<bool>
Check if text is safe (not flagged)
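
A sketch of using isTextSafe as a simple pre-send guard inside an async function.

  final input = 'user-provided message';
  if (await moderation.isTextSafe(input)) {
    // Not flagged: safe to pass the text on to the rest of the pipeline.
  } else {
    print('Message rejected by moderation');
  }
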
moderate(ModerationRequest request) → Future<ModerationResponse>
Moderate content for policy violations
override
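
A sketch of the lower-level moderate call. The ModerationRequest constructor is not documented on this page, so the input parameter name below is hypothetical.

  // 'input' is a hypothetical ModerationRequest parameter name.
  final response = await moderation.moderate(
    ModerationRequest(input: 'text to check'),
  );
  print(response);
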
moderateText(String text, {String? model}) → Future<ModerationResult>
Moderate a single text input
moderateTexts(List<String> texts, {String? model}) → Future<List<ModerationResult>>
Moderate multiple text inputs
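
A sketch of the single and batch moderation calls; ModerationResult fields are not listed on this page, so the results are only printed. The optional model parameter is shown with an example value, which may differ per provider.

  // Single input; model is optional (example value shown).
  final result = await moderation.moderateText(
    'one piece of text',
    model: 'omni-moderation-latest',
  );
  print(result);

  // Multiple inputs in one call.
  final results = await moderation.moderateTexts(['a', 'b', 'c']);
  print('${results.length} results returned');
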
noSuchMethod(Invocation invocation) → dynamic
Invoked when a nonexistent method or property is accessed.
inherited
toString() → String
A string representation of this object.
inherited

Operators

operator ==(Object other) → bool
The equality operator.
inherited