# ai_sensitive_content_classifier
A Dart/Flutter package that classifies text and images for sensitive content using the Google Generative AI (Gemini) API. Supports input as plain text, `Uint8List`, `ui.Image`, or a Flutter `ImageProvider`.
## Features
- Classifies content (text, images, or raw bytes) into categories such as `gore`, `violence`, `nudity`, `racism`, `hateSpeech`, `offensive`, or `notSensitive`
- Powered by Google Gemini (`gemini-2.0-flash-lite` by default)
- Analyzes Flutter images directly: accepts `ui.Image`, `Uint8List`, or `ImageProvider` (e.g., `AssetImage`, `NetworkImage`); see the sketch after this list
- Easily customizable model config (temperature, topP, topK, maxOutputTokens, etc.)
- JSON schema validation for structured, safer handling of AI responses
- No content filtering: all Gemini safety filters are disabled so the sensitivity analysis always receives the full content
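This README documents only the text-input call, so the image call below is a hedged sketch: the named parameter `image` is hypothetical and should be checked against the package's actual API. Only the accepted input types (`Uint8List`, `ui.Image`, `ImageProvider`) are confirmed above.

```dart
import 'dart:typed_data';

import 'package:ai_sensitive_content_classifier/ai_sensitive_content_classifier.dart';
import 'package:flutter/services.dart' show rootBundle;

Future<void> classifyAssetImage(AiSensitiveContentDetector classifier) async {
  // Load raw image bytes from the asset bundle.
  final ByteData data = await rootBundle.load('assets/photo.jpg');
  final Uint8List bytes = data.buffer.asUint8List();

  // Hypothetical parameter name `image`; the exact signature is not
  // shown in this README, so verify it against the package docs.
  final result = await classifier.analyseIsSensitiveContent(image: bytes);

  print(result?.isSensitive); // true/false
}
```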
## Installation
Add this to your `pubspec.yaml`:

```yaml
dependencies:
  ai_sensitive_content_classifier: ^0.1.3
```
Then run:

```sh
flutter pub get
```
## Usage
```dart
import 'package:ai_sensitive_content_classifier/ai_sensitive_content_classifier.dart';

final classifier = AiSensitiveContentDetector(
  apiKey: 'your-gemini-api-key',
);

final result = await classifier.analyseIsSensitiveContent(
  text: 'This is a violent message',
);

print(result?.isSensitive); // true/false
print(result?.textClassification); // e.g., "violence"
```
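A small sketch of acting on the result in an app. Only `isSensitive` and `textClassification` are confirmed by the usage example above; the null and error handling policy shown here is an illustrative choice, not part of the package.

```dart
/// Returns true when [message] should be hidden from the UI.
/// Fails open (shows the message) when classification is unavailable;
/// pick the opposite policy if your app must err on the side of caution.
Future<bool> shouldHideMessage(
  AiSensitiveContentDetector classifier,
  String message,
) async {
  try {
    final result = await classifier.analyseIsSensitiveContent(text: message);
    if (result == null) return false;

    if (result.isSensitive) {
      // Log the category (e.g., "violence") for moderation review.
      print('Flagged as ${result.textClassification}');
      return true;
    }
    return false;
  } catch (e) {
    print('Classification failed: $e');
    return false;
  }
}
```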
## Configuration

All model parameters can be tuned via the constructor:
```dart
AiSensitiveContentDetector({
  required String apiKey,
  String model = 'gemini-2.0-flash-lite',
  double temperature = 0.1,
  double topP = 0.95,
  int topK = 64,
  int maxOutputTokens = 8192,
})
```
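Every parameter except `apiKey` has a default, so a custom setup only overrides what it needs. For example, a more deterministic configuration (the model name below is illustrative; any Gemini model identifier can be passed):

```dart
final strictClassifier = AiSensitiveContentDetector(
  apiKey: 'your-gemini-api-key',
  model: 'gemini-2.0-flash', // illustrative; defaults to gemini-2.0-flash-lite
  temperature: 0.0,          // fully deterministic sampling
  topP: 0.9,
  topK: 40,
  maxOutputTokens: 2048,     // short classification responses need less
);
```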