# xPunge
On-device NSFW detection for Flutter. No uploads. No cloud.
Pass image bytes, get back labels, confidence scores, and bounding boxes. Everything runs locally — images never leave the device.
- Android: LiteRT (CPU) — works on every ARM64 device
- iOS: CoreML + Vision — uses Neural Engine when available
- 5.1 MB model, ~39 ms CPU latency, no network round-trip
## Quick start

```yaml
# pubspec.yaml
dependencies:
  xpunge: ^0.1.0
```
```dart
import 'package:xpunge/xpunge.dart';

// Call once at app startup
await XPunge.initialize('xp1.YOUR_API_KEY');

// Analyze an image (JPEG, PNG, HEIF, WebP)
final Uint8List bytes = await file.readAsBytes();
final List<Detection> detections = await XPunge.analyzeImage(bytes);

for (final d in detections) {
  print('${d.label} — ${(d.confidence * 100).toStringAsFixed(1)}%');
  print('box: ${d.boundingBox}'); // normalized Rect, free tier = Rect.zero
}
```
Get an API key at xpunge.markatlarge.com.
## API

### `XPunge.initialize(String apiKey)`

Validates the API key and decrypts the on-device model. Call once before any detection — typically in `main()` or your app's init sequence.

Throws `XPungeException` if the key is invalid, expired, or registered to a different app.
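Since `initialize` can throw, startup code usually wants a guard. A minimal sketch, assuming you degrade gracefully when the key is rejected — the fallback policy here is illustrative, not part of the SDK:

```dart
import 'package:xpunge/xpunge.dart';

/// Returns true if detection is available, false if the key was rejected.
Future<bool> initDetection(String apiKey) async {
  try {
    await XPunge.initialize(apiKey);
    return true;
  } on XPungeException catch (e) {
    // Invalid, expired, or mis-registered key: run without detection
    // (or surface an error) instead of crashing at launch.
    print('xPunge init failed: $e');
    return false;
  }
}
```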
### `XPunge.analyzeImage(Uint8List bytes)` → `Future<List<Detection>>`

Runs detection on raw image bytes. Accepts JPEG, PNG, HEIF, and WebP.
### `XPunge.analyzeFile(File file)` → `Future<List<Detection>>`

Convenience wrapper — reads the file and calls `analyzeImage`.
### `XPunge.analyzeXFile(XFile file)` → `Future<List<Detection>>`

Convenience wrapper for image_picker users — reads the `XFile` and calls `analyzeImage`. No extra conversion step needed.
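For example, wiring this up with image_picker might look like the sketch below. `ImagePicker` and `ImageSource` come from the image_picker package; the handling branch is yours to fill in:

```dart
import 'package:image_picker/image_picker.dart';
import 'package:xpunge/xpunge.dart';

Future<void> scanPickedImage() async {
  final XFile? picked =
      await ImagePicker().pickImage(source: ImageSource.gallery);
  if (picked == null) return; // user cancelled the picker

  final detections = await XPunge.analyzeXFile(picked);
  if (detections.isNotEmpty) {
    // e.g. block the upload, blur the preview, or warn the user
  }
}
```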
### `XPunge.tier` → `XPungeTier`

Returns `.free` or `.paid`. Set after `initialize()`.
## Video screening
There is no dedicated video API — pass decoded frames to analyzeImage at whatever rate suits your use case. Each call counts as one image toward your monthly quota, so you control the cost by controlling how often you sample.
```dart
import 'dart:async'; // Timer
import 'dart:typed_data';
import 'package:video_player/video_player.dart';
import 'package:xpunge/xpunge.dart';

// Example: sample one frame per second while a video is playing.
// Swap in any video package that lets you extract raw frame bytes.
Timer? _scanTimer;

void startVideoScan(VideoPlayerController controller) {
  _scanTimer = Timer.periodic(const Duration(seconds: 1), (_) async {
    final Uint8List? frameBytes = await extractCurrentFrame(controller);
    if (frameBytes == null) return;
    final detections = await XPunge.analyzeImage(frameBytes);
    if (detections.isNotEmpty) {
      // Handle explicit content — pause playback, blur frame, etc.
    }
  });
}

void stopVideoScan() => _scanTimer?.cancel();
```
Sample every second, every two seconds, on scene changes — the choice is yours.
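To budget quota ahead of time, the arithmetic is simply video length divided by sampling interval. A quick sketch — `framesForVideo` is a hypothetical helper for illustration, not part of the SDK:

```dart
// Images consumed = video length / sampling interval.
int framesForVideo(Duration length, Duration sampleInterval) =>
    length.inMilliseconds ~/ sampleInterval.inMilliseconds;

// A 3-minute video sampled once per second costs 180 images of quota,
// so the free tier's 1,000 images/month cover about five such scans.
```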
## Detection result

```dart
class Detection {
  final String label;      // 'breast' | 'penis' | 'anus' | 'rear' | 'vagina'
  final double confidence; // 0.0–1.0
  final Rect boundingBox;  // normalized LTWH (0.0–1.0); Rect.zero on free tier
}
```
Detections below the confidence threshold (0.15) are filtered out before results are returned.
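If 0.15 is too permissive for your app, you can tighten the cutoff yourself after the call. A sketch, where the 0.5 threshold is an arbitrary example and `detections` is the list returned by `analyzeImage`:

```dart
const double kMinConfidence = 0.5; // stricter than the SDK's 0.15 floor

final flagged = detections
    .where((d) => d.confidence >= kMinConfidence)
    .toList();
```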
## Drawing bounding boxes (paid tier)
Bounding box coordinates are normalized (0.0–1.0), so you need to scale them to the rendered image size. If the image is displayed with BoxFit.contain, it may not fill the full widget — account for the offset or boxes will be misaligned.
```dart
import 'dart:math' as math;
import 'package:flutter/material.dart';
import 'package:xpunge/xpunge.dart';

class DetectionOverlayPainter extends CustomPainter {
  final List<Detection> detections;
  final int imageWidth;
  final int imageHeight;

  const DetectionOverlayPainter(this.detections, this.imageWidth, this.imageHeight);

  @override
  void paint(Canvas canvas, Size size) {
    if (imageWidth == 0 || imageHeight == 0) return;

    // Compute the rect the image actually occupies inside the widget (BoxFit.contain).
    final scale = math.min(size.width / imageWidth, size.height / imageHeight);
    final renderedW = imageWidth * scale;
    final renderedH = imageHeight * scale;
    final offsetX = (size.width - renderedW) / 2;
    final offsetY = (size.height - renderedH) / 2;

    final paint = Paint()
      ..color = const Color(0xFFFF3B30)
      ..style = PaintingStyle.stroke
      ..strokeWidth = 2;

    for (final d in detections) {
      final rect = Rect.fromLTWH(
        offsetX + d.boundingBox.left * renderedW,
        offsetY + d.boundingBox.top * renderedH,
        d.boundingBox.width * renderedW,
        d.boundingBox.height * renderedH,
      );
      canvas.drawRect(rect, paint);
    }
  }

  @override
  bool shouldRepaint(DetectionOverlayPainter old) =>
      old.detections != detections;
}
```
Use it in a Stack over your image:
```dart
Stack(
  fit: StackFit.expand,
  children: [
    Image.memory(imageBytes, fit: BoxFit.contain),
    if (XPunge.tier == XPungeTier.paid)
      CustomPaint(
        painter: DetectionOverlayPainter(detections, imageWidth, imageHeight),
      ),
  ],
)
```
`imageWidth` and `imageHeight` are the original pixel dimensions of the image, not the widget size. Decode them once after loading:

```dart
import 'dart:ui' as ui;

final codec = await ui.instantiateImageCodec(bytes);
final frame = await codec.getNextFrame();
final imageWidth = frame.image.width;
final imageHeight = frame.image.height;
frame.image.dispose();
codec.dispose();
```
## Tiers
| Plan | Price | Images / month | Bounding boxes |
|---|---|---|---|
| Free | $0 | 1,000 | No |
| Basic | $29 | 50,000 | Yes |
| Pro | $79 | 200,000 | Yes |
| Growth | $149 | 500,000 | Yes |
| Scale | $499 | 5,000,000 | Yes |
| Enterprise | Custom | 5M+ | Yes |
All paid tiers include bounding boxes.
Free tier: detection labels and confidence only — `boundingBox` is always `Rect.zero`.
Manage your subscription at xpunge.markatlarge.com · Terms of Service
## Privacy
- Images are processed entirely on-device
- No image data is ever transmitted
- The only network call is a lightweight usage count reported at app launch (one integer — no image content)
- The model is encrypted at rest in the app bundle and decrypted to memory only at runtime
## Platform notes

### Android
- Minimum SDK: 24 (Android 7.0)
- CPU-only inference — GPU delegate omitted due to native crash risk on many devices
- Runtime: LiteRT 1.4.0
### iOS
- Minimum deployment target: iOS 14.0
- Compute units: `.all` (CoreML routes to Neural Engine, GPU, or CPU as available)
- Runtime: CoreML + Vision framework
## Example
A full working example is in the example/ directory — image picker, detection overlay, and tier-aware bounding box rendering.
```bash
cd example
flutter run
```