flutter_face_liveness 3.1.0
Production-ready Flutter SDK for face detection, liveness verification, and anti-spoof protection using ML Kit and TensorFlow Lite.
3.1.0 #
New Features #
- **8-signal on-device replay detection** — five new pure-Dart pixel-level signals run every frame alongside MiniFASNet. The final score is the minimum of all available signals, so an attacker must simultaneously defeat every layer:

  | Signal | File | What it catches |
  | --- | --- | --- |
  | S5 – `ReplayAnalyzer` | `analysis/replay_analyzer.dart` | Looped video (perceptual fingerprint), stabilised video (angular micro-jitter), periodic replay (motion entropy), frozen replay (blink consistency) |
  | S6 – `ScreenArtifactDetector` | `analysis/screen_artifact_detector.dart` | LCD/OLED glare (specular highlight density), screen backlight (skin chromatic warmth, iOS only), steady backlight (temporal luma stability) |
  | S7 – `OpticalFlowAnalyzer` | `analysis/optical_flow_analyzer.dart` | Static photo (stasis — all blocks near-zero MAD), rigid-body replay (low spatial variance of block motion energies) |
  | S8 – `FaceGeometryAnalyzer` | `analysis/face_geometry_analyzer.dart` | Flat surface (3-D depth via cos(yaw) Pearson correlation, eye-ratio consistency), no motion (nose landmark velocity), suspicious asymmetry (eye-open symmetry) |

- **Face landmarks** — `FaceDetectorOptions.enableLandmarks: true` is now active. `FaceData` exposes 10 `LandmarkPoint?` (`{double x, double y}`) fields: `leftEyePosition`, `rightEyePosition`, `noseBasePosition`, `leftCheekPosition`, `rightCheekPosition`, `leftMouthPosition`, `rightMouthPosition`, `bottomMouthPosition`, `leftEarPosition`, `rightEarPosition`.
- **New `LivenessController` score getters** — `liveReplayScore`, `liveScreenScore`, `liveFlowScore`, `liveGeoScore` expose rolling per-signal scores for custom overlays.
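The minimum-of-signals fusion above can be sketched in a few lines. This is an illustrative Python sketch, not the SDK's actual Dart code; signal names in the example dictionary are placeholders:

```python
# Illustrative sketch of minimum-of-signals fusion: the weakest available
# signal sets the final score, so an attacker must defeat every layer at once.

def fuse_scores(signals: dict) -> float:
    """Per-signal liveness scores in 0.0-1.0; None means unavailable."""
    available = [s for s in signals.values() if s is not None]
    if not available:
        return 0.0  # no liveness evidence at all
    return min(available)

scores = {"minifas": 0.91, "replay": 0.88, "screen": None,
          "flow": 0.95, "geometry": 0.23}
print(fuse_scores(scores))  # -> 0.23 (the failing geometry layer dominates)
```

Taking the minimum rather than an average means a high score on seven signals cannot mask a failure on the eighth.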
Improvements #
- **`openMouth` detection faster and more reliable** — replaced the single-frame bbox delta (which required an 8% jump in one frame and was too strict) with a 6-frame rolling-median baseline comparison at a 5% threshold, held for 2 frames. Added `smilingProbability > 0.65` as a secondary OR-signal (ML Kit raises this when teeth are visible). Net result: detection fires in ~2 frames (~100 ms at 20 fps).
- **MiniFASNet preprocessing corrected** — the `NormalisedMiniFAS` wrapper expects input in `[−1, 1]`; the previous wrapper-compensation formula produced `[−4.6, +4.3]`, which collapsed outputs to a degenerate ~0.94 for all inputs. Changed to simple BGR `p / 127.5 − 1.0`.
- **Debug overlay extended** — `showDebugOverlay: true` now shows all 8 signals (`VR-B`, `LAP`, `HET`, `TF`, `RA`, `SCR`, `FLOW`, `GEO`) with inline `⚠`/`ok` indicators.
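The corrected normalisation is a one-liner. A minimal sketch (function name is illustrative, not the SDK's):

```python
# p / 127.5 - 1.0 maps a BGR byte (0..255) into the [-1, 1] range that
# MiniFASNet-style models expect, fixing the out-of-range [-4.6, +4.3] input.

def normalise_pixel(p: int) -> float:
    """Map a raw channel byte into the model's expected [-1, 1] range."""
    return p / 127.5 - 1.0

print(normalise_pixel(0), normalise_pixel(255))  # -> -1.0 1.0
```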
3.0.0 #
New Features #
- **Video Replay Attack Detection** — `enableVideoReplayDetection: true` adds a second TFLite model (MiniFASNet-V2, 1.7 MB) that runs alongside the existing anti-spoof model to detect pre-recorded video replay attacks. It auto-downloads and caches on first use — no manual model management.
  - `LivenessConfig.enableVideoReplayDetection` (default `false`)
  - `LivenessConfig.videoReplayThreshold` (default `0.50`) — a score below this flags `videoReplayDetected: true`
  - `LivenessConfig.videoReplayModelPath` / `videoReplayModelUrl` / `videoReplayInputSize` — custom model support
  - `LivenessResult.videoReplayScore` — raw MiniFASNet real-face probability (0.0–1.0)
  - `LivenessResult.videoReplayDetected` — `true` when a video replay attack is flagged
  - `VideoReplayModelDownloader` — streaming HTTP download with progress, primary + fallback URL, cache validation
- **Deepfake threshold** — `LivenessConfig.tfliteDeepfakeThreshold` (default `0.40`). `deepfakeDetected` is now correctly set based on the TFLite real-face score vs this threshold.
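Both flags follow the same "low real-face score means attack" convention. A minimal sketch of the two threshold checks, assuming the documented defaults (the function name is illustrative, not part of the SDK):

```python
# Defaults taken from the changelog: videoReplayThreshold 0.50,
# tfliteDeepfakeThreshold 0.40. Scores are real-face probabilities.
REPLAY_THRESHOLD = 0.50
DEEPFAKE_THRESHOLD = 0.40

def evaluate_scores(video_replay_score: float, tflite_score: float) -> dict:
    """A score *below* its threshold flags the corresponding attack."""
    return {
        "videoReplayDetected": video_replay_score < REPLAY_THRESHOLD,
        "deepfakeDetected": tflite_score < DEEPFAKE_THRESHOLD,
    }

print(evaluate_scores(0.32, 0.85))  # replay flagged, deepfake not
```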
Improvements #
- **Anti-spoof engine upgraded to 9 signals** — two new heuristic signals added:
  - Signal 8 — brightness variance (weight 0.12): screens have a stable backlight; real rooms flicker subtly
  - Signal 9 — motion jitter (weight 0.05): real humans have micro-tremors; video playback is unnaturally smooth
  - Composite threshold raised from 0.45 → 0.50
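A weighted-composite score in the spirit of the 9-signal engine can be sketched as follows. Only the two new weights (0.12 and 0.05) and the 0.50 threshold come from this changelog; the remaining seven weights are hypothetical placeholders chosen to sum to 1.0:

```python
# Hypothetical weights except where noted; the real engine's weights for
# the original seven signals are not published in this changelog.
WEIGHTS = {
    "eye_variance": 0.15, "face_geometry": 0.13, "pose_naturalness": 0.13,
    "eye_open_prob": 0.12, "tracking_continuity": 0.12, "micro_motion": 0.10,
    "frame_quality": 0.08,        # placeholder weights above and here
    "brightness_variance": 0.12,  # Signal 8 (weight from changelog)
    "motion_jitter": 0.05,        # Signal 9 (weight from changelog)
}
COMPOSITE_THRESHOLD = 0.50  # raised from 0.45 in this release

def composite(signals: dict) -> float:
    """Weighted sum of per-signal scores in 0.0-1.0; missing scores count as 0."""
    return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)

def is_live(signals: dict) -> bool:
    return composite(signals) >= COMPOSITE_THRESHOLD
```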
- **TFLite `_singleScore` uses softmax** — replaced raw value clamping with a numerically stable softmax. Fixes incorrect 0% real-face scores when the model outputs raw logits with negative values.
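The standard numerically stable softmax subtracts the maximum logit before exponentiating, which avoids overflow and keeps negative logits meaningful instead of clamping them to zero. A minimal sketch:

```python
import math

def stable_softmax(logits):
    """Convert raw logits into probabilities without overflow."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# [spoof_logit, real_logit] with a negative real logit: clamping raw
# values to [0, 1] would report 0% real; softmax recovers a probability.
spoof, real = stable_softmax([-1.2, -0.3])
print(round(real, 3))  # -> 0.711
```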
Bug Fixes #
- **`_dualScore` spoof fraction inverted** — leaf `[i]=1` means a spoof vote; `realScore` is now correctly `1.0 − spoofFraction` (it was using `spoofFraction` as the real score, giving ~8% for real faces).
- **`deepfakeDetected` always false** — it was never set; now correctly derived from `tfliteScore < tfliteDeepfakeThreshold`.
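The spoof-fraction inversion is easy to see in a sketch (names are illustrative, not the SDK's): with mostly-zero leaf votes, the old code returned the small spoof fraction as the real score.

```python
# Each leaf vote of 1 is a *spoof* vote, so the real-face score is the
# complement of the spoof fraction, not the spoof fraction itself.

def real_score(leaf_votes: list) -> float:
    spoof_fraction = sum(leaf_votes) / len(leaf_votes)
    return 1.0 - spoof_fraction

votes = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # a clearly real face
# The old (inverted) code would have returned ~0.08 for these votes.
print(real_score(votes))
```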
2.9.0 #
New Features #
- **Bundled anti-spoof model — zero-config TFLite** — `enableTFLite: true` is now all you need. The package automatically downloads `FaceAntiSpoofing.tflite` (3.9 MB) on first launch and caches it permanently. No `tfliteModelUrl`, no `tfliteInputSize`, no model file to bundle. Custom models are still fully supported via `tfliteModelPath` / `tfliteModelUrl`.
  - `TFLiteModelDownloader.bundledModelUrl` — package-internal constant; not exposed in the public API.
  - `TFLiteModelDownloader.bundledInputSize` — `256` (required by the bundled FaceAntiSpoofing model).
  - `LivenessConfig.tfliteInputSize` changed from `int` (default `128`) to `int?` (default `null` → resolves to `256` for the bundled model automatically).
  - `LivenessConfig.tfliteModelUrl` — still accepted for custom models; when omitted, the bundled URL is used.
Improvements #
- **TFLite inference moved to a persistent background isolate** — `TFLiteService` now spawns a long-lived `Isolate` that owns the `Interpreter`. All frame preprocessing (pixel iteration, YUV→RGB, bbox crop/resize) and `invoke()` run entirely off the main thread. The camera preview and face-detection pipeline are never blocked, fixing the lag introduced when TFLite was enabled in v2.8.0.
  - `Interpreter.fromBuffer()` is used in the worker isolate, so no Flutter asset bundle is required there.
  - `TransferableTypedData` is used for per-frame image bytes — zero-copy transfer to the worker isolate.
  - The main thread only sends a message and awaits a `Completer`; it yields the event loop while waiting.
- **`_tfliteWarning` banner auto-clears** — the red warning banner now disappears automatically once a successful TFLite inference result is received, rather than persisting for the whole session.
- **Race condition fix — `tfliteScore` always non-null on success** — `LivenessController._onEngineComplete()` now tracks `_tfliteFuture` and awaits it before reading `_lastTfliteScore`. Previously, if the session completed on the same frame that fired the last `unawaited` inference, the score was always `null`.
Bug Fixes #
- **Camera lag and eye-blink detection broken when TFLite enabled** — root cause: `allocateTensors()` was being called on every camera frame (an expensive synchronous native call). Fixed by calling `resizeInputTensor()` + `allocateTensors()` once at model-load time in `load()` and removing them from `run()`. Combined with the isolate move above, the main thread is now completely free of TFLite work.
- **Blink detection: instant fire on close** — `BlinkDetector` previously required the full close → re-open cycle before confirming a blink, adding 150–300 ms of latency. It now fires immediately when both eyes drop below the closed threshold. A `_wasOpenWindowMs = 1500 ms` guard (both eyes must have been clearly open within the last 1.5 s) prevents false positives from naturally droopy eyelids.
- **Blink detection: raised closed threshold to `0.50`** — fast blinks at 20 fps often only drop ML Kit's eye-open probability to 0.45–0.55. The previous threshold of 0.25 (and even 0.40) silently missed these; 0.50 catches them reliably.
- **Blink detection: L/R eye sync window widened to 200 ms** — ML Kit at `FaceDetectorMode.fast` often reports left- and right-eye close events 1–3 frames apart. The previous implementation required both eyes to close in the exact same frame. The new `_eyeSyncWindowMs = 200 ms` window (≈ 4 frames at 20 fps) counts them as simultaneous.
- **Blink debounce lowered to 400 ms** — was 800 ms; a user can retry a missed blink in under half a second.
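The four tuning values above interact, so here is a hedged Python sketch of how such a detector could combine them. This is not the SDK's Dart `BlinkDetector`; the 0.80 "clearly open" level is an assumed value not stated in the changelog:

```python
# Tuning values from this release; OPEN_THRESHOLD is a hypothetical
# "clearly open" level assumed for the was-open guard.
CLOSED_THRESHOLD = 0.50
WAS_OPEN_WINDOW_MS = 1500
EYE_SYNC_WINDOW_MS = 200
DEBOUNCE_MS = 400
OPEN_THRESHOLD = 0.80

class BlinkDetector:
    def __init__(self):
        self.last_open_ms = None
        self.left_closed_ms = None
        self.right_closed_ms = None
        self.last_blink_ms = -DEBOUNCE_MS

    def on_frame(self, now_ms, left_open_prob, right_open_prob):
        if left_open_prob > OPEN_THRESHOLD and right_open_prob > OPEN_THRESHOLD:
            self.last_open_ms = now_ms
        if left_open_prob < CLOSED_THRESHOLD:
            self.left_closed_ms = now_ms
        if right_open_prob < CLOSED_THRESHOLD:
            self.right_closed_ms = now_ms
        if self.left_closed_ms is None or self.right_closed_ms is None:
            return False
        # Fire immediately on close: eyes closed within the sync window,
        # clearly open recently, and past the debounce interval.
        synced = abs(self.left_closed_ms - self.right_closed_ms) <= EYE_SYNC_WINDOW_MS
        was_open = (self.last_open_ms is not None and
                    now_ms - self.last_open_ms <= WAS_OPEN_WINDOW_MS)
        debounced = now_ms - self.last_blink_ms >= DEBOUNCE_MS
        if synced and was_open and debounced:
            self.last_blink_ms = now_ms
            self.left_closed_ms = self.right_closed_ms = None
            return True
        return False
```

Note the absence of a re-open requirement: the blink is confirmed on the closing edge, which is where the 150–300 ms latency saving comes from.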
2.8.0 #
Bug Fixes #
- **TFLite inference was never executed** — `TFLiteService.load()` printed a success log but never instantiated an `Interpreter`; `TFLiteService.run()` returned `null` immediately because `_interpreter` was always `null`; and `LivenessController._processFrame()` never called `_tflite?.run()` during frame processing. Net effect: `LivenessResult.tfliteScore` was always `null` regardless of `enableTFLite: true`.
  - Fixed `TFLiteService.load()` to call `Interpreter.fromFile()` (absolute path) or `Interpreter.fromAsset()` (Flutter asset key) depending on whether `tfliteModelPath` starts with `/`.
  - Fixed `TFLiteService.run()` — it now accepts raw camera frame bytes + face bounding box + sensor orientation, internally crops and resizes the face region to `inputSize × inputSize`, and calls `_interpreter!.runForMultipleInputs()` for real inference.
  - Fixed `LivenessController._processFrame()` to fire `_tflite!.run()` asynchronously on every frame where a face is detected; an `_isTfliteRunning` guard prevents frame queue-up when inference is slower than the camera rate.
  - Fixed `LivenessController._onEngineComplete()` to attach the cached `_lastTfliteScore` to `LivenessResult` via the new `withTfliteScore()` method.
  - Added `LivenessResult.withTfliteScore()` helper (mirrors the existing `withFaceId()` pattern).
  - `captureRawFrame` in `_processFrame()` is now also enabled when `enableTFLite: true` (it was only enabled for `enableFaceId`).
- **`tfliteModelPath` accepted asset paths in docs but required absolute paths in code** — updated the `LivenessConfig.tfliteModelPath` documentation and `TFLiteService.load()` to explicitly support both Flutter asset keys and absolute filesystem paths.
2.7.0 #
Bug Fixes #
- **iOS `headLeft` / `headRight` detection inverted** — the iOS front camera delivers horizontally mirrored BGRA8888 frames; `_buildInputImage()` passes them to ML Kit with `rotation0deg` and no mirror correction. This caused ML Kit to report a flipped `headEulerAngleY` sign: a physical right turn produced positive yaw (mapped to `turnLeft`), and a physical left turn produced negative yaw (mapped to `turnRight`). Fixed in `FaceData.fromFace()` by negating `headEulerAngleY` on `Platform.isIOS`, aligning both platforms to the same convention (positive yaw = user physically turned left). Android is unaffected — ML Kit's sensor-rotation correction already provides the correct sign there.
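The sign convention above can be sketched as follows. This is an illustrative Python sketch, not the SDK's Dart code, and the 20° turn threshold is an assumed value not stated in the changelog:

```python
# On iOS the mirrored front-camera frame flips headEulerAngleY, so the raw
# yaw is negated there to match Android's convention
# (positive yaw = user physically turned left).

def normalised_yaw(raw_yaw: float, is_ios: bool) -> float:
    return -raw_yaw if is_ios else raw_yaw

def classify_turn(yaw: float, threshold: float = 20.0) -> str:
    if yaw > threshold:
        return "turnLeft"
    if yaw < -threshold:
        return "turnRight"
    return "center"

# A physical right turn on iOS arrives as positive raw yaw; after negation
# it correctly maps to turnRight.
print(classify_turn(normalised_yaw(+30.0, is_ios=True)))  # -> turnRight
```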
2.6.0 #
Improvements #
- **Added Swift Package Manager (SPM) support for iOS** — `ios/flutter_face_liveness/Package.swift` added with the correct `Sources/` structure
Bug Fixes #
- Removed an unnecessary `as List<double>` cast in `TFLiteService._runInference()` (line 88) — the type was already correctly inferred from `List.filled`
- Removed the unused `_Float32Reshape` extension on `Float32List` — the `reshape()` call it depended on was already commented out
2.5.0 #
Bug Fixes #
- Added a `library;` declaration to `flutter_face_liveness.dart` — fixes the dangling library doc comment lint warning
- Enclosed the `for` loop body in `face_embedding_model.dart` with braces — fixes the `curly_braces_in_flow_control_structures` lint warning
- Wrapped the home screen `Column` in a `SingleChildScrollView` in the example app — fixes `RenderFlex` overflow on small screens; replaced `Spacer()` with `SizedBox(height: 20)` (`Spacer` is incompatible with scroll views)
2.2.0 #
Improvements #
- Added banner image to README for pub.dev and GitHub documentation
- Upgraded the Android Gradle Plugin to 8.9.1 (required by `androidx.camera:1.6.0`)
- Upgraded the Gradle wrapper to 8.11.1
- Updated `permission_handler` to `^12.0.1` (requires Flutter 3.24+ / Dart 3.5+)
- Example app Face ID history screen — locally stores and displays all registered Face IDs with match/new status
2.0.0 #
New Features #
Persistent Face Identity (Face ID)
- `FaceIdentityService` — assigns a stable `FID-XXXX` identifier to each unique face that persists across all app sessions using `SharedPreferences`
- `FaceEmbeddingModel` — wraps a FaceNet TFLite model (128-dim L2-normalised embeddings); the model is auto-downloaded on first use (~23 MB, cached permanently)
- `FaceModelDownloader` — streaming HTTP download with a progress callback; primary URL + fallback URL; re-downloads automatically if the cached file is corrupted
- `FacePreprocessor` — crops + resizes the face region to 160×160 and normalises pixels to `[-1, 1]`; runs in a `compute()` isolate; handles both NV21 (Android) and BGRA8888 (iOS) input
- `LivenessConfig.enableFaceId` flag (default `false`) — zero-config opt-in; no model file to bundle
- `LivenessConfig.faceIdSimilarityThreshold` (default `0.65`) — cosine-similarity cutoff for same-face matching
- `LivenessResult.faceId` — returned alongside `sessionId` on successful verification
- `LivenessController.clearFaceIdentities()` — removes all stored embeddings (e.g. on logout)
- Embedding adaptation — the stored embedding is updated toward each confirmed new observation (75% old + 25% new, then re-normalised) so the template improves over time
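The matching and adaptation scheme described above can be sketched in plain Python (illustrative names, not the SDK's API): cosine similarity against the stored L2-normalised embedding with the 0.65 cutoff, then a 75%/25% blend that is re-normalised.

```python
import math

def l2_normalise(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(a, b):
    # Both vectors are assumed unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

SIMILARITY_THRESHOLD = 0.65  # faceIdSimilarityThreshold default

def match_and_adapt(stored, observed):
    """Return (is_same_face, updated_embedding)."""
    if cosine(stored, observed) < SIMILARITY_THRESHOLD:
        return False, stored
    blended = [0.75 * s + 0.25 * o for s, o in zip(stored, observed)]
    return True, l2_normalise(blended)
```

The blend pulls the stored template toward recent confirmed observations, which is why cross-session lighting and angle variation hurt matching less over time.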
Isolate-based ML Preprocessing
- `FrameProcessor` — YUV→NV21 conversion, brightness, blur score, and FNV-1a hash all computed in a background `compute()` isolate; the UI thread stays at 60 fps
Frame Quality Validation
- Per-frame brightness check with debounce (6 consecutive bad frames required before reporting `lowLight` / `overExposed`, absorbing camera auto-exposure settling time)
- Platform-correct brightness calculation: iOS BGRA8888 uses BT.601 luminance (`Y = (77R + 150G + 29B) >> 8`); Android NV21 uses the Y-plane directly
- Blur detection via Y-plane variance
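The integer BT.601 formula above is a shift-friendly approximation of `0.299R + 0.587G + 0.114B` (the weights 77/150/29 sum to 256, so the `>> 8` divides back out). A minimal sketch:

```python
def bt601_luma(r: int, g: int, b: int) -> int:
    """Integer BT.601 luminance: (77R + 150G + 29B) >> 8, range 0..255."""
    return (77 * r + 150 * g + 29 * b) >> 8

print(bt601_luma(255, 255, 255))  # -> 255
print(bt601_luma(0, 0, 0))        # -> 0
```

Averaging the blue channel alone, as the pre-fix iOS code effectively did, heavily undercounts luminance (blue contributes only 29/256 of it), which is why frames were falsely reported as "too dark".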
Anti-Spoof Engine
- 7-signal composite scoring: eye variance, face geometry, head pose naturalness, eye-open probability, face tracking continuity, micro-motion (yaw/pitch variance), and frame quality
- Rolling 12-frame history — no model file required
Security
- `SessionManager` — cryptographically unique session IDs using `Random.secure()` (12-char timestamp hex + 8-char secure random hex, e.g. `LV-018F3A2B9C4E-D7E31F08`)
- `FrameHasher` — FNV-1a sliding-window replay detection
- Fisher–Yates shuffle for randomised action sequences
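FNV-1a is a tiny, fast non-cryptographic hash, which makes it suitable for per-frame replay checks. A hedged sketch of a 32-bit FNV-1a with a sliding window (the window size of 12 and the helper name are illustrative, not the SDK's):

```python
# Standard 32-bit FNV-1a constants.
FNV_OFFSET = 0x811C9DC5
FNV_PRIME = 0x01000193

def fnv1a_32(data: bytes) -> int:
    h = FNV_OFFSET
    for byte in data:
        h ^= byte
        h = (h * FNV_PRIME) & 0xFFFFFFFF
    return h

def seen_before(frame: bytes, recent_hashes: list, window: int = 12) -> bool:
    """Identical frame bytes hash identically; a repeat in the sliding
    window suggests a looped or frozen video feed."""
    h = fnv1a_32(frame)
    repeated = h in recent_hashes
    recent_hashes.append(h)
    del recent_hashes[:-window]
    return repeated
```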
New Liveness Action
- `LivenessAction.openMouth` — detected via bounding-box height growth (>8%) with low smile probability
UI
- `LivenessStepIndicator` — animated progress dots for current / completed / remaining steps
- Download-progress loading screen — shows `%` while the FaceNet model downloads on first run
- Dark / light theme support via `LivenessConfig.themeMode`
- `@Deprecated` `showDebugInfo` — replaced by `LivenessConfig.showDebugOverlay`
New Exports
- `FaceIdentityService`, `FaceModelDownloader`, `FaceModelDownloadException`
- `AntiSpoofEngine`, `AntiSpoofResult`, `TFLiteService`
- `SessionManager`, `RawFrameData`
Bug Fixes #
- iOS brightness falsely reported as "too dark" — single-plane BGRA frames were being sampled as if they were NV21 Y-plane data (Blue channel average ≠ luminance); fixed with BT.601 per-pixel luminance
- iOS face crop height clamping — `_resampleBgra` used `w-1` for both axes; portrait/landscape frames could produce out-of-bounds crops; fixed to `h-1` for the Y axis
- iOS raw frame bytes mismatch — `RawFrameData` stored NV21-converted bytes even on iOS, where `FacePreprocessor` expects BGRA8888; it now stores `image.planes[0].bytes` (original BGRA) on iOS
- Same face → different Face ID — the similarity threshold `0.78` was too strict for cross-session lighting/angle variation; lowered to `0.65`; the stored embedding now adapts toward each confirmed match
- Session ID collision — the old generator used a deterministic XOR of the timestamp; replaced with `Random.secure()`
- `tflite_flutter 0.10.4` compilation failure on Dart ≥ 3.4 — `UnmodifiableUint8ListView` was removed from `dart:typed_data`; resolved by overriding to `tflite_flutter` git `main` (v0.12.1)
- `completedActions` / `remainingActions` not defined on `LivenessController` — fixed the broken `_EngineSequence` extension; getters added directly to the controller
- `brightness > 0.90` false overexposure — threshold raised to `0.92` to match real sensor output
Breaking Changes #
- Android `minSdkVersion` raised from 21 → 26 — required by `tflite_flutter 0.12.1`
- `LivenessResult` gains an optional `faceId` field (non-breaking; `null` when `enableFaceId` is `false`)
- `brightnessMin` default changed from `0.20` → `0.12`
- `brightnessMax` default changed from `0.90` → `0.92`
Dependencies Added #
- `tflite_flutter` (git `main` — v0.12.1)
- `shared_preferences: ^2.2.2`
- `http: ^1.2.1`
- `path_provider: ^2.1.3`
1.0.0 #
- Initial release
- Real-time face detection via Google ML Kit Face Detection
- Liveness actions: blink, turnLeft, turnRight, lookUp, lookDown, smile
- Anti-spoofing heuristic validator (5-signal composite score)
- Animated face overlay with status indicator and progress bar
- Clean architecture: Camera → ML → Liveness engine → UI layers
- Full null-safety support (Dart 3 / Flutter 3.10+)
- Android API 21+ and iOS 13+ support
- Example app with standard and custom challenge modes