
apple_vision_face #


apple_vision_face is a Flutter plugin that uses Apple Vision Face Detection to detect faces in an image or video stream, identify key facial features, and get the contours of detected faces.

  • This plugin is not sponsored or maintained by Apple. The authors are developers who wanted to make a plugin for macOS similar to Google's ML Kit.

Requirements #

MacOS

  • Minimum macOS Deployment Target: 11.0
  • Xcode 13 or newer
  • Swift 5
  • Only 64-bit architectures (x86_64 and arm64) are supported.

iOS

  • Minimum iOS Deployment Target: 13.0 (see the setup sketch after this list)
  • Xcode 13 or newer
  • Swift 5
  • Only 64-bit architectures (x86_64 and arm64) are supported.
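
To apply these requirements in a Flutter project, the sketch below shows where each setting typically lives. This is a minimal sketch assuming a standard Flutter project layout; adjust the paths and versions to your project.

  # pubspec.yaml — add the plugin to your dependencies
  dependencies:
    apple_vision_face: ^0.0.3

  # macos/Podfile — minimum macOS deployment target
  platform :osx, '11.0'

  # ios/Podfile — minimum iOS deployment target
  platform :ios, '13.0'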

Getting Started #

You need to first import 'package:apple_vision_face/apple_vision_face.dart'. The example below also uses the camera_macos package to grab frames from the camera.

  import 'dart:async';
  import 'package:apple_vision_face/apple_vision_face.dart';
  import 'package:camera_macos/camera_macos.dart';
  import 'package:flutter/material.dart';

  final GlobalKey cameraKey = GlobalKey(debugLabel: "cameraKey");
  late AppleVisionFaceController cameraController;
  late List<CameraMacOSDevice> _cameras;
  CameraMacOSController? controller;
  String? deviceId;
  Timer? _timer; // capture-loop timer, cancelled in dispose()
  double? deviceWidth;
  double? deviceHeight;

  FaceData? faceData;

  @override
  void initState() {
    cameraController = AppleVisionFaceController();
    // List the available video devices and use the first one.
    CameraMacOS.instance.listDevices(deviceType: CameraMacOSDeviceType.video).then((value){
      _cameras = value;
      deviceId = _cameras.first.deviceId;
    });
    super.initState();
  }
  @override
  void dispose() {
    // Stop the capture loop and release the camera.
    _timer?.cancel();
    controller?.destroy();
    super.dispose();
  }
  void onTakePictureButtonPressed() async{
    CameraMacOSFile? file = await controller?.takePicture();
    if(file != null && mounted) {
      Uint8List? image = file.bytes;
      // Hand the captured frame to Apple Vision and rebuild with the result.
      cameraController.process(image!, const Size(640,480)).then((data){
        setState(() {
          faceData = data;
        });
      });
    }
  }

  @override
  Widget build(BuildContext context) {
    deviceWidth = MediaQuery.of(context).size.width;
    deviceHeight = MediaQuery.of(context).size.height;
    return Stack(
      children: <Widget>[
        SizedBox(
          width: 640,
          height: 480,
          child: _getScanWidgetByPlatform(),
        ),
      ] + showPoints(),
    );
  }

  List<Widget> showPoints(){
    if(faceData == null || faceData!.marks.isEmpty) return [];
    Map<LandMark,Color> colors = {
      LandMark.faceContour: Colors.amber,
      LandMark.outerLips: Colors.red,
      LandMark.innerLips: Colors.pink,
      LandMark.leftEye: Colors.green,
      LandMark.rightEye: Colors.green,
      LandMark.leftPupil: Colors.purple,
      LandMark.rightPupil: Colors.purple,
      LandMark.leftEyebrow: Colors.lime,
      LandMark.rightEyebrow: Colors.lime,
    };
    List<Widget> widgets = [];

    for(int i = 0; i < faceData!.marks.length; i++){
      List<Point> points = faceData!.marks[i].location;
      for(int j = 0; j < points.length; j++){
        widgets.add(
          Positioned(
            // Vision's coordinate origin is at the bottom-left, so offset from the bottom.
            left: points[j].x,
            bottom: points[j].y,
            child: Container(
              width: 10,
              height: 10,
              decoration: BoxDecoration(
                color: colors[faceData!.marks[i].landmark],
                borderRadius: BorderRadius.circular(5)
              ),
            )
          )
        );
      }
    }
    return widgets;
  }

  Widget _getScanWidgetByPlatform() {
    return CameraMacOSView(
      key: cameraKey,
      fit: BoxFit.fill,
      cameraMode: CameraMacOSMode.photo,
      enableAudio: false,
      onCameraLoading: (ob){
        return Container(
          width: deviceWidth,
          height: deviceHeight,
          color: Theme.of(context).canvasColor,
          alignment: Alignment.center,
          child: const CircularProgressIndicator(color: Colors.blue)
        );
      },
      onCameraInizialized: (CameraMacOSController controller) {
        setState(() {
          this.controller = controller;
          // Capture and process a frame roughly every 32 ms (~30 fps).
          _timer = Timer.periodic(const Duration(milliseconds: 32),(_){
            onTakePictureButtonPressed();
          });
        });
      },
    );
  }
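
If you already have image bytes, detection can also be run once without the camera loop. The following is a minimal sketch assembled only from the calls used above; detectFromBytes is a hypothetical helper, not part of the plugin's API, and the 640x480 size is assumed to match the supplied image.

  // Hypothetical helper: run face detection once on raw image bytes.
  Future<void> detectFromBytes(Uint8List bytes) async {
    final AppleVisionFaceController vision = AppleVisionFaceController();
    // process() is assumed to resolve with FaceData, as in the example above.
    FaceData? data = await vision.process(bytes, const Size(640, 480));
    if (data == null) return;
    for (final mark in data.marks) {
      debugPrint('${mark.landmark}: ${mark.location.length} points');
    }
  }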

Example #

Find the example for this API here.

Contributing #

Contributions are welcome. If you run into a problem, look through the existing issues first; if nothing matches, open a new issue. For non-trivial fixes, create an issue before opening a pull request; for trivial fixes, open a pull request directly.
