
A Flutter plugin for using Apple Vision APIs, built for macOS and iOS.

apple_vision #


Apple Vision is a Flutter plugin that enables Flutter apps to use the Apple Vision framework.

PLEASE READ THIS before continuing or posting a new issue:

  • Apple Vision was built only for macOS and iOS apps.

  • This plugin is not sponsored or maintained by Apple. The authors are developers who wanted to make a plugin similar to Google's ML Kit for macOS and iOS.

Features #

Vision APIs #

| Feature | Plugin | Source Code |
| --- | --- | --- |
| Face Detection Points | apple_vision_face | GitHub |
| Face Mesh | apple_vision_face_mesh | GitHub |
| Face Detection | apple_vision_face_detection | GitHub |
| Pose Detection | apple_vision_pose | GitHub |
| Hand Detection | apple_vision_hand | GitHub |
| Hand Detection 3D | apple_vision_hand_3d | GitHub |
| Object Classification | apple_vision_object | GitHub |
| Object Tracking | apple_vision_object_tracking | GitHub |
| Selfie Segmentation | apple_vision_selfie | GitHub |
| Text Recognition | apple_vision_text_recognition | GitHub |
| Image Classification | apple_vision_image_classification | GitHub |
| Barcode Scanner | apple_vision_scanner | GitHub |
| Animal Pose | apple_vision_animal_pose | GitHub |
| Pose 3D | apple_vision_pose_3d | GitHub |
| Saliency | apple_vision_saliency | GitHub |
| Lift Subjects | apple_vision_lift_subjects | GitHub |
| Image Depth | apple_vision_image_depth | GitHub |
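
Each feature lives in its own package, so you can depend on only what you need. A minimal pubspec.yaml sketch: the umbrella version is the one shown at the top of this page, while individual plugin versions are placeholders, so check pub.dev for the latest.

```yaml
dependencies:
  # Umbrella package; version taken from the top of this page.
  apple_vision: ^0.0.6
  # Or depend on a single feature plugin instead, e.g.:
  # apple_vision_face: ^<latest version from pub.dev>
```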

Requirements #

macOS

  • Minimum OS deployment target: 15.0
  • Xcode 15 or newer
  • Swift 5
  • Only 64-bit architectures (x86_64 and arm64) are supported.

iOS

  • Minimum OS deployment target: 17.0
  • Xcode 15 or newer
  • Swift 5
  • Only 64-bit architectures (x86_64 and arm64) are supported.
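
To satisfy these targets, you would typically raise the platform line in your app's Podfiles; a minimal sketch, assuming the standard Flutter project layout:

```ruby
# macos/Podfile (for iOS, use `platform :ios, '17.0'` in ios/Podfile)
platform :osx, '15.0'
```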

Getting Started #

PLEASE READ THIS before continuing or posting a new issue:

  • Apple Vision was built only for macOS and iOS apps.

  • This plugin is not sponsored or maintained by Apple. The authors are developers who wanted to make a plugin similar to Google's ML Kit for macOS and iOS.

  • The Apple Vision API is only available natively on macOS and iOS. This plugin uses Flutter Platform Channels, as explained here.

    Because this plugin uses platform channels, no machine-learning processing is done in Flutter/Dart. All calls are passed to the native platform using a FlutterMethodChannel and executed with the Apple Vision API. A minimal sketch of this pattern follows this list.

  • Since the plugin uses platform channels, you may encounter issues with the native API. Before submitting a new issue, identify its source: the authors do not have access to the source code of Apple's native APIs, so issues in the underlying Vision framework need to be reported to Apple. If you have an issue using this plugin, look through our closed and open issues first. If you cannot find anything that helps, open a new issue and provide enough details to reproduce the problem. Be patient; someone from the community will eventually help you.
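
As a rough illustration of the platform-channel flow described above, here is a minimal sketch; the channel name, method name, and argument keys are hypothetical, not the plugin's actual identifiers:

```dart
import 'dart:typed_data';

import 'package:flutter/services.dart';

// Hypothetical channel; the plugin's real channel and method names differ.
const MethodChannel _channel = MethodChannel('example/apple_vision');

// Sends a camera frame to the native side, where the Apple Vision API
// would process it, and awaits the serialized result.
Future<Object?> processImageSketch(Uint8List bytes, double width, double height) {
  return _channel.invokeMethod<Object?>('process', {
    'image': bytes,
    'width': width,
    'height': height,
  });
}
```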

Example #

You first need to import 'package:apple_vision/apple_vision.dart';. The snippet below shows the relevant parts of a StatefulWidget's State class.

// Key used to identify the camera preview widget.
final GlobalKey cameraKey = GlobalKey(debugLabel: "cameraKey");
AppleVisionFaceController visionController = AppleVisionFaceController();
InsertCamera camera = InsertCamera();
String? deviceId;
bool loading = true;
// Default preview size; updated from the camera metadata once frames arrive.
Size imageSize = const Size(640, 640 * 9 / 16);

// Most recent face-detection results, if any.
List<FaceData>? faceData;
late double deviceWidth;
late double deviceHeight;

@override
void initState() {
  // Set up the cameras, then start the live feed.
  camera.setupCameras().then((value){
    setState(() {
      loading = false;
    });
    camera.startLiveFeed((InputImage i){
      // Track the actual frame size reported by the camera.
      if(i.metadata?.size != null){
        imageSize = i.metadata!.size;
      }
      if(mounted) {
        Uint8List? image = i.bytes;
        if(image != null){
          // Hand the frame to the native Vision API and store the results.
          visionController.processImage(image, i.metadata!.size).then((data){
            faceData = data;
            setState(() {});
          });
        }
      }
    });
  });
  super.initState();
}
@override
void dispose() {
  camera.dispose();
  super.dispose();
}

@override
Widget build(BuildContext context) {
  deviceWidth = MediaQuery.of(context).size.width;
  deviceHeight = MediaQuery.of(context).size.height;
  return Stack(
    children: <Widget>[
      SizedBox(
        width: imageSize.width,
        height: imageSize.height,
        // Show the camera preview once setup has finished.
        child: loading ? Container() : CameraSetup(camera: camera, size: imageSize),
      ),
    ] + showPoints(),
  );
}

// Builds one positioned dot per detected landmark point.
List<Widget> showPoints(){
  if(faceData == null || faceData!.isEmpty) return [];
  List<Widget> widgets = [];
  Map<LandMark,Color> colors = {
    LandMark.faceContour: Colors.amber,
    LandMark.outerLips: Colors.red,
    LandMark.innerLips: Colors.pink,
    LandMark.leftEye: Colors.green,
    LandMark.rightEye: Colors.green,
    LandMark.leftPupil: Colors.purple,
    LandMark.rightPupil: Colors.purple,
    LandMark.leftEyebrow: Colors.lime,
    LandMark.rightEyebrow: Colors.lime,
  };

  for(int k = 0; k < faceData!.length; k++){
    if(faceData![k].marks.isNotEmpty){
      for(int i = 0; i < faceData![k].marks.length; i++){
        List<FacePoint> points = faceData![k].marks[i].location;
        for(int j = 0; j < points.length; j++){
          widgets.add(
            Positioned(
              left: points[j].x,
              top: points[j].y,
              child: Container(
                width: 10,
                height: 10,
                decoration: BoxDecoration(
                  color: colors[faceData![k].marks[i].landmark],
                  borderRadius: BorderRadius.circular(5)
                ),
              )
            )
          );
        }
      }
    }
  }
  return widgets;
}

// Full-screen loading indicator helper.
Widget loadingWidget(){
  return Container(
    width: deviceWidth,
    height: deviceHeight,
    color: Theme.of(context).canvasColor,
    alignment: Alignment.center,
    child: const CircularProgressIndicator(color: Colors.blue)
  );
}
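
Note that showPoints places the dots in the camera image's coordinate space. If the preview is rendered at a size other than imageSize, the points need to be scaled; a minimal sketch, where scalePoint is a hypothetical helper and the preview is assumed to be stretched to fill renderedSize:

```dart
import 'dart:ui';

// Hypothetical helper: map a point from image coordinates to the
// coordinates of a preview stretched to renderedSize.
Offset scalePoint(Offset point, Size imageSize, Size renderedSize) {
  return Offset(
    point.dx * renderedSize.width / imageSize.width,
    point.dy * renderedSize.height / imageSize.height,
  );
}
```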

Find the complete example for this API here.

Contributing #

Contributions are welcome. If you run into a problem, look at the existing issues first; if you cannot find anything related, open a new issue. For non-trivial fixes, open an issue before submitting a pull request. For trivial fixes, open a pull request directly.