apple_vision_lift_subjects 0.0.2

Platform: iOS, macOS

A Flutter plugin to use Apple Vision Lift Subject to extract objects in real time from a continuous video or static image.

apple_vision_lift_subjects #


Apple Vision Lift Subject is a Flutter plugin that enables Flutter apps to use Apple Vision's subject lifting.

  • This plugin is not sponsored or maintained by Apple. The authors are developers who wanted to make a plugin similar to Google's ML Kit for macOS.

Requirements #

MacOS

  • Minimum macOS Deployment Target: 14.0
  • Xcode 15 or newer
  • Swift 5
  • Only 64-bit architectures (x86_64 and arm64) are supported.
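
If your project uses CocoaPods, the platform line in `macos/Podfile` should match the deployment target above. A minimal sketch (adjust to your project):

```ruby
# macos/Podfile — deployment target must be at least 14.0 for this plugin
platform :osx, '14.0'
```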

iOS

  • In development, not yet supported
  • Minimum iOS Deployment Target: 17.0
  • Xcode 15 or newer
  • Swift 5
  • Only 64-bit architectures (x86_64 and arm64) are supported.

Getting Started #
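
First, add the package to your app's `pubspec.yaml` (the version shown at the top of this page is assumed):

```yaml
dependencies:
  apple_vision_lift_subjects: ^0.0.2
```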

You first need to import 'package:apple_vision/apple_vision.dart';

  final GlobalKey cameraKey = GlobalKey(debugLabel: "cameraKey");
  late AppleVisionliftSubjectsController visionController = AppleVisionliftSubjectsController();
  Size imageSize = const Size(640,640*9/16);
  String? deviceId;
  bool loading = true;

  List<Uint8List?> images = [];
  late double deviceWidth;
  late double deviceHeight;
  Uint8List? bg;
  Uint8List? flowers;
  List<Uint8List?> sepImages = [];
  Point? point;

  @override
  void initState() {
    // Load the background image first, then process the sample images,
    // so that `bg` is available when it is passed to processImage below.
    rootBundle.load('assets/WaterOnTheMoonFull.jpg').then((value){
      bg = value.buffer.asUint8List();
      processImages();
    });
    super.initState();
  }

  void processImages(){
    rootBundle.load('assets/rose.jpg').then((value){
      visionController.processImage(
        LiftedSubjectsData(
          image: value.buffer.asUint8List(),
          imageSize: const Size(640,425),
          crop: true,
        )
      ).then((value){
        if(value != null){
          images.add(value);
          setState(() {});
        }
      });
    });
    rootBundle.load('assets/human.png').then((value){
      visionController.processImage(
        LiftedSubjectsData(
          image: value.buffer.asUint8List(),
          imageSize: const Size(512,512),
          backGround: bg
        )
      ).then((value){
        if(value != null){
          images.add(value);
          setState(() {});
        }
      });
    });
    rootBundle.load('assets/flowers.jpg').then((value){
      flowers = value.buffer.asUint8List();
      onTouch(false);
    });
  }

  void onTouch(bool useSep){
    visionController.processImage(
      LiftedSubjectsData(
        image: flowers!,
        imageSize: const Size(600,400),
        crop: useSep,
        touchPoint: point
      )
    ).then((value){
      if(value != null){
        if(useSep){
          sepImages.add(value);
        }
        else{
          images.add(value);
        }
        setState(() {});
      }
    });
  }

  List<Widget> showImages(){
    List<Widget> widgets = [];

    for(int i = 0; i < images.length; i++){
      if(i == images.length - 1 && images[i] != null){
        double w = 600;
        double h = 400;
        widgets.add(
          SizedBox(
            width: w,
            height: h,
            child: GestureDetector(
              onTapDown: (td){
                point = Point(
                  td.localPosition.dx/w,
                  td.localPosition.dy/h
                );
                sepImages = [];
                onTouch(true);
              },
              child: Image.memory(
                images[i]!,
                fit: BoxFit.fitHeight,
              ),
            )
          )
        );
      }
      else if(images[i] != null){
        widgets.add(
          Image.memory(
            images[i]!,
            fit: BoxFit.fitHeight,
          )
        );
      }
    }
    for(int i = 0; i < sepImages.length; i++){
      if(sepImages[i] != null){
        widgets.add(
          Image.memory(
            sepImages[i]!,
            fit: BoxFit.fitHeight,
          )
        );
      }
    }
    return widgets;
  }

  @override
  Widget build(BuildContext context) {
    deviceWidth = MediaQuery.of(context).size.width;
    deviceHeight = MediaQuery.of(context).size.height;
    return ListView(
      children:<Widget>[
        Wrap(
          children: showImages(),
        )

      ]
    );
  }

Example #

Find the example for this API here.

Contributing #

Contributions are welcome. If you run into a problem, look through the existing issues first; if you cannot find anything related to your problem, open a new issue. For non-trivial fixes, create an issue before opening a pull request. For trivial fixes, open a pull request directly.



Repository (GitHub)
View/report issues

Documentation

API reference

License

MIT (license)

Dependencies

apple_vision_commons, flutter
