opencv library

OpenCV bindings for Flutter

Classes

AgastFeatureDetector
AgastFeatureDetector is a wrapper around the cv::AgastFeatureDetector.
AKAZE
AKAZE is a wrapper around the cv::AKAZE algorithm.
AlignMTB
AlignMTB converts images to median threshold bitmaps (1 for pixels brighter than median luminance and 0 otherwise) and then aligns the resulting bitmaps using bit operations. For further details, please see: https://docs.opencv.org/master/d6/df5/group__photo__hdr.html https://docs.opencv.org/master/d7/db6/classcv_1_1AlignMTB.html https://docs.opencv.org/master/d6/df5/group__photo__hdr.html#ga2f1fafc885a5d79dbfb3542e08db0244
ArucoDetector
ArucoDetectorParameters
ArucoDictionary
AsyncArray
AverageHash
AverageHash is an implementation of the AverageHash algorithm.
BackgroundSubtractorKNN
BackgroundSubtractorMOG2
BFMatcher
BFMatcher is a wrapper around the cv::BFMatcher.
BlockMeanHash
BlockMeanHash is an implementation of the BlockMeanHash algorithm.
BRISK
BRISK is a wrapper around the cv::BRISK algorithm.
CascadeClassifier
CLAHE
ColorMomentHash
ColorMomentHash is an implementation of the ColorMomentHash algorithm.
CvObject<T extends NativeType>
CvStruct<T extends Struct>
CvVec<T extends Struct>
DMatch
EdgeBoxes
https://docs.opencv.org/4.x/dd/d65/classcv_1_1ximgproc_1_1EdgeBoxes.html#details
EdgeDrawing
EdgeDrawingParams
FaceDetectorYN
DNN-based face detector.
FaceRecognizerSF
DNN-based face recognizer.
FastFeatureDetector
FastFeatureDetector is a wrapper around the cv::FastFeatureDetector.
Fisheye
FlannBasedMatcher
FlannBasedMatcher is a wrapper around the cv::FlannBasedMatcher.
Float16List
GFTTDetector
GFTTDetector is a wrapper around the cv::GFTTDetector.
GraphSegmentation
https://docs.opencv.org/4.x/dd/d19/classcv_1_1ximgproc_1_1segmentation_1_1GraphSegmentation.html
HOGDescriptor
ICvStruct<T extends Struct>
ImgHashBase
KalmanFilter
KalmanFilter implements a standard Kalman filter (http://en.wikipedia.org/wiki/Kalman_filter). However, you can modify transitionMatrix, controlMatrix, and measurementMatrix to get extended Kalman filter functionality.
KAZE
KAZE is a wrapper around the cv::KAZE.
KeyPoint
Layer
Layer is a wrapper around the cv::dnn::Layer algorithm.
MarrHildrethHash
MarrHildrethHash is an implementation of the MarrHildrethHash algorithm.
Mat
MergeMertens
The MergeMertens algorithm merges a set of LDR images into an HDR image. For further details, please see: https://docs.opencv.org/master/d6/df5/group__photo__hdr.html https://docs.opencv.org/master/d7/dd6/classcv_1_1MergeMertens.html https://docs.opencv.org/master/d6/df5/group__photo__hdr.html#ga79d59aa3cb3a7c664e59a4b5acc1ccb6
Moments
struct returned by cv::moments
MSER
MSER is a wrapper around the cv::MSER.
Net
Net allows you to create and manipulate comprehensive artificial neural networks.
ORB
ORB is a wrapper around the cv::ORB.
PHash
PHash is an implementation of the PHash algorithm.
Point
Point2f
Point3f
Point3i
QRCodeDetector
QualityBRISQUE
QualityGMSD
QualityMSE
QualityPSNR
QualitySSIM
RadialVarianceHash
RadialVarianceHash is an implementation of the RadialVarianceHash algorithm.
Rect
Rect2f
RFFeatureGetter
Rng
RotatedRect
Scalar
SIFT
SIFT is a wrapper around the cv::SIFT.
SimpleBlobDetector
SimpleBlobDetector is a wrapper around the cv::SimpleBlobDetector.
SimpleBlobDetectorParams
Size
Size2f
Stitcher
High level image stitcher.
StructuredEdgeDetection
https://docs.opencv.org/4.x/d8/d54/classcv_1_1ximgproc_1_1StructuredEdgeDetection.html#details
Subdiv2D
SVD
SVD.compute decomposes a matrix and stores the results in user-provided matrices.
TermCriteria
TermCriteria is the criteria for iterative algorithms.
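For example, the (int, int, double) record form (see TermCriteriaExtension below) matches the default criteria shown in the calcOpticalFlowPyrLK and calibrateCamera signatures on this page. A minimal sketch, assuming the library is imported as cv (the package:opencv_dart import path is an assumption):

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    // Stop after 30 iterations or once the change drops below 1e-4,
    // whichever happens first (TERM_COUNT + TERM_EPS combines both conditions).
    final (int, int, double) criteria = (cv.TERM_COUNT + cv.TERM_EPS, 30, 1e-4);
    print(criteria);
  }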
TrackerMIL
Tracker is the base interface for object tracking.
Vec<N extends Struct, T>
Vec2b
uchar
Vec2d
double
Vec2f
float
Vec2i
int
Vec2s
short
Vec2w
ushort
Vec3b
uchar
Vec3d
double
Vec3f
float
Vec3i
int
Vec3s
short
Vec3w
ushort
Vec4b
uchar
Vec4d
double
Vec4f
float
Vec4i
int
Vec4s
short
Vec4w
ushort
Vec6d
double
Vec6f
float
Vec6i
int
Vec8i
int
VecChar
VecCharIterator
VecDMatch
VecDMatchIterator
VecF16
VecF16Iterator
VecF32
VecF32Iterator
VecF64
VecF64Iterator
VecI16
VecI16Iterator
VecI32
VecI32Iterator
VecIterator<T>
VecKeyPoint
VecKeyPointIterator
VecMat
VecMatIterator
VecPoint
VecPoint2f
VecPoint2fIterator
VecPoint3f
VecPoint3fIterator
VecPoint3i
VecPoint3iIterator
VecPointIterator
VecRect
VecRect2f
VecRect2fIterator
VecRectIterator
VecU16
VecU16Iterator
VecUChar
VecUCharIterator
VecVec4f
VecVec4fIterator
VecVec4i
VecVec4iIterator
VecVec6f
VecVec6fIterator
VecVecChar
VecVecCharIterator
VecVecDMatch
VecVecDMatchIterator
VecVecPoint
VecVecPoint2f
VecVecPoint2fIterator
VecVecPoint3f
VecVecPoint3fIterator
VecVecPointIterator
VideoCapture
VideoWriter
WBDetector
WaldBoost detector.
WeChatQRCode
ximgproc
ximgproc_rl

Extension Types

ErrorCode
Float16P
MatType

Extensions

AgastFeatureDetectorAsync on AgastFeatureDetector
AKAZEAsync on AKAZE
AlignMTBAsync on AlignMTB
ArucoDetectorAsync on ArucoDetector
BackgroundSubtractorKNNAsync on BackgroundSubtractorKNN
BackgroundSubtractorMOG2Async on BackgroundSubtractorMOG2
BFMatcherAsync on BFMatcher
BRISKAsync on BRISK
CascadeClassifierAsync on CascadeClassifier
DoubleFp16Extension on double
FaceDetectorYNAsync on FaceDetectorYN
FaceRecognizerSFAsync on FaceRecognizerSF
FastFeatureDetectorAsync on FastFeatureDetector
FlannBasedMatcherAsync on FlannBasedMatcher
GFTTDetectorAsync on GFTTDetector
HOGDescriptorAsync on HOGDescriptor
IntFp16Extension on int
KalmanFilterAsync on KalmanFilter
Async version of KalmanFilter.
KAZEAsync on KAZE
ListDMatchExtension on List<DMatch>
ListFloatExtension on List<double>
ListKeyPointExtension on List<KeyPoint>
ListListCharExtension on List<List<int>>
ListListDMatchExtension on List<List<DMatch>>
ListListPoint2fExtension on List<List<Point2f>>
ListListPoint3fExtension on List<List<Point3f>>
ListListPointExtension on List<List<Point>>
ListMatExtension on List<Mat>
ListPoint2fExtension on List<Point2f>
ListPoint3fExtension on List<Point3f>
ListPoint3iExtension on List<Point3i>
ListPointExtension on List<Point>
ListRect2fExtension on List<Rect2f>
ListRectExtension on List<Rect>
ListStringExtension on List<String>
ListUCharExtension on List<int>
MatAsync on Mat
MergeMertensAsync on MergeMertens
MSERAsync on MSER
NetAsync on Net
ORBAsync on ORB
Point2fRecordExtension on (double, double)
Point3fRecordExtension on (double, double, double)
PointerCharExtension on Pointer<Char>
PointerUint16Extension on Pointer<Uint16>
PointRecordExtension on (int, int)
QRCodeDetectorAsync on QRCodeDetector
RecordScalarExtension on (double, double, double, double)
RecordSize2fExtension1 on (double, double)
RecordSizeExtension1 on (int, int)
SIFTAsync on SIFT
SimpleBlobDetectorAsync on SimpleBlobDetector
StitcherAsync on Stitcher
StringVecExtension on String
Subdiv2DAsync on Subdiv2D
Async version of Subdiv2D
TermCriteriaExtension on (int, int, double)
TrackerMILAsync on TrackerMIL
Async version of TrackerMIL.
VecPointExtension on VecPoint
VecVec4fExtension on List<Vec4f>
VecVec4iExtension on List<Vec4i>
VecVec6fExtension on List<Vec6f>
VideoCaptureAsync on VideoCapture
VideoWriterAsync on VideoWriter

Constants

ADAPTIVE_THRESH_GAUSSIAN_C → const int
ADAPTIVE_THRESH_MEAN_C → const int
BLOCK_MEAN_HASH_MODE_0 → const int
Use fewer blocks and generate a 16*16/8 uchar hash value.
BLOCK_MEAN_HASH_MODE_1 → const int
Use overlapping blocks (step size = block size / 2) and generate a 31*31/8 + 1 uchar hash value.
BORDER_CONSTANT → const int
BORDER_DEFAULT → const int
BORDER_ISOLATED → const int
BORDER_REFLECT → const int
BORDER_REFLECT101 → const int
BORDER_REFLECT_101 → const int
BORDER_REPLICATE → const int
BORDER_TRANSPARENT → const int
BORDER_WRAP → const int
CALIB_CB_ACCURACY → const int
CALIB_CB_ADAPTIVE_THRESH → const int
CALIB_CB_ASYMMETRIC_GRID → const int
CALIB_CB_CLUSTERING → const int
CALIB_CB_EXHAUSTIVE → const int
CALIB_CB_FAST_CHECK → const int
CALIB_CB_FILTER_QUADS → const int
CALIB_CB_LARGER → const int
CALIB_CB_MARKER → const int
CALIB_CB_NORMALIZE_IMAGE → const int
CALIB_CB_PLAIN → const int
CALIB_CB_SYMMETRIC_GRID → const int
CALIB_FIX_ASPECT_RATIO → const int
CALIB_FIX_FOCAL_LENGTH → const int
CALIB_FIX_INTRINSIC → const int
CALIB_FIX_K1 → const int
CALIB_FIX_K2 → const int
CALIB_FIX_K3 → const int
CALIB_FIX_K4 → const int
CALIB_FIX_K5 → const int
CALIB_FIX_K6 → const int
CALIB_FIX_PRINCIPAL_POINT → const int
CALIB_FIX_S1_S2_S3_S4 → const int
CALIB_FIX_TANGENT_DIST → const int
CALIB_FIX_TAUX_TAUY → const int
CALIB_NINTRINSIC → const int
CALIB_RATIONAL_MODEL → const int
CALIB_SAME_FOCAL_LENGTH → const int
CALIB_THIN_PRISM_MODEL → const int
CALIB_TILTED_MODEL → const int
CALIB_USE_EXTRINSIC_GUESS → const int
CALIB_USE_INTRINSIC_GUESS → const int
CALIB_USE_LU → const int
CALIB_USE_QR → const int
CALIB_ZERO_DISPARITY → const int
CALIB_ZERO_TANGENT_DIST → const int
CAP_ANDROID → const int
CAP_ANY → const int
CAP_ARAVIS → const int
CAP_AVFOUNDATION → const int
CAP_CMU1394 → const int
CAP_DC1394 → const int
CAP_DSHOW → const int
CAP_FFMPEG → const int
CAP_FIREWARE → const int
CAP_FIREWIRE → const int
CAP_GIGANETIX → const int
CAP_GPHOTO2 → const int
CAP_GSTREAMER → const int
CAP_IEEE1394 → const int
CAP_IMAGES → const int
CAP_INTEL_MFX → const int
CAP_INTELPERC → const int
CAP_INTELPERC_DEPTH_GENERATOR → const int
CAP_INTELPERC_DEPTH_MAP → const int
CAP_INTELPERC_GENERATORS_MASK → const int
CAP_INTELPERC_IMAGE → const int
CAP_INTELPERC_IMAGE_GENERATOR → const int
CAP_INTELPERC_IR_GENERATOR → const int
CAP_INTELPERC_IR_MAP → const int
CAP_INTELPERC_UVDEPTH_MAP → const int
CAP_MSMF → const int
CAP_OBSENSOR → const int
CAP_OBSENSOR_BGR_IMAGE → const int
CAP_OBSENSOR_DEPTH_GENERATOR → const int
CAP_OBSENSOR_DEPTH_MAP → const int
CAP_OBSENSOR_GENERATORS_MASK → const int
CAP_OBSENSOR_IMAGE_GENERATOR → const int
CAP_OBSENSOR_IR_GENERATOR → const int
CAP_OBSENSOR_IR_IMAGE → const int
CAP_OPENCV_MJPEG → const int
CAP_OPENNI → const int
CAP_OPENNI2 → const int
CAP_OPENNI2_ASTRA → const int
CAP_OPENNI2_ASUS → const int
CAP_OPENNI_ASUS → const int
CAP_OPENNI_BGR_IMAGE → const int
CAP_OPENNI_DEPTH_GENERATOR → const int
CAP_OPENNI_DEPTH_GENERATOR_BASELINE → const int
CAP_OPENNI_DEPTH_GENERATOR_FOCAL_LENGTH → const int
CAP_OPENNI_DEPTH_GENERATOR_PRESENT → const int
CAP_OPENNI_DEPTH_GENERATOR_REGISTRATION → const int
CAP_OPENNI_DEPTH_GENERATOR_REGISTRATION_ON → const int
CAP_OPENNI_DEPTH_MAP → const int
CAP_OPENNI_DISPARITY_MAP → const int
CAP_OPENNI_DISPARITY_MAP_32F → const int
CAP_OPENNI_GENERATORS_MASK → const int
CAP_OPENNI_GRAY_IMAGE → const int
CAP_OPENNI_IMAGE_GENERATOR → const int
CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE → const int
CAP_OPENNI_IMAGE_GENERATOR_PRESENT → const int
CAP_OPENNI_IR_GENERATOR → const int
CAP_OPENNI_IR_GENERATOR_PRESENT → const int
CAP_OPENNI_IR_IMAGE → const int
CAP_OPENNI_POINT_CLOUD_MAP → const int
CAP_OPENNI_QVGA_30HZ → const int
CAP_OPENNI_QVGA_60HZ → const int
CAP_OPENNI_SXGA_15HZ → const int
CAP_OPENNI_SXGA_30HZ → const int
CAP_OPENNI_VALID_DEPTH_MASK → const int
CAP_OPENNI_VGA_30HZ → const int
CAP_PROP_APERTURE → const int
CAP_PROP_ARAVIS_AUTOTRIGGER → const int
CAP_PROP_AUDIO_BASE_INDEX → const int
CAP_PROP_AUDIO_DATA_DEPTH → const int
CAP_PROP_AUDIO_POS → const int
CAP_PROP_AUDIO_SAMPLES_PER_SECOND → const int
CAP_PROP_AUDIO_SHIFT_NSEC → const int
CAP_PROP_AUDIO_STREAM → const int
CAP_PROP_AUDIO_SYNCHRONIZE → const int
CAP_PROP_AUDIO_TOTAL_CHANNELS → const int
CAP_PROP_AUDIO_TOTAL_STREAMS → const int
CAP_PROP_AUTO_EXPOSURE → const int
CAP_PROP_AUTO_WB → const int
CAP_PROP_AUTOFOCUS → const int
CAP_PROP_BACKEND → const int
CAP_PROP_BACKLIGHT → const int
CAP_PROP_BITRATE → const int
CAP_PROP_BRIGHTNESS → const int
CAP_PROP_BUFFERSIZE → const int
CAP_PROP_CHANNEL → const int
CAP_PROP_CODEC_EXTRADATA_INDEX → const int
CAP_PROP_CODEC_PIXEL_FORMAT → const int
CAP_PROP_CONTRAST → const int
CAP_PROP_CONVERT_RGB → const int
CAP_PROP_DC1394_MAX → const int
CAP_PROP_DC1394_MODE_AUTO → const int
CAP_PROP_DC1394_MODE_MANUAL → const int
CAP_PROP_DC1394_MODE_ONE_PUSH_AUTO → const int
CAP_PROP_DC1394_OFF → const int
CAP_PROP_EXPOSURE → const int
CAP_PROP_EXPOSUREPROGRAM → const int
CAP_PROP_FOCUS → const int
CAP_PROP_FORMAT → const int
CAP_PROP_FOURCC → const int
CAP_PROP_FPS → const int
CAP_PROP_FRAME_COUNT → const int
CAP_PROP_FRAME_HEIGHT → const int
CAP_PROP_FRAME_TYPE → const int
CAP_PROP_FRAME_WIDTH → const int
CAP_PROP_GAIN → const int
CAP_PROP_GAMMA → const int
CAP_PROP_GIGA_FRAME_HEIGH_MAX → const int
CAP_PROP_GIGA_FRAME_OFFSET_X → const int
CAP_PROP_GIGA_FRAME_OFFSET_Y → const int
CAP_PROP_GIGA_FRAME_SENS_HEIGH → const int
CAP_PROP_GIGA_FRAME_SENS_WIDTH → const int
CAP_PROP_GIGA_FRAME_WIDTH_MAX → const int
CAP_PROP_GPHOTO2_COLLECT_MSGS → const int
CAP_PROP_GPHOTO2_FLUSH_MSGS → const int
CAP_PROP_GPHOTO2_PREVIEW → const int
CAP_PROP_GPHOTO2_RELOAD_CONFIG → const int
CAP_PROP_GPHOTO2_RELOAD_ON_CHANGE → const int
CAP_PROP_GPHOTO2_WIDGET_ENUMERATE → const int
CAP_PROP_GSTREAMER_QUEUE_LENGTH → const int
CAP_PROP_GUID → const int
CAP_PROP_HUE → const int
CAP_PROP_HW_ACCELERATION → const int
CAP_PROP_HW_ACCELERATION_USE_OPENCL → const int
CAP_PROP_HW_DEVICE → const int
CAP_PROP_IMAGES_BASE → const int
CAP_PROP_IMAGES_LAST → const int
CAP_PROP_INTELPERC_DEPTH_CONFIDENCE_THRESHOLD → const int
CAP_PROP_INTELPERC_DEPTH_FOCAL_LENGTH_HORZ → const int
CAP_PROP_INTELPERC_DEPTH_FOCAL_LENGTH_VERT → const int
CAP_PROP_INTELPERC_DEPTH_LOW_CONFIDENCE_VALUE → const int
CAP_PROP_INTELPERC_DEPTH_SATURATION_VALUE → const int
CAP_PROP_INTELPERC_PROFILE_COUNT → const int
CAP_PROP_INTELPERC_PROFILE_IDX → const int
CAP_PROP_IOS_DEVICE_EXPOSURE → const int
CAP_PROP_IOS_DEVICE_FLASH → const int
CAP_PROP_IOS_DEVICE_FOCUS → const int
CAP_PROP_IOS_DEVICE_TORCH → const int
CAP_PROP_IOS_DEVICE_WHITEBALANCE → const int
CAP_PROP_IRIS → const int
CAP_PROP_ISO_SPEED → const int
CAP_PROP_LRF_HAS_KEY_FRAME → const int
CAP_PROP_MODE → const int
CAP_PROP_MONOCHROME → const int
CAP_PROP_N_THREADS → const int
CAP_PROP_OBSENSOR_INTRINSIC_CX → const int
CAP_PROP_OBSENSOR_INTRINSIC_CY → const int
CAP_PROP_OBSENSOR_INTRINSIC_FX → const int
CAP_PROP_OBSENSOR_INTRINSIC_FY → const int
CAP_PROP_OPEN_TIMEOUT_MSEC → const int
CAP_PROP_OPENNI2_MIRROR → const int
CAP_PROP_OPENNI2_SYNC → const int
CAP_PROP_OPENNI_APPROX_FRAME_SYNC → const int
CAP_PROP_OPENNI_BASELINE → const int
CAP_PROP_OPENNI_CIRCLE_BUFFER → const int
CAP_PROP_OPENNI_FOCAL_LENGTH → const int
CAP_PROP_OPENNI_FRAME_MAX_DEPTH → const int
CAP_PROP_OPENNI_GENERATOR_PRESENT → const int
CAP_PROP_OPENNI_MAX_BUFFER_SIZE → const int
CAP_PROP_OPENNI_MAX_TIME_DURATION → const int
CAP_PROP_OPENNI_OUTPUT_MODE → const int
CAP_PROP_OPENNI_REGISTRATION → const int
CAP_PROP_OPENNI_REGISTRATION_ON → const int
CAP_PROP_ORIENTATION_AUTO → const int
CAP_PROP_ORIENTATION_META → const int
CAP_PROP_PAN → const int
CAP_PROP_POS_AVI_RATIO → const int
CAP_PROP_POS_FRAMES → const int
CAP_PROP_POS_MSEC → const int
CAP_PROP_PVAPI_BINNINGX → const int
CAP_PROP_PVAPI_BINNINGY → const int
CAP_PROP_PVAPI_DECIMATIONHORIZONTAL → const int
CAP_PROP_PVAPI_DECIMATIONVERTICAL → const int
CAP_PROP_PVAPI_FRAMESTARTTRIGGERMODE → const int
CAP_PROP_PVAPI_MULTICASTIP → const int
CAP_PROP_PVAPI_PIXELFORMAT → const int
CAP_PROP_READ_TIMEOUT_MSEC → const int
CAP_PROP_RECTIFICATION → const int
CAP_PROP_ROLL → const int
CAP_PROP_SAR_DEN → const int
CAP_PROP_SAR_NUM → const int
CAP_PROP_SATURATION → const int
CAP_PROP_SETTINGS → const int
CAP_PROP_SHARPNESS → const int
CAP_PROP_SPEED → const int
CAP_PROP_STREAM_OPEN_TIME_USEC → const int
CAP_PROP_TEMPERATURE → const int
CAP_PROP_TILT → const int
CAP_PROP_TRIGGER → const int
CAP_PROP_TRIGGER_DELAY → const int
CAP_PROP_VIDEO_STREAM → const int
CAP_PROP_VIDEO_TOTAL_CHANNELS → const int
CAP_PROP_VIEWFINDER → const int
CAP_PROP_WB_TEMPERATURE → const int
CAP_PROP_WHITE_BALANCE_BLUE_U → const int
CAP_PROP_WHITE_BALANCE_RED_V → const int
CAP_PROP_XI_ACQ_BUFFER_SIZE → const int
CAP_PROP_XI_ACQ_BUFFER_SIZE_UNIT → const int
CAP_PROP_XI_ACQ_FRAME_BURST_COUNT → const int
CAP_PROP_XI_ACQ_TIMING_MODE → const int
CAP_PROP_XI_ACQ_TRANSPORT_BUFFER_COMMIT → const int
CAP_PROP_XI_ACQ_TRANSPORT_BUFFER_SIZE → const int
CAP_PROP_XI_AE_MAX_LIMIT → const int
CAP_PROP_XI_AEAG → const int
CAP_PROP_XI_AEAG_LEVEL → const int
CAP_PROP_XI_AEAG_ROI_HEIGHT → const int
CAP_PROP_XI_AEAG_ROI_OFFSET_X → const int
CAP_PROP_XI_AEAG_ROI_OFFSET_Y → const int
CAP_PROP_XI_AEAG_ROI_WIDTH → const int
CAP_PROP_XI_AG_MAX_LIMIT → const int
CAP_PROP_XI_APPLY_CMS → const int
CAP_PROP_XI_AUTO_BANDWIDTH_CALCULATION → const int
CAP_PROP_XI_AUTO_WB → const int
CAP_PROP_XI_AVAILABLE_BANDWIDTH → const int
CAP_PROP_XI_BINNING_HORIZONTAL → const int
CAP_PROP_XI_BINNING_PATTERN → const int
CAP_PROP_XI_BINNING_SELECTOR → const int
CAP_PROP_XI_BINNING_VERTICAL → const int
CAP_PROP_XI_BPC → const int
CAP_PROP_XI_BUFFER_POLICY → const int
CAP_PROP_XI_BUFFERS_QUEUE_SIZE → const int
CAP_PROP_XI_CC_MATRIX_00 → const int
CAP_PROP_XI_CC_MATRIX_01 → const int
CAP_PROP_XI_CC_MATRIX_02 → const int
CAP_PROP_XI_CC_MATRIX_03 → const int
CAP_PROP_XI_CC_MATRIX_10 → const int
CAP_PROP_XI_CC_MATRIX_11 → const int
CAP_PROP_XI_CC_MATRIX_12 → const int
CAP_PROP_XI_CC_MATRIX_13 → const int
CAP_PROP_XI_CC_MATRIX_20 → const int
CAP_PROP_XI_CC_MATRIX_21 → const int
CAP_PROP_XI_CC_MATRIX_22 → const int
CAP_PROP_XI_CC_MATRIX_23 → const int
CAP_PROP_XI_CC_MATRIX_30 → const int
CAP_PROP_XI_CC_MATRIX_31 → const int
CAP_PROP_XI_CC_MATRIX_32 → const int
CAP_PROP_XI_CC_MATRIX_33 → const int
CAP_PROP_XI_CHIP_TEMP → const int
CAP_PROP_XI_CMS → const int
CAP_PROP_XI_COLOR_FILTER_ARRAY → const int
CAP_PROP_XI_COLUMN_FPN_CORRECTION → const int
CAP_PROP_XI_COOLING → const int
CAP_PROP_XI_COUNTER_SELECTOR → const int
CAP_PROP_XI_COUNTER_VALUE → const int
CAP_PROP_XI_DATA_FORMAT → const int
CAP_PROP_XI_DEBOUNCE_EN → const int
CAP_PROP_XI_DEBOUNCE_POL → const int
CAP_PROP_XI_DEBOUNCE_T0 → const int
CAP_PROP_XI_DEBOUNCE_T1 → const int
CAP_PROP_XI_DEBUG_LEVEL → const int
CAP_PROP_XI_DECIMATION_HORIZONTAL → const int
CAP_PROP_XI_DECIMATION_PATTERN → const int
CAP_PROP_XI_DECIMATION_SELECTOR → const int
CAP_PROP_XI_DECIMATION_VERTICAL → const int
CAP_PROP_XI_DEFAULT_CC_MATRIX → const int
CAP_PROP_XI_DEVICE_MODEL_ID → const int
CAP_PROP_XI_DEVICE_RESET → const int
CAP_PROP_XI_DEVICE_SN → const int
CAP_PROP_XI_DOWNSAMPLING → const int
CAP_PROP_XI_DOWNSAMPLING_TYPE → const int
CAP_PROP_XI_EXP_PRIORITY → const int
CAP_PROP_XI_EXPOSURE → const int
CAP_PROP_XI_EXPOSURE_BURST_COUNT → const int
CAP_PROP_XI_FFS_ACCESS_KEY → const int
CAP_PROP_XI_FFS_FILE_ID → const int
CAP_PROP_XI_FFS_FILE_SIZE → const int
CAP_PROP_XI_FRAMERATE → const int
CAP_PROP_XI_FREE_FFS_SIZE → const int
CAP_PROP_XI_GAIN → const int
CAP_PROP_XI_GAIN_SELECTOR → const int
CAP_PROP_XI_GAMMAC → const int
CAP_PROP_XI_GAMMAY → const int
CAP_PROP_XI_GPI_LEVEL → const int
CAP_PROP_XI_GPI_MODE → const int
CAP_PROP_XI_GPI_SELECTOR → const int
CAP_PROP_XI_GPO_MODE → const int
CAP_PROP_XI_GPO_SELECTOR → const int
CAP_PROP_XI_HDR → const int
CAP_PROP_XI_HDR_KNEEPOINT_COUNT → const int
CAP_PROP_XI_HDR_T1 → const int
CAP_PROP_XI_HDR_T2 → const int
CAP_PROP_XI_HEIGHT → const int
CAP_PROP_XI_HOUS_BACK_SIDE_TEMP → const int
CAP_PROP_XI_HOUS_TEMP → const int
CAP_PROP_XI_HW_REVISION → const int
CAP_PROP_XI_IMAGE_BLACK_LEVEL → const int
CAP_PROP_XI_IMAGE_DATA_BIT_DEPTH → const int
CAP_PROP_XI_IMAGE_DATA_FORMAT → const int
CAP_PROP_XI_IMAGE_DATA_FORMAT_RGB32_ALPHA → const int
CAP_PROP_XI_IMAGE_IS_COLOR → const int
CAP_PROP_XI_IMAGE_PAYLOAD_SIZE → const int
CAP_PROP_XI_IS_COOLED → const int
CAP_PROP_XI_IS_DEVICE_EXIST → const int
CAP_PROP_XI_KNEEPOINT1 → const int
CAP_PROP_XI_KNEEPOINT2 → const int
CAP_PROP_XI_LED_MODE → const int
CAP_PROP_XI_LED_SELECTOR → const int
CAP_PROP_XI_LENS_APERTURE_VALUE → const int
CAP_PROP_XI_LENS_FEATURE → const int
CAP_PROP_XI_LENS_FEATURE_SELECTOR → const int
CAP_PROP_XI_LENS_FOCAL_LENGTH → const int
CAP_PROP_XI_LENS_FOCUS_DISTANCE → const int
CAP_PROP_XI_LENS_FOCUS_MOVE → const int
CAP_PROP_XI_LENS_FOCUS_MOVEMENT_VALUE → const int
CAP_PROP_XI_LENS_MODE → const int
CAP_PROP_XI_LIMIT_BANDWIDTH → const int
CAP_PROP_XI_LUT_EN → const int
CAP_PROP_XI_LUT_INDEX → const int
CAP_PROP_XI_LUT_VALUE → const int
CAP_PROP_XI_MANUAL_WB → const int
CAP_PROP_XI_OFFSET_X → const int
CAP_PROP_XI_OFFSET_Y → const int
CAP_PROP_XI_OUTPUT_DATA_BIT_DEPTH → const int
CAP_PROP_XI_OUTPUT_DATA_PACKING → const int
CAP_PROP_XI_OUTPUT_DATA_PACKING_TYPE → const int
CAP_PROP_XI_RECENT_FRAME → const int
CAP_PROP_XI_REGION_MODE → const int
CAP_PROP_XI_REGION_SELECTOR → const int
CAP_PROP_XI_ROW_FPN_CORRECTION → const int
CAP_PROP_XI_SENSOR_BOARD_TEMP → const int
CAP_PROP_XI_SENSOR_CLOCK_FREQ_HZ → const int
CAP_PROP_XI_SENSOR_CLOCK_FREQ_INDEX → const int
CAP_PROP_XI_SENSOR_DATA_BIT_DEPTH → const int
CAP_PROP_XI_SENSOR_FEATURE_SELECTOR → const int
CAP_PROP_XI_SENSOR_FEATURE_VALUE → const int
CAP_PROP_XI_SENSOR_MODE → const int
CAP_PROP_XI_SENSOR_OUTPUT_CHANNEL_COUNT → const int
CAP_PROP_XI_SENSOR_TAPS → const int
CAP_PROP_XI_SHARPNESS → const int
CAP_PROP_XI_SHUTTER_TYPE → const int
CAP_PROP_XI_TARGET_TEMP → const int
CAP_PROP_XI_TEST_PATTERN → const int
CAP_PROP_XI_TEST_PATTERN_GENERATOR_SELECTOR → const int
CAP_PROP_XI_TIMEOUT → const int
CAP_PROP_XI_TRANSPORT_PIXEL_FORMAT → const int
CAP_PROP_XI_TRG_DELAY → const int
CAP_PROP_XI_TRG_SELECTOR → const int
CAP_PROP_XI_TRG_SOFTWARE → const int
CAP_PROP_XI_TRG_SOURCE → const int
CAP_PROP_XI_TS_RST_MODE → const int
CAP_PROP_XI_TS_RST_SOURCE → const int
CAP_PROP_XI_USED_FFS_SIZE → const int
CAP_PROP_XI_WB_KB → const int
CAP_PROP_XI_WB_KG → const int
CAP_PROP_XI_WB_KR → const int
CAP_PROP_XI_WIDTH → const int
CAP_PROP_ZOOM → const int
CAP_PVAPI → const int
CAP_PVAPI_DECIMATION_2OUTOF16 → const int
CAP_PVAPI_DECIMATION_2OUTOF4 → const int
CAP_PVAPI_DECIMATION_2OUTOF8 → const int
CAP_PVAPI_DECIMATION_OFF → const int
CAP_PVAPI_FSTRIGMODE_FIXEDRATE → const int
CAP_PVAPI_FSTRIGMODE_FREERUN → const int
CAP_PVAPI_FSTRIGMODE_SOFTWARE → const int
CAP_PVAPI_FSTRIGMODE_SYNCIN1 → const int
CAP_PVAPI_FSTRIGMODE_SYNCIN2 → const int
CAP_PVAPI_PIXELFORMAT_BAYER16 → const int
CAP_PVAPI_PIXELFORMAT_BAYER8 → const int
CAP_PVAPI_PIXELFORMAT_BGR24 → const int
CAP_PVAPI_PIXELFORMAT_BGRA32 → const int
CAP_PVAPI_PIXELFORMAT_MONO16 → const int
CAP_PVAPI_PIXELFORMAT_MONO8 → const int
CAP_PVAPI_PIXELFORMAT_RGB24 → const int
CAP_PVAPI_PIXELFORMAT_RGBA32 → const int
CAP_QT → const int
CAP_REALSENSE → const int
CAP_UEYE → const int
CAP_UNICAP → const int
CAP_V4L → const int
CAP_V4L2 → const int
CAP_VFW → const int
CAP_WINRT → const int
CAP_XIAPI → const int
CAP_XINE → const int
CC_STAT_AREA → const int
CC_STAT_HEIGHT → const int
CC_STAT_LEFT → const int
CC_STAT_MAX → const int
CC_STAT_TOP → const int
CC_STAT_WIDTH → const int
CCL_BBDT → const int
CCL_BOLELLI → const int
CCL_DEFAULT → const int
CCL_GRANA → const int
CCL_SAUF → const int
CCL_SPAGHETTI → const int
CCL_WU → const int
CHAIN_APPROX_NONE → const int
CHAIN_APPROX_SIMPLE → const int
CHAIN_APPROX_TC89_KCOS → const int
CHAIN_APPROX_TC89_L1 → const int
CMP_EQ → const int
CMP_GE → const int
CMP_GT → const int
CMP_LE → const int
CMP_LT → const int
CMP_NE → const int
COLOR_BayerBG2BGR → const int
COLOR_BayerBG2BGR_EA → const int
COLOR_BayerBG2BGR_VNG → const int
COLOR_BayerBG2BGRA → const int
COLOR_BayerBG2GRAY → const int
COLOR_BayerBG2RGB → const int
COLOR_BayerBG2RGB_EA → const int
COLOR_BayerBG2RGB_VNG → const int
COLOR_BayerBG2RGBA → const int
COLOR_BayerBGGR2BGR → const int
COLOR_BayerBGGR2BGR_EA → const int
COLOR_BayerBGGR2BGR_VNG → const int
COLOR_BayerBGGR2BGRA → const int
COLOR_BayerBGGR2GRAY → const int
COLOR_BayerBGGR2RGB → const int
COLOR_BayerBGGR2RGB_EA → const int
COLOR_BayerBGGR2RGB_VNG → const int
COLOR_BayerBGGR2RGBA → const int
COLOR_BayerGB2BGR → const int
COLOR_BayerGB2BGR_EA → const int
COLOR_BayerGB2BGR_VNG → const int
COLOR_BayerGB2BGRA → const int
COLOR_BayerGB2GRAY → const int
COLOR_BayerGB2RGB → const int
COLOR_BayerGB2RGB_EA → const int
COLOR_BayerGB2RGB_VNG → const int
COLOR_BayerGB2RGBA → const int
COLOR_BayerGBRG2BGR → const int
COLOR_BayerGBRG2BGR_EA → const int
COLOR_BayerGBRG2BGR_VNG → const int
COLOR_BayerGBRG2BGRA → const int
COLOR_BayerGBRG2GRAY → const int
COLOR_BayerGBRG2RGB → const int
COLOR_BayerGBRG2RGB_EA → const int
COLOR_BayerGBRG2RGB_VNG → const int
COLOR_BayerGBRG2RGBA → const int
COLOR_BayerGR2BGR → const int
COLOR_BayerGR2BGR_EA → const int
COLOR_BayerGR2BGR_VNG → const int
COLOR_BayerGR2BGRA → const int
COLOR_BayerGR2GRAY → const int
COLOR_BayerGR2RGB → const int
COLOR_BayerGR2RGB_EA → const int
COLOR_BayerGR2RGB_VNG → const int
COLOR_BayerGR2RGBA → const int
COLOR_BayerGRBG2BGR → const int
COLOR_BayerGRBG2BGR_EA → const int
COLOR_BayerGRBG2BGR_VNG → const int
COLOR_BayerGRBG2BGRA → const int
COLOR_BayerGRBG2GRAY → const int
COLOR_BayerGRBG2RGB → const int
COLOR_BayerGRBG2RGB_EA → const int
COLOR_BayerGRBG2RGB_VNG → const int
COLOR_BayerGRBG2RGBA → const int
COLOR_BayerRG2BGR → const int
COLOR_BayerRG2BGR_EA → const int
COLOR_BayerRG2BGR_VNG → const int
COLOR_BayerRG2BGRA → const int
COLOR_BayerRG2GRAY → const int
COLOR_BayerRG2RGB → const int
COLOR_BayerRG2RGB_EA → const int
COLOR_BayerRG2RGB_VNG → const int
COLOR_BayerRG2RGBA → const int
COLOR_BayerRGGB2BGR → const int
COLOR_BayerRGGB2BGR_EA → const int
COLOR_BayerRGGB2BGR_VNG → const int
COLOR_BayerRGGB2BGRA → const int
COLOR_BayerRGGB2GRAY → const int
COLOR_BayerRGGB2RGB → const int
COLOR_BayerRGGB2RGB_EA → const int
COLOR_BayerRGGB2RGB_VNG → const int
COLOR_BayerRGGB2RGBA → const int
COLOR_BGR2BGR555 → const int
COLOR_BGR2BGR565 → const int
COLOR_BGR2BGRA → const int
COLOR_BGR2GRAY → const int
COLOR_BGR2HLS → const int
COLOR_BGR2HLS_FULL → const int
COLOR_BGR2HSV → const int
COLOR_BGR2HSV_FULL → const int
COLOR_BGR2Lab → const int
COLOR_BGR2Luv → const int
COLOR_BGR2RGB → const int
COLOR_BGR2RGBA → const int
COLOR_BGR2XYZ → const int
COLOR_BGR2YCrCb → const int
COLOR_BGR2YUV → const int
COLOR_BGR2YUV_I420 → const int
COLOR_BGR2YUV_IYUV → const int
COLOR_BGR2YUV_UYNV → const int
COLOR_BGR2YUV_UYVY → const int
COLOR_BGR2YUV_Y422 → const int
COLOR_BGR2YUV_YUNV → const int
COLOR_BGR2YUV_YUY2 → const int
COLOR_BGR2YUV_YUYV → const int
COLOR_BGR2YUV_YV12 → const int
COLOR_BGR2YUV_YVYU → const int
COLOR_BGR5552BGR → const int
COLOR_BGR5552BGRA → const int
COLOR_BGR5552GRAY → const int
COLOR_BGR5552RGB → const int
COLOR_BGR5552RGBA → const int
COLOR_BGR5652BGR → const int
COLOR_BGR5652BGRA → const int
COLOR_BGR5652GRAY → const int
COLOR_BGR5652RGB → const int
COLOR_BGR5652RGBA → const int
COLOR_BGRA2BGR → const int
COLOR_BGRA2BGR555 → const int
COLOR_BGRA2BGR565 → const int
COLOR_BGRA2GRAY → const int
COLOR_BGRA2RGB → const int
COLOR_BGRA2RGBA → const int
COLOR_BGRA2YUV_I420 → const int
COLOR_BGRA2YUV_IYUV → const int
COLOR_BGRA2YUV_UYNV → const int
COLOR_BGRA2YUV_UYVY → const int
COLOR_BGRA2YUV_Y422 → const int
COLOR_BGRA2YUV_YUNV → const int
COLOR_BGRA2YUV_YUY2 → const int
COLOR_BGRA2YUV_YUYV → const int
COLOR_BGRA2YUV_YV12 → const int
COLOR_BGRA2YUV_YVYU → const int
COLOR_COLORCVT_MAX → const int
COLOR_GRAY2BGR → const int
COLOR_GRAY2BGR555 → const int
COLOR_GRAY2BGR565 → const int
COLOR_GRAY2BGRA → const int
COLOR_GRAY2RGB → const int
COLOR_GRAY2RGBA → const int
COLOR_HLS2BGR → const int
COLOR_HLS2BGR_FULL → const int
COLOR_HLS2RGB → const int
COLOR_HLS2RGB_FULL → const int
COLOR_HSV2BGR → const int
COLOR_HSV2BGR_FULL → const int
COLOR_HSV2RGB → const int
COLOR_HSV2RGB_FULL → const int
COLOR_Lab2BGR → const int
COLOR_Lab2LBGR → const int
COLOR_Lab2LRGB → const int
COLOR_Lab2RGB → const int
COLOR_LBGR2Lab → const int
COLOR_LBGR2Luv → const int
COLOR_LRGB2Lab → const int
COLOR_LRGB2Luv → const int
COLOR_Luv2BGR → const int
COLOR_Luv2LBGR → const int
COLOR_Luv2LRGB → const int
COLOR_Luv2RGB → const int
COLOR_mRGBA2RGBA → const int
COLOR_RGB2BGR → const int
COLOR_RGB2BGR555 → const int
COLOR_RGB2BGR565 → const int
COLOR_RGB2BGRA → const int
COLOR_RGB2GRAY → const int
COLOR_RGB2HLS → const int
COLOR_RGB2HLS_FULL → const int
COLOR_RGB2HSV → const int
COLOR_RGB2HSV_FULL → const int
COLOR_RGB2Lab → const int
COLOR_RGB2Luv → const int
COLOR_RGB2RGBA → const int
COLOR_RGB2XYZ → const int
COLOR_RGB2YCrCb → const int
COLOR_RGB2YUV → const int
COLOR_RGB2YUV_I420 → const int
COLOR_RGB2YUV_IYUV → const int
COLOR_RGB2YUV_UYNV → const int
COLOR_RGB2YUV_UYVY → const int
COLOR_RGB2YUV_Y422 → const int
COLOR_RGB2YUV_YUNV → const int
COLOR_RGB2YUV_YUY2 → const int
COLOR_RGB2YUV_YUYV → const int
COLOR_RGB2YUV_YV12 → const int
COLOR_RGB2YUV_YVYU → const int
COLOR_RGBA2BGR → const int
COLOR_RGBA2BGR555 → const int
COLOR_RGBA2BGR565 → const int
COLOR_RGBA2BGRA → const int
COLOR_RGBA2GRAY → const int
COLOR_RGBA2mRGBA → const int
COLOR_RGBA2RGB → const int
COLOR_RGBA2YUV_I420 → const int
COLOR_RGBA2YUV_IYUV → const int
COLOR_RGBA2YUV_UYNV → const int
COLOR_RGBA2YUV_UYVY → const int
COLOR_RGBA2YUV_Y422 → const int
COLOR_RGBA2YUV_YUNV → const int
COLOR_RGBA2YUV_YUY2 → const int
COLOR_RGBA2YUV_YUYV → const int
COLOR_RGBA2YUV_YV12 → const int
COLOR_RGBA2YUV_YVYU → const int
COLOR_XYZ2BGR → const int
COLOR_XYZ2RGB → const int
COLOR_YCrCb2BGR → const int
COLOR_YCrCb2RGB → const int
COLOR_YUV2BGR → const int
COLOR_YUV2BGR_I420 → const int
COLOR_YUV2BGR_IYUV → const int
COLOR_YUV2BGR_NV12 → const int
COLOR_YUV2BGR_NV21 → const int
COLOR_YUV2BGR_UYNV → const int
COLOR_YUV2BGR_UYVY → const int
COLOR_YUV2BGR_Y422 → const int
COLOR_YUV2BGR_YUNV → const int
COLOR_YUV2BGR_YUY2 → const int
COLOR_YUV2BGR_YUYV → const int
COLOR_YUV2BGR_YV12 → const int
COLOR_YUV2BGR_YVYU → const int
COLOR_YUV2BGRA_I420 → const int
COLOR_YUV2BGRA_IYUV → const int
COLOR_YUV2BGRA_NV12 → const int
COLOR_YUV2BGRA_NV21 → const int
COLOR_YUV2BGRA_UYNV → const int
COLOR_YUV2BGRA_UYVY → const int
COLOR_YUV2BGRA_Y422 → const int
COLOR_YUV2BGRA_YUNV → const int
COLOR_YUV2BGRA_YUY2 → const int
COLOR_YUV2BGRA_YUYV → const int
COLOR_YUV2BGRA_YV12 → const int
COLOR_YUV2BGRA_YVYU → const int
COLOR_YUV2GRAY_420 → const int
COLOR_YUV2GRAY_I420 → const int
COLOR_YUV2GRAY_IYUV → const int
COLOR_YUV2GRAY_NV12 → const int
COLOR_YUV2GRAY_NV21 → const int
COLOR_YUV2GRAY_UYNV → const int
COLOR_YUV2GRAY_UYVY → const int
COLOR_YUV2GRAY_Y422 → const int
COLOR_YUV2GRAY_YUNV → const int
COLOR_YUV2GRAY_YUY2 → const int
COLOR_YUV2GRAY_YUYV → const int
COLOR_YUV2GRAY_YV12 → const int
COLOR_YUV2GRAY_YVYU → const int
COLOR_YUV2RGB → const int
COLOR_YUV2RGB_I420 → const int
COLOR_YUV2RGB_IYUV → const int
COLOR_YUV2RGB_NV12 → const int
COLOR_YUV2RGB_NV21 → const int
COLOR_YUV2RGB_UYNV → const int
COLOR_YUV2RGB_UYVY → const int
COLOR_YUV2RGB_Y422 → const int
COLOR_YUV2RGB_YUNV → const int
COLOR_YUV2RGB_YUY2 → const int
COLOR_YUV2RGB_YUYV → const int
COLOR_YUV2RGB_YV12 → const int
COLOR_YUV2RGB_YVYU → const int
COLOR_YUV2RGBA_I420 → const int
COLOR_YUV2RGBA_IYUV → const int
COLOR_YUV2RGBA_NV12 → const int
COLOR_YUV2RGBA_NV21 → const int
COLOR_YUV2RGBA_UYNV → const int
COLOR_YUV2RGBA_UYVY → const int
COLOR_YUV2RGBA_Y422 → const int
COLOR_YUV2RGBA_YUNV → const int
COLOR_YUV2RGBA_YUY2 → const int
COLOR_YUV2RGBA_YUYV → const int
COLOR_YUV2RGBA_YV12 → const int
COLOR_YUV2RGBA_YVYU → const int
COLOR_YUV420p2BGR → const int
COLOR_YUV420p2BGRA → const int
COLOR_YUV420p2GRAY → const int
COLOR_YUV420p2RGB → const int
COLOR_YUV420p2RGBA → const int
COLOR_YUV420sp2BGR → const int
COLOR_YUV420sp2BGRA → const int
COLOR_YUV420sp2GRAY → const int
COLOR_YUV420sp2RGB → const int
COLOR_YUV420sp2RGBA → const int
COLORMAP_AUTUMN → const int
COLORMAP_BONE → const int
COLORMAP_CIVIDIS → const int
COLORMAP_COOL → const int
COLORMAP_DEEPGREEN → const int
COLORMAP_HOT → const int
COLORMAP_HSV → const int
COLORMAP_INFERNO → const int
COLORMAP_JET → const int
COLORMAP_MAGMA → const int
COLORMAP_OCEAN → const int
COLORMAP_PARULA → const int
COLORMAP_PINK → const int
COLORMAP_PLASMA → const int
COLORMAP_RAINBOW → const int
COLORMAP_SPRING → const int
COLORMAP_SUMMER → const int
COLORMAP_TURBO → const int
COLORMAP_TWILIGHT → const int
COLORMAP_TWILIGHT_SHIFTED → const int
COLORMAP_VIRIDIS → const int
COLORMAP_WINTER → const int
CONTOURS_MATCH_I1 → const int
CONTOURS_MATCH_I2 → const int
CONTOURS_MATCH_I3 → const int
COVAR_COLS → const int
COVAR_NORMAL → const int
COVAR_ROWS → const int
COVAR_SCALE → const int
COVAR_SCRAMBLED → const int
COVAR_USE_AVG → const int
CV_2PI → const double
CV__CAP_PROP_LATEST → const int
CV__VIDEOWRITER_PROP_LATEST → const int
CV_F32_MAX → const double
CV_F64_MAX → const double
CV_I16_MAX → const int
CV_I16_MIN → const int
CV_I32_MAX → const int
CV_I32_MIN → const int
CV_I8_MAX → const int
CV_I8_MIN → const int
CV_LOG2 → const double
CV_PI → const double
CV_U16_MAX → const int
CV_U16_MIN → const int
CV_U32_MAX → const int
CV_U32_MIN → const int
CV_U8_MAX → const int
CV_U8_MIN → const int
cvRunAsync → const Future<T> Function<T>(Pointer<CvStatus> func(CvCallback_0 callback), void onComplete(Completer<T> completer))
DCT_INVERSE → const int
DCT_ROWS → const int
DECOMP_CHOLESKY → const int
DECOMP_EIG → const int
DECOMP_LU → const int
DECOMP_NORMAL → const int
DECOMP_QR → const int
DECOMP_SVD → const int
DFT_COMPLEX_INPUT → const int
DFT_COMPLEX_OUTPUT → const int
DFT_INVERSE → const int
DFT_REAL_OUTPUT → const int
DFT_ROWS → const int
DFT_SCALE → const int
DIST_C → const int
DIST_FAIR → const int
DIST_HUBER → const int
DIST_L1 → const int
DIST_L12 → const int
DIST_L2 → const int
DIST_LABEL_CCOMP → const int
DIST_LABEL_PIXEL → const int
DIST_MASK_3 → const int
DIST_MASK_5 → const int
DIST_MASK_PRECISE → const int
DIST_USER → const int
DIST_WELSCH → const int
DNN_BACKEND_CANN → const int
DNN_BACKEND_CUDA → const int
DNN_BACKEND_DEFAULT → const int
DNN_BACKEND_HALIDE → const int
DNN_BACKEND_INFERENCE_ENGINE → const int
DNN_BACKEND_OPENCV → const int
DNN_BACKEND_TIMVX → const int
DNN_BACKEND_VKCOM → const int
DNN_BACKEND_WEBNN → const int
DNN_TARGET_CPU → const int
DNN_TARGET_CPU_FP16 → const int
Only the ARM platform is supported. Uses low-precision computation to accelerate model inference.
DNN_TARGET_CUDA → const int
DNN_TARGET_CUDA_FP16 → const int
DNN_TARGET_FPGA → const int
FPGA device with CPU fallbacks using Inference Engine's Heterogeneous plugin.
DNN_TARGET_HDDL → const int
DNN_TARGET_MYRIAD → const int
DNN_TARGET_NPU → const int
DNN_TARGET_OPENCL → const int
DNN_TARGET_OPENCL_FP16 → const int
DNN_TARGET_VULKAN → const int
FILLED → const int
FILTER_SCHARR → const int
FLOODFILL_FIXED_RANGE → const int
FLOODFILL_MASK_ONLY → const int
FONT_HERSHEY_COMPLEX → const int
FONT_HERSHEY_COMPLEX_SMALL → const int
FONT_HERSHEY_DUPLEX → const int
FONT_HERSHEY_PLAIN → const int
FONT_HERSHEY_SCRIPT_COMPLEX → const int
FONT_HERSHEY_SCRIPT_SIMPLEX → const int
FONT_HERSHEY_SIMPLEX → const int
FONT_HERSHEY_TRIPLEX → const int
FONT_ITALIC → const int
GC_BGD → const int
GC_EVAL → const int
GC_EVAL_FREEZE_MODEL → const int
GC_FGD → const int
GC_INIT_WITH_MASK → const int
GC_INIT_WITH_RECT → const int
GC_PR_BGD → const int
GC_PR_FGD → const int
GEMM_1_T → const int
GEMM_2_T → const int
GEMM_3_T → const int
HISTCMP_BHATTACHARYYA → const int
HISTCMP_CHISQR → const int
HISTCMP_CHISQR_ALT → const int
HISTCMP_CORREL → const int
HISTCMP_HELLINGER → const int
HISTCMP_INTERSECT → const int
HISTCMP_KL_DIV → const int
HOMOGRAPY_ALL_POINTS → const int
HOMOGRAPY_LMEDS → const int
HOMOGRAPY_RANSAC → const int
HOUGH_GRADIENT → const int
HOUGH_GRADIENT_ALT → const int
HOUGH_MULTI_SCALE → const int
HOUGH_PROBABILISTIC → const int
HOUGH_STANDARD → const int
IMREAD_ANYCOLOR → const int
IMREAD_ANYDEPTH → const int
IMREAD_COLOR → const int
IMREAD_GRAYSCALE → const int
IMREAD_IGNORE_ORIENTATION → const int
IMREAD_LOAD_GDAL → const int
IMREAD_REDUCED_COLOR_2 → const int
IMREAD_REDUCED_COLOR_4 → const int
IMREAD_REDUCED_COLOR_8 → const int
IMREAD_REDUCED_GRAYSCALE_2 → const int
IMREAD_REDUCED_GRAYSCALE_4 → const int
IMREAD_REDUCED_GRAYSCALE_8 → const int
IMREAD_UNCHANGED → const int
IMWRITE_AVIF_DEPTH → const int
IMWRITE_AVIF_QUALITY → const int
IMWRITE_AVIF_SPEED → const int
IMWRITE_EXR_COMPRESSION → const int
IMWRITE_EXR_COMPRESSION_B44 → const int
IMWRITE_EXR_COMPRESSION_B44A → const int
IMWRITE_EXR_COMPRESSION_DWAA → const int
IMWRITE_EXR_COMPRESSION_DWAB → const int
IMWRITE_EXR_COMPRESSION_NO → const int
IMWRITE_EXR_COMPRESSION_PIZ → const int
IMWRITE_EXR_COMPRESSION_PXR24 → const int
IMWRITE_EXR_COMPRESSION_RLE → const int
IMWRITE_EXR_COMPRESSION_ZIP → const int
IMWRITE_EXR_COMPRESSION_ZIPS → const int
IMWRITE_EXR_DWA_COMPRESSION_LEVEL → const int
IMWRITE_EXR_TYPE → const int
IMWRITE_EXR_TYPE_FLOAT → const int
IMWRITE_EXR_TYPE_HALF → const int
IMWRITE_HDR_COMPRESSION → const int
IMWRITE_HDR_COMPRESSION_NONE → const int
IMWRITE_HDR_COMPRESSION_RLE → const int
IMWRITE_JPEG2000_COMPRESSION_X1000 → const int
IMWRITE_JPEG_CHROMA_QUALITY → const int
IMWRITE_JPEG_LUMA_QUALITY → const int
IMWRITE_JPEG_OPTIMIZE → const int
IMWRITE_JPEG_PROGRESSIVE → const int
IMWRITE_JPEG_QUALITY → const int
IMWRITE_JPEG_RST_INTERVAL → const int
IMWRITE_JPEG_SAMPLING_FACTOR → const int
IMWRITE_JPEG_SAMPLING_FACTOR_411 → const int
IMWRITE_JPEG_SAMPLING_FACTOR_420 → const int
IMWRITE_JPEG_SAMPLING_FACTOR_422 → const int
IMWRITE_JPEG_SAMPLING_FACTOR_440 → const int
IMWRITE_JPEG_SAMPLING_FACTOR_444 → const int
IMWRITE_PAM_FORMAT_BLACKANDWHITE → const int
IMWRITE_PAM_FORMAT_GRAYSCALE → const int
IMWRITE_PAM_FORMAT_GRAYSCALE_ALPHA → const int
IMWRITE_PAM_FORMAT_NULL → const int
IMWRITE_PAM_FORMAT_RGB → const int
IMWRITE_PAM_FORMAT_RGB_ALPHA → const int
IMWRITE_PAM_TUPLETYPE → const int
IMWRITE_PNG_BILEVEL → const int
IMWRITE_PNG_COMPRESSION → const int
IMWRITE_PNG_STRATEGY → const int
IMWRITE_PNG_STRATEGY_DEFAULT → const int
IMWRITE_PNG_STRATEGY_FILTERED → const int
IMWRITE_PNG_STRATEGY_FIXED → const int
IMWRITE_PNG_STRATEGY_HUFFMAN_ONLY → const int
IMWRITE_PNG_STRATEGY_RLE → const int
IMWRITE_PXM_BINARY → const int
IMWRITE_TIFF_COMPRESSION → const int
IMWRITE_TIFF_RESUNIT → const int
IMWRITE_TIFF_XDPI → const int
IMWRITE_TIFF_YDPI → const int
IMWRITE_WEBP_QUALITY → const int
INPAINT_NS → const int
INPAINT_TELEA → const int
INTER_AREA → const int
INTER_BITS → const int
INTER_BITS2 → const int
INTER_CUBIC → const int
INTER_LANCZOS4 → const int
INTER_LINEAR → const int
INTER_LINEAR_EXACT → const int
INTER_MAX → const int
INTER_NEAREST → const int
INTER_NEAREST_EXACT → const int
INTER_TAB_SIZE → const int
INTER_TAB_SIZE2 → const int
INTERSECT_FULL → const int
INTERSECT_NONE → const int
INTERSECT_PARTIAL → const int
KMEANS_PP_CENTERS → const int
KMEANS_RANDOM_CENTERS → const int
KMEANS_USE_INITIAL_LABELS → const int
LINE_4 → const int
LINE_8 → const int
LINE_AA → const int
LMEDS → const int
LOG_LEVEL_DEBUG → const int
LOG_LEVEL_ERROR → const int
LOG_LEVEL_FATAL → const int
LOG_LEVEL_INFO → const int
LOG_LEVEL_SILENT → const int
Constants for log levels
LOG_LEVEL_VERBOSE → const int
LOG_LEVEL_WARNING → const int
LSD_REFINE_ADV → const int
LSD_REFINE_NONE → const int
LSD_REFINE_STD → const int
MARKER_CROSS → const int
MARKER_DIAMOND → const int
MARKER_SQUARE → const int
MARKER_STAR → const int
MARKER_TILTED_CROSS → const int
MARKER_TRIANGLE_DOWN → const int
MARKER_TRIANGLE_UP → const int
MIXED_CLONE → const int
MONOCHROME_TRANSFER → const int
MORPH_BLACKHAT → const int
MORPH_CLOSE → const int
MORPH_CROSS → const int
MORPH_DILATE → const int
MORPH_ELLIPSE → const int
MORPH_ERODE → const int
MORPH_GRADIENT → const int
MORPH_HITMISS → const int
MORPH_OPEN → const int
MORPH_RECT → const int
MORPH_TOPHAT → const int
MOTION_AFFINE → const int
MOTION_EUCLIDEAN → const int
MOTION_HOMOGRAPHY → const int
MOTION_TRANSLATION → const int
NORM_HAMMING → const int
NORM_HAMMING2 → const int
NORM_INF → const int
NORM_L1 → const int
NORM_L2 → const int
NORM_L2SQR → const int
NORM_MINMAX → const int
NORM_RELATIVE → const int
NORM_TYPE_MASK → const int
NORMAL_CLONE → const int
NORMCONV_FILTER → const int
OPTFLOW_FARNEBACK_GAUSSIAN → const int
OPTFLOW_LK_GET_MIN_EIGENVALS → const int
OPTFLOW_USE_INITIAL_FLOW → const int
RANSAC → const int
RECURS_FILTER → const int
REDUCE_AVG → const int
REDUCE_MAX → const int
REDUCE_MIN → const int
REDUCE_SUM → const int
REDUCE_SUM2 → const int
RETR_CCOMP → const int
RETR_EXTERNAL → const int
RETR_FLOODFILL → const int
RETR_LIST → const int
RETR_TREE → const int
RHO → const int
RNG_DIST_NORMAL → const int
RNG_DIST_UNIFORM → const int
ROTATE_180 → const int
ROTATE_90_CLOCKWISE → const int
ROTATE_90_COUNTERCLOCKWISE → const int
SORT_ASCENDING → const int
SORT_DESCENDING → const int
SORT_EVERY_COLUMN → const int
SORT_EVERY_ROW → const int
TERM_COUNT → const int
TERM_EPS → const int
TERM_MAX_ITER → const int
THRESH_BINARY → const int
THRESH_BINARY_INV → const int
THRESH_MASK → const int
THRESH_OTSU → const int
THRESH_TOZERO → const int
THRESH_TOZERO_INV → const int
THRESH_TRIANGLE → const int
THRESH_TRUNC → const int
TM_CCOEFF → const int
TM_CCOEFF_NORMED → const int
TM_CCORR → const int
TM_CCORR_NORMED → const int
TM_SQDIFF → const int
TM_SQDIFF_NORMED → const int
USAC_ACCURATE → const int
USAC_DEFAULT → const int
USAC_FAST → const int
USAC_FM_8PTS → const int
USAC_MAGSAC → const int
USAC_PARALLEL → const int
USAC_PROSAC → const int
VIDEOWRITER_PROP_DEPTH → const int
VIDEOWRITER_PROP_FRAMEBYTES → const int
VIDEOWRITER_PROP_HW_ACCELERATION → const int
VIDEOWRITER_PROP_HW_ACCELERATION_USE_OPENCL → const int
VIDEOWRITER_PROP_HW_DEVICE → const int
VIDEOWRITER_PROP_IS_COLOR → const int
VIDEOWRITER_PROP_KEY_FLAG → const int
VIDEOWRITER_PROP_KEY_INTERVAL → const int
VIDEOWRITER_PROP_NSTRIPES → const int
VIDEOWRITER_PROP_QUALITY → const int
VIDEOWRITER_PROP_RAW_VIDEO → const int
WARP_FILL_OUTLIERS → const int
WARP_INVERSE_MAP → const int
WARP_POLAR_LINEAR → const int
WARP_POLAR_LOG → const int

Functions

absDiff(Mat src1, Mat src2, {Mat? dst}) Mat
AbsDiff calculates the per-element absolute difference between two arrays or between an array and a scalar.
absDiffAsync(Mat src1, Mat src2, {Mat? dst}) Future<Mat>
AbsDiff calculates the per-element absolute difference between two arrays or between an array and a scalar.
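A minimal frame-differencing sketch using absDiff; the import path, imread, and the file names are assumptions:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final a = cv.imread('frame_000.png'); // any two same-sized, same-type Mats work
    final b = cv.imread('frame_001.png');
    final diff = cv.absDiff(a, b); // per-element |a - b|
    print('diff: ${diff.rows}x${diff.cols}');
  }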
accumulate(InputArray src, InputOutputArray dst, {InputArray? mask}) Mat
Adds an image to the accumulator image.
accumulateAsync(InputArray src, InputOutputArray dst, {InputArray? mask}) Future<Mat>
Adds an image to the accumulator image.
accumulateProduct(InputArray src1, InputArray src2, InputOutputArray dst, {InputArray? mask}) Mat
Adds the per-element product of two input images to the accumulator image.
accumulateProductAsync(InputArray src1, InputArray src2, InputOutputArray dst, {InputArray? mask}) Future<Mat>
Adds the per-element product of two input images to the accumulator image.
accumulateSquare(InputArray src, InputOutputArray dst, {InputArray? mask}) Mat
Adds the square of a source image to the accumulator image.
accumulateSquareAsync(InputArray src, InputOutputArray dst, {InputArray? mask}) Future<Mat>
Adds the square of a source image to the accumulator image.
accumulateWeighted(InputArray src, InputOutputArray dst, double alpha, {InputArray? mask}) Mat
Updates a running average.
accumulateWeightedAsync(InputArray src, InputOutputArray dst, double alpha, {InputArray? mask}) Future<Mat>
Updates a running average.
adaptiveThreshold(InputArray src, double maxValue, int adaptiveMethod, int thresholdType, int blockSize, double C, {OutputArray? dst}) Mat
AdaptiveThreshold applies a fixed-level threshold to each array element.
adaptiveThresholdAsync(InputArray src, double maxValue, int adaptiveMethod, int thresholdType, int blockSize, double C, {OutputArray? dst}) Future<Mat>
AdaptiveThreshold applies a fixed-level threshold to each array element.
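A sketch of binarizing a scanned page with adaptiveThreshold, using the ADAPTIVE_THRESH_GAUSSIAN_C and THRESH_BINARY constants listed above; imread/imwrite, the flags parameter name, and the file paths are assumptions:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final gray = cv.imread('page.png', flags: cv.IMREAD_GRAYSCALE);
    // Each pixel is compared to the Gaussian-weighted mean of its 11x11
    // neighbourhood minus C = 2, so uneven lighting does not break the threshold.
    final bin = cv.adaptiveThreshold(
        gray, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY, 11, 2);
    cv.imwrite('page_bin.png', bin);
  }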
add(Mat src1, Mat src2, {Mat? dst, int dtype = -1, Mat? mask}) Mat
Add calculates the per-element sum of two arrays or an array and a scalar.
addAsync(Mat src1, Mat src2, {Mat? dst, int dtype = -1, Mat? mask}) Future<Mat>
Add calculates the per-element sum of two arrays or an array and a scalar.
addWeighted(InputArray src1, double alpha, InputArray src2, double beta, double gamma, {OutputArray? dst, int dtype = -1}) Mat
AddWeighted calculates the weighted sum of two arrays.
addWeightedAsync(InputArray src1, double alpha, InputArray src2, double beta, double gamma, {OutputArray? dst, int dtype = -1}) Future<Mat>
AddWeighted calculates the weighted sum of two arrays.
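A sketch of cross-fading two equally sized images with addWeighted (dst = alpha*src1 + beta*src2 + gamma); imread/imwrite and the paths are assumptions:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final fg = cv.imread('overlay.png');    // must match bg in size and type
    final bg = cv.imread('background.png');
    final blended = cv.addWeighted(fg, 0.3, bg, 0.7, 0); // 30% fg + 70% bg
    cv.imwrite('blended.png', blended);
  }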
applyColorMap(InputArray src, int colormap, {OutputArray? dst}) Mat
ApplyColorMap applies a GNU Octave/MATLAB equivalent colormap on a given image. colormap: ColormapTypes For further details, please see: https://docs.opencv.org/master/d3/d50/group__imgproc__colormap.html#gadf478a5e5ff49d8aa24e726ea6f65d15
applyColorMapAsync(InputArray src, int colormap, {OutputArray? dst}) Future<Mat>
ApplyColorMap applies a GNU Octave/MATLAB equivalent colormap on a given image. colormap: ColormapTypes For further details, please see: https://docs.opencv.org/master/d3/d50/group__imgproc__colormap.html#gadf478a5e5ff49d8aa24e726ea6f65d15
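A sketch of false-coloring a grayscale image with applyColorMap and the COLORMAP_JET constant listed above; imread/imwrite and the paths are assumptions:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final gray = cv.imread('depth.png', flags: cv.IMREAD_GRAYSCALE);
    // Any of the COLORMAP_* constants above can be passed as the colormap argument.
    final colored = cv.applyColorMap(gray, cv.COLORMAP_JET);
    cv.imwrite('depth_jet.png', colored);
  }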
applyCustomColorMap(InputArray src, InputArray userColor, {OutputArray? dst}) Mat
ApplyCustomColorMap applies a custom defined colormap on a given image.
applyCustomColorMapAsync(InputArray src, InputArray userColor, {OutputArray? dst}) Future<Mat>
ApplyCustomColorMap applies a custom defined colormap on a given image.
approxPolyDP(VecPoint curve, double epsilon, bool closed) VecPoint
ApproxPolyDP approximates a polygonal curve(s) with the specified precision.
approxPolyDPAsync(VecPoint curve, double epsilon, bool closed) Future<VecPoint>
ApproxPolyDP approximates a polygonal curve(s) with the specified precision.
arcLength(VecPoint curve, bool closed) double
ArcLength calculates a contour perimeter or a curve length.
arcLengthAsync(VecPoint curve, bool closed) Future<double>
ArcLength calculates a contour perimeter or a curve length.
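A sketch combining arcLength and approxPolyDP in the usual epsilon-as-a-fraction-of-perimeter pattern; the point data and the VecPoint.fromList constructor are assumptions for illustration:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    // A hypothetical, slightly noisy rectangle outline.
    final curve = cv.VecPoint.fromList([
      cv.Point(0, 0), cv.Point(100, 1), cv.Point(101, 50),
      cv.Point(100, 100), cv.Point(0, 99),
    ]);
    final perimeter = cv.arcLength(curve, true);
    // Allow vertices to deviate by up to 2% of the perimeter.
    final approx = cv.approxPolyDP(curve, 0.02 * perimeter, true);
    print('perimeter=$perimeter, vertices=${approx.length}');
  }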
arrowedLine(InputOutputArray img, Point pt1, Point pt2, Scalar color, {int thickness = 1, int line_type = 8, int shift = 0, double tipLength = 0.1}) Mat
ArrowedLine draws an arrow segment pointing from the first point to the second one.
arrowedLineAsync(InputOutputArray img, Point pt1, Point pt2, Scalar color, {int thickness = 1, int line_type = 8, int shift = 0, double tipLength = 0.1}) Future<Mat>
ArrowedLine draws an arrow segment pointing from the first point to the second one.
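A drawing sketch with arrowedLine; imread/imwrite, the Scalar constructor (BGR(A) channel order), and the paths are assumptions:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final img = cv.imread('canvas.png');
    // Red arrow (BGR order) from (20,20) to (180,120), head 20% of the segment length.
    cv.arrowedLine(img, cv.Point(20, 20), cv.Point(180, 120),
        cv.Scalar(0, 0, 255, 0), thickness: 2, tipLength: 0.2);
    cv.imwrite('canvas_arrow.png', img);
  }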
arucoDrawDetectedMarkers(Mat img, VecVecPoint2f markerCorners, VecI32 markerIds, Scalar borderColor) → void
arucoDrawDetectedMarkersAsync(Mat img, VecVecPoint2f markerCorners, VecI32 markerIds, Scalar borderColor) Future<void>
arucoGenerateImageMarker(PredefinedDictionaryType dictionaryId, int id, int sidePixels, int borderBits, [Mat? outImg]) Mat
arucoGenerateImageMarkerAsync(PredefinedDictionaryType dictionaryId, int id, int sidePixels, int borderBits, [Mat? outImg]) Future<Mat>
batchDistance(InputArray src1, InputArray src2, int dtype, {OutputArray? dist, OutputArray? nidx, int normType = NORM_L2, int K = 0, InputArray? mask, int update = 0, bool crosscheck = false}) → (Mat, Mat)
BatchDistance is a naive nearest neighbor finder.
batchDistanceAsync(InputArray src1, InputArray src2, int dtype, {OutputArray? dist, OutputArray? nidx, int normType = NORM_L2, int K = 0, InputArray? mask, int update = 0, bool crosscheck = false}) Future<(Mat, Mat)>
BatchDistance is a naive nearest neighbor finder.
bilateralFilter(Mat src, int diameter, double sigmaColor, double sigmaSpace, {Mat? dst}) Mat
BilateralFilter applies a bilateral filter to an image.
bilateralFilterAsync(Mat src, int diameter, double sigmaColor, double sigmaSpace, {Mat? dst}) Future<Mat>
BilateralFilter applies a bilateral filter to an image.
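A denoising sketch with bilateralFilter; imread/imwrite, the paths, and the filter settings are assumptions:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final src = cv.imread('portrait.png');
    // Diameter 9, sigmaColor = sigmaSpace = 75: smooths texture while keeping edges sharp.
    final smoothed = cv.bilateralFilter(src, 9, 75, 75);
    cv.imwrite('portrait_smooth.png', smoothed);
  }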
bitwiseAND(InputArray src1, InputArray src2, {OutputArray? dst, InputArray? mask}) Mat
BitwiseAnd computes bitwise conjunction of the two arrays (dst = src1 & src2). Calculates the per-element bit-wise conjunction of two arrays or an array and a scalar.
bitwiseANDAsync(InputArray src1, InputArray src2, {OutputArray? dst, InputArray? mask}) Future<Mat>
BitwiseAnd computes bitwise conjunction of the two arrays (dst = src1 & src2). Calculates the per-element bit-wise conjunction of two arrays or an array and a scalar.
bitwiseNOT(InputArray src, {OutputArray? dst, InputArray? mask}) Mat
BitwiseNot inverts every bit of an array.
bitwiseNOTAsync(InputArray src, {OutputArray? dst, InputArray? mask}) Future<Mat>
BitwiseNot inverts every bit of an array.
bitwiseOR(InputArray src1, InputArray src2, {OutputArray? dst, InputArray? mask}) Mat
BitwiseOr calculates the per-element bit-wise disjunction of two arrays or an array and a scalar.
bitwiseORAsync(InputArray src1, InputArray src2, {OutputArray? dst, InputArray? mask}) Future<Mat>
BitwiseOr calculates the per-element bit-wise disjunction of two arrays or an array and a scalar.
bitwiseXOR(InputArray src1, InputArray src2, {OutputArray? dst, InputArray? mask}) Mat
BitwiseXor calculates the per-element bit-wise "exclusive or" operation on two arrays or an array and a scalar.
bitwiseXORAsync(InputArray src1, InputArray src2, {OutputArray? dst, InputArray? mask}) Future<Mat>
BitwiseXor calculates the per-element bit-wise "exclusive or" operation on two arrays or an array and a scalar.
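A masking sketch with bitwiseAND and bitwiseNOT; imread/imwrite, the paths, and the assumption that the mask is an 8-bit single-channel image are illustrative only:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final img = cv.imread('scene.png');
    final mask = cv.imread('mask.png', flags: cv.IMREAD_GRAYSCALE); // white = keep
    final kept = cv.bitwiseAND(img, img, mask: mask); // zero out everything outside the mask
    final inverted = cv.bitwiseNOT(mask);             // flip every bit of the mask
    cv.imwrite('scene_masked.png', kept);
    cv.imwrite('mask_inv.png', inverted);
  }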
blobFromImage(InputArray image, {double scalefactor = 1.0, (int, int) size = (0, 0), Scalar? mean, bool swapRB = false, bool crop = false, int ddepth = MatType.CV_32F}) Mat
Creates a 4-dimensional blob from an image. Optionally resizes and crops the image from the center, subtracts mean values, scales values by scalefactor, and swaps Blue and Red channels.
blobFromImageAsync(InputArray image, {double scalefactor = 1.0, (int, int) size = (0, 0), Scalar? mean, bool swapRB = false, bool crop = false, int ddepth = MatType.CV_32F}) Future<Mat>
blobFromImages(VecMat images, {Mat? blob, double scalefactor = 1.0, (int, int) size = (0, 0), Scalar? mean, bool swapRB = false, bool crop = false, int ddepth = MatType.CV_32F}) Mat
Creates a 4-dimensional blob from a series of images. Optionally resizes and crops the images from the center, subtracts mean values, scales values by scalefactor, and swaps Blue and Red channels. https://docs.opencv.org/4.x/d6/d0f/group__dnn.html#ga0b7b7c3c530b747ef738178835e1e70f
blobFromImagesAsync(VecMat images, {Mat? blob, double scalefactor = 1.0, (int, int) size = (0, 0), Scalar? mean, bool swapRB = false, bool crop = false, int ddepth = MatType.CV_32F}) Future<Mat>
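A sketch of preparing a DNN input blob with blobFromImage, using the signature above; imread, the Scalar constructor, and the model's expected input size and mean are assumptions, and feeding the blob to a cv.Net (setInput/forward) is only indicated in a comment because those members are not listed on this page:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final img = cv.imread('cat.jpg');
    // 4-D NCHW blob: resized to 224x224, mean-subtracted (placeholder values), BGR -> RGB.
    final blob = cv.blobFromImage(
      img,
      scalefactor: 1.0,
      size: (224, 224),
      mean: cv.Scalar(104, 117, 123, 0),
      swapRB: true,
      crop: false,
    );
    // A loaded cv.Net would then consume it, e.g. net.setInput(blob); net.forward();
  }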
blur(Mat src, (int, int) ksize, {Mat? dst}) Mat
Blur blurs an image Mat using a normalized box filter.
blurAsync(Mat src, (int, int) ksize, {Mat? dst}) Future<Mat>
Blur blurs an image Mat using a normalized box filter.
borderInterpolate(int p, int len, int borderType) int
BorderInterpolate computes the source location of an extrapolated pixel.
borderInterpolateAsync(int p, int len, int borderType) Future<int>
BorderInterpolate computes the source location of an extrapolated pixel.
boundingRect(VecPoint points) Rect
BoundingRect calculates the up-right bounding rectangle of a point set.
boundingRectAsync(VecPoint points) Future<Rect>
BoundingRect calculates the up-right bounding rectangle of a point set.
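A sketch of boundingRect over a hand-built point set; VecPoint.fromList and the points are assumptions:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final pts = cv.VecPoint.fromList(
        [cv.Point(12, 30), cv.Point(80, 12), cv.Point(55, 95)]);
    final box = cv.boundingRect(pts); // up-right rectangle covering every point
    print('x=${box.x} y=${box.y} w=${box.width} h=${box.height}');
  }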
boxFilter(Mat src, int depth, (int, int) ksize, {Point? anchor, bool normalize = true, int borderType = BORDER_DEFAULT, Mat? dst}) Mat
BoxFilter blurs an image using the box filter.
boxFilterAsync(Mat src, int depth, (int, int) ksize, {Point? anchor, bool normalize = true, int borderType = BORDER_DEFAULT, Mat? dst}) Future<Mat>
BoxFilter blurs an image using the box filter.
boxPoints(RotatedRect rect, {VecPoint2f? pts}) VecPoint2f
BoxPoints finds the four vertices of a rotated rect. Useful to draw the rotated rectangle.
boxPointsAsync(RotatedRect rect, {VecPoint2f? pts}) Future<VecPoint2f>
BoxPoints finds the four vertices of a rotated rect. Useful to draw the rotated rectangle.
calcBackProject(VecMat src, VecI32 channels, Mat hist, VecF32 ranges, {Mat? dst, double scale = 1.0}) Mat
CalcBackProject calculates the back projection of a histogram.
calcBackProjectAsync(VecMat src, VecI32 channels, Mat hist, VecF32 ranges, {Mat? dst, double scale = 1.0}) Future<Mat>
CalcBackProject calculates the back projection of a histogram.
calcCovarMatrix(InputArray samples, InputOutputArray mean, int flags, {OutputArray? covar, int ctype = MatType.CV_64F}) → (Mat, Mat)
CalcCovarMatrix calculates the covariance matrix of a set of vectors.
calcCovarMatrixAsync(InputArray samples, InputOutputArray mean, int flags, {OutputArray? covar, int ctype = MatType.CV_64F}) Future<(Mat, Mat)>
CalcCovarMatrix calculates the covariance matrix of a set of vectors.
calcHist(VecMat src, VecI32 channels, Mat mask, VecI32 histSize, VecF32 ranges, {Mat? hist, bool accumulate = false}) Mat
CalcHist calculates a histogram of a set of images.
calcHistAsync(VecMat src, VecI32 channels, Mat mask, VecI32 histSize, VecF32 ranges, {Mat? hist, bool accumulate = false}) Future<Mat>
CalcHist calculates a histogram of a set of images.
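A sketch of a 256-bin grayscale histogram with calcHist and a similarity score with compareHist; imread, Mat.empty, the Vec*.fromList constructors, and the HISTCMP_CORREL choice are assumptions:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  cv.Mat grayHist(cv.Mat gray) => cv.calcHist(
        cv.VecMat.fromList([gray]),       // one image
        cv.VecI32.fromList([0]),          // channel 0
        cv.Mat.empty(),                   // no mask
        cv.VecI32.fromList([256]),        // 256 bins
        cv.VecF32.fromList([0.0, 256.0]), // value range
      );

  void main() {
    final h1 = grayHist(cv.imread('a.png', flags: cv.IMREAD_GRAYSCALE));
    final h2 = grayHist(cv.imread('b.png', flags: cv.IMREAD_GRAYSCALE));
    // Correlation method: 1.0 means identical histograms.
    print(cv.compareHist(h1, h2, method: cv.HISTCMP_CORREL));
  }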
calcOpticalFlowFarneback(InputArray prev, InputArray next, InputOutputArray flow, double pyrScale, int levels, int winsize, int iterations, int polyN, double polySigma, int flags) Mat
CalcOpticalFlowFarneback computes a dense optical flow using Gunnar Farneback's algorithm.
calcOpticalFlowFarnebackAsync(InputArray prev, InputArray next, InputOutputArray flow, double pyrScale, int levels, int winsize, int iterations, int polyN, double polySigma, int flags) Future<Mat>
CalcOpticalFlowFarneback computes a dense optical flow using Gunnar Farneback's algorithm.
calcOpticalFlowPyrLK(InputArray prevImg, InputArray nextImg, VecPoint2f prevPts, VecPoint2f nextPts, {VecUChar? status, VecF32? err, (int, int) winSize = (21, 21), int maxLevel = 3, (int, int, double) criteria = (TERM_COUNT + TERM_EPS, 30, 1e-4), int flags = 0, double minEigThreshold = 1e-4}) → (VecPoint2f, VecUChar?, VecF32?)
CalcOpticalFlowPyrLK calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.
calcOpticalFlowPyrLKAsync(InputArray prevImg, InputArray nextImg, VecPoint2f prevPts, VecPoint2f nextPts, {VecUChar? status, VecF32? err, (int, int) winSize = (21, 21), int maxLevel = 3, (int, int, double) criteria = (TERM_COUNT + TERM_EPS, 30, 1e-4), int flags = 0, double minEigThreshold = 1e-4}) Future<(VecPoint2f, VecUChar, VecF32)>
CalcOpticalFlowPyrLK calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.
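A sparse-flow sketch with calcOpticalFlowPyrLK, destructuring the (VecPoint2f, VecUChar?, VecF32?) record from the signature above; imread, VecPoint2f.fromList, and the hand-picked points are assumptions (in practice the points would come from a corner detector):

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final prev = cv.imread('frame_000.png', flags: cv.IMREAD_GRAYSCALE);
    final next = cv.imread('frame_001.png', flags: cv.IMREAD_GRAYSCALE);
    final prevPts = cv.VecPoint2f.fromList(
        [cv.Point2f(100.0, 120.0), cv.Point2f(200.0, 80.0)]);
    final nextPts = cv.VecPoint2f.fromList([]);
    final (tracked, status, err) =
        cv.calcOpticalFlowPyrLK(prev, next, prevPts, nextPts);
    // status[i] != 0 marks points successfully tracked into the next frame.
    print('tracked ${tracked.length} points, status entries: ${status?.length}');
  }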
calibrateCamera(Contours3f objectPoints, Contours2f imagePoints, (int, int) imageSize, InputOutputArray cameraMatrix, InputOutputArray distCoeffs, {Mat? rvecs, Mat? tvecs, int flags = 0, (int, int, double) criteria = (TERM_COUNT + TERM_EPS, 30, 1e-4)}) → (double, Mat, Mat, Mat, Mat)
calibrateCameraAsync(Contours3f objectPoints, Contours2f imagePoints, (int, int) imageSize, InputOutputArray cameraMatrix, InputOutputArray distCoeffs, {Mat? rvecs, Mat? tvecs, int flags = 0, (int, int, double) criteria = (TERM_COUNT + TERM_EPS, 30, 1e-4)}) Future<(double, Mat, Mat, Mat, Mat)>
canny(Mat image, double threshold1, double threshold2, {OutputArray? edges, int apertureSize = 3, bool l2gradient = false}) Mat
Canny finds edges in an image using the Canny algorithm. The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector
cannyAsync(Mat image, double threshold1, double threshold2, {OutputArray? edges, int apertureSize = 3, bool l2gradient = false}) Future<Mat>
Canny finds edges in an image using the Canny algorithm. The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector
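An edge-detection sketch with canny; imread/imwrite, the paths, and the thresholds are assumptions:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final gray = cv.imread('building.png', flags: cv.IMREAD_GRAYSCALE);
    // 50/150 hysteresis: 150 seeds strong edges, 50 extends them.
    final edges = cv.canny(gray, 50, 150);
    cv.imwrite('building_edges.png', edges);
  }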
cartToPolar(InputArray x, InputArray y, {OutputArray? magnitude, OutputArray? angle, bool angleInDegrees = false}) → (Mat, Mat)
CartToPolar calculates the magnitude and angle of 2D vectors.
cartToPolarAsync(InputArray x, InputArray y, {OutputArray? magnitude, OutputArray? angle, bool angleInDegrees = false}) Future<(Mat, Mat)>
CartToPolar calculates the magnitude and angle of 2D vectors.
checkRange(InputArray a, {bool quiet = true, double minVal = -CV_F64_MAX, double maxVal = CV_F64_MAX}) → (bool, Point)
CheckRange checks every element of an input array for invalid values.
checkRangeAsync(InputArray a, {bool quiet = true, double minVal = -CV_F64_MAX, double maxVal = CV_F64_MAX}) Future<(bool, Point)>
CheckRange checks every element of an input array for invalid values.
circle(InputOutputArray img, Point center, int radius, Scalar color, {int thickness = 1, int lineType = LINE_8, int shift = 0}) Mat
Circle draws a circle.
circleAsync(InputOutputArray img, Point center, int radius, Scalar color, {int thickness = 1, int lineType = LINE_8, int shift = 0}) Future<Mat>
Circle draws a circle.
clipLine(Rect imgRect, Point pt1, Point pt2) → (bool, Point, Point)
ClipLine clips the line against the image rectangle. For further details, please see: https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#gaf483cb46ad6b049bc35ec67052ef1c2c
clipLineAsync(Rect imgRect, Point pt1, Point pt2) Future<(bool, Point, Point)>
ClipLine clips the line against the image rectangle. For further details, please see: https://docs.opencv.org/master/d6/d6e/group__imgproc__draw.html#gaf483cb46ad6b049bc35ec67052ef1c2c
colorChange(InputArray src, InputArray mask, {double redMul = 1.0, double greenMul = 1.0, double blueMul = 1.0}) Mat
ColorChange mixes two differently colored versions of an image seamlessly. For further details, please see: https://docs.opencv.org/master/df/da0/group__photo__clone.html#ga6684f35dc669ff6196a7c340dc73b98e
colorChangeAsync(InputArray src, InputArray mask, {double redMul = 1.0, double greenMul = 1.0, double blueMul = 1.0}) Future<Mat>
compare(InputArray src1, InputArray src2, int cmpop, {OutputArray? dst}) Mat
Compare performs the per-element comparison of two arrays or an array and scalar value.
compareAsync(InputArray src1, InputArray src2, int cmpop, {OutputArray? dst}) Future<Mat>
Compare performs the per-element comparison of two arrays or an array and scalar value.
compareHist(Mat hist1, Mat hist2, {int method = 0}) double
CompareHist compares two histograms using the given comparison method (HistCompMethods). For further details, please see: https://docs.opencv.org/master/d6/dc7/group__imgproc__hist.html#gaf4190090efa5c47cb367cf97a9a519bd
compareHistAsync(Mat hist1, Mat hist2, {int method = 0}) Future<double>
CompareHist compares two histograms using the given comparison method (HistCompMethods). For further details, please see: https://docs.opencv.org/master/d6/dc7/group__imgproc__hist.html#gaf4190090efa5c47cb367cf97a9a519bd
completeSymm(InputOutputArray m, {bool lowerToUpper = false}) Mat
CompleteSymm copies the lower or the upper half of a square matrix to its other half.
completeSymmAsync(InputOutputArray m, {bool lowerToUpper = false}) Future<Mat>
CompleteSymm copies the lower or the upper half of a square matrix to its other half.
connectedComponents(Mat image, Mat labels, int connectivity, int ltype, int ccltype) int
ConnectedComponents computes the connected components labeled image of boolean image.
connectedComponentsAsync(Mat image, OutputArray labels, int connectivity, int ltype, int ccltype) Future<int>
ConnectedComponents computes the connected components labeled image of boolean image.
connectedComponentsWithStats(Mat src, Mat labels, Mat stats, Mat centroids, int connectivity, int ltype, int ccltype) int
ConnectedComponentsWithStats computes the connected components labeled image of boolean image and also produces a statistics output for each label.
connectedComponentsWithStatsAsync(Mat src, Mat labels, Mat stats, Mat centroids, int connectivity, int ltype, int ccltype) Future<int>
ConnectedComponentsWithStats computes the connected components labeled image of boolean image and also produces a statistics output for each label.
contourArea(VecPoint contour) double
ContourArea calculates a contour area.
contourAreaAsync(VecPoint contour) Future<double>
ContourArea calculates a contour area.
convertScaleAbs(InputArray src, {OutputArray? dst, double alpha = 1, double beta = 0}) Mat
ConvertScaleAbs scales, calculates absolute values, and converts the result to 8-bit.
convertScaleAbsAsync(InputArray src, {OutputArray? dst, double alpha = 1, double beta = 0}) Future<Mat>
ConvertScaleAbs scales, calculates absolute values, and converts the result to 8-bit.
convexHull(VecPoint points, {Mat? hull, bool clockwise = false, bool returnPoints = true}) Mat
ConvexHull finds the convex hull of a point set.
convexHullAsync(VecPoint points, {Mat? hull, bool clockwise = false, bool returnPoints = true}) Future<Mat>
ConvexHull finds the convex hull of a point set.
convexityDefects(VecPoint contour, Mat hull, {Mat? convexityDefects}) Mat
ConvexityDefects finds the convexity defects of a contour.
convexityDefectsAsync(VecPoint contour, Mat hull, {Mat? convexityDefects}) Future<Mat>
ConvexityDefects finds the convexity defects of a contour.
copyMakeBorder(InputArray src, int top, int bottom, int left, int right, int borderType, {OutputArray? dst, Scalar? value}) Mat
CopyMakeBorder forms a border around an image (applies padding).
copyMakeBorderAsync(InputArray src, int top, int bottom, int left, int right, int borderType, {OutputArray? dst, Scalar? value}) Future<Mat>
CopyMakeBorder forms a border around an image (applies padding).
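A minimal padding sketch (assuming the library is imported as `cv`; BORDER_CONSTANT is the same constant used as a default elsewhere on this page):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final img = cv.imread('input.jpg');
  // Add a 10-pixel border on every side; `value` is left unset, so the
  // default border value is used.
  final padded = cv.copyMakeBorder(img, 10, 10, 10, 10, cv.BORDER_CONSTANT);
  cv.imwrite('padded.png', padded);
}
```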
copyTo(InputArray src, InputArray dst, {InputArray? mask}) Mat
CopyTo copies src into dst; when a mask is given, only the elements under the mask are copied.
copyToAsync(InputArray src, InputArray dst, {InputArray? mask}) Future<Mat>
CopyTo copies src into dst; when a mask is given, only the elements under the mask are copied.
cornerSubPix(InputArray image, VecPoint2f corners, (int, int) winSize, (int, int) zeroZone, [(int, int, double) criteria = (TERM_COUNT + TERM_EPS, 30, 1e-4)]) VecPoint2f
CornerSubPix Refines the corner locations. The function iterates to find the sub-pixel accurate location of corners or radial saddle points.
cornerSubPixAsync(InputArray image, VecPoint2f corners, (int, int) winSize, (int, int) zeroZone, [(int, int, double) criteria = (TERM_COUNT + TERM_EPS, 30, 1e-4)]) Future<VecPoint2f>
CornerSubPix Refines the corner locations. The function iterates to find the sub-pixel accurate location of corners or radial saddle points.
countNonZero(Mat src) int
CountNonZero counts non-zero array elements.
createBackgroundSubtractorMOG2({int history = 500, double varThreshold = 16, bool detectShadows = true}) BackgroundSubtractorMOG2
CreateBackgroundSubtractorMOG2 returns a new BackgroundSubtractor algorithm of type MOG2. MOG2 is a Gaussian Mixture-based Background/Foreground Segmentation Algorithm.
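A minimal background-subtraction sketch (assuming the library is imported as `cv` and that the returned BackgroundSubtractorMOG2 exposes an apply(Mat) method returning the foreground mask; see the class documentation):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final mog2 = cv.createBackgroundSubtractorMOG2(history: 300, detectShadows: false);
  // In practice, feed successive video frames; a single frame is shown here
  // only to illustrate the call.
  final frame = cv.imread('frame_0001.png');
  final fgMask = mog2.apply(frame); // apply() is an assumed method name
  cv.imwrite('mask.png', fgMask);
}
```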
createCLAHE({double clipLimit = 40, (int, int) tileGridSize = (8, 8)}) CLAHE
createTrackbar(String trackbarName, String winName, int maxval, {Dartcv_TrackbarCallbackFunction? onChange}) → void
currentUIFramework() String
cvAssert(bool condition, [String? msg]) → void
cvRun(Pointer<CvStatus> func()) → void
cvRunArena<R>(R computation(Arena arena), [Allocator wrappedAllocator = calloc, bool keep = false]) → R
cvRunAsync0<T>(Pointer<CvStatus> func(CvCallback_0 callback), void onComplete(Completer<T> completer)) Future<T>
cvRunAsync1<T>(Pointer<CvStatus> func(CvCallback_1 callback), void onComplete(Completer<T> completer, VoidPtr p)) Future<T>
cvRunAsync2<T>(Pointer<CvStatus> func(CvCallback_2 callback), void onComplete(Completer<T> completer, VoidPtr p, VoidPtr p1)) Future<T>
cvRunAsync3<T>(Pointer<CvStatus> func(CvCallback_3 callback), void onComplete(Completer<T> completer, VoidPtr p, VoidPtr p1, VoidPtr p2)) Future<T>
cvRunAsync4<T>(Pointer<CvStatus> func(CvCallback_4 callback), void onComplete(Completer<T> completer, VoidPtr p, VoidPtr p1, VoidPtr p2, VoidPtr p3)) Future<T>
cvRunAsync5<T>(Pointer<CvStatus> func(CvCallback_5 callback), void onComplete(Completer<T> completer, VoidPtr p, VoidPtr p1, VoidPtr p2, VoidPtr p3, VoidPtr p4)) Future<T>
cvtColor(Mat src, int code, {Mat? dst}) Mat
CvtColor converts an image from one color space to another. It converts the src Mat image to the dst Mat using the code param containing the desired ColorConversionCode color space.
cvtColorAsync(Mat src, int code, {Mat? dst}) Future<Mat>
CvtColor converts an image from one color space to another. It converts the src Mat image to the dst Mat using the code param containing the desired ColorConversionCode color space.
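A minimal color-conversion sketch (assuming the library is imported as `cv` and that COLOR_BGR2GRAY is defined under its standard OpenCV name):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final img = cv.imread('input.jpg'); // loaded as BGR by default
  final gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY);
  cv.imwrite('gray.png', gray);
}
```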
dct(InputArray src, {OutputArray? dst, int flags = 0}) Mat
DCT performs a forward or inverse discrete Cosine transform of 1D or 2D array.
dctAsync(InputArray src, {OutputArray? dst, int flags = 0}) Future<Mat>
DCT performs a forward or inverse discrete Cosine transform of 1D or 2D array.
destroyAllWindows() → void
destroy all windows.
destroyWindow(String winName) → void
detailEnhance(InputArray src, {double sigmaS = 10, double sigmaR = 0.15}) Mat
DetailEnhance filter enhances the details of a particular image. For further details, please see: https://docs.opencv.org/4.x/df/dac/group__photo__render.html#ga0de660cb6f371a464a74c7b651415975
detailEnhanceAsync(InputArray src, {double sigmaS = 10, double sigmaR = 0.15}) Future<Mat>
determinant(InputArray mtx) double
Determinant returns the determinant of a square floating-point matrix.
determinantAsync(InputArray mtx) Future<double>
Determinant returns the determinant of a square floating-point matrix.
dft(InputArray src, {OutputArray? dst, int flags = 0, int nonzeroRows = 0}) Mat
DFT performs a forward or inverse Discrete Fourier Transform (DFT) of a 1D or 2D floating-point array.
dftAsync(InputArray src, {OutputArray? dst, int flags = 0, int nonzeroRows = 0}) Future<Mat>
DFT performs a forward or inverse Discrete Fourier Transform (DFT) of a 1D or 2D floating-point array.
dilate(Mat src, Mat kernel, {Mat? dst, Point? anchor, int iterations = 1, int borderType = BORDER_CONSTANT, Scalar? borderValue}) Mat
Dilate dilates an image by using a specific structuring element.
dilateAsync(Mat src, Mat kernel, {Mat? dst, Point? anchor, int iterations = 1, int borderType = BORDER_CONSTANT, Scalar? borderValue}) Future<Mat>
Dilate dilates an image by using a specific structuring element.
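A minimal dilation sketch using getStructuringElement, documented further down this page (assuming the library is imported as `cv` and that MORPH_RECT carries its standard OpenCV name):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final mask = cv.imread('mask.png', flags: cv.IMREAD_GRAYSCALE);
  // 3x3 rectangular structuring element.
  final kernel = cv.getStructuringElement(cv.MORPH_RECT, (3, 3));
  // Two iterations grow the white regions twice with the same kernel.
  final dilated = cv.dilate(mask, kernel, iterations: 2);
  cv.imwrite('dilated.png', dilated);
}
```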
distanceTransform(Mat src, int distanceType, int maskSize, int labelType, {Mat? dst, Mat? labels}) → (Mat, Mat)
DistanceTransform Calculates the distance to the closest zero pixel for each pixel of the source image.
distanceTransformAsync(Mat src, int distanceType, int maskSize, int labelType, {Mat? dst, Mat? labels}) Future<(Mat, Mat)>
DistanceTransform Calculates the distance to the closest zero pixel for each pixel of the source image.
divide(InputArray src1, InputArray src2, {OutputArray? dst, double scale = 1, int dtype = -1}) Mat
Divide performs the per-element division on two arrays or an array and a scalar.
divideAsync(InputArray src1, InputArray src2, {OutputArray? dst, double scale = 1, int dtype = -1}) Future<Mat>
Divide performs the per-element division on two arrays or an array and a scalar.
drawChessboardCorners(InputOutputArray image, (int, int) patternSize, VecPoint2f corners, bool patternWasFound) Mat
drawChessboardCornersAsync(InputOutputArray image, (int, int) patternSize, VecPoint2f corners, bool patternWasFound) Future<Mat>
drawContours(InputOutputArray image, Contours contours, int contourIdx, Scalar color, {int thickness = 1, int lineType = LINE_8, InputArray? hierarchy, int maxLevel = 0x3f3f3f3f, Point? offset}) Mat
DrawContours draws contours outlines or filled contours.
drawContoursAsync(InputOutputArray image, Contours contours, int contourIdx, Scalar color, {int thickness = 1, int lineType = LINE_8, InputArray? hierarchy, int maxLevel = 0x3f3f3f3f, Point? offset}) Future<Mat>
DrawContours draws contours outlines or filled contours.
drawKeyPoints(Mat src, VecKeyPoint keypoints, Mat dst, Scalar color, DrawMatchesFlag flag) → void
drawKeyPointsAsync(Mat src, VecKeyPoint keypoints, Mat dst, Scalar color, DrawMatchesFlag flag) Future<void>
drawMatches(InputArray img1, VecKeyPoint keypoints1, InputArray img2, VecKeyPoint keypoints2, VecDMatch matches1to2, InputOutputArray outImg, {Scalar? matchColor, Scalar? singlePointColor, VecChar? matchesMask, DrawMatchesFlag flags = DrawMatchesFlag.DEFAULT}) → void
DrawMatches draws matches on the combined train and query images.
drawMatchesAsync(InputArray img1, VecKeyPoint keypoints1, InputArray img2, VecKeyPoint keypoints2, VecDMatch matches1to2, InputOutputArray outImg, {Scalar? matchColor, Scalar? singlePointColor, VecChar? matchesMask, DrawMatchesFlag flags = DrawMatchesFlag.DEFAULT}) Future<void>
DrawMatches draws matches on the combined train and query images.
edgePreservingFilter(InputArray src, {int flags = 1, double sigmaS = 60, double sigmaR = 0.4}) Mat
EdgePreservingFilter performs edge-preserving smoothing, a fundamental operation in image and video processing used in many different applications. For further details, please see: https://docs.opencv.org/4.x/df/dac/group__photo__render.html#gafaee2977597029bc8e35da6e67bd31f7
edgePreservingFilterAsync(InputArray src, {int flags = 1, double sigmaS = 60, double sigmaR = 0.4}) Future<Mat>
eigen(InputArray src, {OutputArray? eigenvalues, OutputArray? eigenvectors}) → (bool, Mat, Mat)
Eigen calculates eigenvalues and eigenvectors of a symmetric matrix.
eigenAsync(InputArray src, {OutputArray? eigenvalues, OutputArray? eigenvectors}) Future<(bool, Mat, Mat)>
Eigen calculates eigenvalues and eigenvectors of a symmetric matrix.
eigenNonSymmetric(InputArray src, {OutputArray? eigenvalues, OutputArray? eigenvectors}) → (Mat, Mat)
EigenNonSymmetric calculates eigenvalues and eigenvectors of a non-symmetric matrix (real eigenvalues only).
eigenNonSymmetricAsync(InputArray src, {OutputArray? eigenvalues, OutputArray? eigenvectors}) Future<(Mat, Mat)>
EigenNonSymmetric calculates eigenvalues and eigenvectors of a non-symmetric matrix (real eigenvalues only).
ellipse(InputOutputArray img, Point center, Point axes, double angle, double startAngle, double endAngle, Scalar color, {int thickness = 1, int lineType = LINE_8, int shift = 0}) Mat
Ellipse draws a simple or thick elliptic arc or fills an ellipse sector.
ellipseAsync(InputOutputArray img, Point center, Point axes, double angle, double startAngle, double endAngle, Scalar color, {int thickness = 1, int lineType = LINE_8, int shift = 0}) Future<Mat>
Ellipse draws a simple or thick elliptic arc or fills an ellipse sector.
enableModelDiagnostics(bool isDiagnosticsMode) → void
equalizeHist(Mat src, {Mat? dst}) Mat
EqualizeHist Equalizes the histogram of a grayscale image.
equalizeHistAsync(Mat src, {Mat? dst}) Future<Mat>
EqualizeHist Equalizes the histogram of a grayscale image.
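A minimal sketch (assuming the library is imported as `cv`; histogram equalization expects a single-channel grayscale image):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final gray = cv.imread('input.jpg', flags: cv.IMREAD_GRAYSCALE);
  final equalized = cv.equalizeHist(gray);
  cv.imwrite('equalized.png', equalized);
}
```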
erode(Mat src, Mat kernel, {Mat? dst, Point? anchor, int iterations = 1, int borderType = BORDER_CONSTANT, Scalar? borderValue}) Mat
Erode erodes an image by using a specific structuring element.
erodeAsync(Mat src, Mat kernel, {Mat? dst, Point? anchor, int iterations = 1, int borderType = BORDER_CONSTANT, Scalar? borderValue}) Future<Mat>
Erode erodes an image by using a specific structuring element.
estimateAffine2D(VecPoint2f from, VecPoint2f to, {int method = RANSAC, double ransacReprojThreshold = 3, int maxIters = 2000, double confidence = 0.99, int refineIters = 10, OutputArray? inliers}) → (Mat, Mat)
estimateAffine2DAsync(VecPoint2f from, VecPoint2f to, {int method = RANSAC, double ransacReprojThreshold = 3, int maxIters = 2000, double confidence = 0.99, int refineIters = 10, OutputArray? inliers}) Future<(Mat, Mat)>
estimateAffinePartial2D(VecPoint2f from, VecPoint2f to, {int method = RANSAC, double ransacReprojThreshold = 3, int maxIters = 2000, double confidence = 0.99, int refineIters = 10, OutputArray? inliers}) → (Mat, Mat)
estimateAffinePartial2DAsync(VecPoint2f from, VecPoint2f to, {int method = RANSAC, double ransacReprojThreshold = 3, int maxIters = 2000, double confidence = 0.99, int refineIters = 10, OutputArray? inliers}) Future<(Mat, Mat)>
exp(InputArray src, {OutputArray? dst}) Mat
Exp calculates the exponent of every array element.
expAsync(InputArray src, {OutputArray? dst}) Future<Mat>
Exp calculates the exponent of every array element.
extractChannel(InputArray src, int coi, {OutputArray? dst}) Mat
ExtractChannel extracts a single channel from src (coi is 0-based index).
extractChannelAsync(InputArray src, int coi, {OutputArray? dst}) Future<Mat>
ExtractChannel extracts a single channel from src (coi is 0-based index).
fastNlMeansDenoising(InputArray src, {double h = 3, int templateWindowSize = 7, int searchWindowSize = 21}) Mat
FastNlMeansDenoising performs image denoising using Non-local Means Denoising algorithm http://www.ipol.im/pub/algo/bcm_non_local_means_denoising/ For further details, please see: https://docs.opencv.org/4.x/d1/d79/group__photo__denoise.html#ga4c6b0031f56ea3f98f768881279ffe93
fastNlMeansDenoisingAsync(InputArray src, {double h = 3, int templateWindowSize = 7, int searchWindowSize = 21}) Future<Mat>
fastNlMeansDenoisingColored(InputArray src, {double h = 3, double hColor = 3, int templateWindowSize = 7, int searchWindowSize = 21}) Mat
FastNlMeansDenoisingColored is a modification of fastNlMeansDenoising function for colored images. For further details, please see: https://docs.opencv.org/4.x/d1/d79/group__photo__denoise.html#ga21abc1c8b0e15f78cd3eff672cb6c476
fastNlMeansDenoisingColoredAsync(InputArray src, {double h = 3, double hColor = 3, int templateWindowSize = 7, int searchWindowSize = 21}) Future<Mat>
fastNlMeansDenoisingColoredMulti(VecMat srcImgs, int imgToDenoiseIndex, int temporalWindowSize, {double h = 3, double hColor = 3, int templateWindowSize = 7, int searchWindowSize = 21}) Mat
FastNlMeansDenoisingColoredMulti denoises the selected images. For further details, please see: https://docs.opencv.org/master/d1/d79/group__photo__denoise.html#gaa501e71f52fb2dc17ff8ca5e7d2d3619
fastNlMeansDenoisingColoredMultiAsync(VecMat srcImgs, int imgToDenoiseIndex, int temporalWindowSize, {double h = 3, double hColor = 3, int templateWindowSize = 7, int searchWindowSize = 21}) Future<Mat>
fillPoly(InputOutputArray img, VecVecPoint pts, Scalar color, {int lineType = LINE_8, int shift = 0, Point? offset}) Mat
FillPoly fills the area bounded by one or more polygons.
fillPolyAsync(InputOutputArray img, VecVecPoint pts, Scalar color, {int lineType = LINE_8, int shift = 0, Point? offset}) Future<Mat>
FillPoly fills the area bounded by one or more polygons.
filter2D(InputArray src, int ddepth, InputArray kernel, {OutputArray? dst, Point? anchor, double delta = 0, int borderType = BORDER_DEFAULT}) Mat
Filter2D applies an arbitrary linear filter to an image.
filter2DAsync(InputArray src, int ddepth, InputArray kernel, {OutputArray? dst, Point? anchor, double delta = 0, int borderType = BORDER_DEFAULT}) Future<Mat>
Filter2D applies an arbitrary linear filter to an image.
findChessboardCorners(InputArray image, (int, int) patternSize, {VecPoint2f? corners, int flags = CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE}) → (bool, VecPoint2f)
findChessboardCornersAsync(InputArray image, (int, int) patternSize, {VecPoint2f? corners, int flags = CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE}) Future<(bool, VecPoint2f)>
findChessboardCornersSB(InputArray image, (int, int) patternSize, {int flags = 0, VecPoint2f? corners}) → (bool, VecPoint2f)
findChessboardCornersSBAsync(InputArray image, (int, int) patternSize, int flags, {VecPoint2f? corners}) Future<(bool, VecPoint2f)>
findChessboardCornersSBWithMeta(InputArray image, (int, int) patternSize, int flags, {VecPoint2f? corners, OutputArray? meta}) → (bool, VecPoint2f, Mat)
findChessboardCornersSBWithMetaAsync(InputArray image, (int, int) patternSize, int flags, {VecPoint2f? corners, OutputArray? meta}) Future<(bool, VecPoint2f, Mat)>
findContours(Mat src, int mode, int method) → (Contours, Mat)
FindContours finds contours in a binary image.
findContoursAsync(Mat src, int mode, int method) Future<(Contours, Mat)>
FindContours finds contours in a binary image.
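A minimal sketch that finds external contours in a binary mask and reports the area of each with contourArea (assuming the library is imported as `cv`, that RETR_EXTERNAL and CHAIN_APPROX_SIMPLE carry their standard OpenCV names, and that the returned Contours collection is iterable):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  // findContours expects a binary (single-channel) image.
  final mask = cv.imread('mask.png', flags: cv.IMREAD_GRAYSCALE);
  final (contours, hierarchy) =
      cv.findContours(mask, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE);
  for (final contour in contours) {
    print('contour area: ${cv.contourArea(contour)}');
  }
}
```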
findHomography(InputArray srcPoints, InputArray dstPoints, {int method = 0, double ransacReprojThreshold = 3, OutputArray? mask, int maxIters = 2000, double confidence = 0.995}) Mat
FindHomography finds an optimal homography matrix using 4 or more point pairs (as opposed to GetPerspectiveTransform, which uses exactly 4).
findHomographyAsync(InputArray srcPoints, InputArray dstPoints, {int method = 0, double ransacReprojThreshold = 3, OutputArray? mask, int maxIters = 2000, double confidence = 0.995}) Future<(Mat, Mat)>
FindHomography finds an optimal homography matrix using 4 or more point pairs (as opposed to GetPerspectiveTransform, which uses exactly 4).
findNonZero(InputArray src, {OutputArray? idx}) Mat
FindNonZero returns the list of locations of non-zero pixels.
findNonZeroAsync(InputArray src, {OutputArray? idx}) Future<Mat>
FindNonZero returns the list of locations of non-zero pixels.
findTransformECC(InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, int motionType, (int, int, double) criteria, InputArray inputMask, int gaussFiltSize) → (double, Mat)
FindTransformECC finds the geometric transform (warp) between two images in terms of the ECC criterion.
findTransformECCAsync(InputArray templateImage, InputArray inputImage, InputOutputArray warpMatrix, int motionType, (int, int, double) criteria, InputArray inputMask, int gaussFiltSize) Future<(double, Mat)>
FindTransformECC finds the geometric transform (warp) between two images in terms of the ECC criterion.
fitEllipse(VecPoint points) RotatedRect
FitEllipse Fits an ellipse around a set of 2D points.
fitEllipseAsync(VecPoint points) Future<RotatedRect>
FitEllipse Fits an ellipse around a set of 2D points.
fitLine(VecPoint points, int distType, double param, double reps, double aeps, {OutputArray? line}) Mat
FitLine fits a line to a 2D or 3D point set. distType is one of DistanceTypes. For further details, please see: https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#gaf849da1fdafa67ee84b1e9a23b93f91f
fitLineAsync(VecPoint points, int distType, double param, double reps, double aeps, {OutputArray? line}) Future<Mat>
FitLine fits a line to a 2D or 3D point set. distType is one of DistanceTypes. For further details, please see: https://docs.opencv.org/master/d3/dc0/group__imgproc__shape.html#gaf849da1fdafa67ee84b1e9a23b93f91f
flip(InputArray src, int flipCode, {OutputArray? dst}) Mat
Flip flips a 2D array around horizontal(0), vertical(1), or both axes(-1).
flipAsync(InputArray src, int flipCode, {OutputArray? dst}) Future<Mat>
Flip flips a 2D array around horizontal(0), vertical(1), or both axes(-1).
flipND(InputArray src, int axis, {OutputArray? dst}) Mat
flipNDAsync(InputArray src, int axis, {OutputArray? dst}) Future<Mat>
float16(int w) double
float16Inv(double x) int
floodFill(InputOutputArray image, Point seedPoint, Scalar newVal, {InputOutputArray? mask, Scalar? loDiff, Scalar? upDiff, int flags = 4}) → (int, Mat, Mat, Rect)
Fills a connected component with the given color.
floodFillAsync(InputOutputArray image, Point seedPoint, Scalar newVal, {InputOutputArray? mask, Scalar? loDiff, Scalar? upDiff, int flags = 4}) Future<(int, Mat, Mat, Rect)>
Fills a connected component with the given color.
gaussianBlur(Mat src, (int, int) ksize, double sigmaX, {Mat? dst, double sigmaY = 0, int borderType = BORDER_DEFAULT}) Mat
GaussianBlur blurs an image Mat using a Gaussian filter. The function convolves the src Mat image into the dst Mat using the specified Gaussian kernel params.
gaussianBlurAsync(Mat src, (int, int) ksize, double sigmaX, {Mat? dst, double sigmaY = 0, int borderType = BORDER_DEFAULT}) Future<Mat>
GaussianBlur blurs an image Mat using a Gaussian filter. The function convolves the src Mat image into the dst Mat using the specified Gaussian kernel params.
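A minimal blur sketch (assuming the library is imported as `cv`):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final img = cv.imread('input.jpg');
  // 5x5 kernel; a sigmaX of 0 lets OpenCV derive sigma from the kernel size.
  final blurred = cv.gaussianBlur(img, (5, 5), 0);
  cv.imwrite('blurred.png', blurred);
}
```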
gemm(InputArray src1, InputArray src2, double alpha, InputArray src3, double beta, {OutputArray? dst, int flags = 0}) Mat
Gemm performs generalized matrix multiplication.
gemmAsync(InputArray src1, InputArray src2, double alpha, InputArray src3, double beta, {OutputArray? dst, int flags = 0}) Future<Mat>
Gemm performs generalized matrix multiplication.
getAffineTransform(VecPoint src, VecPoint dst) Mat
GetAffineTransform returns a 2x3 affine transformation matrix for the corresponding 3 point pairs.
getAffineTransform2f(VecPoint2f src, VecPoint2f dst) Mat
getAffineTransform2fAsync(VecPoint2f src, VecPoint2f dst) Future<Mat>
getAffineTransformAsync(VecPoint src, VecPoint dst) Future<Mat>
GetAffineTransform returns a 2x3 affine transformation matrix for the corresponding 3 point pairs.
getAvailableBackends() List<(int, int)>
getAvailableBackends
getAvailableTargets(int backend) List<int>
getAvailableTargets https://docs.opencv.org/4.x/d6/d0f/group__dnn.html#ga711e5056b6642b33d9480c98c6889f56
getBlobChannel(Mat blob, int imgidx, int chnidx) Mat
GetBlobChannel extracts a single 2D channel from a 4-dimensional blob structure (which might, for example, contain the results of an SSD or YOLO detection).
getBlobChannelAsync(Mat blob, int imgidx, int chnidx) Future<Mat>
getBlobSize(Mat blob) VecI32
GetBlobSize retrieves the 4-dimensional size information in (N, C, H, W) order.
getBuildInformation() String
Returns full configuration time cmake output.
getGaussianKernel(int ksize, double sigma, {int ktype = 6}) Mat
GetGaussianKernel returns Gaussian filter coefficients.
getGaussianKernelAsync(int ksize, double sigma, {int ktype = 6}) Future<Mat>
GetGaussianKernel returns Gaussian filter coefficients.
getLogLevel() int
Gets the global logging level.
getMouseWheelDelta(int flags) int
getNumThreads() int
Get the number of threads for OpenCV.
getOptimalDFTSize(int vecsize) int
GetOptimalDFTSize returns the optimal Discrete Fourier Transform (DFT) size for a given vector size.
getOptimalDFTSizeAsync(int vecsize) Future<int>
GetOptimalDFTSize returns the optimal Discrete Fourier Transform (DFT) size for a given vector size.
getOptimalNewCameraMatrix(InputArray cameraMatrix, InputArray distCoeffs, (int, int) imageSize, double alpha, {(int, int) newImgSize = (0, 0), bool centerPrincipalPoint = false}) → (Mat, Rect)
GetOptimalNewCameraMatrix computes and returns the optimal new camera matrix based on the free scaling parameter.
getOptimalNewCameraMatrixAsync(InputArray cameraMatrix, InputArray distCoeffs, (int, int) imageSize, double alpha, {(int, int) newImgSize = (0, 0), bool centerPrincipalPoint = false}) Future<(Mat, Rect)>
GetOptimalNewCameraMatrix computes and returns the optimal new camera matrix based on the free scaling parameter.
getPerspectiveTransform(VecPoint src, VecPoint dst, [int solveMethod = DECOMP_LU]) Mat
GetPerspectiveTransform returns the 3x3 perspective transformation for the corresponding 4 point pairs.
getPerspectiveTransform2f(VecPoint2f src, VecPoint2f dst, [int solveMethod = DECOMP_LU]) Mat
GetPerspectiveTransform2f returns the 3x3 perspective transformation for the corresponding 4 Point2f point pairs.
getPerspectiveTransform2fAsync(VecPoint2f src, VecPoint2f dst, [int solveMethod = DECOMP_LU]) Future<Mat>
GetPerspectiveTransform2f returns the 3x3 perspective transformation for the corresponding 4 Point2f point pairs.
getPerspectiveTransformAsync(VecPoint src, VecPoint dst, [int solveMethod = DECOMP_LU]) Future<Mat>
GetPerspectiveTransform returns the 3x3 perspective transformation for the corresponding 4 point pairs.
getRectSubPix(InputArray image, (int, int) patchSize, Point2f center, {OutputArray? patch, int patchType = -1}) Mat
GetRectSubPix retrieves a pixel rectangle from an image with sub-pixel accuracy.
getRectSubPixAsync(InputArray image, (int, int) patchSize, Point2f center, {OutputArray? patch, int patchType = -1}) Future<Mat>
GetRectSubPix retrieves a pixel rectangle from an image with sub-pixel accuracy.
getRotationMatrix2D(Point2f center, double angle, double scale) Mat
GetRotationMatrix2D calculates an affine matrix of 2D rotation.
getRotationMatrix2DAsync(Point2f center, double angle, double scale) Future<Mat>
GetRotationMatrix2D calculates an affine matrix of 2D rotation.
getStructuringElement(int shape, (int, int) ksize, {Point? anchor}) Mat
GetStructuringElement returns a structuring element of the specified size and shape for morphological operations.
getStructuringElementAsync(int shape, (int, int) ksize, {Point? anchor}) Future<Mat>
GetStructuringElement returns a structuring element of the specified size and shape for morphological operations.
getTextSize(String text, int fontFace, double fontScale, int thickness) → (Size, int)
GetTextSize calculates the width and height of a text string, including its baseline. It returns the Size required to draw the text using the specified font face, scale, and thickness, together with the baseline offset.
getTextSizeAsync(String text, int fontFace, double fontScale, int thickness) Future<(Size, int)>
GetTextSize calculates the width and height of a text string, including its baseline. It returns the Size required to draw the text using the specified font face, scale, and thickness, together with the baseline offset.
getTickCount() int
GetTickCount returns the number of ticks.
getTickFrequency() double
GetTickFrequency returns the number of ticks per second.
getTrackbarPos(String trackbarName, String winName) int
getWindowImageRect(String winName) Rect
getWindowProperty(String winName, WindowPropertyFlags flag) double
getWindowProperty returns properties of a window.
goodFeaturesToTrack(InputArray image, int maxCorners, double qualityLevel, double minDistance, {VecPoint2f? corners, InputArray? mask, int blockSize = 3, int? gradientSize, bool useHarrisDetector = false, double k = 0.04}) VecPoint2f
GoodFeaturesToTrack determines strong corners on an image. The function finds the most prominent corners in the image or in the specified image region.
goodFeaturesToTrackAsync(InputArray image, int maxCorners, double qualityLevel, double minDistance, {VecPoint2f? corners, InputArray? mask, int blockSize = 3, int? gradientSize, bool useHarrisDetector = false, double k = 0.04}) Future<VecPoint2f>
GoodFeaturesToTrack determines strong corners on an image. The function finds the most prominent corners in the image or in the specified image region.
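A minimal sketch that detects corners and refines them with cornerSubPix (assuming the library is imported as `cv`; TERM_COUNT and TERM_EPS are the same criteria flags used in the defaults throughout this page):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final gray = cv.imread('input.jpg', flags: cv.IMREAD_GRAYSCALE);
  // Up to 100 corners, quality level 0.01, minimum distance 10 px.
  final corners = cv.goodFeaturesToTrack(gray, 100, 0.01, 10);
  final refined = cv.cornerSubPix(
    gray,
    corners,
    (5, 5),   // half side length of the search window
    (-1, -1), // no dead zone in the middle of the window
    (cv.TERM_COUNT + cv.TERM_EPS, 30, 0.01),
  );
  print('refined ${refined.length} corners');
}
```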
grabCut(InputArray img, InputOutputArray mask, Rect rect, InputOutputArray bgdModel, InputOutputArray fgdModel, int iterCount, {int mode = GC_EVAL}) → (Mat, Mat, Mat)
GrabCut runs the GrabCut image segmentation algorithm. For further details, please see: https://docs.opencv.org/master/d3/d47/group__imgproc__segmentation.html#ga909c1dda50efcbeaa3ce126be862b37f
grabCutAsync(InputArray img, InputOutputArray mask, Rect rect, InputOutputArray bgdModel, InputOutputArray fgdModel, int iterCount, {int mode = GC_EVAL}) Future<(Mat, Mat, Mat)>
GrabCut runs the GrabCut image segmentation algorithm. For further details, please see: https://docs.opencv.org/master/d3/d47/group__imgproc__segmentation.html#ga909c1dda50efcbeaa3ce126be862b37f
groupRectangles(VecRect rects, int groupThreshold, double eps) VecRect
groupRectanglesAsync(VecRect rects, int groupThreshold, double eps) Future<VecRect>
hasNonZero(InputArray src) bool
Checks for the presence of at least one non-zero array element.
hconcat(InputArray src1, InputArray src2, {OutputArray? dst}) Mat
Hconcat applies horizontal concatenation to given matrices.
hconcatAsync(InputArray src1, InputArray src2, {OutputArray? dst}) Future<Mat>
Hconcat applies horizontal concatenation to given matrices.
HoughCircles(InputArray image, int method, double dp, double minDist, {OutputArray? circles, double param1 = 100, double param2 = 100, int minRadius = 0, int maxRadius = 0}) Mat
HoughCircles finds circles in a grayscale image using the Hough transform. The only method currently supported is HOUGH_GRADIENT; the remaining parameters are passed through the named arguments.
HoughCirclesAsync(InputArray image, int method, double dp, double minDist, {OutputArray? circles, double param1 = 100, double param2 = 100, int minRadius = 0, int maxRadius = 0}) Future<Mat>
HoughCircles finds circles in a grayscale image using the Hough transform. The only method currently supported is HOUGH_GRADIENT; the remaining parameters are passed through the named arguments.
HoughLines(InputArray image, double rho, double theta, int threshold, {OutputArray? lines, double srn = 0, double stn = 0, double min_theta = 0, double max_theta = CV_PI}) Mat
HoughLines implements the standard or standard multi-scale Hough transform algorithm for line detection. For a good explanation of Hough transform, see: http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm
HoughLinesAsync(InputArray image, double rho, double theta, int threshold, {OutputArray? lines, double srn = 0, double stn = 0, double min_theta = 0, double max_theta = CV_PI}) Future<Mat>
HoughLines implements the standard or standard multi-scale Hough transform algorithm for line detection. For a good explanation of Hough transform, see: http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm
HoughLinesP(InputArray image, double rho, double theta, int threshold, {OutputArray? lines, double minLineLength = 0, double maxLineGap = 0}) Mat
HoughLinesP implements the probabilistic Hough transform algorithm for line detection. For a good explanation of Hough transform, see: http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm
HoughLinesPAsync(InputArray image, double rho, double theta, int threshold, {OutputArray? lines, double minLineLength = 0, double maxLineGap = 0}) Future<Mat>
HoughLinesP implements the probabilistic Hough transform algorithm for line detection. For a good explanation of Hough transform, see: http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm
HoughLinesPointSet(InputArray point, int lines_max, int threshold, double min_rho, double max_rho, double rho_step, double min_theta, double max_theta, double theta_step, {OutputArray? lines}) Mat
HoughLinesPointSet implements the Hough transform algorithm for line detection on a set of points. For a good explanation of Hough transform, see: http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm
HoughLinesPointSetAsync(InputArray point, int lines_max, int threshold, double min_rho, double max_rho, double rho_step, double min_theta, double max_theta, double theta_step, {OutputArray? lines}) Future<Mat>
HoughLinesPointSet implements the Hough transform algorithm for line detection on a set of points. For a good explanation of Hough transform, see: http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm
idct(InputArray src, {OutputArray? dst, int flags = 0}) Mat
IDCT calculates the inverse Discrete Cosine Transform of a 1D or 2D array.
idctAsync(InputArray src, {OutputArray? dst, int flags = 0}) Future<Mat>
IDCT calculates the inverse Discrete Cosine Transform of a 1D or 2D array.
idft(InputArray src, {OutputArray? dst, int flags = 0, int nonzeroRows = 0}) Mat
IDFT calculates the inverse Discrete Fourier Transform of a 1D or 2D array.
idftAsync(InputArray src, {OutputArray? dst, int flags = 0, int nonzeroRows = 0}) Future<Mat>
IDFT calculates the inverse Discrete Fourier Transform of a 1D or 2D array.
illuminationChange(InputArray src, InputArray mask, {double alpha = 0.2, double beta = 0.4}) Mat
IlluminationChange modifies locally the apparent illumination of an image. For further details, please see: https://docs.opencv.org/master/df/da0/group__photo__clone.html#gac5025767cf2febd8029d474278e886c7
illuminationChangeAsync(InputArray src, InputArray mask, {double alpha = 0.2, double beta = 0.4}) Future<Mat>
imagesFromBlob(Mat blob) List<Mat>
ImagesFromBlob parses a 4D blob and outputs the images it contains as 2D arrays (a std::vector<cv::Mat> on the C++ side).
imagesFromBlobAsync(Mat blob) Future<List<Mat>>
imdecode(Uint8List buf, int flags, {Mat? dst}) Mat
imdecode reads an image from a buffer in memory. If the buffer is too short or contains invalid data, the function returns an empty matrix. buf: input array or vector of bytes. flags: the same flags as in cv::imread, see cv::ImreadModes. For further details, please see: https://docs.opencv.org/master/d4/da8/group__imgcodecs.html#ga26a67788faa58ade337f8d28ba0eb19e
imdecodeAsync(Uint8List buf, int flags, {Mat? dst}) Future<Mat>
imencode(String ext, InputArray img, {VecI32? params}) → (bool, Uint8List)
IMEncode encodes an image Mat into a memory buffer. This function compresses the image and stores it in the returned memory buffer, using the image format given by the file extension string ext.
imencodeAsync(String ext, InputArray img, {VecI32? params}) Future<(bool, Uint8List)>
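A minimal in-memory round trip with imencode and imdecode (assuming the library is imported as `cv`):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final img = cv.imread('input.jpg');
  // Encode to PNG bytes; `bytes` is a Uint8List ready for network or storage.
  final (ok, bytes) = cv.imencode('.png', img);
  if (ok) {
    final decoded = cv.imdecode(bytes, cv.IMREAD_COLOR);
    print('decoded ${decoded.rows}x${decoded.cols}'); // rows/cols assumed Mat getters
  }
}
```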
imread(String filename, {int flags = IMREAD_COLOR}) Mat
read an image from a file into a Mat. The flags param is one of the IMReadFlag flags. If the image cannot be read (because of missing file, improper permissions, unsupported or invalid format), the function returns an empty Mat.
imreadAsync(String filename, {int flags = IMREAD_COLOR}) Future<Mat>
imshow(String winName, Mat img) → void
displays an image Mat in the specified window. This function should be followed by the waitKey function, which displays the image for the specified number of milliseconds; otherwise, the image will not be displayed.
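A minimal display sketch of the pattern the description refers to (assuming the library is imported as `cv` and that waitKey is available; it is documented elsewhere in this reference):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final img = cv.imread('input.jpg');
  cv.namedWindow('preview');
  cv.imshow('preview', img);
  cv.waitKey(0); // wait indefinitely for a key press (assumed signature)
  cv.destroyAllWindows();
}
```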
imwrite(String filename, InputArray img, {VecI32? params}) bool
write a Mat to an image file.
imwriteAsync(String filename, InputArray img, {VecI32? params}) Future<bool>
initUndistortRectifyMap(InputArray cameraMatrix, InputArray distCoeffs, InputArray R, InputArray newCameraMatrix, (int, int) size, int m1type, {OutputArray? map1, OutputArray? map2}) → (Mat, Mat)
InitUndistortRectifyMap computes the joint undistortion and rectification transformation and represents the result in the form of maps for remap.
initUndistortRectifyMapAsync(InputArray cameraMatrix, InputArray distCoeffs, InputArray R, InputArray newCameraMatrix, (int, int) size, int m1type, {OutputArray? map1, OutputArray? map2}) Future<(Mat, Mat)>
InitUndistortRectifyMap computes the joint undistortion and rectification transformation and represents the result in the form of maps for remap.
inpaint(InputArray src, InputArray inpaintMask, double inpaintRadius, int flags) Mat
Inpaint reconstructs the selected image area from the pixel near the area boundary. The function may be used to remove dust and scratches from a scanned photo, or to remove undesirable objects from still images or video. For further details, please see: https://docs.opencv.org/4.x/d7/d8b/group__photo__inpaint.html#gaedd30dfa0214fec4c88138b51d678085
inpaintAsync(InputArray src, InputArray inpaintMask, double inpaintRadius, int flags) Future<Mat>
inRange(InputArray src, InputArray lowerb, InputArray upperb, {OutputArray? dst}) Mat
InRange checks if array elements lie between the elements of two Mat arrays.
inRangeAsync(InputArray src, InputArray lowerb, InputArray upperb, {OutputArray? dst}) Future<Mat>
InRange checks if array elements lie between the elements of two Mat arrays.
inRangebyScalar(InputArray src, Scalar lowerb, Scalar upperb, {OutputArray? dst}) Mat
InRangeByScalar checks if array elements lie between the elements of two Scalars.
inRangebyScalarAsync(InputArray src, Scalar lowerb, Scalar upperb, {OutputArray? dst}) Future<Mat>
InRangeByScalar checks if array elements lie between the elements of two Scalars.
insertChannel(InputArray src, InputOutputArray dst, int coi) Mat
InsertChannel inserts a single channel to dst (coi is 0-based index) (it replaces channel i with another in dst).
insertChannelAsync(InputArray src, InputOutputArray dst, int coi) Future<Mat>
InsertChannel inserts a single channel to dst (coi is 0-based index) (it replaces channel i with another in dst).
integral(InputArray src, {OutputArray? sum, OutputArray? sqsum, OutputArray? tilted, int sdepth = -1, int sqdepth = -1}) → (Mat, Mat, Mat)
Integral calculates one or more integral images for the source image. For further details, please see: https://docs.opencv.org/master/d7/d1b/group__imgproc__misc.html#ga97b87bec26908237e8ba0f6e96d23e28
integralAsync(InputArray src, {OutputArray? sum, OutputArray? sqsum, OutputArray? tilted, int sdepth = -1, int sqdepth = -1}) Future<(Mat, Mat, Mat)>
Integral calculates one or more integral images for the source image. For further details, please see: https://docs.opencv.org/master/d7/d1b/group__imgproc__misc.html#ga97b87bec26908237e8ba0f6e96d23e28
invert(InputArray src, {OutputArray? dst, int flags = DECOMP_LU}) → (double, Mat)
Invert finds the inverse or pseudo-inverse of a matrix.
invertAffineTransform(InputArray M, {OutputArray? iM}) Mat
Inverts an affine transformation. The function computes the inverse of an affine transformation represented by a 2×3 matrix M; the result is also a 2×3 matrix of the same type as M.
invertAffineTransformAsync(InputArray M, {OutputArray? iM}) Future<Mat>
Inverts an affine transformation. The function computes the inverse of an affine transformation represented by a 2×3 matrix M; the result is also a 2×3 matrix of the same type as M.
invertAsync(InputArray src, {OutputArray? dst, int flags = DECOMP_LU}) Future<(double, Mat)>
Invert finds the inverse or pseudo-inverse of a matrix.
isWindowOpen(String winName) bool
kmeans(InputArray data, int K, InputOutputArray bestLabels, (int, int, double) criteria, int attempts, int flags, {OutputArray? centers}) → (double, Mat, Mat)
KMeans finds centers of clusters and groups input samples around the clusters.
kmeansAsync(InputArray data, int K, InputOutputArray bestLabels, (int, int, double) criteria, int attempts, int flags, {OutputArray? centers}) Future<(double, Mat, Mat)>
KMeans finds centers of clusters and groups input samples around the clusters.
kmeansByPoints(VecPoint2f pts, int K, InputOutputArray bestLabels, (int, int, double) criteria, int attempts, int flags, {OutputArray? centers}) → (double, Mat, Mat)
KMeansPoints finds centers of clusters and groups input samples around the clusters.
kmeansByPointsAsync(VecPoint2f pts, int K, InputOutputArray bestLabels, (int, int, double) criteria, int attempts, int flags, {OutputArray? centers}) Future<(double, Mat, Mat)>
KMeansPoints finds centers of clusters and groups input samples around the clusters.
laplacian(Mat src, int ddepth, {Mat? dst, int ksize = 1, double scale = 1, double delta = 0, int borderType = BORDER_DEFAULT}) Mat
Laplacian calculates the Laplacian of an image.
laplacianAsync(Mat src, int ddepth, {Mat? dst, int ksize = 1, double scale = 1, double delta = 0, int borderType = BORDER_DEFAULT}) Future<Mat>
Laplacian calculates the Laplacian of an image.
line(InputOutputArray img, Point pt1, Point pt2, Scalar color, {int thickness = 1, int lineType = LINE_8, int shift = 0}) Mat
Line draws a line segment connecting two points.
linearPolar(InputArray src, Point2f center, double maxRadius, int flags, {OutputArray? dst}) Mat
LinearPolar remaps an image to polar coordinates space.
linearPolarAsync(InputArray src, Point2f center, double maxRadius, int flags, {OutputArray? dst}) Future<Mat>
LinearPolar remaps an image to polar coordinates space.
lineAsync(InputOutputArray img, Point pt1, Point pt2, Scalar color, {int thickness = 1, int lineType = LINE_8, int shift = 0}) Future<Mat>
Line draws a line segment connecting two points.
log(InputArray src, {OutputArray? dst}) Mat
Log calculates the natural logarithm of every array element.
logAsync(InputArray src, {OutputArray? dst}) Future<Mat>
Log calculates the natural logarithm of every array element.
logPolar(InputArray src, Point2f center, double M, int flags, {OutputArray? dst}) Mat
LogPolar remaps an image to semilog-polar coordinates space.
logPolarAsync(InputArray src, Point2f center, double M, int flags, {OutputArray? dst}) Future<Mat>
LogPolar remaps an image to semilog-polar coordinates space.
LUT(InputArray src, InputArray lut, {OutputArray? dst}) Mat
Performs a look-up table transform of an array. Supported source types: CV_8U, CV_8S, CV_16U, CV_16S.
LUTAsync(InputArray src, InputArray lut, {OutputArray? dst}) Future<Mat>
Performs a look-up table transform of an array. Supported source types: CV_8U, CV_8S, CV_16U, CV_16S.
magnitude(InputArray x, InputArray y, {OutputArray? magnitude}) Mat
Magnitude calculates the magnitude of 2D vectors.
magnitudeAsync(InputArray x, InputArray y, {OutputArray? magnitude}) Future<Mat>
Magnitude calculates the magnitude of 2D vectors.
matchShapes(VecPoint contour1, VecPoint contour2, int method, double parameter) double
Compares two shapes using the given comparison method (ShapeMatchModes). For further details, please see: https://docs.opencv.org/4.x/d3/dc0/group__imgproc__shape.html#gaadc90cb16e2362c9bd6e7363e6e4c317
matchShapesAsync(VecPoint contour1, VecPoint contour2, int method, double parameter) Future<double>
Compares two shapes using the given comparison method (ShapeMatchModes). For further details, please see: https://docs.opencv.org/4.x/d3/dc0/group__imgproc__shape.html#gaadc90cb16e2362c9bd6e7363e6e4c317
matchTemplate(Mat image, Mat templ, int method, {OutputArray? result, Mat? mask}) Mat
MatchTemplate compares a template against overlapped image regions.
matchTemplateAsync(Mat image, Mat templ, int method, {OutputArray? result, Mat? mask}) Future<Mat>
MatchTemplate compares a template against overlapped image regions.
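A minimal template-matching sketch that locates the best match with minMaxLoc, documented further down this page (assuming the library is imported as `cv` and that TM_CCOEFF_NORMED carries its standard OpenCV name):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final scene = cv.imread('scene.jpg', flags: cv.IMREAD_GRAYSCALE);
  final templ = cv.imread('template.jpg', flags: cv.IMREAD_GRAYSCALE);
  final result = cv.matchTemplate(scene, templ, cv.TM_CCOEFF_NORMED);
  // For TM_CCOEFF_NORMED the best match is at the global maximum.
  final (minVal, maxVal, minLoc, maxLoc) = cv.minMaxLoc(result);
  print('best score $maxVal at $maxLoc');
}
```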
max(InputArray src1, InputArray src2, {OutputArray? dst}) Mat
Max calculates per-element maximum of two arrays or an array and a scalar.
maxAsync(InputArray src1, InputArray src2, {OutputArray? dst}) Future<Mat>
Max calculates per-element maximum of two arrays or an array and a scalar.
mean(InputArray src, {InputArray? mask}) Scalar
Mean calculates the mean value of the array elements, optionally restricted to the region selected by mask.
meanStdDev(InputArray src, {InputArray? mask}) → (Scalar, Scalar)
MeanStdDev calculates a mean and standard deviation of array elements.
meanStdDevAsync(InputArray src, {InputArray? mask}) Future<(Scalar, Scalar)>
MeanStdDev calculates a mean and standard deviation of array elements.
medianBlur(Mat src, int ksize, {OutputArray? dst}) Mat
MedianBlur blurs an image using the median filter.
medianBlurAsync(Mat src, int ksize, {OutputArray? dst}) Future<Mat>
MedianBlur blurs an image using the median filter.
merge(VecMat mv, {OutputArray? dst}) Mat
Merge creates one multi-channel array out of several single-channel ones.
mergeAsync(VecMat mv, {OutputArray? dst}) Future<Mat>
Merge creates one multi-channel array out of several single-channel ones.
min(InputArray src1, InputArray src2, {OutputArray? dst}) Mat
Min calculates per-element minimum of two arrays or an array and a scalar.
minAreaRect(VecPoint points) RotatedRect
MinAreaRect finds a rotated rectangle of the minimum area enclosing the input 2D point set.
minAreaRectAsync(VecPoint points) Future<RotatedRect>
MinAreaRect finds a rotated rectangle of the minimum area enclosing the input 2D point set.
minAsync(InputArray src1, InputArray src2, {OutputArray? dst}) Future<Mat>
Min calculates per-element minimum of two arrays or an array and a scalar.
minEnclosingCircle(VecPoint points) → (Point2f, double)
MinEnclosingCircle finds a circle of the minimum area enclosing the input 2D point set.
minEnclosingCircleAsync(VecPoint points) Future<(Point2f, double)>
MinEnclosingCircle finds a circle of the minimum area enclosing the input 2D point set.
minMaxIdx(InputArray src, {InputArray? mask}) → (double, double, int, int)
MinMaxIdx finds the global minimum and maximum in an array.
minMaxIdxAsync(InputArray src, {InputArray? mask}) Future<(double, double, int, int)>
MinMaxIdx finds the global minimum and maximum in an array.
minMaxLoc(InputArray src, {InputArray? mask}) → (double, double, Point, Point)
MinMaxLoc finds the global minimum and maximum in an array.
minMaxLocAsync(InputArray src, {InputArray? mask}) Future<(double, double, Point, Point)>
MinMaxLoc finds the global minimum and maximum in an array.
mixChannels(VecMat src, VecMat dst, VecI32 fromTo) VecMat
Copies specified channels from input arrays to the specified channels of output arrays.
mixChannelsAsync(VecMat src, VecMat dst, VecI32 fromTo) Future<VecMat>
Copies specified channels from input arrays to the specified channels of output arrays.
moments(Mat src, {bool binaryImage = false}) Moments
Moments calculates all of the moments up to the third order of a polygon or rasterized shape.
momentsAsync(Mat src, {bool binaryImage = false}) Future<Moments>
Moments calculates all of the moments up to the third order of a polygon or rasterized shape.
morphologyDefaultBorderValue() Scalar
MorphologyDefaultBorder returns "magic" border value for erosion and dilation. It is automatically transformed to Scalar::all(-DBL_MAX) for dilation.
morphologyDefaultBorderValueAsync() Future<Scalar>
MorphologyDefaultBorder returns "magic" border value for erosion and dilation. It is automatically transformed to Scalar::all(-DBL_MAX) for dilation.
morphologyEx(Mat src, int op, Mat kernel, {Mat? dst, Point? anchor, int iterations = 1, int borderType = BORDER_CONSTANT, Scalar? borderValue}) Mat
MorphologyEx performs advanced morphological transformations.
morphologyExAsync(Mat src, int op, Mat kernel, {Mat? dst, Point? anchor, int iterations = 1, int borderType = BORDER_CONSTANT, Scalar? borderValue}) Future<Mat>
MorphologyEx performs advanced morphological transformations.
moveWindow(String winName, int x, int y) → void
MoveWindow moves window to the specified position.
mulSpectrums(InputArray a, InputArray b, int flags, {OutputArray? c, bool conjB = false}) Mat
Mulspectrums performs the per-element multiplication of two Fourier spectrums.
mulSpectrumsAsync(InputArray a, InputArray b, int flags, {OutputArray? c, bool conjB = false}) Future<Mat>
Mulspectrums performs the per-element multiplication of two Fourier spectrums.
multiply(InputArray src1, InputArray src2, {OutputArray? dst, double scale = 1, int dtype = -1}) Mat
Multiply calculates the per-element scaled product of two arrays. Both input arrays must be of the same size and the same type.
multiplyAsync(InputArray src1, InputArray src2, {OutputArray? dst, double scale = 1, int dtype = -1}) Future<Mat>
Multiply calculates the per-element scaled product of two arrays. Both input arrays must be of the same size and the same type.
mulTransposed(InputArray src, OutputArray dst, bool aTa, {InputArray? delta, double scale = 1, int dtype = -1}) Mat
MulTransposed calculates the product of a matrix and its transposition.
mulTransposedAsync(InputArray src, OutputArray dst, bool aTa, {InputArray? delta, double scale = 1, int dtype = -1}) Future<Mat>
MulTransposed calculates the product of a matrix and its transposition.
namedWindow(String winName, [int flags = 0]) → void
creates a new named OpenCV window
NMSBoxes(VecRect bboxes, VecF32 scores, double scoreThreshold, double nmsThreshold, {double eta = 1.0, int topK = 0}) List<int>
NMSBoxes performs non maximum suppression given boxes and corresponding scores.
NMSBoxesAsync(VecRect bboxes, VecF32 scores, double scoreThreshold, double nmsThreshold, {double eta = 1.0, int topK = 0}) Future<List<int>>
norm(InputArray src1, {int normType = NORM_L2, InputArray? mask}) double
Norm calculates the absolute norm of an array.
norm1(InputArray src1, InputArray src2, {int normType = NORM_L2, InputArray? mask}) double
Norm calculates the absolute difference/relative norm of two arrays.
norm1Async(InputArray src1, InputArray src2, {int normType = NORM_L2, InputArray? mask}) Future<double>
Norm calculates the absolute difference/relative norm of two arrays.
normalize(InputArray src, InputOutputArray dst, {double alpha = 1, double beta = 0, int normType = NORM_L2, int dtype = -1, InputArray? mask}) Mat
Normalize normalizes the norm or value range of an array.
normalizeAsync(InputArray src, InputOutputArray dst, {double alpha = 1, double beta = 0, int normType = NORM_L2, int dtype = -1, InputArray? mask}) Future<Mat>
Normalize normalizes the norm or value range of an array.
normAsync(InputArray src1, {int normType = NORM_L2, InputArray? mask}) Future<double>
Norm calculates the absolute norm of an array.
OcvFinalizer<T extends NativeType>(NativeFinalizerFunctionT<T> func) NativeFinalizer
openCvVersion() String
get version
patchNaNs(InputArray a, double val) Mat
PatchNaNs replaces NaN elements of the given floating-point array with val.
PCABackProject(InputArray data, InputArray mean, InputArray eigenvectors, {OutputArray? dst}) Mat
https://docs.opencv.org/4.x/d2/de8/group__core__array.html#gab26049f30ee8e94f7d69d82c124faafc
PCABackProjectAsync(InputArray data, InputArray mean, InputArray eigenvectors, {OutputArray? dst}) Future<Mat>
PCACompute(InputArray data, InputOutputArray mean, {OutputArray? eigenvectors, OutputArray? eigenvalues, int maxComponents = 0}) → (Mat, Mat, Mat)
PCACompute performs PCA.
PCACompute1(InputArray data, InputOutputArray mean, double retainedVariance, {OutputArray? eigenvectors, OutputArray? eigenvalues}) → (Mat, Mat, Mat)
PCACompute1Async(InputArray data, InputOutputArray mean, double retainedVariance, {OutputArray? eigenvectors, OutputArray? eigenvalues}) Future<(Mat, Mat, Mat)>
PCAComputeAsync(InputArray data, InputOutputArray mean, {OutputArray? eigenvectors, OutputArray? eigenvalues, int maxComponents = 0}) Future<(Mat, Mat, Mat)>
PCACompute performs PCA.
PCAProject(Mat data, Mat mean, Mat eigenvectors, {OutputArray? result}) → (Mat, Mat)
PCAProjectAsync(Mat data, Mat mean, Mat eigenvectors, {OutputArray? result}) Future<(Mat, Mat)>
pencilSketch(InputArray src, {double sigmaS = 60, double sigmaR = 0.07, double shadeFactor = 0.02}) → (Mat, Mat)
PencilSketch produces a pencil-like non-photorealistic line drawing. For further details, please see: https://docs.opencv.org/4.x/df/dac/group__photo__render.html#gae5930dd822c713b36f8529b21ddebd0c
pencilSketchAsync(InputArray src, {double sigmaS = 60, double sigmaR = 0.07, double shadeFactor = 0.02}) Future<(Mat, Mat)>
perspectiveTransform(InputArray src, InputArray m, {OutputArray? dst}) Mat
PerspectiveTransform performs the perspective matrix transformation of vectors.
perspectiveTransformAsync(InputArray src, InputArray m, {OutputArray? dst}) Future<Mat>
PerspectiveTransform performs the perspective matrix transformation of vectors.
phase(InputArray x, InputArray y, {OutputArray? angle, bool angleInDegrees = false}) Mat
Phase calculates the rotation angle of 2D vectors.
phaseAsync(InputArray x, InputArray y, {OutputArray? angle, bool angleInDegrees = false}) Future<Mat>
Phase calculates the rotation angle of 2D vectors.
phaseCorrelate(InputArray src1, InputArray src2, {InputArray? window}) → (Point2f, double)
Apply phaseCorrelate.
phaseCorrelateAsync(InputArray src1, InputArray src2, {InputArray? window}) Future<(Point2f, double)>
Apply phaseCorrelate.
pointPolygonTest(VecPoint points, Point2f pt, bool measureDist) double
PointPolygonTest performs a point-in-contour test.
pointPolygonTestAsync(VecPoint points, Point2f pt, bool measureDist) Future<double>
PointPolygonTest performs a point-in-contour test.
polarToCart(InputArray magnitude, InputArray angle, {OutputArray? x, OutputArray? y, bool angleInDegrees = false}) → (Mat, Mat)
PolarToCart calculates the x and y coordinates of 2D vectors from their magnitude and angle.
polarToCartAsync(InputArray magnitude, InputArray angle, {OutputArray? x, OutputArray? y, bool angleInDegrees = false}) Future<(Mat, Mat)>
PolarToCart calculates the x and y coordinates of 2D vectors from their magnitude and angle.
pollKey() int
polylines(InputOutputArray img, VecVecPoint pts, bool isClosed, Scalar color, {int thickness = 1, int lineType = LINE_8, int shift = 0}) Mat
Polylines draws several polygonal curves.
polylinesAsync(InputOutputArray img, VecVecPoint pts, bool isClosed, Scalar color, {int thickness = 1, int lineType = LINE_8, int shift = 0}) Future<Mat>
Polylines draws several polygonal curves.
pow(InputArray src, double power, {OutputArray? dst}) Mat
Pow raises every array element to a power.
powAsync(InputArray src, double power, {OutputArray? dst}) Future<Mat>
Pow raises every array element to a power.
PSNR(InputArray src1, InputArray src2, {double R = 255.0}) double
Computes the Peak Signal-to-Noise Ratio (PSNR) image quality metric.
PSNRAsync(InputArray src1, InputArray src2, {double R = 255.0}) Future<double>
Computes the Peak Signal-to-Noise Ratio (PSNR) image quality metric.
putText(InputOutputArray img, String text, Point org, int fontFace, double fontScale, Scalar color, {int thickness = 1, int lineType = LINE_8, bool bottomLeftOrigin = false}) Mat
PutText draws a text string. It renders the specified text string into the img Mat at the location passed in the "org" param, using the desired font face, font scale, color, and line thickness.
putTextAsync(InputOutputArray img, String text, Point org, int fontFace, double fontScale, Scalar color, {int thickness = 1, int lineType = LINE_8, bool bottomLeftOrigin = false}) Future<Mat>
PutText draws a text string. It renders the specified text string into the img Mat at the location passed in the "org" param, using the desired font face, font scale, color, and line thickness.
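A minimal annotation sketch combining rectangle and putText (assuming the library is imported as `cv`, that Rect(x, y, width, height), Point(x, y) and Scalar(b, g, r, a) constructors take those argument orders, and that FONT_HERSHEY_SIMPLEX carries its standard OpenCV name):

```dart
import 'package:opencv_dart/opencv_dart.dart' as cv; // assumed import path

void main() {
  final img = cv.imread('input.jpg');
  final green = cv.Scalar(0, 255, 0, 255); // assumed (b, g, r, a) order
  cv.rectangle(img, cv.Rect(50, 50, 200, 120), green, thickness: 2);
  cv.putText(img, 'detection', cv.Point(50, 40),
      cv.FONT_HERSHEY_SIMPLEX, 0.8, green, thickness: 2);
  cv.imwrite('annotated.png', img);
}
```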
pyrDown(Mat src, {Mat? dst, (int, int) dstsize = (0, 0), int borderType = BORDER_DEFAULT}) Mat
PyrDown blurs an image and downsamples it.
pyrDownAsync(Mat src, {Mat? dst, (int, int) dstsize = (0, 0), int borderType = BORDER_DEFAULT}) Future<Mat>
PyrDown blurs an image and downsamples it.
pyrUp(Mat src, {Mat? dst, (int, int) dstsize = (0, 0), int borderType = BORDER_DEFAULT}) Mat
PyrUp upsamples an image and then blurs it.
pyrUpAsync(Mat src, {Mat? dst, (int, int) dstsize = (0, 0), int borderType = BORDER_DEFAULT}) Future<Mat>
PyrUp upsamples an image and then blurs it.
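A short sketch showing the pyrDown/pyrUp pair, assuming the library is imported as cv and that imread is available; the file name is a placeholder:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final img = cv.imread('input.png'); // imread is assumed to exist
    // pyrDown blurs and halves the width and height.
    final half = cv.pyrDown(img);
    // pyrUp doubles the size again; the result is blurrier than the original.
    final back = cv.pyrUp(half);
    print('${img.cols}x${img.rows} -> ${half.cols}x${half.rows} -> ${back.cols}x${back.rows}');
  }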
randn(InputOutputArray dst, Scalar mean, Scalar stddev) Mat
RandN Fills the array with normally distributed random numbers.
randnAsync(InputOutputArray dst, Scalar mean, Scalar stddev) Future<Mat>
RandN Fills the array with normally distributed random numbers.
randShuffle(InputOutputArray dst, {double iterFactor = 1, Rng? rng}) Mat
RandShuffle Shuffles the array elements randomly.
randShuffleAsync(InputOutputArray dst, {double iterFactor = 1, Rng? rng}) Future<Mat>
RandShuffle Shuffles the array elements randomly.
randu(InputOutputArray dst, Scalar low, Scalar high) Mat
RandU Generates a single uniformly-distributed random number or an array of random numbers.
randuAsync(InputOutputArray dst, Scalar low, Scalar high) Future<Mat>
RandU Generates a single uniformly-distributed random number or an array of random numbers.
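A minimal sketch of randu/randn filling a Mat in place, assuming the library is imported as cv and that Mat.zeros and MatType are available in this binding:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    // Mat.zeros/MatType are assumed; both functions fill dst in place.
    final noise = cv.Mat.zeros(64, 64, cv.MatType.CV_8UC1);
    // Uniformly distributed values in [0, 255).
    cv.randu(noise, cv.Scalar(0, 0, 0, 0), cv.Scalar(255, 0, 0, 0));
    // Normally distributed values with mean 128 and standard deviation 20.
    cv.randn(noise, cv.Scalar(128, 0, 0, 0), cv.Scalar(20, 0, 0, 0));
  }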
rectangle(InputOutputArray img, Rect rect, Scalar color, {int thickness = 1, int lineType = LINE_8, int shift = 0}) Mat
Rectangle draws a simple, thick, or filled up-right rectangle. It renders a rectangle with the desired characteristics to the target Mat image.
rectangleAsync(InputOutputArray img, Rect rect, Scalar color, {int thickness = 1, int lineType = LINE_8, int shift = 0}) Future<Mat>
Rectangle draws a simple, thick, or filled up-right rectangle. It renders a rectangle with the desired characteristics to the target Mat image.
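A hedged sketch of rectangle, assuming the library is imported as cv and that Mat.zeros and MatType are available in this binding:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final canvas = cv.Mat.zeros(240, 320, cv.MatType.CV_8UC3); // assumed constructor
    // A red 120x80 outline with its top-left corner at (40, 40).
    cv.rectangle(canvas, cv.Rect(40, 40, 120, 80), cv.Scalar(0, 0, 255, 0), thickness: 2);
    // A negative thickness fills the rectangle instead of outlining it.
    cv.rectangle(canvas, cv.Rect(200, 40, 80, 80), cv.Scalar(0, 255, 0, 0), thickness: -1);
  }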
reduce(InputArray src, int dim, int rtype, {OutputArray? dst, int dtype = -1}) Mat
Reduce reduces a matrix to a vector.
reduceArgMax(InputArray src, int axis, {OutputArray? dst, bool lastIndex = false}) Mat
Finds indices of max elements along provided axis.
reduceArgMaxAsync(InputArray src, int axis, {OutputArray? dst, bool lastIndex = false}) Future<Mat>
Finds indices of max elements along provided axis.
reduceArgMin(InputArray src, int axis, {OutputArray? dst, bool lastIndex = false}) Mat
Finds indices of min elements along provided axis.
reduceArgMinAsync(InputArray src, int axis, {OutputArray? dst, bool lastIndex = false}) Future<Mat>
Finds indices of min elements along provided axis.
reduceAsync(InputArray src, int dim, int rtype, {OutputArray? dst, int dtype = -1}) Future<Mat>
Reduce reduces a matrix to a vector.
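A small sketch of reduce, assuming the library is imported as cv and that Mat.fromList and the REDUCE_* constants exist in this binding:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    // A 2x3 float matrix; Mat.fromList and REDUCE_SUM are assumed to exist.
    final m = cv.Mat.fromList(
        2, 3, cv.MatType.CV_32FC1, [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]);
    // dim = 0 collapses the rows, yielding a 1x3 row of column sums.
    final colSums = cv.reduce(m, 0, cv.REDUCE_SUM);
    // dim = 1 collapses the columns, yielding a 2x1 column of row sums.
    final rowSums = cv.reduce(m, 1, cv.REDUCE_SUM);
    print('${colSums.rows}x${colSums.cols} and ${rowSums.rows}x${rowSums.cols}');
  }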
remap(InputArray src, InputArray map1, InputArray map2, int interpolation, {OutputArray? dst, int borderMode = BORDER_CONSTANT, Scalar? borderValue}) Mat
Remap applies a generic geometrical transformation to an image.
remapAsync(InputArray src, InputArray map1, InputArray map2, int interpolation, {OutputArray? dst, int borderMode = BORDER_CONSTANT, Scalar? borderValue}) Future<Mat>
Remap applies a generic geometrical transformation to an image.
repeat(InputArray src, int ny, int nx, {OutputArray? dst}) Mat
Repeat fills the output array with repeated copies of the input array.
repeatAsync(InputArray src, int ny, int nx, {OutputArray? dst}) Future<Mat>
Repeat fills the output array with repeated copies of the input array.
resize(InputArray src, (int, int) dsize, {OutputArray? dst, double fx = 0, double fy = 0, int interpolation = INTER_LINEAR}) Mat
Resize resizes an image. It resizes the image src down to or up to the specified size, storing the result in dst. Note that src and dst may be the same image. To scale by a factor, pass (0, 0) as dsize together with non-zero fx and fy; to scale to an explicit size, pass a non-zero dsize and leave fx and fy at zero.
resizeAsync(InputArray src, (int, int) dsize, {OutputArray? dst, double fx = 0, double fy = 0, int interpolation = INTER_LINEAR}) Future<Mat>
Resize resizes an image. It resizes the image src down to or up to the specified size, storing the result in dst. Note that src and dst may be the same image. To scale by a factor, pass (0, 0) as dsize together with non-zero fx and fy; to scale to an explicit size, pass a non-zero dsize and leave fx and fy at zero.
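A short sketch of the two resize modes described above, assuming the library is imported as cv and that imread is available; the file name is a placeholder:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final img = cv.imread('input.png'); // imread is assumed to exist
    // Explicit target size: non-zero dsize, fx/fy left at 0 (default INTER_LINEAR).
    final fixed = cv.resize(img, (640, 480));
    // Scale by factor: (0, 0) as dsize with non-zero fx/fy.
    final half = cv.resize(img, (0, 0), fx: 0.5, fy: 0.5);
    print('${fixed.cols}x${fixed.rows} and ${half.cols}x${half.rows}');
  }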
resizeWindow(String winName, int width, int height) → void
ResizeWindow resizes window to the specified size.
rotate(InputArray src, int rotateCode, {OutputArray? dst}) Mat
Rotate rotates a 2D array in multiples of 90 degrees.
rotateAsync(InputArray src, int rotateCode, {OutputArray? dst}) Future<Mat>
Rotate rotates a 2D array in multiples of 90 degrees.
scaleAdd(InputArray src1, double alpha, InputArray src2, {OutputArray? dst}) Mat
Calculates the sum of a scaled array and another array.
scaleAddAsync(InputArray src1, double alpha, InputArray src2, {OutputArray? dst}) Future<Mat>
Calculates the sum of a scaled array and another array.
scharr(Mat src, int ddepth, int dx, int dy, {Mat? dst, double scale = 1, double delta = 0, int borderType = BORDER_DEFAULT}) Mat
Scharr calculates the first x- or y- image derivative using Scharr operator.
scharrAsync(Mat src, int ddepth, int dx, int dy, {Mat? dst, double scale = 1, double delta = 0, int borderType = BORDER_DEFAULT}) Future<Mat>
Scharr calculates the first x- or y- image derivative using Scharr operator.
seamlessClone(InputArray src, InputArray dst, InputArray mask, Point p, int flags) Mat
SeamlessClone blends two images using Poisson blending. For further details, please see: https://docs.opencv.org/master/df/da0/group__photo__clone.html#ga2bf426e4c93a6b1f21705513dfeca49d
seamlessCloneAsync(InputArray src, InputArray dst, InputArray mask, Point p, int flags) Future<Mat>
SeamlessClone blends two images using Poisson blending.
selectROI(String winName, Mat img, {bool showCrosshair = true, bool fromCenter = false, bool printNotice = true}) Rect
SelectROI selects a Region Of Interest (ROI) on the given image. It creates a window and allows the user to select a ROI using the mouse.
selectROIs(String winName, Mat img, {bool showCrosshair = true, bool fromCenter = false, bool printNotice = true}) VecRect
SelectROIs selects multiple Regions Of Interest (ROI) on the given image. It creates a window and allows the user to select ROIs using the mouse.
sepFilter2D(InputArray src, int ddepth, InputArray kernelX, InputArray kernelY, {OutputArray? dst, Point? anchor, double delta = 0, int borderType = BORDER_DEFAULT}) Mat
SepFilter2D applies a separable linear filter to the image.
sepFilter2DAsync(InputArray src, int ddepth, InputArray kernelX, InputArray kernelY, {OutputArray? dst, Point? anchor, double delta = 0, int borderType = BORDER_DEFAULT}) Future<Mat>
SepFilter2D applies a separable linear filter to the image.
setIdentity(InputOutputArray mtx, {Scalar? s}) Mat
SetIdentity initializes a scaled identity matrix.
setIdentityAsync(InputOutputArray mtx, {Scalar? s}) Future<Mat>
SetIdentity initializes a scaled identity matrix.
setLogLevel(int logLevel) → void
Sets the global logging level.
setNumThreads(int n) → void
Set the number of threads for OpenCV.
setTrackbarMax(String trackbarName, String winName, int maxval) → void
setTrackbarMin(String trackbarName, String winName, int minval) → void
setTrackbarPos(String trackbarName, String winName, int pos) → void
setWindowProperty(String winName, WindowPropertyFlags flag, double value) → void
setWindowProperty changes parameters of a window dynamically.
setWindowTitle(String winName, String title) → void
SetWindowTitle updates window title.
sobel(Mat src, int ddepth, int dx, int dy, {Mat? dst, int ksize = 3, double scale = 1, double delta = 0, int borderType = BORDER_DEFAULT}) Mat
Sobel calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.
sobelAsync(Mat src, int ddepth, int dx, int dy, {Mat? dst, int ksize = 3, double scale = 1, double delta = 0, int borderType = BORDER_DEFAULT}) Future<Mat>
Sobel calculates the first, second, third, or mixed image derivatives using an extended Sobel operator.
solve(InputArray src1, InputArray src2, {OutputArray? dst, int flags = DECOMP_LU}) → (bool, Mat)
Solve solves one or more linear systems or least-squares problems.
solveAsync(InputArray src1, InputArray src2, {OutputArray? dst, int flags = DECOMP_LU}) Future<(bool, Mat)>
Solve solves one or more linear systems or least-squares problems.
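A minimal sketch of solve on a 2x2 linear system, assuming the library is imported as cv and that Mat.fromList exists in this binding:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    // Solve A * x = b with A = [[2, 1], [1, 3]] and b = [3, 5].
    // Mat.fromList is assumed to exist in this binding.
    final a = cv.Mat.fromList(2, 2, cv.MatType.CV_32FC1, [2.0, 1.0, 1.0, 3.0]);
    final b = cv.Mat.fromList(2, 1, cv.MatType.CV_32FC1, [3.0, 5.0]);
    final (ok, x) = cv.solve(a, b, flags: cv.DECOMP_LU);
    // ok is false when the system cannot be solved with the chosen method;
    // here x should be approximately [0.8, 1.4].
    print('ok=$ok, x has ${x.rows} rows');
  }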
solveCubic(InputArray coeffs, {OutputArray? roots}) → (int, Mat)
SolveCubic finds the real roots of a cubic equation.
solveCubicAsync(InputArray coeffs, {OutputArray? roots}) Future<(int, Mat)>
SolveCubic finds the real roots of a cubic equation.
solvePoly(InputArray coeffs, {OutputArray? roots, int maxIters = 300}) → (double, Mat)
SolvePoly finds the real or complex roots of a polynomial equation.
solvePolyAsync(InputArray coeffs, {OutputArray? roots, int maxIters = 300}) Future<(double, Mat)>
SolvePoly finds the real or complex roots of a polynomial equation.
sort(InputArray src, int flags, {OutputArray? dst}) Mat
Sort sorts each row or each column of a matrix.
sortAsync(InputArray src, int flags, {OutputArray? dst}) Future<Mat>
Sort sorts each row or each column of a matrix.
sortIdx(InputArray src, int flags, {OutputArray? dst}) Mat
SortIdx sorts each row or each column of a matrix. Instead of reordering the elements themselves, it stores the indices of sorted elements in the output array.
sortIdxAsync(InputArray src, int flags, {OutputArray? dst}) Future<Mat>
SortIdx sorts each row or each column of a matrix. Instead of reordering the elements themselves, it stores the indices of sorted elements in the output array.
spatialGradient(Mat src, {Mat? dx, Mat? dy, int ksize = 3, int borderType = BORDER_DEFAULT}) → (Mat, Mat)
SpatialGradient calculates the first order image derivative in both x and y using a Sobel operator.
spatialGradientAsync(Mat src, {Mat? dx, Mat? dy, int ksize = 3, int borderType = BORDER_DEFAULT}) Future<(Mat, Mat)>
SpatialGradient calculates the first order image derivative in both x and y using a Sobel operator.
split(InputArray m) VecMat
Split creates an array of single channel images from a multi-channel image. Created images should be closed manually to avoid memory leaks.
splitAsync(InputArray m) Future<VecMat>
Split creates an array of single channel images from a multi-channel image. Created images should be closed manually to avoid memory leaks.
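A small sketch of split, assuming the library is imported as cv, that imread is available, and that the returned Mats expose a dispose()-style cleanup method in this binding:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final img = cv.imread('input.png'); // imread assumed; typically 3-channel BGR
    final channels = cv.split(img);
    print('channels: ${channels.length}');
    // Per the note above, release each returned Mat when finished with it;
    // the exact cleanup method (dispose/close) depends on the binding version.
    for (var i = 0; i < channels.length; i++) {
      channels[i].dispose();
    }
  }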
sqrBoxFilter(Mat src, int depth, (int, int) ksize, {Point? anchor, bool normalize = true, int borderType = BORDER_DEFAULT, Mat? dst}) Mat
SqrBoxFilter calculates the normalized sum of squares of the pixel values overlapping the filter.
sqrBoxFilterAsync(Mat src, int depth, (int, int) ksize, {Point? anchor, bool normalize = true, int borderType = BORDER_DEFAULT, Mat? dst}) Future<Mat>
SqrBoxFilter calculates the normalized sum of squares of the pixel values overlapping the filter.
sqrt(Mat src, {Mat? dst}) Mat
Calculates a square root of array elements.
sqrtAsync(Mat src, {Mat? dst}) Future<Mat>
Calculates a square root of array elements.
stylization(InputArray src, {double sigmaS = 60, double sigmaR = 0.45}) Mat
Stylization aims to produce digital imagery with a wide variety of effects not focused on photorealism. Edge-aware filters are ideal for stylization, as they can abstract regions of low contrast while preserving, or enhancing, high-contrast features. For further details, please see: https://docs.opencv.org/4.x/df/dac/group__photo__render.html#gacb0f7324017df153d7b5d095aed53206
stylizationAsync(InputArray src, {double sigmaS = 60, double sigmaR = 0.45}) Future<Mat>
Stylization aims to produce digital imagery with a wide variety of effects not focused on photorealism.
subtract(InputArray src1, InputArray src2, {OutputArray? dst, InputArray? mask, int dtype = -1}) Mat
Subtract calculates the per-element subtraction of two arrays or an array and a scalar.
subtractAsync(InputArray src1, InputArray src2, {OutputArray? dst, InputArray? mask, int dtype = -1}) Future<Mat>
Subtract calculates the per-element subtraction of two arrays or an array and a scalar.
sum(Mat src) Scalar
Calculates the sum of array elements.
sumAsync(Mat src) Future<Scalar>
Calculates the sum of array elements.
SVBackSubst(InputArray w, InputArray u, InputArray vh, InputArray rhs, {OutputArray? dst}) Mat
SVBackSubst performs a singular value back substitution.
SVDecomp(InputArray src, {OutputArray? w, OutputArray? u, OutputArray? vt, int flags = 0}) → (Mat, Mat, Mat)
SVDecomp decomposes a matrix into its singular value decomposition and stores the results in the output arrays w, u, and vt.
textureFlattening(InputArray src, InputArray mask, {double lowThreshold = 30, double highThreshold = 45, int kernelSize = 3}) Mat
TextureFlattening washes out the texture of the selected region, giving its contents a flat aspect. For further details, please see: https://docs.opencv.org/master/df/da0/group__photo__clone.html#gad55df6aa53797365fa7cc23959a54004
textureFlatteningAsync(InputArray src, InputArray mask, {double lowThreshold = 30, double highThreshold = 45, int kernelSize = 3}) Future<Mat>
TextureFlattening washes out the texture of the selected region, giving its contents a flat aspect.
theRNG() Rng
TheRNG Returns the default random number generator.
threshold(InputArray src, double thresh, double maxval, int type, {OutputArray? dst}) → (double, Mat)
Threshold applies a fixed-level threshold to each array element.
thresholdAsync(InputArray src, double thresh, double maxval, int type, {OutputArray? dst}) Future<(double, Mat)>
Threshold applies a fixed-level threshold to each array element.
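A hedged sketch of threshold with Otsu's method, assuming the library is imported as cv and that imread, IMREAD_GRAYSCALE and the THRESH_* constants are available; the file name is a placeholder:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final gray = cv.imread('input.png', flags: cv.IMREAD_GRAYSCALE); // assumed
    // With THRESH_OTSU the passed thresh (0 here) is ignored and the value
    // chosen by Otsu's method is returned as the first record element.
    final (usedThresh, binary) = cv.threshold(
        gray, 0, 255, cv.THRESH_BINARY | cv.THRESH_OTSU);
    print('Otsu threshold: $usedThresh, result ${binary.cols}x${binary.rows}');
  }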
throwIfFailed(Pointer<CvStatus> s) → void
trace(InputArray mtx) Scalar
Trace returns the trace of a matrix.
traceAsync(InputArray mtx) Future<Scalar>
Trace returns the trace of a matrix.
transform(InputArray src, InputArray m, {OutputArray? dst}) Mat
Transform performs the matrix transformation of every array element.
transformAsync(InputArray src, InputArray m, {OutputArray? dst}) Future<Mat>
Transform performs the matrix transformation of every array element.
transpose(InputArray src, {OutputArray? dst}) Mat
Transpose transposes a matrix.
transposeAsync(InputArray src, {OutputArray? dst}) Future<Mat>
Transpose transposes a matrix.
transposeND(InputArray src, List<int> order, {OutputArray? dst}) Mat
Transpose for n-dimensional matrices.
transposeNDAsync(InputArray src, List<int> order, {OutputArray? dst}) Future<Mat>
Transpose for n-dimensional matrices.
undistort(InputArray src, InputArray cameraMatrix, InputArray distCoeffs, {OutputArray? dst, InputArray? newCameraMatrix}) Mat
Undistort transforms an image to compensate for lens distortion.
undistortAsync(InputArray src, InputArray cameraMatrix, InputArray distCoeffs, {OutputArray? dst, InputArray? newCameraMatrix}) Future<Mat>
Undistort transforms an image to compensate for lens distortion.
undistortPoints(InputArray src, InputArray cameraMatrix, InputArray distCoeffs, {OutputArray? dst, InputArray? R, InputArray? P, (int, int, double) criteria = (TERM_COUNT + TERM_EPS, 30, 1e-4)}) Mat
undistortPointsAsync(InputArray src, InputArray cameraMatrix, InputArray distCoeffs, {OutputArray? dst, InputArray? R, InputArray? P, (int, int, double) criteria = (TERM_COUNT + TERM_EPS, 30, 1e-4)}) Future<Mat>
vconcat(InputArray src1, InputArray src2, {OutputArray? dst}) Mat
Vconcat applies vertical concatenation to given matrices.
vconcatAsync(InputArray src1, InputArray src2, {OutputArray? dst}) Future<Mat>
Vconcat applies vertical concatenation to given matrices.
waitKey(int delay) int
WaitKey waits for a pressed key. This function is the only method in OpenCV's HighGUI that can fetch and handle events, so it needs to be called periodically for normal event processing.
waitKeyEx(int delay) int
warpAffine(InputArray src, InputArray M, (int, int) dsize, {OutputArray? dst, int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT, Scalar? borderValue}) Mat
WarpAffine applies an affine transformation to an image.
warpAffineAsync(InputArray src, InputArray M, (int, int) dsize, {OutputArray? dst, int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT, Scalar? borderValue}) Future<Mat>
WarpAffine applies an affine transformation to an image.
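A minimal sketch of warpAffine using a hand-built 2x3 translation matrix, assuming the library is imported as cv and that imread and Mat.fromList are available in this binding:

  import 'package:opencv_dart/opencv_dart.dart' as cv;

  void main() {
    final img = cv.imread('input.png'); // imread is assumed to exist
    // A 2x3 affine matrix that shifts the image 30 px right and 10 px down.
    // Mat.fromList is assumed to exist in this binding.
    final m = cv.Mat.fromList(
        2, 3, cv.MatType.CV_64FC1, [1.0, 0.0, 30.0, 0.0, 1.0, 10.0]);
    final shifted = cv.warpAffine(img, m, (img.cols, img.rows));
    print('${shifted.cols}x${shifted.rows}');
  }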
warpPerspective(InputArray src, InputArray M, (int, int) dsize, {OutputArray? dst, int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT, Scalar? borderValue}) Mat
WarpPerspective applies a perspective transformation to an image.
warpPerspectiveAsync(InputArray src, InputArray M, (int, int) dsize, {OutputArray? dst, int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT, Scalar? borderValue}) Future<Mat>
WarpPerspective applies a perspective transformation to an image.
watershed(InputArray image, InputOutputArray markers) Mat
Watershed performs a marker-based image segmentation using the watershed algorithm.
watershedAsync(InputArray image, InputOutputArray markers) Future<Mat>
Watershed performs a marker-based image segmentation using the watershed algorithm.

Exceptions / Errors

CvdException
CvException