scanWithConfiguration static method

Future<Map> scanWithConfiguration(
  Map configuration
)

Starts a scanning flow with three screens (Camera, Document Detection, Post Processing).

It takes a configuration parameter that accepts the following options (see the example sketch after this list):

  • source: camera, image or library (defaults to camera)
  • sourceImageUrl: an absolute image URL, required if source is image. Example: file:///var/…/image.png
  • multiPage: boolean (defaults to true). If true, after a page is scanned, a prompt to scan another page will be displayed. If false, a single page will be scanned.
  • multiPageFormat: pdf, tiff, none (defaults to pdf)
  • defaultFilter: the filter that will be applied by default to enhance scans, or none if no enhancement should be performed by default. Default value is automatic.
  • availableFilters: an array of filters that the user can select when they tap on the edit filter button. Defaults to `none`, `automatic`, `automaticMonochrome`, `automaticBlackAndWhite`, `automaticColor`, `photo`.
  • pdfPageSize: fit, a4 or letter (defaults to fit).
  • pdfMaxScanDimension: max dimension in pixels when images are scaled before PDF generation, for example 2000 to fit both height and width within 2000px. Defaults to 0, which means no scaling is performed.
  • pdfFontFileUrl: Custom font file used during the PDF generation to embed an invisible text layer. If null, a default font is used, which only supports Latin languages.
  • jpegQuality: JPEG quality used to compress captured images. Between 0 and 100, 100 being the best quality. Default is 60.
  • postProcessingActions: an array with the desired actions to display during the post processing screen (defaults to all actions). Possible actions are rotate, editFilter and correctDistortion.
  • defaultCurvatureCorrection: enabled or disabled. Whether curvature correction is applied by default (Android only). Disabled by default.
  • flashButtonHidden: boolean (defaults to false)
  • defaultFlashMode: auto, on, off (defaults to off)
  • foregroundColor: string representing a color, must start with a #. The color of the icons and text (defaults to '#ffffff').
  • backgroundColor: string representing a color, must start with a #. The color of the toolbar and screen background (defaults to black).
  • highlightColor: string representing a color, must start with a #. The color of the image overlays (defaults to blue).
  • menuColor: string representing a color, must start with a #. The color of the menus (defaults to the system default).
  • ocrConfiguration: text recognition options. Text recognition will run on a background thread for every captured image. No text recognition will be applied if this parameter is not present.
    • languages: list of the BCP 47 language tags (e.g. ["en-US"]) for which to run text recognition. Note that text recognition will take longer if multiple languages are specified.
    • outputFormats: an array with the formats in which the OCR result is made available in the ScanFlow result (defaults to all formats). Possible formats are rawText, hOCR and textLayerInPDF.
  • structuredData: an array of the structured data you want to extract. E.g.: ['receipt', 'businessCard']. Possible values are bankDetails, receipt, businessCard. Only available on iOS.
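
For example, a configuration for a camera-based, multi-page PDF scan with text recognition could look like the sketch below. The option keys and values come from the list above; the plugin class name (ScanPlugin) and the plain-string value encoding are illustrative assumptions, not taken from this documentation.

Future<void> startScan() async {
  final configuration = {
    // Capture from the device camera and allow scanning several pages.
    'source': 'camera',
    'multiPage': true,
    'multiPageFormat': 'pdf',
    // Let the SDK pick the best enhancement automatically.
    'defaultFilter': 'automatic',
    'pdfPageSize': 'fit',
    'jpegQuality': 60,
    // Run text recognition on every captured image.
    'ocrConfiguration': {
      'languages': ['en-US'],
      'outputFormats': ['rawText', 'hOCR'],
    },
  };

  // ScanPlugin is a hypothetical wrapper class exposing the static method.
  final Map result = await ScanPlugin.scanWithConfiguration(configuration);
  print('Scanned ${((result['scans'] as List?) ?? const []).length} page(s)');
}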

The ScanFlow offers a variety of filters to enhance the appearance of different kinds of documents. Some filters are dynamic (or automatic), meaning they apply the best possible enhancement, possibly under some constraints. For example, the automaticBlackAndWhite filter applies the best enhancement assuming the scan is a text document, while making sure the output has a grayscale color palette. The dynamic filters are: automatic, automaticColor, automaticBlackAndWhite, automaticMonochrome. Other filters are static, which means they always perform the same enhancement operation, regardless of the document's characteristics. The static filters are: photo, softBlackAndWhite, softColor, strongMonochrome, strongBlackAndWhite, strongColor, darkBackground.
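
For instance, to default to a black-and-white text enhancement while keeping a few alternatives selectable in the filter editor, the relevant options could be set as in this sketch (filter names are taken from the lists above):

final filterConfiguration = {
  // Assume every scan is a text document and output a grayscale palette.
  'defaultFilter': 'automaticBlackAndWhite',
  // Filters the user can pick from the edit filter button.
  'availableFilters': ['none', 'automatic', 'automaticBlackAndWhite', 'photo'],
};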

It returns a Future<Map> containing the following fields (see the result-handling sketch after this list):

  • multiPageDocumentUrl: a document containing all the scanned pages (example: "file://…")
  • scans: an array of scan objects. Each scan object has:
    • originalUrl: the original file as scanned from the camera (example: "file://…")
    • enhancedUrl: the cropped and enhanced file, as processed by the SDK (example: "file://…")
    • ocrResult: the result of text recognition for this scan
      • text: the raw text that was recognized
      • hocrTextLayout: the recognized text in hOCR format (with position, style…)
    • structuredData: the result of the structured data extraction. A subdictionary will be present for each type of structured data detected by the scan flow.
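
A sketch of reading these fields from the returned map, assuming the key names above (the null check reflects that ocrResult is only present when text recognition was enabled):

void handleScanResult(Map result) {
  // Document containing all the scanned pages (PDF or TIFF).
  print('Combined document: ${result['multiPageDocumentUrl']}');

  for (final scan in (result['scans'] as List? ?? const [])) {
    print('Original: ${scan['originalUrl']}');
    print('Enhanced: ${scan['enhancedUrl']}');

    // Only present when ocrConfiguration was provided.
    final ocr = scan['ocrResult'];
    if (ocr != null) {
      print('Recognized text: ${ocr['text']}');
    }
  }
}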

Implementation

static Future<Map> scanWithConfiguration(Map configuration) async {
  return await _channel.invokeMethod('scanWithConfiguration', configuration);
}
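
Because the call is relayed over a platform channel, a failure reported by the native side surfaces as a PlatformException. The error codes are not documented here, so the guard below is only a sketch:

import 'package:flutter/services.dart';

Future<Map?> scanSafely(Map configuration) async {
  try {
    // ScanPlugin is the hypothetical wrapper class used in the earlier sketches.
    return await ScanPlugin.scanWithConfiguration(configuration);
  } on PlatformException catch (e) {
    // Error codes and messages come from the native side; treat them as opaque.
    print('Scan flow failed: ${e.code} ${e.message}');
    return null;
  }
}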