
Direct processing API in Swift

dino.gustin edited this page Jan 4, 2016 · 2 revisions

This guide briefly shows how to process UIImage objects with the BlinkID SDK, without starting the camera video capture.

This feature covers use cases such as:

  • recognizing text on images in the Camera Roll
  • taking a full-resolution photo and sending it for processing
  • scanning barcodes on images in e-mails
  • etc.

This guide is essentially the same as the "Getting Started" guide. It also closely follows the NoCamera-sample app in the SDK repository.

The NoCamera-sample demo app presents a UIImagePickerController for taking full-resolution photos, and then processes them with the MicroBlink SDK to get scanning results using the Direct processing API.

1. Initial integration steps

The same as in "Getting Started" guide.

2. Referencing header file

The same as in "Getting Started" guide.

3. Initializing the scanning library

To initiate the scanning process, first decide where in your app you want to add scanning functionality. Usually, users of the scanning library have a button which, when tapped, starts the scanning process. Initialization code is then placed in the touch handler for that button. Here we're listing the initialization code as it looks in a touch handler method.

@IBAction func takePhoto(sender: AnyObject) {
    print("Take photo!")

    /** Instantiate the scanning coordinator */
    var error: NSError?
    let coordinator: PPCoordinator? = self.coordinatorWithError(&error)

    /** If scanning isn't supported, present an error */
    if coordinator == nil {
        let messageString: String = error?.localizedDescription ?? ""
        UIAlertView(title: "Warning", message: messageString, delegate: nil, cancelButtonTitle: "Ok").show()
        return
    }

    let cameraUI : UIImagePickerController = UIImagePickerController()

    // Use rear camera
    cameraUI.sourceType = UIImagePickerControllerSourceType.Camera
    cameraUI.cameraDevice = UIImagePickerControllerCameraDevice.Rear

    // Display a control that allows the user to choose only photos
    cameraUI.mediaTypes = [kUTTypeImage as String]

    // Hide the controls for moving & scaling pictures, or for trimming movies
    cameraUI.allowsEditing = false

    // Show the default camera control overlay over the camera preview
    cameraUI.showsCameraControls = true

    // set delegate
    cameraUI.delegate = self

    // Show view
    self.presentViewController(cameraUI, animated: true, completion: nil)
}
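
The takePhoto handler above calls a coordinatorWithError helper that is not shown on this page. A minimal sketch of what it might look like, assuming the PPSettings/PPCoordinator API described in the "Getting Started" guide (the license key and recognizer settings below are placeholders, not values from this page):

func coordinatorWithError(error: NSErrorPointer) -> PPCoordinator? {

    /** 0. Check if scanning is supported on this device */
    if PPCoordinator.isScanningUnsupported(error) {
        return nil
    }

    /** 1. Initialize the scanner settings object */
    let settings: PPSettings = PPSettings()

    /** 2. Set your license key (placeholder value) */
    settings.licenseSettings.licenseKey = "YOUR-LICENSE-KEY"

    /** 3. Add the recognizers you want to use, as shown in "Getting Started" */
    // settings.scanSettings.addRecognizerSettings(...)

    /** 4. Create the coordinator with the settings */
    return PPCoordinator(settings: settings)
}

Consult the "Getting Started" guide for the exact recognizer settings your use case needs; the helper's shape here only mirrors how it is called from takePhoto.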

// Note: kUTTypeImage requires `import MobileCoreServices`
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : AnyObject]) {
    let mediaType: String = info[UIImagePickerControllerMediaType] as! String

    // Handle a still image capture
    if mediaType == kUTTypeImage as String {
        let originalImage: UIImage = info[UIImagePickerControllerOriginalImage] as! UIImage

        // Process the image over the whole scanning region
        self.coordinator.processImage(originalImage, scanningRegion: CGRectMake(0.0, 0.0, 1.0, 1.0), delegate: self)
    }

    self.dismissViewControllerAnimated(true, completion: nil)
}

4. Registering for scanning events

The same as in "Getting Started" guide.
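
For orientation only, a rough sketch of where the results from processImage arrive. The protocol and method names below are assumptions based on the era's SDK; the exact delegate protocol is defined in the "Getting Started" guide:

extension ViewController: PPScanningDelegate {

    func scanningViewController(scanningViewController: UIViewController?,
                                didOutputResults results: [PPRecognizerResult]) {
        // With the Direct processing API there is no camera view controller,
        // so scanningViewController may be nil here.
        for result in results {
            print(result)
        }
    }
}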

Conclusion

Now you've seen how to implement the Direct processing API.

In essence, this API consists of two steps:

  1. Initialization of the scanner.

  2. A call to the processImage:scanningRegion:delegate: method for each UIImage you have.
