In the last article we talked about image tagging using the Firebase ML Kit APIs. In this post we are going to learn how to scan barcodes using Firebase ML Kit.
The first step is to install the CocoaPods dependencies that provide the barcode scanning functionality. The Podfile is shown below:
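A minimal Podfile might look like the following. The pod names are the Firebase ML Kit Vision subspecs used for on-device barcode detection at the time of writing; the target name `BarcodeScanner` is a placeholder for your own app target:

```ruby
# Podfile — assumes an app target named BarcodeScanner (replace with yours)
platform :ios, '11.0'

target 'BarcodeScanner' do
  use_frameworks!

  pod 'Firebase/Core'
  pod 'Firebase/MLVision'
  pod 'Firebase/MLVisionBarcodeModel'
end
```

Run `pod install` from the project directory to fetch the dependencies.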
After installing the pods, open the Xcode project workspace (the `.xcworkspace` file, not the `.xcodeproj`).
Previewing Live Video:
To make our app more fluid and seamless we are going to scan the barcode from a live video feed instead of taking still pictures and scanning them. The live video is captured using the AVCaptureSession class and then displayed using AVCaptureVideoPreviewLayer.
The startLiveVideo function initiates the live video.
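A sketch of what `startLiveVideo` might look like is shown below. The class name `ScannerViewController` and the queue choice are assumptions; the important pieces are wiring the camera input and video output into the session, attaching the preview layer, and starting the session:

```swift
import UIKit
import AVFoundation

class ScannerViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Session that coordinates the flow of data from the camera
    let session = AVCaptureSession()

    func startLiveVideo() {
        session.sessionPreset = .photo

        // Use the default back camera as the video input
        guard let captureDevice = AVCaptureDevice.default(for: .video),
              let deviceInput = try? AVCaptureDeviceInput(device: captureDevice) else { return }

        // Output that delivers frames to captureOutput(_:didOutput:from:)
        let deviceOutput = AVCaptureVideoDataOutput()
        deviceOutput.setSampleBufferDelegate(self, queue: DispatchQueue.global(qos: .userInteractive))

        session.addInput(deviceInput)
        session.addOutput(deviceOutput)

        // Preview layer shows the live camera feed on screen
        let imageLayer = AVCaptureVideoPreviewLayer(session: session)
        imageLayer.frame = view.bounds
        imageLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(imageLayer)

        session.startRunning()
    }
}
```

Remember to add an `NSCameraUsageDescription` entry to Info.plist, or the app will crash when requesting camera access.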
The imageLayer.videoGravity property controls how the video fills the preview layer. Once the AVCaptureSession is running, it delivers the video as a stream of sample buffers, which we receive in the captureOutput delegate function.
Inside the captureOutput function we can run our barcode detection using Firebase ML Kit APIs. The updated captureOutput function is shown below:
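A sketch of the updated function is shown below, assuming the `ScannerViewController` setup from the previous step. Each camera frame is wrapped in a `VisionImage` and passed to ML Kit's barcode detector; the fixed `.rightTop` orientation assumes a portrait app using the back camera:

```swift
import AVFoundation
import Firebase

extension ScannerViewController {

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {

        // Wrap the camera frame in a VisionImage, noting the device orientation
        // (.rightTop assumes portrait orientation with the back camera)
        let metadata = VisionImageMetadata()
        metadata.orientation = .rightTop
        let visionImage = VisionImage(buffer: sampleBuffer)
        visionImage.metadata = metadata

        // Run the ML Kit barcode detector on the frame
        let barcodeDetector = Vision.vision().barcodeDetector()
        barcodeDetector.detect(in: visionImage) { barcodes, error in
            guard error == nil, let barcodes = barcodes else { return }

            for barcode in barcodes {
                // rawValue holds the decoded payload of the barcode
                print(barcode.rawValue ?? "")
            }
        }
    }
}
```

In a real app you would debounce or throttle detection rather than running it on every frame, and update the UI on the main queue.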
The code is very similar to detecting image tags in the last article. The running demo of the application is shown below:
Personally, I find Google's Firebase ML Kit much easier to use than the iOS Vision API. Hopefully, features like object tracking will be added in the future.
If you are interested in learning more about integrating Firebase with your iOS apps then check out my course “Mastering Firebase for iOS Using Swift Language” below. Thanks for your support!