iOS Learning: iOS Native QR Code Scanning


In a recent project, we needed to implement QR code scanning for sign-in, mainly for meeting sign-in scenarios. To prevent cheating, only direct scanning is supported during development, and when a QR code is scanned successfully, the app automatically uploads the user's current location to the backend. How to automatically locate the user was covered in the previous article, iOS Learning: Automatic Positioning. This article briefly describes how to scan QR codes with iOS's native module.

QR code scanning is implemented by many applications. The best-known third-party open-source library is Google's ZXing, whose Objective-C port is ZXingObjC. iOS's native QR code scanning module was introduced in iOS 7 and is implemented mainly with the rear camera of the device.

To use the system camera to recognize QR codes, we need to import the system AVFoundation framework. When working with the system camera, we generally need five objects: a capture device, usually the rear camera (AVCaptureDevice); an input (AVCaptureDeviceInput); an output (AVCaptureMetadataOutput); a coordinating session (AVCaptureSession); and a preview layer (AVCaptureVideoPreviewLayer). In addition, for a better experience we add a pinch gesture, so the scan area can be zoomed manually while scanning to get better results.
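For reference, a minimal sketch of the imports the scanning controller needs (AVFoundation for the capture classes, UIKit for the view controller; helper macros such as ZYAppWidth, ZYAppHeight, and FONT used later are project-specific and not shown here):

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>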

@interface CJScanQRCodeViewController () <AVCaptureMetadataOutputObjectsDelegate>

@property (strong, nonatomic) AVCaptureDevice *device;                  // capture device, defaults to the rear camera
@property (strong, nonatomic) AVCaptureDeviceInput *input;              // input device
@property (strong, nonatomic) AVCaptureMetadataOutput *output;          // output device; specifies the output type and scan area
@property (strong, nonatomic) AVCaptureSession *session;                // hub of the AVFoundation capture classes; coordinates the input and output devices to obtain the data
@property (strong, nonatomic) AVCaptureVideoPreviewLayer *previewLayer; // CALayer subclass that displays the captured image
@property (strong, nonatomic) UIPinchGestureRecognizer *pinchGes;       // pinch-to-zoom gesture
@property (assign, nonatomic) CGFloat scanRegion_W;                     // width of the square scan area; varies by device model
@property (assign, nonatomic) CGFloat initScale;                        // zoom factor when a pinch gesture begins

@end

First, we configure the devices. As with automatic positioning, we configure the location information first, then the devices for QR code scanning, and finally the pinch gesture. Once everything is configured, we can start scanning directly. When a code is scanned and its information recognized, the delegate method - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection is called for post-processing, so we implement this delegate method and put the required logic in it.

- (void)viewDidLoad {
    [super viewDidLoad];
    // Page title
    self.title = @"Scan";
    // Configure the positioning information
    [self configLocation];
    // Configure the QR code scanning devices
    [self configBasicDevice];
    // Configure the pinch-to-zoom gesture
    [self configPinchGes];
    // Start scanning
    [self.session startRunning];
}
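Note that the camera is only available after the user grants permission. The code above starts the session directly; a hedged sketch of an authorization check that could wrap the startRunning call follows (the helper name startScanningIfAuthorized is mine, not from the original project, and on iOS 10+ the app's Info.plist must also contain an NSCameraUsageDescription entry):

// A sketch, not part of the original project: request camera access before starting.
- (void)startScanningIfAuthorized {
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo
                             completionHandler:^(BOOL granted) {
        dispatch_async(dispatch_get_main_queue(), ^{
            if (granted) {
                [self.session startRunning]; // camera available, begin scanning
            } else {
                // Prompt the user to enable camera access in Settings
            }
        });
    }];
}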

Generally, we first initialize the five required objects and then apply the corresponding settings to each; there is no special trick to the setup process. See the following code and comments.

- (void)configBasicDevice {
    // By default, the rear camera is used for scanning; AVMediaTypeVideo represents video
    self.device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Initialize the device input
    self.input = [[AVCaptureDeviceInput alloc] initWithDevice:self.device error:nil];

    // Initialize the device output, and set the delegate and callback queue.
    // When the device scans data, the delegate is notified on the specified queue.
    // The main queue is generally used, which is also the queue the callback method runs on.
    self.output = [[AVCaptureMetadataOutput alloc] init];
    [self.output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];

    // Initialize the session, which connects the device's input and output, and set the sampling quality to high
    self.session = [[AVCaptureSession alloc] init];
    [self.session setSessionPreset:AVCaptureSessionPresetHigh];

    // Add the input and output to the session to establish the connection
    if ([self.session canAddInput:self.input]) {
        [self.session addInput:self.input];
    }
    if ([self.session canAddOutput:self.output]) {
        [self.session addOutput:self.output];
    }

    // Specify the recognition type; here only AVMetadataObjectTypeQRCode is recognized.
    // The recognition types must be set after the output has been added to the session;
    // otherwise the output's available types are empty and the program will crash.
    [self.output setMetadataObjectTypes:@[AVMetadataObjectTypeQRCode]];

    // Restrict recognition to a square area in the center of the screen with width scanRegion_W.
    // The navigation bar height is taken into account, so the calculation is a bit involved.
    // The smaller the recognition area, the higher the recognition efficiency, so we do not use the whole screen.
    CGFloat navH = self.navigationController.navigationBar.bounds.size.height;
    CGFloat viewH = ZYAppHeight - navH;
    CGFloat scanViewH = self.scanRegion_W;
    [self.output setRectOfInterest:CGRectMake((ZYAppWidth - scanViewH) / (2 * ZYAppWidth),
                                              (viewH - scanViewH) / (2 * viewH),
                                              scanViewH / ZYAppWidth,
                                              scanViewH / viewH)];

    // Initialize the preview layer. The session drives the input to collect data,
    // and the layer renders the captured image. The preview layer fills the whole
    // screen so it is easy to move the QR code into the scan area configured above.
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    self.previewLayer.frame = CGRectMake(0, 0, ZYAppWidth, ZYAppHeight);
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    [self.view.layer addSublayer:self.previewLayer];

    // Lay out the scan frame and scan line to simulate the scanning process.
    // This overlay does not affect recognition; it is purely visual.
    TNWCameraScanView *clearView = [[TNWCameraScanView alloc] initWithFrame:self.view.frame navH:navH];
    [self.view addSubview:clearView];

    // Lay out the information label below the scan frame
    UILabel *label = [[UILabel alloc] initWithFrame:CGRectMake(0, (viewH + scanViewH) / 2 + 10.0f, ZYAppWidth, 20.0f)];
    label.text = @"Scan only for conference sign-in";
    label.font = FONT(15.0f);
    label.textColor = [UIColor whiteColor];
    label.textAlignment = NSTextAlignmentCenter;
    [self.view addSubview:label];
}
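As an aside, computing rectOfInterest by hand is easy to get wrong because it uses the metadata output's normalized coordinate space, not view coordinates. A hedged sketch of an alternative (reusing this project's ZYAppHeight and scanRegion_W values; note the conversion only returns correct results once the preview layer's connection is established, e.g. after startRunning):

// Alternative sketch: let the preview layer convert a view rect into the
// output's normalized coordinate space instead of computing it manually.
CGFloat navH = self.navigationController.navigationBar.bounds.size.height;
CGFloat viewH = ZYAppHeight - navH;
CGRect scanRect = CGRectMake((ZYAppWidth - self.scanRegion_W) / 2,
                             navH + (viewH - self.scanRegion_W) / 2,
                             self.scanRegion_W,
                             self.scanRegion_W);
self.output.rectOfInterest = [self.previewLayer metadataOutputRectOfInterestForRect:scanRect];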

Next, let's look at how to configure the pinch gesture. This is relatively simple: add a pinch gesture recognizer to the view, and in its action method modify the camera's zoom factor to achieve the zoom effect.

- (void)configPinchGes {
    self.pinchGes = [[UIPinchGestureRecognizer alloc] initWithTarget:self action:@selector(pinchDetected:)];
    [self.view addGestureRecognizer:self.pinchGes];
}

- (void)pinchDetected:(UIPinchGestureRecognizer *)recogniser {
    if (!_device) {
        return;
    }
    // Record the zoom factor when the gesture begins
    if (recogniser.state == UIGestureRecognizerStateBegan) {
        _initScale = _device.videoZoomFactor;
    }
    // The camera device must be locked before changing its parameters and unlocked when the change ends
    NSError *error = nil;
    [_device lockForConfiguration:&error]; // lock the camera device
    if (!error) {
        CGFloat zoomFactor; // zoom factor
        CGFloat scale = recogniser.scale;
        if (scale < 1.0f) {
            zoomFactor = self.initScale - pow(self.device.activeFormat.videoMaxZoomFactor, 1.0f - recogniser.scale);
        } else {
            zoomFactor = self.initScale + pow(self.device.activeFormat.videoMaxZoomFactor, (recogniser.scale - 1.0f) / 2.0f);
        }
        zoomFactor = MIN(15.0f, zoomFactor);
        zoomFactor = MAX(1.0f, zoomFactor);
        _device.videoZoomFactor = zoomFactor;
        [_device unlockForConfiguration];
    }
}
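The zoom formula above comes from the original project. A simpler, hedged alternative is to multiply the zoom factor recorded at the start of the gesture by the gesture's current scale and clamp it to the device's supported range; a sketch:

// Alternative sketch: proportional zoom, clamped to the supported range.
- (void)pinchDetected:(UIPinchGestureRecognizer *)recogniser {
    if (!_device) {
        return;
    }
    if (recogniser.state == UIGestureRecognizerStateBegan) {
        _initScale = _device.videoZoomFactor; // zoom factor when the pinch begins
    }
    NSError *error = nil;
    if ([_device lockForConfiguration:&error]) {
        CGFloat maxZoom = MIN(_device.activeFormat.videoMaxZoomFactor, 15.0f);
        CGFloat zoomFactor = MAX(1.0f, MIN(maxZoom, _initScale * recogniser.scale));
        _device.videoZoomFactor = zoomFactor;
        [_device unlockForConfiguration];
    }
}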

Finally, we implement the delegate callback to run the logic we need after a QR code is successfully recognized. With that, the QR code scanning function is complete.

#pragma mark - Delegate
// The rear camera has scanned QR code information
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    // Stop scanning
    [self.session stopRunning];
    if ([metadataObjects count] >= 1) {
        // Every object in the array is an AVMetadataMachineReadableCodeObject containing the decoded data
        AVMetadataMachineReadableCodeObject *qrObject = [metadataObjects lastObject];
        // The scanned content; process it as needed
        NSString *result = qrObject.stringValue;
        // Parse the data and implement the corresponding logic
        // Code omitted
    }
}
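What happens after parsing is project-specific. As one hedged example of post-processing (the alert and the resume-on-dismiss behavior are my assumptions, not part of the original project), the controller could show the decoded string and restart the session when the user dismisses it:

// Hypothetical post-processing: show the result, then resume scanning.
UIAlertController *alert = [UIAlertController alertControllerWithTitle:@"Scan Result"
                                                               message:result
                                                        preferredStyle:UIAlertControllerStyleAlert];
[alert addAction:[UIAlertAction actionWithTitle:@"OK"
                                          style:UIAlertActionStyleDefault
                                        handler:^(UIAlertAction *action) {
    [self.session startRunning]; // scan the next code
}]];
[self presentViewController:alert animated:YES completion:nil];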

 
