Prior to iOS 7, developers typically used third-party libraries (most commonly the ZBar SDK) for code scanning. Since iOS 7, the system's AVMetadataObject class family has provided an interface for parsing QR codes. In testing, scanning and processing with the native API is very efficient, far faster than the third-party libraries.
I. Usage example
The official interface is very simple; the code is as follows:
@interface ViewController () <AVCaptureMetadataOutputObjectsDelegate> // delegate for processing the captured information
{
    AVCaptureSession *session; // bridge between input and output
}
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    // Get the camera device
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    // Create the input stream
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
    // Create the output stream
    AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
    // Set the delegate and deliver callbacks on the main thread
    [output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    // Initialize the session that links input and output
    session = [[AVCaptureSession alloc] init];
    // High-quality capture rate
    [session setSessionPreset:AVCaptureSessionPresetHigh];
    [session addInput:input];
    [session addOutput:output];
    // Set the symbologies to scan (supports both barcodes and QR codes)
    output.metadataObjectTypes = @[AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeCode128Code];
    // Add a preview layer so the camera feed is visible on screen
    AVCaptureVideoPreviewLayer *layer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    layer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    layer.frame = self.view.layer.bounds;
    [self.view.layer insertSublayer:layer atIndex:0];
    // Start capturing
    [session startRunning];
}
Once the camera feed is visible on our UI, we can handle a scanned QR code by implementing the delegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    if (metadataObjects.count > 0) {
        //[session stopRunning];
        AVMetadataMachineReadableCodeObject *metadataObject = [metadataObjects objectAtIndex:0];
        // Output the scanned string
        NSLog(@"%@", metadataObject.stringValue);
    }
}
II. Some optimizations
Testing the code above shows that the system's decoding is already quite efficient, and the API iOS provides is genuinely powerful. Nevertheless, there is a further optimization that improves efficiency:
First, the AVCaptureMetadataOutput class has the following property (available since iOS 7.0):
@property(nonatomic) CGRect rectOfInterest;
This property roughly tells the system which region it needs to pay attention to. Most scanning apps draw a box on the UI reminding you to place the barcode inside that area; this is exactly where the property comes in. It sets a range so that only image information captured within that range is processed. As you would expect, using this property greatly improves the code's efficiency. A few things to note:
1. This CGRect parameter does not behave like an ordinary rect: its four values each range from 0 to 1 and represent proportions.
2. Testing shows that x in this parameter actually corresponds to the vertical distance from the top-left corner, and y to the horizontal distance from the top-left corner.
3. The width and height settings are swapped in the same way.
4. For example, if we want the processed scan area to be the lower half of the screen, we set:
output.rectOfInterest = CGRectMake(0.5, 0, 0.5, 1);
Why exactly Apple designed it this way, or whether I am simply using this parameter incorrectly, I do not know; I would welcome guidance from anyone who does.