Configuring OpenCV and face detection on iOS

As a very curious person, I want to explore every unknown corner I come across, so I built a small personal face recognition demo. At the moment there are very few articles in Chinese about OpenCV on iOS; most copy from each other, and the copied fragments are usually incomplete. It has also been a while since they were written, so some things no longer work. Anyway, enough talk, let's get straight to it. While writing this I consulted many of those OpenCV articles; the code section draws on http://m.blog.csdn.net/blog/u013810454/27868973, which you can view, although its project download was broken for me. This post merges the useful parts into one complete walkthrough, from configuration to use.

First, let's configure OpenCV in the Xcode project.

1. Download the iOS build of OpenCV (opencv2.framework), then drag it directly into the project you created earlier.

(The original post illustrated the remaining Xcode configuration steps with screenshots, which have not survived.)

Now that the basic configuration is complete, it's time for the real code. Don't forget to rename ViewController.m to ViewController.mm so the file compiles as Objective-C++.

```objc
#import "ViewController.h"
#import <Foundation/Foundation.h>

int currentValue = 9;

@interface ViewController () <UIImagePickerControllerDelegate, UINavigationControllerDelegate>
{
    // Displays the selected picture
    UIImageView *_imageView;
    UIImage *image;
}
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.
    [self createButton];
    // Create a UIImagePickerController object
    UIImagePickerController *ctrl = [[UIImagePickerController alloc] init];
    // Set the source type
    ctrl.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
    // Set the delegate
    ctrl.delegate = self;
    // Show it
    [self presentViewController:ctrl animated:YES completion:nil];
    self.view.backgroundColor = [UIColor whiteColor];
    // Create a UIImageView to display the selected picture
    _imageView = [[UIImageView alloc] initWithFrame:CGRectMake(50, 100, 300, 400)];
    [self.view addSubview:_imageView];
}

#pragma mark - UIImagePickerController delegate

- (void)imagePickerController:(UIImagePickerController *)picker
didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // Fetch the selected picture
    image = info[UIImagePickerControllerOriginalImage];
    UIImageOrientation imageOrientation = image.imageOrientation;
    if (imageOrientation != UIImageOrientationUp) {
        // The photo library can display the original picture according to the camera
        // angle, but UIImage's raw pixels do not reflect it, so the image we get may
        // appear rotated 90 degrees. Redraw it so the pixel data matches "up".
        UIGraphicsBeginImageContext(image.size);
        [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
        image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        // Done adjusting the picture's angle
    }
    // Show the picture
    _imageView.image = image;
    [picker dismissViewControllerAnimated:YES completion:nil];
}

- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [picker dismissViewControllerAnimated:YES completion:nil];
}

// Convert a UIImage to OpenCV's IplImage format
- (IplImage *)createIplImageFromUIImage:(UIImage *)image {
    CGImageRef imageRef = image.CGImage;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    IplImage *iplImage = cvCreateImage(cvSize(image.size.width, image.size.height), IPL_DEPTH_8U, 4);
    CGContextRef contextRef = CGBitmapContextCreate(iplImage->imageData,
                                                    iplImage->width, iplImage->height,
                                                    iplImage->depth, iplImage->widthStep,
                                                    colorSpace,
                                                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, image.size.width, image.size.height), imageRef);
    CGContextRelease(contextRef);
    CGColorSpaceRelease(colorSpace);
    IplImage *ret = cvCreateImage(cvGetSize(iplImage), IPL_DEPTH_8U, 3);
    cvCvtColor(iplImage, ret, CV_RGBA2BGR);
    cvReleaseImage(&iplImage);
    return ret;
}

- (void)opencvFaceDetect {
    UIImage *img = [image copy];
    if (img) {
        cvSetErrMode(CV_ErrModeParent);
        IplImage *image = [self createIplImageFromUIImage:img];
        // Convert to grayscale first
        IplImage *grayImg = cvCreateImage(cvGetSize(image), IPL_DEPTH_8U, 1);
        cvCvtColor(image, grayImg, CV_BGR2GRAY);
        // Shrink the input image by a factor of 4 to speed up processing
        int scale = 4;
        IplImage *small_image = cvCreateImage(cvSize(image->width / scale, image->height / scale),
                                              IPL_DEPTH_8U, 1);
        cvResize(grayImg, small_image);
        // Load the classifier
        NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt2"
                                                         ofType:@"xml"];
        CvHaarClassifierCascade *cascade =
            (CvHaarClassifierCascade *)cvLoad([path cStringUsingEncoding:NSASCIIStringEncoding],
                                              NULL, NULL, NULL);
        CvMemStorage *storage = cvCreateMemStorage(0);
        cvClearMemStorage(storage);
        // Key part: run cvHaarDetectObjects to get a sequence of face rectangles
        CvSeq *faces = cvHaarDetectObjects(small_image, cascade, storage, 1.1, currentValue,
                                           CV_HAAR_DO_CANNY_PRUNING, cvSize(0, 0), cvSize(0, 0));
        NSLog(@"faces: %d", faces->total);
        // Create a canvas on which to mark the detected faces
        CGImageRef imageRef = img.CGImage;
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef contextRef = CGBitmapContextCreate(NULL, img.size.width, img.size.height, 8,
                                                        img.size.width * 4, colorSpace,
                                                        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrderDefault);
        CGContextDrawImage(contextRef, CGRectMake(0, 0, img.size.width, img.size.height), imageRef);
        CGContextSetLineWidth(contextRef, 4);
        CGContextSetRGBStrokeColor(contextRef, 1.0, 0.0, 0.0, 1);
        // Mark each face, scaling the rects back up to the original image size
        for (int i = 0; i < faces->total; i++) {
            // Calculate the rect of each face
            CvRect cvrect = *(CvRect *)cvGetSeqElem(faces, i);
            CGRect face_rect = CGContextConvertRectToDeviceSpace(contextRef,
                                   CGRectMake(cvrect.x * scale, cvrect.y * scale,
                                              cvrect.width * scale, cvrect.height * scale));
            CGContextStrokeRect(contextRef, face_rect);
        }
        _imageView.image = [UIImage imageWithCGImage:CGBitmapContextCreateImage(contextRef)];
    }
}

// Detection is a bit time-consuming, so run it on a background thread
- (void)btn {
    [NSThread detachNewThreadSelector:@selector(opencvFaceDetect)
                             toTarget:self
                           withObject:nil];
}

- (void)createButton {
    UIButton *btn = [[UIButton alloc] init];
    btn.backgroundColor = [UIColor redColor];
    btn.frame = CGRectMake(0, 100, 30, 30);
    [btn addTarget:self action:@selector(btn) forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:btn];
}

@end
```

OK, now you can detect faces. Isn't it fun? Give it a try.

Copyright notice: this article is the blogger's original work; please do not reproduce it without the blogger's permission.
