CIDetectorTracking: correct usage


Apple recently added a new constant to the CIDetector class called CIDetectorTracking, which appears to allow faces to be tracked between frames of a video. It would be very helpful to me if I could figure out how it works...

I've tried adding this key to the options dictionary using every object I can think of that is remotely relevant, including my AVCaptureStillImageOutput instance, the UIImage I'm working on, YES, 1, and so on.

NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, myAVCaptureStillImageOutput, CIDetectorTracking, nil];

But no matter what argument I try to pass, it either crashes (obviously I'm guessing here) or the debugger outputs:

Unknown CIDetectorTracking specified. Ignoring.

Normally I wouldn't be guessing like this, but resources on this topic are practically nonexistent. Apple's class reference states:

A key used to enable or disable face tracking for the detector. Use this option when you want to track faces across frames in a video.

Aside from availability being iOS 6+ and OS X 10.8+, that is all there is.

The comments inside CIDetector.h:

/* The key in the options dictionary used to specify that feature tracking should be used. */

If that wasn't bad enough, searching Google yields 7 results (8 once it finds this post), all of which are either Apple class references, API diffs, a post asking how to achieve this in iOS 5, or third-party copies of the former.

That being said, any hints or tips for the correct usage of CIDetectorTracking would be greatly appreciated!

Solution 1:

You're right, this key is not documented very well. Besides the API docs, it is also not explained in:

    • the CIDetector.h header file
    • the Core Image Programming Guide
    • the WWDC 2012 session "520 - What's New in Camera Capture"
    • the sample code to this session (StacheCam 2)

I tried different values for CIDetectorTracking, and the only accepted values seem to be @(YES) and @(NO). With other values, this message is printed in the console:

Unknown CIDetectorTracking specified. Ignoring.

When you set the value to @(YES), you should get tracking IDs with the detected face features.
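
For illustration, here is a minimal sketch of how that might look. The input `ciImage` is a placeholder for a video frame you already have (e.g. created from a CVPixelBufferRef), and reusing one detector instance across frames is my assumption, so that IDs stay consistent:

#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

// Sketch: a face detector with tracking enabled.
// Reuse the same detector instance for every frame.
CIDetector *detector =
    [CIDetector detectorOfType:CIDetectorTypeFace
                       context:nil
                       options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow,
                                  CIDetectorTracking : @(YES) }];

// `ciImage` is assumed to be the current video frame as a CIImage.
NSArray *features = [detector featuresInImage:ciImage];
for (CIFaceFeature *face in features) {
    if (face.hasTrackingID) {
        // The same physical face keeps the same trackingID across frames.
        NSLog(@"face %d at %@", face.trackingID, NSStringFromCGRect(face.bounds));
    }
}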

However, when you want to detect faces in content captured from the camera, you should prefer the face detection API in AVFoundation. It has face tracking built in, face detection happens in the background on the GPU, and it will be much faster than CoreImage face detection. It requires iOS 6 and at least an iPhone 4S or iPad 2.

Faces are sent as metadata objects (AVMetadataFaceObject) to the AVCaptureMetadataOutputObjectsDelegate.

You can use this code (taken from StacheCam 2 and the slides of the WWDC session mentioned above) to set up face detection and get the face metadata objects:

- (void)setupAVFoundationFaceDetection
{
    self.metadataOutput = [AVCaptureMetadataOutput new];
    if ( ! [self.session canAddOutput:self.metadataOutput] ) {
        return;
    }

    // Metadata processing will be fast, and mostly updating UI, which should be done on the main thread.
    // So just use the main dispatch queue instead of creating a separate one
    // (compare this to the expensive CoreImage face detection, which is done on a separate queue)
    [self.metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [self.session addOutput:self.metadataOutput];

    if ( ! [self.metadataOutput.availableMetadataObjectTypes containsObject:AVMetadataObjectTypeFace] ) {
        // face detection isn't supported (via AV Foundation), fall back to CoreImage
        return;
    }

    // We only want faces; if we don't set this we would detect everything available.
    // (Some objects may be expensive to detect, so best practice is to select only what you need.)
    self.metadataOutput.metadataObjectTypes = @[ AVMetadataObjectTypeFace ];
}
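
For context, the snippet assumes the enclosing class adopts the delegate protocol and declares roughly the following properties (this is my assumption; it is not shown in the original slides, and the class name is hypothetical):

@interface MyCameraController : NSObject <AVCaptureMetadataOutputObjectsDelegate>
// Assumed properties referenced by the snippet above.
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureMetadataOutput *metadataOutput;
@end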

// AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)c
{
    for ( AVMetadataObject *object in metadataObjects ) {
        if ( [[object type] isEqual:AVMetadataObjectTypeFace] ) {
            AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
            CMTime timestamp = [face time];
            CGRect faceRectangle = [face bounds];
            NSInteger faceID = [face faceID]; // use this ID for tracking
            CGFloat rollAngle = [face rollAngle];
            CGFloat yawAngle = [face yawAngle];
            // do interesting things with this face
        }
    }
}

If you want to display the face frames in the preview layer, you need to get the transformed face object:

AVMetadataFaceObject *adjusted = (AVMetadataFaceObject *)[self.previewLayer transformedMetadataObjectForMetadataObject:face];
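
As a usage example, you could then move a highlight layer over the face; `self.faceBoxLayer` is a hypothetical CALayer that you have already added as a sublayer of the preview layer:

// Sketch: position a hypothetical highlight layer (self.faceBoxLayer)
// using the bounds transformed into the preview layer's coordinate space.
[CATransaction begin];
[CATransaction setDisableActions:YES]; // no implicit animation while tracking
self.faceBoxLayer.frame = adjusted.bounds;
[CATransaction commit];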

