"Turn" Android Camera (v) understanding of using the Camera feature area

Source: Internet
Author: User

http://blog.csdn.net/think_soft/article/details/7998478

Using camera features

Most camera features are enabled and controlled through a Camera.Parameters object. You obtain this object by calling the getParameters() method on your Camera instance, modify the returned parameters, and then set the modified object back on the Camera, as the following code illustrates:

// Get camera parameters
Camera.Parameters params = mCamera.getParameters();
// Set the focus mode
params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
// Set camera parameters
mCamera.setParameters(params);

This technique works for almost all camera features, and most parameters can be changed at any time after you have obtained the Camera instance. Changes to parameters are typically visible to the user immediately in the application's camera preview. On the software side, parameter changes may take several frames to actually take effect, because the camera hardware needs time to process the new instructions before it delivers updated image data.

Important: Some camera features cannot be changed at arbitrary times. In particular, changing the size or orientation of the camera preview requires that you first stop the preview, change the preview size, and then restart the preview. Starting with Android 4.0 (API level 14), changing the orientation of the preview no longer requires restarting it.

Camera features that require more code to implement include:

1. Metering and focus areas;

2. Face detection;

3. Time-lapse video.

Metering and focus areas

In some camera scenarios, automatic focusing and light metering may not produce the desired results. Starting with Android 4.0 (API level 14), your camera application can provide additional controls that allow the app or its users to specify areas of the image to use when determining focus or light level settings, and pass these values to the camera hardware for use in capturing pictures or video.

Areas for metering and focus work much like other camera features, in that you control them through methods on the Camera.Parameters object. The following code demonstrates setting two light metering areas on a Camera instance:

// Create an instance of Camera
mCamera = getCameraInstance();

// Set camera parameters
Camera.Parameters params = mCamera.getParameters();

if (params.getMaxNumMeteringAreas() > 0) { // check that metering areas are supported
    List<Camera.Area> meteringAreas = new ArrayList<Camera.Area>();

    Rect areaRect1 = new Rect(-100, -100, 100, 100);    // specify an area in center of image
    meteringAreas.add(new Camera.Area(areaRect1, 600)); // set weight to 60%
    Rect areaRect2 = new Rect(800, -1000, 1000, -800);  // specify an area in upper right of image
    meteringAreas.add(new Camera.Area(areaRect2, 400)); // set weight to 40%
    params.setMeteringAreas(meteringAreas);
}

mCamera.setParameters(params);

The Camera.Area object contains two data members: a Rect object that specifies a rectangular area within the camera preview, and a weight value that tells the camera how much importance this area should be given in the light metering or focus calculation.

The Rect field in a Camera.Area object describes a rectangle mapped onto a 2000 x 2000 grid. The coordinates (-1000, -1000) represent the upper-left corner of the camera image, and (1000, 1000) represents the lower-right corner, as shown below:

Figure 1. The red lines illustrate the coordinate system for a Camera.Area within a camera preview. The blue box shows the location and shape of a camera area whose Rect value is (333, 333, 667, 667).

The bounds of this coordinate system always correspond to the outer edge of the image visible in the camera preview, and they do not shrink or expand with the zoom level. Similarly, rotating the preview image with Camera.setDisplayOrientation() does not remap the coordinate system.
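To make this mapping concrete, the following sketch shows one common use of the coordinate system: converting a touch point on the preview view (in pixels) to area coordinates for tap-to-focus. The class and method names are hypothetical helpers for illustration, not part of the Android API, and they assume the preview is not rotated relative to the sensor.

```java
// Hypothetical helper: maps a touch point in preview-view pixels to the
// 2000x2000 camera-area coordinate system described above, where
// (-1000, -1000) is the upper-left and (1000, 1000) the lower-right.
public class CameraAreaMapper {

    /** Clamp a value into [min, max]. */
    static int clamp(int value, int min, int max) {
        return Math.max(min, Math.min(max, value));
    }

    /**
     * Convert an (x, y) touch position on a previewWidth x previewHeight
     * view into camera-area coordinates.
     */
    static int[] toCameraAreaCoords(float x, float y,
                                    int previewWidth, int previewHeight) {
        int areaX = clamp((int) (x / previewWidth * 2000) - 1000, -1000, 1000);
        int areaY = clamp((int) (y / previewHeight * 2000) - 1000, -1000, 1000);
        return new int[] { areaX, areaY };
    }

    public static void main(String[] args) {
        // A tap in the exact center of a 1080x1920 preview maps to (0, 0).
        int[] center = toCameraAreaCoords(540f, 960f, 1080, 1920);
        System.out.println(center[0] + "," + center[1]);
    }
}
```

A Rect built around the resulting point (for example, 100 units in each direction, clamped to the grid) can then be passed to setFocusAreas() or setMeteringAreas() as shown above.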

Face detection

For pictures that include people, faces are usually the most important part of the image, and they should be used to determine both focus and white balance when capturing an image. The Android 4.0 (API level 14) framework provides APIs for detecting faces and calculating picture settings using face detection technology.

Note: While the face detection feature is running, the setWhiteBalance(String), setFocusAreas(List) and setMeteringAreas(List) methods have no effect.

Using face detection in your camera application typically requires the following steps:

1. Check that face detection is supported on the device;

2. Create a face detection listener;

3. Add the face detection listener to your Camera object;

4. Start face detection after the preview has started (and again after every restart of the preview).

Face detection is not supported on all devices. You can check whether it is supported by calling the getMaxNumDetectedFaces() method. An example of this check is shown in the startFaceDetection() method below.

In order to be notified of and respond to face detection events, your camera application must set a listener for them. To do this, create a listener class that implements the Camera.FaceDetectionListener interface, as follows:

class MyFaceDetectionListener implements Camera.FaceDetectionListener {

    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length > 0) {
            Log.d("FaceDetection", "face detected: " + faces.length +
                    " Face 1 Location X: " + faces[0].rect.centerX() +
                    " Y: " + faces[0].rect.centerY());
        }
    }
}

After creating this class, set an instance of it on your application's Camera object:

mCamera.setFaceDetectionListener(new MyFaceDetectionListener());

Your application must start face detection each time the camera preview is started (or restarted). Create a method for starting face detection so you can call it as needed, as shown in the following code:

public void startFaceDetection() {
    // Try starting face detection
    Camera.Parameters params = mCamera.getParameters();

    // Start face detection only *after* preview has started
    if (params.getMaxNumDetectedFaces() > 0) {
        // Camera supports face detection, so can start it:
        mCamera.startFaceDetection();
    }
}

You must start face detection each time you start (or restart) the camera preview. If you use the preview class shown in "Creating a preview class" earlier, add the startFaceDetection() method to both the surfaceCreated() and surfaceChanged() methods of your preview class, as shown in the following code:

public void surfaceCreated(SurfaceHolder holder) {
    try {
        mCamera.setPreviewDisplay(holder);
        mCamera.startPreview();

        startFaceDetection(); // start face detection feature
    } catch (IOException e) {
        Log.d(TAG, "Error setting camera preview: " + e.getMessage());
    }
}

public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
    if (mHolder.getSurface() == null) {
        // preview surface does not exist
        Log.d(TAG, "mHolder.getSurface() == null");
        return;
    }

    try {
        mCamera.stopPreview();
    } catch (Exception e) {
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error stopping camera preview: " + e.getMessage());
    }

    try {
        mCamera.setPreviewDisplay(mHolder);
        mCamera.startPreview();

        startFaceDetection(); // re-start face detection feature
    } catch (Exception e) {
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error starting camera preview: " + e.getMessage());
    }
}

Note: Remember to call this method after calling startPreview(). Do not attempt to start face detection in the onCreate() method of your camera app's main activity, as the preview is not yet available at that point in your application's execution.

Time-lapse video

Time-lapse video allows users to create video clips that combine pictures taken seconds or minutes apart. This feature uses a MediaRecorder object to record the image sequence for the time lapse.

To record a time-lapse video with a MediaRecorder object, you configure the recorder as if you were recording a normal video, but set the captured frames per second to a low number and use a time-lapse quality setting, as shown in the following code:

// Step 3: Set a CamcorderProfile (requires API level 8 or higher)
mMediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_TIME_LAPSE_HIGH));
...
// Step 5.5: Set the video capture rate to a low number
mMediaRecorder.setCaptureRate(0.1); // capture a frame every 10 seconds

These settings are only part of the configuration required for the MediaRecorder object. For a full configuration code example, see "Configuring MediaRecorder" earlier in this article. Once configuration is complete, you start the recording as if you were recording a normal video clip. For more information about configuring and running the MediaRecorder object, see "Capturing video" earlier in this article.
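The arithmetic behind the capture rate setting is worth making explicit: the frames captured at the low rate are played back at the recording profile's normal frame rate, which is what compresses time. The following sketch in plain Java (the class name and example numbers are assumptions for illustration, not Android API code) shows how long a recorded event becomes in the final clip:

```java
// Worked example of time-lapse arithmetic (not Android API code).
public class TimeLapseMath {

    /**
     * Returns the playback duration in seconds of a time-lapse clip,
     * given the real event duration, the capture rate in frames per
     * second (e.g. 0.1 = one frame every 10 seconds), and the playback
     * frame rate of the recording profile.
     */
    static double playbackSeconds(double eventSeconds,
                                  double captureRateFps,
                                  double playbackFps) {
        double capturedFrames = eventSeconds * captureRateFps;
        return capturedFrames / playbackFps;
    }

    public static void main(String[] args) {
        // One hour captured at 0.1 fps yields 360 frames;
        // played back at 30 fps, that is a 12-second clip.
        System.out.println(playbackSeconds(3600, 0.1, 30));
    }
}
```

In other words, a capture rate of 0.1 with a 30 fps profile speeds the event up by a factor of 300, so a one-hour event becomes a 12-second clip.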

"Turn" Android Camera (v) understanding of using the Camera feature area

Contact Us

The content source of this page is from Internet, which doesn't represent Alibaba Cloud's opinion; products and services mentioned on that page don't have any relationship with Alibaba Cloud. If the content of the page makes you feel confusing, please write us an email, we will handle the problem within 5 days after receiving your email.

If you find any instances of plagiarism from the community, please send an email to: info-contact@alibabacloud.com and provide relevant evidence. A staff member will contact you within 5 working days.

A Free Trial That Lets You Build Big!

Start building with 50+ products and up to 12 months usage for Elastic Compute Service

  • Sales Support

    1 on 1 presale consultation

  • After-Sales Support

    24/7 Technical Support 6 Free Tickets per Quarter Faster Response

  • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.