Android multimedia and camera details 10

Detect available features

// get Camera parameters
Camera.Parameters params = mCamera.getParameters();

List<String> focusModes = params.getSupportedFocusModes();
if (focusModes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
    // Autofocus mode is supported
}

You can use this technique for most camera features. The Camera.Parameters object provides a getSupported...(), is...Supported(), or getMax...() method for determining whether (and to what extent) a feature is supported.
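For example, here is a minimal sketch of the same pattern applied to zoom (assuming mCamera is an open Camera instance, as in the snippet above):

// Check whether zoom is supported and how far it can go
Camera.Parameters params = mCamera.getParameters();
if (params.isZoomSupported()) {
    int maxZoom = params.getMaxZoom(); // largest value accepted by setZoom()
    // enable the zoom control, up to maxZoom
} else {
    // hide or disable the zoom control on this device
}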

If your application requires specific camera features, you can declare those requirements in your application's manifest file. When you declare that your application requires specific camera features, such as flash and autofocus, Google Play prevents your application from being installed on devices that do not support those features.

Use camera features

Before using camera features, you must understand that not all devices support all features. In addition, a feature may be supported at different levels or with different options. Therefore, when developing a camera application, decide which features you want to support and at what level. After making that decision, your code should check whether the device supports each required feature and handle the error gracefully when a feature is not available.
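One way to perform such a check before even opening the camera is to ask the PackageManager whether the hardware feature exists. The sketch below assumes it runs inside an Activity; the helper name deviceSupportsAutofocus is made up for illustration:

// Sketch: degrade gracefully when a camera feature is missing.
// deviceSupportsAutofocus is a hypothetical helper name.
private boolean deviceSupportsAutofocus() {
    return getPackageManager()
            .hasSystemFeature(PackageManager.FEATURE_CAMERA_AUTOFOCUS);
}

// Usage: only offer tap-to-focus when the hardware can autofocus
if (!deviceSupportsAutofocus()) {
    // disable or hide the focus control instead of failing later
}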

You can detect a feature by obtaining an instance of a camera's parameters object and checking the relevant methods. The code under "Detect available features" above shows how to get a Camera.Parameters object and check whether autofocus is supported.

Most camera features are managed through a Camera.Parameters object. To obtain this object, first get an instance of the Camera object and then call its getParameters() method. Modify the returned parameters object and then set it back on the Camera object, as shown below:

// Obtain the camera parameters
Camera.Parameters params = mCamera.getParameters();
// Set the focus mode
params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
// Apply the camera parameters
mCamera.setParameters(params);


This technique works for most camera features, and most parameters can be changed at any time after you obtain an instance of the Camera object. Changes to parameters are usually visible to the user immediately in the application's camera preview. On the software side, parameter changes may take several frames to actually take effect, because the camera hardware must process the new instructions before sending updated image data.
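As a small illustration of changing a parameter on the fly, here is a sketch that toggles the torch (continuous flash) while the preview is running. The method name setTorchEnabled is made up, and mCamera is assumed to be an open Camera instance with an active preview:

// Sketch: toggle the torch while the preview is running
private void setTorchEnabled(boolean enabled) {
    Camera.Parameters params = mCamera.getParameters();
    List<String> flashModes = params.getSupportedFlashModes();
    if (flashModes != null && flashModes.contains(Camera.Parameters.FLASH_MODE_TORCH)) {
        params.setFlashMode(enabled
                ? Camera.Parameters.FLASH_MODE_TORCH
                : Camera.Parameters.FLASH_MODE_OFF);
        mCamera.setParameters(params); // takes effect without restarting the preview
    }
}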

Important: some camera features cannot be changed at arbitrary times. In particular, to change the size or orientation of the camera preview, you must first stop the preview, change the preview size, and then restart the preview. Starting with Android 4.0 (API Level 14), the preview orientation can be changed without restarting the preview.
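A minimal sketch of that stop/change/restart sequence follows. The helper name changePreviewSize is hypothetical; mHolder and TAG are assumed to be the SurfaceHolder and log tag used in the preview class examples, and the width and height values should come from getSupportedPreviewSizes():

// Sketch: changing the preview size requires stopping and restarting the preview
private void changePreviewSize(int width, int height) {
    try {
        mCamera.stopPreview();                // preview must be stopped first
        Camera.Parameters params = mCamera.getParameters();
        params.setPreviewSize(width, height); // must be a supported preview size
        mCamera.setParameters(params);
        mCamera.setPreviewDisplay(mHolder);
        mCamera.startPreview();               // restart with the new size
    } catch (IOException e) {
        Log.d(TAG, "Error restarting preview: " + e.getMessage());
    }
}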

Other camera features require more code to manage, including:

  • Metering and focus areas

  • Face detection

  • Time lapse video



Metering and focus areas

In some shooting situations, automatic focusing and light metering may not produce the desired results. Starting with Android 4.0 (API Level 14), your camera application can provide additional controls that allow your app or its users to specify areas of the image to use for determining focus or exposure settings, and pass these values to the camera hardware for use when capturing an image or recording video.

Areas for metering and focus work very much like other camera features, in that you control them through the Camera.Parameters object. The following code demonstrates setting two light metering areas for an instance of Camera:

// Create an instance of Camera
mCamera = getCameraInstance();

// set Camera parameters
Camera.Parameters params = mCamera.getParameters();

if (params.getMaxNumMeteringAreas() > 0) { // check that metering areas are supported
    List<Camera.Area> meteringAreas = new ArrayList<Camera.Area>();

    Rect areaRect1 = new Rect(-100, -100, 100, 100);    // specify an area in the center of the image
    meteringAreas.add(new Camera.Area(areaRect1, 600)); // set the weight to 60%
    Rect areaRect2 = new Rect(800, -1000, 1000, -800);  // specify an area in the upper right of the image
    meteringAreas.add(new Camera.Area(areaRect2, 400)); // set the weight to 40%
    params.setMeteringAreas(meteringAreas);
}

mCamera.setParameters(params);


The Camera.Area object contains two data parameters: a Rect object that specifies an area within the camera's field of view, and a weight value that tells the camera how important this area is in the metering or focus calculation.

The Rect field in a Camera.Area object describes a rectangular shape mapped onto a 2000 x 2000 unit grid. The coordinates -1000, -1000 represent the top-left corner of the camera image, and the coordinates 1000, 1000 represent the bottom-right corner, as shown below:



Figure 1. The red lines illustrate the coordinate system for specifying a Camera.Area within a camera preview. The blue box shows the location and shape of a camera area with the coordinate values 333, 333, 667, 667.

The bounds of this coordinate system always correspond to the outer edge of the image visible in the camera preview, and they do not shrink or expand with the zoom level. Similarly, rotating the preview image with Camera.setDisplayOrientation() does not remap the coordinate system.
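Focus areas are set in the same way, just through setFocusAreas() instead of setMeteringAreas(). The sketch below assumes mCamera is an open Camera instance whose focus mode supports focus areas (for example FOCUS_MODE_AUTO), and it triggers an autofocus pass after the area is set:

// Sketch: focus on a region around the center of the frame
Camera.Parameters params = mCamera.getParameters();
if (params.getMaxNumFocusAreas() > 0) {                // check that focus areas are supported
    List<Camera.Area> focusAreas = new ArrayList<Camera.Area>();
    Rect centerRect = new Rect(-200, -200, 200, 200);  // region around the image center
    focusAreas.add(new Camera.Area(centerRect, 1000)); // single area with full weight
    params.setFocusAreas(focusAreas);
    mCamera.setParameters(params);

    mCamera.autoFocus(new Camera.AutoFocusCallback() {
        @Override
        public void onAutoFocus(boolean success, Camera camera) {
            // success reports whether focus locked on the requested area
        }
    });
}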


Face Detection

For images that include people, faces are usually the most important part of the picture and should be used for determining both focus and white balance when capturing an image. The Android 4.0 (API Level 14) framework provides APIs for identifying faces and calculating picture settings using face recognition technology.

Note: While the face detection feature is running, setWhiteBalance(String), setFocusAreas(List), and setMeteringAreas(List) have no effect.


Using the face detection feature generally requires the following steps:

  • Check that face detection is supported on the device

  • Create a face detection listener

  • Add the face detection listener to your camera object

  • Start face detection after the preview starts (and again after every restart of the preview)

The face detection feature is not supported on all devices. You can check that it is supported by calling getMaxNumDetectedFaces(); an example of this check is shown in the startFaceDetection() sample method below.

To be notified of and respond to face detection events, your camera application must set a listener for them. To do this, create a listener class that implements the Camera.FaceDetectionListener interface, as shown in the following code:

class MyFaceDetectionListener implements Camera.FaceDetectionListener {

    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length > 0){
            Log.d("FaceDetection", "face detected: " + faces.length +
                    " Face 1 Location X: " + faces[0].rect.centerX() +
                    " Y: " + faces[0].rect.centerY() );
        }
    }
}



After creating this class, you can set it on your application's Camera object, as shown in the following code:

mCamera.setFaceDetectionListener(new MyFaceDetectionListener());

Your application must start face detection each time it starts (or restarts) the camera preview. Create a method dedicated to starting face detection so that you can call it as needed, as shown in the following example:

public void startFaceDetection(){
    // Try starting Face Detection
    Camera.Parameters params = mCamera.getParameters();

    // start face detection only *after* preview has started
    if (params.getMaxNumDetectedFaces() > 0){
        // camera supports face detection, so can start it:
        mCamera.startFaceDetection();
    }
}

You must start face detection each time you start (or restart) the camera preview. If you use the preview class from the "Create a preview class" section, add your startFaceDetection() method to both the surfaceCreated() and surfaceChanged() methods of your preview class, as shown in the following code:

public void surfaceCreated(SurfaceHolder holder) {
    try {
        mCamera.setPreviewDisplay(holder);
        mCamera.startPreview();

        startFaceDetection(); // start face detection feature

    } catch (IOException e) {
        Log.d(TAG, "Error setting camera preview: " + e.getMessage());
    }
}

public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {

    if (mHolder.getSurface() == null){
        // preview surface does not exist
        Log.d(TAG, "mHolder.getSurface() == null");
        return;
    }

    try {
        mCamera.stopPreview();

    } catch (Exception e){
        // ignore: tried to stop a non-existent preview
        Log.d(TAG, "Error stopping camera preview: " + e.getMessage());
    }

    try {
        mCamera.setPreviewDisplay(mHolder);
        mCamera.startPreview();

        startFaceDetection(); // re-start face detection feature

    } catch (Exception e){
        Log.d(TAG, "Error starting camera preview: " + e.getMessage());
    }
}

Note: Remember to call this method after calling startPreview(). Do not attempt to start face detection in the onCreate() method of your camera app's activity, because the preview has not started at that point.





