Interpreting the design ideas of the native Camera app in Android 4.0

1. Set the camera direction

2. The camera open thread and the preview thread

3. Set Parameters

4. Camera peripheral buttons

5. Auto Focus and touch focus

6. Take a photo

7. Face Detection

8. Location Management

9. Rotation Management

10. Zoom

11. Video

The architecture of Camera is a typical C/S (client/server) architecture. The client side is the application process that carries the user's requirements and behavior; the server side is the camera service daemon that drives the device. The two sides communicate through Binder IPC: the client asks the server to perform device functions, and the server feeds results back to the client, and ultimately to the user, through callback functions and the message mechanism.

PS: you can see this by listing the running processes. For example, com.android.camera is the client-side camera process, and /system/bin/mediaserver is the server-side daemon, which is started when the system boots.

1. Set the camera direction

The Camera object (Camera.java) in the framework layer has an inner class CameraInfo, which contains two public member variables, facing and orientation, which are the front/rear facing and the mounting orientation discussed below. When the program is initialized for the first time, initializeFirstTime() calls getCameraInfo() to obtain this information. The client sends the getCameraInfo() request to the server; the server calls into the abstraction layer to get the data, and the abstraction layer in turn reads the facing and orientation values reported by the underlying camera sensor driver. These two values are hard-coded in the driver and cannot be changed by the user; after the camera program starts, it stores them as global variables.
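
For reference, a minimal sketch of querying these values through the public android.hardware.Camera API (an illustration only, not the AOSP Camera.java source; the helper class name is made up):

import android.hardware.Camera;

class CameraInfoProbe {
    // Query facing and orientation for every camera reported by the camera service.
    static void probe() {
        int cameraCount = Camera.getNumberOfCameras();
        for (int id = 0; id < cameraCount; id++) {
            Camera.CameraInfo info = new Camera.CameraInfo();
            Camera.getCameraInfo(id, info);            // filled in by the camera service / HAL
            boolean isFront = (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT);
            int sensorOrientation = info.orientation;  // 0, 90, 180 or 270, fixed in the driver
        }
    }
}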

1.1 Front and rear

The rear (back) camera sits on the opposite side of the screen, faces away from the user, and usually has a higher resolution. The front camera faces the user, on the same side as the screen, and usually has a lower resolution.

1.2 Orientation

The camera sensor module has a long side and a short side; for example, with a 4:3 capture ratio, 4 corresponds to the long side and 3 to the short side. In theory the long side of the module is not mounted perpendicular to the long side of the screen; the orientation value records how the sensor is actually mounted so that the captured image is displayed correctly on the screen.

2. The camera open thread and the preview thread

In onCreate(), CameraOpenThread and CameraPreviewThread are started one after the other.

Why does opening the camera need its own thread? CameraOpenThread does what its name says: it opens the camera, and under the C/S model this connect to the server goes through Binder IPC, which is relatively expensive, so it is moved off the main thread. The preview thread must run only after the open thread has finished, and it then runs through the whole life of the process. The complete preview path is fairly complex and is implemented in the abstraction layer and the underlying driver; in short, the preview path starts two more threads, one that fetches frames from the sensor and one that hands the frames to SurfaceFlinger for display in the framebuffer.
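
A simplified sketch of the two-thread idea (an illustration only, not the real CameraOpenThread/CameraPreviewThread code; the helper class and field names are made up):

import android.hardware.Camera;
import android.view.SurfaceHolder;
import java.io.IOException;

class CameraStarter {
    private Camera mCameraDevice;

    // Open the camera on a worker thread (a binder call into the camera service),
    // then start the preview once the open has completed.
    void start(SurfaceHolder holder) {
        Thread openThread = new Thread(new Runnable() {
            public void run() {
                mCameraDevice = Camera.open();    // the expensive C/S connect over binder
            }
        }, "CameraOpenThread");
        openThread.start();
        try {
            openThread.join();                    // the preview must wait for the open to finish
            mCameraDevice.setPreviewDisplay(holder);
            mCameraDevice.startPreview();         // frames now flow from the sensor to the display path
        } catch (InterruptedException e) {
            // opening was interrupted
        } catch (IOException e) {
            // setting the preview surface failed
        }
    }
}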

3. Set Parameters

Before starting the preview, many parameters need to be set, such as flash, white balance, scene mode, and contrast.

In the program, parameters are stored in two places: SharedPreferences (preferences) and Camera.Parameters (parameters). Preferences persist the settings: they are read when the program opens and saved when it closes. Because the user often adjusts settings while the app is running, Parameters holds the parameter values in the intermediate state between open and close. The set of valid Parameters keys is determined by the abstraction layer according to the capabilities of the hardware sensor.
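
As an illustration of moving a persisted setting into the live parameters (the preference key string here is my own, not necessarily the app's):

import android.content.SharedPreferences;
import android.hardware.Camera;

class ParameterApplier {
    // Copy one persisted preference value into the live Camera.Parameters.
    static void applyFlashPreference(Camera camera, SharedPreferences prefs) {
        Camera.Parameters parameters = camera.getParameters();   // valid keys depend on the HAL's capabilities
        String flashMode = prefs.getString("pref_flash_mode", Camera.Parameters.FLASH_MODE_AUTO);
        if (parameters.getSupportedFlashModes() != null
                && parameters.getSupportedFlashModes().contains(flashMode)) {
            parameters.setFlashMode(flashMode);
            camera.setParameters(parameters);                     // handed to the driver over binder
        }
    }
}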

4. Camera peripheral buttons

Register the broadcast receiver in the manifest:

<receiver android:name="com.android.camera.CameraButtonIntentReceiver">
    <intent-filter>
        <action android:name="android.intent.action.CAMERA_BUTTON" />
    </intent-filter>
</receiver>

Some phones have a dedicated camera button. When the user presses it, the Android input system can deliver the event in two ways:

1) Send the CAMERA_BUTTON broadcast; the receiver above starts the main activity when the broadcast arrives.

2) Report the key code KEYCODE_CAMERA. The program receives the key event and can implement a custom action, such as taking a photo:

public boolean onKeyDown(int keyCode, KeyEvent event) {
    switch (keyCode) {
        case KeyEvent.KEYCODE_CAMERA:
            if (mFirstTimeInitialized && event.getRepeatCount() == 0) {
                onShutterButtonClick();
            }
            return true;
    }
    return super.onKeyDown(keyCode, event);
}

5. Auto Focus and touch focus

Light from the scene is imaged onto the sensor through a convex lens. The camera module uses a motor to move the lens back and forth until the sharpest image is obtained; this process is called focusing.

5.1 Auto focus

1) The camera module starts focusing on its own when it detects a change in light intensity on the sensor.

2) The user presses and holds the shutter button to trigger auto focus.

3) The user presses the shutter to take a photo, and auto focus runs automatically before the shot.

In the program, the Camera object implements onShutterButtonFocus(boolean pressed) and onShutterButtonClick() of the OnShutterButtonListener interface in the ShutterButton class. The latter takes the photo and is covered in the next section. The pressed flag indicates whether this is a valid long press; if it is, autoFocus() is executed. autoFocus() is in turn a method of the FocusManager Listener interface that the Camera object implements: the call goes over Binder to the camera service, which then starts auto focus in the underlying driver.
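
A minimal sketch of this path using the public API (the helper class and field names are illustrative, not the app's own):

import android.hardware.Camera;

class FocusHelper implements Camera.AutoFocusCallback {
    private final Camera mCameraDevice;

    FocusHelper(Camera camera) {
        mCameraDevice = camera;
    }

    // Called on a valid long press of the shutter button.
    void onShutterButtonFocus(boolean pressed) {
        if (pressed) {
            mCameraDevice.autoFocus(this);    // binder call; the driver starts the focus sweep
        } else {
            mCameraDevice.cancelAutoFocus();
        }
    }

    // The camera service reports the focus result back through this callback.
    public void onAutoFocus(boolean success, Camera camera) {
        // update the focus indicator; allow the capture to proceed when success is true
    }
}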

5.2 Touch focus

For auto focus the focus area is the center of the screen; the user can also touch the screen to choose the focus area.

The Camera object implements onTouch() of the View.OnTouchListener interface. The x/y coordinates of the MotionEvent reported by the system are saved into Parameters and autoFocus() is executed; the abstraction layer then reads the touch-point coordinates from Parameters and focuses on that region.
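
A sketch of the touch-to-focus mapping; it assumes the preview fills the view, and the 200x200 focus-area size and weight are arbitrary choices of mine:

import android.graphics.Rect;
import android.hardware.Camera;
import android.view.MotionEvent;
import android.view.View;
import java.util.Arrays;

class TouchFocus implements View.OnTouchListener {
    private final Camera mCameraDevice;

    TouchFocus(Camera camera) {
        mCameraDevice = camera;
    }

    public boolean onTouch(View v, MotionEvent event) {
        if (event.getAction() != MotionEvent.ACTION_UP) {
            return false;
        }
        // Map view coordinates to the driver's (-1000..1000) focus coordinate space.
        int x = (int) (event.getX() / v.getWidth() * 2000) - 1000;
        int y = (int) (event.getY() / v.getHeight() * 2000) - 1000;
        Rect area = new Rect(clamp(x - 100), clamp(y - 100), clamp(x + 100), clamp(y + 100));

        Camera.Parameters parameters = mCameraDevice.getParameters();
        parameters.setFocusAreas(Arrays.asList(new Camera.Area(area, 1000)));
        mCameraDevice.setParameters(parameters);    // the HAL reads the touch coordinates from Parameters
        mCameraDevice.autoFocus(new Camera.AutoFocusCallback() {
            public void onAutoFocus(boolean success, Camera camera) {
                // show the focus result for the touched region
            }
        });
        return true;
    }

    private static int clamp(int value) {
        return Math.max(-1000, Math.min(1000, value));
    }
}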

6. Take a photo

Taking a photo involves four steps: focus, capture, receive the image, and save the image.

mCameraDevice.takePicture(mShutterCallback, mRawPictureCallback,
        mPostViewPictureCallback, new JpegPictureCallback(loc));

You need to understand the four callback functions; refer to the previous article.
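
As a sketch, the shapes of the four callbacks look like this (the bodies are placeholders, not the app's real implementations):

import android.hardware.Camera;

class CaptureCallbacks {
    // Fired as close as possible to the moment of exposure (shutter sound / animation).
    final Camera.ShutterCallback mShutterCallback = new Camera.ShutterCallback() {
        public void onShutter() { }
    };
    // Raw (uncompressed) image data; JPEG encoding must then be done in software.
    final Camera.PictureCallback mRawPictureCallback = new Camera.PictureCallback() {
        public void onPictureTaken(byte[] data, Camera camera) { }
    };
    // A postview frame shown on screen right after the capture.
    final Camera.PictureCallback mPostViewPictureCallback = new Camera.PictureCallback() {
        public void onPictureTaken(byte[] data, Camera camera) { }
    };
    // Hardware-compressed JPEG; in the app this callback also carries the Location for EXIF.
    final Camera.PictureCallback mJpegPictureCallback = new Camera.PictureCallback() {
        public void onPictureTaken(byte[] jpegData, Camera camera) {
            // hand jpegData to the saver thread, then restart the preview
        }
    };
}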

1) Focus

If touch (area) focus was already performed before the capture, auto focus is cancelled; otherwise auto focus runs once. When focusing finishes, the bottom layer sends a message to the Camera object indicating whether the focus succeeded, and FocusManager records the state in mState. While focusing is still in progress (mState == STATE_FOCUSING), the picture cannot be taken; the capture proceeds only after focusing completes.

2) Take a photo

The call chain is onShutterButtonClick() -> doSnap() -> capture() -> takePicture(). The actual work is done in the abstraction layer and the underlying driver, which essentially grab one frame of image data; when the photo is taken the abstraction layer reads the Parameters configuration, including the selected picture size.

3) receive images

The underlying layer sends the image to the Camera object through callbacks:

RawPictureCallback: returns the raw image; the JPEG must then be compressed in software (YUV to JPEG).

JpegPictureCallback: returns the JPEG image directly, which requires hardware compression.

PostViewPictureCallback: returns the postview image shown right after the shot is taken.

4) Save the image

The ImageSaver thread saves the image and generates the thumbnail.

The default path is /mnt/sdcard/DCIM/Camera/.
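
A much-simplified saver sketch (not the app's ImageSaver class; the file-name scheme is illustrative):

import java.io.File;
import java.io.FileOutputStream;

class SimpleImageSaver extends Thread {
    private final byte[] mJpegData;

    SimpleImageSaver(byte[] jpegData) {
        mJpegData = jpegData;
    }

    public void run() {
        File dir = new File("/mnt/sdcard/DCIM/Camera");
        if (!dir.exists()) {
            dir.mkdirs();
        }
        File out = new File(dir, "IMG_" + System.currentTimeMillis() + ".jpg");
        try {
            FileOutputStream fos = new FileOutputStream(out);
            fos.write(mJpegData);    // write the compressed image
            fos.close();
            // a thumbnail would normally be generated here and the media database updated
        } catch (Exception e) {
            // saving failed
        }
    }
}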

7. Face Detection

Face detection can be implemented in software or in hardware. A software implementation costs CPU time and power; a hardware implementation avoids that overhead.

The upper-layer Camera object implements the onFaceDetection(Face[] faces, Camera camera) method of the framework-layer FaceDetectionListener interface. The callback mechanism is the same as above: the hardware sensor recognizes the face information and sends it to the Camera object. The framework defines the Face features, such as the eyes and mouth; the upper layer stores the data in mFaces and updates the UI.
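
A sketch of the listener registration with the public API (the helper class and field names are made up):

import android.hardware.Camera;

class FaceDetectionHelper implements Camera.FaceDetectionListener {
    private Camera.Face[] mFaces;

    void enable(Camera camera) {
        camera.setFaceDetectionListener(this);
        camera.startFaceDetection();    // must be called after startPreview()
    }

    // The camera service reports detected faces through this callback.
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        mFaces = faces;    // each Face carries a bounding rect, a score and an id,
                           // plus eye and mouth coordinates when the hardware supports them
        // update the face rectangles drawn over the preview
    }
}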

8. Location Management

LocationManager is used to record the GPS position (latitude and longitude) of the captured image and store it in the EXIF data embedded in the JPEG header.

Select "Save location" in "camera settings" in the menu (if GPS is enabled). a gps icon appears in the upper-left corner of the screen, indicating that GPS information can be recorded now.

In the program, the Camera object implements showGpsOnScreenIndicator() and hideGpsOnScreenIndicator() of the LocationManager.Listener interface to show or hide the GPS icon.

When the program is initialized for the first time, initializeFirstTime() reads the preference option into the boolean recordLocation to decide whether to record GPS information. When a photo is taken, a LocationManager method is called to get the location loc, which is then saved into the EXIF data.
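
A sketch of the idea, assuming the recordLocation flag described above and using the Camera.Parameters GPS setters as one way of getting the position into the JPEG's EXIF header:

import android.hardware.Camera;
import android.location.Location;
import android.location.LocationManager;

class LocationTagger {
    // Attach the last known GPS fix to the next capture so it ends up in the EXIF header.
    static void attachLocation(Camera camera, LocationManager locationManager, boolean recordLocation) {
        if (!recordLocation) {
            return;    // "Save location" is disabled in the camera settings
        }
        Location loc = locationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
        if (loc == null) {
            return;
        }
        Camera.Parameters parameters = camera.getParameters();
        parameters.setGpsLatitude(loc.getLatitude());
        parameters.setGpsLongitude(loc.getLongitude());
        parameters.setGpsTimestamp(loc.getTime() / 1000);    // seconds since the epoch
        camera.setParameters(parameters);                    // written into the JPEG's GPS EXIF tags
    }
}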

9. Rotation Management

Assume the camera sensor is mounted normally in the phone and the vertical position is the default direction (orientation = 0). Taking a photo in that position produces a portrait image, and the image is displayed upright on the screen.

If the phone is rotated 90 degrees to take a landscape image, the camera sensor itself has no concept of rotation, so the software must rotate the image back by 90 degrees.

In software, an orientation listener keeps the direction information up to date and saves it into Parameters; when the picture is taken, the abstraction layer reads the direction from Parameters.

Specifically, onOrientationChanged() of MyOrientationEventListener, an inner class of the Camera object, saves the orientation value. Internally, its parent class OrientationEventListener handles onSensorChanged() and converts the x/y/z values obtained from SensorManager into the orientation angle.

When the program starts, it registers and enables the sensor listener. SensorManager continuously reports the underlying sensor data to the Camera object through the message mechanism; the listener (OrientationEventListener) computes the orientation value from the coordinate data, and the result is finally saved into Parameters.
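
An orientation listener along these lines could look like the sketch below; the snapping to 0/90/180/270 follows the SDK documentation, and pushing the value straight into Parameters is a simplification (the real app also combines it with CameraInfo.orientation):

import android.content.Context;
import android.hardware.Camera;
import android.view.OrientationEventListener;

class MyOrientationEventListener extends OrientationEventListener {
    private final Camera mCameraDevice;
    private int mOrientation = 0;

    MyOrientationEventListener(Context context, Camera camera) {
        super(context);
        mCameraDevice = camera;
    }

    // SensorManager data arrives here already converted to a 0..359 degree value.
    public void onOrientationChanged(int orientation) {
        if (orientation == ORIENTATION_UNKNOWN) {
            return;
        }
        mOrientation = ((orientation + 45) / 90 * 90) % 360;    // snap to 0/90/180/270

        Camera.Parameters parameters = mCameraDevice.getParameters();
        parameters.setRotation(mOrientation);    // the HAL reads this when the picture is taken
        mCameraDevice.setParameters(parameters);
    }
}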

10. Zoom

Dragging the horizontal zoom bar zooms the preview image in and out; the smooth zoom described below works in the same way.

The Camera object's inner class ZoomChangeListener implements ZoomControl's listener; in essence the zoom work is delegated to ZoomControl. ZoomControl handles the user's touch event in dispatchTouchEvent() to obtain the zoom factor and calls mListener.onZoomValueChanged(index), which in turn calls mCameraDevice.startSmoothZoom(). The request goes over Binder to the camera service, which hands it to the abstraction layer through the sendCommand mechanism to perform the zoom. The abstraction layer starts a zoom thread and changes the preview accordingly; through the callback mechanism it sends the CAMERA_MSG_ZOOM message with the current zoom value back to the Camera object. On receiving the message, ZoomListener.onZoomChange() saves the zoom factor into Parameters.
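
A condensed sketch of this path; the class and field names are illustrative rather than the app's own:

import android.hardware.Camera;

class ZoomHelper implements Camera.OnZoomChangeListener {
    private final Camera mCameraDevice;
    private int mZoomValue;

    ZoomHelper(Camera camera) {
        mCameraDevice = camera;
        mCameraDevice.setZoomChangeListener(this);
    }

    // Called when the user drags the zoom bar to a new index.
    void onZoomValueChanged(int index) {
        Camera.Parameters parameters = mCameraDevice.getParameters();
        if (parameters.isSmoothZoomSupported()) {
            mCameraDevice.startSmoothZoom(index);    // binder -> camera service -> HAL zoom thread
        } else {
            parameters.setZoom(index);
            mCameraDevice.setParameters(parameters);
        }
    }

    // CAMERA_MSG_ZOOM from the HAL arrives here with the current zoom value.
    public void onZoomChange(int zoomValue, boolean stopped, Camera camera) {
        mZoomValue = zoomValue;    // remembered so it can be written back into Parameters
    }
}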

11. Video

ModePicker is responsible for switching modes. There are three modes: normal (still) mode, video mode, and panorama mode; the three activities are declared one after another in the manifest.

Switching modes destroys the original activity, starts the new one, stops and restarts the preview, and saves and re-reads the configuration, so it is relatively costly.

The video activity VideoCamera.java follows the same preview design as Camera.java. When the user presses the record button, the program handles the event, starts video recording, and hands the work to MediaRecorder and, underneath it, StagefrightRecorder.
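
A minimal MediaRecorder sketch for the record path; the output path and the CamcorderProfile choice are assumptions, not necessarily VideoCamera.java's defaults:

import android.hardware.Camera;
import android.media.CamcorderProfile;
import android.media.MediaRecorder;

class SimpleVideoRecorder {
    private MediaRecorder mMediaRecorder;

    void startRecording(Camera camera) throws Exception {
        camera.unlock();    // hand the camera over to MediaRecorder
        mMediaRecorder = new MediaRecorder();
        mMediaRecorder.setCamera(camera);
        mMediaRecorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
        mMediaRecorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        mMediaRecorder.setProfile(CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH));
        mMediaRecorder.setOutputFile("/mnt/sdcard/DCIM/Camera/video.mp4");
        mMediaRecorder.prepare();
        mMediaRecorder.start();    // StagefrightRecorder does the actual encoding below this call
    }

    void stopRecording(Camera camera) {
        mMediaRecorder.stop();
        mMediaRecorder.release();
        camera.lock();    // take the camera back so the app can restart its own preview
    }
}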
