Android Camera Learning
1. Overview
The Android Camera framework uses a client/service architecture spanning two processes. The client process is the application (app) process; it contains the Java code and some native C/C++ code. The service process is the server side, written entirely in native C/C++. It is responsible for interacting with the camera driver in the Linux kernel, collecting the data the driver layer delivers, and handing it to the display system (Surface) for display. The client communicates with the service process through the Binder mechanism: the client calls the service's interfaces to carry out specific operations.

Preview data is not copied from the service side to the client through Binder. Instead, the buffer address of the preview data is passed to the client through callback functions and the message mechanism, so the preview data can be processed in the Java app.
2. Call hierarchy
The calling process of Camera in Android can be divided into the following layers:
Package -> Framework -> JNI -> Camera (C++) --(Binder)--> CameraService -> Camera HAL -> Camera Driver
Client side:
At the Package layer, Camera.java calls Camera.java in the Framework (frameworks/base/core/java/android/hardware), which in turn calls native functions in the JNI layer. The JNI implementation lives in android_hardware_Camera.cpp (frameworks/base/core/jni) and is compiled into libandroid_runtime.so; its registration function (taking a JNIEnv *env) registers the native functions with the virtual machine so that the framework-layer Java code can call them. These native functions implement the actual behavior by calling the Camera class in libcamera_client.so.
The source code of the core libcamera_client.so dynamic library is located in frameworks/base/libs/camera. Within it, ICamera, ICameraClient, and ICameraService are implemented according to the framework required by Binder IPC and are used to communicate with the service side.
Service side:
The service code lives in the dynamic library libcameraservice.so, with source in frameworks/base/services/camera. The CameraService-related classes call into the Camera HAL layer to implement specific functionality. From the HAL layer down, the code is no longer standard Android; each vendor has its own implementation, but the idea is the same: the camera follows the V4L2 architecture, uses ioctl to issue VIDIOC_DQBUF commands to obtain valid image data, and then invokes the HAL layer's data callback interface to notify CameraService, which in turn notifies Camera.cpp.
PS: looking at the running processes, com.android.camera is the client-side camera process, and /system/bin/mediaserver is the server-side daemon, started when the system boots.
3. Application workflow
The application side covers the following topics:
1. Camera open thread and preview thread
2. Setting parameters
3. Auto focus and touch focus
4. Location management
5. Rotation management
6. Taking pictures: A. focus, B. take the picture, C. receive the picture, D. save the picture
1. Open the camera and preview thread
In onCreate(), CameraOpenThread and CameraPreviewThread are started in turn.

Why is a separate thread needed just to open the camera? CameraOpenThread, as its name says, opens the camera. Opening the camera connects the client to the server, and the Binder inter-process communication involved is expensive, so it is done off the main thread. The preview thread must run only after the open thread has finished; it then runs for the life of the process. The preview process itself is complex and is implemented in the abstraction (HAL) layer and the underlying driver.
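The open-then-preview ordering described above can be sketched with plain Java threads. CameraStub below is a hypothetical stand-in for the real camera object (which talks to CameraService over Binder); the names CameraOpenThread and CameraPreviewThread mirror the ones in the text.

```java
// Sketch of the open-thread / preview-thread ordering described above.
// CameraStub is an illustrative stand-in, not the real Android class.
public class CameraOpenDemo {
    static class CameraStub {
        volatile boolean opened = false;
        volatile boolean previewing = false;
        void open() { opened = true; }  // in reality an expensive Binder call
        void startPreview() {
            if (!opened) throw new IllegalStateException("open() must finish first");
            previewing = true;
        }
    }

    public static CameraStub run() {
        CameraStub cam = new CameraStub();
        try {
            // CameraOpenThread: connect to the service off the UI thread.
            Thread openThread = new Thread(cam::open, "CameraOpenThread");
            openThread.start();
            openThread.join();  // the preview thread must wait for open to complete
            // CameraPreviewThread: in the real app it runs for the process lifetime.
            Thread previewThread = new Thread(cam::startPreview, "CameraPreviewThread");
            previewThread.start();
            previewThread.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return cam;
    }

    public static void main(String[] args) {
        CameraStub cam = run();
        System.out.println("opened=" + cam.opened + " previewing=" + cam.previewing);
    }
}
```

The join() on the open thread is what enforces the "preview must be executed after open" constraint mentioned above.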
2. Set Parameters
- Before previewing, taking a photo, or recording video, many parameters must be set, such as flash, white balance, scene mode, and contrast.
- In the program, settings are stored both in SharedPreferences (the persistent preference options) and in Parameters. Preferences are read when the program opens and saved when it closes; because users adjust settings frequently, Parameters is introduced to hold the intermediate parameter values between open and close. The set of valid Parameters keys is determined by the abstraction layer based on the capabilities of the hardware sensor.
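Camera.Parameters is essentially a string-keyed map that is flattened into one string for the trip across Binder to the service side. A minimal sketch of that behavior (the keys are real Parameters keys; the class itself is a simplified stand-in, not the Android implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified stand-in for Camera.Parameters: a key=value map that can be
// flattened to a single "k1=v1;k2=v2" string and parsed back.
public class ParametersDemo {
    private final Map<String, String> map = new LinkedHashMap<>();

    public void set(String key, String value) { map.put(key, value); }
    public String get(String key)             { return map.get(key); }

    // Join all pairs into "k1=v1;k2=v2;...", as Parameters.flatten() does.
    public String flatten() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : map.entrySet()) {
            if (sb.length() > 0) sb.append(';');
            sb.append(e.getKey()).append('=').append(e.getValue());
        }
        return sb.toString();
    }

    // Rebuild the map from a flattened string.
    public void unflatten(String flat) {
        map.clear();
        for (String pair : flat.split(";")) {
            int eq = pair.indexOf('=');
            if (eq > 0) map.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
    }

    public static void main(String[] args) {
        ParametersDemo p = new ParametersDemo();
        p.set("flash-mode", "auto");
        p.set("whitebalance", "daylight");
        System.out.println(p.flatten());
    }
}
```

This flatten/unflatten round trip is why the abstraction layer can "read Parameters" on the service side: only the flat string crosses the process boundary.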
3. Auto Focus and touch focus
- Light from the outside world is focused by a convex lens onto the sensor. The camera module uses a motor to translate the lens back and forth until the best image is obtained; this process is called focusing.
- 3.1 Auto focus
  - 1) The camera module can start focusing and drive control automatically when the light intensity changes.
  - 2) The user can press the shutter to trigger auto-focus control.
  - 3) When the user presses the shutter to take a photo, auto focus runs before the capture.
  - In the program, the Camera object implements the ShutterButton listener methods onShutterButtonFocus(boolean pressed) and onShutterButtonClick(); the latter takes the picture. In onShutterButtonFocus(), the pressed flag determines whether this is a valid long press; if so, autoFocus() is executed. autoFocus() is also a method of the FocusManager listener interface implemented by the Camera object; the request is handed over Binder to CameraService, and auto focus is carried out by the underlying driver.
- 3.2 Touch focus
  - The auto-focus area defaults to the middle of the screen, but the user can also touch the screen to choose the focus area.
  - The Camera object implements the View.OnTouchListener interface; its onTouch() method takes the x/y coordinates from the MotionEvent reported by the system, saves them into Parameters, and executes autoFocus(). The abstraction layer reads the touch-point coordinates from Parameters and performs region focusing.
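The touch-focus step above has to translate view coordinates into the coordinate system the camera driver understands, which for focus areas spans -1000..1000 on both axes. A sketch of that mapping (the method name and rectangle layout are illustrative, not the exact app code):

```java
// Sketch of mapping a touch point to a focus area for touch focus.
// View coordinates (0..viewWidth, 0..viewHeight) are mapped into the camera
// driver coordinate system (-1000..1000 on both axes).
public class TouchFocusDemo {
    /** Returns {left, top, right, bottom} of a focus rect centered on the touch. */
    public static int[] touchToFocusRect(float x, float y,
                                         int viewWidth, int viewHeight, int halfSize) {
        int cx = Math.round(x / viewWidth * 2000f) - 1000;  // map 0..w -> -1000..1000
        int cy = Math.round(y / viewHeight * 2000f) - 1000;
        return new int[] {
            clamp(cx - halfSize), clamp(cy - halfSize),
            clamp(cx + halfSize), clamp(cy + halfSize)
        };
    }

    // Keep the rectangle inside the driver coordinate range.
    private static int clamp(int v) { return Math.max(-1000, Math.min(1000, v)); }

    public static void main(String[] args) {
        int[] r = touchToFocusRect(240f, 160f, 480, 320, 100); // touch at screen center
        System.out.println(r[0] + "," + r[1] + "," + r[2] + "," + r[3]);
    }
}
```

A touch at the screen center maps to a rectangle centered on (0, 0), matching the default centered focus area described above.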
4. Location management
- LocationManager is used to record the GPS position (latitude/longitude) of the captured image and store it in the Exif data embedded in the JPEG header.
- Enable the "Save location" option in the camera settings menu (provided GPS is enabled); a GPS icon then appears in the upper-left corner of the screen, indicating that GPS information can now be recorded.
- In the program, the Camera object implements the showGpsOnScreenIndicator() and hideGpsOnScreenIndicator() methods of the LocationManager.Listener interface to display or hide the GPS icon.
- During the program's first initialization, initializeFirstTime() reads the preference option to obtain the boolean recordLocation, which determines whether GPS information is recorded; the LocationManager methods are called to obtain the Location loc, which is saved into the Exif data.
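Exif stores GPS coordinates as degrees/minutes/seconds rationals rather than decimal degrees, so the save-location path has to convert before writing the JPEG header. A minimal sketch of that conversion (formatted in the "num/denom" rational style Exif GPS tags use; the helper names are illustrative):

```java
// Sketch of converting a decimal GPS coordinate into the degrees/minutes/seconds
// rational form stored in JPEG Exif GPS tags.
public class GpsExifDemo {
    /** e.g. 37.423 -> "37/1,25/1,2280/100" (degrees, minutes, seconds*100). */
    public static String toExifDms(double coordinate) {
        double abs = Math.abs(coordinate);
        int deg = (int) abs;
        double minutes = (abs - deg) * 60.0;
        int min = (int) minutes;
        int sec100 = (int) Math.round((minutes - min) * 60.0 * 100.0);
        return deg + "/1," + min + "/1," + sec100 + "/100";
    }

    // The hemisphere goes into a separate Exif reference tag.
    public static String latitudeRef(double lat)  { return lat >= 0 ? "N" : "S"; }
    public static String longitudeRef(double lon) { return lon >= 0 ? "E" : "W"; }

    public static void main(String[] args) {
        System.out.println(toExifDms(37.423) + " " + latitudeRef(37.423));
    }
}
```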
5. Rotation management
- Assume the camera sensor is mounted normally in the phone. Taking a photo with the phone held vertically is the default orientation (orientation = 0), producing a portrait image that displays upright on the screen.
- If the phone is rotated 90 degrees to take a landscape image, the camera sensor has no concept of rotation, so the software must rotate the image 90 degrees back.
- In software, an orientation listener keeps the current direction up to date and saves it in Parameters; when a picture is taken, the abstraction layer reads the orientation from Parameters.
- Specifically, MyOrientationEventListener, an inner class of the Camera object, saves the orientation value in its onOrientationChanged() method. It inherits from OrientationEventListener, which handles onSensorChanged() and converts the x/y/z coordinates obtained from SensorManager into an orientation angle.
- When the program starts, it registers and enables the sensor listener; SensorManager continuously reports the underlying sensor data, delivered to the Camera object through the message mechanism. The orientation value is computed from the coordinate data (this work is delegated to the orientation listener) and finally saved into Parameters.
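The listener receives a raw angle in the range 0..359, but the value saved for the capture path only needs the nearest multiple of 90 (0, 90, 180, or 270). A sketch of that rounding step, which is the core arithmetic of the flow described above:

```java
// Sketch of the orientation bookkeeping described above: the listener receives a
// raw angle (0..359) derived from the sensor and rounds it to the nearest
// multiple of 90 before it is saved into Parameters.
public class OrientationDemo {
    /** Round a raw sensor angle to 0, 90, 180, or 270. */
    public static int roundOrientation(int rawOrientation) {
        return ((rawOrientation + 45) / 90 * 90) % 360;
    }

    public static void main(String[] args) {
        for (int raw : new int[] {10, 80, 100, 350}) {
            System.out.println(raw + " -> " + roundOrientation(raw));
        }
    }
}
```

Adding 45 before the integer division makes each 90-degree bucket centered on its multiple, so e.g. 350 degrees (phone tilted slightly left of upright) still rounds to 0.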
6. Take a photo
- Taking a photo has four steps: focus, capture, receive the image, and save the image.
- mCameraDevice.takePicture(mShutterCallback, mRawPictureCallback, mPostViewPictureCallback, new JpegPictureCallback(loc));
- 1) Focus: if the area is already in focus before the shot, auto focus is skipped. When focusing completes, the bottom layer sends a message to the Camera object indicating whether focusing succeeded, and FocusManager saves the state in mState. If focusing is still in progress (mState == STATE_FOCUSING), the picture cannot be taken until focusing completes.
- 2) Capture: onShutterButtonClick() -> doSnap() -> capture() -> takePicture(), implemented in the abstraction layer and underlying driver. In essence it captures one preview frame; the abstraction layer reads the Parameters configuration at capture time, including the selected picture size.
- 3) Receive the image: the bottom layer delivers images to the Camera object through callbacks.
  - RawPictureCallback: receives the raw image; software must then compress it to JPEG (YUV to JPEG).
  - JpegPictureCallback: receives the JPEG image directly; this requires hardware compression.
  - PostViewPictureCallback: receives the postview image used for on-screen preview.
- 4) Save the image: the image is handed to the ImageSaver thread, which saves it and generates a thumbnail.
- The default path is /mnt/sdcard/DCIM/Camera/
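The four-callback shape of takePicture() can be sketched as follows. The interfaces mimic the callback style of android.hardware.Camera, but the "driver" here is simulated, so the byte payloads and the exact delivery order are illustrative only:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the takePicture() callback sequence described above.
// ShutterCallback/PictureCallback mirror the shape of the Android interfaces;
// the driver side is simulated.
public class TakePictureDemo {
    interface ShutterCallback { void onShutter(); }
    interface PictureCallback { void onPictureTaken(byte[] data); }

    static void takePicture(ShutterCallback shutter,
                            PictureCallback raw,
                            PictureCallback postview,
                            PictureCallback jpeg) {
        shutter.onShutter();                     // exposure finished
        raw.onPictureTaken(new byte[] {0});      // raw (YUV) frame; software JPEG path
        postview.onPictureTaken(new byte[] {1}); // postview frame for the screen
        jpeg.onPictureTaken(new byte[] {2});     // hardware-compressed JPEG
        // In the real app the JPEG callback hands the data to the ImageSaver thread.
    }

    public static List<String> run() {
        List<String> order = new ArrayList<>();
        takePicture(() -> order.add("shutter"),
                    d -> order.add("raw"),
                    d -> order.add("postview"),
                    d -> order.add("jpeg"));
        return order;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

The point of the sketch is the division of labor: each callback corresponds to one of the "receive the image" variants above, and only the final JPEG data reaches the save step.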