Android Camera Data Flow Analysis


The previous article, "Android Camera --- Architecture Overview" (http://blog.csdn.net/andyhuabing/article/details/7229557), briefly introduced the layered structure of the camera subsystem.

This article mainly analyzes the data flow. The camera is generally used for preview, still capture, and video recording. Here we analyze the preview and still-capture data streams, and then the video recording part.


1. Supplement: camera data processing at the HAL layer


In Linux, most camera drivers are implemented with V4L2 (Video4Linux2). V4L2 is controlled from user space through various ioctl calls, and mmap can be used for memory mapping.

Common ioctl commands:
VIDIOC_QUERYCAP    // query driver capabilities
VIDIOC_ENUM_FMT    // enumerate the video capture formats supported by the driver
VIDIOC_G_FMT       // read the current video capture format
VIDIOC_S_FMT       // set the video capture format
VIDIOC_TRY_FMT     // validate a capture format without changing driver state
VIDIOC_REQBUFS     // request that the driver allocate frame buffers
VIDIOC_QUERYBUF    // query a buffer allocated by VIDIOC_REQBUFS so it can be mmap'ed
VIDIOC_QBUF        // queue an empty buffer to the driver
VIDIOC_DQBUF       // dequeue a filled buffer from the driver
VIDIOC_STREAMON    // start video streaming
VIDIOC_STREAMOFF   // stop video streaming
VIDIOC_CROPCAP     // query the driver's cropping capabilities
VIDIOC_G_CROP      // get the current cropping rectangle of the video signal
VIDIOC_S_CROP      // set the cropping rectangle of the video signal
VIDIOC_QUERYSTD    // query the video standard (e.g. PAL or NTSC)

During initialization, the basic camera parameters are set, and then the mmap system call maps the camera driver's buffer queue into user space.

There are two main threads:

PictureThread
When the user takes a picture, this thread runs once (it does not loop): it checks for frame data in the queue and dequeues a frame. The frame must then be passed up to the Java layer, either converted to JPEG first, or converted to RGB data before being handed up.

PreviewThread (preview thread)
When the preview method is called, the preview thread starts and cyclically checks whether frame data exists in the queue. If a frame exists, it is read out; because the data is in YUV format, it is converted to RGB and sent to the display for preview. The converted data can also be sent to the video encoding module; after encoding it is stored, which implements the video recording function.

Unless an overlay is used, all data passed up goes through the dataCallback path.


2. Data Flow Control

The previous section covered the control hierarchy and logic. To understand the data path (and to be able to optimize it later), it is well worth tracing in detail.

Take the JPEG data format as an example.
Registering the callbacks (Camera.java):

public final void takePicture(ShutterCallback shutter, PictureCallback raw,
        PictureCallback postview, PictureCallback jpeg) {
    mShutterCallback = shutter;
    mRawImageCallback = raw;
    mPostviewCallback = postview;
    mJpegCallback = jpeg;
    native_takePicture();
}

Handling the callback data:

@Override
public void handleMessage(Message msg) {
    switch (msg.what) {
    case CAMERA_MSG_SHUTTER:          // shutter notification
    case CAMERA_MSG_RAW_IMAGE:        // handles the uncompressed picture
    case CAMERA_MSG_COMPRESSED_IMAGE: // handles the compressed (JPEG) picture
        if (mJpegCallback != null) {
            mJpegCallback.onPictureTaken((byte[]) msg.obj, mCamera);
        }
        return;
    case CAMERA_MSG_PREVIEW_FRAME:    // handles preview frames
        ...
}

The application registers the callbacks:
android.hardware.Camera mCameraDevice; // Java-layer camera object
mCameraDevice.takePicture(mShutterCallback, mRawPictureCallback,
        mPostViewPictureCallback, new JpegPictureCallback(loc));

The application's data acquisition path:
private final class JpegPictureCallback implements PictureCallback {
    public void onPictureTaken(
            final byte[] jpegData, final android.hardware.Camera camera) {
        ...
        mImageCapture.storeImage(jpegData, camera, mLocation);
        ...
    }
}

private class ImageCapture {
    private int storeImage(byte[] data, Location loc) {
        ImageManager.addImage(
                mContentResolver,
                title,
                dateTaken,
                loc, // location from GPS/network
                ImageManager.CAMERA_IMAGE_BUCKET_NAME, filename,
                null, data,
                degree);
    }
}

--> This is where the data is actually stored. The Android system offers four common data-storage mechanisms: ContentProvider, SharedPreferences, file, and SQLite. The file method is used here.

//
// Stores a bitmap or a jpeg byte array to a file (using the specified
// directory and filename). Also adds an entry to the media store for
// this picture. The title, dateTaken, location are attributes of the
// picture. The degree is a one-element array which returns the orientation
// of the picture.
//
public static Uri addImage(ContentResolver cr, String title, long dateTaken,
        Location location, String directory, String filename,
        Bitmap source, byte[] jpegData, int[] degree) {
    ...
    File file = new File(directory, filename);
    OutputStream outputStream = new FileOutputStream(file);
    if (source != null) {
        source.compress(CompressFormat.JPEG, 75, outputStream);
        degree[0] = 0;
    } else {
        outputStream.write(jpegData);
        degree[0] = getExifOrientation(filepath);
    }
    ...
}

holder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);

SURFACE_TYPE_PUSH_BUFFERS means the surface holds no data of its own; its buffers are pushed in by another producer. This type is used for camera preview: the camera pushes preview data into the surface, which makes the preview smoother.

OK, that covers the Java-layer callback flow. Next we follow the Java -> JNI -> C++ data path.


It is easiest to analyze from the bottom layer up:

1. Callback types provided by CameraHardwareInterface:

typedef void (*notify_callback)(int32_t msgType,
                                int32_t ext1,
                                int32_t ext2,
                                void* user);

typedef void (*data_callback)(int32_t msgType,
                              const sp<IMemory>& dataPtr,
                              void* user);

typedef void (*data_callback_timestamp)(nsecs_t timestamp,
                                        int32_t msgType,
                                        const sp<IMemory>& dataPtr,
                                        void* user);

The registration interface is as follows:
/** Set the notification and data callbacks */
virtual void setCallbacks(notify_callback notify_cb,
                          data_callback data_cb,
                          data_callback_timestamp data_cb_timestamp,
                          void* user) = 0;

2. CameraService:
void CameraService::Client::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr, void* user)
{
    //...
    switch (msgType) {                            // ---- 1. receive the message from the HAL
    case CAMERA_MSG_PREVIEW_FRAME:
        client->handlePreviewData(dataPtr);
        break;
    case CAMERA_MSG_POSTVIEW_FRAME:
        client->handlePostview(dataPtr);
        break;
    case CAMERA_MSG_RAW_IMAGE:
        client->handleRawPicture(dataPtr);
        break;
    case CAMERA_MSG_COMPRESSED_IMAGE:
        client->handleCompressedPicture(dataPtr); // ---- 2. handle the compressed-picture message
        // --> c->dataCallback(CAMERA_MSG_COMPRESSED_IMAGE, mem); ---- 3. which invokes the callback below
        break;
    default:
        if (c != NULL) {
            c->dataCallback(msgType, dataPtr);
        }
        break;
    }
    //...
}

void CameraService::Client::notifyCallback(int32_t msgType, int32_t ext1, int32_t ext2, void* user)
{
    LOGV("notifyCallback(%d)", msgType);

    sp<Client> client = getClientFromCookie(user);
    if (client == 0) {
        return;
    }

    switch (msgType) {
    case CAMERA_MSG_SHUTTER:
        // ext1 is the dimension of the yuv picture.
        client->handleShutter((image_rect_type *)ext1);
        break;
    default:
        sp<ICameraClient> c = client->mCameraClient;
        if (c != NULL) {
            c->notifyCallback(msgType, ext1, ext2);  // ---- 4. notification callback (to the client proxy)
        }
        break;
    }
}

3. Client-side (Camera) processing:
// callback from camera service when frame or image is ready ---- data callback handling
void Camera::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->postData(msgType, dataPtr);
    }
}

// callback from camera service ---- notification callback handling
void Camera::notifyCallback(int32_t msgType, int32_t ext1, int32_t ext2)
{
    sp<CameraListener> listener;
    {
        Mutex::Autolock _l(mLock);
        listener = mListener;
    }
    if (listener != NULL) {
        listener->notify(msgType, ext1, ext2);
    }
}

4. JNI: android_hardware_Camera.cpp
// provides persistent context for calls from native code to Java
class JNICameraContext : public CameraListener
{
    ...
    virtual void notify(int32_t msgType, int32_t ext1, int32_t ext2);
    virtual void postData(int32_t msgType, const sp<IMemory>& dataPtr);
    ...
};

Data is passed up to the Java layer through the JNI layer; the copyAndPost function does the work:
void JNICameraContext::postData(int32_t msgType, const sp<IMemory>& dataPtr)
{
    // return data based on callback type
    switch (msgType) {
    case CAMERA_MSG_VIDEO_FRAME:
        // should never happen
        break;
    // don't return raw data to Java
    case CAMERA_MSG_RAW_IMAGE:
        LOGV("rawCallback");
        env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
                mCameraJObjectWeak, msgType, 0, 0, NULL);
        break;
    default:
        // TODO: change to LOGV
        LOGV("dataCallback(%d, %p)", msgType, dataPtr.get());
        copyAndPost(env, dataPtr, msgType);
        break;
    }
}

The main data copy; IMemory is used to carry the data across:
void JNICameraContext::copyAndPost(JNIEnv* env, const sp<IMemory>& dataPtr, int msgType)
{
    // allocate Java byte array and copy data
    if (dataPtr != NULL) {
        sp<IMemoryHeap> heap = dataPtr->getMemory(&offset, &size);
        uint8_t *heapBase = (uint8_t*)heap->base();
        // buffer management by application
        const jbyte* data = reinterpret_cast<const jbyte*>(heapBase + offset);
        obj = env->NewByteArray(size);
        env->SetByteArrayRegion(obj, 0, size, data);
    }

    // post image data to Java
    env->CallStaticVoidMethod(mCameraJClass, fields.post_event,
            mCameraJObjectWeak, msgType, 0, 0, obj);
}

Note how the C++ side obtains the Java method to call:
fields.post_event = env->GetStaticMethodID(clazz, "postEventFromNative",
        "(Ljava/lang/Object;IIILjava/lang/Object;)V");

The definition in Camera.java:
private static void postEventFromNative(Object camera_ref,
        int what, int arg1, int arg2, Object obj)
{
    Camera c = (Camera)((WeakReference)camera_ref).get();
    if (c == null)
        return;

    if (c.mEventHandler != null) {
        Message m = c.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        c.mEventHandler.sendMessage(m); // dispatches to the handler serving takePicture's callbacks
    }
}
Because the video data stream is large, it is not sent up to the Java layer for processing; it is handled locally, simply by setting the output surface as the destination.

OK. With the above analysis the data path is clear. An example follows:


Preview: starts with startPreview() and ends with stopPreview().
startPreview() --> startCameraMode() --> startPreviewMode()

status_t CameraService::Client::startPreviewMode()
{
    ...
    if (mUseOverlay) {
        // If preview display has been set, set overlay now.
        if (mSurface != 0) {
            ret = setOverlay();            // --> createOverlay/setOverlay operations
        }
        ret = mHardware->startPreview();
    } else {
        ret = mHardware->startPreview();
        // If preview display has been set, register preview buffers now.
        if (mSurface != 0) {
            // Unregister here because the surface registered with raw heap.
            mSurface->unregisterBuffers();
            ret = registerPreviewBuffers();
        }
    }
    ...
}

Whether overlay is used is decided by reading CameraHardwareInterface::useOverlay().
With overlay, the data stream is handled entirely inside the camera HAL; the service only needs to hand it the overlay device via setOverlay:
--> mHardware->setOverlay(new Overlay(mOverlayRef));

Without overlay, the preview data must be obtained from the camera hardware, its memory registered with the output device's ISurface via registerBuffers, and the result composited by SurfaceFlinger:
-->
// don't use a hardcoded format here
ISurface::BufferHeap buffers(w, h, w, h,
        HAL_PIXEL_FORMAT_YCrCb_420_SP,
        mOrientation,
        0,
        mHardware->getPreviewHeap());

status_t ret = mSurface->registerBuffers(buffers);

Data callback processing flow:
A. Register the data callbacks:
mHardware->setCallbacks(notifyCallback,
        dataCallback,
        dataCallbackTimestamp,
        mCameraService.get());

B. Dispatch the message:
void CameraService::Client::dataCallback(int32_t msgType, const sp<IMemory>& dataPtr, void* user)
{
    ...
    case CAMERA_MSG_PREVIEW_FRAME:
        client->handlePreviewData(dataPtr);
        break;
    ...
}

C. Output the data:
// preview callback - frame buffer update
void CameraService::Client::handlePreviewData(const sp<IMemory>& mem)
{
    ...
    // call ISurface's postBuffer to output the video data
    if (mSurface != NULL) {
        mSurface->postBuffer(offset);
    }

    // call ICameraClient's callback to pass the video data to the upper layer
    // is the received frame copied out or not?
    if (flags & FRAME_CALLBACK_FLAG_COPY_OUT_MASK) {
        LOGV("frame is copied");
        copyFrameAndPostCopiedFrame(c, heap, offset, size);
    } else {
        LOGV("frame is forwarded");
        c->dataCallback(CAMERA_MSG_PREVIEW_FRAME, mem);
    }
    ...
}
