Android core analysis (25) - Android GDI's shared buffer mechanism

Android GDI's shared buffer mechanism

1 native_handle_t wrapping of private_handle_t

private_handle_t is the private data structure that gralloc uses to describe a local buffer, while native_handle_t is the abstract data structure that can be passed between processes. How does the client restore the data structure it receives? First, let's look at how native_handle_t abstractly wraps private_handle_t.

numFds = sNumFds = 1;
numInts = sNumInts = 8;

This is the abstract description of the handle that is written to the Parcel. What native_handle actually points to is the concrete content of the handle object:

numFds = 1 indicates that there is one file descriptor: fd;

numInts = 8 indicates that the eight int values that follow are magic, flags, size, offset, base, lockState, writeOwner, and pid.
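For reference, here is a sketch of what the default gralloc's private_handle_t looked like in that era (paraphrased from hardware/libhardware/modules/gralloc/gralloc_priv.h; the exact fields can vary between Android versions and vendor implementations):

struct private_handle_t : public native_handle {
    // file descriptors (numFds = 1)
    int fd;          // descriptor of the shared-memory (ashmem/pmem) device

    // plain integers (numInts = 8)
    int magic;       // sanity marker used to validate the handle
    int flags;       // e.g. whether this buffer is the framebuffer
    int size;        // size of the allocation in bytes
    int offset;      // offset of this buffer inside the shared region
    int base;        // mapped virtual address, meaningful only per-process
    int lockState;
    int writeOwner;
    int pid;         // pid of the process that created the buffer

    static const int sNumFds  = 1;
    static const int sNumInts = 8;
};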

The upper layers of the system do not care about the contents of buffer_handle_t. Passing a buffer_handle_t (native_handle_t) handle between processes really means passing the handle's contents to the client. On the client side, the Binder reads it with readNativeHandle() in Parcel.cpp to build a new native_handle:

native_handle* Parcel::readNativeHandle() const
{
    ...
    native_handle* h = native_handle_create(numFds, numInts);
    for (int i = 0; err == NO_ERROR && i < numFds; i++) {
        h->data[i] = dup(readFileDescriptor());
        if (h->data[i] < 0) err = BAD_VALUE;
    }
    err = read(h->data + numFds, sizeof(int) * numInts);
    ...
    return h;
}

Note how the file descriptor passed by the other side is handled when the client's native_handle is constructed. Because the two sides are not in the same process, dup() must be called to obtain a descriptor that is valid in the client. In this way, the client gets its own copy of the server's native_handle, and the private_handle_t data is copied to the client as well: magic, flags, size, offset, base, lockState, writeOwner, and pid.
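For completeness, here is a sketch of the matching server-side serialization, Parcel::writeNativeHandle() (based on the AOSP Parcel implementation of that era; minor details may differ between versions):

status_t Parcel::writeNativeHandle(const native_handle* handle)
{
    if (!handle || handle->version != sizeof(native_handle))
        return BAD_TYPE;

    // First the counts: 1 fd and 8 ints for a gralloc buffer.
    status_t err = writeInt32(handle->numFds);
    if (err == NO_ERROR) err = writeInt32(handle->numInts);

    // The file descriptor is written as a Binder fd object; the kernel
    // translates it into a descriptor that is valid in the receiving process.
    for (int i = 0; err == NO_ERROR && i < handle->numFds; i++)
        err = writeDupFileDescriptor(handle->data[i]);

    // The plain ints (magic, flags, size, offset, ...) are copied verbatim.
    if (err == NO_ERROR)
        err = write(handle->data + handle->numFds, sizeof(int) * handle->numInts);
    return err;
}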

The client then hands this new native_handle to the gralloc mapper (gralloc.xxx.so), obtains the mapped address of the shared buffer associated with the native_handle, and thereby gains control of the buffer. This achieves memory sharing between client and server: the drawing area that SurfaceFlinger displays is shared.
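As a minimal sketch (using the legacy gralloc HAL interface declared in hardware/gralloc.h; error handling and usage flags are simplified, and the helper name mapReceivedBuffer is ours), mapping a received handle on the client side looks roughly like this:

#include <hardware/hardware.h>
#include <hardware/gralloc.h>

// Hypothetical helper: map a buffer_handle_t received over Binder into this
// process and return a CPU-accessible pointer to the shared pixels.
void* mapReceivedBuffer(buffer_handle_t handle, int width, int height)
{
    const hw_module_t* module = NULL;
    if (hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module) != 0)
        return NULL;
    const gralloc_module_t* gralloc =
            reinterpret_cast<const gralloc_module_t*>(module);

    // registerBuffer() announces the handle to this process; typical
    // implementations mmap() the shared device here.
    if (gralloc->registerBuffer(gralloc, handle) != 0)
        return NULL;

    // lock() returns the virtual address of the mapped region.
    void* vaddr = NULL;
    if (gralloc->lock(gralloc, handle,
                      GRALLOC_USAGE_SW_READ_OFTEN | GRALLOC_USAGE_SW_WRITE_OFTEN,
                      0, 0, width, height, &vaddr) != 0)
        return NULL;
    return vaddr;
}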

2 What is the graphic mapper?

The server (SurfaceFlinger) allocates a piece of memory as the Surface's drawing buffer. How does the client draw into that buffer? How do the two processes share this memory and gain access to it? That is the job of the mapper (GraphicBufferMapper). Two pieces of information are needed: the device handle of the shared buffer and the offset of the allocation within it. The mapper works on the following principle:

The client only needs lock and unlock, which are essentially mmap and munmap operations. For the same shared buffer the offset is always the same; the starting virtual address does not matter, because all processes are really operating on memory blocks at the same physical address. We have already discussed how native_handle_t wraps private_handle_t, so we know exactly what the server passes to the client.

Process 1 pre-allocates 8 MB of memory on the shared-memory device, and all later allocations are carved out of this 8 MB region. Once physical memory has been committed for this file device, it really occupies 8 MB. Every process can share that 8 MB through the same memory device, and the tool they all use is mmap. Since mmap is called with a start address of 0 (letting the kernel choose the mapping address), every client process ends up mapping the same physical memory, so an offset and a size are enough to identify a piece of it. Offset and size are plain numeric values, so they can be passed directly from the server process to the client.
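A minimal sketch of that mapping step, close to what the stock gralloc module's mapper.cpp does (simplified; the framebuffer special case and error logging are omitted):

#include <sys/mman.h>
#include <errno.h>
#include <stdint.h>

// Map the whole shared region backing the handle, then point 'base' at this
// particular buffer using the offset carried inside the handle.
static int gralloc_map(private_handle_t* hnd, void** vaddr)
{
    // Address hint 0: the kernel chooses the virtual address, which may differ
    // in every process, but the pages are the same physical memory.
    void* mapped = mmap(0, hnd->size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, hnd->fd, 0);
    if (mapped == MAP_FAILED)
        return -errno;

    hnd->base = intptr_t(mapped) + hnd->offset;   // per-process virtual address
    *vaddr = (void*)hnd->base;
    return 0;
}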

3 GraphicBuffer (the buffer proxy object)

typedef struct android_native_buffer_t
{
    struct android_native_base_t common;
    int width;
    int height;
    int stride;
    int format;
    int usage;
    ...
    buffer_handle_t handle;
    ...
} android_native_buffer_t;

Inheritance chain:

GraphicBuffer : EGLNativeBase : android_native_buffer_t
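In code, this relationship corresponds roughly to the following declaration (paraphrased from ui/GraphicBuffer.h of that era; additional base classes and members are omitted):

class GraphicBuffer
    : public EGLNativeBase<
          android_native_buffer_t,      // the C-level buffer description above
          GraphicBuffer,
          LightRefBase<GraphicBuffer> >
{
    // width, height, stride, format, usage and handle are inherited from
    // android_native_buffer_t; GraphicBuffer adds allocation, mapping and
    // Parcel (flattening) logic on top of them.
    ...
};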

GraphicBuffer(const Parcel&) builds the local GraphicBuffer's native_buffer_t data from the native_buffer_t passed by the other side. Let's look at the call chain through which the client's Surface::lock obtains a buffer it can operate on:

Surface::lock(SurfaceInfo* other, Region* dirtyIn, bool blocking)
{
    int Surface::dequeueBuffer(android_native_buffer_t** buffer)
    {
        status_t Surface::getBufferLocked(int index, int usage)
        {
            sp<GraphicBuffer> buffer = s->requestBuffer(index, usage);
            {
                // BpSurface: the client-side Binder proxy of ISurface
                virtual sp<GraphicBuffer> requestBuffer(int bufferIdx, int usage)
                {
                    remote()->transact(REQUEST_BUFFER, data, &reply);
                    sp<GraphicBuffer> buffer = new GraphicBuffer(reply);
                    ...

Surface::lock creates a new GraphicBuffer object on the client. Using the principle described in section 1, this object reconstructs SurfaceFlinger's buffer_handle_t data as the client's own buffer_handle_t. Through the client's Surface object, GraphicBufferMapper can then mmap this client-side buffer_handle_t to obtain the starting address of the shared buffer.
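As a hedged illustration (the helper name lockSharedBuffer is ours; GraphicBuffer::lock is the public API, which forwards to GraphicBufferMapper and, through it, to the gralloc module's lock/mmap), the client can obtain a writable pointer like this:

#include <ui/GraphicBuffer.h>
#include <ui/Rect.h>
#include <hardware/gralloc.h>

using namespace android;

// Hypothetical helper: lock a GraphicBuffer received over Binder and return a
// CPU-writable pointer into the shared memory that SurfaceFlinger allocated.
void* lockSharedBuffer(const sp<GraphicBuffer>& buffer)
{
    void* vaddr = NULL;
    status_t err = buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN,
                                Rect(buffer->getWidth(), buffer->getHeight()),
                                &vaddr);
    if (err != NO_ERROR)
        return NULL;
    return vaddr;   // valid until buffer->unlock() is called
}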

4 Summary

In this part of Android, shared memory is used to manage the buffers related to display. The design has two layers: the upper layer is the buffer-management proxy GraphicBuffer and its related native_buffer_t; the lower layer is the allocation and management of the concrete buffers, plus the buffers themselves. The upper-layer objects can be passed frequently over the Binder; what is transferred is not the buffer itself, but a handle from which mmap yields a mapping address that points to the common physical memory.
