Android binder IPC


The binder communicates between processes using a small custom kernel module. Binder IPC is used instead of the standard Linux IPC facilities (named pipes, sockets, signals, and so on) so that IPC operations can be modeled efficiently as "thread migration":
an IPC between processes looks as if the thread instigating the IPC has hopped over to the destination process to execute the code there, and then hopped back with the result.

 

The binder IPC mechanism itself, however, is not actually implemented using thread migration. Instead, the binder's user-space code maintains a pool of available threads in each process, which are used to process incoming IPCs and execute local events in that
process. The kernel module emulates a thread-migration model by propagating thread priorities across processes as IPCs are dispatched, and by ensuring that, if an IPC recurses back into an originating process, it is handled by its originating thread.

 

In addition to IPC itself, the binder's kernel module is also responsible for tracking object references across processes. This involves mapping remote object references in one process to the real object in its host process, and making sure that objects
are not destroyed as long as other processes hold references to them.

 

The rest of this document will describe in detail how binder IPC works.

0. Getting started

When a user-space process wants to take part in binder IPC (either to send an IPC to another process or to receive an incoming IPC), the first thing it must do is open the device supplied by the binder kernel module. This associates a file descriptor
with all threads of the process, which is used by the kernel module to identify the initiators and recipients of binder IPCs.
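
As a concrete illustration, here is a minimal sketch of this first step, assuming the device node is /dev/binder and the kernel UAPI header <linux/android/binder.h> is installed (older staging kernels exposed the same definitions under a different path):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>   /* binder ioctl numbers and structures */

int main(void)
{
    /* Opening the device is what ties this process (all of its threads)
     * to the binder driver. */
    int fd = open("/dev/binder", O_RDWR);
    if (fd < 0) {
        perror("open /dev/binder");
        return 1;
    }

    /* A harmless first ioctl: ask the driver for its protocol version. */
    struct binder_version vers;
    if (ioctl(fd, BINDER_VERSION, &vers) < 0)
        perror("BINDER_VERSION");
    else
        printf("binder protocol version: %d\n", vers.protocol_version);

    close(fd);
    return 0;
}

In the real framework this step is performed once per process by libbinder's ProcessState, which also mmap()s the descriptor so the driver has somewhere to place incoming transaction data.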

 

It is through this file descriptor that all interaction with the IPC mechanism happens, via a small set of ioctl() commands. The main commands are listed below (a short usage sketch follows the list):

BINDER_WRITE_READ: Sends zero or more binder operations, then blocks waiting to receive incoming operations and return with a result. (This is the same as doing a normal write() followed by a read() on the file descriptor, just a little more efficient.)

BINDER_SET_IDLE_TIMEOUT: Sets the time period threads will remain idle (waiting for a new incoming transaction) before they time out.

BINDER_SET_MAX_THREADS: Sets the maximum number of threads that the driver is allowed to create for that process's thread pool.

BINDER_SET_IDLE_PRIORITY: Sets the priority of idle threads.

BINDER_SET_CONTEXT_MGR: Sets the current process as the Service Manager (the Service Manager is explained later).

BINDER_THREAD_EXIT: Destroys one thread.

BINDER_VERSION: Gets the version info of the binder IPC driver.
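
As a small usage sketch of the commands above (binder_setup() is a hypothetical helper name and the thread-pool limit of 4 is an arbitrary illustrative value):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* Hypothetical helper: basic per-process configuration of the binder driver,
 * given an already-open /dev/binder descriptor. */
int binder_setup(int fd)
{
    uint32_t max_threads = 4;   /* arbitrary illustrative limit */

    /* Cap the number of extra looper threads the driver may request
     * for this process's thread pool (BINDER_SET_MAX_THREADS above). */
    if (ioctl(fd, BINDER_SET_MAX_THREADS, &max_threads) < 0)
        return -1;

    return 0;
}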

 

The core functionality of the binder IPC driver is encapsulated in the BINDER_WRITE_READ operation. The ioctl's data is this structure:

 

struct binder_write_read {
    signed long   write_size;
    signed long   write_consumed;
    unsigned long write_buffer;
    signed long   read_size;
    signed long   read_consumed;
    unsigned long read_buffer;
};

 

Upon calling the driver, the write_buffer contains a series of commands for it to perform, and upon return the read_buffer is filled in with a series of responses for the thread to execute. write_size and read_size give the total size of the write_buffer
and read_buffer respectively, while write_consumed and read_consumed indicate how much of each buffer the binder IPC driver has processed. In general, the write buffer will consist of zero or more book-keeping commands (usually incrementing or decrementing object references)
and end with a command requiring a response (such as sending an IPC transaction or attempting to acquire a strong reference on a remote object). Likewise, the read buffer will be filled with a series of book-keeping commands and end with either the result of
the last written command or a command to perform a new nested operation.
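
As a sketch of this calling convention (using the current UAPI field types, where the buffer fields are binder_uintptr_t rather than the plain longs shown above), here is a single BINDER_WRITE_READ call whose write buffer carries one payload-less command, BC_ENTER_LOOPER, and whose read buffer is handed to the driver for return commands; enter_looper() is a hypothetical helper name:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* Announce this thread to the driver's thread pool and wait for return
 * commands.  With a non-zero read_size the ioctl blocks until the driver
 * has something for this thread to do. */
int enter_looper(int fd)
{
    uint32_t cmd = BC_ENTER_LOOPER;        /* a command with no payload */
    uint32_t readbuf[32];                  /* filled in with return commands */
    struct binder_write_read bwr;

    memset(&bwr, 0, sizeof(bwr));
    bwr.write_buffer = (binder_uintptr_t)(uintptr_t)&cmd;
    bwr.write_size   = sizeof(cmd);
    bwr.read_buffer  = (binder_uintptr_t)(uintptr_t)readbuf;
    bwr.read_size    = sizeof(readbuf);

    if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0)
        return -1;

    /* write_consumed / read_consumed now say how many bytes the driver
     * actually parsed from, or wrote into, each buffer. */
    return 0;
}

Note that before the driver can deliver actual transaction payloads to such a thread, the process must also have mmap()ed the binder descriptor to provide the receive buffer; that step is omitted here.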

 

A process can send the following commands to the binder IPC driver:

 

enum BinderDriverCommandProtocol {
    BC_TRANSACTION = _IOW_BAD('c', 0, struct binder_transaction_data),
    BC_REPLY = _IOW_BAD('c', 1, struct binder_transaction_data),
    BC_ACQUIRE_RESULT = _IOW_BAD('c', 2, int),
    BC_FREE_BUFFER = _IOW_BAD('c', 3, int),
    BC_INCREFS = _IOW_BAD('c', 4, int),
    BC_ACQUIRE = _IOW_BAD('c', 5, int),
    BC_RELEASE = _IOW_BAD('c', 6, int),
    BC_DECREFS = _IOW_BAD('c', 7, int),
    BC_INCREFS_DONE = _IOW_BAD('c', 8, struct binder_ptr_cookie),
    BC_ACQUIRE_DONE = _IOW_BAD('c', 9, struct binder_ptr_cookie),
    BC_ATTEMPT_ACQUIRE = _IOW_BAD('c', 10, struct binder_pri_desc),
    BC_REGISTER_LOOPER = _IO('c', 11),
    BC_ENTER_LOOPER = _IO('c', 12),
    BC_EXIT_LOOPER = _IO('c', 13),
    BC_REQUEST_DEATH_NOTIFICATION = _IOW_BAD('c', 14, struct binder_ptr_cookie),
    BC_CLEAR_DEATH_NOTIFICATION = _IOW_BAD('c', 15, struct binder_ptr_cookie),
    BC_DEAD_BINDER_DONE = _IOW_BAD('c', 16, void *),
};

 

The most interesting commands are BC_TRANSACTION and BC_REPLY, which initiate an IPC transaction and return a reply for a transaction, respectively.

 

To initiate an IPC transaction, you essentially perform a BINDER_WRITE_READ ioctl with the write buffer containing a BC_TRANSACTION command followed by a binder_transaction_data structure:

 

struct binder_transaction_data {
    union {
        size_t  handle;
        void   *ptr;
    } target;
    void         *cookie;
    unsigned int  code;

    unsigned int  flags;
    pid_t         sender_pid;
    uid_t         sender_euid;
    size_t        data_size;
    size_t        offsets_size;

    union {
        struct {
            const void *buffer;
            const void *offsets;
        } ptr;
        uint8_t buf[8];
    } data;
};

 

In this structure, "target" is the handle of the binder object that should receive the transaction (the relationship between target.handle and target.ptr is explained later), "code" tells the object what to do when it receives the transaction, the IPC runs at a
priority propagated from the sending thread (as described earlier), and there is a "data" buffer containing the transaction data, as well as an (optional) additional "offsets" buffer of meta-data.
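
A hedged sketch of the write side of such a transaction follows. send_transaction() is a hypothetical helper, the handle and code values come from the caller, and the field types follow the current UAPI header (binder_uintptr_t / binder_size_t) rather than the older ones listed above:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* One BC_TRANSACTION with a flat argument buffer and no embedded binder
 * objects (offsets_size == 0). */
int send_transaction(int fd, uint32_t handle, uint32_t code,
                     const void *args, size_t args_len)
{
    /* The driver parses the write buffer as a packed stream of
     * { 32-bit command, payload } pairs, hence the packed struct. */
    struct {
        uint32_t cmd;
        struct binder_transaction_data txn;
    } __attribute__((packed)) writebuf;

    uint32_t readbuf[64];
    struct binder_write_read bwr;

    memset(&writebuf, 0, sizeof(writebuf));
    writebuf.cmd                 = BC_TRANSACTION;
    writebuf.txn.target.handle   = handle;    /* which remote object to call */
    writebuf.txn.code            = code;      /* what to ask that object to do */
    writebuf.txn.data_size       = args_len;
    writebuf.txn.data.ptr.buffer = (binder_uintptr_t)(uintptr_t)args;

    memset(&bwr, 0, sizeof(bwr));
    bwr.write_buffer = (binder_uintptr_t)(uintptr_t)&writebuf;
    bwr.write_size   = sizeof(writebuf);
    bwr.read_buffer  = (binder_uintptr_t)(uintptr_t)readbuf;
    bwr.read_size    = sizeof(readbuf);

    /* Blocks until the driver has return commands for us, typically
     * BR_TRANSACTION_COMPLETE and eventually BR_REPLY (see below). */
    return ioctl(fd, BINDER_WRITE_READ, &bwr);
}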

 

The "offsets" meta-data consists of zero or more binder objects. The structure of a binder object is as follows:

 

struct binder_object {
    uint32_t  type;
    uint32_t  flags;
    void     *pointer;
    void     *cookie;
};

The possible types of binder objects are:

 

enum {
    BINDER_TYPE_BINDER      = B_PACK_CHARS('s', 'b', '*', B_TYPE_LARGE),
    BINDER_TYPE_WEAK_BINDER = B_PACK_CHARS('w', 'b', '*', B_TYPE_LARGE),
    BINDER_TYPE_HANDLE      = B_PACK_CHARS('s', 'h', '*', B_TYPE_LARGE),
    BINDER_TYPE_WEAK_HANDLE = B_PACK_CHARS('w', 'h', '*', B_TYPE_LARGE),
    BINDER_TYPE_FD          = B_PACK_CHARS('f', 'd', '*', B_TYPE_LARGE),
};
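
To make the relationship between the data and offsets buffers concrete, here is a small layout sketch. It defines its own copy of the object structure from the listing above (later kernels call the equivalent structure flat_binder_object), and the type value is taken as a parameter so that it comes from the enum above; the payload layout, field names, and the value 42 are purely illustrative:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Local copy of the binder object layout shown above. */
struct binder_object {
    uint32_t  type;       /* one of the BINDER_TYPE_* values above */
    uint32_t  flags;
    void     *pointer;    /* local object address (binder) or handle */
    void     *cookie;
};

/* Illustrative transaction payload: one plain argument plus one object. */
struct example_payload {
    uint32_t             some_argument;
    struct binder_object obj;
};

/* Fill the data buffer and the matching offsets array.  In the transaction,
 * data.ptr.buffer would point at *payload (data_size = sizeof(*payload)) and
 * data.ptr.offsets at offsets (offsets_size = 1 * sizeof(size_t)). */
void fill_payload(struct example_payload *payload, size_t offsets[1],
                  uint32_t obj_type, void *local_object)
{
    memset(payload, 0, sizeof(*payload));
    payload->some_argument = 42;
    payload->obj.type      = obj_type;      /* e.g. BINDER_TYPE_BINDER */
    payload->obj.pointer   = local_object;

    /* Each offsets entry is the byte offset of one binder_object inside
     * the data buffer, so the driver knows what to translate. */
    offsets[0] = offsetof(struct example_payload, obj);
}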

 

Given the target handle, the binder IPC driver determines which process the object lives in and dispatches the transaction to one of the waiting threads in that process's thread pool, spawning a new thread if needed (the maximum number of threads is set by the
BINDER_SET_MAX_THREADS ioctl command). That thread is waiting in a BINDER_WRITE_READ ioctl to the driver, and so returns with its read buffer filled in with the commands it needs to execute. These return commands are very similar to the write commands,
for the most part corresponding to write operations on the other side:

 

enum BinderDriverReturnProtocol {
    BR_ERROR = _IOR_BAD('r', 0, int),
    BR_OK = _IO('r', 1),
    BR_TRANSACTION = _IOR_BAD('r', 2, struct binder_transaction_data),
    BR_REPLY = _IOR_BAD('r', 3, struct binder_transaction_data),
    BR_ACQUIRE_RESULT = _IOR_BAD('r', 4, int),
    BR_DEAD_REPLY = _IO('r', 5),
    BR_TRANSACTION_COMPLETE = _IO('r', 6),
    BR_INCREFS = _IOR_BAD('r', 7, struct binder_ptr_cookie),
    BR_ACQUIRE = _IOR_BAD('r', 8, struct binder_ptr_cookie),
    BR_RELEASE = _IOR_BAD('r', 9, struct binder_ptr_cookie),
    BR_DECREFS = _IOR_BAD('r', 10, struct binder_ptr_cookie),
    BR_ATTEMPT_ACQUIRE = _IOR_BAD('r', 11, struct binder_pri_ptr_cookie),
    BR_NOOP = _IO('r', 12),
    BR_SPAWN_LOOPER = _IO('r', 13),
    BR_FINISHED = _IO('r', 14),
    BR_DEAD_BINDER = _IOR_BAD('r', 15, void *),
    BR_CLEAR_DEATH_NOTIFICATION_DONE = _IOR_BAD('r', 16, void *),
    BR_FAILED_REPLY = _IO('r', 17),
};

 

The processing thread will come back with a BR_TRANSACTION command at the end of its read buffer. This command uses the same binder_transaction_data structure that was used to send the data, basically containing the same information that was sent, but now accessible
in the local process's address space.

 

The recipient, in user space, will then hand this transaction over to the target object for it to execute and return its result. Upon getting the result, a new write buffer is created containing the BC_REPLY command with a binder_transaction_data structure
holding the resulting data. This is returned with a BINDER_WRITE_READ ioctl() on the driver, sending the reply back to the original process and leaving the thread waiting for the next action to perform.
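
A compressed sketch of that receive-and-reply loop follows. It assumes fd is an open binder descriptor that has also been mmap()ed (which the real driver requires before it can deliver transaction data), dispatch_to_object() is a hypothetical stand-in for running the target object's code, and most book-keeping return commands are skipped generically using the payload size encoded in the command number:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* Hypothetical dispatcher: executes the object named by txn->target.ptr /
 * txn->cookie and fills in a reply binder_transaction_data. */
void dispatch_to_object(const struct binder_transaction_data *txn,
                        struct binder_transaction_data *reply);

/* Send BC_FREE_BUFFER (returning the received buffer to the driver) followed
 * by BC_REPLY carrying the result, in a write-only BINDER_WRITE_READ. */
static void send_reply(int fd, const struct binder_transaction_data *txn,
                       const struct binder_transaction_data *reply)
{
    struct {
        uint32_t         cmd_free;
        binder_uintptr_t buffer;             /* payload of BC_FREE_BUFFER */
        uint32_t         cmd_reply;
        struct binder_transaction_data txn;  /* payload of BC_REPLY */
    } __attribute__((packed)) out;

    out.cmd_free  = BC_FREE_BUFFER;
    out.buffer    = txn->data.ptr.buffer;
    out.cmd_reply = BC_REPLY;
    out.txn       = *reply;

    struct binder_write_read bwr;
    memset(&bwr, 0, sizeof(bwr));
    bwr.write_buffer = (binder_uintptr_t)(uintptr_t)&out;
    bwr.write_size   = sizeof(out);
    ioctl(fd, BINDER_WRITE_READ, &bwr);      /* read_size == 0: write only */
}

/* Walk the read buffer returned by a BINDER_WRITE_READ call. */
void handle_read_buffer(int fd, const uint8_t *buf, size_t consumed)
{
    const uint8_t *p = buf, *end = buf + consumed;

    while (p + sizeof(uint32_t) <= end) {
        uint32_t cmd;
        memcpy(&cmd, p, sizeof(cmd));
        p += sizeof(cmd);

        switch (cmd) {
        case BR_NOOP:
        case BR_TRANSACTION_COMPLETE:
            break;                           /* no payload, nothing to do */
        case BR_TRANSACTION: {
            struct binder_transaction_data txn, reply;
            memcpy(&txn, p, sizeof(txn));
            p += sizeof(txn);
            memset(&reply, 0, sizeof(reply));
            dispatch_to_object(&txn, &reply);
            send_reply(fd, &txn, &reply);
            break;
        }
        default:
            p += _IOC_SIZE(cmd);             /* skip other commands' payloads */
            break;
        }
    }
}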

 

The original thread finally returns from its own BINDER_WRITE_READ with a BR_REPLY command containing the reply data.

 

Note that the original thread may also receive BR_TRANSACTION commands while it is waiting for a reply. This represents a recursion across processes: the processing thread making a call onto an object back in the original process. It is the responsibility
of the driver to keep track of all active transactions, so it can dispatch transactions to the correct thread when recursion happens.

 

To summarize, a simple transaction between two processes proceeds through the steps described above: BC_TRANSACTION from the client, BR_TRANSACTION delivered to a thread in the target process, BC_REPLY from that thread, and finally BR_REPLY back to the client.

1. What's the service manager?

Usually, a process that provides binder objects maintains several services; that is why such a process is sometimes called a "service provider". The Service Manager works as a special service provider and is known by all other processes: target.handle
0 is reserved for the Service Manager. The source code of the Service Manager is under the frameworks/base/cmds/servicemanager directory.

 

(1) Service Provider: a process that provides services which can be accessed by other processes. Each service is identified by one binder object.

(2) Service Manager: actually a special service provider, which collects all services provided by other processes. That is to say, in order to make its services visible, a process should register them with the Service
Manager. Only one Service Manager is allowed in the binder IPC mechanism.

(3) Any process, including the service provider itself, can access services provided by a service provider. Of course, the process should query the Service Manager to get the desired service.

 

A process issuing the BINDER_SET_CONTEXT_MGR ioctl notifies the kernel driver that the current process is the Service Manager.
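
In code, becoming the context manager is a single ioctl on the open binder descriptor, essentially what the real servicemanager does at startup (sketch only; error handling, the mmap of the descriptor, and the subsequent command loop are omitted):

#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* Claim the context-manager role so other processes can reach this one
 * through the reserved handle 0.  Only one process can succeed. */
int become_context_manager(int fd)
{
    return ioctl(fd, BINDER_SET_CONTEXT_MGR, 0);
}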

The interaction between processes and the Service Manager works as follows: process B, a service provider, registers a service with the Service Manager; process A then queries the Service Manager for process B's service and communicates with process B directly.

 

2. What does the binder IPC driver do?

The binder IPC driver works as the bridge between two processes in IPC communication. Its source code is located in the kernel's drivers/staging/android/ directory. The driver creates a miscellaneous device (major number 10): /dev/binder.

The relationships between the major structures are described below.

From these structures, you can see that the processes and binder objects involved in IPC communication are tracked. The binder_proc structure is used to represent one process. Once a process opens the binder device (/dev/binder), a binder_proc structure
is created and the basic properties (PID, priority, number of threads, and so on) are recorded. One thread in a process's thread pool is represented by the binder_thread structure. All threads in a process share the same binder_proc structure.

The binder_node structure describes a binder object provided by a process, and the binder_ref structure represents a reference to a binder object. If a process places a BINDER_TYPE_BINDER or BINDER_TYPE_WEAK_BINDER object in a transaction,
a binder_node structure is created, with the object's "pointer" value saved in the node's ptr field. If a process places a BINDER_TYPE_HANDLE or BINDER_TYPE_WEAK_HANDLE object in a transaction, a binder_ref structure
is created in the local process, with the handle value from the object saved in the reference's desc field.

The binder_transaction structure is used to describe a transaction between two processes. The transaction that a thread is currently handling is recorded in the transaction_stack field of its binder_thread structure.
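
These relationships can be summarized in a simplified sketch. Field names loosely follow the staging driver but are heavily abridged; the real structures use red-black trees, lists, work queues, and reference counts that are omitted here:

#include <stdint.h>

struct binder_proc;                 /* one per process that opened /dev/binder */
struct binder_transaction;

struct binder_thread {              /* one per thread in a process's pool */
    struct binder_proc        *proc;              /* owning process */
    struct binder_transaction *transaction_stack; /* transaction(s) being handled */
    int                        pid;
};

struct binder_node {                /* a real binder object living in 'proc' */
    struct binder_proc *proc;
    void               *ptr;        /* the user-space pointer from the transaction */
    void               *cookie;
};

struct binder_ref {                 /* a process's reference to a remote node */
    struct binder_proc *proc;       /* the process holding the reference */
    struct binder_node *node;       /* the object it refers to */
    uint32_t            desc;       /* the handle value seen by user space */
};

struct binder_transaction {         /* one in-flight IPC */
    struct binder_thread      *from;       /* sending thread */
    struct binder_proc        *to_proc;    /* receiving process */
    struct binder_thread      *to_thread;  /* receiving thread, once dispatched */
    struct binder_transaction *from_parent;/* stacking for recursive IPCs */
};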

As an example, consider how the binder_node and binder_ref structures are created during transactions between processes and the Service Manager. Suppose process B is a service provider and process A creates a transaction to process B. There is one binder_node in the Service Manager (the Service Manager node) and one binder_node in process B. Process A ends up holding two binder_ref structures (one for the Service Manager node and one for the node in process B), while the Service Manager holds a single binder_ref to the node in process B. In order to communicate with process B, process A should always supply that reference's handle (a BINDER_TYPE_HANDLE target, handle 1) in its further transactions.

 

3. Services in Android

There are about 50 services provided in the Android system in total. You can use the "service list" command to check all of them. The services related to the audio system are
media.audio_policy and media.audio_flinger. Those services are created by the mediaserver, whose source code is under the frameworks/base/media/mediaserver directory.
