Binder IPC Mechanism


________________________________________

Source: http://www.angryredplanet.com/~hackbod/openbinder/docs/html/BinderProcessModel.html

The Binder uses a custom kernel module for inter-process communication. It is used in place of the standard Linux IPC facilities, so that we can efficiently model IPC operations as "thread migration": when a thread initiates an IPC to another process, it appears as if that thread hops over into the target process, executes the code there, and then hops back with the result.

 

The Binder IPC mechanism does not actually implement thread migration, however. Instead, the Binder's user-space code maintains a pool of available threads in each process, which are used to process incoming IPCs and to execute local events in that process. The kernel module emulates a thread migration model by ensuring that thread priorities propagate across processes and that transactions are dispatched correctly; in particular, if an IPC recurses back into the originating process, it is handled by its original thread.

 

In addition to IPC itself, the Binder kernel module is also responsible for tracking object references across processes. This involves mapping remote object references in one process to the real object in its host process, and making sure that objects are not destroyed as long as other processes hold references on them.

 

The rest of this document describes in detail how Binder IPC works. These details are not exposed to application developers, so they can be safely ignored.

 

Getting started

When a user-space thread wants to participate in Binder IPC (either to send an IPC to another process, or to receive an incoming IPC), the first thing it must do is open the driver supplied by the Binder kernel module. This gives the thread a file descriptor, which the kernel module uses to identify the initiators and recipients of IPCs.
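
In concrete terms, opening the driver might look like the minimal sketch below. This is not from the original document; it assumes the /dev/binder device node name used by OpenBinder.

#include <fcntl.h>   /* open(), O_RDWR */

/* Open the binder driver once; the resulting file descriptor
 * identifies this process to the kernel module for all
 * subsequent IPC. */
int binder_fd = open("/dev/binder", O_RDWR);
if (binder_fd < 0) {
    /* the binder kernel module is not available */
}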

All interaction with the IPC mechanism is done through this file descriptor, via a set of ioctl() commands. The main commands are:

• BINDER_WRITE_READ sends zero or more binder operations, then blocks waiting to receive incoming operations and results. (This is the same as doing a write() followed by a read() on the file descriptor, but more efficient.)

• BINDER_SET_WAKEUP_TIME sets the time at which the next user-space event is scheduled to occur in the calling process.

• BINDER_SET_IDLE_TIMEOUT sets how long a thread will remain idle (waiting for a new transaction) before it times out.

• BINDER_SET_REPLY_TIMEOUT sets how long a thread will wait for a reply before it times out.

• BINDER_SET_MAX_THREADS sets the maximum number of threads that the driver is allowed to create for this process's thread pool.

The key command is BINDER_WRITE_READ, which is the basis of all IPC operations. Before getting into its details, however, note that the driver expects user code to maintain a pool of threads waiting for incoming transactions. It is your responsibility to make sure there is always a thread available (up to the maximum number of threads you want) so that IPCs can be processed. The driver also depends on this thread pool to be woken up when a new asynchronous event (such as from an SHandler) must be processed in the local process.

BINDER_WRITE_READ

As mentioned above, the core functionality of the driver is encapsulated in the BINDER_WRITE_READ operation. The ioctl's data is this structure:

struct binder_write_read
{
    ssize_t     write_size;
    const void* write_buffer;
    ssize_t     read_size;
    void*       read_buffer;
};

Upon calling the driver, write_buffer contains a series of commands for it to perform, and upon return read_buffer has been filled in with a series of responses for the thread to execute. In general, the write buffer will consist of zero or more book-keeping commands (usually incrementing/decrementing object references) and end with a command that requires a response (such as sending an IPC transaction, or attempting to acquire a strong reference on a remote object). Likewise, the receive buffer will be filled with a series of book-keeping commands and end with either the result of the last written command, or a new nested command that needs to be processed.
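
Put together with the structure above, issuing the ioctl might look like this minimal sketch. It is not from the original document, and it assumes the binder driver header defining struct binder_write_read and the BINDER_WRITE_READ ioctl code has been included.

#include <stddef.h>     /* size_t */
#include <sys/ioctl.h>  /* ioctl() */

/* Send a buffer of commands and block until the driver fills the
 * read buffer with responses (or nested commands) to execute. */
static int do_write_read(int binder_fd,
                         const void *cmds, size_t cmds_size,
                         void *replies, size_t replies_size)
{
    struct binder_write_read bwr;

    bwr.write_size   = cmds_size;    /* commands for the driver to perform */
    bwr.write_buffer = cmds;
    bwr.read_size    = replies_size; /* space for commands to execute */
    bwr.read_buffer  = replies;

    return ioctl(binder_fd, BINDER_WRITE_READ, &bwr);
}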

Here are the commands that a process can send to the driver, with the data that follows each command in the buffer described alongside it:

enum BinderDriverCommandProtocol {
    bcNOOP = 0,
        /* No parameters! */

    bcTRANSACTION,
    bcREPLY,
        /* binder_transaction_data: the sent command. */

    bcACQUIRE_RESULT,
        /* int32: 0 if the last brATTEMPT_ACQUIRE was not successful.
           Else you have acquired a primary reference on the object. */

    bcFREE_BUFFER,
        /* void *: ptr to transaction data received on a read */

    bcINCREFS,
    bcACQUIRE,
    bcRELEASE,
    bcDECREFS,
        /* int32: descriptor */

    bcATTEMPT_ACQUIRE,
        /* int32: priority
           int32: descriptor */

    bcRESUME_THREAD,
        /* int32: thread ID */

    bcSET_THREAD_ENTRY,
        /* void *: thread entry function for new threads created to handle tasks
           void *: argument passed to those threads */

    bcREGISTER_LOOPER,
        /* No parameters.
           Register a spawned looper thread with the device. This must be
           called by the function that is supplied in bcSET_THREAD_ENTRY as
           part of its initialization with the binder. */

    bcENTER_LOOPER,
    bcEXIT_LOOPER,
        /* No parameters.
           These two commands are sent as an application-level thread
           enters and exits the binder loop, respectively. They are
           used so the binder can have an accurate count of the number
           of looping threads it has available. */

    bcCATCH_ROOT_OBJECTS,
        /* No parameters.
           Call this to have your team start catching root objects
           published by other teams that are spawned outside of the binder.
           When this happens, you will receive a brTRANSACTION with the
           tfRootObject flag set. (Note that this is distinct from receiving
           normal root objects, which are a brREPLY.) */

    bcKILL_TEAM
        /* No parameters.
           Simulate death of a kernel team. For debugging only. */
};
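
To make the buffer format concrete: each entry in the write buffer is a command code followed inline by the parameters listed above. The following sketch is not from the original document; the exact packing is an assumption based on those parameter lists, and do_write_read() is the helper sketched earlier.

#include <stdint.h>

/* Take and then drop a primary reference on a remote object. */
static void acquire_and_release(int binder_fd, uint32_t handle)
{
    uint32_t cmds[4];
    cmds[0] = bcACQUIRE;   /* acquire a primary reference... */
    cmds[1] = handle;      /*   int32: descriptor            */
    cmds[2] = bcRELEASE;   /* ...and release it again        */
    cmds[3] = handle;      /*   int32: descriptor            */

    /* Pure book-keeping: no reply data is expected back. */
    do_write_read(binder_fd, cmds, sizeof(cmds), NULL, 0);
}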

The most interesting commands are bcTRANSACTION and bcREPLY, which initiate an IPC transaction and reply to one, respectively. These commands are followed by this data structure:

enum transaction_flags {
    tfInline = 0x01,        // not yet implemented
    tfRootObject = 0x04,    // contents are the component's root object
    tfStatusCode = 0x08     // contents are a 32-bit status code
};

 

struct binder_transaction_data
{
    // The first two are only used for bcTRANSACTION and brTRANSACTION,
    // identifying the target and contents of the transaction.
    union {
        size_t  handle;     // target descriptor of command transaction
        void    *ptr;       // target descriptor of return transaction
    } target;
    uint32  code;           // transaction command

    // General information about the transaction.
    uint32  flags;
    int32   priority;       // requested/current thread priority
    size_t  data_size;      // number of bytes of data
    size_t  offsets_size;   // number of bytes of object offsets

    // If this transaction is inline, the data immediately
    // follows here; otherwise, it ends with a pointer to
    // the data buffer.
    union {
        struct {
            const void  *buffer;    // transaction data
            const void  *offsets;   // binder object offsets
        } ptr;
        uint8   buf[8];
    } data;
};

Thus, to initiate an IPC transaction, you essentially perform a BINDER_WRITE_READ ioctl whose write buffer contains a bcTRANSACTION command followed by a binder_transaction_data. The target of this structure is the handle of the object that is to receive the transaction (handles are discussed later), code tells the object what to do when it receives the transaction, priority is the thread priority at which the IPC should run, and there is a data buffer holding the transaction data, plus an (optional) additional offsets buffer of meta-data.
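
For instance, initiating a transaction might look like the hedged sketch below. It is not from the original document: it assumes the command code is written as a 32-bit value immediately followed by the binder_transaction_data, and the transaction code and payload are hypothetical (their meaning is defined by the target object).

#include <stdint.h>

static void send_transaction(int binder_fd, size_t target_handle)
{
    static const uint32_t args[2] = { 42, 7 };  /* payload for the target */

    struct {
        uint32_t command;
        struct binder_transaction_data tr;
    } writebuf;

    writebuf.command             = bcTRANSACTION;
    writebuf.tr.target.handle    = target_handle; /* remote object descriptor */
    writebuf.tr.code             = 0;             /* what the object should do */
    writebuf.tr.flags            = 0;
    writebuf.tr.priority         = 10;            /* priority to run the IPC at */
    writebuf.tr.data_size        = sizeof(args);
    writebuf.tr.offsets_size     = 0;             /* no embedded object references */
    writebuf.tr.data.ptr.buffer  = args;
    writebuf.tr.data.ptr.offsets = NULL;

    uint8_t readbuf[256];  /* will end with brREPLY (or brDEAD_REPLY) */
    do_write_read(binder_fd, &writebuf, sizeof(writebuf),
                  readbuf, sizeof(readbuf));
}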

From the target handle, the driver determines which process hosts that object and dispatches the transaction to a waiting thread in that process's thread pool (spawning a new thread if needed). That thread has been waiting in a BINDER_WRITE_READ ioctl() to the driver, so the driver returns to it with the commands to execute filled into its read buffer. These commands closely mirror the write commands, and most of them correspond to a matching write operation:

enum BinderDriverReturnProtocol {
    brERROR = -1,
        /* int32: error code */

    brOK = 0,
    brTIMEOUT,
    brWAKEUP,
        /* No parameters! */

    brTRANSACTION,
    brREPLY,
        /* binder_transaction_data: the received command. */

    brACQUIRE_RESULT,
        /* int32: 0 if the last bcATTEMPT_ACQUIRE was not successful.
           Else the remote object has acquired a primary reference. */

    brDEAD_REPLY,
        /* The target of the last transaction (either a bcTRANSACTION or
           a bcATTEMPT_ACQUIRE) is no longer with us. No parameters. */

    brTRANSACTION_COMPLETE,
        /* No parameters... always refers to the last transaction requested
           (including replies). Note that this will be sent even for
           asynchronous transactions. */

    brINCREFS,
    brACQUIRE,
    brRELEASE,
    brDECREFS,
        /* void *: ptr to binder */

    brATTEMPT_ACQUIRE,
        /* int32: priority
           void *: ptr to binder */

    brEVENT_OCCURRED,
        /* This is returned when the bcSET_NEXT_EVENT_TIME has elapsed.
           At this point the next event time is set to B_INFINITE_TIMEOUT,
           so you must send another bcSET_NEXT_EVENT_TIME command if you
           have another event pending. */

    brFINISHED
};

In our example, the receiving thread will find a brTRANSACTION command at the end of its read buffer. This command uses the same binder_transaction_data structure that was used to send the data: essentially the same information that was sent, but now made available to the local process.

The recipient's user-space code then hands the transaction to the target object, which executes it and returns its result. From that result, a new write buffer is built containing a bcREPLY command with a binder_transaction_data holding the result data. This is returned with a BINDER_WRITE_READ ioctl() to the driver, which sends the reply back to the originating process and leaves the thread waiting for the next transaction to perform.

The originating thread's own BINDER_WRITE_READ then finally returns, with a brREPLY command containing the reply data.

Note that while waiting for the reply, the originating thread may also receive brTRANSACTION commands. This represents recursion across processes: the receiving thread has called into an object that lives back in the originating process. It is the driver's responsibility to keep track of all active transactions, so it can dispatch recursive transactions back to the correct thread.
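
Putting the pieces together, a pool thread's receive loop might look roughly like the hedged sketch below. It is not from the original document; parse_next_command(), dispatch_to_object(), send_reply(), and handle_reply() are hypothetical helpers, and a real parser would track its position in the buffer and the sizes consumed.

#include <stdint.h>

static void looper_thread(int binder_fd)
{
    for (;;) {
        uint8_t readbuf[256];
        uint32_t cmd;
        struct binder_transaction_data tr;

        /* Block in the driver until there is work to do (an empty
         * write buffer, using do_write_read() sketched earlier). */
        do_write_read(binder_fd, NULL, 0, readbuf, sizeof(readbuf));

        while (parse_next_command(readbuf, &cmd, &tr)) {
            switch (cmd) {
            case brTRANSACTION:
                /* Hand the transaction to the local target object,
                 * then answer with a bcREPLY carrying the result. */
                dispatch_to_object(&tr);
                send_reply(binder_fd, &tr);
                break;
            case brREPLY:
                /* Result of a transaction this thread initiated. */
                handle_reply(&tr);
                break;
            default:
                /* Book-keeping commands (brINCREFS, brACQUIRE, ...). */
                break;
            }
        }
    }
}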

Object mapping and referencing

One of the driver's important responsibilities is mapping objects in one process so that they can be accessed from another. This is key both to the communication mechanism (targeting and referencing objects) and to the capability model (a process can only operate on remote objects it has been explicitly given knowledge of).

There are two distinct forms of an object reference: as an address in a process's memory space, and as an abstract 32-bit handle. These representations are mutually exclusive: all references to an object in the process that hosts it are in the form of an address, while all references to it in other processes are always in the form of a handle.

For example, consider the target field of binder_transaction_data. When sending a transaction, it contains a handle to the destination object (because you always send transactions to objects in other processes). The recipient of the transaction, however, sees the target as a pointer in its local address space. The driver maintains mappings between the pointers and handles of each process so that it can perform this translation.

We must also be able to send references to objects through transactions. This is done by placing the object reference (whether a local pointer or a remote handle) into the transaction's data buffer. The driver must translate every such reference into the corresponding reference in the receiving process, just as it does for the transaction target.

In order to translate references, the driver needs to know where they appear in the transaction data. This is where the additional offsets buffer comes in: it holds an array of indexes into the data buffer, describing where objects appear. The driver can then rewrite the buffered data, translating each reference from the sending process's object into the correct reference in the receiving process.
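
As a hedged illustration (not from the original document), a transaction carrying one embedded object reference might be laid out as follows. The encoding of the reference itself is an assumption; local_binder_ptr stands for the address of some local binder object, and tr is a binder_transaction_data being prepared as before.

#include <stddef.h>  /* offsetof() */
#include <stdint.h>

struct payload_t {
    uint32_t plain_value;   /* ordinary data; the driver leaves it alone  */
    void    *object;        /* local pointer; the driver rewrites it into
                             * a reference valid in the receiving process */
} payload = { 42, local_binder_ptr };

/* One entry per embedded reference: its byte index in the data buffer. */
size_t offsets[1] = { offsetof(struct payload_t, object) };

tr.data_size        = sizeof(payload);
tr.offsets_size     = sizeof(offsets);
tr.data.ptr.buffer  = &payload;
tr.data.ptr.offsets = offsets;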

Note that the driver knows nothing about a given Binder object until that object is first sent through the driver to another process. At that point, the driver adds the object's address to its mapping table and asks the hosting process to hold a reference on it. When no other process knows about the object any longer (they have all told the driver that they released their references), it is removed from the mapping table and the hosting process is told to release the driver's reference. This avoids maintaining the (fairly heavyweight) driver state for an object as long as it is used only within its local process.
