Original address: http://blog.csdn.net/universus/article/details/6211589
Binder is one of Android's inter-process communication (IPC) mechanisms. Linux already provides several IPC facilities: pipes, signals, traces, sockets, message queues, shared memory, and semaphores. This article explains in detail why Binder serves as the primary IPC mechanism on Android.
I. Introduction
Client-server communication is used everywhere, from Internet and database access to the internals of embedded handheld devices. On a smartphone platform, and on Android in particular, this communication model is ubiquitous because the platform aims to offer application developers a rich and varied set of capabilities: media playback, audio/video capture, and the many sensors that make a phone "smart" (accelerometer, compass, temperature, ambient light, and so on) are all managed by different servers. An application simply connects to these servers as a client and can deliver impressive functionality with little development time or effort. The widespread adoption of the client-server model poses a challenge to IPC mechanisms. Of the IPC facilities Linux currently supports, which include traditional pipes and System V IPC (message queues, shared memory, semaphores), only the socket natively supports the client-server communication model. One could, of course, layer a protocol on top of these primitives to obtain client-server semantics, but doing so increases system complexity, and in an environment as demanding and resource-constrained as a phone, reliability becomes hard to guarantee.
Transmission performance is another consideration. The socket is a general-purpose interface with low transmission efficiency and high overhead; it is mainly used for cross-network IPC and low-speed local communication. Message queues and pipes use a store-and-forward model: data is copied from the sender's buffer into a buffer allocated by the kernel, and then from the kernel buffer into the receiver's buffer, for at least two copies. Shared memory avoids copying altogether, but it is complex to control and hard to use correctly.
Table 1: Number of data copies for various IPC mechanisms

IPC mechanism                   | Data copies
Shared memory                   | 0
Binder                          | 1
Socket / pipe / message queue   | 2
Security is yet another concern. End users do not want a program downloaded from the Internet to secretly read their private data, connect to wireless networks, or keep low-level devices running until the battery is drained. Traditional IPC has no security measures of its own; it relies entirely on upper-layer protocols. First, the receiver in traditional IPC cannot obtain a reliable UID and PID (user ID and process ID) for the peer process, so it cannot verify the peer's identity. Android assigns each installed application its own UID, so a process's UID is an important identity credential. With traditional IPC, the sender can only fill its own UID and PID into the packet, which is unreliable and easily exploited by malicious programs; a trustworthy identity credential can only be attached by the IPC mechanism itself, inside the kernel. Second, the access points of traditional IPC are open, so no private channel can be established. A named pipe's name, a System V key, a socket's IP address, or a file name are all public: any program that knows the access point can connect to the other end, and there is no way to stop a malicious program from obtaining a connection by guessing the receiver's address.
For these reasons, Android needed a new IPC mechanism that satisfies its requirements for communication model, transmission performance, and security: Binder. Binder uses client-server communication; a transfer requires only one copy; the driver attaches the sender's UID/PID to each transaction; and it supports both real-name (registered) and anonymous Binders, giving it strong security.
II. Object-oriented Binder IPC
Binder uses client-server communication: one process acts as a server, providing services such as video/audio decoding, video capture, address book queries, network connections, and so on; multiple processes act as clients, sending requests to the server for the services they need. For client-server communication to work, two things are required. First, the server must have a well-defined access point, or address, at which it accepts client requests, and the client must be able to learn that address somehow. Second, a command/reply protocol must be defined to transmit data. In network communication, for example, the server's access point is the IP address plus port number of the server host, and the transport protocol is TCP. Seen from the server's side, a Binder is the access point the server provides for a particular service, and the client sends requests to the server through this "address". Seen from the client's side, a Binder is the entrance to a pipe leading to the server: to communicate with a server, the pipe must first be established and its entrance obtained.
Unlike other IPC mechanisms, Binder describes its access point, and the client-side entrances to it, in object-oriented terms: a Binder is an object in the server that provides a set of methods implementing requests for a service, much like the member functions of a class. The entrances spread across the clients can be viewed as "pointers" to the Binder object: once a client obtains such a "pointer", it can invoke the object's methods to access the server. From the client's point of view, calling a method through this Binder "pointer" is no different from calling any other local object through a pointer, even though the former's entity lives in a remote server while the latter's lives in local memory. "Pointer" is a C/C++ term; the more common word is "reference": the client accesses the server through a reference to the Binder. Another term from software engineering, "handle", can also describe how a Binder exists in a client. From the communication standpoint, the client-side Binder can also be seen as a "proxy" for the server-side Binder, serving the client locally on behalf of the remote server. This article uses the two widely used terms "reference" and "handle".
Introducing object-oriented thinking turns inter-process communication into method invocation through a reference to a Binder object. What makes Binder unique is that this object can be referenced across processes: its entity lives in one process, while references to it are scattered among processes throughout the system. Most attractively, such a reference can be strong or weak, like a reference in Java, and can be passed from one process to another, so that everyone can access the same server, just as an object or a reference can be assigned to another reference. Binder blurs process boundaries and fades out the notion of inter-process communication; the whole system appears to run as a single object-oriented program. The Binder objects, and the references dotted everywhere, are the glue that binds the applications together, which is the original meaning of the English word "binder".
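Concretely, the driver's user-space interface expresses this entity/reference duality in a single flat structure. The sketch below paraphrases the kernel's Binder UAPI header (<linux/android/binder.h>); exact field names and layout vary across kernel versions, with newer headers nesting the type field inside a header struct:

struct flat_binder_object {
    __u32 type;                  /* BINDER_TYPE_BINDER: a local entity;
                                    BINDER_TYPE_HANDLE: a reference to a
                                    remote entity */
    __u32 flags;
    union {
        binder_uintptr_t binder; /* in the owning process: the entity */
        __u32 handle;            /* everywhere else: an opaque handle */
    };
    binder_uintptr_t cookie;     /* extra data attached to the entity */
};

As such an object crosses a process boundary, the driver rewrites it: a "binder" leaving its home process becomes a "handle" in the receiver, which is how references to one entity get scattered across other processes.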
Of course, the object-oriented view applies only to applications. The Binder driver and the other kernel modules are implemented in C, where there is no concept of classes or objects; the driver simply provides the underlying support for this object-oriented style of inter-process communication.
III. Binder communication model
The Binder framework defines four roles: the server, the client, the ServiceManager (hereafter SMgr), and the Binder driver. The server, client, and SMgr run in user space; the driver runs in kernel space. The relationship among the four resembles the Internet: the server is a web server, the client is a client terminal, SMgr is the domain name server (DNS), and the driver is the router.
3.1 The Binder driver
Like the router, the Binder driver is obscure yet central to communication. Although it is called a "driver", it has nothing to do with any hardware device; it is merely implemented the same way a device driver is. It works in kernel mode and provides a series of low-level services: establishing Binder communication between processes, passing Binders from one process to another, managing Binder reference counts, and transferring the packets that make up each interaction.
3.2 ServiceManager and real-name Binders
Similar to DNS, SMgr's job is to convert a Binder name, given in character form, into a reference to that Binder in the client, so that a client can obtain a reference to a Binder entity in a server by name. A Binder registered with a name is called a real-name Binder, just as every website has a domain name in addition to an IP address. To register, the server creates a Binder entity, gives it a readable, easy-to-remember name in character form, and sends the Binder together with the name, as a packet, through the Binder driver to SMgr, asking it to register a Binder named, say, Zhang San, located in such-and-such a server. The driver creates a node for the entity in the kernel and a reference to that entity for SMgr; as the packet crosses the process boundary, the driver passes the name and the new reference on to SMgr. SMgr receives the packet, takes the name and the reference out of it, and fills them into a lookup table.
An attentive reader may spot a problem: SMgr is one process and the server is another, so when a server registers a Binder with SMgr, inter-process communication is inevitably involved. But we are in the middle of implementing inter-process communication, and here we are already using it; it is like saying an egg can hatch into a chicken, provided you first find a chicken to sit on the egg. Binder's solution is clever: create a chicken in advance to hatch the eggs. SMgr and the other processes also communicate via Binder: SMgr is the server side and owns its own Binder object (entity), while every other process is a client that needs a reference to this Binder in order to register, query, and obtain Binders. SMgr's Binder is special: it has no name and needs no registration. When a process registers itself as SMgr with the BINDER_SET_CONTEXT_MGR command, the driver automatically creates a Binder entity for it (this is the pre-made chicken). Furthermore, this Binder's reference number is fixed at 0 in all clients, so no other means of obtaining it is needed. That is, for a server to register its own Binder with SMgr, it must communicate with SMgr through reference number 0. By analogy with network communication, reference 0 is like the address of the DNS server: it must be configured in advance, manually or dynamically. Note that "client" here is relative to SMgr: a process may itself be a server providing services, yet it is still a client with respect to SMgr.
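A minimal sketch of how the "pre-made chicken" comes into being: a single ioctl issued by the process that wants to become SMgr. This follows the pattern of Android's servicemanager source; the header path and error handling are simplified:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

int become_context_manager(void)
{
    int fd = open("/dev/binder", O_RDWR);
    if (fd < 0)
        return -1;
    /* Ask the driver to make this process the context manager; the
     * driver creates the special Binder entity that every other
     * process reaches through the fixed reference number 0. */
    if (ioctl(fd, BINDER_SET_CONTEXT_MGR, 0) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}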
3.3 The client obtains a reference to a real-name Binder
Once a server has registered a Binder entity and its name with SMgr, a client can obtain a reference to the Binder by that name. The client likewise uses the reserved reference number 0 to request access from SMgr: "I would like a reference to the Binder named Zhang San." SMgr receives the request, reads the Binder name from the request packet, finds the corresponding entry in its lookup table, extracts the Binder reference from it, and sends that reference back as the reply to the requesting client. From an object-oriented standpoint, this Binder object now has two references: one in SMgr and one in the client that made the request. If more clients request the same Binder, more references to it will exist in the system, just as an object in Java can have multiple references. Moreover, these references are strong: they ensure the Binder entity is not released while any reference to it remains. Seen through the process above, SMgr resembles a railway ticket agency holding tickets for every train: anyone can buy a ticket there, that is, obtain a reference to a Binder.
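A sketch of how a client frames such a query in terms of the driver's interface. The transaction code SVC_MGR_CHECK_SERVICE and its value are an assumption taken from the servicemanager sources; real code must also serialize the name as a Parcel string and parse the reply, both elided here:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

#define SVC_MGR_CHECK_SERVICE 2   /* assumed value; see servicemanager */

static int lookup_service(int fd, const void *name, size_t name_len)
{
    struct {
        uint32_t cmd;
        struct binder_transaction_data txn;
    } __attribute__((packed)) w;

    memset(&w, 0, sizeof(w));
    w.cmd = BC_TRANSACTION;
    w.txn.target.handle = 0;      /* the reserved SMgr reference */
    w.txn.code = SVC_MGR_CHECK_SERVICE;
    w.txn.data.ptr.buffer = (binder_uintptr_t)name;
    w.txn.data_size = name_len;

    struct binder_write_read bwr = {
        .write_size   = sizeof(w),
        .write_buffer = (binder_uintptr_t)&w,
    };
    /* A subsequent BINDER_WRITE_READ with a read buffer collects the
     * BR_REPLY, whose payload carries a flat_binder_object holding
     * the client's new handle. */
    return ioctl(fd, BINDER_WRITE_READ, &bwr);
}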
3.4 Anonymous Binders
Not every Binder has to be registered with SMgr and advertised. A server can pass a newly created Binder entity to a client through an already established Binder connection (the established connection must, of course, have been built on a real-name Binder). Because such a Binder registers no name with SMgr, it is an anonymous Binder. The client receives a reference to the anonymous Binder and uses it to send requests to the entity in the server. An anonymous Binder creates a private channel between the two parties: as long as the server does not hand the anonymous Binder to any other process, no other process can obtain a reference to it by exhaustion or guessing, and so none can send requests to it.
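How does a Binder entity travel inside a packet? It is embedded in the transaction's data buffer, with its position recorded in a companion offsets array so the driver can find and translate it. A sketch under the same UAPI assumptions as above, where local_ptr and cookie stand for the server's private identifiers for the object:

#include <string.h>
#include <linux/android/binder.h>

static void pack_anonymous_binder(struct binder_transaction_data *txn,
                                  struct flat_binder_object *obj,
                                  binder_size_t *offset,
                                  void *local_ptr, void *cookie)
{
    memset(obj, 0, sizeof(*obj));
    obj->type   = BINDER_TYPE_BINDER;      /* an entity, not a handle */
    obj->binder = (binder_uintptr_t)local_ptr;
    obj->cookie = (binder_uintptr_t)cookie;

    *offset = 0;                           /* obj sits at byte 0 of the payload */
    memset(txn, 0, sizeof(*txn));
    txn->data_size        = sizeof(*obj);
    txn->offsets_size     = sizeof(*offset);
    txn->data.ptr.buffer  = (binder_uintptr_t)obj;
    txn->data.ptr.offsets = (binder_uintptr_t)offset;
}

On the way through the driver, the embedded object's type is rewritten to BINDER_TYPE_HANDLE, so the client receives an opaque handle rather than a raw pointer.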
IV. The Binder protocol (omitted in this excerpt)
V. The representation of Binder (omitted in this excerpt)
VI. Binder memory mapping and receive-buffer management
Setting Binder aside for a moment, consider how data travels from sender to receiver in a traditional IPC mechanism. Typically, the sender places the prepared data in a buffer and enters the kernel through a system call. The kernel service routine allocates memory in kernel space and copies the data from the sender's buffer into this kernel buffer. When the receiver reads, it too provides a buffer; the kernel copies the data from the kernel buffer into the receiver's buffer and wakes up the receiving thread, completing the transfer. This store-and-forward scheme has two defects. First, it is inefficient: the data is copied twice, user space to kernel space to user space. Linux implements these two cross-space copies with copy_from_user() and copy_to_user(); if high memory is involved, temporary page mappings must be set up and torn down, costing additional performance. Second, the buffer that receives the data must be supplied by the receiver, which has no idea how large a buffer is enough: it can either allocate as much space as it dares, or first call an API to receive the message header, learn the size of the body, and then allocate a buffer of the right size for the body. Neither approach is satisfactory; one wastes space, the other wastes time.
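An illustrative kernel-side sketch of this store-and-forward path, showing where the two copies happen. This is not real kernel code: enqueue(), dequeue(), and wake_up_receiver() are hypothetical helpers standing in for whatever queueing a given mechanism uses:

#include <linux/slab.h>
#include <linux/uaccess.h>

void enqueue(void *buf, size_t len);     /* hypothetical */
void *dequeue(size_t *len);              /* hypothetical, may block */
void wake_up_receiver(void);             /* hypothetical */

long ipc_send(const void __user *src, size_t len)
{
    void *kbuf = kmalloc(len, GFP_KERNEL);
    if (!kbuf)
        return -ENOMEM;
    if (copy_from_user(kbuf, src, len)) { /* copy #1: user -> kernel */
        kfree(kbuf);
        return -EFAULT;
    }
    enqueue(kbuf, len);                   /* park in the kernel queue */
    wake_up_receiver();
    return len;
}

long ipc_recv(void __user *dst, size_t len)
{
    void *kbuf = dequeue(&len);           /* blocks until data arrives */
    if (copy_to_user(dst, kbuf, len)) {   /* copy #2: kernel -> user */
        kfree(kbuf);
        return -EFAULT;
    }
    kfree(kbuf);
    return len;
}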
Binder adopts a new strategy: the Binder driver itself manages the data-receive buffers. Notice that the Binder driver implements the mmap() system call, which is unusual for a character device: mmap() is normally used on file systems backed by physical storage, whereas a character device like Binder, used purely for communication, has no physical medium and no inherent need to support mmap(). The Binder driver, of course, does not use it to map physical media into user space; it uses it to create the buffer pool for receiving data. First, look at how mmap() is used:
fd = open("/dev/binder", O_RDWR);
mmap(NULL, MAP_SIZE, PROT_READ, MAP_PRIVATE, fd, 0);
In this way, the Binder receiver obtains a receive buffer of MAP_SIZE bytes. mmap() returns the address of the mapping in user space, but this memory is managed by the driver: the user neither needs to nor can allocate from or write to it directly (the mapping type is PROT_READ, read-only).
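A slightly fuller, compilable version of the two lines above. MAP_SIZE here is an assumed figure chosen for illustration; Android's libbinder maps a pool on the order of 1 MB:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>

#define MAP_SIZE (128 * 1024)   /* assumed pool size for illustration */

int main(void)
{
    int fd = open("/dev/binder", O_RDWR);
    if (fd < 0) {
        perror("open /dev/binder");
        return 1;
    }
    /* Read-only for us; the driver fills it with incoming payloads. */
    void *pool = mmap(NULL, MAP_SIZE, PROT_READ, MAP_PRIVATE, fd, 0);
    if (pool == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("receive pool mapped at %p\n", pool);
    return 0;
}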
This mapped memory serves as a pool of buffers for receiving and storing data. As mentioned earlier, a received packet is described by a binder_transaction_data structure, but that is only the message header; the real payload lives in the memory pointed to by data.buffer. That memory does not need to be supplied by the receiver: it comes precisely from the pool mapped by mmap(). When data is copied in from the sender, the driver uses a best-fit algorithm to find a suitably sized chunk in the pool, based on the size of the packet, and copies the data there from the sender's buffer. Note that the memory for the binder_transaction_data structure itself, and for the other messages listed in Table 4 of the full article, must still be supplied by the receiver; but these are fixed in size and small in number, so they cause the receiver no inconvenience. The mapped pool must be large enough, because the receiver's thread pool may be handling several concurrent interactions, each of which takes a buffer from the pool; if the pool is exhausted, the consequences are unpredictable.
Where there is allocation, there must be release. After the receiver finishes processing a packet, it notifies the driver to release the memory region that data.buffer points to. As mentioned in the discussion of the Binder protocol, this is done with the BC_FREE_BUFFER command.
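A minimal sketch of returning a payload buffer to the pool, following the pattern used by servicemanager (error handling omitted):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* After consuming a BR_TRANSACTION, hand the driver-owned payload
 * buffer (the data.buffer we received) back to the mmap'ed pool. */
static void free_binder_buffer(int fd, binder_uintptr_t buffer)
{
    struct {
        uint32_t cmd;
        binder_uintptr_t buffer;
    } __attribute__((packed)) w = { BC_FREE_BUFFER, buffer };

    struct binder_write_read bwr = {
        .write_size   = sizeof(w),
        .write_buffer = (binder_uintptr_t)&w,
    };
    ioctl(fd, BINDER_WRITE_READ, &bwr);
}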
As the above shows, the driver takes over the most tedious part of the receiver's job: allocating and releasing payload buffers of unpredictable size. The receiver only has to provide buffers for the message headers, whose sizes are fixed and predictable. As for efficiency, because the memory allocated by mmap() is mapped into the receiver's user space, the overall effect is a single direct copy of the payload from the sender's user space to the receiver's user space, skipping the kernel staging step and improving performance. Incidentally, the Linux kernel has no function that copies directly from one user space to another; normally copy_from_user() must copy the data into kernel space and copy_to_user() must copy it out to the other user space. To achieve a user-space-to-user-space copy, the memory allocated by mmap() is mapped not only into the receiver's process but also into the kernel's address space. So calling copy_from_user() to copy the data "into kernel space" simultaneously lands it in the receiver's user space. This is the "secret" of Binder's single copy.
VII. Binder receive-thread management
Binder communication is in fact communication between threads located in different processes. Suppose process S is a server providing a Binder entity, and thread T1 in client process C1 sends a request to S through a reference to that Binder. To handle the request, S starts thread T2, while T1 waits for the returned data. T2 processes the request and returns the result to T1, which is then woken up with the result. In this process, T2 acts as T1's proxy inside S, carrying out the remote task on T1's behalf; to T1, it feels as if its own thread crossed over into S, executed a piece of code, and came back to C1. To make this crossing more convincing, the driver assigns some of T1's attributes to T2, in particular T1's priority (nice value), so that T2 completes the task in roughly the time T1 would have taken. Much of the literature describes this phenomenon as "thread migration", which is misleading: a thread simply cannot jump between processes, and apart from inheriting T1's priority, T2 has nothing in common with T1, neither identity, nor open files, nor stack size, nor signal handling, nor private data.
For a server process S, many clients may issue requests at the same time; to improve efficiency, a thread pool is usually created to process the incoming requests concurrently. How is concurrent processing implemented with a thread pool? That depends on the specific IPC mechanism. Take sockets: the server-side socket is set to listening mode, and a dedicated thread blocks in accept(), listening for connection requests from clients. This socket is like a hen that lays eggs: whenever a client request arrives, it produces an egg, a new socket returned by accept(). The listening thread then takes a worker thread from the pool and hands it the freshly laid egg; the worker carries out all subsequent business with the client through this new socket, as sketched below.
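A minimal sketch of this listen/accept pattern in C (here each connection simply gets its own detached thread rather than a pre-built pool; error handling omitted):

#include <pthread.h>
#include <stdint.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

static void *worker(void *arg)
{
    int conn = (int)(intptr_t)arg;  /* the "egg": one client connection */
    /* ... per-client request/reply loop would go here ... */
    close(conn);
    return NULL;
}

int serve(int port)
{
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family      = AF_INET,
        .sin_port        = htons(port),
        .sin_addr.s_addr = htonl(INADDR_ANY),
    };
    bind(ls, (struct sockaddr *)&addr, sizeof(addr));
    listen(ls, 16);

    for (;;) {
        int conn = accept(ls, NULL, NULL);  /* blocks until a client connects */
        pthread_t t;                        /* hand the new socket to a worker */
        pthread_create(&t, NULL, worker, (void *)(intptr_t)conn);
        pthread_detach(t);
    }
}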
For Binder, however, there is neither a listening mode nor any eggs: how is the thread pool managed? One simple approach would be to create a pile of threads up front, without further thought, each reading Binder data with the BINDER_WRITE_READ command. These threads would block on the wait queue the driver maintains for that Binder, and whenever data arrived the driver would wake one thread from the queue to handle it. This is simple and intuitive, and it saves managing a thread pool, but creating a pile of threads at the outset wastes resources. Instead, the Binder protocol introduces dedicated commands and messages to help applications manage the thread pool, including:
· BINDER_SET_MAX_THREADS
· BC_REGISTER_LOOPER
· BC_ENTER_LOOPER
· BC_EXIT_LOOPER
· BR_SPAWN_LOOPER
The first step in managing a thread pool is knowing how big the pool may grow, so the application tells the driver, via BINDER_SET_MAX_THREADS, the maximum number of threads it will create. Thereafter, whenever a thread is created, enters its main loop, or exits its main loop, it informs the driver with BC_REGISTER_LOOPER, BC_ENTER_LOOPER, or BC_EXIT_LOOPER respectively, so that the driver can keep an up-to-date record of the thread pool's state. Each time a thread returns from reading the Binder, the driver checks whether any idle threads remain. If none do, and the total number of threads has not reached the pool's maximum, it appends a BR_SPAWN_LOOPER message to the packet currently being read, telling the application that threads are about to run short and that it should start some more, lest the next request fail to get a timely response. As soon as the new thread starts, it again informs the driver of the updated state via the BC_xxx_LOOPER commands. This way, as long as threads have not been exhausted, there is always an idle thread waiting in the queue, ready to handle a request in time.
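A sketch of both ends of this dance, under the same UAPI assumptions as earlier (setup and dispatch details omitted):

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

/* Set the pool ceiling: the driver may append BR_SPAWN_LOOPER once
 * all registered threads are busy and the pool is below this limit. */
static void setup_thread_pool(int fd, uint32_t max_threads)
{
    ioctl(fd, BINDER_SET_MAX_THREADS, &max_threads);
}

/* Each worker announces itself before entering its read loop:
 * BC_ENTER_LOOPER for threads the app starts on its own,
 * BC_REGISTER_LOOPER for threads spawned on BR_SPAWN_LOOPER. */
static void worker_main(int fd)
{
    uint32_t cmd = BC_ENTER_LOOPER;
    struct binder_write_read bwr = {
        .write_size   = sizeof(cmd),
        .write_buffer = (binder_uintptr_t)&cmd,
    };
    ioctl(fd, BINDER_WRITE_READ, &bwr);
    /* ... loop: BINDER_WRITE_READ with a read buffer, dispatch BR_* ... */
}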
The Binder driver also applies a small optimization when starting worker threads. When thread T1 of process P1 sends a request to process P2, the driver checks whether T1 is itself in the middle of handling a request from some thread of P2 that it has not yet finished (no reply sent yet). This typically happens when two processes both hold Binder entities and send requests to each other. If the driver finds such a thread in P2, say T2, it asks T2 to handle T1's request. Since T2 has sent a request to T1 and not yet received the reply, T2 must be (or soon will be) blocked reading the reply packet; it may as well do something useful rather than sit idle. Moreover, if T2 is not a member of the thread pool, it shares some of the pool's workload, reducing the pool's thread consumption.
VIII. Packet receive queues and (thread) wait queue management
Normally, the receiving end of a data transfer has two queues: a packet receive queue and a (thread) wait queue, which together buffer the mismatch between supply and demand. When a supermarket has too much stock, goods pile up in the warehouse; when there are too many shoppers (threads), they line up at the checkout; the principle is the same here. In the driver, each process has a global receive queue, also called the to-do queue, which holds packets not destined for any particular thread, along with a global wait queue on which all threads waiting for data from the global receive queue sleep. In addition, each thread has its own private to-do queue, holding packets addressed specifically to that thread, and its own private wait queue, on which that thread alone waits for data from its private to-do queue. Although it is called a queue, a thread's private wait queue holds at most one thread: the thread itself. The sketch after this paragraph, loosely modeled on the driver's own bookkeeping, shows these four queues.
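A kernel-side sketch of the four queues in C, loosely following the driver's own binder_proc and binder_thread structures; names are simplified and many fields omitted:

#include <linux/list.h>
#include <linux/wait.h>

struct proc_sketch {                /* one per process, cf. binder_proc */
    struct list_head  todo;         /* global receive (to-do) queue */
    wait_queue_head_t wait;         /* threads parked on the global queue */
};

struct thread_sketch {              /* one per thread, cf. binder_thread */
    struct list_head  todo;         /* packets addressed to this thread */
    wait_queue_head_t wait;         /* at most one sleeper: the thread itself */
};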
Since packets carry no special marking when sent, how does the driver decide which packets go into the global to-do queue and which into a particular thread's to-do queue? Two rules apply. Rule 1: request packets sent by clients to a server are submitted to the server process's global to-do queue. There is one exception: the worker-thread startup optimization described in the previous section. Under that optimization, a request from T1 is delivered not to P2's global to-do queue but straight into T2's private to-do queue. Rule 2: reply packets to synchronous requests (packets sent with BC_REPLY) are delivered to the private to-do queue of the thread that issued the request. In the example above, if thread T1 of process P1 sends a synchronous request to thread T2 of process P2, T2's reply packet is delivered to T1's private to-do queue, not to P1's global to-do queue.
These implicit rules for which receive queue a packet enters also determine implicit rules for which wait queue a thread sleeps on: a thread should wait on the global wait queue for new tasks as long as it is not expecting a reply packet; otherwise it should wait on its private wait queue for the server's reply. In the example above, after T1 sends its synchronous request to T2, it must wait on its private wait queue rather than line up on P1's global wait queue; otherwise it will never receive T2's reply.
These implicit rules are constraints imposed on both sides of Binder communication. In applications they appear as thread consistency during synchronous interactions. (1) On the client side, the thread that waits for the reply must be the thread that sent the request; one thread cannot send the request packet while another waits to receive the reply, or the reply will never arrive. (2) On the server side, the thread that sends the reply packet must be the thread that received the request packet; otherwise the reply cannot reach the requesting thread. This is because the destination of a reply is never specified by the user: the driver records it in the thread that received the request, and if a different thread sends the reply, the driver has no idea where the reply should go.
Next, consider how the Binder driver delivers synchronous and asynchronous interactions. Recall the difference: in a synchronous interaction, the requesting side (client) blocks waiting for the answering side's (server's) reply after sending the request packet; in an asynchronous interaction, the sender considers the interaction finished as soon as the request packet is sent. For both kinds of request packet, the driver could simply dump them into the receiver's to-do queue indiscriminately. Instead, it restricts asynchronous interactions so that they yield to synchronous ones: as long as one asynchronous transaction for a given Binder entity is still outstanding, whether being processed by a thread or still waiting in some to-do queue, further asynchronous packets destined for that entity are not delivered to any to-do queue but are parked in the entity's asynchronous receive queue inside the driver (the async_todo field of the Binder node). Synchronous transactions remain unrestricted and go directly into the to-do queues. Only when the outstanding asynchronous transaction completes may the next one leave the asynchronous queue and enter a to-do queue. The rationale: a synchronous requester is blocked waiting, so its request must be handled quickly to keep the requester responsive, whereas an asynchronous interaction is "fire-and-forget", and a slight delay blocks no one. The dedicated queue therefore buffers an excess of asynchronous interactions, preventing a burst of them from overwhelming the server's processing capacity or exhausting the thread pool, which would starve the synchronous interactions.
IX. Summary
Binder uses client-server communication; it is secure, simple, and efficient. Together with its object-oriented design, its distinctive receive-buffer management, and its thread-pool management model, this has made it the backbone of inter-process communication on Android.