Related source code:
- frameworks/base/core/java/android/os/Handler.java
- frameworks/base/core/java/android/os/Looper.java
- frameworks/base/core/java/android/os/Message.java
- frameworks/base/core/java/android/os/MessageQueue.java
- libcore/luni/src/main/java/java/lang/ThreadLocal.java
- Why do I need a messaging mechanism?
First, we know there are two particularly important mechanisms in Android: one is Binder, the other is the message mechanism (Handler + Looper + MessageQueue). After all, Android interaction is message-driven.
- Why does Android need such a message mechanism?
Sometimes we need to perform I/O on a worker thread, perhaps reading a file or accessing the network. When the time-consuming operation completes, the UI usually needs to change, but the Android development rules forbid updating the UI from a non-UI thread; doing so throws an exception. At this point you can switch the UI update back to the main thread with a Handler. So, in essence, Handler is not dedicated to updating the UI; it is simply the tool developers most often use to do so.
- Why can't you access the UI in a child thread?
This is because Android's UI controls are not thread-safe, and concurrent access from multiple threads can leave them in an unpredictable state. The UI controls deliberately use no locks, for two reasons: first, locking would complicate the logic of UI access; second, locking would reduce the efficiency of UI access, because a lock blocks the execution of other threads.
Given these two drawbacks, the simplest and most efficient approach is a single-threaded model for UI operations. This is not burdensome for developers; it only requires a Handler to switch UI access onto the right thread.
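To make this concrete, here is a minimal sketch of handing a UI update back to the main thread with a Handler; the view field and the fake I/O method are invented for illustration:

```java
import android.os.Handler;
import android.os.Looper;
import android.widget.TextView;

public class DownloadExample {
    // A Handler bound to the main thread's Looper; any Runnable posted to it
    // runs on the main thread, where UI access is allowed.
    private final Handler mainHandler = new Handler(Looper.getMainLooper());

    // Hypothetical view field, just for illustration.
    private TextView statusView;

    public void startWork() {
        new Thread(() -> {
            String result = readFileOrNetwork();   // time-consuming work off the main thread
            // Switch back to the main thread to touch the UI.
            mainHandler.post(() -> statusView.setText(result));
        }).start();
    }

    private String readFileOrNetwork() {
        return "done"; // placeholder for real I/O
    }
}
```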
2. Message Mechanism overview
Message: Messages are divided into hardware-generated messages and software-generated messages
MessageQueue: the message queue. Its main jobs are adding messages to the message pool (enqueueMessage) and taking messages out of the pool (next).
Handler: the message helper class. Its main jobs are sending message events to the message pool (sendMessage) and handling the corresponding message events (handleMessage).
Looper: loops continuously (loop), distributing each message to its target Handler.
You first create a Handler object on the main thread and override Handler.handleMessage(). Calling sendMessage() saves the message into the MessageQueue. Looper keeps taking pending messages out of the MessageQueue and finally dispatches them back to handleMessage(); this is an infinite loop. If there is no new message to process, Looper enters a wait state until one arrives.
Simply put,
- A Looper contains a MessageQueue.
- The MessageQueue holds a set of Messages waiting to be processed.
- Each Message references the Handler that will handle it.
- A Handler holds references to its Looper and MessageQueue.
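To make these relationships concrete, here is a minimal sketch, with an invented message code and progress payload: a Handler created against the main Looper receives, in handleMessage() on the main thread, the Messages that a worker thread enqueues with sendMessage():

```java
import android.os.Handler;
import android.os.Looper;
import android.os.Message;

public class CounterExample {
    private static final int MSG_PROGRESS = 1; // hypothetical message code

    // Bound to the main Looper, so it enqueues into the main thread's MessageQueue.
    private final Handler handler = new Handler(Looper.getMainLooper()) {
        @Override
        public void handleMessage(Message msg) {
            if (msg.what == MSG_PROGRESS) {
                // Runs on the main thread when Looper.loop() dispatches the message.
                int percent = msg.arg1;
                // ... update the UI with `percent` here ...
            }
        }
    };

    public void startWork() {
        new Thread(() -> {
            for (int i = 0; i <= 100; i += 10) {
                // obtainMessage() reuses a Message from the pool; its target is `handler`.
                Message msg = handler.obtainMessage(MSG_PROGRESS, i, 0);
                handler.sendMessage(msg); // enqueued into the main Looper's MessageQueue
            }
        }).start();
    }
}
```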
3. Message mechanism analysis
Android's message mechanism mainly involves Handler, Looper, MessageQueue, and ThreadLocal.
3.1 How ThreadLocal works
ThreadLocal is a class that stores data on a per-thread basis; you can roughly think of it as a per-thread hash map.
ThreadLocal is useful when data is scoped to a thread and each thread needs its own copy of that data.
For Handler, it needs to obtain the Looper of the current thread. The scope of a Looper is clearly its thread, and different threads have different Loopers, so ThreadLocal makes it easy to look up the current thread's Looper. Without ThreadLocal, the system would have to provide a global hash table for Handler to look up the Looper of a given thread, which amounts to the same idea in the end.
The two commonly used ThreadLocal methods are get and set. There is little to say about set; it is get that we mainly need to understand. get fetches an array from the calling thread, then finds the corresponding value in that array based on the current ThreadLocal's index. In this way ThreadLocal can maintain a copy of the data in each thread without the copies interfering with each other.
Let's look at the set and get source code; these two roughly show how ThreadLocal works.
First look at the set source code
```java
public void set(T value) {
    Thread currentThread = Thread.currentThread();
    Values values = values(currentThread);
    if (values == null) {
        values = initializeValues(currentThread);
    }
    values.put(this, value);
}

Values values(Thread current) {
    return current.localValues;
}

void put(ThreadLocal<?> key, Object value) {
    cleanUp();

    // Keep track of first tombstone. That's where we want to go back
    // and add an entry if necessary.
    int firstTombstone = -1;

    for (int index = key.hash & mask;; index = next(index)) {
        Object k = table[index];

        if (k == key.reference) {
            // Replace existing entry.
            table[index + 1] = value;
            return;
        }

        if (k == null) {
            if (firstTombstone == -1) {
                // Fill in null slot.
                table[index] = key.reference;
                table[index + 1] = value;
                size++;
                return;
            }

            // Go back and replace first tombstone.
            table[firstTombstone] = key.reference;
            table[firstTombstone + 1] = value;
            tombstones--;
            size++;
            return;
        }

        // Remember first tombstone.
        if (firstTombstone == -1 && k == TOMBSTONE) {
            firstTombstone = index;
        }
    }
}
```
From this code we can see that set() obtains the current thread's ThreadLocal storage through the thread's localValues field, and then put() stores the value into the table array inside it.
Reading the put() source, we can see that a ThreadLocal's value is always stored at the slot right after its reference in the table, i.e. table[index + 1] = value.
Now look at the get source.
```java
public T get() {
    // Optimized for the fast path.
    Thread currentThread = Thread.currentThread();
    Values values = values(currentThread);
    if (values != null) {
        Object[] table = values.table;
        int index = hash & values.mask;
        if (this.reference == table[index]) {
            return (T) table[index + 1];
        }
    } else {
        values = initializeValues(currentThread);
    }

    return (T) values.getAfterMiss(this);
}
```
The logic is quite clear: the first two steps are the same as in set(). If the lookup misses (the thread has no values yet, or the slot does not hold this ThreadLocal's reference), the initial value set up via initializeValues()/getAfterMiss() is returned; otherwise the value at table[index + 1] is returned.
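As a small self-contained sketch of this per-thread-copy behavior (the values are arbitrary), each thread below sees only the value it set itself, or the initial value if it never called set():

```java
public class ThreadLocalDemo {
    // initialValue() is what get() falls back to when a thread has not called set().
    private static final ThreadLocal<Integer> sLocal = new ThreadLocal<Integer>() {
        @Override
        protected Integer initialValue() {
            return 0;
        }
    };

    public static void main(String[] args) {
        sLocal.set(1); // main thread's copy

        new Thread(() -> {
            sLocal.set(2);                                   // this thread's own copy
            System.out.println("worker-1: " + sLocal.get()); // prints 2
        }).start();

        new Thread(() -> {
            // Never calls set(), so it sees the initial value.
            System.out.println("worker-2: " + sLocal.get()); // prints 0
        }).start();

        System.out.println("main: " + sLocal.get()); // prints 1
    }
}
```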
3.2 How MessageQueue works
MessageQueue has two main operations, insertion and reading, which correspond to enqueueMessage and next.
Internally, MessageQueue maintains its messages as a singly linked list; after all, a singly linked list performs well for insertion and deletion.
```java
boolean enqueueMessage(Message msg, long when) {
    if (msg.target == null) {
        throw new IllegalArgumentException("Message must have a target.");
    }
    if (msg.isInUse()) {
        throw new IllegalStateException(msg + " This message is already in use.");
    }

    synchronized (this) {
        if (mQuitting) {
            IllegalStateException e = new IllegalStateException(
                    msg.target + " sending message to a Handler on a dead thread");
            Log.w("MessageQueue", e.getMessage(), e);
            msg.recycle();
            return false;
        }

        msg.markInUse();
        msg.when = when;
        Message p = mMessages;
        boolean needWake;
        if (p == null || when == 0 || when < p.when) {
            // New head, wake up the event queue if blocked.
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // Inserted within the middle of the queue.  Usually we don't have to wake
            // up the event queue unless there is a barrier at the head of the queue
            // and the message is the earliest asynchronous message in the queue.
            needWake = mBlocked && p.target == null && msg.isAsynchronous();
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                if (p == null || when < p.when) {
                    break;
                }
                if (needWake && p.isAsynchronous()) {
                    needWake = false;
                }
            }
            msg.next = p; // invariant: p == prev.next
            prev.next = msg;
        }

        // We can assume mPtr != 0 because mQuitting is false.
        if (needWake) {
            nativeWake(mPtr);
        }
    }
    return true;
}
```
The primary operation of enqueueMessage is inserting the message into the singly linked list, ordered by its delivery time.
```java
Message next() {
    // Return here if the message loop has already quit and been disposed.
    // This can happen if the application tries to restart a looper after quit
    // which is not supported.
    final long ptr = mPtr;
    if (ptr == 0) {
        return null;
    }

    int pendingIdleHandlerCount = -1; // -1 only during first iteration
    int nextPollTimeoutMillis = 0;
    for (;;) {
        if (nextPollTimeoutMillis != 0) {
            Binder.flushPendingCommands();
        }

        nativePollOnce(ptr, nextPollTimeoutMillis);

        synchronized (this) {
            // Try to retrieve the next message.  Return if found.
            final long now = SystemClock.uptimeMillis();
            Message prevMsg = null;
            Message msg = mMessages;
            if (msg != null && msg.target == null) {
                // Stalled by a barrier.  Find the next asynchronous message in the queue.
                do {
                    prevMsg = msg;
                    msg = msg.next;
                } while (msg != null && !msg.isAsynchronous());
            }
            if (msg != null) {
                if (now < msg.when) {
                    // Next message is not ready.  Set a timeout to wake up when it is ready.
                    nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                } else {
                    // Got a message.
                    mBlocked = false;
                    if (prevMsg != null) {
                        prevMsg.next = msg.next;
                    } else {
                        mMessages = msg.next;
                    }
                    msg.next = null;
                    if (false) Log.v("MessageQueue", "Returning message: " + msg);
                    return msg;
                }
            } else {
                // No more messages.
                nextPollTimeoutMillis = -1;
            }

            // Process the quit message now that all pending messages have been handled.
            if (mQuitting) {
                dispose();
                return null;
            }

            // If first time idle, then get the number of idlers to run.
            // Idle handles only run if the queue is empty or if the first message
            // in the queue (possibly a barrier) is due to be handled in the future.
            if (pendingIdleHandlerCount < 0
                    && (mMessages == null || now < mMessages.when)) {
                pendingIdleHandlerCount = mIdleHandlers.size();
            }
            if (pendingIdleHandlerCount <= 0) {
                // No idle handlers to run.  Loop and wait some more.
                mBlocked = true;
                continue;
            }

            if (mPendingIdleHandlers == null) {
                mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
            }
            mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);
        }

        // Run the idle handlers.
        // We only ever reach this code block during the first iteration.
        for (int i = 0; i < pendingIdleHandlerCount; i++) {
            final IdleHandler idler = mPendingIdleHandlers[i];
            mPendingIdleHandlers[i] = null; // release the reference to the handler

            boolean keep = false;
            try {
                keep = idler.queueIdle();
            } catch (Throwable t) {
                Log.wtf("MessageQueue", "IdleHandler threw exception", t);
            }

            if (!keep) {
                synchronized (this) {
                    mIdleHandlers.remove(idler);
                }
            }
        }

        // Reset the idle handler count to 0 so we do not run them again.
        pendingIdleHandlerCount = 0;

        // While calling an idle handler, a new message could have been delivered
        // so go back and look again for a pending message without waiting.
        nextPollTimeoutMillis = 0;
    }
}
```
next is the read operation, and it is an infinite loop: if there are no messages in the queue, next blocks. When a new message arrives, next returns it and removes it from the singly linked list.
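To isolate the data-structure idea from the framework details, here is a simplified sketch, not the actual Android code (the Node class and its field names are invented), of inserting into a singly linked list kept sorted by delivery time and removing from the head, which is the core of what enqueueMessage() and next() do:

```java
public class SimpleMessageList {
    // Simplified stand-in for android.os.Message: a payload plus a delivery time.
    static class Node {
        long when;
        String payload;
        Node next;

        Node(long when, String payload) {
            this.when = when;
            this.payload = payload;
        }
    }

    private Node head;

    // Insert keeping the list sorted by `when`, like MessageQueue.enqueueMessage().
    void enqueue(Node msg) {
        if (head == null || msg.when < head.when) {
            msg.next = head;
            head = msg;
            return;
        }
        Node prev = head;
        while (prev.next != null && prev.next.when <= msg.when) {
            prev = prev.next;
        }
        msg.next = prev.next;
        prev.next = msg;
    }

    // Remove and return the head, like the non-blocking part of MessageQueue.next().
    Node dequeue() {
        Node msg = head;
        if (msg != null) {
            head = msg.next;
            msg.next = null;
        }
        return msg;
    }
}
```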
3.3 How Looper works
(1) Creating a message loop
prepare() creates the message-loop object, Looper, for the current thread. The Looper object is saved in the ThreadLocal member sThreadLocal.
(2) Get message loop object
myLooper() returns the current thread's Looper, retrieved from the same ThreadLocal member.
(3) Start message loop
loop() starts the message loop. The loop works as follows:
- fetch the next message from the MessageQueue
- use the Handler stored in the message (msg.target) to process it
- recycle the processed Message back into the message pool for reuse
- repeat; if next() returns no message, the message queue is quitting and the loop exits
```java
public static void prepare() {
    prepare(true);
}

private static void prepare(boolean quitAllowed) {
    if (sThreadLocal.get() != null) {
        throw new RuntimeException("Only one Looper may be created per thread");
    }
    sThreadLocal.set(new Looper(quitAllowed));
}

public static Looper myLooper() {
    return sThreadLocal.get();
}

public static void loop() {
    final Looper me = myLooper();
    if (me == null) {
        throw new RuntimeException("No Looper; Looper.prepare() wasn't called on this thread.");
    }
    final MessageQueue queue = me.mQueue;

    // Make sure the identity of this thread is that of the local process,
    // and keep track of what that identity token actually is.
    Binder.clearCallingIdentity();
    final long ident = Binder.clearCallingIdentity();

    for (;;) {
        Message msg = queue.next(); // might block
        if (msg == null) {
            // No message indicates that the message queue is quitting.
            return;
        }

        // This must be in a local variable, in case a UI event sets the logger
        Printer logging = me.mLogging;
        if (logging != null) {
            logging.println(">>>>> Dispatching to " + msg.target + " " +
                    msg.callback + ": " + msg.what);
        }

        msg.target.dispatchMessage(msg);

        if (logging != null) {
            logging.println("<<<<< Finished to " + msg.target + " " + msg.callback);
        }

        // Make sure that during the course of dispatching the
        // identity of the thread wasn't corrupted.
        final long newIdent = Binder.clearCallingIdentity();
        if (ident != newIdent) {
            Log.wtf(TAG, "Thread identity changed from 0x"
                    + Long.toHexString(ident) + " to 0x"
                    + Long.toHexString(newIdent) + " while dispatching to "
                    + msg.target.getClass().getName() + " "
                    + msg.callback + " what=" + msg.what);
        }

        msg.recycleUnchecked();
    }
}
```
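Putting prepare(), the Handler, and loop() together, the classic pattern for giving a worker thread its own message loop looks roughly like this (essentially what Android's HandlerThread wraps up for you; the class name here is just for illustration):

```java
import android.os.Handler;
import android.os.Looper;
import android.os.Message;

class LooperThread extends Thread {
    public Handler handler;

    @Override
    public void run() {
        Looper.prepare();                 // create the Looper and MessageQueue for this thread

        handler = new Handler(Looper.myLooper()) {
            @Override
            public void handleMessage(Message msg) {
                // Runs on this worker thread; handle messages sent from other threads here.
            }
        };

        Looper.loop();                    // start looping; blocks until the Looper is quit
    }
}
```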
3.4 How Handler works
(1) Send a message
Handler supports two message types, Runnable and Message, so it offers two ways to send: post(Runnable r) and sendMessage(Message msg). From the source below you can see that the Runnable is assigned to the Message's callback field, so everything is ultimately wrapped in a Message object. In the author's view, rather than forcing callers to use Message everywhere, this design stays compatible with ordinary Java Runnable tasks, an idea worth borrowing in everyday development. Sent messages are enqueued into the MessageQueue.
(2) Processing messages
During Looper's loop, each message is processed by dispatchMessage(msg). Processing works as follows: first check whether the message carries a Runnable (msg.callback); if so, call handleCallback(msg), which ultimately executes Runnable.run() on the Handler's thread. If not, check whether an external Callback (mCallback) was supplied; if so, let it handle the message. If neither applies, call handleMessage(msg), which is the method we override most often in everyday development.
(3) Removing messages
removeCallbacksAndMessages() removes messages; removing a message really means removing the Message object from the MessageQueue.
```java
public void handleMessage(Message msg) {
}

public void dispatchMessage(Message msg) {
    if (msg.callback != null) {
        handleCallback(msg);
    } else {
        if (mCallback != null) {
            if (mCallback.handleMessage(msg)) {
                return;
            }
        }
        handleMessage(msg);
    }
}

private static void handleCallback(Message message) {
    message.callback.run();
}

public final Message obtainMessage() {
    return Message.obtain(this);
}

public final boolean post(Runnable r) {
    return sendMessageDelayed(getPostMessage(r), 0);
}

public final boolean sendMessage(Message msg) {
    return sendMessageDelayed(msg, 0);
}

private static Message getPostMessage(Runnable r) {
    Message m = Message.obtain();
    m.callback = r;
    return m;
}

public final boolean sendMessageDelayed(Message msg, long delayMillis) {
    if (delayMillis < 0) {
        delayMillis = 0;
    }
    return sendMessageAtTime(msg, SystemClock.uptimeMillis() + delayMillis);
}

public boolean sendMessageAtTime(Message msg, long uptimeMillis) {
    MessageQueue queue = mQueue;
    if (queue == null) {
        RuntimeException e = new RuntimeException(
                this + " sendMessageAtTime() called with no mQueue");
        Log.w("Looper", e.getMessage(), e);
        return false;
    }
    return enqueueMessage(queue, msg, uptimeMillis);
}

private boolean enqueueMessage(MessageQueue queue, Message msg, long uptimeMillis) {
    msg.target = this;
    if (mAsynchronous) {
        msg.setAsynchronous(true);
    }
    return queue.enqueueMessage(msg, uptimeMillis);
}

public final void removeCallbacksAndMessages(Object token) {
    mQueue.removeCallbacksAndMessages(this, token);
}
```
There is a Callback in the code above. Tracing it shows that Callback is an interface that lets you use a Handler instance without deriving a Handler subclass. In everyday development the most common approach is to derive a Handler subclass and override its handleMessage() to handle specific messages; Callback gives us another way to use Handler. The Handler message flow can be summarized as: post()/sendMessage() wrap and enqueue a Message, Looper.loop() pulls it from the MessageQueue, and dispatchMessage() routes it to msg.callback, mCallback, or handleMessage().
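A short sketch of the Callback route, with an invented message code: instead of subclassing Handler, pass a Handler.Callback to the constructor; dispatchMessage() consults it first and falls back to Handler.handleMessage() only if it returns false:

```java
import android.os.Handler;
import android.os.Looper;
import android.os.Message;

public class CallbackExample {
    private static final int MSG_REFRESH = 2; // hypothetical message code

    private final Handler handler = new Handler(Looper.getMainLooper(), new Handler.Callback() {
        @Override
        public boolean handleMessage(Message msg) {
            if (msg.what == MSG_REFRESH) {
                // Handle the message here.
                return true;   // consumed; Handler.handleMessage() will not be called
            }
            return false;      // not consumed; falls through to Handler.handleMessage()
        }
    });

    public void refreshLater() {
        handler.sendEmptyMessageDelayed(MSG_REFRESH, 1000); // enqueue with a 1s delay
    }
}
```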
4. Why does Handler use a pipe?
Some articles say that Binder is for IPC while Handler is for communication between threads, and since a pipe is an inter-process mechanism, Handler cannot possibly be using one. In fact, Handler really does use a pipe underneath; skipping over this leaves the picture incomplete.
First, let's look at the pipe. A pipe is also a file, but it differs from an ordinary file: the pipe buffer is typically one page, i.e. 4 KB. A pipe has a read end and a write end; the read end takes data out of the pipe and blocks when the pipe is empty, and the write end puts data into the pipe and blocks when the buffer is full.
In the Looper.loop() method, messages are fetched in a never-ending loop:

```java
Message msg = queue.next(); // fetch the next message from the message queue
```

next() calls nativePollOnce(), a native method, and via JNI the call drops into the native layer, where the pipe is used together with epoll (epoll_create / epoll_wait / epoll_ctl). This is where the Java layer connects to the native layer.
So the question is: since this is communication between threads of the same process, why is a pipe needed at all?
Threads share memory, so when communicating through Handler the contents of the message pool do not need to be copied from one thread to another.
Both threads can read and write the same region of memory (apart from thread-private storage such as ThreadLocal, which is not involved here). Since no memory copy is needed, what is the pipe for?
The pipe's role in the Handler mechanism is notification: when thread A has prepared a Message and put it into the message pool, thread B needs to be told to process it. Thread A writes the value 1 to the write end of the pipe (older Android versions write the character 'W'); as soon as the pipe has data, thread B is woken up to process the message. So the pipe's main job is simply to wake up the other thread, and that is its most important role.
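The notify-by-pipe pattern can be sketched in plain Java using PipedInputStream/PipedOutputStream as a stand-in for the native pipe (this is only an analogy; the real implementation lives in native code and uses an OS pipe with epoll): the consumer blocks on read() like an idle Looper, and the producer writes a single byte just to wake it up after placing a message in shared memory.

```java
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class PipeWakeupDemo {
    public static void main(String[] args) throws IOException {
        Queue<String> messagePool = new ConcurrentLinkedQueue<>(); // shared memory, no copying
        PipedOutputStream writeEnd = new PipedOutputStream();
        PipedInputStream readEnd = new PipedInputStream(writeEnd);

        // "Thread B": blocks on the pipe until woken, then drains the shared message pool.
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    int b = readEnd.read();   // blocks while the pipe is empty, like an idle Looper
                    if (b == -1) return;      // write end closed, stop
                    String msg;
                    while ((msg = messagePool.poll()) != null) {
                        System.out.println("handled: " + msg);
                    }
                }
            } catch (IOException ignored) {
            }
        });
        consumer.start();

        // "Thread A": puts a message into shared memory, then writes one byte to notify.
        messagePool.offer("hello");
        writeEnd.write(1);   // the byte's value does not matter; it only wakes the consumer
        writeEnd.flush();
        writeEnd.close();
    }
}
```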
Why does Handler use a pipe rather than Binder?
First, to be clear, Handler does not use Binder. That is not because Binder could not do the job: Binder is designed for communication between two processes with independent address spaces, and of course it could also be pressed into service between two threads that share an address space, but that would waste CPU and memory. Binder uses a C/S architecture and is normally used for communication between different processes.
From the memory perspective: Binder communication involves a memory copy, whereas messages in the Handler mechanism need no copy at all, since they already live in the same address space; Handler only has to tell the other thread that the data is there.
From the CPU perspective: Binder communication requires the driver to maintain a Binder thread pool, and every transaction involves creating Binder threads and allocating memory, which wastes CPU resources.
Finally, can the Handler message mechanism be used for inter-process communication? No. Handler can only be used between threads that share a memory address space, that is, between two threads of the same process.
Brief analysis of Handler source code