Analyzing the association between the native-layer and Java-layer message mechanisms from the source code perspective.
The previous article, Source Code Analysis of the Handler Mechanism, analyzed the message mechanism from the Java layer. This article continues by analyzing Android's message mechanism from the native layer.
In a message-driven system, the most important things are the message queue and the acquisition and processing of messages. From the previous article we know that the Handler mechanism relies on MessageQueue to hold the message queue, and on Looper's loop method to run the message loop. Each iteration of the loop in Looper.loop obtains the next message by calling MessageQueue's next method; in other words, MessageQueue is the most important class in this message-driven mechanism. Before Android 2.3, only the Java-layer MessageQueue could add messages to keep the message driver running. Since Android 2.3, the core of MessageQueue has been moved to the native layer, so MessageQueue supports message operations in both worlds.
In the constructor of MessageQueue:
MessageQueue(boolean quitAllowed) {
    mQuitAllowed = quitAllowed;
    mPtr = nativeInit();
}
The constructor calls nativeInit, which is implemented by the native layer. The actual implementation is the android_os_MessageQueue_nativeInit method in android_os_MessageQueue.cpp.
static void android_os_MessageQueue_nativeInit(JNIEnv* env, jobject obj) {
    NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();
    if (! nativeMessageQueue) {
        jniThrowRuntimeException(env, "Unable to allocate native queue");
        return;
    }
    android_os_MessageQueue_setNativeMessageQueue(env, obj, nativeMessageQueue);
}

NativeMessageQueue::NativeMessageQueue() {
    mLooper = Looper::getForThread();
    if (mLooper == NULL) {
        mLooper = new Looper(false);
        Looper::setForThread(mLooper);
    }
}
The android_os_MessageQueue_nativeInit function creates a NativeMessageQueue corresponding to the Java-layer MessageQueue. The NativeMessageQueue constructor obtains the Looper of the current thread; if the current thread does not yet have one, a Looper is instantiated and bound to the thread.
As mentioned in the previous article, once the objects involved in the message mechanism have been initialized, the message loop is started, and the loop actually executes MessageQueue's next method repeatedly.
Message next() {
    int pendingIdleHandlerCount = -1; // -1 only during first iteration
    int nextPollTimeoutMillis = 0;
    for (;;) {
        if (nextPollTimeoutMillis != 0) {
            Binder.flushPendingCommands();
        }

        // We can assume mPtr != 0 because the loop is obviously still running.
        // The looper will not call this method after the loop quits.
        nativePollOnce(mPtr, nextPollTimeoutMillis);

        synchronized (this) {
            // Try to retrieve the next message.  Return if found.
            final long now = SystemClock.uptimeMillis();
            Message prevMsg = null;
            Message msg = mMessages;
            if (msg != null && msg.target == null) {
                // Stalled by a barrier.  Find the next asynchronous message in the queue.
                do {
                    prevMsg = msg;
                    msg = msg.next;
                } while (msg != null && !msg.isAsynchronous());
            }
            if (msg != null) {
                if (now < msg.when) {
                    // Next message is not ready.  Set a timeout to wake up when it is ready.
                    nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                } else {
                    // Got a message.
                    mBlocked = false;
                    if (prevMsg != null) {
                        prevMsg.next = msg.next;
                    } else {
                        mMessages = msg.next;
                    }
                    msg.next = null;
                    if (false) Log.v("MessageQueue", "Returning message: " + msg);
                    msg.markInUse();
                    return msg;
                }
            } else {
                // No more messages.
                nextPollTimeoutMillis = -1;
            }

            // Process the quit message now that all pending messages have been handled.
            if (mQuitting) {
                dispose();
                return null;
            }

            // If first time idle, then get the number of idlers to run.
            // Idle handles only run if the queue is empty or if the first message
            // in the queue (possibly a barrier) is due to be handled in the future.
            if (pendingIdleHandlerCount < 0
                    && (mMessages == null || now < mMessages.when)) {
                pendingIdleHandlerCount = mIdleHandlers.size();
            }
            if (pendingIdleHandlerCount <= 0) {
                // No idle handlers to run.  Loop and wait some more.
                mBlocked = true;
                continue;
            }

            if (mPendingIdleHandlers == null) {
                mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
            }
            mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);
        }

        // Run the idle handlers.
        // We only ever reach this code block during the first iteration.
        for (int i = 0; i < pendingIdleHandlerCount; i++) {
            final IdleHandler idler = mPendingIdleHandlers[i];
            mPendingIdleHandlers[i] = null; // release the reference to the handler

            boolean keep = false;
            try {
                keep = idler.queueIdle();
            } catch (Throwable t) {
                Log.wtf("MessageQueue", "IdleHandler threw exception", t);
            }

            if (!keep) {
                synchronized (this) {
                    mIdleHandlers.remove(idler);
                }
            }
        }

        // Reset the idle handler count to 0 so we do not run them again.
        pendingIdleHandlerCount = 0;

        // While calling an idle handler, a new message could have been delivered
        // so go back and look again for a pending message without waiting.
        nextPollTimeoutMillis = 0;
    }
}
Only after nativePollOnce returns can next obtain a message from mMessages. In other words, if no message exists in the message queue, nativePollOnce blocks and does not return.
In the enqueueMessage method of MessageQueue:
boolean enqueueMessage(Message msg, long when) {
    if (msg.isInUse()) {
        throw new AndroidRuntimeException(msg + " This message is already in use.");
    }
    if (msg.target == null) {
        throw new AndroidRuntimeException("Message must have a target.");
    }

    synchronized (this) {
        if (mQuitting) {
            RuntimeException e = new RuntimeException(
                    msg.target + " sending message to a Handler on a dead thread");
            Log.w("MessageQueue", e.getMessage(), e);
            return false;
        }

        msg.when = when;
        Message p = mMessages;
        boolean needWake;
        if (p == null || when == 0 || when < p.when) {
            // New head, wake up the event queue if blocked.
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // Inserted within the middle of the queue.  Usually we don't have to wake
            // up the event queue unless there is a barrier at the head of the queue
            // and the message is the earliest asynchronous message in the queue.
            needWake = mBlocked && p.target == null && msg.isAsynchronous();
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                if (p == null || when < p.when) {
                    break;
                }
                if (needWake && p.isAsynchronous()) {
                    needWake = false;
                }
            }
            msg.next = p; // invariant: p == prev.next
            prev.next = msg;
        }

        // We can assume mPtr != 0 because mQuitting is false.
        if (needWake) {
            nativeWake(mPtr);
        }
    }
    return true;
}
After the message is added, the native-layer nativeWake method is called. This should make the blocked nativePollOnce return, so that the newly added message can be dispatched and processed.
In android_os_MessageQueue.cpp:
static void android_os_MessageQueue_nativeWake(JNIEnv* env, jobject obj, jint ptr) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    return nativeMessageQueue->wake();
}

void NativeMessageQueue::wake() {
    mLooper->wake();
}
In Looper.cpp:
void Looper::wake() {
#if DEBUG_POLL_AND_WAKE
    LOGD("%p ~ wake", this);
#endif

#ifdef LOOPER_STATISTICS
    // FIXME: Possible race with awoken() but this code is for testing only and is rarely enabled.
    if (mPendingWakeCount++ == 0) {
        mPendingWakeTime = systemTime(SYSTEM_TIME_MONOTONIC);
    }
#endif

    ssize_t nWrite;
    do {
        nWrite = write(mWakeWritePipeFd, "W", 1);
    } while (nWrite == -1 && errno == EINTR);

    if (nWrite != 1) {
        if (errno != EAGAIN) {
            LOGW("Could not write wake signal, errno=%d", errno);
        }
    }
}
In the wake method, I was surprised to find that all it does is write a "W" into a pipe. Can this make nativePollOnce return? Does it mean that nativePollOnce performs the read operation on this pipe? If so, there must be some process of monitoring the pipe inside nativePollOnce. This is all speculation so far; next we will analyze the concrete implementation of nativePollOnce.
The implementation of nativePollOnce is in android_os_MessageQueue.cpp:
void NativeMessageQueue::pollOnce(int timeoutMillis) {
    mLooper->pollOnce(timeoutMillis);
}
In Looper.cpp:
int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
    int result = 0;
    for (;;) {
        while (mResponseIndex < mResponses.size()) {
            const Response& response = mResponses.itemAt(mResponseIndex++);
            if (! response.request.callback) {
#if DEBUG_POLL_AND_WAKE
                LOGD("%p ~ pollOnce - returning signalled identifier %d: "
                        "fd=%d, events=0x%x, data=%p", this,
                        response.request.ident, response.request.fd,
                        response.events, response.request.data);
#endif
                if (outFd != NULL) *outFd = response.request.fd;
                if (outEvents != NULL) *outEvents = response.events;
                if (outData != NULL) *outData = response.request.data;
                return response.request.ident;
            }
        }

        if (result != 0) {
#if DEBUG_POLL_AND_WAKE
            LOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
            if (outFd != NULL) *outFd = 0;
            if (outEvents != NULL) *outEvents = 0;
            if (outData != NULL) *outData = NULL;
            return result;
        }

        result = pollInner(timeoutMillis);
    }
}
Reading Looper.cpp, you will notice #ifdef / #else / #endif blocks around LOOPER_USES_EPOLL, which shows that Looper uses a compile-time option to control whether the epoll mechanism is used for I/O multiplexing. In Linux network programming, select was used for event notification for a long time; newer Linux kernels provide epoll as its replacement. Compared with select, the biggest benefit of epoll is that its efficiency does not degrade as the number of file descriptors grows, whereas select polls its descriptor set, so the more fds there are, the lower the efficiency. The epoll interface is very simple, consisting of only three functions: epoll_create, epoll_ctl, and epoll_wait.
Return to the Looper::pollOnce method. Each iteration of the for loop calls pollInner; let's take a look.
int Looper::pollInner(int timeoutMillis) {
#if DEBUG_POLL_AND_WAKE
    LOGD("%p ~ pollOnce - waiting: timeoutMillis=%d", this, timeoutMillis);
#endif

    int result = ALOOPER_POLL_WAKE;
    mResponses.clear();
    mResponseIndex = 0;

#ifdef LOOPER_STATISTICS
    nsecs_t pollStartTime = systemTime(SYSTEM_TIME_MONOTONIC);
#endif

#ifdef LOOPER_USES_EPOLL
    // epoll is used for I/O multiplexing.
    struct epoll_event eventItems[EPOLL_MAX_EVENTS];
    // Call epoll_wait to wait for events to occur.
    int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);
    bool acquiredLock = false;
#else
    // Wait for wakeAndLock() waiters to run then set mPolling to true.
    mLock.lock();
    while (mWaiters != 0) {
        mResume.wait(mLock);
    }
    mPolling = true;
    mLock.unlock();

    size_t requestedCount = mRequestedFds.size();
    int eventCount = poll(mRequestedFds.editArray(), requestedCount, timeoutMillis);
#endif

    if (eventCount < 0) {
        if (errno == EINTR) {
            goto Done;
        }
        LOGW("Poll failed with an unexpected error, errno=%d", errno);
        result = ALOOPER_POLL_ERROR;
        goto Done;
    }

    if (eventCount == 0) {
#if DEBUG_POLL_AND_WAKE
        LOGD("%p ~ pollOnce - timeout", this);
#endif
        result = ALOOPER_POLL_TIMEOUT;
        goto Done;
    }

#if DEBUG_POLL_AND_WAKE
    LOGD("%p ~ pollOnce - handling events from %d fds", this, eventCount);
#endif

#ifdef LOOPER_USES_EPOLL
    for (int i = 0; i < eventCount; i++) {
        int fd = eventItems[i].data.fd;
        uint32_t epollEvents = eventItems[i].events;
        if (fd == mWakeReadPipeFd) {
            if (epollEvents & EPOLLIN) {
                awoken();
            } else {
                LOGW("Ignoring unexpected epoll events 0x%x on wake read pipe.", epollEvents);
            }
        } else {
            if (! acquiredLock) {
                mLock.lock();
                acquiredLock = true;
            }
            ssize_t requestIndex = mRequests.indexOfKey(fd);
            if (requestIndex >= 0) {
                int events = 0;
                if (epollEvents & EPOLLIN) events |= ALOOPER_EVENT_INPUT;
                if (epollEvents & EPOLLOUT) events |= ALOOPER_EVENT_OUTPUT;
                if (epollEvents & EPOLLERR) events |= ALOOPER_EVENT_ERROR;
                if (epollEvents & EPOLLHUP) events |= ALOOPER_EVENT_HANGUP;
                pushResponse(events, mRequests.valueAt(requestIndex));
            } else {
                LOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
                        "no longer registered.", epollEvents, fd);
            }
        }
    }
    if (acquiredLock) {
        mLock.unlock();
    }
Done: ;
#else
    for (size_t i = 0; i < requestedCount; i++) {
        const struct pollfd& requestedFd = mRequestedFds.itemAt(i);
        short pollEvents = requestedFd.revents;
        if (pollEvents) {
            if (requestedFd.fd == mWakeReadPipeFd) {
                if (pollEvents & POLLIN) {
                    // Data arrived on the read end of the wake pipe; drain it directly.
                    awoken();
                } else {
                    LOGW("Ignoring unexpected poll events 0x%x on wake read pipe.", pollEvents);
                }
            } else {
                int events = 0;
                if (pollEvents & POLLIN) events |= ALOOPER_EVENT_INPUT;
                if (pollEvents & POLLOUT) events |= ALOOPER_EVENT_OUTPUT;
                if (pollEvents & POLLERR) events |= ALOOPER_EVENT_ERROR;
                if (pollEvents & POLLHUP) events |= ALOOPER_EVENT_HANGUP;
                if (pollEvents & POLLNVAL) events |= ALOOPER_EVENT_INVALID;
                pushResponse(events, mRequests.itemAt(i));
            }
            if (--eventCount == 0) {
                break;
            }
        }
    }

Done:
    // Set mPolling to false and wake up the wakeAndLock() waiters.
    mLock.lock();
    mPolling = false;
    if (mWaiters != 0) {
        mAwake.broadcast();
    }
    mLock.unlock();
#endif

#ifdef LOOPER_STATISTICS
    nsecs_t pollEndTime = systemTime(SYSTEM_TIME_MONOTONIC);
    mSampledPolls += 1;
    if (timeoutMillis == 0) {
        mSampledZeroPollCount += 1;
        mSampledZeroPollLatencySum += pollEndTime - pollStartTime;
    } else if (timeoutMillis > 0 && result == ALOOPER_POLL_TIMEOUT) {
        mSampledTimeoutPollCount += 1;
        mSampledTimeoutPollLatencySum += pollEndTime - pollStartTime
                - milliseconds_to_nanoseconds(timeoutMillis);
    }
    if (mSampledPolls == SAMPLED_POLLS_TO_AGGREGATE) {
        LOGD("%p ~ poll latency statistics: %0.3fms zero timeout, %0.3fms non-zero timeout", this,
                0.000001f * float(mSampledZeroPollLatencySum) / mSampledZeroPollCount,
                0.000001f * float(mSampledTimeoutPollLatencySum) / mSampledTimeoutPollCount);
        mSampledPolls = 0;
        mSampledZeroPollCount = 0;
        mSampledZeroPollLatencySum = 0;
        mSampledTimeoutPollCount = 0;
        mSampledTimeoutPollLatencySum = 0;
    }
#endif

    for (size_t i = 0; i < mResponses.size(); i++) {
        const Response& response = mResponses.itemAt(i);
        if (response.request.callback) {
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
            LOGD("%p ~ pollOnce - invoking callback: fd=%d, events=0x%x, data=%p", this,
                    response.request.fd, response.events, response.request.data);
#endif
            int callbackResult = response.request.callback(
                    response.request.fd, response.events, response.request.data);
            if (callbackResult == 0) {
                removeFd(response.request.fd);
            }
            result = ALOOPER_POLL_CALLBACK;
        }
    }
    return result;
}
In the call to epoll_wait above, timeoutMillis is passed down from the Java layer. If there is no message in MessageQueue's next method, nextPollTimeoutMillis = -1, which means timeoutMillis is -1, so the wait may block indefinitely until an event occurs. If an event occurs and it is a read event on the wake pipe, the data in the pipe is simply drained by awoken(); that data is the "W" we saw being written when we analyzed the Looper::wake method earlier.
Copyright notice: this article is the blogger's original work and may not be reproduced without permission (contact: QQ 312037487, email andywuchuanlong@sina.cn).