Analyzing the relationship between the native-layer and Java-layer message mechanisms from the source code

Source: Internet
Author: User
Tags: epoll, goto

The previous article analyzed the Handler message mechanism at the Java layer from the source code; this article analyzes the Android message mechanism from the native layer.

In a message-driven system, the most important pieces are the message queue and the fetching and processing of messages. As the previous article showed, the Handler message mechanism is built mainly on the MessageQueue message queue and relies on Looper for the message loop; the actual polling of messages in Looper's loop method depends on MessageQueue's next method to obtain a message. In other words, MessageQueue is the heart of this message-driven mechanism. Prior to Android 2.3, only the Java layer could add messages to the MessageQueue to keep the mechanism running. After 2.3, the core of MessageQueue moved to the native layer, and MessageQueue bridges the two worlds to keep messages flowing.
In the construction method of the MessageQueue:

MessageQueue(boolean quitAllowed) {
    mQuitAllowed = quitAllowed;
    mPtr = nativeInit();
}

The constructor calls nativeInit, which is implemented by the native layer; the real implementation is the android_os_MessageQueue_nativeInit method in android_os_MessageQueue.cpp.

static void android_os_MessageQueue_nativeInit(JNIEnv* env, jobject obj) {
    NativeMessageQueue* nativeMessageQueue = new NativeMessageQueue();
    if (!nativeMessageQueue) {
        jniThrowRuntimeException(env, "Unable to allocate native queue");
        return;
    }
    android_os_MessageQueue_setNativeMessageQueue(env, obj, nativeMessageQueue);
}

NativeMessageQueue::NativeMessageQueue() {
    mLooper = Looper::getForThread();
    if (mLooper == NULL) {
        mLooper = new Looper(false);
        Looper::setForThread(mLooper);
    }
}

The android_os_MessageQueue_nativeInit function creates a NativeMessageQueue that corresponds to the Java-layer MessageQueue. In the NativeMessageQueue constructor, a Looper is fetched from the current thread; if the current thread does not yet have one, a Looper is instantiated and bound to the thread.

As mentioned in the previous article, once the objects involved in the message mechanism are initialized, the loop operation begins, and loop actually works by repeatedly executing MessageQueue's next method.

Message next() {
    int pendingIdleHandlerCount = -1; // -1 only during first iteration
    int nextPollTimeoutMillis = 0;
    for (;;) {
        if (nextPollTimeoutMillis != 0) {
            Binder.flushPendingCommands();
        }

        // We can assume mPtr != 0 because the loop is obviously still running.
        // The looper will not call this method after the loop quits.
        nativePollOnce(mPtr, nextPollTimeoutMillis);

        synchronized (this) {
            // Try to retrieve the next message.  Return if found.
            final long now = SystemClock.uptimeMillis();
            Message prevMsg = null;
            Message msg = mMessages;
            if (msg != null && msg.target == null) {
                // Stalled by a barrier.  Find the next asynchronous message in the queue.
                do {
                    prevMsg = msg;
                    msg = msg.next;
                } while (msg != null && !msg.isAsynchronous());
            }
            if (msg != null) {
                if (now < msg.when) {
                    // Next message is not ready.  Set a timeout to wake up when it is ready.
                    nextPollTimeoutMillis = (int) Math.min(msg.when - now, Integer.MAX_VALUE);
                } else {
                    // Got a message.
                    mBlocked = false;
                    if (prevMsg != null) {
                        prevMsg.next = msg.next;
                    } else {
                        mMessages = msg.next;
                    }
                    msg.next = null;
                    if (false) Log.v("MessageQueue", "Returning message: " + msg);
                    msg.markInUse();
                    return msg;
                }
            } else {
                // No more messages.
                nextPollTimeoutMillis = -1;
            }

            // Process the quit message now that all pending messages have been handled.
            if (mQuitting) {
                dispose();
                return null;
            }

            // If first time idle, then get the number of idlers to run.
            // Idle handles only run if the queue is empty or if the first message
            // in the queue (possibly a barrier) is due to be handled in the future.
            if (pendingIdleHandlerCount < 0
                    && (mMessages == null || now < mMessages.when)) {
                pendingIdleHandlerCount = mIdleHandlers.size();
            }
            if (pendingIdleHandlerCount <= 0) {
                // No idle handlers to run.  Loop and wait some more.
                mBlocked = true;
                continue;
            }

            if (mPendingIdleHandlers == null) {
                mPendingIdleHandlers = new IdleHandler[Math.max(pendingIdleHandlerCount, 4)];
            }
            mPendingIdleHandlers = mIdleHandlers.toArray(mPendingIdleHandlers);
        }

        // Run the idle handlers.
        // We only ever reach this code block during the first iteration.
        for (int i = 0; i < pendingIdleHandlerCount; i++) {
            final IdleHandler idler = mPendingIdleHandlers[i];
            mPendingIdleHandlers[i] = null; // release the reference to the handler

            boolean keep = false;
            try {
                keep = idler.queueIdle();
            } catch (Throwable t) {
                Log.wtf("MessageQueue", "IdleHandler threw exception", t);
            }

            if (!keep) {
                synchronized (this) {
                    mIdleHandlers.remove(idler);
                }
            }
        }

        // Reset the idle handler count to 0 so we do not run them again.
        pendingIdleHandlerCount = 0;

        // While calling an idle handler, a new message could have been delivered
        // so go back and look again for a pending message without waiting.
        nextPollTimeoutMillis = 0;
    }
}

When the nativePollOnce method returns, next can pull a message from mMessages; in other words, if no message exists in the message queue, nativePollOnce does not return.
In the enqueueMessage method of MessageQueue:

boolean enqueueMessage(Message msg, long when) {
    if (msg.isInUse()) {
        throw new AndroidRuntimeException(msg + " This message is already in use.");
    }
    if (msg.target == null) {
        throw new AndroidRuntimeException("Message must have a target.");
    }

    synchronized (this) {
        if (mQuitting) {
            RuntimeException e = new RuntimeException(
                    msg.target + " sending message to a Handler on a dead thread");
            Log.w("MessageQueue", e.getMessage(), e);
            return false;
        }

        msg.when = when;
        Message p = mMessages;
        boolean needWake;
        if (p == null || when == 0 || when < p.when) {
            // New head, wake up the event queue if blocked.
            msg.next = p;
            mMessages = msg;
            needWake = mBlocked;
        } else {
            // Inserted within the middle of the queue.  Usually we don't have to wake
            // up the event queue unless there is a barrier at the head of the queue
            // and the message is the earliest asynchronous message in the queue.
            needWake = mBlocked && p.target == null && msg.isAsynchronous();
            Message prev;
            for (;;) {
                prev = p;
                p = p.next;
                if (p == null || when < p.when) {
                    break;
                }
                if (needWake && p.isAsynchronous()) {
                    needWake = false;
                }
            }
            msg.next = p; // invariant: p == prev.next
            prev.next = msg;
        }

        // We can assume mPtr != 0 because mQuitting is false.
        if (needWake) {
            nativeWake(mPtr);
        }
    }
    return true;
}

After the message is added, the native-layer nativeWake method is called. This should be what triggers the return of the nativePollOnce method mentioned above, so that the newly added message gets dispatched and processed.

In android_os_MessageQueue.cpp:

static void android_os_MessageQueue_nativeWake(JNIEnv* env, jobject obj, jint ptr) {
    NativeMessageQueue* nativeMessageQueue = reinterpret_cast<NativeMessageQueue*>(ptr);
    return nativeMessageQueue->wake();
}

void NativeMessageQueue::wake() {
    mLooper->wake();
}

In the Looper.cpp:

void Looper::wake() {
#if DEBUG_POLL_AND_WAKE
    LOGD("%p ~ wake", this);
#endif

#ifdef LOOPER_STATISTICS
    // FIXME: Possible race with awoken() but this code is for testing only and is rarely enabled.
    if (mPendingWakeCount++ == 0) {
        mPendingWakeTime = systemTime(SYSTEM_TIME_MONOTONIC);
    }
#endif

    ssize_t nWrite;
    do {
        nWrite = write(mWakeWritePipeFd, "W", 1);
    } while (nWrite == -1 && errno == EINTR);

    if (nWrite != 1) {
        if (errno != EAGAIN) {
            LOGW("Could not write wake signal, errno=%d", errno);
        }
    }
}

In the wake method, somewhat surprisingly, a single "W" is written to a pipe. Can that cause nativePollOnce to return? Does it mean that nativePollOnce performs the read side of the pipe? And if so, does nativePollOnce monitor this pipe while it executes? This is all speculation so far; next we analyze the concrete implementation of nativePollOnce.

The implementation of nativePollOnce in android_os_MessageQueue.cpp:

void NativeMessageQueue::pollOnce(int timeoutMillis) {
    mLooper->pollOnce(timeoutMillis);
}

In the Looper.cpp:

int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
    int result = 0;
    for (;;) {
        while (mResponseIndex < mResponses.size()) {
            const Response& response = mResponses.itemAt(mResponseIndex++);
            if (!response.request.callback) {
#if DEBUG_POLL_AND_WAKE
                LOGD("%p ~ pollOnce - returning signalled identifier %d: "
                     "fd=%d, events=0x%x, data=%p", this,
                     response.request.ident, response.request.fd,
                     response.events, response.request.data);
#endif
                if (outFd != NULL) *outFd = response.request.fd;
                if (outEvents != NULL) *outEvents = response.events;
                if (outData != NULL) *outData = response.request.data;
                return response.request.ident;
            }
        }

        if (result != 0) {
#if DEBUG_POLL_AND_WAKE
            LOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
            if (outFd != NULL) *outFd = 0;
            if (outEvents != NULL) *outEvents = NULL;
            if (outData != NULL) *outData = NULL;
            return result;
        }

        result = pollInner(timeoutMillis);
    }
}

In the Looper::pollOnce method you will notice #if and #endif directives: Looper uses compile options to control its behavior, including whether the epoll mechanism is used for I/O multiplexing. In Linux network programming, select was long used for event triggering; newer Linux kernels offer epoll as its replacement. Compared with select, epoll's biggest advantage is that its efficiency does not fall as the number of monitored file descriptors grows: select works by polling, so the more fds it has to poll, the lower its efficiency. The epoll interface is very simple, with only three functions:

    1. int epoll_create(int size); creates an epoll handle. After the handle is created, it can be seen under /proc/<process id>/fd/.
    2. int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event); registers the events to listen for.
    3. int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout); waits for events to occur. The timeout parameter is a millisecond value: 0 returns immediately, and -1 blocks indefinitely, i.e. potentially forever. The function returns the number of events that need to be handled; a return value of 0 indicates a timeout.

Going back to the Looper::pollOnce method: each iteration of the for loop calls pollInner, so let's look at that function.

int Looper::pollInner(int timeoutMillis) {
#if DEBUG_POLL_AND_WAKE
    LOGD("%p ~ pollOnce - waiting: timeoutMillis=%d", this, timeoutMillis);
#endif

    int result = ALOOPER_POLL_WAKE;
    mResponses.clear();
    mResponseIndex = 0;

#ifdef LOOPER_STATISTICS
    nsecs_t pollStartTime = systemTime(SYSTEM_TIME_MONOTONIC);
#endif

#ifdef LOOPER_USES_EPOLL
    // This branch shows that epoll is used for I/O multiplexing.
    struct epoll_event eventItems[EPOLL_MAX_EVENTS];
    // epoll_wait blocks here until an event occurs or the timeout expires.
    int eventCount = epoll_wait(mEpollFd, eventItems, EPOLL_MAX_EVENTS, timeoutMillis);
    bool acquiredLock = false;
#else
    // Wait for wakeAndLock() waiters to run then set mPolling to true.
    mLock.lock();
    while (mWaiters != 0) {
        mResume.wait(mLock);
    }
    mPolling = true;
    mLock.unlock();

    size_t requestedCount = mRequestedFds.size();
    int eventCount = poll(mRequestedFds.editArray(), requestedCount, timeoutMillis);
#endif

    if (eventCount < 0) {
        if (errno == EINTR) {
            goto Done;
        }
        LOGW("Poll failed with an unexpected error, errno=%d", errno);
        result = ALOOPER_POLL_ERROR;
        goto Done;
    }

    if (eventCount == 0) {
#if DEBUG_POLL_AND_WAKE
        LOGD("%p ~ pollOnce - timeout", this);
#endif
        result = ALOOPER_POLL_TIMEOUT;
        goto Done;
    }

#if DEBUG_POLL_AND_WAKE
    LOGD("%p ~ pollOnce - handling events from %d fds", this, eventCount);
#endif

#ifdef LOOPER_USES_EPOLL
    for (int i = 0; i < eventCount; i++) {
        int fd = eventItems[i].data.fd;
        uint32_t epollEvents = eventItems[i].events;
        if (fd == mWakeReadPipeFd) {
            if (epollEvents & EPOLLIN) {
                awoken();
            } else {
                LOGW("Ignoring unexpected epoll events 0x%x on wake read pipe.", epollEvents);
            }
        } else {
            if (!acquiredLock) {
                mLock.lock();
                acquiredLock = true;
            }
            ssize_t requestIndex = mRequests.indexOfKey(fd);
            if (requestIndex >= 0) {
                int events = 0;
                if (epollEvents & EPOLLIN) events |= ALOOPER_EVENT_INPUT;
                if (epollEvents & EPOLLOUT) events |= ALOOPER_EVENT_OUTPUT;
                if (epollEvents & EPOLLERR) events |= ALOOPER_EVENT_ERROR;
                if (epollEvents & EPOLLHUP) events |= ALOOPER_EVENT_HANGUP;
                pushResponse(events, mRequests.valueAt(requestIndex));
            } else {
                LOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
                     "no longer registered.", epollEvents, fd);
            }
        }
    }
    if (acquiredLock) {
        mLock.unlock();
    }
Done: ;
#else
    for (size_t i = 0; i < requestedCount; i++) {
        const struct pollfd& requestedFd = mRequestedFds.itemAt(i);
        short pollEvents = requestedFd.revents;
        if (pollEvents) {
            if (requestedFd.fd == mWakeReadPipeFd) {
                if (pollEvents & POLLIN) {
                    // This is the read end of the wake pipe: read its data directly.
                    awoken();
                } else {
                    LOGW("Ignoring unexpected poll events 0x%x on wake read pipe.", pollEvents);
                }
            } else {
                int events = 0;
                if (pollEvents & POLLIN) events |= ALOOPER_EVENT_INPUT;
                if (pollEvents & POLLOUT) events |= ALOOPER_EVENT_OUTPUT;
                if (pollEvents & POLLERR) events |= ALOOPER_EVENT_ERROR;
                if (pollEvents & POLLHUP) events |= ALOOPER_EVENT_HANGUP;
                if (pollEvents & POLLNVAL) events |= ALOOPER_EVENT_INVALID;
                pushResponse(events, mRequests.itemAt(i));
            }
            if (--eventCount == 0) {
                break;
            }
        }
    }

Done:
    // Set mPolling to false and wake up the wakeAndLock() waiters.
    mLock.lock();
    mPolling = false;
    if (mWaiters != 0) {
        mAwake.broadcast();
    }
    mLock.unlock();
#endif

#ifdef LOOPER_STATISTICS
    nsecs_t pollEndTime = systemTime(SYSTEM_TIME_MONOTONIC);
    mSampledPolls += 1;
    if (timeoutMillis == 0) {
        mSampledZeroPollCount += 1;
        mSampledZeroPollLatencySum += pollEndTime - pollStartTime;
    } else if (timeoutMillis > 0 && result == ALOOPER_POLL_TIMEOUT) {
        mSampledTimeoutPollCount += 1;
        mSampledTimeoutPollLatencySum += pollEndTime - pollStartTime
                - milliseconds_to_nanoseconds(timeoutMillis);
    }
    if (mSampledPolls == SAMPLED_POLLS_TO_AGGREGATE) {
        LOGD("%p ~ poll latency statistics: %0.3fms zero timeout, %0.3fms non-zero timeout", this,
             0.000001f * float(mSampledZeroPollLatencySum) / mSampledZeroPollCount,
             0.000001f * float(mSampledTimeoutPollLatencySum) / mSampledTimeoutPollCount);
        mSampledPolls = 0;
        mSampledZeroPollCount = 0;
        mSampledZeroPollLatencySum = 0;
        mSampledTimeoutPollCount = 0;
        mSampledTimeoutPollLatencySum = 0;
    }
#endif

    for (size_t i = 0; i < mResponses.size(); i++) {
        const Response& response = mResponses.itemAt(i);
        if (response.request.callback) {
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
            LOGD("%p ~ pollOnce - invoking callback: fd=%d, events=0x%x, data=%p", this,
                 response.request.fd, response.events, response.request.data);
#endif
            int callbackResult = response.request.callback(
                    response.request.fd, response.events, response.request.data);
            if (callbackResult == 0) {
                removeFd(response.request.fd);
            }
            result = ALOOPER_POLL_CALLBACK;
        }
    }
    return result;
}

The epoll_wait call above waits for events using the timeoutMillis we pass down from the Java layer. In MessageQueue's next method, nextPollTimeoutMillis is set to -1 when there is no message, which means timeoutMillis is -1, causing epoll_wait to block indefinitely until an event occurs. If an event occurs and it is a read event on the wake pipe, awoken() reads the data from the pipe directly. Earlier, when we analyzed the Looper::wake method, that is where the data was written into the pipe.

Copyright notice: this article is the blogger's original work and may not be reprinted without the blogger's permission (Contact: QQ312037487, Email: [email protected]).
