Android Binder Mechanism (ServiceManager)


As an IPC mechanism, Binder plays a very important role in the Android system, so I spent some time studying it. Based on my understanding, I will discuss Binder from four aspects below; if anything is wrong, corrections are welcome. All the examples use MediaServer.

1. ServiceManager

In the Binder world, ServiceManager plays a role similar to DNS: a Server first registers its services there, and a Client later queries it to obtain the channel it needs to communicate with the Server process hosting the Service.

When discussing communication with ServiceManager, the book uses addService as its example; here I will use getService instead. Straight to the code.

/*static*/ const sp<IMediaPlayerService>&
IMediaDeathNotifier::getMediaPlayerService()
{
    ALOGV("getMediaPlayerService");
    Mutex::Autolock _l(sServiceLock);
    if (sMediaPlayerService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.player"));
            if (binder != 0) {
                break;
            }
            ALOGW("Media player service not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);

        if (sDeathNotifier == NULL) {
            sDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(sDeathNotifier);
        sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
    }
    ALOGE_IF(sMediaPlayerService == 0, "no media player service!?");
    return sMediaPlayerService;
}

First let's look at defaultServiceManager(). It is a singleton, implemented as follows:

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }

    return gDefaultServiceManager;
}

Here, ProcessState::self()->getContextObject(NULL) returns a BpBinder(0), so we have:

       gDefaultServiceManager = interface_cast<IServiceManager>(BpBinder(0));
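Why getContextObject(NULL) hands back a proxy for handle 0 can be seen in ProcessState. Below is a trimmed sketch based on ProcessState.cpp; the weak-reference bookkeeping and error handling are simplified, so treat it as illustrative rather than the verbatim source:

// Trimmed sketch of ProcessState (simplified, not verbatim):
// handle 0 is reserved for the context manager, i.e. servicemanager.
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);
    if (e != NULL) {
        if (e->binder == NULL) {
            // No proxy cached for this handle yet: create one and remember it.
            e->binder = new BpBinder(handle);
        }
        result = e->binder;
    }
    return result;
}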

By the definition of interface_cast, this becomes:

       gDefaultServiceManager = BpServiceManager(BpBinder(0));
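To see why, recall that interface_cast simply forwards to IServiceManager::asInterface, which the DECLARE/IMPLEMENT_META_INTERFACE macros generate. A simplified sketch of the generated code (based on IInterface.h and IServiceManager.cpp, not the verbatim source):

// Sketch of what interface_cast<IServiceManager>(obj) expands to.
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        // A remote BpBinder has no local implementation attached,
        // so queryLocalInterface() returns NULL...
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            // ...and a new proxy wrapping BpBinder(0) is created instead.
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}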

Next, let's look at the implementation of this line:

binder = sm->getService(String16("media.player"));

From the analysis above, sm is an instance of BpServiceManager. Going straight to the implementation of BpServiceManager in IServiceManager.cpp and finding its getService method, we see that its core is a call to checkService, implemented as follows:


virtual sp<IBinder> checkService(const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}
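For completeness, getService itself is little more than a retry loop around checkService; a sketch based on the same BpServiceManager in IServiceManager.cpp (the retry count and logging vary between Android versions):

// Sketch of BpServiceManager::getService(): retry checkService() a few
// times before giving up.
virtual sp<IBinder> getService(const String16& name) const
{
    for (unsigned n = 0; n < 5; n++) {
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        ALOGI("Waiting for service %s...", String8(name).string());
        sleep(1);
    }
    return NULL;
}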

Here a question arises: what does remote() return?

First, look at the definition of BpServiceManager:

class BpServiceManager : public BpInterface<IServiceManager>

template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase

Substituting the template parameter, this becomes:

class BpInterface : public IServiceManager, public BpRefBase

OK, in BpRefBase we find the definition of remote():

inline IBinder* remote() { return mRemote; }

 

When does mRemote get assigned? Let's look at BpServiceManager's constructor:

BpServiceManager(const sp<IBinder>& impl)
    : BpInterface<IServiceManager>(impl)
{
}

inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}

BpRefBase::BpRefBase(const sp<IBinder>& o)
    : mRemote(o.get()), mRefs(NULL), mState(0)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}
At this point we know that remote() returns the BpBinder object created earlier, BpBinder(0). So remote()->transact() is actually a call to BpBinder's transact method. Let's jump into BpBinder and look at the implementation of transact:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
It hands all the work over to IPCThreadState. What is IPCThreadState? It is the worker that actually moves the data in a Binder transaction. Every thread has its own IPCThreadState, and each IPCThreadState owns an mIn and an mOut: mIn receives data coming from the Binder driver, and mOut buffers data to be sent to the Binder driver. OK, let's jump into IPCThreadState.
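Incidentally, the claim that every thread has its own IPCThreadState comes from the way self() is implemented with thread-local storage. A trimmed sketch based on IPCThreadState.cpp (shutdown handling omitted, details vary by version):

// Sketch of IPCThreadState::self(): one instance per thread via pthread TLS.
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;   // ctor calls pthread_setspecific(k, this)
    }

    // First call in this process: create the TLS key once, then retry.
    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        pthread_key_create(&gTLS, threadDestructor);
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}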

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    flags |= TF_ACCEPT_FDS;
    ...
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    ...
    err = waitForResponse(reply);
    ...
    return err;
}

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
writeTransactionData only writes the data into mOut, where it waits to be sent to the Binder driver; the next step is waitForResponse.

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        switch (cmd) {
        ...
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;
        ...
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
Notice that it keeps calling talkWithDriver. Judging by the name, this is the function that actually talks to the Binder driver, so let's take a closer look.

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (long unsigned int)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (long unsigned int)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < (ssize_t)mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }

    return err;
}

In talkWithDriver, IPCThreadState keeps writing to and reading from the Binder driver. First, the data that writeTransactionData prepared in mOut is written to the driver; then the thread waits for new data to appear from the driver. Who writes that data? It should be the target process, so let's see how that part works.

In Binder IPC, a process always communicates by first sending a BC_XXX command to the Binder driver; after some processing, the driver forwards the command to the target process as the corresponding BR_XXX command.

If there is a return value, the target process likewise first sends the result to the Binder driver as a BC_REPLY, and the driver then forwards it as a BR_REPLY command.



After the sending process writes data into the driver, the Binder driver first determines whether the receiver of the command is the Service Manager or an ordinary Server. The criterion is tr->target.handle: if tr->target.handle == 0, the command is addressed to the special node, i.e. the Service Manager; otherwise, for the general case, the driver looks up the matching node reference, which it should normally find for a valid handle. Through that node reference the driver locates the Binder node (the real node) that will handle the command.
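A heavily trimmed sketch of this routing decision, based on binder_transaction() in the kernel driver (drivers/staging/android/binder.c); locking, reference counting, error paths and the reply case are omitted:

/* Sketch of the target lookup in the kernel's binder_transaction(). */
if (tr->target.handle) {
    struct binder_ref *ref;

    /* Ordinary server: resolve the handle to a node reference
     * held by the sending process. */
    ref = binder_get_ref(proc, tr->target.handle);
    if (ref == NULL)
        goto err_invalid_target_handle;
    target_node = ref->node;
} else {
    /* handle == 0: the special context-manager node, i.e. servicemanager. */
    target_node = binder_context_mgr_node;
    if (target_node == NULL)
        goto err_no_context_mgr_node;
}
target_proc = target_node->proc;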

In writeTransactionData above, tr->target.handle == 0, so the Service Manager process receives a BR_TRANSACTION command. After handling the command, Service Manager writes the result back to the Binder driver as a BC_REPLY, so that waitForResponse (running in the client process) receives the BR_REPLY response and one round trip completes.

Next, let's jump to service_manager.c and see how the Service Manager process handles the BR_TRANSACTION command.

int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}

void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

In its main loop, the Service Manager process keeps operating on the Binder driver; whenever data is read, it calls binder_parse to process it.

int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uint32_t *ptr, uint32_t size, binder_handler func)
{
    int r = 1;
    uint32_t *end = ptr + (size / 4);

    while (ptr < end) {
        uint32_t cmd = *ptr++;
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        ...
        case BR_TRANSACTION: {
            struct binder_txn *txn = (void *) ptr;
            if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
                res = func(bs, txn, &msg, &reply);
                // Write the result back to the Binder driver
                binder_send_reply(bs, &reply, txn->data, res);
            }
            ptr += sizeof(*txn) / sizeof(uint32_t);
            break;
        }
        ...
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }

    return r;
}
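The binder_send_reply() call at the end of the BR_TRANSACTION branch is what pushes the result back into the driver. A trimmed sketch, based on the small binder.c helper library that service_manager links against (the error/status branch is omitted, and field layout is simplified):

/* Sketch of binder_send_reply(): queue a BC_FREE_BUFFER for the received
 * transaction buffer plus a BC_REPLY carrying the reply binder_io, then
 * hand both to the driver in a single binder_write(). */
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       void *buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        void *buffer;
        uint32_t cmd_reply;
        struct binder_txn txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;     /* release the buffer we were handed */
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;          /* the actual reply transaction */
    data.txn.target = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    data.txn.flags = 0;
    data.txn.data_size = reply->data - reply->data0;
    data.txn.offs_size = ((char*) reply->offs) - ((char*) reply->offs0);
    data.txn.data = reply->data0;
    data.txn.offs = reply->offs0;

    binder_write(bs, &data, sizeof(data));
}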

The func here is the svcmgr_handler passed in from main. It handles the various commands, including add_service and get_service; once a command has been processed, binder_send_reply writes the reply back to the Binder driver, which returns it to the client process. Let's look at the implementation of svcmgr_handler.

int svcmgr_handler(struct binder_state *bs,
                   struct binder_txn *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    unsigned len;
    void *ptr;
    uint32_t strict_policy;
    int allow_isolated;

//    ALOGI("target=%p code=%d pid=%d uid=%d\n",
//         txn->target, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target != svcmgr_handle)
        return -1;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s));
        return -1;
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = do_find_service(bs, s, len, txn->sender_euid);
        if (!ptr)
            break;
        bio_put_ref(reply, ptr);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        ptr = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, ptr, txn->sender_euid, allow_isolated))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        unsigned n = bio_get_uint32(msg);

        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}

Indeed, we can see that commands such as SVC_MGR_CHECK_SERVICE and SVC_MGR_ADD_SERVICE all end up being handled here.
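What actually happens on SVC_MGR_CHECK_SERVICE is a walk over the singly linked svclist that do_add_service builds up. A simplified sketch of the lookup, based on find_svc()/do_find_service() in service_manager.c (the isolated-process permission check is omitted):

/* Sketch of the name lookup behind SVC_MGR_CHECK_SERVICE. */
struct svcinfo *find_svc(uint16_t *s16, unsigned len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        /* Service names are UTF-16 strings such as "media.player". */
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t)))
            return si;
    }
    return 0;
}

void *do_find_service(struct binder_state *bs, uint16_t *s, unsigned len, unsigned uid)
{
    struct svcinfo *si = find_svc(s, len);

    if (!si || !si->ptr)
        return 0;
    return si->ptr;   /* the handle that bio_put_ref() flattens into the reply */
}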

OK, that completes the whole flow of obtaining a Service.
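As a wrap-up, here is the whole path of one getService("media.player") call, collapsed into a commented call chain; this is only a summary of the code walked through above, not new code:

// Summary of the getService() round trip traced in this article:
//
// IMediaDeathNotifier::getMediaPlayerService()
//   -> defaultServiceManager()                        // BpServiceManager(BpBinder(0))
//   -> BpServiceManager::getService("media.player")   // retries checkService()
//      -> BpServiceManager::checkService()            // Parcel + CHECK_SERVICE_TRANSACTION
//         -> BpBinder(0)::transact()
//            -> IPCThreadState::transact()            // writeTransactionData() -> mOut
//               -> waitForResponse() / talkWithDriver()
//                  -> ioctl(fd, BINDER_WRITE_READ)    // BC_TRANSACTION into the driver
//
// binder driver: handle 0 -> context manager node -> BR_TRANSACTION
//
// servicemanager: binder_loop() -> binder_parse() -> svcmgr_handler()
//                 -> do_find_service() -> binder_send_reply()   // BC_REPLY
//
// binder driver: BR_REPLY back to the client thread
//   -> waitForResponse() fills the reply Parcel
//   -> reply.readStrongBinder()                       // a BpBinder for media.player
//   -> interface_cast<IMediaPlayerService>(binder)    // the usable proxy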
