

    Android 4.0.3 Binder Mechanism Analysis

    1. Introduction
    Binder is the part of Android that beginners find hardest to figure out, and it is everywhere: a great many Services talk to their clients through the Binder mechanism. Once you understand Binder, you understand a large part of how programs in the system actually run.
    We will use MediaService as the example for analyzing how Binder is used:
    - ServiceManager, the manager of all services in the Android OS
    - MediaService, the program that registers (among others) the media playback service MediaPlayerService, which is the only one we will analyze here
    - MediaPlayerClient, the client program that interacts with MediaPlayerService

    Let's start with the MediaService application itself.

    2. The Birth of MediaService
    MediaService is an ordinary application. Android may pile all sorts of Java on top, but underneath it is still a full Linux system, and it is not so exotic that every program has to be written in Java. So MS (MediaService) is just a plain C++ program like any other.

    The source of MediaService lives in frameworks/base/media/mediaserver/main_mediaserver.cpp. Let's see what it actually is:

    int main(int argc, char** argv)
    {
        sp<ProcessState> proc(ProcessState::self());
        sp<IServiceManager> sm = defaultServiceManager();
        LOGI("ServiceManager: %p", sm.get());
        AudioFlinger::instantiate();
        MediaPlayerService::instantiate();
        CameraService::instantiate();
        AudioPolicyService::instantiate();
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }
    

    Of all of these, we will only analyze MediaPlayerService.
    So many questions already; it looks like we will have to dig into these functions one by one. But first, a quick word about this sp thing.
    Is sp a smart pointer or a strong pointer? It actually stands for strong pointer (there is a matching wp, a weak pointer). It is part of the reference-counting smart-pointer machinery Google built to spare C/C++ programmers from managing allocation and freeing by hand, much like Java's references (WeakReference and friends). For reading the code, though, you don't need to dwell on it: just treat it as an ordinary pointer, i.e. read sp<IServiceManager> roughly as IServiceManager*.
    So for the rest of this analysis, just read sp<XXX> as XXX* (a tiny sketch below shows what it actually does under the hood).
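    For completeness, here is a small sketch of what sp really does: it reference-counts objects derived from RefBase, so "just a pointer" is a simplification that is fine for reading the code. The Foo class is made up purely for illustration.

    #include <utils/RefBase.h>   // android::RefBase, android::sp

    using namespace android;

    // Foo is a made-up class; anything managed by sp<> derives from RefBase.
    class Foo : public RefBase {
    public:
        void hello() { }
    };

    void useFoo()
    {
        sp<Foo> p = new Foo();   // strong count goes to 1
        p->hello();              // used exactly like a raw Foo*
        sp<Foo> q = p;           // strong count goes to 2
    }                            // both sp's go out of scope, Foo is deleted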
    Now let's walk, in order, through the preparation that happens before MediaPlayerService is instantiated:

    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();

    Step 1:

    The first call is ProcessState::self(), implemented in frameworks/base/libs/binder/ProcessState.cpp:

    sp<ProcessState> ProcessState::self()
    {
        if (gProcess != NULL) return gProcess;
        
        AutoMutex _l(gProcessMutex);
        if (gProcess == NULL) gProcess = new ProcessState;
        return gProcess;
    }

    The first time through, gProcess is necessarily NULL, so a ProcessState object is instantiated and returned. So what exactly does this object's constructor do?

    ProcessState::ProcessState()
        : mDriverFD(open_driver())
        , mVMStart(MAP_FAILED)
        , mManagesContexts(false)
        , mBinderContextCheckFunc(NULL)
        , mBinderContextUserData(NULL)
        , mThreadPoolStarted(false)
        , mThreadPoolSeq(1)
    {
        if (mDriverFD >= 0) {
            // XXX Ideally, there should be a specific define for whether we
            // have mmap (or whether we could possibly have the kernel module
            // availabla).
    #if !defined(HAVE_WIN32_IPC)
            // mmap the binder, providing a chunk of virtual address space to receive transactions.
            mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
            if (mVMStart == MAP_FAILED) {
                // *sigh*
                LOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
                close(mDriverFD);
                mDriverFD = -1;
            }
    #else
            mDriverFD = -1;
    #endif
        }
    
        LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
    }
    

    Look carefully at the member-initializer list after the constructor name and the ':'. This is standard fare in C++, but C programmers beware, because here it matters a great deal: overlooking most of these member initializations would not be fatal, but carelessly skipping the very first one would be. So who is the culprit?

    mDriverFD(open_driver())

    That's the one: it calls open_driver() and stores the result in mDriverFD. Let's see what that function does and why it is so important.

    static int open_driver()
    {
        int fd = open("/dev/binder", O_RDWR);
        if (fd >= 0) {
            fcntl(fd, F_SETFD, FD_CLOEXEC);
            int vers;
            status_t result = ioctl(fd, BINDER_VERSION, &vers);
            if (result == -1) {
                LOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
                close(fd);
                fd = -1;
            }
            if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
                LOGE("Binder driver protocol does not match user space protocol!");
                close(fd);
                fd = -1;
            }
            size_t maxThreads = 15;
            result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
            if (result == -1) {
                LOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
            }
        } else {
            LOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
        }
        return fd;
    }
    

    This function opens the device /dev/binder, which is backed by the binder driver in the kernel. It then issues the BINDER_VERSION ioctl to read the driver's protocol version into vers and checks that it matches what user space expects, and finally issues the BINDER_SET_MAX_THREADS ioctl, which, as the name suggests, tells the driver the maximum number of binder threads this process is willing to spawn (15 here, held in maxThreads). With a valid fd in hand, let's keep going.

    Back in ProcessState's constructor, mmap is then used to map the fd, giving the process a chunk of virtual address space through which the driver can deliver transaction data; the process reads incoming transactions out of this mapped region rather than read()-ing the fd directly.

    That completes step 1. To summarize, it accomplished two things:

    1. A ProcessState object was instantiated, its address saved in the proc pointer, and its members initialized, fully prepared for later use. Keep the variable proc in mind.

    2. The /dev/binder device was opened, yielding the fd, and a region of virtual memory was mmap'ed for the binder device to deliver transactions into. Keep this fd in mind as well.


    Step 2:
    Next comes defaultServiceManager(), which returns an IServiceManager pointer that is stored in sm. Here is its implementation:

    sp<IServiceManager> defaultServiceManager()
    {
        if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
        
        {
            AutoMutex _l(gDefaultServiceManagerLock);
            if (gDefaultServiceManager == NULL) {
                gDefaultServiceManager = interface_cast<IServiceManager>(
                    ProcessState::self()->getContextObject(NULL));
            }
        }
        
        return gDefaultServiceManager;
    }

    The first time this runs, gDefaultServiceManager is necessarily NULL, so the code inside the braces executes. It is a single statement and looks simple enough, but there is a lot behind it:

    gDefaultServiceManager = interface_cast<IServiceManager>( ProcessState::self()->getContextObject(NULL));

    Let's take it apart in evaluation order. The innermost call is ProcessState::self()->getContextObject(NULL). ProcessState::self() returns the ProcessState object we instantiated in step 1 (the proc variable I told you to remember), so we call its getContextObject method with NULL as the argument:

    sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
    {
        return getStrongProxyForHandle(0);
    }
    sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
    {
        sp<IBinder> result;
    
        AutoMutex _l(mLock);
    
        handle_entry* e = lookupHandleLocked(handle);
    
        if (e != NULL) {
            // We need to create a new BpBinder if there isn't currently one, OR we
            // are unable to acquire a weak reference on this current one.  See comment
            // in getWeakProxyForHandle() for more info about this.
            IBinder* b = e->binder;
            if (b == NULL || !e->refs->attemptIncWeak(this)) {
                b = new BpBinder(handle); 
                e->binder = b;
                if (b) e->refs = b->getWeakRefs();
                result = b;
            } else {
                // This little bit of nastyness is to allow us to add a primary
                // reference to the remote proxy when this team doesn't have one
                // but another team is sending the handle to us.
                result.force_set(b);
                e->refs->decWeak(this);
            }
        }
    
        return result;
    }
    

    The handle passed in is 0. lookupHandleLocked is called first to look up the entry for that index in an internal array; I won't go into it, other than that it returns a handle_entry defined as follows:

    struct handle_entry {
          IBinder* binder;
          RefBase::weakref_type* refs;
    };

    The first time we fetch this handle_entry, its IBinder* member b is of course still NULL, so a BpBinder is instantiated. That means we can't avoid looking at BpBinder's constructor:

    BpBinder::BpBinder(int32_t handle)
        : mHandle(handle)
        , mAlive(1)
        , mObitsSent(0)
        , mObituaries(NULL)
    {
        LOGV("Creating BpBinder %p handle %d\n", this, mHandle);
    
        extendObjectLifetime(OBJECT_LIFETIME_WEAK);
        IPCThreadState::self()->incWeakHandle(handle);
    }

    It initializes its members and finally calls IPCThreadState::self()->incWeakHandle(handle). This is turning into a snowball, growing as it rolls, but there's no way around it. Let's look at IPCThreadState's constructor first:

    IPCThreadState::IPCThreadState()
        : mProcess(ProcessState::self()),
          mMyThreadId(androidGetTid()),
          mStrictModePolicy(0),
          mLastTransactionBinderFlags(0)
    {
        pthread_setspecific(gTLS, this);
        clearCaller();
        mIn.setDataCapacity(256);
        mOut.setDataCapacity(256);
    }

    mIn and mOut are two Parcels. What are they for? Better not to dig into them just yet, or we really will get dizzy (the small aside below shows the basic idea of a Parcel, nothing more).
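    As an aside: a Parcel is essentially a flat serialization buffer you write values into and later read back out of. A minimal sketch of plain Parcel usage follows; nothing Binder-specific, and the values are made up for illustration:

    #include <binder/Parcel.h>
    #include <utils/String16.h>

    using namespace android;

    void parcelDemo()
    {
        Parcel p;
        p.writeInt32(42);                          // append a 4-byte integer
        p.writeString16(String16("media.player")); // append a UTF-16 string

        p.setDataPosition(0);                      // rewind before reading back
        int32_t num   = p.readInt32();             // 42
        String16 name = p.readString16();          // "media.player"
    }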
    Now let's look at IPCThreadState::self():

    IPCThreadState* IPCThreadState::self()
    {
        if (gHaveTLS) {
    restart:
            const pthread_key_t k = gTLS;
            IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
            if (st) return st;
            return new IPCThreadState;
        }
        
        if (gShutdown) return NULL;
        
        pthread_mutex_lock(&gTLSMutex);
        if (!gHaveTLS) {
            if (pthread_key_create(&gTLS, threadDestructor) != 0) {
                pthread_mutex_unlock(&gTLSMutex);
                return NULL;
            }
            gHaveTLS = true;
        }
        pthread_mutex_unlock(&gTLSMutex);
        goto restart;
    }
    

    This method is a bit abstract. On the first call gHaveTLS is false, so we fall through to the if (!gHaveTLS) branch, which creates a pthread key. TLS is thread-local storage: pthread_key_create carves out a per-thread slot of private data, which here is used to stash each thread's IPCThreadState (see http://xianjunzhang.blog.sohu.com/21537031.html for a short introduction to TLS). The goto then jumps back up, a new IPCThreadState is created and stored in the TLS slot (its constructor calls pthread_setspecific), and its pointer is returned. Back in BpBinder's constructor, incWeakHandle is then called on it, which, looking at its body, just queues a BC_INCREFS for handle 0 in mOut, i.e. a weak-reference bump for that handle. The same per-thread-singleton pattern is shown in a small standalone sketch below.
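    If the TLS business feels abstract, here is a small, self-contained sketch of the same per-thread-singleton pattern. This is plain pthreads code, not AOSP code, and the ThreadState struct is made up:

    #include <pthread.h>

    struct ThreadState { int counter; };

    static pthread_key_t  gKey;
    static pthread_once_t gKeyOnce = PTHREAD_ONCE_INIT;

    static void destroyState(void* p) { delete static_cast<ThreadState*>(p); }
    static void makeKey()             { pthread_key_create(&gKey, destroyState); }

    // Every calling thread gets its own ThreadState, the same way
    // IPCThreadState::self() hands out one IPCThreadState per thread.
    ThreadState* selfState()
    {
        pthread_once(&gKeyOnce, makeKey);
        ThreadState* st = static_cast<ThreadState*>(pthread_getspecific(gKey));
        if (st == NULL) {
            st = new ThreadState();
            pthread_setspecific(gKey, st);   // stash it in this thread's slot
        }
        return st;
    }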
    And with that, new BpBinder is done. So what have we created by this point?
    A ProcessState.
    An IPCThreadState, and it belongs to the main thread.
    A BpBinder whose internal handle value is 0.

    So at this point, gDefaultServiceManager = interface_cast<IServiceManager>( ProcessState::self()->getContextObject(NULL)); can be read as the following call:

    gDefaultServiceManager = interface_cast<IServiceManager>(new BpBinder(0));

    Let's look at how interface_cast is defined, in frameworks/base/include/binder/IInterface.h:

    template<typename INTERFACE>
    inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
    {
        return INTERFACE::asInterface(obj);
    }

    So the call above can be rewritten once more as gDefaultServiceManager = IServiceManager::asInterface(new BpBinder(0));
    And what kind of class is IServiceManager? Keep following the trail: it is defined in frameworks/base/include/binder/IServiceManager.h:

    class IServiceManager : public IInterface
    {
    public:
        DECLARE_META_INTERFACE(ServiceManager);
    
        /**
         * Retrieve an existing service, blocking for a few seconds
         * if it doesn't yet exist.
         */
        virtual sp<IBinder>         getService( const String16& name) const = 0;
    
        /**
         * Retrieve an existing service, non-blocking.
         */
        virtual sp<IBinder>         checkService( const String16& name) const = 0;
    
        /**
         * Register a service.
         */
        virtual status_t            addService( const String16& name,
                                                const sp<IBinder>& service) = 0;
    
        /**
         * Return list of all existing services.
         */
        virtual Vector<String16>    listServices() = 0;
    
        enum {
            GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
            CHECK_SERVICE_TRANSACTION,
            ADD_SERVICE_TRANSACTION,
            LIST_SERVICES_TRANSACTION,
        };
    };
    

    ServiceManager: literally, the class that manages services. Manages what? Adding services, querying services, and so on. The leading "I" simply marks it as an interface class. Good enough for now.
    DECLARE_META_INTERFACE(ServiceManager)??
    Doesn't this look a lot like MFC? Microsoft's influence runs deep. Anyone who knows MFC knows that where there is a DECLARE there must be an IMPLEMENT.
    Sure enough, both macros, DECLARE_META_INTERFACE and IMPLEMENT_META_INTERFACE(INTERFACE, NAME), are defined in the same IInterface.h we just saw. Let's first see what DECLARE_META_INTERFACE adds to IServiceManager:

    #define DECLARE_META_INTERFACE(INTERFACE)                               \
        static const android::String16 descriptor;                          \
        static android::sp<I##INTERFACE> asInterface(                       \
                const android::sp<android::IBinder>& obj);                  \
        virtual const android::String16& getInterfaceDescriptor() const;    \
        I##INTERFACE();                                                     \
        virtual ~I##INTERFACE();                                            \

    Expanded for IServiceManager, that amounts to adding:
    static const android::String16 descriptor;                            // a descriptor string
    static android::sp<IServiceManager> asInterface(const android::sp<android::IBinder>& obj);   // an asInterface function
    virtual const android::String16& getInterfaceDescriptor() const;      // a getter for the descriptor
    IServiceManager();                                                     // a constructor
    virtual ~IServiceManager();                                            // a virtual destructor

    And where is the IMPLEMENT macro used? In IServiceManager.cpp, located at frameworks/base/libs/binder/IServiceManager.cpp:

    IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");

    Now look at the definition of IMPLEMENT_META_INTERFACE, again in IInterface.h:

    #define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
        const android::String16 I##INTERFACE::descriptor(NAME);             \
        const android::String16&                                            \
                I##INTERFACE::getInterfaceDescriptor() const {              \
            return I##INTERFACE::descriptor;                                \
        }                                                                   \
        android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
                const android::sp<android::IBinder>& obj)                   \
        {                                                                   \
            android::sp<I##INTERFACE> intr;                                 \
            if (obj != NULL) {                                              \
                intr = static_cast<I##INTERFACE*>(                          \
                    obj->queryLocalInterface(                               \
                            I##INTERFACE::descriptor).get());               \
                if (intr == NULL) {                                         \
                    intr = new Bp##INTERFACE(obj);                          \
                }                                                           \
            }                                                               \
            return intr;                                                    \
        }                                                                   \
        I##INTERFACE::I##INTERFACE() { }                                    \
        I##INTERFACE::~I##INTERFACE() { }                                   \
    

    Painful, right? Macros are always a headache to read, so let's expand it right away:
    const android::String16 IServiceManager::descriptor("android.os.IServiceManager");

    const android::String16& IServiceManager::getInterfaceDescriptor() const {
        return IServiceManager::descriptor;   // returns "android.os.IServiceManager"
    }

    android::sp<IServiceManager> IServiceManager::asInterface(
            const android::sp<android::IBinder>& obj)
    {
        android::sp<IServiceManager> intr;
        if (obj != NULL) {
            intr = static_cast<IServiceManager*>(
                obj->queryLocalInterface(IServiceManager::descriptor).get());
            if (intr == NULL) {
                intr = new BpServiceManager(obj);
            }
        }
        return intr;
    }

    IServiceManager::IServiceManager() { }
    IServiceManager::~IServiceManager() { }

    This supplies the implementations for the declarations added above, in particular the asInterface method declared by DECLARE_META_INTERFACE.

    Onward down this long and winding road. As noted above,

    gDefaultServiceManager = interface_cast<IServiceManager>( ProcessState::self()->getContextObject(NULL)); can be read as

    gDefaultServiceManager = interface_cast<IServiceManager>(new BpBinder(0)); which can be unfolded further into

    gDefaultServiceManager = IServiceManager::asInterface (new BpBinder(0));

    Now it should click: IServiceManager::asInterface is exactly the function we just expanded out of IMPLEMENT_META_INTERFACE. Convoluted, I know. Let's look at its body once more:

    android::sp<IServiceManager> IServiceManager::asInterface(
            const android::sp<android::IBinder>& obj)
    {
        android::sp<IServiceManager> intr;
        if (obj != NULL) {
            intr = static_cast<IServiceManager*>(
                obj->queryLocalInterface(IServiceManager::descriptor).get());
            if (intr == NULL) {
                // Finally, something related to IServiceManager! What actually
                // gets returned is a BpServiceManager(new BpBinder(0)).
                intr = new BpServiceManager(obj);
            }
        }
        return intr;
    }

    So what is this BpServiceManager, and what does the p mean?
    With BpServiceManager we can finally talk a bit of architecture. The p stands for proxy, so Bp is Binder proxy: BpServiceManager is the Binder proxy for ServiceManager (SM). Being a proxy, it naturally wants to be transparent to its users, and sure enough, BpServiceManager is defined in the very same IServiceManager.cpp:

    class BpServiceManager : public BpInterface<IServiceManager>
    {
    public:
        BpServiceManager(const sp<IBinder>& impl)
            : BpInterface<IServiceManager>(impl)
        {
        }
    
        virtual sp<IBinder> getService(const String16& name) const
        {
            unsigned n;
            for (n = 0; n < 5; n++){
                sp<IBinder> svc = checkService(name);
                if (svc != NULL) return svc;
                LOGI("Waiting for service %s...\n", String8(name).string());
                sleep(1);
            }
            return NULL;
        }
    
        virtual sp<IBinder> checkService( const String16& name) const
        {
            Parcel data, reply;
            data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
            data.writeString16(name);
            remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
            return reply.readStrongBinder();
        }
    
        virtual status_t addService(const String16& name, const sp<IBinder>& service)
        {
            Parcel data, reply;
            data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
            data.writeString16(name);
            data.writeStrongBinder(service);
            status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
            return err == NO_ERROR ? reply.readExceptionCode() : err;
        }
    
        virtual Vector<String16> listServices()
        {
            Vector<String16> res;
            int n = 0;
    
            for (;;) {
                Parcel data, reply;
                data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
                data.writeInt32(n++);
                status_t err = remote()->transact(LIST_SERVICES_TRANSACTION, data, &reply);
                if (err != NO_ERROR)
                    break;
                res.add(reply.readString16());
            }
            return res;
        }
    };
    

    The declaration class BpServiceManager : public BpInterface<IServiceManager> means BpServiceManager inherits BpInterface<IServiceManager>, which itself derives from IServiceManager (and from BpRefBase), so IServiceManager's methods such as addService necessarily get implemented in this class.

    Good. At this point we know: sp<IServiceManager> sm = defaultServiceManager(); actually returns a BpServiceManager, and its remote object is a BpBinder constructed with handle 0 (a short client-side sketch below shows how these pieces are meant to be used).
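    To see how these pieces fit together from the other side, here is a sketch of the usual client-side lookup pattern. The exact call site is not part of this article; IMediaPlayerService and the header paths are the standard 4.0.3 ones as far as I know, but treat the snippet as illustrative rather than verbatim client code:

    #include <binder/IServiceManager.h>
    #include <media/IMediaPlayerService.h>

    using namespace android;

    void lookupMediaPlayerService()
    {
        // really a BpServiceManager wrapping BpBinder(0)
        sp<IServiceManager> sm = defaultServiceManager();

        // ask service_manager (handle 0) for whatever was registered
        // under the name "media.player" (see addService in step 3 below)
        sp<IBinder> binder = sm->getService(String16("media.player"));

        // wrap that handle in a BpMediaPlayerService proxy
        sp<IMediaPlayerService> player =
                interface_cast<IMediaPlayerService>(binder);
    }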


    Step 3:

    Now let's go back to MediaService's main() and keep walking down the road ahead; only from this point on does MediaPlayerService itself really enter the picture:

    MediaPlayerService::instantiate();

    void MediaPlayerService::instantiate() {
        defaultServiceManager()->addService(
                String16("media.player"), new MediaPlayerService());
    }

    It is worth dwelling here for a moment. MediaPlayerService is defined in frameworks/base/media/libmediaplayerservice/MediaPlayerService.h; let's take a look:

    class MediaPlayerService : public BnMediaPlayerService

    MediaPlayerService derives from BnMediaPlayerService. BnXXX, BpXXX... it is getting dizzying.
    Bn means Binder native, the counterpart of Bp; if Bp's p means proxy, then at the other end there must be something for the proxy to talk to, and that something is Bn.
    Things are starting to tangle, so let's take stock of what has been constructed so far:
    1. A BpServiceManager, the result of step 2.
    2. A BnMediaPlayerService, or rather a MediaPlayerService, which derives from BnMediaPlayerService, so constructing it necessarily constructs its parent as well.
    These two are not the two ends of the same connection, as the names already tell you: BpServiceManager's counterpart would be BnServiceManager, and BnMediaPlayerService's counterpart would be BpMediaPlayerService.
    Now the key point: I have created a new service, BnMediaPlayerService, and I want ServiceManager to know about it. How do I talk to ServiceManager? Through BpServiceManager, which is exactly what confirms that BpServiceManager is ServiceManager's proxy: notifying him through his proxy is like reaching the boss through his secretary. That is why we call BpServiceManager's addService function.
    And why have a ServiceManager at all? That is part of how Android works: every Service must be added to ServiceManager so it can be managed centrally, and so clients can query which Services exist in the system. Notice that we pass in a string, so Services can be looked up by a human-readable name.

    So: first a MediaPlayerService object is instantiated and passed into addService; defaultServiceManager returns the BpServiceManager created in step 2, and its addService function is called.

    MediaPlayerService::MediaPlayerService()
    {
        LOGV("MediaPlayerService created");
        mNextConnId = 1;
    
        mBatteryAudio.refCount = 0;
        for (int i = 0; i < NUM_AUDIO_DEVICES; i++) {
            mBatteryAudio.deviceOn[i] = 0;
            mBatteryAudio.lastTime[i] = 0;
            mBatteryAudio.totalTime[i] = 0;
        }
        // speaker is on by default
        mBatteryAudio.deviceOn[SPEAKER] = 1;
    }

    We will spend some more time on the MediaPlayerService class itself later; for now let's keep following addService:

        virtual status_t addService(const String16& name, const sp<IBinder>& service)
        {
            Parcel data, reply;
            data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
            data.writeString16(name);
            data.writeStrongBinder(service);
            status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
            return err == NO_ERROR ? reply.readExceptionCode() : err;
        }

    data is the command parcel sent to BnServiceManager. First the interface name is written in, the android.os.IServiceManager string, then the name of the new service, media.player, then the new service itself, the MediaPlayerService object, as a strong binder. Finally remote()'s transact function is called, the answer comes back in reply, and the function returns reply's exception code (or the transaction error). The data.writeXXX calls I will take at face value for now rather than chase down, but this line:

    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);

    absolutely has to be understood, headache or not. To do that we need to backtrack a little. Where to? To BpServiceManager's constructor:

    BpServiceManager(const sp<IBinder>& impl)
            : BpInterface<IServiceManager>(impl)
        {
        }

    It does nothing of its own, but don't overlook the BpInterface<IServiceManager>(impl) in its initializer list (curse you, Android, I missed it the first time). So what is this impl parameter? It is the BpBinder(0) we constructed back in step 2. Keep going: what does BpInterface<IServiceManager>(impl) actually do?
    Here is BpInterface's definition, in frameworks/base/include/binder/IInterface.h:

    template<typename INTERFACE>
    inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
        : BpRefBase(remote)
    {
    }

    Argh, again it does nothing of its own except run BpRefBase(remote), where remote is still our old friend from step 2, the BpBinder object. Keep digging: BpRefBase is defined in frameworks/base/libs/binder/Binder.cpp:

    BpRefBase::BpRefBase(const sp<IBinder>& o)
        : mRemote(o.get()), mRefs(NULL), mState(0)
    {
        extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    
        if (mRemote) {
            mRemote->incStrong(this);           // Removed on first IncStrong().
            mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
        }
    }

    Here your eyes should light up: mRemote finally appears. o.get() fetches the BpBinder and assigns it to mRemote, so we have found mRemote and everything above falls into place: remote() simply returns mRemote, which means the call amounts to BpBinder::transact(ADD_SERVICE_TRANSACTION, data, &reply).
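    For reference, remote() itself is nothing more than an inline accessor that BpRefBase exposes to its subclasses, roughly as it appears in Binder.h:

    inline  IBinder*        remote()                { return mRemote; }
    inline  IBinder*        remote() const          { return mRemote; }

    GO ON then: BpBinder lives in frameworks/base/libs/binder/BpBinder.cpp.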

    status_t BpBinder::transact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
    {
        // Once a binder has died, it will never come back to life.
        if (mAlive) {
            status_t status = IPCThreadState::self()->transact(
                mHandle, code, data, reply, flags);
            if (status == DEAD_OBJECT) mAlive = 0;
            return status;
        }
    
        return DEAD_OBJECT;
    }

    This in turn calls IPCThreadState's transact function:

    status_t IPCThreadState::transact(int32_t handle,
                                      uint32_t code, const Parcel& data,
                                      Parcel* reply, uint32_t flags)
    {
        status_t err = data.errorCheck();
    
        flags |= TF_ACCEPT_FDS;
    
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
                << handle << " / code " << TypeCode(code) << ": "
                << indent << data << dedent << endl;
        }
        
        if (err == NO_ERROR) {
            LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
                (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
            err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
        }
        
        if (err != NO_ERROR) {
            if (reply) reply->setError(err);
            return (mLastError = err);
        }
        
        if ((flags & TF_ONE_WAY) == 0) {
            #if 0
            if (code == 4) { // relayout
                LOGI(">>>>>> CALLING transaction 4");
            } else {
                LOGI(">>>>>> CALLING transaction %d", code);
            }
            #endif
            if (reply) {
                err = waitForResponse(reply);
            } else {
                Parcel fakeReply;
                err = waitForResponse(&fakeReply);
            }
            #if 0
            if (code == 4) { // relayout
                LOGI("<<<<<< RETURNING transaction 4");
            } else {
                LOGI("<<<<<< RETURNING transaction %d", code);
            }
            #endif
            
            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                    << handle << ": ";
                if (reply) alog << indent << *reply << dedent << endl;
                else alog << "(none requested)" << endl;
            }
        } else {
            err = waitForResponse(NULL, NULL);
        }
        
        return err;
    }
    

    Two functions deserve close attention here: writeTransactionData, which sends the data, and waitForResponse, which waits for the reply.

    status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
        int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
    {
        binder_transaction_data tr;
    
        tr.target.handle = handle;
        tr.code = code;
        tr.flags = binderFlags;
        tr.cookie = 0;
        tr.sender_pid = 0;
        tr.sender_euid = 0;
        
        const status_t err = data.errorCheck();
        if (err == NO_ERROR) {
            tr.data_size = data.ipcDataSize();
            tr.data.ptr.buffer = data.ipcData();
            tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
            tr.data.ptr.offsets = data.ipcObjects();
        } else if (statusBuffer) {
            tr.flags |= TF_STATUS_CODE;
            *statusBuffer = err;
            tr.data_size = sizeof(status_t);
            tr.data.ptr.buffer = statusBuffer;
            tr.offsets_size = 0;
            tr.data.ptr.offsets = NULL;
        } else {
            return (mLastError = err);
        }
        
        mOut.writeInt32(cmd);
        mOut.write(&tr, sizeof(tr));
        
        return NO_ERROR;
    }
    

    The command and its payload are packed into a binder_transaction_data and written into mOut, the command buffer, which is itself another Parcel:
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    But all of this only writes into a Parcel, and a Parcel doesn't appear to have any connection to the /dev/binder device. So where does the kernel binder device actually come in? Let's look at waitForResponse first:

    status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
    {
        int32_t cmd;
        int32_t err;
    
        while (1) {
            if ((err=talkWithDriver()) < NO_ERROR) break;
            err = mIn.errorCheck();
            if (err < NO_ERROR) break;
            if (mIn.dataAvail() == 0) continue;
            
            cmd = mIn.readInt32();
            
            IF_LOG_COMMANDS() {
                alog << "Processing waitForResponse Command: "
                    << getReturnString(cmd) << endl;
            }
    
            switch (cmd) {
            case BR_TRANSACTION_COMPLETE:
                if (!reply && !acquireResult) goto finish;
                break;
            
            case BR_DEAD_REPLY:
                err = DEAD_OBJECT;
                goto finish;
    
            case BR_FAILED_REPLY:
                err = FAILED_TRANSACTION;
                goto finish;
            
            case BR_ACQUIRE_RESULT:
                {
                    LOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                    const int32_t result = mIn.readInt32();
                    if (!acquireResult) continue;
                    *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
                }
                goto finish;
            
            case BR_REPLY:
                {
                    binder_transaction_data tr;
                    err = mIn.read(&tr, sizeof(tr));
                    LOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                    if (err != NO_ERROR) goto finish;
    
                    if (reply) {
                        if ((tr.flags & TF_STATUS_CODE) == 0) {
                            reply->ipcSetDataReference(
                                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                                tr.offsets_size/sizeof(size_t),
                                freeBuffer, this);
                        } else {
                            err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                            freeBuffer(NULL,
                                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                                tr.offsets_size/sizeof(size_t), this);
                        }
                    } else {
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                        continue;
                    }
                }
                goto finish;
    
            default:
                err = executeCommand(cmd);
                if (err != NO_ERROR) goto finish;
                break;
            }
        }
    
    finish:
        if (err != NO_ERROR) {
            if (acquireResult) *acquireResult = err;
            if (reply) reply->setError(err);
            mLastError = err;
        }
        
        return err;
    }
    

    See that? Now mIn is being used, so talkWithDriver must be what sends mOut out and reads the driver's data back into mIn. All of the real interaction with the binder device happens inside talkWithDriver; I won't study it in detail here (a trimmed sketch of its core follows for the curious). And with that, the addService flow is complete: BpServiceManager sent an addService command to BnServiceManager and received a reply.
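    Here is a heavily trimmed sketch of the core of talkWithDriver, written as a free function so it stands alone. Error handling, the doReceive flag and the buffer bookkeeping of the real IPCThreadState member are all omitted, and the header that provides binder_write_read may differ in your tree; treat it as an approximation of the real function, not the verbatim source:

    #include <errno.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <binder/Parcel.h>
    #include <private/binder/binder_module.h>   // binder_write_read, BINDER_WRITE_READ (path may vary)

    using namespace android;

    // Whatever was queued in 'out' (e.g. BC_TRANSACTION) is handed to the
    // driver, and whatever the driver sends back (e.g. BR_REPLY) lands in 'in'.
    status_t talkWithDriverSketch(int driverFD, Parcel& out, Parcel& in)
    {
        binder_write_read bwr;
        bwr.write_size     = out.dataSize();
        bwr.write_buffer   = (uintptr_t)out.data();
        bwr.write_consumed = 0;
        bwr.read_size      = in.dataCapacity();
        bwr.read_buffer    = (uintptr_t)in.data();
        bwr.read_consumed  = 0;

        // the one blocking syscall that actually crosses into the binder driver
        if (ioctl(driverFD, BINDER_WRITE_READ, &bwr) < 0)
            return -errno;

        out.setDataSize(0);                   // our commands were consumed
        in.setDataSize(bwr.read_consumed);    // expose the driver's replies
        in.setDataPosition(0);                // waitForResponse reads from the start
        return NO_ERROR;
    }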

    Step 4:

    A new milestone begins, but first let's recap what the three steps above accomplished. Step 1: construct ProcessState and open /dev/binder. Step 2: construct a BpBinder and, on top of it, a BpServiceManager. Step 3: construct BnMediaPlayerService and add it to ServiceManager. By now a question should be nagging you: we have only seen BpBinder, BpServiceManager and BnMediaPlayerService. Where are their counterparts, BBinder, BnServiceManager and BpMediaPlayerService? It doesn't seem right that they never show up. In fact they have been busy all along; they are simply unsung heroes we haven't gone looking for yet. Let's track them down one by one.

    Start with BnServiceManager. As said above, defaultServiceManager returns a BpServiceManager through which command requests can be sent to the binder device, using handle value 0. So on the other side of the system something must be receiving those commands. Who? Unfortunately a BnServiceManager class does not exist, but there is a program that performs BnServiceManager's job: the servicemanager executable (think of it as "service.exe" if that helps mark it as a standalone program; far too many things are simply called "service").
    It lives in frameworks/base/cmds/servicemanager/service_manager.c.

    int main(int argc, char **argv)
    {
        struct binder_state *bs;
        void *svcmgr = BINDER_SERVICE_MANAGER;
    
        bs = binder_open(128*1024);
    
        if (binder_become_context_manager(bs)) {
            LOGE("cannot become context manager (%s)\n", strerror(errno));
            return -1;
        }
    
        svcmgr_handle = svcmgr;
        binder_loop(bs, svcmgr_handler);
        return 0;
    }
    
    struct binder_state *binder_open(unsigned mapsize)
    {
        struct binder_state *bs;
    
        bs = malloc(sizeof(*bs));
        if (!bs) {
            errno = ENOMEM;
            return 0;
        }
    
        bs->fd = open("/dev/binder", O_RDWR);
        if (bs->fd < 0) {
            fprintf(stderr,"binder: cannot open device (%s)\n",
                    strerror(errno));
            goto fail_open;
        }
    
        bs->mapsize = mapsize;
        bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
        if (bs->mapped == MAP_FAILED) {
            fprintf(stderr,"binder: cannot map device (%s)\n",
                    strerror(errno));
            goto fail_map;
        }
    
            /* TODO: check version */
    
        return bs;
    
    fail_map:
        close(bs->fd);
    fail_open:
        free(bs);
        return 0;
    }
    

    The first hero we have been hunting for appears: binder_open opens the binder device with open(), and likewise mmaps a block of virtual address space for later use.

    int binder_become_context_manager(struct binder_state *bs)
    {
        return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
    }

    This call tells the driver that this process is to be the context manager, i.e. the ServiceManager behind handle 0.

    void binder_loop(struct binder_state *bs, binder_handler func)
    {
        int res;
        struct binder_write_read bwr;
        unsigned readbuf[32];
    
        bwr.write_size = 0;
        bwr.write_consumed = 0;
        bwr.write_buffer = 0;
        
        readbuf[0] = BC_ENTER_LOOPER;
        binder_write(bs, readbuf, sizeof(unsigned));
    
        for (;;) {
            bwr.read_size = sizeof(readbuf);
            bwr.read_consumed = 0;
            bwr.read_buffer = (unsigned) readbuf;
    
            res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    
            if (res < 0) {
                LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
                break;
            }
    
            res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
            if (res == 0) {
                LOGE("binder_loop: unexpected reply?!\n");
                break;
            }
            if (res < 0) {
                LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
                break;
            }
        }
    }
    

    Inside the loop, ioctl(bs->fd, BINDER_WRITE_READ, &bwr) fetches requests from the binder driver, and binder_parse(bs, 0, readbuf, bwr.read_consumed, func) parses them:

    int binder_parse(struct binder_state *bs, struct binder_io *bio,
                     uint32_t *ptr, uint32_t size, binder_handler func)
    {
        int r = 1;
        uint32_t *end = ptr + (size / 4);
    
        while (ptr < end) {
            uint32_t cmd = *ptr++;
    #if TRACE
            fprintf(stderr,"%s:\n", cmd_name(cmd));
    #endif
            switch(cmd) {
            case BR_NOOP:
                break;
            case BR_TRANSACTION_COMPLETE:
                break;
            case BR_INCREFS:
            case BR_ACQUIRE:
            case BR_RELEASE:
            case BR_DECREFS:
    #if TRACE
                fprintf(stderr,"  %08x %08x\n", ptr[0], ptr[1]);
    #endif
                ptr += 2;
                break;
            case BR_TRANSACTION: {
                struct binder_txn *txn = (void *) ptr;
                if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
                    LOGE("parse: txn too small!\n");
                    return -1;
                }
                binder_dump_txn(txn);
                if (func) {
                    unsigned rdata[256/4];
                    struct binder_io msg;
                    struct binder_io reply;
                    int res;
    
                    bio_init(&reply, rdata, sizeof(rdata), 4);
                    bio_init_from_txn(&msg, txn);
                    res = func(bs, txn, &msg, &reply);
                    binder_send_reply(bs, &reply, txn->data, res);
                }
                ptr += sizeof(*txn) / sizeof(uint32_t);
                break;
            }
            case BR_REPLY: {
                struct binder_txn *txn = (void*) ptr;
                if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
                    LOGE("parse: reply too small!\n");
                    return -1;
                }
                binder_dump_txn(txn);
                if (bio) {
                    bio_init_from_txn(bio, txn);
                    bio = 0;
                } else {
                        /* todo FREE BUFFER */
                }
                ptr += (sizeof(*txn) / sizeof(uint32_t));
                r = 0;
                break;
            }
            case BR_DEAD_BINDER: {
                struct binder_death *death = (void*) *ptr++;
                death->func(bs, death->ptr);
                break;
            }
            case BR_FAILED_REPLY:
                r = -1;
                break;
            case BR_DEAD_REPLY:
                r = -1;
                break;
            default:
                LOGE("parse: OOPS %d\n", cmd);
                return -1;
            }
        }
    
        return r;
    }
    

    binder_parse unpacks the data and ultimately hands each transaction to func, which here is the svcmgr_handler passed in at the start:

    int svcmgr_handler(struct binder_state *bs,
                       struct binder_txn *txn,
                       struct binder_io *msg,
                       struct binder_io *reply)
    {
        struct svcinfo *si;
        uint16_t *s;
        unsigned len;
        void *ptr;
        uint32_t strict_policy;
    
    //    LOGI("target=%p code=%d pid=%d uid=%d\n",
    //         txn->target, txn->code, txn->sender_pid, txn->sender_euid);
    
        if (txn->target != svcmgr_handle)
            return -1;
    
        // Equivalent to Parcel::enforceInterface(), reading the RPC
        // header with the strict mode policy mask and the interface name.
        // Note that we ignore the strict_policy and don't propagate it
        // further (since we do no outbound RPCs anyway).
        strict_policy = bio_get_uint32(msg);
        s = bio_get_string16(msg, &len);
        if ((len != (sizeof(svcmgr_id) / 2)) ||
            memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
            fprintf(stderr,"invalid id %s\n", str8(s));
            return -1;
        }
    
        switch(txn->code) {
        case SVC_MGR_GET_SERVICE:
        case SVC_MGR_CHECK_SERVICE:
            s = bio_get_string16(msg, &len);
            ptr = do_find_service(bs, s, len);
            if (!ptr)
                break;
            bio_put_ref(reply, ptr);
            return 0;
    
        case SVC_MGR_ADD_SERVICE:
            s = bio_get_string16(msg, &len);
            ptr = bio_get_ref(msg);
            if (do_add_service(bs, s, len, ptr, txn->sender_euid))
                return -1;
            break;
    
        case SVC_MGR_LIST_SERVICES: {
            unsigned n = bio_get_uint32(msg);
    
            si = svclist;
            while ((n-- > 0) && si)
                si = si->next;
            if (si) {
                bio_put_string16(reply, si->name);
                return 0;
            }
            return -1;
        }
        default:
            LOGE("unknown code %d\n", txn->code);
            return -1;
        }
    
        bio_put_uint32(reply, 0);
        return 0;
    }
    

    A word about do_add_service. When BpServiceManager adds a service, this code, which we can think of as playing the role of "BnServiceManager", parses out the SVC_MGR_ADD_SERVICE command and calls do_add_service to append the service to svclist, the list of every service currently registered with ServiceManager (a simplified sketch of that list follows). This also answers why ServiceManager exists at all: every service's information is first added to ServiceManager, which manages it centrally, so the system can always be asked which services exist. Furthermore, when a client of some service, say MediaPlayerService, wants to talk to it, it must first query ServiceManager for MediaPlayerService's information and use what ServiceManager returns to interact with the service. After all, if MediaPlayerService were sickly and kept dying, client code would otherwise have no way to learn about the newly reborn MediaPlayerService. So the flow has to be: MediaPlayerService registers with SM, MediaPlayerClient queries SM for the currently registered MediaPlayerService, and with that information MediaPlayerClient talks to MediaPlayerService. Also remember that ServiceManager's handle is 0, so any message sent to handle 0 eventually ends up delivered to ServiceManager.
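    To make svclist concrete, here is a simplified sketch of the registry service_manager keeps. The real svcinfo in service_manager.c also carries a binder_death record and uses a flexible array for the name; the field layout and the helper below are approximations from memory, not the verbatim source:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct svcinfo {
        struct svcinfo *next;
        void *ptr;            /* binder reference (handle) for the service */
        unsigned len;
        uint16_t name[64];    /* UTF-16 service name, e.g. "media.player" */
    };

    static struct svcinfo *svclist = 0;

    /* essence of do_add_service(): find the name, otherwise allocate a node,
     * remember the handle, and push it onto the global list */
    static int add_service_sketch(uint16_t *s, unsigned len, void *ptr)
    {
        struct svcinfo *si;
        for (si = svclist; si; si = si->next)
            if (si->len == len && !memcmp(si->name, s, len * sizeof(uint16_t)))
                break;
        if (si) {
            si->ptr = ptr;                    /* re-registration: update the handle */
        } else {
            si = (struct svcinfo *)calloc(1, sizeof(*si));
            memcpy(si->name, s, len * sizeof(uint16_t));
            si->len = len;
            si->ptr = ptr;
            si->next = svclist;               /* push onto svclist */
            svclist = si;
        }
        return 0;
    }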


    Step 5:

    From the previous section we know the following: defaultServiceManager gives us a BpServiceManager; once MediaPlayerService is instantiated, BpServiceManager's addService is called; and during that exchange it is service_manager that receives the addService request and records the information in the service list it keeps. We have also seen that service_manager has a binder_loop function that does nothing but wait for requests coming in from binder. service_manager does not derive from BnServiceManager, yet it clearly performs BnServiceManager's duties.
    By the same logic, the MediaPlayerService we created, i.e. BnMediaPlayerService, should also:
    open the binder device, and
    run some kind of looper and then sit waiting for requests.
    MediaPlayerService's constructor shows no sign of explicitly opening the binder device, so let's see what its parent class, the BnXXX, actually does:

    class BnMediaPlayerService: public BnInterface<IMediaPlayerService>
    {
    public:
        virtual status_t    onTransact( uint32_t code,
                                        const Parcel& data,
                                        Parcel* reply,
                                        uint32_t flags = 0);
    };
    

    BnInterface looks like it might be the one related to opening the device, so let's keep climbing the class hierarchy:

    template<typename INTERFACE>
    class BnInterface : public INTERFACE, public BBinder
    {
    public:
        virtual sp<IInterface>      queryLocalInterface(const String16& _descriptor);
        virtual const String16&     getInterfaceDescriptor() const;
    
    protected:
        virtual IBinder*            onAsBinder();
    };

    Substitute the template parameter and the result is simply that INTERFACE changes:

    class BnInterface : public IMediaPlayerService, public BBinder
    {
    public:
        virtual sp<IInterface>      queryLocalInterface(const String16& _descriptor);
        virtual const String16&     getInterfaceDescriptor() const;
    
    protected:
        virtual IBinder*            onAsBinder();
    };

    Now a BBinder pops out. Sigh. But BBinder's constructor doesn't touch the binder device either. Of course not: opening the binder device is a per-process affair, one open per process is enough, and that is precisely what happened in step 1 at the very beginning. We searched high and low for something that was there all along. So where does the looper-style message loop happen? Go back to the remaining two lines of main() in Main_mediaserver.cpp; that is where the answer lies.
    ProcessState::self()->startThreadPool();

    IPCThreadState::self()->joinThreadPool();

    void ProcessState::startThreadPool()
    {
        AutoMutex _l(mLock);
        if (!mThreadPoolStarted) {
            mThreadPoolStarted = true;
            spawnPooledThread(true);
        }
    }

    Naturally, mThreadPoolStarted was initialized to false in the constructor.

    void ProcessState::spawnPooledThread(bool isMain)
    {
        if (mThreadPoolStarted) {
            int32_t s = android_atomic_add(1, &mThreadPoolSeq);
            char buf[32];
            sprintf(buf, "Binder Thread #%d", s);
            LOGV("Spawning new pooled thread, name=%s\n", buf);
            sp<Thread> t = new PoolThread(isMain);
            t->run(buf);
        }
    }

    This spawns a pooled thread and runs it. What does it do once it is running?

    virtual bool threadLoop()
        {
            IPCThreadState::self()->joinThreadPool(mIsMain);
            return false;
        }
    void IPCThreadState::joinThreadPool(bool isMain)
    {
        LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());
    
        mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
        
        // This thread may have been spawned by a thread that was in the background
        // scheduling group, so first we will make sure it is in the default/foreground
        // one to avoid performing an initial transaction in the background.
        androidSetThreadSchedulingGroup(mMyThreadId, ANDROID_TGROUP_DEFAULT);
            
        status_t result;
        do {
            int32_t cmd;
            
            // When we've cleared the incoming command queue, process any pending derefs
            if (mIn.dataPosition() >= mIn.dataSize()) {
                size_t numPending = mPendingWeakDerefs.size();
                if (numPending > 0) {
                    for (size_t i = 0; i < numPending; i++) {
                        RefBase::weakref_type* refs = mPendingWeakDerefs[i];
                        refs->decWeak(mProcess.get());
                    }
                    mPendingWeakDerefs.clear();
                }
    
                numPending = mPendingStrongDerefs.size();
                if (numPending > 0) {
                    for (size_t i = 0; i < numPending; i++) {
                        BBinder* obj = mPendingStrongDerefs[i];
                        obj->decStrong(mProcess.get());
                    }
                    mPendingStrongDerefs.clear();
                }
            }
    
            // now get the next command to be processed, waiting if necessary
            result = talkWithDriver();
            if (result >= NO_ERROR) {
                size_t IN = mIn.dataAvail();
                if (IN < sizeof(int32_t)) continue;
                cmd = mIn.readInt32();
                IF_LOG_COMMANDS() {
                    alog << "Processing top-level Command: "
                        << getReturnString(cmd) << endl;
                }
    
    
                result = executeCommand(cmd);
            }
            
            // After executing the command, ensure that the thread is returned to the
            // default cgroup before rejoining the pool.  The driver takes care of
            // restoring the priority, but doesn't do anything with cgroups so we
            // need to take care of that here in userspace.  Note that we do make
            // sure to go in the foreground after executing a transaction, but
            // there are other callbacks into user code that could have changed
            // our group so we want to make absolutely sure it is put back.
            androidSetThreadSchedulingGroup(mMyThreadId, ANDROID_TGROUP_DEFAULT);
    
            // Let this thread exit the thread pool if it is no longer
            // needed and it is not the main process thread.
            if(result == TIMED_OUT && !isMain) {
                break;
            }
        } while (result != -ECONNREFUSED && result != -EBADF);
    
        LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
            (void*)pthread_self(), getpid(), (void*)result);
        
        mOut.writeInt32(BC_EXIT_LOOPER);
        talkWithDriver(false);
    }
    

    The actual handling of each command happens in executeCommand:

    status_t IPCThreadState::executeCommand(int32_t cmd)
    {
        BBinder* obj;
        RefBase::weakref_type* refs;
        status_t result = NO_ERROR;
        
        switch (cmd) {
        case BR_ERROR:
            result = mIn.readInt32();
            break;
            
        case BR_OK:
            break;
            
        case BR_ACQUIRE:
            refs = (RefBase::weakref_type*)mIn.readInt32();
            obj = (BBinder*)mIn.readInt32();
            LOG_ASSERT(refs->refBase() == obj,
                       "BR_ACQUIRE: object %p does not match cookie %p (expected %p)",
                       refs, obj, refs->refBase());
            obj->incStrong(mProcess.get());
            IF_LOG_REMOTEREFS() {
                LOG_REMOTEREFS("BR_ACQUIRE from driver on %p", obj);
                obj->printRefs();
            }
            mOut.writeInt32(BC_ACQUIRE_DONE);
            mOut.writeInt32((int32_t)refs);
            mOut.writeInt32((int32_t)obj);
            break;
            
        case BR_RELEASE:
            refs = (RefBase::weakref_type*)mIn.readInt32();
            obj = (BBinder*)mIn.readInt32();
            LOG_ASSERT(refs->refBase() == obj,
                       "BR_RELEASE: object %p does not match cookie %p (expected %p)",
                       refs, obj, refs->refBase());
            IF_LOG_REMOTEREFS() {
                LOG_REMOTEREFS("BR_RELEASE from driver on %p", obj);
                obj->printRefs();
            }
            mPendingStrongDerefs.push(obj);
            break;
            
        case BR_INCREFS:
            refs = (RefBase::weakref_type*)mIn.readInt32();
            obj = (BBinder*)mIn.readInt32();
            refs->incWeak(mProcess.get());
            mOut.writeInt32(BC_INCREFS_DONE);
            mOut.writeInt32((int32_t)refs);
            mOut.writeInt32((int32_t)obj);
            break;
            
        case BR_DECREFS:
            refs = (RefBase::weakref_type*)mIn.readInt32();
            obj = (BBinder*)mIn.readInt32();
            // NOTE: This assertion is not valid, because the object may no
            // longer exist (thus the (BBinder*)cast above resulting in a different
            // memory address).
            //LOG_ASSERT(refs->refBase() == obj,
            //           "BR_DECREFS: object %p does not match cookie %p (expected %p)",
            //           refs, obj, refs->refBase());
            mPendingWeakDerefs.push(refs);
            break;
            
        case BR_ATTEMPT_ACQUIRE:
            refs = (RefBase::weakref_type*)mIn.readInt32();
            obj = (BBinder*)mIn.readInt32();
             
            {
                const bool success = refs->attemptIncStrong(mProcess.get());
                LOG_ASSERT(success && refs->refBase() == obj,
                           "BR_ATTEMPT_ACQUIRE: object %p does not match cookie %p (expected %p)",
                           refs, obj, refs->refBase());
                
                mOut.writeInt32(BC_ACQUIRE_RESULT);
                mOut.writeInt32((int32_t)success);
            }
            break;
        
        case BR_TRANSACTION:
            {
                binder_transaction_data tr;
                result = mIn.read(&tr, sizeof(tr));
                LOG_ASSERT(result == NO_ERROR,
                    "Not enough command data for brTRANSACTION");
                if (result != NO_ERROR) break;
                
                Parcel buffer;
                buffer.ipcSetDataReference(
                    reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                    tr.data_size,
                    reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                    tr.offsets_size/sizeof(size_t), freeBuffer, this);
                
                const pid_t origPid = mCallingPid;
                const uid_t origUid = mCallingUid;
                
                mCallingPid = tr.sender_pid;
                mCallingUid = tr.sender_euid;
                
                int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
                if (gDisableBackgroundScheduling) {
                    if (curPrio > ANDROID_PRIORITY_NORMAL) {
                        // We have inherited a reduced priority from the caller, but do not
                        // want to run in that state in this process.  The driver set our
                        // priority already (though not our scheduling class), so bounce
                        // it back to the default before invoking the transaction.
                        setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                    }
                } else {
                    if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                        // We want to use the inherited priority from the caller.
                        // Ensure this thread is in the background scheduling class,
                        // since the driver won't modify scheduling classes for us.
                        // The scheduling group is reset to default by the caller
                        // once this method returns after the transaction is complete.
                        androidSetThreadSchedulingGroup(mMyThreadId,
                                                        ANDROID_TGROUP_BG_NONINTERACT);
                    }
                }
    
                //LOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);
                
                Parcel reply;
                IF_LOG_TRANSACTIONS() {
                    TextOutput::Bundle _b(alog);
                    alog << "BR_TRANSACTION thr " << (void*)pthread_self()
                        << " / obj " << tr.target.ptr << " / code "
                        << TypeCode(tr.code) << ": " << indent << buffer
                        << dedent << endl
                        << "Data addr = "
                        << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
                        << ", offsets addr="
                        << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
                }
                if (tr.target.ptr) {
                    sp<BBinder> b((BBinder*)tr.cookie);
                    const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
                    if (error < NO_ERROR) reply.setError(error);
    
                } else {
                    const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
                    if (error < NO_ERROR) reply.setError(error);
                }
                
                //LOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
                //     mCallingPid, origPid, origUid);
                
                if ((tr.flags & TF_ONE_WAY) == 0) {
                    LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                    sendReply(reply, 0);
                } else {
                    LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
                }
                
                mCallingPid = origPid;
                mCallingUid = origUid;
    
                IF_LOG_TRANSACTIONS() {
                    TextOutput::Bundle _b(alog);
                    alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                        << tr.target.ptr << ": " << indent << reply << dedent << endl;
                }
                
            }
            break;
        
        case BR_DEAD_BINDER:
            {
                BpBinder *proxy = (BpBinder*)mIn.readInt32();
                proxy->sendObituary();
                mOut.writeInt32(BC_DEAD_BINDER_DONE);
                mOut.writeInt32((int32_t)proxy);
            } break;
            
        case BR_CLEAR_DEATH_NOTIFICATION_DONE:
            {
                BpBinder *proxy = (BpBinder*)mIn.readInt32();
                proxy->getWeakRefs()->decWeak(proxy);
            } break;
            
        case BR_FINISHED:
            result = TIMED_OUT;
            break;
            
        case BR_NOOP:
            break;
            
        case BR_SPAWN_LOOPER:
            mProcess->spawnPooledThread(false);
            break;
            
        default:
            printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
            result = UNKNOWN_ERROR;
            break;
        }
    
        if (result != NO_ERROR) {
            mLastError = result;
        }
        
        return result;
    }
    

    And with this, the picture is complete: BnXXX's onTransact function receives the command and dispatches it to the derived class's functions, which do the actual work (a hypothetical example follows).
    A note:
    There is one slightly odd thing here: after startThreadPool and joinThreadPool there really are two threads, the main thread and a pool thread, and both run the message loop, both with isMain set to true. Why does Google do this? Perhaps simply so that one thread is not overwhelmed by the workload, which seems a reasonable explanation. People online have tested commenting out the last line and things still work, but then wouldn't the process exit once the main thread leaves the loop? Anyway, it is enough to know that two threads sit there handling requests.
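    To make that last sentence concrete, here is a hypothetical BnXXX showing the usual shape of an onTransact. IFooService and its transaction code are made up; the real BnMediaPlayerService::onTransact in IMediaPlayerService.cpp follows the same pattern with its own codes:

    #include <binder/Parcel.h>
    #include <binder/IInterface.h>

    using namespace android;

    // Made-up interface, for illustration only.
    class IFooService : public IInterface {
    public:
        DECLARE_META_INTERFACE(FooService);
        enum { DO_SOMETHING_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION };
        virtual int32_t doSomething(int32_t arg) = 0;
    };

    class BnFooService : public BnInterface<IFooService> {
    public:
        virtual status_t onTransact(uint32_t code, const Parcel& data,
                                    Parcel* reply, uint32_t flags = 0)
        {
            switch (code) {
            case DO_SOMETHING_TRANSACTION: {
                CHECK_INTERFACE(IFooService, data, reply);  // pairs with writeInterfaceToken()
                int32_t arg = data.readInt32();             // unmarshal the argument
                reply->writeInt32(doSomething(arg));        // let the derived class do the work
                return NO_ERROR;
            }
            default:
                // unknown codes fall back to BBinder's default handling
                return BBinder::onTransact(code, data, reply, flags);
            }
        }
    };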

    To be continued...
