Before we continue analyzing this function, two more data structures need to be explained. The first is struct binder_thread, which, as the name suggests, represents a thread; here it is the thread that executes the binder_become_context_manager function.
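As a reminder, the user-space side of that call is a one-liner in frameworks/base/cmds/servicemanager/binder.c; it simply issues the BINDER_SET_CONTEXT_MGR ioctl whose handling in the driver we are tracing:
int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
With that in mind, here is struct binder_thread: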
struct binder_thread {
	struct binder_proc *proc;
	struct rb_node rb_node;
	int pid;
	int looper;
	struct binder_transaction *transaction_stack;
	struct list_head todo;
	uint32_t return_error; /* Write failed, return error code in read buf */
	uint32_t return_error2; /* Write failed, return error code in read */
		/* buffer. Used when sending a reply to a dead process that */
		/* we are also waiting on */
	wait_queue_head_t wait;
	struct binder_stats stats;
};
proc is the process this thread belongs to. struct binder_proc has a member variable threads of type rb_root, the root of a red-black tree that organizes all the threads belonging to the process; the rb_node member of struct binder_thread is the node used to link the thread into that tree. The looper member represents the thread's state and can take the following values:
enum {
	BINDER_LOOPER_STATE_REGISTERED  = 0x01,
	BINDER_LOOPER_STATE_ENTERED     = 0x02,
	BINDER_LOOPER_STATE_EXITED      = 0x04,
	BINDER_LOOPER_STATE_INVALID     = 0x08,
	BINDER_LOOPER_STATE_WAITING     = 0x10,
	BINDER_LOOPER_STATE_NEED_RETURN = 0x20
};
As for the remaining members: transaction_stack is the transaction the thread is currently processing, todo is the list of work items sent to this thread, return_error and return_error2 hold result codes to be returned to user space, wait is used to block the thread while it waits for an event, and stats keeps some statistics. We will analyze what these members do as we encounter them.
The other data structure is struct binder_node, which represents a Binder entity:
struct binder_node {
	int debug_id;
	struct binder_work work;
	union {
		struct rb_node rb_node;
		struct hlist_node dead_node;
	};
	struct binder_proc *proc;
	struct hlist_head refs;
	int internal_strong_refs;
	int local_weak_refs;
	int local_strong_refs;
	void __user *ptr;
	void __user *cookie;
	unsigned has_strong_ref : 1;
	unsigned pending_strong_ref : 1;
	unsigned has_weak_ref : 1;
	unsigned pending_weak_ref : 1;
	unsigned has_async_transaction : 1;
	unsigned accept_fds : 1;
	int min_priority : 8;
	struct list_head async_todo;
};
rb_node and dead_node form a union. While the Binder entity is in normal use, rb_node links it into the red-black tree rooted at proc->nodes, which organizes all the Binder entities belonging to the process. If the process that owns the entity has been destroyed while the entity is still referenced by other processes, the entity is instead kept in a hash list via dead_node. The proc member is the process this Binder entity belongs to. The refs member chains together all the Binder references that refer to this entity. internal_strong_refs, local_weak_refs and local_strong_refs are the entity's reference counts. ptr and cookie hold the entity's address in user space and its associated extra data, respectively. The remaining members will not be described here; we will analyze them when we encounter them.
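For orientation, here is how the red-black trees mentioned so far hang off struct binder_proc. This is an excerpt of the structure as defined in this version of the driver; the comments are added here for illustration:
struct binder_proc {
	struct hlist_node proc_node;
	struct rb_root threads;		/* binder_thread nodes, keyed by pid */
	struct rb_root nodes;		/* binder_node nodes, keyed by ptr */
	struct rb_root refs_by_desc;	/* binder_ref nodes, keyed by handle */
	struct rb_root refs_by_node;	/* binder_ref nodes, keyed by node */
	......
};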
Now let us return to the binder_ioctl function. First, the proc variable is obtained from filp->private_data, the same as in the binder_mmap function. Then the thread information is obtained through the binder_get_thread function. Let's take a look at this function:
static struct binder_thread *binder_get_thread(struct binder_proc *proc)
{
	struct binder_thread *thread = NULL;
	struct rb_node *parent = NULL;
	struct rb_node **p = &proc->threads.rb_node;

	while (*p) {
		parent = *p;
		thread = rb_entry(parent, struct binder_thread, rb_node);

		if (current->pid < thread->pid)
			p = &(*p)->rb_left;
		else if (current->pid > thread->pid)
			p = &(*p)->rb_right;
		else
			break;
	}
	if (*p == NULL) {
		thread = kzalloc(sizeof(*thread), GFP_KERNEL);
		if (thread == NULL)
			return NULL;
		binder_stats.obj_created[BINDER_STAT_THREAD]++;
		thread->proc = proc;
		thread->pid = current->pid;
		init_waitqueue_head(&thread->wait);
		INIT_LIST_HEAD(&thread->todo);
		rb_link_node(&thread->rb_node, parent, p);
		rb_insert_color(&thread->rb_node, &proc->threads);
		thread->looper |= BINDER_LOOPER_STATE_NEED_RETURN;
		thread->return_error = BR_OK;
		thread->return_error2 = BR_OK;
	}
	return thread;
}
Here the pid of the current thread, current, is used as the key to search the red-black tree proc->threads, to see whether binder_thread information has already been created for the current thread. In this scenario, since the current thread is entering here for the first time, the lookup is bound to fail, that is, *p == NULL holds. So a thread context structure binder_thread is created for the current thread, its members are initialized, and it is inserted into the red-black tree proc->threads, so that next time it can be found directly from proc. Note that at this point thread->looper == BINDER_LOOPER_STATE_NEED_RETURN.
Back in binder_ioctl, continuing downward, there are two global variables, binder_context_mgr_node and binder_context_mgr_uid, defined as follows:
static struct binder_node *binder_context_mgr_node;
static uid_t binder_context_mgr_uid = -1;
binder_context_mgr_node represents the Service Manager entity, and binder_context_mgr_uid is the uid of the Service Manager daemon. In this scenario, since this is the first time we get here, binder_context_mgr_node is NULL and binder_context_mgr_uid is -1, so binder_context_mgr_uid is initialized to current->cred->euid. With that, the current thread becomes the daemon of the Binder mechanism, and a Binder entity is created for the Service Manager through binder_new_node:
static struct binder_node *
binder_new_node(struct binder_proc *proc, void __user *ptr, void __user *cookie)
{
	struct rb_node **p = &proc->nodes.rb_node;
	struct rb_node *parent = NULL;
	struct binder_node *node;

	while (*p) {
		parent = *p;
		node = rb_entry(parent, struct binder_node, rb_node);

		if (ptr < node->ptr)
			p = &(*p)->rb_left;
		else if (ptr > node->ptr)
			p = &(*p)->rb_right;
		else
			return NULL;
	}

	node = kzalloc(sizeof(*node), GFP_KERNEL);
	if (node == NULL)
		return NULL;
	binder_stats.obj_created[BINDER_STAT_NODE]++;
	rb_link_node(&node->rb_node, parent, p);
	rb_insert_color(&node->rb_node, &proc->nodes);
	node->debug_id = ++binder_last_id;
	node->proc = proc;
	node->ptr = ptr;
	node->cookie = cookie;
	node->work.type = BINDER_WORK_NODE;
	INIT_LIST_HEAD(&node->work.entry);
	INIT_LIST_HEAD(&node->async_todo);
	if (binder_debug_mask & BINDER_DEBUG_INTERNAL_REFS)
		printk(KERN_INFO "binder: %d:%d node %d u%p c%p created\n",
		       proc->pid, current->pid, node->debug_id,
		       node->ptr, node->cookie);
	return node;
}
Note that the ptr and cookie passed in here are both NULL. The function first checks whether a node keyed by this ptr already exists in the proc->nodes red-black tree; if it does, NULL is returned. In this scenario, since the current thread enters here for the first time, no such node can exist, so a new binder_node with ptr equal to NULL is created, its other members are initialized, and it is inserted into the proc->nodes red-black tree.
After binder_new_node returns to binder_ioctl, the pointer to the newly created binder_node is saved in binder_context_mgr_node, and immediately afterwards the reference counts of binder_context_mgr_node are initialized.
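For reference, the BINDER_SET_CONTEXT_MGR branch of binder_ioctl that performs these steps looks roughly like this in this version of the driver (abridged, with the error printks shortened; consult kernel/common/drivers/staging/android/binder.c for the exact code):
	case BINDER_SET_CONTEXT_MGR:
		if (binder_context_mgr_node != NULL) {
			printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n");
			ret = -EBUSY;
			goto err;
		}
		if (binder_context_mgr_uid != -1) {
			if (binder_context_mgr_uid != current->cred->euid) {
				ret = -EPERM;
				goto err;
			}
		} else
			binder_context_mgr_uid = current->cred->euid;
		binder_context_mgr_node = binder_new_node(proc, NULL, NULL);
		if (binder_context_mgr_node == NULL) {
			ret = -ENOMEM;
			goto err;
		}
		/* pin the node: these counts keep the Service Manager entity alive */
		binder_context_mgr_node->local_weak_refs++;
		binder_context_mgr_node->local_strong_refs++;
		binder_context_mgr_node->has_strong_ref = 1;
		binder_context_mgr_node->has_weak_ref = 1;
		break;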
With that, the BINDER_SET_CONTEXT_MGR command is finished. Before binder_ioctl returns, it executes the following statement:
if (thread)
	thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
Recall that when binder_get_thread executed above, thread->looper was set to BINDER_LOOPER_STATE_NEED_RETURN; after this statement executes, thread->looper == 0.
Back in the main function in frameworks/base/cmds/servicemanager/service_manager.c, the next step is to call the binder_loop function to enter a loop and wait for requests from Clients. binder_loop is defined in frameworks/base/cmds/servicemanager/binder.c:
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    unsigned readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
        if (res == 0) {
            LOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
First, the binder_write function is called to execute the BC_ENTER_LOOPER command, telling the Binder driver that the Service Manager is about to enter its loop.
Here we need to introduce another operation code of the device file /dev/binder's ioctl interface, BINDER_WRITE_READ. First its definition:
#define BINDER_WRITE_READ _IOWR('b', 1, struct binder_write_read)
This I/O control code takes one argument, of the form struct binder_write_read:
struct binder_write_read {
	signed long	write_size;	/* bytes to write */
	signed long	write_consumed;	/* bytes consumed by driver */
	unsigned long	write_buffer;
	signed long	read_size;	/* bytes to read */
	signed long	read_consumed;	/* bytes consumed by driver */
	unsigned long	read_buffer;
};
Incidentally, most of the interaction between user-space programs and the Binder driver goes through the BINDER_WRITE_READ command. The data that write_buffer and read_buffer point to also encodes the concrete operation to perform: each buffer starts with a command code, and for transaction-related commands the code is followed by a struct binder_transaction_data:
struct binder_transaction_data {
	/* The first two are only used for bcTRANSACTION and brTRANSACTION,
	 * identifying the target and contents of the transaction.
	 */
	union {
		size_t	handle;	/* target descriptor of command transaction */
		void	*ptr;	/* target descriptor of return transaction */
	} target;
	void		*cookie;	/* target object cookie */
	unsigned int	code;		/* transaction command */

	/* General information about the transaction. */
	unsigned int	flags;
	pid_t		sender_pid;
	uid_t		sender_euid;
	size_t		data_size;	/* number of bytes of data */
	size_t		offsets_size;	/* number of bytes of offsets */

	/* If this transaction is inline, the data immediately
	 * follows here; otherwise, it ends with a pointer to
	 * the data buffer.
	 */
	union {
		struct {
			/* transaction data */
			const void	*buffer;
			/* offsets from buffer to flat_binder_object structs */
			const void	*offsets;
		} ptr;
		uint8_t	buf[8];
	} data;
};
It has a union named target: when the target of the command is a local Binder entity, ptr holds the address of that object in the current process; otherwise handle holds a reference to the Binder entity. The cookie member is only meaningful when the target object is a Binder entity; it carries extra data that the Binder entity itself interprets. code is the command code requested of the target object; there are many request codes, which we will not enumerate here. In this scenario the command is BC_ENTER_LOOPER, which tells the Binder driver that the Service Manager is about to enter its loop (note that BC_ENTER_LOOPER is one of the bare commands that carries no binder_transaction_data payload). The other command codes can be found in the two enums BinderDriverReturnProtocol and BinderDriverCommandProtocol defined in the file kernel/common/drivers/staging/android/binder.h.
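As a small taste of those enums, the looper-related commands used in this chapter are declared roughly as follows (excerpted from BinderDriverCommandProtocol in binder.h):
enum BinderDriverCommandProtocol {
	BC_TRANSACTION = _IOW('c', 0, struct binder_transaction_data),
	BC_REPLY = _IOW('c', 1, struct binder_transaction_data),
	......
	BC_REGISTER_LOOPER = _IO('c', 11),
	/*
	 * No parameters.
	 * Register a spawned looper thread with the device.
	 */
	BC_ENTER_LOOPER = _IO('c', 12),
	BC_EXIT_LOOPER = _IO('c', 13),
	/*
	 * No parameters.
	 * These two commands are sent as an application-level thread
	 * enters and exits the binder loop, respectively.
	 */
	......
};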
The flags member holds the transaction flags:
enum transaction_flags {
	TF_ONE_WAY	= 0x01,	/* this is a one-way call: async, no return */
	TF_ROOT_OBJECT	= 0x04,	/* contents are the component's root object */
	TF_STATUS_CODE	= 0x08,	/* contents are a 32-bit status code */
	TF_ACCEPT_FDS	= 0x10,	/* allow replies with file descriptors */
};
The meaning of each flag bit can be read from the comments; we will analyze them concretely when we encounter them.
sender_pid and sender_euid are the pid and effective uid of the sending process.
data_size is the size of the data.buffer buffer, and offsets_size is the size of the data.offsets buffer. The data member deserves some explanation: the actual payload of the command is stored in the data.buffer buffer, while the preceding members merely describe that data. The contents of data.buffer fall into two categories: ordinary data, which the Binder driver does not care about, and Binder entities or Binder references, which the driver must handle itself. Why? Consider a process A passing a Binder entity or reference to a process B: the driver has to maintain the reference count of that entity or reference, so that A cannot destroy the entity while B is still using it, which would crash process B. So when transferred data contains Binder entities or references, the driver must be told exactly where they sit, so that it can maintain them. That is the job of data.offsets: it records the offsets, within the data.buffer buffer, of all the Binder entities or references. Each Binder entity or reference is represented by a struct flat_binder_object:
/*
 * This is the flattened representation of a Binder object for transfer
 * between processes.  The 'offsets' supplied as part of a binder transaction
 * contains offsets into the data where these structures occur.  The Binder
 * driver takes care of re-writing the structure type and data as it moves
 * between processes.
 */
struct flat_binder_object {
	/* 8 bytes for large_flat_header. */
	unsigned long		type;
	unsigned long		flags;

	/* 8 bytes of data. */
	union {
		void		*binder;	/* local object */
		signed long	handle;		/* remote object */
	};

	/* extra data associated with local object */
	void			*cookie;
};
type indicates the type of the Binder object; it takes the following values:
enum {
	BINDER_TYPE_BINDER	= B_PACK_CHARS('s', 'b', '*', B_TYPE_LARGE),
	BINDER_TYPE_WEAK_BINDER	= B_PACK_CHARS('w', 'b', '*', B_TYPE_LARGE),
	BINDER_TYPE_HANDLE	= B_PACK_CHARS('s', 'h', '*', B_TYPE_LARGE),
	BINDER_TYPE_WEAK_HANDLE	= B_PACK_CHARS('w', 'h', '*', B_TYPE_LARGE),
	BINDER_TYPE_FD		= B_PACK_CHARS('f', 'd', '*', B_TYPE_LARGE),
};
flags holds the Binder object's flags. This field is only meaningful the first time a Binder entity is passed, because at that moment the driver needs to create the corresponding entity node in the kernel, and some parameters are taken from this field.
For the precise meanings of type and flags, refer to the article Android Binder设计与实现 (Android Binder Design and Implementation).
Finally, binder is used when the object is a Binder entity and handle is used when it is a Binder reference. cookie is meaningful only for a Binder entity; it carries extra data that the owning process interprets itself.
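To make the buffer/offsets layout concrete, here is a minimal user-space sketch of a sender embedding one Binder entity in a transaction payload. Everything named demo_* and the request code are hypothetical, invented purely for illustration; only the structures and the BINDER_TYPE_BINDER constant come from binder.h:
#include <stddef.h>	/* offsetof */
#include <string.h>	/* memset */

/* Hypothetical payload: one word of ordinary data plus one Binder entity. */
struct demo_payload {
    int dummy;                        /* ordinary data: the driver ignores it */
    struct flat_binder_object obj;    /* Binder entity: the driver rewrites it */
};

/* Static so the pointers stored in txn stay valid after this returns. */
static struct demo_payload payload;
static size_t offsets[1];             /* one entry per flat_binder_object */

static void demo_fill_txn(struct binder_transaction_data *txn,
                          size_t target_handle, void *local_object)
{
    payload.dummy      = 42;
    payload.obj.type   = BINDER_TYPE_BINDER;   /* a local Binder entity */
    payload.obj.flags  = 0;
    payload.obj.binder = local_object;         /* its address in this process */
    payload.obj.cookie = 0;

    /* data.offsets tells the driver where the flat_binder_object sits. */
    offsets[0] = offsetof(struct demo_payload, obj);

    memset(txn, 0, sizeof(*txn));
    txn->target.handle    = target_handle;     /* a Binder reference */
    txn->code             = 0;                 /* hypothetical request code */
    txn->data_size        = sizeof(payload);
    txn->offsets_size     = sizeof(offsets);
    txn->data.ptr.buffer  = &payload;
    txn->data.ptr.offsets = offsets;
}
When the driver later copies such a transaction to the receiver, it walks the offsets array, finds each flat_binder_object, and rewrites it (for example from BINDER_TYPE_BINDER into BINDER_TYPE_HANDLE) while maintaining the reference counts, exactly as described above.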
With the data structures covered, we return to the binder_loop function. The first thing it does is execute the BC_ENTER_LOOPER command:
readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(unsigned));
This takes us into the binder_write function:
int binder_write(struct binder_state *bs, void *data, unsigned len)
{
    struct binder_write_read bwr;
    int res;
    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (unsigned) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}
Note the binder_write_read variable bwr here: write_size is 4, i.e. the write_buffer buffer is 4 bytes long and holds a single BC_ENTER_LOOPER command protocol code, while read_buffer is empty. ioctl is then called, taking us once more into the Binder driver's binder_ioctl function; here, too, we only care about the logic related to BC_ENTER_LOOPER:
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	/*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret)
		return ret;

	mutex_lock(&binder_lock);
	thread = binder_get_thread(proc);
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
			       proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer);
		if (bwr.write_size > 0) {
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
			       proc->pid, thread->pid, bwr.write_consumed, bwr.write_size, bwr.read_consumed, bwr.read_size);
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	......
	default:
		ret = -EINVAL;
		goto err;
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	mutex_unlock(&binder_lock);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
	return ret;
}
The code at the start of the function needs no further explanation; it is the same as for the earlier binder_become_context_manager call, except that this time the binder_get_thread call finds the binder_thread directly in proc rather than creating a new one.
First, the statement copy_from_user(&bwr, ubuf, sizeof(bwr)) converts the argument passed in from user space into a struct binder_write_read and stores it in the local variable bwr. As noted, bwr.write_size equals 4, so we enter the binder_thread_write function. Again, we only look at the code related to BC_ENTER_LOOPER:
int
binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
		    void __user *buffer, int size, signed long *consumed)
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		......
		case BC_ENTER_LOOPER:
			if (binder_debug_mask & BINDER_DEBUG_THREADS)
				printk(KERN_INFO "binder: %d:%d BC_ENTER_LOOPER\n",
				       proc->pid, thread->pid);
			if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {
				thread->looper |= BINDER_LOOPER_STATE_INVALID;
				binder_user_error("binder: %d:%d ERROR:"
					" BC_ENTER_LOOPER called after "
					"BC_REGISTER_LOOPER\n",
					proc->pid, thread->pid);
			}
			thread->looper |= BINDER_LOOPER_STATE_ENTERED;
			break;
		......
		default:
			printk(KERN_ERR "binder: %d:%d unknown command %d\n", proc->pid, thread->pid, cmd);
			return -EINVAL;
		}
		*consumed = ptr - buffer;
	}
	return 0;
}
Recall that on the earlier binder_become_context_manager path into binder_ioctl, the thread->looper value ended up as 0, so once BC_ENTER_LOOPER has been processed here, thread->looper becomes BINDER_LOOPER_STATE_ENTERED, indicating that the current thread has entered the looping state.
Back in binder_ioctl, since bwr.read_size == 0, the binder_thread_read function is not executed, and with that binder_ioctl's job is done.
Back in binder_loop, we enter the for loop:
for (;;) {
    bwr.read_size = sizeof(readbuf);
    bwr.read_consumed = 0;
    bwr.read_buffer = (unsigned) readbuf;

    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

    if (res < 0) {
        LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
        break;
    }

    res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func);
    if (res == 0) {
        LOGE("binder_loop: unexpected reply?!\n");
        break;
    }
    if (res < 0) {
        LOGE("binder_loop: io error %d %s\n", res, strerror(errno));
        break;
    }
}
Another ioctl command is executed. Note the values of the individual members of bwr at this point:
bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;
readbuf[0] = BC_ENTER_LOOPER;
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (unsigned) readbuf;
We enter the binder_ioctl function once again:
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	/*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret)
		return ret;

	mutex_lock(&binder_lock);
	thread = binder_get_thread(proc);
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
			       proc->pid, thread->pid, bwr.write_size, bwr.write_buffer, bwr.read_size, bwr.read_buffer);
		if (bwr.write_size > 0) {
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
			printk(KERN_INFO "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
			       proc->pid, thread->pid, bwr.write_consumed, bwr.write_size, bwr.read_consumed, bwr.read_size);
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	......
	default:
		ret = -EINVAL;
		goto err;
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	mutex_unlock(&binder_lock);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
	return ret;
}
This time bwr.write_size equals 0, so binder_thread_write is not executed; bwr.read_size equals sizeof(readbuf), i.e. 128 bytes (readbuf is an array of 32 unsigned integers), so we enter the binder_thread_read function:
static int
binder_thread_read(struct binder_proc *proc, struct binder_thread *thread,
		   void __user *buffer, int size, signed long *consumed, int non_block)
{
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL && list_empty(&thread->todo);

	if (thread->return_error != BR_OK && ptr < end) {
		if (thread->return_error2 != BR_OK) {
			if (put_user(thread->return_error2, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);
			if (ptr == end)
				goto done;
			thread->return_error2 = BR_OK;
		}
		if (put_user(thread->return_error, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		thread->return_error = BR_OK;
		goto done;
	}

	thread->looper |= BINDER_LOOPER_STATE_WAITING;
	if (wait_for_proc_work)
		proc->ready_threads++;
	mutex_unlock(&binder_lock);
	if (wait_for_proc_work) {
		if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
					BINDER_LOOPER_STATE_ENTERED))) {
			binder_user_error("binder: %d:%d ERROR: Thread waiting "
				"for process work before calling BC_REGISTER_"
				"LOOPER or BC_ENTER_LOOPER (state %x)\n",
				proc->pid, thread->pid, thread->looper);
			wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
		}
		binder_set_nice(proc->default_priority);
		if (non_block) {
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_interruptible_exclusive(proc->wait, binder_has_proc_work(proc, thread));
	} else {
		if (non_block) {
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_interruptible(thread->wait, binder_has_thread_work(thread));
	}
	......
}
The *consumed value passed in is 0, so a BR_NOOP value is first written into the buffer that ptr points at, i.e. the user-supplied bwr.read_buffer. At this point thread->transaction_stack == NULL and the thread->todo list is empty, meaning the current thread has no transaction to process, so wait_for_proc_work is true, indicating that we should instead check whether proc has unprocessed work. Meanwhile thread->return_error == BR_OK, as initialized earlier when the binder_thread was created. Execution therefore continues: the thread's state gains BINDER_LOOPER_STATE_WAITING, marking it as waiting, and binder_set_nice sets the current thread's priority to proc->default_priority, because the thread is about to handle transactions belonging to proc and should run at the same priority as proc. In this scenario proc has no work to process either, i.e. binder_has_proc_work(proc, thread) is false. If the file was opened in non-blocking mode, i.e. non_block is true, the function returns -EAGAIN right away, asking the user to retry the ioctl; otherwise the current thread goes to sleep in wait_event_interruptible_exclusive, to be woken up when a request arrives.
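For completeness, the two predicates used in the waits above decide whether there is pending work; in this version of the driver they are simple checks along the following lines:
static int
binder_has_proc_work(struct binder_proc *proc, struct binder_thread *thread)
{
	return !list_empty(&proc->todo) ||
		(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);
}

static int
binder_has_thread_work(struct binder_thread *thread)
{
	return !list_empty(&thread->todo) || thread->return_error != BR_OK ||
		(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);
}
Note also that the proc-level wait uses wait_event_interruptible_exclusive, so that when work arrives for the process, only one of the idle threads queued on proc->wait is woken up to handle it.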
At this point, we have analyzed, step by step through the source code, how the Service Manager becomes the daemon of the Android inter-process communication (IPC) mechanism Binder. To summarize, the process by which the Service Manager becomes the Binder daemon is as follows:
1. Open the /dev/binder file: open("/dev/binder", O_RDWR);
2. Establish a 128K memory mapping: mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
3. Tell the Binder driver that it is the daemon: binder_become_context_manager(bs);
4. Enter a loop and wait for requests to arrive: binder_loop(bs, svcmgr_handler);
In the course of this, a struct binder_proc, a struct binder_thread and a struct binder_node were created inside the Binder driver. With that, the Service Manager takes up its duties as the daemon of the Binder inter-process communication mechanism of the Android system.
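Putting the four steps together, the main function of service_manager.c boils down to the following (abridged from frameworks/base/cmds/servicemanager/service_manager.c; error handling for binder_open is omitted):
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024);              /* steps 1 and 2: open and mmap */

    if (binder_become_context_manager(bs)) { /* step 3 */
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);         /* step 4: loop and serve requests */
    return 0;
}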