Channels, maps, and slices are the three core data structures of Go, and they are essential knowledge for anyone who develops primarily in the language. Understanding their design and source code is the foundation for using them well, so in this series I will walk through the source of all three in detail (collecting the insights of many others, plus a few observations of my own). If it helps, a like and a follow are much appreciated.
Go source code analysis series:
Go Source Code Analysis: Channel
Go Source Code Analysis: Map
Go Source Code Analysis: Slice
1.hchan
- The underlying data structure of a channel is the hchan struct.
- recvq is the list of goroutines blocked on receive operations on the channel, and sendq is the list of goroutines blocked on send operations (a doubly linked list used as a FIFO queue; the double-ended structure makes it cheap to enqueue at the tail and dequeue at the head).
- buf is a ring buffer (circular buffer); its advantages include:
  - it suits fixed-length FIFO queues
  - the backing array can be allocated once, up front, at a fixed size
  - it allows an efficient memory access pattern
  - every buffer operation is O(1), including consuming an element, because no elements ever have to be moved
  - essentially it is a fixed-length array with head and tail indices; for a standalone implementation see "Go的数据结构与实现【Ring Buffer】 - 掘金" in the references, and the sketch right after this list
- sudog wraps a waiting goroutine together with the element it is sending or receiving; it is the core data structure behind blocked channel operations.
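Below is a minimal, self-contained sketch (my own illustration, not the runtime code) of such a ring buffer in Go. It reuses the hchan field names buf, sendx, recvx, and qcount so the analogy with the real structure is easy to see:

package main

import "fmt"

// ringBuffer is a toy fixed-length ring buffer used only to illustrate how
// hchan's buf/sendx/recvx/qcount cooperate; it is not the runtime implementation.
type ringBuffer struct {
	buf    []int
	sendx  uint // index of the next slot to write, like hchan.sendx
	recvx  uint // index of the next slot to read, like hchan.recvx
	qcount uint // number of elements currently stored, like hchan.qcount
}

func (r *ringBuffer) enqueue(v int) bool {
	if r.qcount == uint(len(r.buf)) {
		return false // full
	}
	r.buf[r.sendx] = v
	r.sendx++
	if r.sendx == uint(len(r.buf)) {
		r.sendx = 0 // wrap around: O(1), no elements are moved
	}
	r.qcount++
	return true
}

func (r *ringBuffer) dequeue() (int, bool) {
	if r.qcount == 0 {
		return 0, false // empty
	}
	v := r.buf[r.recvx]
	r.recvx++
	if r.recvx == uint(len(r.buf)) {
		r.recvx = 0 // wrap around
	}
	r.qcount--
	return v, true
}

func main() {
	r := &ringBuffer{buf: make([]int, 3)}
	r.enqueue(1)
	r.enqueue(2)
	v, _ := r.dequeue()
	fmt.Println(v, r.enqueue(3), r.enqueue(4)) // 1 true true: slot 0 is reused after the wrap-around
}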
The relevant part of the hchan source is as follows:
type hchan struct {
qcount uint // number of elements currently in the queue
dataqsiz uint // capacity of buf
buf unsafe.Pointer // ring buffer that holds the buffered elements
elemsize uint16 // size of the channel's element type
closed uint32 // whether the channel has been closed
elemtype *_type // element type
sendx uint // send index into buf
recvx uint // receive index into buf
recvq waitq // goroutines blocked on receive (<-ch)
sendq waitq // goroutines blocked on send (ch<-)
// lock protects all fields in hchan, as well as several
// fields in sudogs blocked on this channel.
//
// Do not change another G's status while holding this lock
// (in particular, do not ready a G), as this can deadlock
// with stack shrinking.
lock mutex
}
type waitq struct {
first *sudog
last *sudog
}
type sudog struct {
// The following fields are protected by the hchan.lock of the
// channel this sudog is blocking on. shrinkstack depends on
// this for sudogs involved in channel ops.
g *g
isSelect bool // true if this sudog is taking part in a select (the wake-up race is decided by a CAS on g.selectDone)
success bool // true if the communication succeeded, false if the goroutine was woken because the channel was closed
next *sudog
prev *sudog
elem unsafe.Pointer // data element (may point to stack)
// The following fields are never accessed concurrently.
// For channels, waitlink is only accessed by g.
// For semaphores, all fields (including the ones above)
// are only accessed when holding a semaRoot lock.
acquiretime int64
releasetime int64
ticket uint32
parent *sudog // semaRoot binary tree
waitlink *sudog // g.waiting list or semaRoot
waittail *sudog // semaRoot
c *hchan // channel
}
2.make
Every channel in Go is created with the make keyword. The compiler turns an expression such as make(chan int, 10) into an OMAKE node and, during type checking, converts that OMAKE node into an OMAKECHAN node:
func typecheck1(n *Node, top int) (res *Node) {
switch n.Op {
case OMAKE:
...
switch t.Etype {
case TCHAN:
l = nil
if i < len(args) { // buffered (asynchronous) channel
...
n.Left = l
} else { // unbuffered (synchronous) channel
n.Left = nodintconst(0)
}
n.Op = OMAKECHAN
}
}
}
This stage checks the buffer size passed to make. If no buffer-size argument is given, the compiler fills in a default of 0, i.e. the channel has no buffer.
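A quick usage example to make the distinction concrete (illustrative only):

package main

import "fmt"

func main() {
	unbuffered := make(chan int)   // no size argument: the compiler fills in 0
	buffered := make(chan int, 10) // explicit buffer size of 10
	fmt.Println(cap(unbuffered), cap(buffered)) // prints: 0 10
}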
Before the SSA intermediate-code generation stage, every OMAKECHAN node is rewritten into a call to runtime.makechan or runtime.makechan64:
func walkexpr(n *Node, init *Nodes) *Node {
switch n.Op {
case OMAKECHAN:
size := n.Left
fnname := "makechan64"
argtype := types.Types[TINT64]
if size.Type.IsKind(TIDEAL) || maxintval[size.Type.Etype].Cmp(maxintval[TUINT]) <= 0 {
fnname = "makechan"
argtype = types.Types[TINT]
}
n = mkcall1(chanfn(fnname, 1, n.Type), n.Type, init, typename(n.Type), conv(size, argtype))
}
}
runtime.makechan and runtime.makechan64 create a new channel structure from the element type and the buffer size. The latter handles buffer sizes larger than 2^32; since that is rare, we focus on runtime.makechan:
func makechan(t *chantype, size int) *hchan {
elem := t.elem
// compiler checks this but be safe.
if elem.size >= 1<<16 {
throw("makechan: invalid channel element type")
}
if hchanSize%maxAlign != 0 || elem.align > maxAlign {
throw("makechan: bad alignment")
}
mem, overflow := math.MulUintptr(elem.size, uintptr(size))
if overflow || mem > maxAlloc-hchanSize || size < 0 {
panic(plainError("makechan: size out of range"))
}
// Hchan does not contain pointers interesting for GC when elements stored in buf do not contain pointers.
// buf points into the same allocation, elemtype is persistent.
// SudoG's are referenced from their owning thread so they can't be collected.
// TODO(dvyukov,rlh): Rethink when collector can move allocated objects.
var c *hchan
switch {
case mem == 0:
// Queue or element size is zero.
c = (*hchan)(mallocgc(hchanSize, nil, true))
// Race detector uses this location for synchronization.
c.buf = c.raceaddr()
case elem.ptrdata == 0:
// Elements do not contain pointers.
// Allocate hchan and buf in one call.
c = (*hchan)(mallocgc(hchanSize+mem, nil, true))
c.buf = add(unsafe.Pointer(c), hchanSize)
default:
// Elements contain pointers.
c = new(hchan)
c.buf = mallocgc(mem, elem, true)
}
c.elemsize = uint16(elem.size)
c.elemtype = elem
c.dataqsiz = uint(size)
lockInit(&c.lock, lockRankHchan)
if debugChan {
print("makechan: chan=", c, "; elemsize=", elem.size, "; dataqsiz=", size, "\n")
}
return c
}
runtime.makechan has two main parts: validity checks and memory allocation.
2.1 Validity checks
- element size: throw if the element type is 1<<16 bytes (64 KiB) or larger
- alignment (alignment reduces the number of memory accesses): throw if hchanSize is not a multiple of maxAlign, or if the element's alignment exceeds maxAlign (8 bytes)
- requested size: panic if size is negative, or if size * elem.size overflows or exceeds the maximum heap allocation (maxAlloc - hchanSize); see the example after the snippet below
if elem.size >= 1<<16 {
throw("makechan: invalid channel element type")
}
if hchanSize%maxAlign != 0 || elem.align > maxAlign {
throw("makechan: bad alignment")
}
mem, overflow := math.MulUintptr(elem.size, uintptr(size))
if overflow || mem > maxAlloc-hchanSize || size < 0 {
panic(plainError("makechan: size out of range"))
}
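As a small illustration of the last check, the snippet below (a sketch that assumes a 64-bit platform; the exact threshold depends on the platform's maxAlloc) forces the run-time path by using a non-constant size and recovers the resulting panic:

package main

import "fmt"

func main() {
	defer func() {
		// expected to recover the plainError "makechan: size out of range"
		fmt.Println("recovered:", recover())
	}()
	size := 1 << 40                    // non-constant, so the check happens in makechan at run time
	_ = make(chan [1 << 10]byte, size) // elem.size * size exceeds maxAlloc - hchanSize
}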
2.2 Allocating memory
runtime.hchan and its buffer are initialized according to the element type and the buffer size:
- if the channel has no buffer (an unbuffered channel), only the hchan struct itself is allocated
- if the element type contains no pointers, hchan and buf are allocated as one contiguous block
- in the default case (elements contain pointers), hchan and buf get separate allocations
var c *hchan
switch {
case mem == 0:
// Queue or element size is zero.
c = (*hchan)(mallocgc(hchanSize, nil, true))
// Race detector uses this location for synchronization.
c.buf = c.raceaddr()
case elem.ptrdata == 0:
// Elements do not contain pointers.
// Allocate hchan and buf in one call.
c = (*hchan)(mallocgc(hchanSize+mem, nil, true))
c.buf = add(unsafe.Pointer(c), hchanSize)
default:
// Elements contain pointers.
c = new(hchan)
c.buf = mallocgc(mem, elem, true)
}
Finally, the remaining hchan fields are filled in: elemsize, elemtype, and dataqsiz.
c.elemsize = uint16(elem.size)
c.elemtype = elem
c.dataqsiz = uint(size)
lockInit(&c.lock, lockRankHchan)
3.send
When we send a value on a channel with ch <- i, the compiler parses the statement into an OSEND node and, in cmd/compile/internal/gc.walkexpr, rewrites it into a call to runtime.chansend1:
case OSEND:
n1 := n.Right
n1 = assignconv(n1, n.Left.Type.Elem(), "chan send")
n1 = walkexpr(n1, init)
n1 = nod(OADDR, n1, nil)
n = mkcall1(chanfn("chansend1", 2, n.Left.Type), nil, init, n.Left, n1)
runtime.chansend1 simply calls runtime.chansend with the channel and the value to be sent. runtime.chansend is the function that is always reached when sending on a channel and contains the entire send logic; when the block parameter is true, the send is a blocking one:
func chansend1(c *hchan, elem unsafe.Pointer) {
chansend(c, elem, true, getcallerpc()) // blocking send
}
- ch <- x performs a blocking send
- x := <-ch performs a blocking receive
3.1 The chansend function
func chansend(c *hchan, ep unsafe.Pointer, block bool, callerpc uintptr) bool {
if c == nil { // sending on a nil channel blocks forever
if !block {
return false
}
gopark(nil, nil, waitReasonChanSendNilChan, traceEvGoStop, 2) // park the goroutine; it will never be woken
throw("unreachable")
}
if debugChan {
print("chansend: chan=", c, "\n")
}
if raceenabled {
racereadpc(c.raceaddr(), callerpc, funcPC(chansend))
}
// Fast path: check for failed non-blocking operation without acquiring the lock.
//
// After observing that the channel is not closed, we observe that the channel is
// not ready for sending. Each of these observations is a single word-sized read
// (first c.closed and second full()).
// Because a closed channel cannot transition from 'ready for sending' to
// 'not ready for sending', even if the channel is closed between the two observations,
// they imply a moment between the two when the channel was both not yet closed
// and not ready for sending. We behave as if we observed the channel at that moment,
// and report that the send cannot proceed.
//
// It is okay if the reads are reordered here: if we observe that the channel is not
// ready for sending and then observe that it is not closed, that implies that the
// channel wasn't closed during the first observation. However, nothing here
// guarantees forward progress. We rely on the side effects of lock release in
// chanrecv() and closechan() to update this thread's view of c.closed and full().
if !block && c.closed == 0 && full(c) { // full() is true when 1) the channel is unbuffered and recvq is empty, or 2) the channel is buffered and the buffer is full
return false
}
var t0 int64
if blockprofilerate > 0 {
t0 = cputicks()
}
lock(&c.lock)
if c.closed != 0 { // re-check under the lock: sending on a closed channel panics
unlock(&c.lock)
panic(plainError("send on closed channel"))
}
if sg := c.recvq.dequeue(); sg != nil { // dequeue the first waiting receiver (skipping sudogs whose select already completed)
// Found a waiting receiver. We pass the value we want to send
// directly to the receiver, bypassing the channel buffer (if any).
send(c, sg, ep, func() { unlock(&c.lock) }, 3)
return true
}
if c.qcount < c.dataqsiz {
// Space is available in the channel buffer. Enqueue the element to send.
qp := chanbuf(c, c.sendx)
if raceenabled {
racenotify(c, c.sendx, nil)
}
typedmemmove(c.elemtype, qp, ep)
c.sendx++
if c.sendx == c.dataqsiz {
c.sendx = 0
}
c.qcount++
unlock(&c.lock)
return true
}
if !block {
unlock(&c.lock)
return false
}
// Block on the channel. Some receiver will complete our operation for us.
gp := getg()
mysg := acquireSudog()
mysg.releasetime = 0
if t0 != 0 {
mysg.releasetime = -1
}
// No stack splits between assigning elem and enqueuing mysg
// on gp.waiting where copystack can find it.
mysg.elem = ep
mysg.waitlink = nil
mysg.g = gp
mysg.isSelect = false
mysg.c = c
gp.waiting = mysg
gp.param = nil
c.sendq.enqueue(mysg)
// Signal to anyone trying to shrink our stack that we're about
// to park on a channel. The window between when this G's status
// changes and when we set gp.activeStackChans is not safe for
// stack shrinking.
atomic.Store8(&gp.parkingOnChan, 1)
gopark(chanparkcommit, unsafe.Pointer(&c.lock), waitReasonChanSend, traceEvGoBlockSend, 2)
// Ensure the value being sent is kept alive until the
// receiver copies it out. The sudog has a pointer to the
// stack object, but sudogs aren't considered as roots of the
// stack tracer.
KeepAlive(ep)
// someone woke us up.
if mysg != gp.waiting {
throw("G waiting list is corrupted")
}
gp.waiting = nil
gp.activeStackChans = false
closed := !mysg.success
gp.param = nil
if mysg.releasetime > 0 {
blockevent(mysg.releasetime-t0, 2)
}
mysg.c = nil
releaseSudog(mysg)
if closed {
if c.closed == 0 {
throw("chansend: spurious wakeup")
}
panic(plainError("send on closed channel"))
}
return true
}
The chansend function boils down to three cases:
- if there is a waiting receiver, i.e. a sudog can be dequeued from recvq, the value is handed directly to that receiver via send
- if the buffer has free space, the value is copied into the channel's buffer
- if the channel is unbuffered or the buffer is full, the goroutine and the value are wrapped in a sudog, enqueued on sendq, and the goroutine blocks until some other goroutine receives from the channel
An uninitialized channel is nil. Sending on a nil channel blocks forever, and so does receiving from one (see the example after this snippet):
if c == nil { // sending on a nil channel blocks forever
if !block {
return false
}
gopark(nil, nil, waitReasonChanSendNilChan, traceEvGoStop, 2) // park the goroutine; it will never be woken
throw("unreachable")
}
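A small demonstration of this behavior (illustrative only; the Sleep is just there so main does not exit immediately):

package main

import (
	"fmt"
	"time"
)

func main() {
	var ch chan int // nil channel: operations on it never make progress
	go func() {
		ch <- 1 // parks inside chansend and is never woken
		fmt.Println("never printed")
	}()
	time.Sleep(100 * time.Millisecond)
	fmt.Println("main exits; the sending goroutine above is still parked")
}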
Sending on a channel that has already been closed causes a panic:
if c.closed != 0 { // re-check under the lock: sending on a closed channel panics
unlock(&c.lock)
panic(plainError("send on closed channel"))
}
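For example (illustrative only), the panic can be observed and recovered like this:

package main

import "fmt"

func main() {
	ch := make(chan int, 1)
	close(ch)
	defer func() {
		fmt.Println("recovered:", recover()) // send on closed channel
	}()
	ch <- 1 // panics inside chansend
}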
If there is a waiting receiver, i.e. a sudog can be dequeued from recvq, the value is handed to it directly via send:
if sg := c.recvq.dequeue(); sg != nil { // dequeue the first waiting receiver (skipping sudogs whose select already completed)
// Found a waiting receiver. We pass the value we want to send
// directly to the receiver, bypassing the channel buffer (if any).
send(c, sg, ep, func() { unlock(&c.lock) }, 3)
return true
}
If there is free space in the buffer, the value is written into buf:
if c.qcount < c.dataqsiz {
// Space is available in the channel buffer. Enqueue the element to send.
qp := chanbuf(c, c.sendx) // address of the buffer slot at index sendx
if raceenabled {
racenotify(c, c.sendx, nil)
}
typedmemmove(c.elemtype, qp, ep) // copy the value into the buffer
c.sendx++
if c.sendx == c.dataqsiz {
c.sendx = 0
}
c.qcount++
unlock(&c.lock)
return true
}
If the buffer is full, or the channel has no buffer at all, the send blocks:
- getg fetches the sending goroutine
- acquireSudog obtains a sudog structure
- the initialized sudog is enqueued on sendq and set as the current goroutine's waiting field, indicating that the goroutine is waiting on this sudog
- gopark puts the current goroutine to sleep until it is woken
- after being woken by the scheduler, it clears a few fields and releases the runtime.sudog
// Block on the channel. Some receiver will complete our operation for us.
gp := getg()
mysg := acquireSudog()
mysg.releasetime = 0
if t0 != 0 {
mysg.releasetime = -1
}
// No stack splits between assigning elem and enqueuing mysg
// on gp.waiting where copystack can find it.
mysg.elem = ep
mysg.waitlink = nil
mysg.g = gp
mysg.isSelect = false
mysg.c = c
gp.waiting = mysg
gp.param = nil
c.sendq.enqueue(mysg)
// Signal to anyone trying to shrink our stack that we're about
// to park on a channel. The window between when this G's status
// changes and when we set gp.activeStackChans is not safe for
// stack shrinking.
// the sudog wrapping this goroutine is now enqueued; it will be woken once a receiver completes the operation
atomic.Store8(&gp.parkingOnChan, 1)
// park: switch this goroutine out and give up the CPU
gopark(chanparkcommit, unsafe.Pointer(&c.lock), waitReasonChanSend, traceEvGoBlockSend, 2)
// Ensure the value being sent is kept alive until the
// receiver copies it out. The sudog has a pointer to the
// stack object, but sudogs aren't considered as roots of the
// stack tracer.
KeepAlive(ep)
// someone woke us up.
// everything below runs only after this goroutine has been woken up
if mysg != gp.waiting {
throw("G waiting list is corrupted")
}
gp.waiting = nil
gp.activeStackChans = false
closed := !mysg.success
gp.param = nil
if mysg.releasetime > 0 {
blockevent(mysg.releasetime-t0, 2)
}
mysg.c = nil
releaseSudog(mysg)
if closed {
if c.closed == 0 {
throw("chansend: spurious wakeup")
}
panic(plainError("send on closed channel"))
}
return true
3.2 The send function
- sendDirect copies the value being sent straight into the receiver's sudog (sg.elem)
- goready marks the waiting receiver goroutine runnable (Grunnable) and puts it on the sending P's runnext slot, so the receiver is woken the next time that P schedules
// send processes a send operation on an empty channel c.
// The value ep sent by the sender is copied to the receiver sg.
// The receiver is then woken up to go on its merry way.
// Channel c must be empty and locked. send unlocks c with unlockf.
// sg must already be dequeued from c.
// ep must be non-nil and point to the heap or the caller's stack.
func send(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int) {
if sg.elem != nil {
sendDirect(c.elemtype, sg, ep)
sg.elem = nil
}
gp := sg.g
unlockf()
gp.param = unsafe.Pointer(sg)
sg.success = true
if sg.releasetime != 0 {
sg.releasetime = cputicks()
}
goready(gp, skip+1)
}
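Putting it together, the unbuffered case is a direct handoff from sender to receiver (a minimal, illustrative example):

package main

import "fmt"

func main() {
	ch := make(chan int) // unbuffered: a send must find a waiting receiver
	done := make(chan struct{})
	go func() {
		fmt.Println("received:", <-ch) // this goroutine parks itself in recvq first
		close(done)
	}()
	ch <- 42 // chansend dequeues the parked receiver and send() copies 42 straight to it
	<-done
}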
4.recv
Go offers two ways to receive from a channel:
i := <-ch
i, ok := <-ch
Both forms are turned into ORECV nodes by the compiler, and the second is further converted into an OAS2RECV node during type checking. Although the two forms compile to calls to two different functions, runtime.chanrecv1 and runtime.chanrecv2, both of them ultimately call runtime.chanrecv.
func chanrecv1(c *hchan, elem unsafe.Pointer) {
chanrecv(c, elem, true)
}
func chanrecv2(c *hchan, elem unsafe.Pointer) (received bool) {
_, received = chanrecv(c, elem, true)
return
}
Key points:
- whether it is chanrecv1 or chanrecv2, both end up calling chanrecv as a blocking operation
- plain sends and receives are always blocking calls
- block is false only when the operation comes from a select statement
4.1 The chanrecv function
func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool) {
// raceenabled: don't need to check ep, as it is always on the stack
// or is new memory allocated by reflect.
if debugChan {
print("chanrecv: chan=", c, "\n")
}
if c == nil {
if !block {
return
}
gopark(nil, nil, waitReasonChanReceiveNilChan, traceEvGoStop, 2)
throw("unreachable")
}
// Fast path: check for failed non-blocking operation without acquiring the lock.
if !block && empty(c) {
// After observing that the channel is not ready for receiving, we observe whether the
// channel is closed.
//
// Reordering of these checks could lead to incorrect behavior when racing with a close.
// For example, if the channel was open and not empty, was closed, and then drained,
// reordered reads could incorrectly indicate "open and empty". To prevent reordering,
// we use atomic loads for both checks, and rely on emptying and closing to happen in
// separate critical sections under the same lock. This assumption fails when closing
// an unbuffered channel with a blocked send, but that is an error condition anyway.
if atomic.Load(&c.closed) == 0 {
// Because a channel cannot be reopened, the later observation of the channel
// being not closed implies that it was also not closed at the moment of the
// first observation. We behave as if we observed the channel at that moment
// and report that the receive cannot proceed.
return
}
// The channel is irreversibly closed. Re-check whether the channel has any pending data
// to receive, which could have arrived between the empty and closed checks above.
// Sequential consistency is also required here, when racing with such a send.
if empty(c) {
// The channel is irreversibly closed and empty.
if raceenabled {
raceacquire(c.raceaddr())
}
if ep != nil {
typedmemclr(c.elemtype, ep)
}
return true, false
}
}
var t0 int64
if blockprofilerate > 0 {
t0 = cputicks()
}
lock(&c.lock)
if c.closed != 0 && c.qcount == 0 {
if raceenabled {
raceacquire(c.raceaddr())
}
unlock(&c.lock)
if ep != nil {
typedmemclr(c.elemtype, ep)
}
return true, false
}
if sg := c.sendq.dequeue(); sg != nil {
// Found a waiting sender. If buffer is size 0, receive value
// directly from sender. Otherwise, receive from head of queue
// and add sender's value to the tail of the queue (both map to
// the same buffer slot because the queue is full).
recv(c, sg, ep, func() { unlock(&c.lock) }, 3)
return true, true
}
if c.qcount > 0 {
// Receive directly from queue
qp := chanbuf(c, c.recvx)
if raceenabled {
racenotify(c, c.recvx, nil)
}
if ep != nil {
typedmemmove(c.elemtype, ep, qp)
}
typedmemclr(c.elemtype, qp)
c.recvx++
if c.recvx == c.dataqsiz {
c.recvx = 0
}
c.qcount--
unlock(&c.lock)
return true, true
}
if !block {
unlock(&c.lock)
return false, false
}
// no sender available: block on this channel.
gp := getg()
mysg := acquireSudog()
mysg.releasetime = 0
if t0 != 0 {
mysg.releasetime = -1
}
// No stack splits between assigning elem and enqueuing mysg
// on gp.waiting where copystack can find it.
mysg.elem = ep
mysg.waitlink = nil
gp.waiting = mysg
mysg.g = gp
mysg.isSelect = false
mysg.c = c
gp.param = nil
c.recvq.enqueue(mysg)
// Signal to anyone trying to shrink our stack that we're about
// to park on a channel. The window between when this G's status
// changes and when we set gp.activeStackChans is not safe for
// stack shrinking.
atomic.Store8(&gp.parkingOnChan, 1)
gopark(chanparkcommit, unsafe.Pointer(&c.lock), waitReasonChanReceive, traceEvGoBlockRecv, 2)
// someone woke us up
if mysg != gp.waiting {
throw("G waiting list is corrupted")
}
gp.waiting = nil
gp.activeStackChans = false
if mysg.releasetime > 0 {
blockevent(mysg.releasetime-t0, 2)
}
success := mysg.success
gp.param = nil
mysg.c = nil
releaseSudog(mysg)
return true, success
}
The chanrecv function likewise boils down to three cases:
- if there is a waiting sender, i.e. a sudog can be dequeued from sendq, recv delivers a value to the receiver (directly from the sender for an unbuffered channel, or from the head of the buffer when the buffer is full)
- if the buffer contains data, the element at the head of the buffer is copied to the receiver's address
- if the buffer is empty, or the channel is unbuffered, the goroutine and the destination address are wrapped in a sudog, enqueued on recvq, and the goroutine blocks until some sender delivers data
- receiving from a nil channel blocks forever
- runtime.gopark gives up the processor
func chanrecv(c *hchan, ep unsafe.Pointer, block bool) (selected, received bool) {
if c == nil {
if !block {
return
}
gopark(nil, nil, "chan receive (nil chan)", traceEvGoStop, 2)
throw("unreachable")
}
...
}
Receiving from a closed channel: if the channel still holds data, the normal flow below continues; if it is already drained, the element type's zero value is returned, and with the ok-idiom form the second return value is false (see the example after this snippet).
lock(&c.lock)
if c.closed != 0 && c.qcount == 0 {
if raceenabled {
raceacquire(c.raceaddr())
}
unlock(&c.lock)
if ep != nil {
typedmemclr(c.elemtype, ep)
}
return true, false
}
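An illustrative example of this drain-then-zero-value behavior:

package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch)

	// a closed channel first yields its buffered values, then zero values with ok == false
	for i := 0; i < 4; i++ {
		v, ok := <-ch
		fmt.Println(v, ok) // 1 true, 2 true, 0 false, 0 false
	}
}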
- if a sending goroutine is blocked on the channel, the buffer must be full (or the channel has no buffer)
- for an unbuffered channel, the receiver takes the value directly from the waiting sender
- for a buffered channel, the receiver takes the value at the head of buf, and the waiting sender's value is then placed at the tail of buf
- in other words, FIFO order is preserved
if sg := c.sendq.dequeue(); sg != nil {
// Found a waiting sender. If buffer is size 0, receive value
// directly from sender. Otherwise, receive from head of queue
// and add sender's value to the tail of the queue (both map to
// the same buffer slot because the queue is full).
recv(c, sg, ep, func() { unlock(&c.lock) }, 3)
return true, true
}
- if no sender is waiting, check whether buf holds any data
- if it does, receive directly from buf and dequeue that element
if c.qcount > 0 {
// Receive directly from queue
qp := chanbuf(c, c.recvx)
if raceenabled {
racenotify(c, c.recvx, nil)
}
if ep != nil {
typedmemmove(c.elemtype, ep, qp)
}
typedmemclr(c.elemtype, qp)
c.recvx++
if c.recvx == c.dataqsiz {
c.recvx = 0
}
c.qcount--
unlock(&c.lock)
return true, true
}
- next, check whether this is a blocking receive
- ordinary channel sends and receives are blocking; the underlying functions are chansend1, chanrecv1, and chanrecv2
- when used inside a select, the underlying calls are selectnbsend and selectnbrecv, which pass block = false
if !block {
unlock(&c.lock)
return false, false
}
- when buf is empty, the receive blocks: the goroutine and the destination address are wrapped in a sudog and enqueued on recvq
- getg fetches the receiving goroutine
- acquireSudog obtains a sudog structure
- the initialized sudog is enqueued on recvq and set as the current goroutine's waiting field, indicating that the goroutine is waiting on this sudog
- gopark puts the current goroutine to sleep until it is woken
- after being woken by the scheduler, it clears a few fields and releases the runtime.sudog
// no sender available: block on this channel.
gp := getg()
mysg := acquireSudog()
mysg.releasetime = 0
if t0 != 0 {
mysg.releasetime = -1
}
// No stack splits between assigning elem and enqueuing mysg
// on gp.waiting where copystack can find it.
mysg.elem = ep
mysg.waitlink = nil
gp.waiting = mysg
mysg.g = gp
mysg.isSelect = false
mysg.c = c
gp.param = nil
c.recvq.enqueue(mysg)
atomic.Store8(&gp.parkingOnChan, 1)
gopark(chanparkcommit, unsafe.Pointer(&c.lock), waitReasonChanReceive, traceEvGoBlockRecv, 2)
4.2 The recv function
- if the buffer size is 0 (an unbuffered channel), recvDirect copies the data straight from the waiting sender
- if there is a buffer, it must be full, and a queue of waiting senders has built up
- the receiver takes the value from the head of the buffer, and the waiting sender's value is written into the slot that has just been freed
- goready marks the blocked sender goroutine runnable (Grunnable) and puts it on the current P's runnext slot, so the blocked sender is woken the next time that P schedules
func recv(c *hchan, sg *sudog, ep unsafe.Pointer, unlockf func(), skip int) {
// no buffer: copy the waiting sender's value directly to the receiver
if c.dataqsiz == 0 {
if ep != nil {
// copy data from sender
recvDirect(c.elemtype, sg, ep)
}
} else {
// there is a buffer, and it must be full (otherwise no sender would be queued)
// take the value at the head of the buffer and write the queued sender's value into that slot
qp := chanbuf(c, c.recvx)
// copy the buffered value to the receiver's destination address
if ep != nil {
typedmemmove(c.elemtype, ep, qp)
}
// copy the queued sender's value into the slot that was just freed
typedmemmove(c.elemtype, qp, sg.elem)
c.recvx++ // advance the receive index
if c.recvx == c.dataqsiz {
c.recvx = 0
}
// the buffer is full, so the send index must equal the receive index
c.sendx = c.recvx // c.sendx = (c.sendx+1) % c.dataqsiz
}
sg.elem = nil
gp := sg.g
unlockf()
gp.param = unsafe.Pointer(sg)
if sg.releasetime != 0 {
sg.releasetime = cputicks()
}
// wake the blocked sender goroutine
goready(gp, skip+1)
}
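The FIFO behavior of a full buffer plus a blocked sender can be observed like this (illustrative only; the Sleep merely gives the sender time to park, so do not rely on such timing in real code):

package main

import (
	"fmt"
	"time"
)

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2 // buffer is now full
	go func() {
		ch <- 3 // blocks: parked in sendq together with its value
	}()
	time.Sleep(50 * time.Millisecond) // let the sender park

	// recv() takes the head of the buffer (1) and moves the parked sender's 3 into the freed slot
	fmt.Println(<-ch) // 1
	fmt.Println(<-ch) // 2
	fmt.Println(<-ch) // 3
}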
5.close
Closing a channel is handled by the closechan function.
5.1 The closechan function
Besides setting c.closed to 1, closing a channel also has to:
- wake every goroutine blocked in the recvq queue
- wake every goroutine blocked in the sendq queue (they will panic)
- it does this by walking recvq and sendq, collecting all of their goroutines onto a glist, and finally readying every goroutine on glist once the channel lock has been released
- closing an already-closed channel panics (see the example after the code)
func closechan(c *hchan) {
if c == nil {
panic(plainError("close of nil channel"))
}
lock(&c.lock)
if c.closed != 0 {
unlock(&c.lock)
panic(plainError("close of closed channel"))
}
if raceenabled {
callerpc := getcallerpc()
racewritepc(c.raceaddr(), callerpc, funcPC(closechan))
racerelease(c.raceaddr())
}
c.closed = 1
var glist gList
// release all readers
for {
sg := c.recvq.dequeue()
if sg == nil {
break
}
if sg.elem != nil {
typedmemclr(c.elemtype, sg.elem)
sg.elem = nil
}
if sg.releasetime != 0 {
sg.releasetime = cputicks()
}
gp := sg.g
gp.param = unsafe.Pointer(sg)
sg.success = false
if raceenabled {
raceacquireg(gp, c.raceaddr())
}
glist.push(gp)
}
// release all writers (they will panic)
for {
sg := c.sendq.dequeue()
if sg == nil {
break
}
sg.elem = nil
if sg.releasetime != 0 {
sg.releasetime = cputicks()
}
gp := sg.g
gp.param = unsafe.Pointer(sg)
sg.success = false
if raceenabled {
raceacquireg(gp, c.raceaddr())
}
glist.push(gp)
}
unlock(&c.lock)
// Ready all Gs now that we've dropped the channel lock.
for !glist.empty() {
gp := glist.pop()
gp.schedlink = 0
goready(gp, 3)
}
}
typedmemclr zeroes the memory block of type elemtype that the given pointer (here sg.elem, the blocked receiver's destination) points to, so a receiver woken by close gets the element type's zero value.
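The observable effect, sketched below (illustrative; the goroutine output order is not deterministic): every blocked receiver is woken with the zero value and ok == false, and a second close panics:

package main

import (
	"fmt"
	"sync"
)

func main() {
	ch := make(chan int)
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			v, ok := <-ch          // parks in recvq (or finds the channel already closed)
			fmt.Println(id, v, ok) // each prints its id, 0, false
		}(i)
	}
	close(ch) // closechan readies every parked receiver
	wg.Wait()

	defer func() { fmt.Println("recovered:", recover()) }() // close of closed channel
	close(ch) // closing a second time panics
}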
6.select
The implementation of Go's select statement lives in runtime/select.go; this article does not dig into select itself. What we care about here is how select and channels interact.
- when a channel is used inside a select statement, the underlying calls are still chansend and chanrecv
- the difference is that these calls are non-blocking (block = false), whereas ordinary channel sends and receives are blocking
6.1 Sending to a channel
select {
case c <- x:
... foo
default:
... bar
}
is compiled into:
if selectnbsend(c, v) {
... foo
} else {
... bar
}
The corresponding selectnbsend function is:
func selectnbsend(c *hchan, elem unsafe.Pointer) (selected bool) {
return chansend(c, elem, false, getcallerpc())
}
6.2 Receiving from a channel
select {
case v = <-c:
... foo
default:
... bar
}
is compiled into:
if selectnbrecv(&v, c) {
... foo
} else {
... bar
}
The corresponding selectnbrecv function is:
func selectnbrecv(elem unsafe.Pointer, c *hchan) (selected bool) {
selected, _ = chanrecv(c, elem, false)
return
}
The other receive form:
select {
case v, ok = <-c:
... foo
default:
... bar
}
is compiled into:
if c != nil && selectnbrecv2(&v, &ok, c) {
... foo
} else {
... bar
}
The corresponding selectnbrecv2 function is:
func selectnbrecv2(elem unsafe.Pointer, received *bool, c *hchan) (selected bool) {
// TODO(khr): just return 2 values from this function, now that it is in Go.
selected, *received = chanrecv(c, elem, false)
return
}
The functions these compile down to have all been covered above, so we will not repeat them here. A complete runnable example is shown below.
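A small end-to-end sketch of non-blocking channel operations via select (illustrative only):

package main

import "fmt"

func main() {
	ch := make(chan int, 1)

	// non-blocking send: compiles down to selectnbsend -> chansend(c, elem, false, ...)
	select {
	case ch <- 1:
		fmt.Println("sent")
	default:
		fmt.Println("would block")
	}

	// non-blocking receive: compiles down to selectnbrecv -> chanrecv(c, elem, false)
	select {
	case v := <-ch:
		fmt.Println("got", v)
	default:
		fmt.Println("channel empty")
	}
}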
References
- Golang channel源码深度剖析
- golang channel 最详细的源码剖析
- Go语言并发模型:使用 select
- Golang的select/非缓冲的Channel实例详解
- 图解Golang channel源码
- Go Channel 源码剖析
- golang channel 源码剖析
- 深入 Go 并发原语 — Channel 底层实现
- Go的数据结构与实现【Ring Buffer】 - 掘金
- Go语言设计与实现-Channel
Further reading
- Dmitry Vyukov. Oct 2014. "runtime: lock-free channels"
- Concurrency in Golang
- Communicating sequential processes
- lock-free channel
- 实现无限缓存的channel