Goroutine Source Record

The previous post summarized the design of the Go scheduler and how it solves the typical problems of user-mode threading; this article traces the goroutine implementation through the source code. The existing Go 1.5 source code analysis is already very detailed, so here I only summarize the parts I consider important.

Go Program initialization process

The entry address of a C program is usually the _start function of the C runtime library, which initializes the stack, command-line arguments, environment variables, I/O, thread-related state, and global constructors. Go's entry function performs similar work during its initialization.

runtime.args       // collect command-line arguments and set up environment variables
runtime.osinit     // determine the number of CPU cores and adjust procs
runtime.schedinit  // initialize the stack, memory allocator, garbage collector and scheduler

After initialization, control enters runtime.main, which starts the background monitoring thread sysmon. Once the init functions of the runtime package and of all user packages have run, the user's main function is finally executed.

func main() {
    // new os thread for sysmon
    systemstack(func() {
        newm(sysmon, nil)
    })
    runtime_init() // must be before defer
    gcenable()
    main_init()
    main_main()
}
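As a quick user-level illustration of this ordering (a minimal example of my own, not runtime code), package init functions run inside main_init, before main_main ever starts:

package main

import "fmt"

// init runs during main_init, after runtime.schedinit and the start of
// runtime.main, but before main_main (this file's main function).
func init() {
    fmt.Println("package init")
}

func main() {
    fmt.Println("main.main")
}

// Output:
// package init
// main.main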

Note that runtime.main itself is created through newproc and then run via mstart. This means main corresponds to a goroutine rather than a thread, so its priority is no higher than sysmon's.

runtime.newproc
runtime.mstart

Creation of P and G

Schedinit

The scheduler-related work in schedinit includes setting the maximum number of M to 10000, initializing the current M, setting the number of P (by default the number of CPU cores, overridable via the GOMAXPROCS environment variable), and finally resizing the set of P with procresize.

func schedinit() {
    ...
    sched.maxmcount = 10000
    mcommoninit(_g_.m)
    procs := int(ncpu)
    if n := atoi(gogetenv("GOMAXPROCS")); n > 0 {
        if n > _MaxGomaxprocs {
            n = _MaxGomaxprocs
        }
        procs = n
    }
    if procresize(int32(procs)) != nil {
        throw("unknown runnable goroutine during bootstrap")
    }
    ...
}

The number of P has to be adjusted because all P are stored in the global array allp, declared in the data segment as [_MaxGomaxprocs + 1]*p, i.e. 256+1 pointer slots. schedinit calls procresize(nprocs), which initializes the first nprocs pointers in this space and frees the rest.

    • Free unused P: the goroutines in each freed P's local run queue are moved into the global queue in schedt. This case never arises during schedinit itself; the logic exists because procresize is also called from starttheworld when the number of P changes (see the user-level GOMAXPROCS sketch after the procresize listing below).
    • If the current P is one of the freed ones, it is detached from the current M, and the M is bound to allp[0] instead.
    • Walk allp[0..nprocs): P without local goroutines are put into the global sched.pidle list, while P with non-empty local run queues are linked together and returned as the runnablePs list.
func procresize(nprocs int32) *p {
    ...
    // initialize new P's
    for i := int32(0); i < nprocs; i++ {
        pp := allp[i]
        if pp == nil {
            pp = new(p)
            pp.id = i
            pp.status = _Pgcstop
            ...
            atomicstorep(unsafe.Pointer(&allp[i]), unsafe.Pointer(pp))
        }
        if pp.mcache == nil {
            pp.mcache = allocmcache()
        }
    }
    // free unused P's
    for i := nprocs; i < old; i++ {
        p := allp[i]
        // move all runnable goroutines to the global queue
        for p.runqhead != p.runqtail {
            p.runqtail--
            gp := p.runq[p.runqtail%uint32(len(p.runq))]
            globrunqputhead(gp)
        }
        freemcache(p.mcache)
        p.mcache = nil
        p.status = _Pdead
        // can't free P itself because it can be
        // referenced by an M in syscall
    }
    _g_ := getg()
    if _g_.m.p != 0 && _g_.m.p.ptr().id < nprocs {
        // continue to use the current P
        _g_.m.p.ptr().status = _Prunning
    } else {
        // release the current P and acquire allp[0]
        _g_.m.p = 0
        _g_.m.mcache = nil
        p := allp[0]
        p.m = 0
        p.status = _Pidle
        acquirep(p)
    }
    var runnablePs *p
    for i := nprocs - 1; i >= 0; i-- {
        p := allp[i]
        if _g_.m.p.ptr() == p {
            continue
        }
        p.status = _Pidle
        if runqempty(p) {
            pidleput(p)
        } else {
            p.m.set(mget())
            p.link.set(runnablePs)
            runnablePs = p
        }
    }
    var int32p *int32 = &gomaxprocs
    atomicstore((*uint32)(unsafe.Pointer(int32p)), uint32(nprocs))
    return runnablePs
}
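The same resize path can be observed from user code. The sketch below is my own minimal example: runtime.GOMAXPROCS(0) merely queries the value that schedinit computed, while passing a new value stops the world, calls procresize, and restarts the world with the returned runnable P list.

package main

import (
    "fmt"
    "runtime"
)

func main() {
    // GOMAXPROCS(0) only queries the current value, which schedinit
    // derived from ncpu or the GOMAXPROCS environment variable.
    fmt.Println("current GOMAXPROCS:", runtime.GOMAXPROCS(0))

    // Changing the value at run time goes through stoptheworld,
    // procresize(new value) and starttheworld, just like the
    // starttheworld case described above.
    prev := runtime.GOMAXPROCS(2)
    fmt.Println("previous value:", prev)
}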

Newproc

The Go compiler translates a go func() statement into a call to runtime.newproc(). For the go func call, the following are pushed onto the stack from right to left: the caller's PC register, the number of return values, the number of parameters, the address of the first parameter, and the address of the function.

func newproc(siz int32, fn *funcval) {
    argp := add(unsafe.Pointer(&fn), ptrSize)
    pc := getcallerpc(unsafe.Pointer(&siz))
    systemstack(func() {
        newproc1(fn, (*uint8)(argp), siz, 0, pc)
    })
}
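For reference, an ordinary go statement such as the one below is what the compiler rewrites into this runtime.newproc call (a minimal example of my own; the add function and done channel are made up for illustration). The argument size and the arguments themselves sit right after fn on the caller's stack, which is exactly where argp points.

package main

import "fmt"

func add(a, b int, done chan<- int) {
    done <- a + b
}

func main() {
    done := make(chan int)
    // The compiler lowers the go statement into (roughly) a call to
    // runtime.newproc with the total argument size and the address of add;
    // newproc1 then copies the arguments onto the new goroutine's stack.
    go add(1, 2, done)
    fmt.Println(<-done) // 3
}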

newproc1 creates the G instance. It first tries gfget() to obtain an idle G object and falls back to malg to allocate a new one if that fails. It then sets up the stack space, fills in the sched field that saves the execution context, sets the initial state to _Grunnable, puts the G into the run queue with runqput and, if there is an idle P, tries to wake one up to execute it.

    • gfget obtains a reusable G object from the local P's gfree list or from the gfree list in the global sched. When goexit0 runs at the end of a goroutine, the finished G is returned via gfput to the gfree list of the local P.
    • malg allocates a new G object with new(g) and gives it the default 2KB stack space; its main job is to initialize newg.stack via stackalloc.
    • The arguments given to go func are copied into G's own stack space, because the new goroutine is no longer tied to the stack on which it was created (e.g. main's stack); every goroutine uses its own separate stack.
    • runqput places the newly created G preferentially into p.runnext, otherwise into the circular queue p.runq implemented as an array; if the local queue runq [256]*g is full, it takes the scheduler lock and pushes the G into the global queue sched.runq.
    • When the local queue is full, runqputslow moves half of the local G tasks into the global queue so that other P can execute them. Finally wakep is called to wake up another M/P to run the work.
func newproc1(fn *funcval, argp *uint8, narg int32, nret int32, callerpc uintptr) *g {
    _g_ := getg()
    _p_ := _g_.m.p.ptr()
    newg := gfget(_p_) // get a free reusable G object
    if newg == nil {
        newg = malg(_StackMin)           // allocate a new G object with a 2KB stack
        casgstatus(newg, _Gidle, _Gdead) // set the G status to _Gdead
        allgadd(newg)
    }
    ...
    // determine the stack top and copy the parameter list onto the new stack
    memmove(unsafe.Pointer(spArg), unsafe.Pointer(argp), uintptr(narg))
    // initialize the sched field used to save the execution context
    memclr(unsafe.Pointer(&newg.sched), unsafe.Sizeof(newg.sched))
    newg.sched.sp = sp
    // set goexit as the return address of the G task function
    newg.sched.pc = funcPC(goexit) + _PCQuantum
    newg.sched.g = guintptr(unsafe.Pointer(newg))
    gostartcallfn(&newg.sched, fn)
    // set status and id fields
    newg.gopc = callerpc
    newg.startpc = fn.fn
    casgstatus(newg, _Gdead, _Grunnable)
    ...
    newg.goid = int64(_p_.goidcache)
    _p_.goidcache++
    // put the G into the run queue
    runqput(_p_, newg, true)
    // wake a P only if there is an idle P, no M is already spinning
    // waiting for work, and the G being created is not runtime.main
    if atomicload(&sched.npidle) != 0 && atomicload(&sched.nmspinning) == 0 &&
        unsafe.Pointer(fn.fn) != unsafe.Pointer(funcPC(main)) {
        wakep()
    }
    ...
    return newg
}

M's creation and G's execution

As the previous section shows, runtime.newproc only creates the G and places it into the current P's run queue or the global queue. For the main goroutine, mstart is called explicitly; any other goroutine triggers a wakep attempt, which starts M creation and G execution.

Wakep+startm

First, pidleget tries to take an idle P from the sched.pidle list; if there is none, the G simply stays queued and waits for an existing P. Once a P is obtained it must be bound to an M to execute: a reusable M is taken from sched.midle and woken via notewakeup, and if no idle M exists a new one is created with newm.

func wakep() {
    // be conservative about spinning threads
    if !cas(&sched.nmspinning, 0, 1) {
        return
    }
    startm(nil, true)
}

func startm(_p_ *p, spinning bool) {
    lock(&sched.lock)
    // if startm was not given a P, try to take an idle one
    if _p_ == nil {
        _p_ = pidleget()
        if _p_ == nil {
            unlock(&sched.lock)
            if spinning {
                xadd(&sched.nmspinning, -1)
            }
            return
        }
    }
    mp := mget() // take an idle M; if there is none, create one with newm
    unlock(&sched.lock)
    if mp == nil {
        var fn func()
        if spinning {
            fn = mspinning
        }
        newm(fn, _p_)
        return
    }
    mp.spinning = spinning
    mp.nextp.set(_p_)
    notewakeup(&mp.park)
}

Newm

allocm mainly initializes the M's own goroutine g0 with its default 8KB stack; the address of that stack is passed to newosproc as the default stack for the system thread. mcommoninit checks whether the number of M exceeds the default limit of 10000 and then adds the M to the allm list, from which it is never freed. newosproc creates the OS thread; on Linux this is the clone system call, with the flags below specifying which process resources are shared. In particular _CLONE_THREAD means the cloned thread shows the same PID as the current process. The start function of the OS thread is set to mstart.

_CLONE_VM | _CLONE_FS | _CLONE_FILES | _CLONE_SIGHAND | _CLONE_THREAD

func newm(fn func(), _p_ *p) {
    mp := allocm(_p_, fn)
    // stash the P into the M's nextp field
    mp.nextp.set(_p_)
    msigsave(mp)
    ...
    newosproc(mp, unsafe.Pointer(mp.g0.stack.hi))
}

func allocm(_p_ *p, fn func()) *m {
    ...
    mp := new(m)
    mp.mstartfn = fn
    mcommoninit(mp)
    mp.g0 = malg(8192 * stackGuardMultiplier)
    mp.g0.m = mp
    ...
    return mp
}

func newosproc(mp *m, stk unsafe.Pointer) {
    ret := clone(cloneFlags, stk, unsafe.Pointer(mp),
        unsafe.Pointer(mp.g0), unsafe.Pointer(funcPC(mstart)))
}

Mstart

Whether for the main goroutine or any other goroutine, the ultimate starting point of G execution is mstart. mstart mainly sets the stack guard boundaries of its g0 and binds the M to its nextp. The binding is done by acquirep: the M takes over P's mcache and sets P's state to _Prunning.

func mstart() {
    _g_ := getg()
    // Initialize stack guards so that we can start calling
    // both Go and C functions with stack growth prologues.
    _g_.stackguard0 = _g_.stack.lo + _StackGuard
    _g_.stackguard1 = _g_.stackguard0
    mstart1()
}

func mstart1() {
    _g_ := getg()
    // initialize g0's execution context
    gosave(&_g_.m.g0.sched)
    _g_.m.g0.sched.pc = ^uintptr(0) // make sure it is never used
    asminit()
    minit()
    // run the start function, usually mspinning(), which decrements sched.nmspinning
    if fn := _g_.m.mstartfn; fn != nil {
        fn()
    }
    // bind the M to its nextp
    if _g_.m.helpgc != 0 {
        _g_.m.helpgc = 0
        stopm()
    } else if _g_.m != &m0 {
        acquirep(_g_.m.nextp.ptr())
        _g_.m.nextp = 0
    }
    // enter the scheduling loop; never returns
    schedule()
}

Schedule

G's execution process can be summarized as: obtain a G task from one of several sources, then execute it. Executing a G requires switching from the current g0 stack to G's own stack; when the task function returns, goexit cleans up the execution context and re-enters schedule.

    • Getting a G task: take from the local P queue with runqget first; every 61 schedule ticks, take from the global queue instead, to ensure fairness. If neither the local queue, the global queue nor the network poller (netpoll) has work, steal from other P's queues.
    • Executing the task ultimately calls gogo, which switches from the g0 stack to G's own stack and jumps (JMP) into the G task function's code.
    • The G task function returns into goexit, because newproc1 pushed goexit as the return address when it initialized G's stack. goexit0 then cleans up the G's state, returns the G to the gfree list, and re-enters the scheduling loop.
func schedule() {
    _g_ := getg()
top:
    var gp *g
    var inheritTime bool
    if gp == nil {
        // every 61 schedule ticks take a G from the global queue, to ensure fairness
        if _g_.m.p.ptr().schedtick%61 == 0 && sched.runqsize > 0 {
            lock(&sched.lock)
            gp = globrunqget(_g_.m.p.ptr(), 1)
            unlock(&sched.lock)
            if gp != nil {
                resetspinning()
            }
        }
    }
    // take a G from the local P queue
    if gp == nil {
        gp, inheritTime = runqget(_g_.m.p.ptr())
        if gp != nil && _g_.m.spinning {
            throw("schedule: spinning with local work")
        }
    }
    // try all other possible sources; if nothing is found, block and put the M to sleep
    if gp == nil {
        gp, inheritTime = findrunnable()
        resetspinning()
    }
    // execute the G
    execute(gp, inheritTime)
}

func execute(gp *g, inheritTime bool) {
    _g_ := getg()
    casgstatus(gp, _Grunnable, _Grunning)
    gp.waitsince = 0
    gp.preempt = false
    gp.stackguard0 = gp.stack.lo + _StackGuard
    if !inheritTime {
        _g_.m.p.ptr().schedtick++
    }
    _g_.m.curg = gp
    gp.m = _g_.m
    gogo(&gp.sched)
}
func goexit0(gp *g) {
    _g_ := getg()
    casgstatus(gp, _Grunning, _Gdead)
    gp.m = nil
    ...
    gfput(_g_.m.p.ptr(), gp)
    schedule()
}
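A user-visible way to enter this loop is runtime.Gosched, which marks the current G runnable again, puts it back on the global run queue, and re-enters schedule. A minimal example of my own:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    runtime.GOMAXPROCS(1) // a single P makes the hand-off easy to follow
    done := make(chan struct{})
    go func() {
        fmt.Println("other goroutine runs")
        close(done)
    }()
    // Gosched puts the current G back on the global run queue and calls
    // schedule(), giving the goroutine above a chance to run on this P.
    runtime.Gosched()
    <-done
    fmt.Println("main resumes")
}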

State change

The state transitions of P and G

All P are created in schedinit during program initialization. Apart from the P bound to the main goroutine's M, the other nprocs-1 P are placed in the idle P list waiting to be used, with state _Pidle. When an M is bound to a P via acquirep, the P's state is set to _Prunning. _Psyscall appears only when a system call is entered, and _Pdead is used only when procresize shrinks the number of P.

const (
    _Pidle = iota
    _Prunning // Only this P is allowed to change from _Prunning.
    _Psyscall
    _Pgcstop
    _Pdead
)

G is created in newproc when a function is invoked through the go keyword; its initial state is _Gidle. Before the stack space is assigned, the G is marked _Gdead; after initialization, just before being placed in the run queue, the state changes to _Grunnable. Once an M actually executes the G, the state becomes _Grunning.

const (
    _Gidle            = iota // 0
    _Grunnable               // 1 runnable and on a run queue
    _Grunning                // 2
    _Gsyscall                // 3
    _Gwaiting                // 4
    _Gmoribund_unused        // 5 currently unused, but hardcoded in gdb scripts
    _Gdead                   // 6
    _Genqueue                // 7 only used together with _Gscanenqueue
    _Gcopystack              // 8 in this state when newstack is moving the stack
)
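These states can be observed from user code: runtime.Stack with all=true dumps every goroutine together with a wait reason that reflects its state. In the minimal example below (my own), the goroutine blocked on a channel receive is in _Gwaiting and shows up as "chan receive".

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    ch := make(chan int)
    go func() { <-ch }() // blocks and sits in _Gwaiting ("chan receive")

    time.Sleep(100 * time.Millisecond) // crude: give the goroutine time to block
    buf := make([]byte, 1<<16)
    n := runtime.Stack(buf, true) // true: include all goroutines
    fmt.Printf("%s", buf[:n])     // look for "goroutine N [chan receive]:"
}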

Gopark+goready

_Gwaiting appears only through park_m; a G in this state will never execute again unless runtime.ready is called on it, because a _Gwaiting G sits in no run queue. Channel operations, timers, and network polling can all park a goroutine.

func park_m(gp *g) {
    _g_ := getg()
    casgstatus(gp, _Grunning, _Gwaiting)
    dropg()
    if _g_.m.waitunlockf != nil {
        fn := *(*func(*g, unsafe.Pointer) bool)(unsafe.Pointer(&_g_.m.waitunlockf))
        ok := fn(gp, _g_.m.waitlock)
        _g_.m.waitunlockf = nil
        _g_.m.waitlock = nil
        if !ok {
            casgstatus(gp, _Gwaiting, _Grunnable)
            execute(gp, true) // Schedule it back, never returns.
        }
    }
    schedule()
}
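From user code, the typical way to hit this path is a blocking channel operation, as in the minimal example of my own below: the receive on an empty channel parks the goroutine (chanrecv calls gopark, which ends in park_m), and the later send makes it runnable again through goready.

package main

import "fmt"

func main() {
    ch := make(chan int)

    go func() {
        // the send finds the parked receiver and calls goready on it,
        // moving it from _Gwaiting back to _Grunnable
        ch <- 42
    }()

    // the receive on an empty channel parks this goroutine:
    // chanrecv -> gopark -> park_m, and the state becomes _Gwaiting
    fmt.Println(<-ch) // 42
}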

Midle and Gsyscall

When a G returns from a system call, it first calls dropg to detach from its original M, since that M no longer has a P to supply it with memory. The G then tries pidleget to find an idle P to run on; if there is none, it is put on the global run queue. Finally stopm parks the current M and the scheduling loop continues.

func exitsyscall0(gp *g) {
    _g_ := getg()
    casgstatus(gp, _Gsyscall, _Grunnable)
    dropg()
    lock(&sched.lock)
    _p_ := pidleget()
    if _p_ == nil {
        globrunqput(gp)
    } else if atomicload(&sched.sysmonwait) != 0 {
        atomicstore(&sched.sysmonwait, 0)
        notewakeup(&sched.sysmonnote)
    }
    unlock(&sched.lock)
    if _p_ != nil {
        acquirep(_p_)
        execute(gp, false) // Never returns.
    }
    if _g_.m.lockedg != nil {
        // Wait until another thread schedules gp and so m again.
        stoplockedm()
        execute(gp, false) // Never returns.
    }
    stopm()
    schedule() // Never returns.
}

When an M exits a system call in exitsyscall0, stopm puts the M into the idle M list and the M sleeps until it is woken. The "idle M" that startm looks for is exactly such an M recovered from a system call. startm is triggered on two occasions: wakep when a new G is added, and handoffp when there are other G tasks to run; both reuse an idle M by waking it with notewakeup.

func stopm() {
    _g_ := getg()
retry:
    lock(&sched.lock)
    mput(_g_.m)
    unlock(&sched.lock)
    notesleep(&_g_.m.park)
    noteclear(&_g_.m.park)
    acquirep(_g_.m.nextp.ptr())
    _g_.m.nextp = 0
}