The Go Scheduler


Go Runtime Scheduler:
Before diving into Go's scheduler, you need to understand why Go needs one at all: doesn't the OS kernel already schedule threads?

Anyone familiar with the POSIX API knows that POSIX threads are largely a logical description and extension of the UNIX process model, and the two have much in common: a thread has its own signal mask, CPU affinity, and so on. But many of these features are needless overhead for Go programs, in particular the time-consuming context switches. Another reason is garbage collection: Go's collector requires all goroutines to stop so that memory is in a consistent state. The moment a collection begins is unpredictable, and if Go relied on the OS scheduler, a large number of threads would have to be stopped and waited for.

By implementing its own scheduler, Go can know exactly when memory is in a consistent state. When garbage collection starts, the runtime only has to wait for the threads that are actively running on CPU cores, rather than waiting for every thread.

There are three ways to map user-space threads onto kernel-space threads: N:1, 1:1, and M:N.
N:1 runs multiple (N) user threads on a single kernel thread. Context switches are very fast, but this model cannot take advantage of multiple cores.
1:1 runs each user thread on its own kernel thread. It can exploit multiple cores, but context switches are slow because they go through the kernel.
M:N runs multiple goroutines on multiple kernel threads, which combines the advantages of both models, but makes the scheduler considerably harder to implement.

There are three important structures inside the Go scheduler: G, M, and P.
M: represents a real kernel OS thread, like a POSIX thread; it is the entity that actually does the work.
G: represents a goroutine; it has its own stack, instruction pointer, and other information needed for scheduling (such as the channel it is blocked on).
P: represents a scheduling context; it can be seen as a local scheduler that lets Go code run on a thread, and it is the key to moving from an N:1 model to an M:N one.

In the figure, there are two OS threads (M), each holding a context (P), and each running a goroutine (G).
The number of P's can be set with runtime.GOMAXPROCS(); it represents the true degree of parallelism, i.e. how many goroutines can run at the same time.
The gray goroutines in the figure are not running; they are ready and waiting to be scheduled. Each P maintains this queue, called the runqueue.
Starting a goroutine in Go is easy: a `go` statement is all it takes. Every time a `go` statement executes, a goroutine is appended to the end of the runqueue; at the next scheduling point, a goroutine is popped from the runqueue (how the scheduler decides which one is a separate question) and executed.

Why maintain multiple contexts (P)? Because when an OS thread blocks, its P can move to another OS thread.
As the figure shows, when OS thread M0 blocks (for example, in a system call), its P moves over to run on OS thread M1 instead. The scheduler guarantees that there are enough threads to run all of the contexts P.

The M1 in the figure may be newly created, or taken from a thread cache.

When M0 returns (that is, when the previously blocked M0 wakes up), it must try to acquire a context P in order to run its goroutine. Normally, it will try to steal a context from another OS thread. If it cannot get one, it puts its goroutine on a global runqueue and goes to sleep (back into the thread cache), because a goroutine needs a context P to run. The contexts P also periodically check the global runqueue; otherwise the goroutines on it would never be executed.

In another case, a P finishes the G's assigned to it very quickly (work is unevenly distributed), leaving that context idle while the system is still busy. If the global runqueue has no G's either, the P has to take some G's from another context. When one context steals work from another, it generally "steals" half of the victim's runqueue, which ensures that every OS thread stays fully utilized.
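The "steal half" rule can be sketched as a toy model. This is not the real runtime's implementation (which uses lock-free deques over fixed-size arrays); the `P` struct and `stealHalf` function here are simplified illustrations:

```go
package main

import "fmt"

// P is a toy model of a scheduling context: it owns a local run queue
// of pending task IDs.
type P struct {
	runq []int
}

// stealHalf moves half of the victim's queue to the thief, mirroring
// the "steal half of the run queue" rule described above.
func stealHalf(thief, victim *P) {
	n := len(victim.runq) / 2
	if n == 0 {
		return
	}
	thief.runq = append(thief.runq, victim.runq[:n]...)
	victim.runq = victim.runq[n:]
}

func main() {
	busy := &P{runq: []int{1, 2, 3, 4, 5, 6}}
	idle := &P{} // this context ran out of work
	stealHalf(idle, busy)
	fmt.Println("idle:", idle.runq) // idle: [1 2 3]
	fmt.Println("busy:", busy.runq) // busy: [4 5 6]
}
```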

Author: Yi Wang
Source: Zhihu
Copyright belongs to the author; please contact the author for authorization before reprinting.
