How the Go language handles stacks


Go 1.4beta1 has just been released, and in it the Go runtime's stack handling has changed from the previous "segmented stacks" to "contiguous stacks". CloudFlare's official blog has a systematic write-up of Go's stack management mechanism, its history, and its problems; the content below is translated from the CloudFlare post "How Stacks are Handled in Go".

At CloudFlare, we use the Go language to implement a variety of services and applications. In this post, we will walk you through some of Go's complex technical details.

One of Go's important features is goroutines. They are cheap, cooperatively scheduled threads of execution, used to implement operations such as timeouts, generators, and racing multiple backends. For goroutines to be suitable for ever more tasks, we not only need to keep the per-goroutine memory footprint minimal, but also to make sure people can start them with minimal configuration.

To achieve this, Go uses stack management that resembles other programming languages on the surface but differs considerably from them at the implementation level.

I. An introduction to thread stacks

Before looking at how Go handles stacks, let's first look at how a traditional language such as C manages its stacks.

When you start a C thread, the C standard library is responsible for allocating a block of memory to serve as that thread's stack. The library allocates the block, tells the kernel where it is, and lets the kernel handle the thread's execution. The problem comes when this memory is not enough; consider the following function:

int a(int m, int n) {
    if (m == 0) {
        return n + 1;
    } else if (m > 0 && n == 0) {
        return a(m - 1, 1);
    } else {
        return a(m - 1, a(m, n - 1));
    }
}

This function is deeply recursive, and executing a(4, 5) will exhaust all the stack memory. To work around this, you can increase the size of the block the standard library allocates for thread stacks. But raising the stack size across the board means every thread's stack consumes more memory, even in threads that do not recurse heavily. You can run out of memory even though your program never actually touches most of that stack space.

An alternative workaround is to decide the stack size separately for each thread. Now you are stuck estimating the stack requirement of every thread, which makes creating a thread far harder than it should be. Determining how much stack memory an arbitrary thread will need is infeasible in general, and difficult even in ordinary cases.

II. How Go copes with this problem

Instead of giving each goroutine a fixed-size stack, the Go runtime tries to give goroutines the stack space they need on demand, freeing the programmer from deciding stack sizes. The Go core team is currently switching from one such scheme to another; here I'll try to explain the old scheme, its drawbacks, the new scheme, and why the change is being made.

III. Segmented stacks

Segmented stacks were the first scheme Go used to handle stacks. When a goroutine is created, the runtime allocates an 8 KB block of memory as its stack, and the goroutine does its work on that stack.

The problem comes when we run out of this 8 KB of stack space. To handle it, each Go function has a small piece of code at its entry point, called the prologue, which checks whether the allocated stack space has been exhausted; if it has, the prologue calls the morestack function.

The morestack function allocates a new block of memory to serve as stack space, then writes various data about the stack, including the address of the previous stack, into a struct at the bottom of the new stack. Now that we have a new stack segment, we restart the goroutine, beginning from the function that ran out of stack space (foobar in the original post). This is called a "stack split".

The stack just after a split looks like this (see the diagram in the original post):

At the bottom of the new stack, we insert a stack entry for a function called lessstack. We never actually call this function; it is set up so that when the function that caused the split (foobar) returns, we return into the lessstack frame. lessstack then looks up the struct at the bottom of the stack and adjusts the stack pointer so that execution returns into the previous segment. After that, the new stack segment can be freed and the program continues.

IV. Problems with segmented stacks

Segmented stacks give us stacks that grow and shrink on demand. Programmers don't have to worry about stack size, starting a new goroutine is cheap, and programmers don't need to know how large the stack will grow.

This is how Go has handled stack growth up to now, but the approach has a flaw: shrinking the stack is a relatively expensive operation. If you hit a stack split inside a loop, you will really feel it. A function grows the stack (splits it), returns, and frees the new segment. If this happens on every iteration of a loop, you pay a large performance price.

This is the so-called "hot split" problem, and it is the main reason the Go core team is replacing segmented stacks with a new stack management scheme: stack copying.

V. Stack copying

Stack copying starts out like segmented stacks: the goroutine runs on its stack, and when it runs out of space it hits the same stack-overflow check as in the old scheme. But instead of keeping a link back to the previous stack, the new scheme creates a new stack twice the size of the old one and copies the old stack's contents into it. This means that when the stack in use shrinks back to its original size, the runtime has nothing to do: shrinking is a free operation. And when the stack grows again, the runtime also does nothing; it simply reuses the free space allocated earlier.

VI. How stacks are copied

Copying a stack sounds simple, but it is actually tricky. Variables on a Go stack can have their address taken, so once pointers to stack variables exist, you cannot move the stack at will: after a move, any pointer into the original stack becomes invalid.

Fortunately, only pointers allocated on the stack can point to addresses on the stack. This is essential for memory safety; otherwise a program could access addresses on a stack that is no longer in use.

Since the garbage collector needs to know the locations of pointers anyway, we already know which parts of the stack are pointers. When we move the stack, we can update every pointer into it to point at its new target, and all the relevant pointers are taken care of.

Because we use garbage-collection information to assist the stack copy, every function that can appear on the stack must have this information, and that is not always the case. Large parts of the Go runtime are written in C, and many runtime calls have no pointer information available, so their stacks cannot be copied. When this happens, we fall back to segmented stacks and accept their cost.

This is why the runtime developers are massively rewriting the Go runtime in Go. The code that cannot reasonably be rewritten in Go, such as the core of the scheduler and the garbage collector, will run on special stacks whose sizes the runtime developers determine individually.

Besides making stack copying possible, this approach will also let us implement features such as concurrent garbage collection in the future.

VII. About virtual memory

Another way to handle stacks is to allocate large segments of virtual memory. Since physical memory is committed only when it is actually touched, it may seem that you can allocate a huge segment and let the operating system sort it out. But this approach has several problems:

First, a 32-bit system has only 4 GB of virtual address space, of which an application can typically use only about 3 GB. Since running millions of goroutines at once is not uncommon, you would likely run out of virtual memory even if each goroutine's stack were only 8 KB.

Second, while we can allocate huge amounts of memory on 64-bit systems, doing so relies on overcommit. Overcommit means allocating more memory than physically exists and relying on the operating system to back it with physical memory only when it is needed. But enabling overcommit carries risks: because processes can allocate more memory than the machine physically has, the operating system must find backing memory when those processes actually use it. It may do so by paging memory out to disk, which often adds unpredictable latency to request handling. It is for this reason that some newer systems have turned overcommit off.

VIII. Conclusion

The Go team has put a lot of effort into making goroutines cheap, fast, and suitable for any task, and stack management is only a small part of that. If you want to learn more about the details of stack copying, you can refer to its design document; if you want to learn more about the runtime rewrite, see the relevant mailing-list discussion.

© Bigwhite. All rights reserved.
