Where's the stack?

Source: Internet
Author: User

What are heaps and stacks, and where are they?

Original link: http://www.kuqin.com/shuoit/20140815/341692.html

Problem description

Programming-language books often explain that value types are created on the stack and reference types are created on the heap, but they never really explain what the heap and the stack are. I only have experience with high-level languages and have not seen a clear explanation of this. I mean, I understand what a stack is as a data structure, but what are the heap and the stack here, and where are they (in terms of the computer's actual physical memory)?

    1. Are they controlled by the operating system (OS) or by the language runtime?
    2. What is their scope?
    3. What determines their size?
    4. Which one is faster?
Answer one

The stack is memory set aside for a thread of execution. When a function is called, a block is reserved on top of the stack for local variables and some bookkeeping data. When the function returns, the block becomes unused and can be reused at the next function call. The stack is always reserved in last-in, first-out (LIFO) order: the most recently reserved block is always the first to be freed. This makes it simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.

The heap is memory set aside for dynamic allocation. Unlike the stack, there is no enforced pattern for allocating and deallocating blocks from the heap; you can allocate and free a block at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time, and there are many custom heap allocators available to tune heap performance for different usage patterns.

Each thread gets a stack, while there is typically only one heap for the application (although it is not uncommon to have multiple heaps for different types of allocation).

To answer your questions directly:

    1. The operating system (OS) allocates the stack for each system-level thread when the thread is created. The heap is typically allocated by the language runtime, which in turn calls the OS, when the application starts.
    2. The stack is attached to a thread, so when the thread ends the stack is reclaimed. The heap is typically allocated by the runtime at application startup and is reclaimed when the application (process) exits.
    3. The size of the stack is set when a thread is created. The size of the heap is set at application startup, but it can grow as space is needed (the allocator requests more memory from the operating system).
    4. The stack is faster because its access pattern makes it trivial to allocate and deallocate memory (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. Also, each byte on the stack tends to be reused very frequently, which means it tends to be mapped to the processor's cache, making it very fast (translator's note: the principle of locality).

Answer two

Stack:

    1. Stored in the computer's RAM, just like the heap.
    2. Variables created on the stack go out of scope and are automatically deallocated.
    3. Much faster to allocate on than the heap.
    4. Implemented with an actual stack data structure.
    5. Stores local data and return addresses, and is used for parameter passing.
    6. Can have a stack overflow when too much of the stack is used (mostly from infinite recursion or very large allocations).
    7. Data created on the stack can be used without pointers.
    8. You would use the stack if you know exactly how much data you need to allocate before compile time and it is not too big.
    9. Usually has a maximum size already determined when your program starts.

Heap:

    1. Stored in the computer's RAM, just like the stack.
    2. In C++, variables on the heap must be destroyed manually and never fall out of scope. The data is freed with delete, delete[], or free.
    3. Slower to allocate on than the stack.
    4. Used on demand by the program to allocate blocks of data.
    5. Can have fragmentation when there are a lot of allocations and deallocations.
    6. In C++, data created on the heap is accessed through pointers and allocated with new or malloc.
    7. Allocation can fail if too big a buffer is requested.
    8. You would use the heap if you don't know exactly how much data you will need at run time, or if you need to allocate a lot of memory.
    9. Responsible for memory leaks.

Example:

int foo()
{
    char *pBuffer; // <-- nothing allocated yet (excluding the pointer itself, which is allocated here on the stack)
    bool b = true; // allocated on the stack
    if (b)
    {
        // Create 500 bytes on the stack
        char buffer[500];

        // Create 500 bytes on the heap
        pBuffer = new char[500];
    } // <-- buffer is deallocated here; pBuffer is not
}     // <-- oops, there is a memory leak; delete[] pBuffer should have been called
Answer three

The heap and the stack are general terms for two ways of allocating memory. They can be implemented in many different ways, but the implementations conform to a few basic concepts:

1. In a stack, each new data item is placed on top of the others, and when you remove an item you can only take the topmost one (without toppling the whole pile).

2. In a heap, there is no fixed order in which items are placed. You can insert and remove them in any order, because there is no notion of a "top" item.

(An image in the original answer illustrated how memory is allocated in a heap versus a stack.)

Are they controlled by the operating system (OS) or by the language runtime?

As mentioned, heap and stack are general terms and can be implemented in many ways. Computer programs typically have a stack called the call stack, which stores information relevant to the current function call (for example, the return address of the caller and local variables), because function calls need to return to their caller. The stack grows and shrinks to hold this information. In reality, the stack is not controlled by the runtime alone; it is determined by the programming language, the operating system, and even the system architecture.

A heap is a general term for memory that is allocated dynamically and randomly, i.e., out of order. The memory is usually allocated by the OS, with the application calling API functions to perform the allocation. There is some extra overhead involved in managing dynamically allocated memory, but this is handled by the operating system.

What is their scope?

The call stack is such a low-level concept that it has little to do with "scope" in the programming sense. If you disassemble some code, you will see pointer-style references to portions of the stack. As far as higher-level languages are concerned, the language applies its own scope rules: once a function returns, its local variables are freed directly. Your programming language builds on this behavior.

The heap is likewise hard to pin down. Its scope is limited by the operating system, but your programming language probably adds rules of its own about what counts as heap scope within the application. The processor architecture and the operating system use virtual addresses, which the processor translates into actual physical addresses, along with page faults and so on; they keep track of which pages belong to which application. You never really need to worry about this, though, because you only allocate and free memory through your programming language, plus some error checking (for why an allocation or a free failed).

What determines their size?

Again, it depends on the language, compiler, operating system, and architecture. The stack is usually allocated in advance, because by definition it must be a contiguous block of memory. The language's compiler or the operating system determines its size. You don't store huge chunks of data on the stack, so it will be big enough that it should never overflow, except in cases of unwanted infinite recursion (hence, "stack overflow") or other unusual programming decisions.

The heap is a general term for anything that can be dynamically allocated. Depending on how you look at it, its size is constantly changing. In modern processors and operating systems it is highly abstracted anyway, so you normally don't need to worry about its actual size, unless you use memory you haven't allocated yet or memory you have already freed.

Which one is faster?

The stack is faster because all free memory is always contiguous, so there is no need to maintain a list of free memory blocks: just a single pointer to the current top of the stack. Compilers usually keep this pointer in a special, fast register. More importantly, subsequent operations on the stack are usually concentrated in a very nearby area of memory, which lends itself to high-speed access by the processor's caches (translator's note: the principle of locality).

Answer four

The answer to your question is implementation-specific and can vary across compilers and processor architectures. Here is a simplified explanation:

    1. Both the stack and the heap are memory areas obtained from the underlying operating system.
    2. In a multithreaded environment, each thread gets its own completely independent stack, but the threads share the heap. Concurrent access must be controlled on the heap; it is not an issue on the stack.

Heap:

    1. The heap contains a linked list of used and free blocks. New allocations on the heap (by new or malloc) are satisfied by finding a suitable free block. This operation updates the list of blocks on the heap. This meta-information is also stored on the heap, often in a small area just in front of each block.
    2. As the heap grows, new blocks are often allocated from lower addresses toward higher addresses, so you can think of the heap as a pile of memory that grows in size as memory is allocated. If the requested size is small, the allocator often obtains more memory than requested from the underlying OS.
    3. Many small allocations and deallocations can leave the heap in a state where lots of small free blocks are interspersed between the used blocks. A request for a large allocation can then fail because none of the free blocks is large enough, even though the combined size of the free blocks would be sufficient. This is called heap fragmentation.
    4. When a used block that is adjacent to a free block is deallocated, the new free block may be merged with the adjacent free block to create one larger free block, which effectively reduces heap fragmentation.

Stack:

    1. The stack often works in close tandem with a special CPU register, the stack pointer SP (translator's note: familiar to anyone who knows assembly). Initially the SP points to the top of the stack (the highest address of the stack area).
    2. The CPU has special instructions for pushing values onto the stack and popping them off. Each push stores a value at the current location of the SP and decreases the SP (extending toward lower addresses); each pop retrieves a value and increases the SP. The values stored and retrieved are the values of CPU registers.
    3. When a function is called, the CPU uses special instructions that push the current instruction pointer, IP (translator's note: the register that records the address of the code the CPU is executing), onto the stack. The CPU then sets the IP to the address of the called function to make the call. When the function returns, the old IP is popped off, and execution continues with the code just after the call.
    4. When a function is entered, the SP is decreased to reserve enough space on the stack for the function's local (automatic) variables. If a function has one 32-bit local variable, four bytes are set aside on the stack. When the function returns, the SP is moved back to its original position, freeing the space.
    5. If a function has parameters, these are pushed onto the stack before the call. The code in the function then locates the parameters and accesses them relative to the current SP.
    6. Nested function calls work like a charm: each new call pushes its own function parameters, return address, space for local variables, and activation record onto the stack. When the functions return, the records are unwound in the correct order.
    7. The stack is a bounded block of memory: deeply nested function calls or allocating too much space for local variables can cause a stack overflow. When the memory area reserved for the stack is exhausted and a write continues past it (toward lower addresses), a CPU exception is triggered. The language's runtime then translates this exception into some kind of "stack overflow" error. (Translator's note: different languages report the error differently, hence "translated by the language runtime".)

* Can a function be allocated on the heap instead of on the stack?

No. The activation record of a function (i.e., its local or automatic variables) is allocated on the stack, which is used not only to store those variables but also to keep track of nested function calls.

How the heap is managed really depends on the runtime environment: C uses malloc and C++ uses new, but many other languages have garbage collection.

The stack, however, is a lower-level feature tightly bound to the processor architecture. Growing the heap when there is not enough space is not too hard, since it can be handled inside the library call that allocates memory. Growing the stack, however, is generally impossible: by the time the stack overflows, the executing thread has already been shut down by the operating system, so it is too late.

