Linux process address space, kernel stack, user stack, kernel thread


Address Space:

On a 32-bit Linux system, the process address space is 4 GB, consisting of 1 GB of kernel address space and 3 GB of user address space.
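As a quick illustration of that 3 GB / 1 GB split, here is a small user-space sketch of my own (it assumes the default PAGE_OFFSET of 0xC0000000; the split is a build-time configuration choice):

    /* Sketch (not from the article): the default 32-bit x86 split puts the
       kernel at PAGE_OFFSET = 0xC0000000, leaving 3 GB below for user space. */
    #include <stdio.h>

    #define PAGE_OFFSET 0xC0000000UL   /* start of kernel space (1 GB above)  */
    #define TASK_SIZE   PAGE_OFFSET    /* user space ends here  (3 GB below)  */

    int main(void)
    {
        printf("user space:   0x00000000 - 0x%08lx (%lu MB)\n",
               TASK_SIZE - 1, TASK_SIZE >> 20);
        printf("kernel space: 0x%08lx - 0xffffffff (%lu MB)\n",
               PAGE_OFFSET, (0xFFFFFFFFUL - PAGE_OFFSET + 1) >> 20);
        return 0;
    }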

Kernel stack:

The process control block task_struct and the kernel stack are kept together in a block of two pages (8 KB); in later kernels a small thread_info takes task_struct's place in that block.
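A simplified sketch of that layout, loosely modeled on older 32-bit x86 kernels (the struct fields below are trimmed stand-ins, not the real definitions; 2.4-era kernels kept task_struct itself in the block, 2.6 kept a small thread_info there instead):

    /* Simplified sketch, not real kernel source: the per-task kernel stack
     * shares a two-page (8 KB) block with thread_info (or, in 2.4-era
     * kernels, with task_struct itself). thread_info sits at the low end,
     * the stack grows down from the high end. */
    #define THREAD_SIZE (2 * 4096)          /* two 4 KB pages = 8 KB */

    struct task_struct;                     /* the full PCB lives elsewhere */

    struct thread_info {                    /* trimmed stand-in for the real struct */
        struct task_struct *task;
        int cpu;
    };

    union thread_union {
        struct thread_info thread_info;                   /* bottom of the block   */
        unsigned long stack[THREAD_SIZE / sizeof(long)];  /* stack fills the rest  */
    };

    /* The kernel finds the current thread_info by masking the kernel stack
     * pointer down to the 8 KB boundary of this block (i386-style). */
    static inline struct thread_info *current_thread_info(void)
    {
        unsigned long esp;
        __asm__("movl %%esp, %0" : "=r"(esp));
        return (struct thread_info *)(esp & ~((unsigned long)THREAD_SIZE - 1));
    }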

Why does every process have its own kernel stack?


Reference (http://hi.baidu.com/iruler/blog/item/0c3363f377ccc5c90a46e023.html):

"Assume that a process enters kernel mode through a system call and uses a single global kernel stack. If the kernel is preemptible, a switch can occur and another process starts to run. If that process also enters the kernel through a system call, it will use the same global kernel stack and destroy the kernel-stack contents of the previous process. If each process has its own independent kernel stack, this situation is avoided."

Kernel thread:

A kernel thread is a scheduling unit with its own independent kernel stack; it can be scheduled like an ordinary process but executes only in kernel space.
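A minimal sketch of creating one, using the standard kthread_run()/kthread_should_stop() helpers (the module itself is an assumed example, not from the article):

    /* Assumed example module: a kernel thread gets its own kernel stack and
     * is scheduled like any other task, but never runs in user space. */
    #include <linux/module.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>
    #include <linux/err.h>

    static struct task_struct *worker;

    static int worker_fn(void *data)
    {
        while (!kthread_should_stop()) {
            pr_info("kthread demo: running on my own kernel stack\n");
            set_current_state(TASK_INTERRUPTIBLE);
            schedule_timeout(HZ);   /* sleep ~1 s, wakeable by kthread_stop() */
        }
        return 0;
    }

    static int __init kthread_demo_init(void)
    {
        worker = kthread_run(worker_fn, NULL, "kstack-demo");
        return IS_ERR(worker) ? PTR_ERR(worker) : 0;
    }

    static void __exit kthread_demo_exit(void)
    {
        kthread_stop(worker);
    }

    module_init(kthread_demo_init);
    module_exit(kthread_demo_exit);
    MODULE_LICENSE("GPL");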


User Stack:

Each thread has its own user stack, located by the SS and ESP registers (on 32-bit x86).
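A small user-space illustration of my own: each thread's local variables live on that thread's own user stack, which is easy to see by printing their addresses.

    /* Each thread's locals land on a different user stack. Build with -pthread. */
    #include <pthread.h>
    #include <stdio.h>

    static void *show_stack(void *name)
    {
        int local = 0;   /* lives on this thread's user stack */
        printf("%s: a stack local lives at %p\n", (char *)name, (void *)&local);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, show_stack, "thread 1");
        pthread_create(&t2, NULL, show_stack, "thread 2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        show_stack("main");   /* main thread has its own stack too */
        return 0;
    }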


=========================================================================

                        Process 1                 Process 2

Kernel code zone        kcode  (0xc0001000)       kcode  (0xc0001000)
Kernel stack            kstack (0xc000f000)       kstack (0xc001f000)
Kernel stack            kstack (0xc000d000)       kstack (0xc001d000)
...
Kernel data zone        kdata  (0xc0003000)       kdata  (0xc0003000)
-------------------------------------------------------------------------
User code area          ucode  (0x70001000)       ucode  (0x70001000)
User stack zone         ustack (0x7000f000)       ustack (0x7000f000)
User stack              ustack (0x7000d000)       ustack (0x7000d000)
...
User data zone          udata  (0x70003000)       udata  (0x70003000)

=========================================================================

Reasonable explanation:

The mapping page table for the 1 GB kernel space (256 entries * 4 MB) exists only once and is shared by all N processes (each process's own page table carries a copy of it: 256 kernel entries plus 768 user-space entries, 1024 entries in total, assuming 4 MB pages are used and everything is allocated at the start).

The user-space entries of each process's page table are different. For example, for the same virtual address 0x70001000, process 1 maps it to physical memory 0x2000 while process 2 maps it to 0x1000.
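A quick check of those entry counts (assuming 4 MB page-directory entries, as the explanation above does):

    /* Sanity check of the page-directory arithmetic above: 1024 entries of
     * 4 MB cover 4 GB; 256 cover the 1 GB kernel, 768 the 3 GB user space. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long entry_mb = 4;                    /* one 4 MB entry     */
        printf("total:  %4lu MB\n", 1024 * entry_mb);  /* 4096 MB = 4 GB     */
        printf("kernel: %4lu MB\n",  256 * entry_mb);  /* 1024 MB = 1 GB     */
        printf("user:   %4lu MB\n",  768 * entry_mb);  /* 3072 MB = 3 GB     */
        return 0;
    }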

The kernel-stack virtual addresses of different threads do not overlap:

Thread 1's kernel stack = 0xc000f000
Thread 2's kernel stack = 0xc000d000
Thread 3's kernel stack = 0xc001f000
Thread 4's kernel stack = 0xc001d000

...

Thinking 1:

If the kernel stack were not pre-allocated ("allocated" here means reserving a non-overlapping region of kernel space as each thread's stack, e.g. via a kmalloc-style call), then the first push onto the stack when entering kernel mode would raise a page fault, because the page backing the kernel stack is not yet mapped. Handling that fault to map the page would itself require pushing and popping parameters on a stack, but the kernel stack is not ready, so the exception nests and the system fails.


Thinking 2:

Inside the kernel, kmalloc can be used: it adds page-table entries and associates them with a block of physical memory, after which the region is ready to use.
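A minimal sketch of such an allocation (an assumed example module, not from the article); kmalloc() hands back kernel memory that is already mapped and physically contiguous:

    /* Assumed example: allocate kernel memory with kmalloc() in a module. */
    #include <linux/module.h>
    #include <linux/slab.h>

    static void *buf;

    static int __init kmalloc_demo_init(void)
    {
        buf = kmalloc(4096, GFP_KERNEL);   /* mapped, physically contiguous */
        if (!buf)
            return -ENOMEM;
        pr_info("kmalloc demo: buffer at %p\n", buf);
        return 0;
    }

    static void __exit kmalloc_demo_exit(void)
    {
        kfree(buf);
    }

    module_init(kmalloc_demo_init);
    module_exit(kmalloc_demo_exit);
    MODULE_LICENSE("GPL");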


Thinking 3:

If you want two processes to share data at virtual address 0x80001000, add an entry (0x80001000 -> 0x3000) to the page tables of both P1 and P2 (a user-space sketch of this idea follows at the end of this section).

In addition, the kernel is naturally shared, so a copy of the kernel-space page-table entries is placed in every process's page table.

If someone insisted on being a maverick and built N separate kernel-space page tables pointing at N different blocks of physical memory, they would have to lay out N copies of the kernel code and data in those N physical regions (clearly not realistic).
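The sharing idea from this section can also be seen from user space (my own sketch, not the article's example): an anonymous MAP_SHARED mapping inherited across fork() makes the parent's and child's page-table entries point at the same physical pages.

    /* MAP_SHARED + fork(): two processes' page tables map the same physical
     * pages, so a write in one is visible in the other. */
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        int *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (shared == MAP_FAILED)
            return 1;

        if (fork() == 0) {       /* child: writes through its own page table */
            *shared = 42;
            _exit(0);
        }
        wait(NULL);
        printf("parent sees %d\n", *shared);   /* same physical page */
        munmap(shared, 4096);
        return 0;
    }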


Thinking 4:

The kernel stack is indeed not something that can be shared (it is a special memory area). So what should be done? Could it be fixed at one virtual address, like the user-space stack, with each process's page-table entry pointing to different physical memory? Obviously not: the kernel page-table entries are shared by all processes, so the same kernel virtual address must map to the same physical memory everywhere. The only option is to allocate N non-overlapping regions in kernel space.
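One way to picture that allocation (the helper names below are hypothetical; __get_free_pages()/free_pages() are real kernel interfaces): every successful call returns a distinct block, so the stacks cannot overlap.

    /* Sketch with hypothetical helper names: each task gets its own order-1
     * (two-page, 8 KB) block in kernel space to use as its kernel stack. */
    #include <linux/gfp.h>

    #define KSTACK_ORDER 1   /* 2^1 pages = 8 KB */

    static unsigned long alloc_kernel_stack(void)
    {
        /* Every successful call returns a different, non-overlapping block. */
        return __get_free_pages(GFP_KERNEL, KSTACK_ORDER);
    }

    static void free_kernel_stack(unsigned long stack)
    {
        free_pages(stack, KSTACK_ORDER);
    }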


Thinking 5:

When the kernel is entered, the initial "kernel stack" is not the task's real kernel stack. That stack is global, one per CPU, and is used only to transition to the real kernel stack. (http://bbs.pediy.com/archive/index.php?t-87518.html)


Thinking 6:

Scenario for independent kernel stacks. First consider what would happen if a single kernel stack were shared. Suppose there are two processes, A and B. A makes the system call read(1, ...), but the key it wants to read has not been pressed yet, so A blocks inside the kernel. The kernel then schedules B, and B also makes a system call. Later the key event arrives, A is woken up, and A continues to execute. But by entering the kernel, B has already clobbered the kernel stack frames A left behind, so A cannot return normally. From this analysis, A and B must each have their own kernel stack. This kernel stack used to be allocated together with task_struct: two pages were allocated in total, and whatever was not occupied by task_struct served as the kernel stack. On x86, the ring-0 stack pointer is kept in the ss0 and esp0 fields of the TSS. (http://bbs.chinaunix.net/thread-1930753-1-1.html)
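A simplified sketch of that last point (the structs and field names below are illustrative stand-ins, not copied from any kernel version): on a context switch the kernel repoints the TSS's ring-0 stack at the incoming task's kernel stack, so the next trap from user mode lands on the correct stack.

    /* Illustrative stand-in structs only: esp0/ss0 in the TSS select the
     * kernel stack used on the next trap from user mode. */
    struct tss_segment {
        unsigned short ss0;        /* ring-0 stack segment (kernel data segment) */
        unsigned long  esp0;       /* ring-0 stack pointer loaded on a trap      */
    };

    struct task {
        unsigned long kstack_top;  /* top of this task's two-page kernel stack */
    };

    /* Called as part of a context switch to 'next'. */
    static void switch_kernel_stack(struct tss_segment *tss, struct task *next)
    {
        tss->esp0 = next->kstack_top;  /* next trap from user mode uses this stack */
        /* ss0 is set once at boot to the kernel data segment and never changes. */
    }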


