About the Linux thread stack and TLS

Note:

A. This article briefly describes the implementation of the Linux NPTL thread stack and the principle of thread-local storage. Experimental environment: Linux kernel 2.6.32, glibc 2.12.1, an Ubuntu release, on a 32-bit x86 system.
B. There are many topics around Linux NPTL threads. This article picks one of them: the private address space of each thread, namely the thread stack and TLS. "Private" here is not truly private, since threads by definition share an address space; it simply means that other threads will not touch the data in these regions through normal means.
I. Thread Stack

Although Linux unifies threads and processes into task_struct without distinction, there are still differences in how their address spaces handle the stack. For a Linux process, i.e. the main thread, the stack is set up during fork: the parent's stack pointer is copied, and the pages are then copied on write while the stack grows dynamically. This can be seen from the do_fork arguments in sys_fork:

int sys_fork(struct pt_regs *regs)
{
    return do_fork(SIGCHLD, regs->sp, regs, 0, NULL, NULL);
}

What does dynamic growth mean? Note that the stack_size argument passed for the child is 0. Since the parent's sp is copied here, and all of the parent's VMAs are copied later in dup_mm, the child's stack VMA still carries these flags:

#define VM_STACK_FLAGS    (VM_GROWSDOWN | VM_STACK_DEFAULT_FLAGS | VM_ACCOUNT)

This means that a VMA with these flags (and the stack lives in a VMA!) can have its size grown dynamically, which can be seen in do_page_fault:

if (likely(vma->vm_start <= address))
    goto good_area;
if (unlikely(!(vma->vm_flags & VM_GROWSDOWN))) {
    bad_area(regs, error_code, address);
    return;
}

Very clear.
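For completeness, right after that check the fault handler actually grows the stack VMA. Schematically (simplified from the 2.6.32 fault path; the real code also sanity-checks the faulting address against the user stack pointer before growing):

/* simplified continuation of the do_page_fault path shown above */
if (unlikely(expand_stack(vma, address))) {   /* grow the VM_GROWSDOWN vma downward */
    bad_area(regs, error_code, address);
    return;
}
good_area:
    /* ... handle_mm_fault() then maps in the new page ... */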
However, for the sub-threads created by the main thread, the stack is no longer set up by the kernel and grown on demand; it is allocated in advance with the mmap system call and does not carry the VM_GROWSDOWN flag (presumably the kernel may support that later!). This can be seen in the allocate_stack function in glibc's nptl/allocatestack.c:

mem = mmap (NULL, size, prot,        MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);

How the size parameter for this call is computed is fairly involved. You can specify the stack size yourself or use the default, and the default is what is generally used (the sketch at the end of this section shows how to set it explicitly); those details are not what matters. What matters is that such a stack cannot grow dynamically: once it is used up, it is gone. This is different from the fork that creates a process. After glibc has obtained the stack through mmap, the lower layer issues the sys_clone system call:

int sys_clone(struct pt_regs *regs)
{
    unsigned long clone_flags;
    unsigned long newsp;
    int __user *parent_tidptr, *child_tidptr;

    clone_flags = regs->bx;
    /* obtain the stack pointer of the thread, i.e. the area obtained by mmap */
    newsp = regs->cx;
    parent_tidptr = (int __user *)regs->dx;
    child_tidptr = (int __user *)regs->di;
    if (!newsp)
        newsp = regs->sp;
    return do_fork(clone_flags, newsp, regs, 0, parent_tidptr, child_tidptr);
}

So the stack of a sub-thread is really just a memory area mapped inside the process's address space. In principle it is private to the thread; however, since every thread created in the same process shares the creator's address space, including all of its VMAs, other threads can still reach this memory if they are handed a pointer into it, as the sketch below illustrates, so keep that in mind.
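A small illustration of both points, as a sketch of my own rather than code from the article's environment: the size of the mmap'ed stack is fixed when the thread is created (here via pthread_attr_setstacksize), and another thread can still read a "stack-local" variable if it is given a pointer into that stack. Compile with -pthread.

#include <stdio.h>
#include <pthread.h>

static int *stack_addr;                 /* will point into the child's stack */
static pthread_barrier_t barrier;

static void *worker(void *arg)
{
    int local = 42;                     /* lives on the child's mmap'ed stack */
    stack_addr = &local;
    pthread_barrier_wait(&barrier);     /* let main read it */
    pthread_barrier_wait(&barrier);     /* keep this frame alive until main is done */
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;
    size_t stacksize;

    pthread_attr_init(&attr);
    pthread_attr_getstacksize(&attr, &stacksize);
    printf("default stack size: %zu bytes\n", stacksize);
    pthread_attr_setstacksize(&attr, 1024 * 1024);   /* fix the thread stack at 1 MB */

    pthread_barrier_init(&barrier, NULL, 2);
    pthread_create(&tid, &attr, worker, NULL);

    pthread_barrier_wait(&barrier);
    /* same address space: the main thread can read the child's stack variable */
    printf("child stack variable at %p holds %d\n", (void *)stack_addr, *stack_addr);
    pthread_barrier_wait(&barrier);

    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    pthread_barrier_destroy(&barrier);
    return 0;
}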
II. Thread-Local Storage (TLS)

Linux glibc uses the GS register to access TLS. That is, the segment GS selects points at the thread's TEB (to borrow the Windows term), i.e. its TLS block. The benefit is that information kept in TLS can be accessed efficiently without issuing a system call every time; of course, a system-call based approach would also work. This is possible because Intel imposes very loose rules on what the segment registers are for, so GS, FS and the other segment registers can be used for almost anything, and in particular for direct access to TLS. When a thread starts, glibc points the GS register at the 6th GDT segment descriptor and relies on the segmentation mechanism for TLS addressing, so subsequent access to TLS information is as efficient as access to ordinary data.
When a thread starts, sys_set_thread_area is used to set the thread's TLS information; all of that information is supplied by glibc:

asmlinkage int sys_set_thread_area(struct user_desc __user *u_info)
{
    int ret = do_set_thread_area(current, -1, u_info, 1);
    asmlinkage_protect(1, ret, u_info);
    return ret;
}

int do_set_thread_area(struct task_struct *p, int idx,
                       struct user_desc __user *u_info,
                       int can_allocate)
{
    struct user_desc info;

    if (copy_from_user(&info, u_info, sizeof(info)))
        return -EFAULT;

    if (idx == -1)
        idx = info.entry_number;

    /*
     * index -1 means the kernel should try to find and
     * allocate an empty descriptor:
     */
    if (idx == -1 && can_allocate) {
        idx = get_free_idx();
        if (idx < 0)
            return idx;
        if (put_user(idx, &u_info->entry_number))
            return -EFAULT;
    }

    if (idx < GDT_ENTRY_TLS_MIN || idx > GDT_ENTRY_TLS_MAX)
        return -EINVAL;

    set_tls_desc(p, idx, &info, 1);

    return 0;
}

fill_ldt sets the base address, segment limit, DPL and other fields of the 6th segment descriptor in the GDT; the information comes from the u_info parameter passed to the sys_set_thread_area system call. In essence, what the 6th GDT entry describes is just a piece of memory used to hold the TLS data. That memory is in fact requested from the main thread's heap space via brk, mmap and similar calls, and sys_set_thread_area is then invoked to turn it into the private space of the current thread. If the main thread or any other thread wishes, it can still reach this space by other means.
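To make this concrete, here is a small experiment of my own (not taken from glibc or the kernel): it asks the kernel for a free TLS slot with set_thread_area, points the new descriptor at an ordinary array, and reads the array back through GS. It assumes a 32-bit x86 build and the struct user_desc layout from <asm/ldt.h>; error handling is minimal.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/ldt.h>                    /* struct user_desc */

static unsigned int tls_area[256];

int main(void)
{
    struct user_desc desc;
    unsigned short old_gs, new_gs;
    unsigned int value;

    tls_area[0] = 0xdeadbeef;

    memset(&desc, 0, sizeof(desc));
    desc.entry_number = -1;                     /* let the kernel pick a free TLS entry */
    desc.base_addr    = (unsigned int)tls_area; /* segment base = our buffer */
    desc.limit        = sizeof(tls_area) - 1;
    desc.seg_32bit    = 1;
    desc.useable      = 1;

    if (syscall(SYS_set_thread_area, &desc) != 0) {
        perror("set_thread_area");
        return 1;
    }

    new_gs = desc.entry_number * 8 + 3;         /* selector: index, GDT, RPL 3 */

    asm volatile ("movw %%gs, %0" : "=r" (old_gs));
    asm volatile ("movw %0, %%gs" : : "r" (new_gs));
    asm volatile ("movl %%gs:0, %0" : "=r" (value));
    asm volatile ("movw %0, %%gs" : : "r" (old_gs));   /* restore glibc's gs at once */

    printf("read through the new gs: %#x (expected 0xdeadbeef)\n", value);
    printf("kernel gave us GDT entry %d, selector %#x\n", desc.entry_number, new_gs);
    return 0;
}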

Having understood the general principles, let's look at how everything fits together. First, look at how the GDT segments are defined in the Linux kernel, as shown in the figure:
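The relevant definitions in arch/x86/include/asm/segment.h boil down to the following (comment paraphrased):

/*
 * GDT layout on 32-bit x86 (excerpt):
 *   6 - TLS segment #1    <-- the entry glibc's GS selector refers to
 *   7 - TLS segment #2
 *   8 - TLS segment #3
 */
#define GDT_ENTRY_TLS_ENTRIES  3
#define GDT_ENTRY_TLS_MIN      6
#define GDT_ENTRY_TLS_MAX      (GDT_ENTRY_TLS_MIN + GDT_ENTRY_TLS_ENTRIES - 1)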

We find that the 6th entry is used to record TLS data. To confirm this, I wrote a simple program and used GDB to inspect the value of the GS register; at this point we already know that the segment descriptor selected by GS is the one recording TLS data, as shown in the figure:


We can see that the GS value in the red circle is 0x33. How should this 0x33 be interpreted? See the decomposition:
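An x86 segment selector packs three fields: a 13-bit descriptor index, a table indicator bit (0 = GDT, 1 = LDT), and a 2-bit requested privilege level. Decomposing 0x33:

0x33 = 0000 0000 0011 0011b
    index = 0x33 >> 3       = 6   -> GDT entry 6, the first TLS slot
    TI    = (0x33 >> 2) & 1 = 0   -> descriptor lives in the GDT, not the LDT
    RPL   = 0x33 & 3        = 3   -> ring 3, user mode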

This confirms that the segment GS points to is the one holding TLS data. In glibc, the GS register is pointed at the 6th segment during initialization:

So, can we access TLS data directly through the GS register? The answer is of course yes; glibc itself does exactly that, and the encapsulated pthread interface is just more convenient to use. If you want to understand why this works, or to play with it yourself, note that my environment is Ubuntu with glibc 2.12.1, and that the TLS header layout may differ between glibc versions, so be sure to check the source of the exact version you are debugging, otherwise it will drive you crazy. I wrote test_gs.c with the following code:

#include <stdlib.h>
#include <stdio.h>
#include <malloc.h>
#include <string.h>
#include <pthread.h>

int main(int argc, char **argv)
{
    int a = 10, b = 0;   /* b will hold the base address of the segment indicated by GS */
    /* set up three TLS variables: the first two use heap memory, the last one does not */
    static pthread_key_t thread_key1;
    static pthread_key_t thread_key2;
    static pthread_key_t thread_key3;
    char *addr1 = (char *)malloc(5);
    char *addr2 = (char *)malloc(5);
    memset(addr1, 0, 5);
    memset(addr2, 0, 5);
    strcpy(addr1, "aaaa");
    strcpy(addr2, "BBBB");
    pthread_key_create(&thread_key1, NULL);
    pthread_key_create(&thread_key2, NULL);
    pthread_key_create(&thread_key3, NULL);
    pthread_setspecific(thread_key1, addr1);
    pthread_setspecific(thread_key2, addr2);
    pthread_setspecific(thread_key3, "1111111111");
    /* fetch the base of the segment GS indicates, i.e. the TLS address, via inline assembly */
    asm volatile ("movl %%gs:0, %0;" : "=r" (b) /* output */);
    printf("OK\n");
    return 0;
}

The point of this code is that I can reach the TLS variables through the GS register. For convenience I did not write code to fetch them, but confirmed it with GDB; fetching the TLS variables in code has exactly the same effect as viewing the memory in GDB, and I personally find the debugging approach better for understanding.
When debugging, once we have GS we have the TLS base address. Then, based on this version's TLS structure, we can work out where the TLS variables are stored, inspect the memory near the TLS address, and verify that the TLS variables really are there by comparing addresses. Before actually doing that, let's first look at the TLS data structure of glibc 2.12.1, as shown in the figure:
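For orientation, the head of the thread descriptor in this era of glibc (sysdeps/i386/tls.h and nptl/descr.h) has roughly the following shape. This is a schematic from my reading of the sources, not a compilable excerpt; exact field order and offsets vary between versions, which is exactly why you should check the version you debug:

/* schematic only; offsets depend on the glibc version */
typedef struct {
    void *tcb;              /* %gs:0 -- points back at the descriptor itself */
    dtv_t *dtv;             /* dynamic thread vector, for dlopen'ed TLS */
    void *self;
    int multiple_threads;
    uintptr_t sysinfo;
    uintptr_t stack_guard;
    uintptr_t pointer_guard;
    /* ... */
} tcbhead_t;

struct pthread {
    union {
        tcbhead_t header;
        void *__padding[24];
    };
    list_t list;
    pid_t tid;
    pid_t pid;
    /* ... */
    struct pthread_key_data {
        uintptr_t seq;
        void *data;         /* the value stored by pthread_setspecific() */
    } specific_1stblock[PTHREAD_KEY_2NDLEVEL_SIZE];
    /* ... */
};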

Note: since we have no intention of hacking TLS in depth, we only need to know where the variables can be found; so we only need the sizes of certain fields, and for now there is no need to understand their meaning or design rationale.
We find that the TLS variable region should start at offset 35*4 bytes. Is that really so? Let's look at the debugging results. Note that the breakpoint must be set after the asm statement so that b already holds its value; of course, you could also rearrange the code above and put the inline assembly at the very beginning. The GDB commands are simple, as shown below:
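A session along the following lines is all it takes (commands only; the breakpoint location is a placeholder and the addresses you see will of course be your own):

(gdb) break test_gs.c:<the line right after the asm statement>
(gdb) run
(gdb) info registers gs          # expect 0x33
(gdb) print/x b                  # the TLS base read from %gs:0
(gdb) x/48wx b                   # dump memory starting at the TLS base
(gdb) print addr1                # compare with the pointers stored via pthread_setspecific
(gdb) print addr2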

The result is clear. One small question remains at the end: what happens when threads are switched?
On Windows, the thread's TEB is essentially fixed, and Linux works the same way: you only need to read GS to reach the TCB of the current thread. In other words, GS never changes; it is always 0x33, always pointing at the 6th GDT entry. What changes is the content of that 6th GDT entry: whenever a process or thread switch happens, the entry must be reloaded with the TLS info of the thread about to run. This is done in the switch_to path during the switch:

load_TLS(next, cpu);

Each task_struct contains a thread_struct, and the thread's TLS descriptor metadata is stored in the tls_array array of that thread_struct:

static inline void native_load_tls(struct thread_struct *t, unsigned int cpu)
{
    unsigned int i;
    struct desc_struct *gdt = get_cpu_gdt_table(cpu);

    for (i = 0; i < GDT_ENTRY_TLS_ENTRIES; i++)
        gdt[GDT_ENTRY_TLS_MIN + i] = t->tls_array[i];
}
Note: what else needs to be said about TLS?

Besides the TLS variables created at run time with the pthread API, there are also so-called static TLS variables. These TLS elements are generated ahead of time at compile time; the common ones are:
1. Variables you declare yourself with the __thread specifier;
2. Some library-level predefined variables, such as errno.
Where are these variables stored? The designers wisely put them in the same space as dynamic TLS, i.e. the region that the GS register indicates. Frankly, had I designed it, I would have done the same, and so would you. The advantage of this design is that access is convenient for both dynamic and static TLS variables, and dynamic TLS stays easy to manage.
The initial data sits in an initialized data section of the ELF file, but at link time or during thread initialization it is relocated into the static TLS space. In my experimental environment, if I define a variable:

__thread int test = 123;

Debugging shows that it sits four bytes below the TLS segment address indicated by the GS register, while errno sits 14*4 bytes below the __thread variable. For exactly how this space is laid out you can read glibc's dl-reloc.c, dl-tls.c and related files, but I don't think that is worthwhile here, because it drags in a lot of knowledge about compilation, linking, relocation, ELF and so on; if you don't want to get lost in the depths right away, understanding the principle is enough. I really don't have the time to write it all up anyway; when I get home I have to watch the kids, go shopping and do housework..... The figure below shows the overall picture after relocation:
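To check these offsets in your own environment, a sketch along these lines works (again assuming 32-bit x86 and reading the TLS base from %gs:0; the exact offsets printed depend on your glibc version):

#include <stdio.h>
#include <errno.h>

__thread int test = 123;                /* a static TLS variable */

int main(void)
{
    unsigned long tls_base;

    /* %gs:0 holds the thread pointer, i.e. the address of the TLS block itself */
    asm volatile ("movl %%gs:0, %0" : "=r" (tls_base));

    printf("TLS base : %#lx\n", tls_base);
    printf("&test    : %p (offset %ld from the TLS base)\n",
           (void *) &test, (long) ((unsigned long) &test - tls_base));
    printf("&errno   : %p (offset %ld from the TLS base)\n",
           (void *) &errno, (long) ((unsigned long) &errno - tls_base));
    return 0;
}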
