Address mapping for Linux under the IA-32 architecture

Source: Internet
Author: User

1. Overview

2. Logical address to linear address

Mapping a logical address to a linear address is known as segment mapping in the IA-32 architecture. Starting from a logical address and a segment selector, the selector is used to locate a descriptor in the GDT, the segment base address is read from that descriptor, and the logical address is added to the base as an offset to produce the linear address. In detail, the mapping from logical address to linear address works as follows:

    • Determine which segment register to use according to the nature of the instruction;
    • Using the contents of that segment register, locate the corresponding segment descriptor; descriptors are stored in the GDT, LDT, TSS, or IDT, and the starting addresses of these tables are held in the GDTR, LDTR, TR, and IDTR registers;
    • Read the segment's base address from the descriptor;
    • Take the address given in the instruction as the displacement and compare it against the limit specified in the descriptor to check that it is not out of bounds;
    • Check, based on the nature of the instruction and the permission bits in the descriptor, that the access is permitted;
    • Add the displacement given in the instruction to the base address to obtain the linear address.

Segment selectors live in the segment registers, such as CS and DS. The base addresses of the descriptor tables live in the memory-management registers GDTR, LDTR, IDTR, and TR. The layout of a segment selector is as follows (figure omitted):

The layout of a segment descriptor is as follows (figure omitted):

In C, we can take the address of a local variable and print it; that address is a logical address. How, then, is it converted into a linear address?

#include <stdio.h>

int main()
{
    long x = 0x01234567;
    printf("The address of x is %p\n", (void *)&x);
    return 0;
}

The program above prints a logical address. Following the logical-to-linear conversion procedure, the first step is to get the segment selector from a segment register. Since local variables are stored on the stack, the selector comes from the stack segment register SS. The kernel sets up the segment registers when it starts a new process; for IA-32 the code is in start_thread(), at arch/x86/kernel/process_32.c:200:

void start_thread(struct pt_regs *regs, unsigned long new_ip, unsigned long new_sp)
{
	set_user_gs(regs, 0);
	regs->fs = 0;
	regs->ds = __USER_DS;
	regs->es = __USER_DS;
	regs->ss = __USER_DS;
	regs->cs = __USER_CS;
	regs->ip = new_ip;
	regs->sp = new_sp;
	regs->flags = X86_EFLAGS_IF;
	/*
	 * Force it to the iret return path by making it look as if there is
	 * some work pending.
	 */
	set_thread_flag(TIF_NOTIFY_RESUME);
}

As the code shows, the kernel uses only two user segments, the code segment (CS) and the data segment (DS); the CS and DS values are the same for every process, and only the EIP and ESP differ. In our case the selector is taken from the SS segment register, and the value of __USER_DS is defined in arch/x86/include/asm/segment.h:

#define GDT_ENTRY_DEFAULT_USER_CS	14
#define GDT_ENTRY_DEFAULT_USER_DS	15

#define __USER_CS	(GDT_ENTRY_DEFAULT_USER_CS*8 + 3)
#define __USER_DS	(GDT_ENTRY_DEFAULT_USER_DS*8 + 3)

So the binary value of SS is 0000 0000 0111 1011 (0x7b). Per the selector layout above, the high 13 bits are the index, here 15; the TI bit (bit 2) is 0, meaning the GDT (global descriptor table) is used; and the low 2 bits are the RPL, here 3. We can therefore form the linear address by adding the offset to the base address found in the entry at index 15 of the GDT. As mentioned above, the location of the GDT is held in the GDTR register; the GDT itself is defined in arch/x86/kernel/cpu/common.c:

DEFINE_PER_CPU_PAGE_ALIGNED(struct gdt_page, gdt_page) = { .gdt = {
#ifdef CONFIG_X86_64
	/*
	 * We need valid kernel segments for data and code in long mode too
	 * IRET will check the segment types  kkeil 2000/10/28
	 * Also sysret mandates a special GDT layout
	 *
	 * TLS descriptors are currently at a different place compared to i386.
	 * Hopefully nobody expects them at a fixed place (Wine?)
	 */
	[GDT_ENTRY_KERNEL32_CS]		= GDT_ENTRY_INIT(0xc09b, 0, 0xfffff),
	[GDT_ENTRY_KERNEL_CS]		= GDT_ENTRY_INIT(0xa09b, 0, 0xfffff),
	[GDT_ENTRY_KERNEL_DS]		= GDT_ENTRY_INIT(0xc093, 0, 0xfffff),
	[GDT_ENTRY_DEFAULT_USER32_CS]	= GDT_ENTRY_INIT(0xc0fb, 0, 0xfffff),
	[GDT_ENTRY_DEFAULT_USER_DS]	= GDT_ENTRY_INIT(0xc0f3, 0, 0xfffff),
	[GDT_ENTRY_DEFAULT_USER_CS]	= GDT_ENTRY_INIT(0xa0fb, 0, 0xfffff),
#else
	[GDT_ENTRY_KERNEL_CS]		= GDT_ENTRY_INIT(0xc09a, 0, 0xfffff),
	[GDT_ENTRY_KERNEL_DS]		= GDT_ENTRY_INIT(0xc092, 0, 0xfffff),
	[GDT_ENTRY_DEFAULT_USER_CS]	= GDT_ENTRY_INIT(0xc0fa, 0, 0xfffff),
	[GDT_ENTRY_DEFAULT_USER_DS]	= GDT_ENTRY_INIT(0xc0f2, 0, 0xfffff),
	/*
	 * Segments used for calling PnP BIOS have byte granularity.
	 * They code segments and data segments have fixed 64k limits,
	 * the transfer segment sizes are set at run time.
	 */
	/* 32-bit code */
	[GDT_ENTRY_PNPBIOS_CS32]	= GDT_ENTRY_INIT(0x409a, 0, 0xffff),
	/* 16-bit code */
	[GDT_ENTRY_PNPBIOS_CS16]	= GDT_ENTRY_INIT(0x009a, 0, 0xffff),
	/* 16-bit data */
	[GDT_ENTRY_PNPBIOS_DS]		= GDT_ENTRY_INIT(0x0092, 0, 0xffff),
	/* 16-bit data */
	[GDT_ENTRY_PNPBIOS_TS1]		= GDT_ENTRY_INIT(0x0092, 0, 0),
	/* 16-bit data */
	[GDT_ENTRY_PNPBIOS_TS2]		= GDT_ENTRY_INIT(0x0092, 0, 0),
	/*
	 * The APM segments have byte granularity and their bases
	 * are set at run time.  All have 64k limits.
	 */
	/* 32-bit code */
	[GDT_ENTRY_APMBIOS_BASE]	= GDT_ENTRY_INIT(0x409a, 0, 0xffff),
	/* 16-bit code */
	[GDT_ENTRY_APMBIOS_BASE+1]	= GDT_ENTRY_INIT(0x009a, 0, 0xffff),
	/* data */
	[GDT_ENTRY_APMBIOS_BASE+2]	= GDT_ENTRY_INIT(0x4092, 0, 0xffff),

	[GDT_ENTRY_ESPFIX_SS]		= GDT_ENTRY_INIT(0xc092, 0, 0xfffff),
	[GDT_ENTRY_PERCPU]		= GDT_ENTRY_INIT(0xc092, 0, 0xfffff),
	GDT_STACK_CANARY_INIT
#endif
} };

GDT_ENTRY_INIT is defined in arch/x86/include/asm/desc_defs.h:

#define GDT_ENTRY_INIT(flags, base, limit) { { { \
		.a = ((limit) & 0xffff) | (((base) & 0xffff) << 16), \
		.b = (((base) & 0xff0000) >> 16) | (((flags) & 0xf0ff) << 8) | \
			((limit) & 0xf0000) | ((base) & 0xff000000), \
	} } }

Since GDT_ENTRY_DEFAULT_USER_DS is 15, the GDT entry at that index is GDT_ENTRY_INIT(0xc0f2, 0, 0xfffff): the base address is 0 and the segment limit is 0xfffff. The linear address equals this base plus the logical address, and because the base is 0, linear and logical addresses in the Linux kernel are equal.

3. Linear address to physical address (to be completed)

The process that finally maps a linear address to a physical address is called page mapping (paging). The mapping from a linear address to a physical address proceeds as follows:

    • Get the base address of the page directory from the CR3 register;
    • Using the DIR field of the linear address as an index into the page directory, obtain the base address of the corresponding page table;
    • Using the PAGE field of the linear address as an index into that page table, obtain the page-table entry;
    • Add the page base address held in that entry to the OFFSET field of the linear address to obtain the physical address.

The mapping from linear addresses to physical addresses is illustrated below (figure omitted):

Each process has its own address space, so different processes use different CR3 values. The CR3 value for a process is kept in its process control block, the task_struct structure; on IA-32, each page-directory entry is 32 bits wide.

Following the process described above, we first need the value of CR3. The kernel allocates a page directory when a process is created and records its address in the task_struct: task_struct contains an mm_struct, whose pgd field holds the value that will be loaded into CR3. The allocation is in kernel/fork.c:

static int mm_alloc_pgd(struct mm_struct *mm)
{
	mm->pgd = pgd_alloc(mm);
	if (unlikely(!mm->pgd))
		return -ENOMEM;
	return 0;
}

During a process switch, the base address of the incoming process's page directory is loaded into the CR3 register; the code is in arch/x86/include/asm/mmu_context.h:

static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
			     struct task_struct *tsk)
{
	unsigned cpu = smp_processor_id();

	if (likely(prev != next)) {
#ifdef CONFIG_SMP
		this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
		this_cpu_write(cpu_tlbstate.active_mm, next);
#endif
		cpumask_set_cpu(cpu, mm_cpumask(next));

		/* Re-load page tables */
		load_cr3(next->pgd);
		trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);

		/* Stop flush ipis for the previous mm */
		cpumask_clear_cpu(cpu, mm_cpumask(prev));

		/* Load the LDT, if the LDT is different: */
		if (unlikely(prev->context.ldt != next->context.ldt))
			load_LDT_nolock(&next->context);
	}
#ifdef CONFIG_SMP
	else {
		this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
		BUG_ON(this_cpu_read(cpu_tlbstate.active_mm) != next);

		if (!cpumask_test_cpu(cpu, mm_cpumask(next))) {
			/*
			 * On established mms, the mm_cpumask is only changed
			 * from irq context, from ptep_clear_flush() while in
			 * lazy tlb mode, and here. Irqs are blocked during
			 * schedule, protecting us from simultaneous changes.
			 */
			cpumask_set_cpu(cpu, mm_cpumask(next));
			/*
			 * We were in lazy tlb mode and leave_mm disabled
			 * tlb flush ipi delivery. We must reload CR3
			 * to make sure to use no freed page tables.
			 */
			load_cr3(next->pgd);
			trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
			load_LDT_nolock(&next->context);
		}
	}
#endif
}

