Access Physical Address via Devmem

Source: Internet
Author: User

Directory

    • 1. Preface
    • 2. devmem usage
    • 3. Application Layer
    • 4. Kernel Layer
1. Preface

Recently, while debugging, I needed to access physical memory from user space. At the application layer the devmem tool can access physical addresses. Looking at its source code, it actually operates on /dev/mem: through mmap it maps a physical address into the user-space virtual address space, so device registers can be read and written from user space. For this reason, I want to understand the concrete implementation of mmap in depth.

2. devmem usage

The devmem configuration option can be found under the miscellaneous section in busybox.

CONFIG_USER_BUSYBOX_DEVMEM:

  devmem is a small program that reads and writes from physical
  memory using /dev/mem.

  Symbol: USER_BUSYBOX_DEVMEM [=y]
  Prompt: devmem
    Defined at ../user/busybox/busybox-1.23.2/miscutils/Kconfig:216
    Depends on: USER_BUSYBOX_BUSYBOX
    Location:
      -> BusyBox (USER_BUSYBOX_BUSYBOX [=y])
# busybox devmem
BusyBox v1.23.2 (2018-08-02 11:08:33 CST) multi-call binary.

Usage: devmem ADDRESS [WIDTH [VALUE]]

Read/write from physical address

    ADDRESS Address to act upon
    WIDTH   Width (8/16/...)
    VALUE   Data to be written
Parameter  Description
ADDRESS    The physical address to read or write
WIDTH      The access width in bits (8/16/32)
VALUE      Omitted for a read; for a write, the data to be written

Basic usage

# devmem 0x44e07134 16
0xFFEF
# devmem 0x44e07134 32
0xFFFFFFEF
# devmem 0x44e07134 8
0xEF
3. Application Layer

The interface is defined as follows:

#include <sys/mman.h>

void *mmap(void *addr, size_t length, int prot, int flags, int fd, off_t offset);
int munmap(void *addr, size_t length);

The detailed parameters are as follows:

Parameter  Description
addr       The virtual address at which to map; if NULL, the kernel chooses the address. On success, mmap returns the start address of the mapping.
length     The length of the mapping in bytes
prot       The memory protection of the mapped region: PROT_EXEC, PROT_READ, PROT_WRITE, PROT_NONE
flags      The attributes of the mapped region, e.g. whether it is shared with other processes (MAP_SHARED), an anonymous mapping (MAP_ANONYMOUS), or a private copy-on-write mapping (MAP_PRIVATE)
fd         The file descriptor of the file to map into memory
offset     The offset into the file at which the mapping starts

Taking the implementation of devmem as an example: if argv[3] (a value to write) is present, the region must be mapped with read-write permission; if it is absent, read permission is enough.

    map_base = mmap(NULL,
            mapped_size,
            argv[3] ? (PROT_READ | PROT_WRITE) : PROT_READ,
            MAP_SHARED,
            fd,
            target & ~(off_t)(page_size - 1));
4. Kernel Layer

Space does not permit covering the relationship between glibc and the system call here, so we go straight to the system-call implementation in the kernel.

arch/arm/include/uapi/asm/unistd.h

#define __NR_OABI_SYSCALL_BASE  0x900000

#if defined(__thumb__) || defined(__ARM_EABI__)
#define __NR_SYSCALL_BASE   0
#else
#define __NR_SYSCALL_BASE   __NR_OABI_SYSCALL_BASE
#endif

#define __NR_mmap           (__NR_SYSCALL_BASE+ 90)
#define __NR_munmap         (__NR_SYSCALL_BASE+ 91)
#define __NR_mmap2          (__NR_SYSCALL_BASE+192)

arch/arm/kernel/entry-common.S

/*=============================================================================
 * SWI handler
 *-----------------------------------------------------------------------------
 */
	.align	5
ENTRY(vector_swi)
#ifdef CONFIG_CPU_V7M
	v7m_exception_entry
#else
	sub	sp, sp, #S_FRAME_SIZE
	stmia	sp, {r0 - r12}			@ Calling r0 - r12
 ARM(	add	r8, sp, #S_PC		)
 ARM(	stmdb	r8, {sp, lr}^		)	@ Calling sp, lr
 THUMB(	mov	r8, sp			)
 THUMB(	store_user_sp_lr r8, r10, S_SP	)	@ calling sp, lr
	mrs	r8, spsr			@ called from non-FIQ mode, so ok.
	str	lr, [sp, #S_PC]			@ Save calling PC
	str	r8, [sp, #S_PSR]		@ Save CPSR
	str	r0, [sp, #S_OLD_R0]		@ Save OLD_R0
#endif
	zero_fp

#ifdef CONFIG_ALIGNMENT_TRAP
	ldr	ip, __cr_alignment
	ldr	ip, [ip]
	mcr	p15, 0, ip, c1, c0		@ update control register
#endif
	enable_irq
	...
/*
 * Note: off_4k (r5) is always units of 4K.  If we can't do the requested
 * offset, we return EINVAL.
 */
sys_mmap2:
#if PAGE_SHIFT > 12
		tst	r5, #PGOFF_MASK
		moveq	r5, r5, lsr #PAGE_SHIFT - 12
		streq	r5, [sp, #4]
		beq	sys_mmap_pgoff
		mov	r0, #-EINVAL
		mov	pc, lr
#else
		str	r5, [sp, #4]
		b	sys_mmap_pgoff
#endif
ENDPROC(sys_mmap2)

arch/arm/kernel/calls.S

/* 90 */	CALL(OBSOLETE(sys_old_mmap))	/* used by libc4 */
		CALL(sys_munmap)
		...
/* 190 */	CALL(sys_vfork)
		CALL(sys_getrlimit)
		CALL(sys_mmap2)

include/linux/syscalls.h

asmlinkage long sys_mmap_pgoff(unsigned long addr, unsigned long len,
			unsigned long prot, unsigned long flags,
			unsigned long fd, unsigned long pgoff);

Searching for the mmap_pgoff definition leads to mm/mmap.c; some code we are not much concerned with is omitted.

SYSCALL_DEFINE6(mmap_pgoff, unsigned long, addr, unsigned long, len,
		unsigned long, prot, unsigned long, flags,
		unsigned long, fd, unsigned long, pgoff)
{
	struct file *file = NULL;
	unsigned long retval = -EBADF;

	if (!(flags & MAP_ANONYMOUS)) {
		audit_mmap_fd(fd, flags);
		file = fget(fd);
		if (!file)
			goto out;
		if (is_file_hugepages(file))
			len = ALIGN(len, huge_page_size(hstate_file(file)));
		retval = -EINVAL;
		if (unlikely(flags & MAP_HUGETLB && !is_file_hugepages(file)))
			goto out_fput;
	}
	...
	flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
	retval = vm_mmap_pgoff(file, addr, len, prot, flags, pgoff);
out_fput:
	if (file)
		fput(file);
out:
	return retval;
}

mm/util.c

unsigned long vm_mmap_pgoff(struct file *file, unsigned long addr,
	unsigned long len, unsigned long prot,
	unsigned long flag, unsigned long pgoff)
{
	unsigned long ret;
	struct mm_struct *mm = current->mm;
	unsigned long populate;

	ret = security_mmap_file(file, prot, flag);
	if (!ret) {
		down_write(&mm->mmap_sem);
		ret = do_mmap_pgoff(file, addr, len, prot, flag, pgoff,
				    &populate);
		up_write(&mm->mmap_sem);
		if (populate)
			mm_populate(ret, populate);
	}
	return ret;
}

The vm_area_struct structure is used to describe the virtual memory area of the process, which is associated with the process's memory descriptor mm_struct , and is managed through linked lists and red-black trees.

unsigned long do_mmap_pgoff(struct file *file, unsigned long addr,
			unsigned long len, unsigned long prot,
			unsigned long flags, unsigned long pgoff,
			unsigned long *populate)
{
	struct mm_struct *mm = current->mm;
	vm_flags_t vm_flags;

	*populate = 0;
	...
	/*
	 * Search the process address space for a usable linear address
	 * range.  len specifies the length of the interval; a non-NULL
	 * addr specifies the address from which to start the search.
	 */
	addr = get_unmapped_area(file, addr, len, pgoff, flags);

	vm_flags = calc_vm_prot_bits(prot) | calc_vm_flag_bits(flags) |
			mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;

	if (file) {
		/*
		 * The file pointer is not NULL: establish a mapping from the
		 * file to the virtual address space, and set the access
		 * rights according to the flags argument.
		 */
		struct inode *inode = file_inode(file);

		switch (flags & MAP_TYPE) {
		case MAP_SHARED:
			vm_flags |= VM_SHARED | VM_MAYSHARE;
			break;
		...
		}
	} else {
		/* The file pointer is NULL: create only the virtual area,
		 * with no file backing the mapping. */
		switch (flags & MAP_TYPE) {
		case MAP_SHARED:
			pgoff = 0;
			vm_flags |= VM_SHARED | VM_MAYSHARE;
			break;
		case MAP_PRIVATE:
			pgoff = addr >> PAGE_SHIFT;
			break;
		}
	}

	/* Create the virtual area and set up the mapping. */
	addr = mmap_region(file, addr, len, vm_flags, pgoff);
	return addr;
}
unsigned long mmap_region(struct file *file, unsigned long addr,
		unsigned long len, vm_flags_t vm_flags, unsigned long pgoff)
{
	...
	/* Check whether the virtual address space may be expanded. */
	if (!may_expand_vm(mm, len >> PAGE_SHIFT)) {
		unsigned long nr_pages;

		/*
		 * MAP_FIXED may remove pages of mappings that intersect with
		 * the requested mapping.  Account for the pages it would
		 * unmap.
		 */
		if (!(vm_flags & MAP_FIXED))
			return -ENOMEM;

		nr_pages = count_vma_pages_range(mm, addr, addr + len);

		if (!may_expand_vm(mm, (len >> PAGE_SHIFT) - nr_pages))
			return -ENOMEM;
	}

	/*
	 * Scan the red-black tree of vm_area_struct in the current process
	 * address space to determine the position of the new linear region.
	 * If an area is found, addr is already in use, so do_munmap is
	 * called first to remove that area from the process address space.
	 */
munmap_back:
	if (find_vma_links(mm, addr, addr + len, &prev, &rb_link, &rb_parent)) {
		if (do_munmap(mm, addr, len))
			return -ENOMEM;
		goto munmap_back;
	}

	vma = vma_merge(mm, prev, addr, addr + len, vm_flags, NULL, file, pgoff, NULL);
	if (vma)
		goto out;

	/* Allocate a new VMA for the mapping. */
	vma = kmem_cache_zalloc(vm_area_cachep, GFP_KERNEL);
	if (!vma) {
		error = -ENOMEM;
		goto unacct_error;
	}

	vma->vm_mm = mm;
	vma->vm_start = addr;
	vma->vm_end = addr + len;
	vma->vm_flags = vm_flags;
	vma->vm_page_prot = vm_get_page_prot(vm_flags);
	vma->vm_pgoff = pgoff;
	INIT_LIST_HEAD(&vma->anon_vma_chain);

	if (file) {
		if (vm_flags & VM_DENYWRITE) {
			error = deny_write_access(file);
			if (error)
				goto free_vma;
		}
		vma->vm_file = get_file(file);
		error = file->f_op->mmap(file, vma);
		if (error)
			goto unmap_and_free_vma;

		/* Can addr have changed??
		 *
		 * Answer: Yes, several device drivers can do it in their
		 *         f_op->mmap method. -DaveM
		 * Bug: If addr is changed, prev, rb_link, rb_parent should
		 *      be updated for vma_link()
		 */
		WARN_ON_ONCE(addr != vma->vm_start);

		addr = vma->vm_start;
		vm_flags = vma->vm_flags;
	} else if (vm_flags & VM_SHARED) {
		error = shmem_zero_setup(vma);
		if (error)
			goto free_vma;
	}
	...
}

The file->f_op->mmap(file, vma) call in mmap_region corresponds, for /dev/mem, to mmap_mem, located in drivers/char/mem.c with the following code:

static const struct file_operations mem_fops = {
	.llseek		= memory_lseek,
	.read		= read_mem,
	.write		= write_mem,
	.mmap		= mmap_mem,
	.open		= open_mem,
	.get_unmapped_area = get_unmapped_area_mem,
};

static int mmap_mem(struct file *file, struct vm_area_struct *vma)
{
	size_t size = vma->vm_end - vma->vm_start;

	if (!valid_mmap_phys_addr_range(vma->vm_pgoff, size))
		return -EINVAL;

	if (!private_mapping_ok(vma))
		return -ENOSYS;

	if (!range_is_allowed(vma->vm_pgoff, size))
		return -EPERM;

	if (!phys_mem_access_prot_allowed(file, vma->vm_pgoff, size,
						&vma->vm_page_prot))
		return -EINVAL;

	vma->vm_page_prot = phys_mem_access_prot(file, vma->vm_pgoff,
						 size,
						 vma->vm_page_prot);

	vma->vm_ops = &mmap_mem_ops;

	/* Remap-pfn-range will mark the range VM_IO */
	if (remap_pfn_range(vma,
			    vma->vm_start,
			    vma->vm_pgoff,
			    size,
			    vma->vm_page_prot)) {
		return -EAGAIN;
	}
	return 0;
}

The remap_pfn_range function builds the page tables that map the physical address range to the virtual address range. Here vm_pgoff holds the page frame number of the physical address to be mapped, and vm_page_prot holds the page protection. These correspond to the parameters passed to mmap, and the physical address can now be accessed from the application layer.
