Memory and I/O operations for Linux Device Driver Programming


Reprinted from: http://dev.yesky.com/412/2639912.shtml

Author: Song Baohua; Source: Tianji Development; Responsible editor: Fangzhou

http://www.openhw.org/tatata/blog/10-03/185769_eb28a.html

Related topics: Linux device driver development

For processors equipped with an MMU (Memory Management Unit, the hardware that assists the operating system with memory management, virtual-to-physical address translation, and so on), Linux provides a sophisticated storage management system that lets each process address up to 4 GB of memory.
A process's 4 GB address space is divided into two parts: user space and kernel space. User space runs from 0 to 3 GB (PAGE_OFFSET, which equals 0xC0000000 on x86), and the region from 3 GB to 4 GB is kernel space.

Within kernel space, the region from 3 GB up to vmalloc_start is the physical memory mapping area (this region contains the kernel image, the physical page frame table mem_map, and so on). For example, if the VMware virtual machine we use has 160 MB of memory, the range from 3 GB to 3 GB + 160 MB is mapped onto physical memory. The vmalloc area begins after the physical memory mapping area: for a 160 MB system, vmalloc_start lies at roughly 3 GB + 160 MB (an 8 MB gap is left between the physical memory mapping area and vmalloc_start to catch out-of-bounds accesses), and vmalloc_end lies close to 4 GB (the system reserves a small region at the very top for dedicated page mappings).

Memory allocated with kmalloc() or get_free_page() lies in the physical memory mapping area and is physically contiguous. Such addresses differ from the real physical addresses only by a fixed offset, so there is a simple conversion between them; virt_to_phys() converts a kernel virtual address of this kind into a physical address:

#define __pa(x) ((unsigned long)(x) - PAGE_OFFSET)

extern inline unsigned long virt_to_phys(volatile void *address)
{
    return __pa(address);
}

The conversion above simply subtracts 3 GB (PAGE_OFFSET = 0xC0000000) from the virtual address; for example, the kernel virtual address 0xC0100000 corresponds to the physical address 0x00100000.
The corresponding function phys_to_virt() converts a kernel physical address back into a virtual address:

#define __va(x) ((void *)((unsigned long)(x) + PAGE_OFFSET))

extern inline void *phys_to_virt(unsigned long address)
{
    return __va(address);
}

Both virt_to_phys() and phys_to_virt() are defined in include/asm-i386/io.h.
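As a quick illustration (not from the original article, just a minimal sketch assuming a 2.4-era x86 kernel like the one discussed here), the fragment below allocates a buffer with kmalloc(), converts its address to a physical address with virt_to_phys(), and converts it back with phys_to_virt(); the function name and buffer size are arbitrary:

#include <linux/kernel.h>  /* printk */
#include <linux/slab.h>    /* kmalloc, kfree */
#include <asm/io.h>        /* virt_to_phys, phys_to_virt */

/* Minimal sketch: round-trip conversion for an address in the physical
   memory mapping area. This works for kmalloc/get_free_page memory only,
   never for vmalloc memory. */
static void addr_conversion_demo(void)
{
    unsigned char *buf = kmalloc(128, GFP_KERNEL);
    unsigned long phys;

    if (!buf)
        return;

    phys = virt_to_phys(buf);              /* virtual -> physical */
    printk("virt=%p phys=0x%lx\n", buf, phys);

    if (phys_to_virt(phys) == buf)         /* physical -> virtual */
        printk("round-trip conversion is consistent\n");

    kfree(buf);
}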
Memory allocated with vmalloc() lies between vmalloc_start and vmalloc_end and has no simple conversion relationship with physical addresses: although it is logically contiguous, it need not be physically contiguous.
The following program demonstrates the differences between kmalloc(), get_free_page(), and vmalloc():

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

MODULE_LICENSE("GPL");

unsigned char *pagemem;
unsigned char *kmallocmem;
unsigned char *vmallocmem;

int __init mem_module_init(void)
{
    /* Each allocation should really be checked for failure;
       the checks are omitted here because this is only demo code. */
    pagemem = (unsigned char *)get_free_page(0);
    printk("<1>pagemem addr=%x", pagemem);

    kmallocmem = (unsigned char *)kmalloc(100, 0);
    printk("<1>kmallocmem addr=%x", kmallocmem);

    vmallocmem = (unsigned char *)vmalloc(1000000);
    printk("<1>vmallocmem addr=%x", vmallocmem);

    return 0;
}

void __exit mem_module_exit(void)
{
    free_page((unsigned long)pagemem);
    kfree(kmallocmem);
    vfree(vmallocmem);
}

module_init(mem_module_init);
module_exit(mem_module_exit);

On our system with 160 MB of memory, running the above program shows pagemem at 0xc7997000 (about 3 GB + 121 MB), kmallocmem at 0xc9bc1380 (about 3 GB + 155 MB), and vmallocmem at 0xcabeb000 (about 3 GB + 171 MB). The first two fall inside the physical memory mapping area, while the vmalloc address lies beyond vmalloc_start, which matches the memory layout described above.
Next, let us look at how a Linux device driver accesses a peripheral's I/O ports (registers).
Almost every peripheral is operated by reading and writing its registers, which usually include control registers, status registers, and data registers. Peripheral registers are normally assigned contiguous addresses. Depending on the CPU architecture, there are two ways for the CPU to address I/O ports:
(1) I/O-mapped
Typically, x86 processors provide a separate address space for peripherals, known as the "I/O address space" or "I/O port space". The CPU accesses locations in this space with dedicated I/O instructions (such as the x86 IN and OUT instructions); see the sketch after these two cases.

(2) Memory-mapped
CPUs with RISC instruction sets (such as ARM and PowerPC) usually implement only a single physical address space, and peripheral I/O ports are part of that space. The CPU can then access a peripheral's I/O ports exactly as it accesses a memory location, with no need for dedicated I/O instructions.
Either way, the difference in hardware implementation is completely transparent to software: driver developers can treat memory-mapped I/O ports and peripheral memory uniformly as "I/O memory" resources.
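To make the I/O-mapped case (1) above concrete, here is a hedged sketch (not from the original article) of the usual port I/O pattern on x86: reserve the port range with request_region() and then access it with inb()/outb(). The base address 0x378 (the traditional PC parallel port) and the region name are purely illustrative:

#include <linux/ioport.h>   /* request_region, release_region */
#include <linux/errno.h>
#include <linux/kernel.h>   /* printk */
#include <asm/io.h>         /* inb, outb */

#define DEMO_PORT_BASE 0x378   /* illustrative: traditional parallel port base */
#define DEMO_PORT_LEN  4

/* Minimal sketch of I/O-port access on x86. */
static int port_io_demo(void)
{
    unsigned char status;

    if (!request_region(DEMO_PORT_BASE, DEMO_PORT_LEN, "demo"))
        return -EBUSY;                   /* ports already claimed */

    outb(0x00, DEMO_PORT_BASE);          /* write the data register */
    status = inb(DEMO_PORT_BASE + 1);    /* read the status register */
    printk("status = 0x%02x\n", status);

    release_region(DEMO_PORT_BASE, DEMO_PORT_LEN);
    return 0;
}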
Generally, the physical addresses of a peripheral's I/O memory resources are known at run time, being fixed by the hardware design. However, the CPU usually has no predefined virtual address range for these known physical addresses, so a driver cannot access I/O memory resources through their physical addresses directly. They must first be mapped (via the page tables) into the kernel virtual address space; only then can the driver access these I/O memory resources, through ordinary memory-access instructions, using the kernel virtual addresses obtained from the mapping. Linux declares the function ioremap() in the io.h header file for mapping the physical address of an I/O memory resource into kernel virtual address space (the 3 GB-4 GB range). Its prototype is as follows:

void *ioremap(unsigned long phys_addr, unsigned long size, unsigned long flags);

The iounmap() function undoes a mapping established by ioremap(). Its prototype is as follows:

void iounmap(void *addr);

Both functions are implemented in mm/ioremap.c.
After mapping the physical address of an I/O memory resource to a kernel virtual address, we can in principle read and write that I/O memory just like ordinary RAM. To keep the driver portable across platforms, however, we should use Linux's dedicated I/O memory access functions rather than dereferencing the kernel virtual address pointer directly. On the x86 platform, the read/write I/O functions are defined as follows:

#define readb(addr) (*(volatile unsigned char *)__io_virt(addr))
#define readw(addr) (*(volatile unsigned short *)__io_virt(addr))
#define readl(addr) (*(volatile unsigned int *)__io_virt(addr))
#define writeb(b, addr) (*(volatile unsigned char *)__io_virt(addr) = (b))
#define writew(b, addr) (*(volatile unsigned short *)__io_virt(addr) = (b))
#define writel(b, addr) (*(volatile unsigned int *)__io_virt(addr) = (b))
#define memset_io(a, b, c) memset(__io_virt(a), (b), (c))
#define memcpy_fromio(a, b, c) memcpy((a), __io_virt(b), (c))
#define memcpy_toio(a, b, c) memcpy(__io_virt(a), (b), (c))
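To tie these pieces together, the following hedged sketch (not part of the original article) shows the typical access sequence for an I/O memory region: reserve it with request_mem_region(), map it with ioremap(), access it with readl()/writel(), then unmap and release it. The physical base address, region length, and register offsets are placeholders standing in for a real device's values:

#include <linux/ioport.h>   /* request_mem_region, release_mem_region */
#include <linux/errno.h>
#include <linux/kernel.h>   /* printk */
#include <asm/io.h>         /* ioremap, iounmap, readl, writel */

#define DEMO_PHYS_BASE  0x20000000UL   /* placeholder physical base address */
#define DEMO_REGION_LEN 0x100          /* placeholder region length */

/* Minimal sketch of accessing I/O memory through ioremap(). */
static int io_mem_demo(void)
{
    void *regs;
    unsigned int id;

    if (!request_mem_region(DEMO_PHYS_BASE, DEMO_REGION_LEN, "demo"))
        return -EBUSY;

    regs = ioremap(DEMO_PHYS_BASE, DEMO_REGION_LEN);
    if (!regs) {
        release_mem_region(DEMO_PHYS_BASE, DEMO_REGION_LEN);
        return -ENOMEM;
    }

    id = readl(regs);                    /* read a 32-bit register at offset 0 */
    writel(0x1, (char *)regs + 0x04);    /* write a 32-bit register at offset 4 */
    printk("register at offset 0 = 0x%x\n", id);

    iounmap(regs);
    release_mem_region(DEMO_PHYS_BASE, DEMO_REGION_LEN);
    return 0;
}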

Finally, it is worth emphasizing the implementation of the mmap function in a driver. mmap is used to map a device: it associates a range of user-space addresses with device memory, so that reads and writes within that address range by a program actually access the device.
Searching the Linux source code for the string "ioremap", one finds that it appears surprisingly rarely. The author therefore tried to locate where the physical-to-virtual address translation behind I/O operations really takes place, and found that Linux often uses other constructs in place of ioremap, although the translation step itself is indispensable.
For example, here is a short excerpt from the RTC (real-time clock) driver for the ARM chip S3C2410:

static void get_rtc_time(int alm, struct rtc_time *rtc_tm)
{
    spin_lock_irq(&rtc_lock);

    if (alm == 1) {
        rtc_tm->tm_year = (unsigned char)ALMYEAR & Msk_RTCYEAR;
        rtc_tm->tm_mon  = (unsigned char)ALMMON  & Msk_RTCMON;
        rtc_tm->tm_mday = (unsigned char)ALMDAY  & Msk_RTCDAY;
        rtc_tm->tm_hour = (unsigned char)ALMHOUR & Msk_RTCHOUR;
        rtc_tm->tm_min  = (unsigned char)ALMMIN  & Msk_RTCMIN;
        rtc_tm->tm_sec  = (unsigned char)ALMSEC  & Msk_RTCSEC;
    } else {
read_rtc_bcd_time:
        rtc_tm->tm_year = (unsigned char)BCDYEAR & Msk_RTCYEAR;
        rtc_tm->tm_mon  = (unsigned char)BCDMON  & Msk_RTCMON;
        rtc_tm->tm_mday = (unsigned char)BCDDAY  & Msk_RTCDAY;
        rtc_tm->tm_hour = (unsigned char)BCDHOUR & Msk_RTCHOUR;
        rtc_tm->tm_min  = (unsigned char)BCDMIN  & Msk_RTCMIN;
        rtc_tm->tm_sec  = (unsigned char)BCDSEC  & Msk_RTCSEC;

        if (rtc_tm->tm_sec == 0) {
            /* Re-read all BCD registers in case of BCDSEC is 0.
               See RTC section at the manual for more info. */
            goto read_rtc_bcd_time;
        }
    }

    spin_unlock_irq(&rtc_lock);

    BCD_TO_BIN(rtc_tm->tm_year);
    BCD_TO_BIN(rtc_tm->tm_mon);
    BCD_TO_BIN(rtc_tm->tm_mday);
    BCD_TO_BIN(rtc_tm->tm_hour);
    BCD_TO_BIN(rtc_tm->tm_min);
    BCD_TO_BIN(rtc_tm->tm_sec);

    /* The epoch of tm_year is 1900 */
    rtc_tm->tm_year += RTC_LEAP_YEAR - 1900;
    /* tm_mon starts at 0, but RTC month starts at 1 */
    rtc_tm->tm_mon--;
}

The I/O operations appear to be performed directly on "registers" named ALMYEAR, ALMMON, ALMDAY, and so on. How are these macros defined?

#define ALMDAY  bRTC(0x60)
#define ALMMON  bRTC(0x64)
#define ALMYEAR bRTC(0x68)

They are built on the macro bRTC, which is defined as:

#define bRTC(Nb) __REG(0x57000000 + (Nb))

This in turn uses the macro __REG, which is defined as:

#define __REG(x) io_p2v(x)

The final io_p2v is where the real conversion between physical and virtual addresses takes place:

#define io_p2v(x) ((x) | 0xa0000000)

Corresponding to __REG there is a __PREG:

#define __PREG(x) io_v2p(x)

and corresponding to io_p2v there is an io_v2p:

#define io_v2p(x) ((x) & ~0xa0000000)

As can be seen, ioremap itself is secondary; the key point is that a translation between virtual and physical addresses takes place somewhere.
The following program reserves a block of memory at startup, uses ioremap() to map it into kernel virtual space, and at the same time uses remap_page_range() to map it into user virtual space, so that both the kernel and user programs can access it. The kernel initializes the block with the repeating string "abcd" through its kernel virtual address, and a user program can then read the string back through its user virtual address:

/************ mmap_ioremap.c **************/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/mm.h>      /* for mem_map_(un)reserve */
#include <asm/io.h>        /* for virt_to_phys */
#include <linux/slab.h>    /* for kmalloc and kfree */

MODULE_PARM(mem_start, "i");
MODULE_PARM(mem_size, "i");

static int mem_start = 101, mem_size = 10;
static char *reserve_virt_addr;
static int major;

int mmapdrv_open(struct inode *inode, struct file *file);
int mmapdrv_release(struct inode *inode, struct file *file);
int mmapdrv_mmap(struct file *file, struct vm_area_struct *vma);

static struct file_operations mmapdrv_fops =
{
    owner:   THIS_MODULE,
    mmap:    mmapdrv_mmap,
    open:    mmapdrv_open,
    release: mmapdrv_release,
};

int init_module(void)
{
    if ((major = register_chrdev(0, "mmapdrv", &mmapdrv_fops)) < 0)
    {
        printk("mmapdrv: unable to register character device\n");
        return (-EIO);
    }
    printk("mmap device major = %d\n", major);
    printk("high memory physical address 0x%ldM\n",
           virt_to_phys(high_memory) / 1024 / 1024);

    reserve_virt_addr = ioremap(mem_start * 1024 * 1024,
                                mem_size * 1024 * 1024);
    printk("reserve_virt_addr = 0x%lx\n", (unsigned long)reserve_virt_addr);

    if (reserve_virt_addr)
    {
        int i;
        for (i = 0; i < mem_size * 1024 * 1024; i += 4)
        {
            reserve_virt_addr[i]     = 'a';
            reserve_virt_addr[i + 1] = 'b';
            reserve_virt_addr[i + 2] = 'c';
            reserve_virt_addr[i + 3] = 'd';
        }
    }
    else
    {
        unregister_chrdev(major, "mmapdrv");
        return -ENODEV;
    }
    return 0;
}

/* remove the module */
void cleanup_module(void)
{
    if (reserve_virt_addr)
        iounmap(reserve_virt_addr);
    unregister_chrdev(major, "mmapdrv");
    return;
}

int mmapdrv_open(struct inode *inode, struct file *file)
{
    MOD_INC_USE_COUNT;
    return (0);
}

int mmapdrv_release(struct inode *inode, struct file *file)
{
    MOD_DEC_USE_COUNT;
    return (0);
}

int mmapdrv_mmap(struct file *file, struct vm_area_struct *vma)
{
    unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
    unsigned long size = vma->vm_end - vma->vm_start;

    if (size > mem_size * 1024 * 1024)
    {
        printk("size too big\n");
        return (-ENXIO);
    }

    offset = offset + mem_start * 1024 * 1024;

    /* we do not want to have this area swapped out, lock it */
    vma->vm_flags |= VM_LOCKED;

    if (remap_page_range(vma, vma->vm_start, offset, size, PAGE_SHARED))
    {
        printk("remap page range failed\n");
        return -ENXIO;
    }
    return (0);
}

The remap_page_range() function builds the new page table entries needed to map a range of physical addresses, which is how the kernel-space area is made visible in user space. Its prototype is as follows:

int remap_page_range(struct vm_area_struct *vma, unsigned long from, unsigned long to, unsigned long size, pgprot_t prot);
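For completeness, here is a hedged user-space sketch (not part of the original article) of how one might exercise the driver above. It assumes a device node /dev/mmapdrv has been created with the major number printed by the driver; it maps the first page of the reserved memory and prints the first few bytes, which should read back as the string "abcd":

/* test_mmap.c - user-space test for the mmapdrv example (sketch).
   Assumes a device node /dev/mmapdrv created with the driver's major number. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    int fd = open("/dev/mmapdrv", O_RDWR);
    char *p;

    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Map one page of the reserved memory into this process. */
    p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* The kernel filled the area with the repeating string "abcd". */
    printf("first bytes: %.8s\n", p);

    munmap(p, 4096);
    close(fd);
    return 0;
}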

The most typical use of mmap is in display (frame buffer) drivers: mapping the display memory directly into user space greatly improves the efficiency of reading and writing it.
