Memory Management overview for Linux


On Linux you will often find that there is very little free memory and it looks as if all of it has been taken by the system. On the surface memory appears to be running out, but in fact it is not. This is a deliberate feature of Linux memory management, and in this respect it differs from memory management on Windows. Its main characteristic is that no matter how much physical memory is installed, Linux will make full use of it: data from the hard disk is read into memory and kept there, so that the high speed of memory reads and writes improves the data access performance of the system. Windows, by contrast, allocates memory to applications only when they ask for it and does not take full advantage of large amounts of RAM. In other words, every additional amount of physical memory is something Linux can exploit fully, while Windows treats it merely as installed capacity, even if you add 8GB or more.

Linux does this by taking part of the otherwise free physical memory and using it as cache and buffers, which improves data access performance.
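
A quick way to see this in practice is the free command; the column names vary slightly between versions of procps, but the idea is the same:

$ free -m
# Memory counted under "buffers" and "cached" (or "buff/cache") is being used
# by the kernel for disk caching. It is reclaimed automatically when
# applications need it, so it should not be read as memory that is "used up".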

1. What is the cache

The page cache is the main disk cache implemented by the Linux kernel. Its purpose is to reduce I/O to the disk: data read from the disk is cached in physical memory, so that subsequent accesses to that data become accesses to physical memory rather than to the disk.

The value of a disk cache comes from two facts. First, access to the disk is far slower than access to memory, so reading data from memory is much faster than reading it from disk. Second, once a piece of data has been accessed, it is very likely to be accessed again in the near future.

The page cache is made up of physical pages in memory, and each page in the cache corresponds to several blocks on the disk. Whenever the kernel starts a page I/O operation (usually a disk operation on page-sized blocks of an ordinary file), it first checks whether the required data is already in the cache. If it is, the kernel uses the cached data directly, avoiding any access to the disk.

For example, when you open a source file in a text editor, data from that file is brought into memory. As you edit the file, more and more of its pages are brought in one after another. When you finally compile it, the kernel can use the pages already in the page cache and does not need to read the file from disk again. Because users tend to read and work on the same files over and over, the page cache saves a great many disk operations.
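
The effect is easy to observe from the shell. The file used below is only an example; any reasonably large file that is not already cached will do. The second read is served from the page cache and normally completes far more quickly than the first:

$ time cat /usr/share/dict/words > /dev/null   # first read, comes from disk
$ time cat /usr/share/dict/words > /dev/null   # second read, comes from the page cache
# As root, the cache can be dropped to repeat the experiment (kernel 2.6.16 and later):
# echo 3 > /proc/sys/vm/drop_caches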

2. How the cache is updated

Because of the caching effect of the page cache, write operations are actually deferred. When the data in the page cache is newer than the data in the backing store, that data is called dirty. Dirty pages that accumulate in memory must eventually be written back to disk. Dirty pages are written back to disk when either of the following conditions occurs:

When free memory falls below a specific threshold, the kernel must write dirty pages back to disk to free up memory.

When a dirty page resides in memory longer than a specific threshold, the kernel must write the timed out dirty page back to disk to ensure that the dirty page does not reside in memory indefinitely.
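
The amount of dirty data currently in memory and the background writeback threshold can be inspected from user space. The sysctl name below is the one found on recent kernels, and the default values differ between distributions:

$ grep Dirty /proc/meminfo          # dirty page-cache data at this moment
$ sysctl vm.dirty_background_ratio  # percentage of memory at which background writeback starts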

In the 2.6 kernel, both kinds of work are performed by a single group of kernel threads: the pdflush background writeback routines.

First, the pdflush threads flush dirty pages back to disk when the amount of free memory in the system falls below a specific threshold. The purpose of this background writeback is to write out dirty pages and reclaim memory when available physical memory is low. The threshold can be set with the dirty_background_ratio sysctl parameter. When free memory drops below the dirty_background_ratio threshold, the kernel calls wakeup_bdflush() to wake up a pdflush thread, which then calls background_writeout() to start writing dirty pages back to disk. background_writeout() takes a long integer argument specifying the number of pages it should try to write back, and it keeps writing data out until both of the following conditions are met:

The specified minimum number of pages has been written to disk.

The amount of free memory has risen back above the dirty_background_ratio threshold.

These conditions ensure that pdflush can relieve low-memory pressure on the system. Writeback does not stop until both conditions are met, unless pdflush has written back every dirty page and there is nothing left to write.

To meet the second goal, the pdflush background routines are also woken up periodically (regardless of whether free memory is low) to write out dirty pages that have been in memory too long, making sure that no dirty page stays in memory indefinitely. If the system crashed while memory was in this state, dirty pages not yet written back would be lost, so periodically synchronizing the page cache with the disk is important. At system startup the kernel initializes a timer that periodically wakes a pdflush thread, which then runs the function wb_kupdate().
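
The period of these wake-ups and the age at which dirty data is considered too old are also exposed as sysctl parameters. Again, the names below are as found on recent kernels and the defaults vary by distribution:

$ sysctl vm.dirty_writeback_centisecs  # how often the writeback threads wake up (in hundredths of a second)
$ sysctl vm.dirty_expire_centisecs     # how old dirty data may get before it must be written out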

This chapter describes the characteristics of Linux memory management, namely virtual memory and disk buffering. It explains their purpose, how they work, and what a system administrator needs to keep in mind.

1. What is virtual memory

Linux supports virtual memory, that is, using the disk as an extension of RAM so that the effective amount of available memory grows accordingly. The kernel writes the contents of currently unused memory blocks to the hard disk so that the memory can be used for other purposes; when the original contents are needed again, they are read back into memory. All of this is completely transparent to the user: programs running under Linux simply see a large amount of available memory and do not notice that parts of it reside on the hard disk from time to time. Of course, reading and writing the hard disk is much slower than using real memory directly (on the order of a thousand times slower), so programs do not run as fast as they would if everything were held in memory. The part of the hard disk used as virtual memory is called swap space.

Linux can use either a regular file in the file system or a separate partition as swap space. A swap partition is faster, but it is easier to change the size of a swap file (there is no need to repartition the whole hard disk and possibly reinstall everything from scratch). When you know how much swap space you need, you should use a swap partition; if you are unsure, you can start with a swap file, use the system for a while to get a feel for how much swap you actually need, and then create a swap partition once you are confident about its size.

You should also know that Linux allows several swap partitions and/or swap files to be used at the same time. This means that if you only occasionally need an unusual amount of swap space, you can set up an extra swap file at that time instead of keeping that much swap space allocated permanently.
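
The swap areas currently in use, and how much of each is occupied, can be listed as shown below; the output naturally depends on how the particular system is configured:

$ cat /proc/swaps   # every active swap file and swap partition, with its size and usage
$ swapon -s         # older summary form of the same information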

A note on operating system terminology: computer science usually distinguishes between swapping (writing an entire process out to swap space) and paging (writing only fixed-size pieces, typically a few kilobytes, at a time). Paging is usually more efficient, and that is what Linux actually does, but traditional Linux terminology talks about swapping anyway.

2. Create Swap Space

A swap file is an ordinary file; there is nothing special about it as far as the kernel is concerned. The only things that matter to the kernel are that it must not contain holes and that it has been prepared for use with mkswap. In addition, it must reside on a local hard disk: for implementation reasons it cannot reside on a file system mounted over NFS.


The point about holes is important. The swap file reserves its disk space so that the kernel can quickly swap a page out without having to go through the steps needed to allocate a disk sector to the file; the kernel simply uses whatever sectors have already been allocated to the swap file. Because a hole in a file means that no disk sectors have been allocated to that part of the file, a file with holes cannot be used by the kernel as swap space.

A good way to create a hole-free swap file is to use the following command:

$ dd if=/dev/zero of=/extra-swap bs=1024 count=1024

Here /extra-swap is the name of the swap file and its size is given by the count= value. The size is best made a multiple of 4, because the memory pages the kernel writes out are 4096 bytes (4 KB) in size; if the size is not a multiple of 4, the last few kilobytes will go unused.
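
One way to double-check that a candidate swap file contains no holes is to compare its apparent size with the disk space actually allocated to it (the file name below is simply the one from the example above). For a file filled from /dev/zero the two figures should essentially match, while a sparse file shows far fewer allocated blocks:

$ ls -l /extra-swap   # apparent size in bytes
$ du -k /extra-swap   # kilobytes actually allocated on disk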

There is nothing special about a swap partition either. You create it just like any other partition; the only difference is that it is used as a raw partition, i.e. it does not contain any file system. It is a good idea to mark swap partitions as type 82 (Linux swap); this makes partition listings clearer, although the kernel does not strictly require it.

After you create a swap file or swap partition, you must write a signature at its beginning; the signature contains some administrative information used by the kernel. This is done with the mkswap command, like so:

$ mkswap /extra-swap 1024

Setting up swapspace, size = 1044480 bytes

Note that the swap space is still not in use: it exists, but the kernel is not yet using it as virtual memory. Be very careful with mkswap, because it does not check whether the file or partition is already being used for something else; it is easy to overwrite important files and partitions with mkswap! Fortunately, you should only need to run mkswap when installing your system.

The Linux memory manager limits each swap space to a maximum of about 127 MB (for various technical reasons the actual limit is (4096 - 10) * 8 * 4096 = 133890048 bytes, or 127.6875 megabytes). However, you can use up to 16 swap spaces at the same time, for a total of almost 2 GB.

3. Use of swap space

An initialized swap space is taken into use with the swapon command, which tells the kernel that the swap space may now be used. The path to the swap space is given as the argument.
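
Continuing the swap-file example from earlier, starting to use the /extra-swap file would look like this:

$ swapon /extra-swap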
