Process working set

The set of pages a process is currently using is called its working set. If the entire working set is in memory, the process can run until it enters its next phase without causing many page faults. If memory is too small to hold the entire working set, running the process will generate a large number of page faults and execution will be very slow. A program that causes a page fault every few instructions is said to be thrashing.

In modern computer systems, memory access is much faster than external (disk) access. If no page faults occur, data access time is roughly the memory access time; if a page fault does occur, the page must be read in from external storage, which greatly reduces system performance. Through a long-term study of page fault rates, Denning proposed the working set model. Because a program accesses pages unevenly while it runs (i.e., it exhibits locality of reference), if the pages a program will access in the near future can be predicted and brought into memory in advance, the page fault rate falls and CPU utilization rises.
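To make the idea concrete, here is a minimal sketch (not from the original article; the page-reference string, the window size, and the function name are made-up examples) that computes the working set W(t, Δ) as the set of distinct pages referenced in the last Δ accesses up to time t:

// Working-set sketch: W(t, delta) = distinct pages referenced in the
// last `delta` accesses of a page-reference string.
// Illustrative only; page numbers and delta are made-up example values.
#include <cstddef>
#include <iostream>
#include <set>
#include <vector>

std::set<int> WorkingSet(const std::vector<int>& refs, std::size_t t, std::size_t delta) {
    std::set<int> ws;
    std::size_t start = (t + 1 > delta) ? t + 1 - delta : 0;
    for (std::size_t i = start; i <= t && i < refs.size(); ++i)
        ws.insert(refs[i]);
    return ws;
}

int main() {
    std::vector<int> refs = {1, 2, 1, 3, 2, 2, 4, 4, 4, 5};  // page-reference string
    std::set<int> ws = WorkingSet(refs, 9, 4);               // last 4 references up to t = 9
    std::cout << "working set size at t=9: " << ws.size() << '\n';  // pages {4, 5} -> 2
    return 0;
}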

Process working set

Disk I/O caused by frequent paging operations greatly reduces a program's running efficiency. Therefore, for each process, the virtual memory manager keeps a certain number of memory pages resident in physical memory, tracks the process's performance metrics, and adjusts this number dynamically. In Win32, the pages residing in physical memory are called the process's working set. You can view a process's working set in Task Manager: the "Memory Usage" column shows the working set size.
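Besides Task Manager, a process can also query its own working-set size programmatically. Below is a minimal sketch using the Win32 psapi call GetProcessMemoryInfo (this example is not part of the original article; depending on the toolchain you may need to link against psapi.lib):

// Sketch: query the current process's working-set size via psapi.
#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main() {
    PROCESS_MEMORY_COUNTERS pmc = {0};
    pmc.cb = sizeof(pmc);
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc))) {
        printf("Working set size:      %zu KB\n", pmc.WorkingSetSize / 1024);
        printf("Peak working set size: %zu KB\n", pmc.PeakWorkingSetSize / 1024);
    }
    return 0;
}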

The working set changes dynamically. When a process starts, only a few code pages and data pages are brought into memory. As the process executes code or touches data that has not yet been loaded, those pages are brought into physical memory and the working set grows. But the working set cannot grow without bound: the system defines a default minimum working set for each process (roughly 20 to 50 pages, depending on the system's physical memory size) and a default maximum working set (roughly 45 to 345 pages, again depending on physical memory). Once the working set has reached the maximum and the process needs yet another page brought into physical memory, the virtual memory manager first evicts some pages from the existing working set and only then loads the new page.
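These per-process limits can be read, and within quota also changed, with the Win32 calls GetProcessWorkingSetSize and SetProcessWorkingSetSize. The sketch below is illustrative only; the new limit values are arbitrary examples, not recommendations:

// Sketch: read and adjust the process's min/max working-set limits.
#include <windows.h>
#include <cstdio>

int main() {
    SIZE_T minWs = 0, maxWs = 0;
    if (GetProcessWorkingSetSize(GetCurrentProcess(), &minWs, &maxWs)) {
        printf("default min: %zu KB, max: %zu KB\n", minWs / 1024, maxWs / 1024);
    }

    // Ask for larger limits (arbitrary example values; the call can fail if
    // the caller lacks the required quota or privilege).
    SIZE_T newMin = 4 * 1024 * 1024;   // 4 MB
    SIZE_T newMax = 32 * 1024 * 1024;  // 32 MB
    if (!SetProcessWorkingSetSize(GetCurrentProcess(), newMin, newMax)) {
        printf("SetProcessWorkingSetSize failed: %lu\n", GetLastError());
    }
    return 0;
}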

Because working-set pages reside in physical memory, accessing them involves no disk I/O and is therefore fast. Conversely, if the code being executed or the data being accessed is not in the working set, additional disk I/O is triggered, reducing the program's running efficiency. In the extreme case, a process spends most of its execution time on paging operations rather than on executing code; this condition is called thrashing.
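One rough way to observe paging cost is to compare the process's page-fault counter before and after touching a large, freshly committed buffer. The following is a sketch under assumed values (a 64 MB buffer and a 4 KB page size), not a benchmark from the original article:

// Sketch: count page faults incurred while first-touching a buffer.
#include <windows.h>
#include <psapi.h>
#include <cstdio>

static DWORD PageFaults() {
    PROCESS_MEMORY_COUNTERS pmc = {0};
    pmc.cb = sizeof(pmc);
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    return pmc.PageFaultCount;
}

int main() {
    const SIZE_T size = 64 * 1024 * 1024;  // 64 MB (example value)
    char* buf = (char*)VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!buf) return 1;

    DWORD before = PageFaults();
    for (SIZE_T i = 0; i < size; i += 4096)  // touch each page once (assumes 4 KB pages)
        buf[i] = 1;
    DWORD after = PageFaults();

    printf("page faults while touching the buffer: %lu\n", after - before);
    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}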

As mentioned above, when a page is brought in, the virtual memory manager loads not only the required page but also nearby pages. Based on this, developers who want to improve a program's running efficiency should consider the following two factors.

(1) For code, write code that is as compact as possible. The ideal situation is that the working set never reaches its maximum threshold, so loading a new page never forces a page already in memory to be evicted. By the locality principle, code that was executed and data that was accessed recently are likely to be executed or accessed again soon, so the number of page faults during execution drops sharply, which means less disk I/O (Figure 5-6 in the original text shows the page-fault count observed while a program runs). Even when this ideal cannot be reached, compact code means the code to be executed next is more likely to be on the same page or an adjacent page. By temporal locality, 80% of a program's time is spent in 20% of its code; keeping that 20% as compact and close together as possible greatly improves the program's overall running performance. A rough timing illustration of the locality effect is given in the first sketch after this list.

(2) For data, try to place data that will be accessed together close together (for example, the nodes of a linked list). Then, when that data is accessed, it sits on the same page or on adjacent pages, so only one paging operation is needed. If instead the data is scattered across many pages (or, worse, pages that are not adjacent), every pass over the data causes many page faults and performance suffers. Using the reserve-and-commit two-step mechanism provided by Win32, you can reserve a large block of address space for data that will be accessed together; no actual storage is allocated at that point, and memory is committed on demand as the data is produced during later execution. This way you do not waste real storage (paging-file space on disk or physical memory) and you still benefit from locality; the second sketch after this list shows the pattern. Memory pool mechanisms are based on similar considerations.
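First, a rough timing sketch of the locality effect mentioned in point (1): the same buffer is traversed once sequentially and once with a page-sized stride. The sizes and stride are arbitrary example values, and on a machine where the buffer stays resident this mostly shows cache and TLB locality rather than actual paging, but the access-pattern principle is the same:

// Sketch: sequential vs. page-stride traversal of the same buffer.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 64 * 1024 * 1024;  // 64 MB of bytes (example value)
    std::vector<char> data(n, 1);
    long long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)          // sequential: neighbors share a page
        sum += data[i];
    auto t1 = std::chrono::steady_clock::now();

    const std::size_t stride = 4096;             // jump roughly a page per access
    for (std::size_t off = 0; off < stride; ++off)
        for (std::size_t i = off; i < n; i += stride)
            sum += data[i];
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    printf("sum=%lld sequential: %lld ms, strided: %lld ms\n",
           sum, (long long)ms(t0, t1), (long long)ms(t1, t2));
    return 0;
}

On most machines the sequential pass is noticeably faster even though both passes touch exactly the same number of elements.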
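Second, a minimal sketch of the reserve/commit pattern described in point (2), using VirtualAlloc and VirtualFree; the reservation and commit sizes are arbitrary example values:

// Sketch of the Win32 reserve/commit two-step.
#include <windows.h>
#include <cstdio>

int main() {
    const SIZE_T reserveSize = 64 * 1024 * 1024;  // reserve 64 MB of address space
    const SIZE_T commitSize  = 64 * 1024;         // commit 64 KB at a time

    // Step 1: reserve address space only; no physical memory or paging-file
    // space is charged yet.
    char* base = (char*)VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_READWRITE);
    if (!base) return 1;

    // Step 2: commit the first chunk only when the data is actually needed.
    // Committing adjacent chunks keeps related data on the same or neighboring pages.
    char* chunk = (char*)VirtualAlloc(base, commitSize, MEM_COMMIT, PAGE_READWRITE);
    if (!chunk) { VirtualFree(base, 0, MEM_RELEASE); return 1; }
    chunk[0] = 42;  // first touch brings the page into the working set

    VirtualFree(base, 0, MEM_RELEASE);  // releases the whole reservation
    return 0;
}

Reserving first fixes the addresses of the whole region, so later commits can keep related data contiguous without paying for physical storage up front.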

Note also that the working set and the resident set are different: the resident set is the set of a process's pages currently in main memory, while the working set is the set of pages the process will need in the near future.

Resident set: in a virtual memory system, a process's resident set is that part of the process's address space which is currently in main memory. If this does not include all of the process's working set, the system may thrash.

Working set: the working set of a program or system is the memory, or set of addresses, which it will use in the near future. The term is generally used when discussing miss rates at some storage level; the time scale of "near future" depends on the cost of a miss. The working set should fit in that storage level; otherwise the system may thrash.

 

 

Source: http://book.51cto.com/art/201006/203597.htm

 
