Re-reading Kernel Memory Management (6): The Per-CPU Page Caches


Author: Happy Shrimp (http://blog.csdn.net/lights_joy/, lights@hb165.com). This article applies to the ADI BF561 DSP running uClinux-2008r1-rc8 (ported to VisualDSP++ 5.0). Reprints are welcome, but please keep the author information.

1.1 The per-CPU page caches

The kernel frequently needs to allocate and free single page frames. To improve performance, each memory zone defines a set of "per-CPU" page caches: each cache holds pre-allocated page frames that are used to satisfy single-page requests from the local CPU. In fact, there are two such caches per zone per CPU: a hot cache, whose page frames are likely to still be present in the CPU's hardware cache, and a cold cache. The kernel uses two watermark fields to monitor the size of the hot and cold caches: if the number of page frames falls below the lower limit, the kernel replenishes the cache by allocating a batch of pages from the buddy system; conversely, if the number of page frames rises above the upper limit, the kernel releases a batch of page frames from the cache back to the buddy system.

1.1.1 Page recycling

When the buddy allocator frees a single page, it first tries to put it into the hot cache. The implementation is as follows:

```c
fastcall void __free_pages(struct page *page, unsigned int order)
{
	if (put_page_testzero(page)) {
		if (order == 0)
			free_hot_page(page);
		else
			__free_pages_ok(page, order);
	}
}

void fastcall free_hot_page(struct page *page)
{
	free_hot_cold_page(page, 0);
}
```

Tracing into the free_hot_cold_page function:

```c
/*
 * Free a 0-order page
 */
static void fastcall free_hot_cold_page(struct page *page, int cold)
{
	struct zone *zone = page_zone(page);	/* returns the ZONE_DMA zone */
	struct per_cpu_pages *pcp;
	unsigned long flags;

	if (PageAnon(page))
		page->mapping = NULL;
	if (free_pages_check(page))
		return;

	if (!PageHighMem(page))	/* PageHighMem() is always false here */
		debug_check_no_locks_freed(page_address(page), PAGE_SIZE);
	arch_free_page(page, 0);	/* empty statement */
	kernel_map_pages(page, 1, 0);	/* empty statement */

	pcp = &zone_pcp(zone, get_cpu())->pcp[cold];
	local_irq_save(flags);
	__count_vm_event(PGFREE);	/* empty statement */
	list_add(&page->lru, &pcp->list);
	pcp->count++;
	if (pcp->count >= pcp->high) {
		free_pages_bulk(zone, pcp->batch, &pcp->list, 0);
		pcp->count -= pcp->batch;
	}
	local_irq_restore(flags);
	put_cpu();
}
```

From this function we can see that, as long as the cache is not too full, the freed page is placed directly into the per-CPU cache (hot or cold) by the list_add() call above. Here pcp actually points at the per-CPU cache of ZONE_DMA. When the cache holds too many pages (pcp->count >= pcp->high), a batch of cached pages is handed back to the buddy system's free lists.

1.1.2 Page eviction policy

When the per-CPU cache holds too many pages, some of them are returned to the free lists of available memory. This is done by the free_pages_bulk function:

```c
/*
 * Frees a list of pages.
 * Assumes all pages on list are in same zone, and of same order.
 * count is the number of pages to free.
 *
 * If the zone was previously in an "all pages pinned" state then look to
 * see if this freeing clears that state.
 *
 * And clear the zone's pages_scanned counter, to hold off the "all pages are
 * pinned" detection logic.
 */
static void free_pages_bulk(struct zone *zone, int count,
			    struct list_head *list, int order)
{
	spin_lock(&zone->lock);
	zone->all_unreclaimable = 0;
	zone->pages_scanned = 0;
	while (count--) {
		struct page *page;

		VM_BUG_ON(list_empty(list));
		page = list_entry(list->prev, struct page, lru);
		/* have to delete it as __free_one_page list manipulates */
		list_del(&page->lru);
		__free_one_page(page, zone, order);
	}
	spin_unlock(&zone->lock);
}
```

Recall that free_hot_cold_page() inserts freed pages at the head of the doubly linked list with list_add(). free_pages_bulk(), by contrast, takes pages from list->prev, i.e. from the tail of the list, so the oldest cached pages are the first to be handed back to the buddy system: the eviction policy is first-in-first-out.
