Linux Kernel Source Scenario Analysis: Memory Management - Slab Reclamation


The previous article, Linux Kernel Source Scenario Analysis: Memory Management - Slab Allocation and Release, ended with the structure shown below:


Figure 1: the cache and slab structure built up in the previous article


Figure 1 shows a number of pages held by completely free slab blocks. The cache does not release them of its own accord; they are reclaimed through kmem_cache_reap and kmem_cache_shrink. The difference between the two: kmem_cache_shrink is called synchronously by the owner of a specific cache, while kmem_cache_reap is invoked from the page-reclaim path and chooses a suitable cache by itself. We examine each in turn.



1. First, let us look at kmem_cache_shrink; the code is as follows:

int kmem_cache_shrink(kmem_cache_t *cachep)
{
    if (!cachep || in_interrupt() || !is_chained_kmem_cache(cachep))
        BUG();

    return __kmem_cache_shrink(cachep);
}
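As a usage sketch (2.4-era API, as in the source shown above): the owner of a cache can call kmem_cache_shrink directly when it wants to give fully free slabs back to the page allocator. The cache pointer and the function below are illustrative names, not from the article.

/* Minimal usage sketch; my_cachep and my_release_memory() are hypothetical. */
extern kmem_cache_t *my_cachep;     /* created elsewhere with kmem_cache_create() */

static void my_release_memory(void)
{
    /* __kmem_cache_shrink() returns nonzero while slabs remain on the
     * cache's list (busy slabs cannot be reclaimed). */
    if (kmem_cache_shrink(my_cachep))
        printk("my_cache: slabs still in use\n");
}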


The real work is done in __kmem_cache_shrink; the code is as follows:

static int __kmem_cache_shrink(kmem_cache_t *cachep)
{
    slab_t *slabp;
    int ret;

    drain_cpu_caches(cachep);

    spin_lock_irq(&cachep->spinlock);

    /* If the cache is growing, stop shrinking. */
    while (!cachep->growing) {          /* proceed only while the cache is not growing */
        struct list_head *p;

        p = cachep->slabs.prev;         /* the tail of the list holds the free slabs;
                                           in Figure 1 this is the 4th, fully free slab */
        if (p == &cachep->slabs)        /* no slabs left on the list: stop */
            break;

        slabp = list_entry(cachep->slabs.prev, slab_t, list);
        if (slabp->inuse)               /* not a fully free slab: stop */
            break;

        list_del(&slabp->list);         /* unlink the free slab, so the next iteration of
                                           cachep->slabs.prev finds a new tail slab */
        if (cachep->firstnotfull == &slabp->list)   /* if firstnotfull was this slab, no
                                           partial or free slab is left to allocate from */
            cachep->firstnotfull = &cachep->slabs;  /* so point it at cachep->slabs */

        spin_unlock_irq(&cachep->spinlock);
        kmem_slab_destroy(cachep, slabp);   /* run the destructors and free every page
                                               of the free slab */
        spin_lock_irq(&cachep->spinlock);
    }
    ret = !list_empty(&cachep->slabs);
    spin_unlock_irq(&cachep->spinlock);
    return ret;
}
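The subtle point of the loop is that cachep->slabs.prev is re-read on every iteration, so each unlink exposes a new tail. Below is a minimal userspace sketch of just that tail-popping logic; the list helpers model <linux/list.h>, and the slab contents are made up for illustration.

#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

static void list_del(struct list_head *entry)
{
    entry->prev->next = entry->next;
    entry->next->prev = entry->prev;
}

struct slab { struct list_head list; int inuse; };

int main(void)
{
    /* Four slabs: two busy ones at the head, two fully free at the tail,
     * mirroring the ordering the slab allocator maintains. */
    struct slab s[4] = { { .inuse = 3 }, { .inuse = 1 }, { .inuse = 0 }, { .inuse = 0 } };
    struct list_head slabs = { &s[0].list, &s[3].list };    /* list head */
    int i, destroyed = 0;

    for (i = 0; i < 4; i++) {           /* chain everything into one circular list */
        s[i].list.next = (i < 3) ? &s[i + 1].list : &slabs;
        s[i].list.prev = (i > 0) ? &s[i - 1].list : &slabs;
    }

    for (;;) {
        struct list_head *p = slabs.prev;       /* re-read the tail on every pass */
        struct slab *slabp = (struct slab *)p;  /* list is the first member, so the
                                                   cast plays the role of list_entry() */
        if (p == &slabs)        /* list empty: nothing left */
            break;
        if (slabp->inuse)       /* first busy slab ends the scan */
            break;
        list_del(p);            /* unlink, so the next pass sees a new tail */
        destroyed++;
    }
    printf("destroyed %d fully free slabs\n", destroyed);   /* prints 2 */
    return 0;
}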

kmem_slab_destroy runs the destructor on every object and frees all pages occupied by the free slab; the code is as follows:

static void kmem_slab_destroy(kmem_cache_t *cachep, slab_t *slabp)
{
    if (cachep->dtor
        ...
    ) {
        int i;
        for (i = 0; i < cachep->num; i++) {
            void *objp = slabp->s_mem + cachep->objsize*i;
            ...
            if (cachep->dtor)
                (cachep->dtor)(objp, cachep, 0);    /* run the destructor on every object */
            ...
        }
    }

    kmem_freepages(cachep, slabp->s_mem - slabp->colouroff);    /* free every page the slab occupies */
    if (OFF_SLAB(cachep))
        kmem_cache_free(cachep->slabp_cache, slabp);
}
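The object addresses follow directly from the slab layout: object i lives at s_mem + objsize * i. A small userspace sketch of that stride arithmetic follows; the sizes and the one-argument destructor are illustrative (the real slab destructor takes the object, the cache, and a flags word, as above).

#include <stdio.h>

static void my_dtor(void *objp)     /* hypothetical destructor */
{
    printf("destructing object at %p\n", objp);
}

int main(void)
{
    unsigned int objsize = 64, num = 4, i;  /* illustrative object size and count */
    char s_mem[64 * 4];                     /* the slab's object area */

    for (i = 0; i < num; i++)
        my_dtor(s_mem + objsize * i);       /* object i sits at s_mem + objsize*i */
    return 0;
}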


kmem_freepages frees all pages belonging to the free slab; the code is as follows:

static inline void kmem_freepages(kmem_cache_t *cachep, void *addr)
{
    unsigned long i = (1 << cachep->gfporder);
    struct page *page = virt_to_page(addr);

    /* free_pages() does not clear the type bit - we do that.
     * The pages have been unlinked from their cache-slab,
     * but their 'struct page's might be accessed in
     * vm_scan(). Shouldn't be a worry.
     */
    while (i--) {
        PageClearSlab(page);
        page++;
    }
    free_pages((unsigned long)addr, cachep->gfporder);
}
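A slab of order gfporder owns 1 << gfporder physically contiguous pages, which is why the loop clears the page's Slab flag exactly that many times before the block goes back to the buddy allocator. A tiny sketch of the order arithmetic, with PAGE_SIZE assumed to be 4096 for illustration:

#include <stdio.h>

#define PAGE_SIZE 4096UL    /* assumed page size, for illustration */

int main(void)
{
    unsigned int gfporder;

    /* An order-n slab owns 1 << n physically contiguous pages. */
    for (gfporder = 0; gfporder <= 3; gfporder++)
        printf("gfporder %u: %lu pages = %lu bytes\n", gfporder,
               1UL << gfporder, (1UL << gfporder) * PAGE_SIZE);
    return 0;
}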

2. Now look at kmem_cache_reap, which traverses the cache_chain list to find a suitable cache and then releases at most 80% of that cache's completely free slab blocks; the code is as follows:

void kmem_cache_reap(int gfp_mask)
{
    slab_t *slabp;
    kmem_cache_t *searchp;
    kmem_cache_t *best_cachep;
    unsigned int best_pages;
    unsigned int best_len;
    unsigned int scan;

    if (gfp_mask & __GFP_WAIT)
        down(&cache_chain_sem);
    else
        if (down_trylock(&cache_chain_sem))
            return;

    scan = REAP_SCANLEN;
    best_len = 0;
    best_pages = 0;
    best_cachep = NULL;
    searchp = clock_searchp;        /* the cache where the previous scan stopped */
    do {
        unsigned int pages;
        struct list_head *p;
        unsigned int full_free;

        /* It's safe to test this without holding the cache-lock. */
        if (searchp->flags & SLAB_NO_REAP)
            goto next;
        spin_lock_irq(&searchp->spinlock);
        if (searchp->growing)       /* a growing cache must not be reaped */
            goto next_unlock;
        if (searchp->dflags & DFLGS_GROWN) {
            searchp->dflags &= ~DFLGS_GROWN;
            goto next_unlock;
        }
        ...
        full_free = 0;
        p = searchp->slabs.prev;    /* start from the tail, where the free slabs sit */
        while (p != &searchp->slabs) {
            slabp = list_entry(p, slab_t, list);
            if (slabp->inuse)
                break;
            full_free++;            /* one more fully free slab */
            p = p->prev;            /* keep scanning towards the head */
        }

        /*
         * Try to avoid slabs with constructors and/or
         * more than one page per slab (as it can be difficult
         * to get high orders from gfp()).
         */
        pages = full_free * (1 << searchp->gfporder);   /* pages held by the free slabs */
        if (searchp->ctor)
            pages = (pages*4 + 1)/5;
        if (searchp->gfporder)
            pages = (pages*4 + 1)/5;    /* scale the page count down to 80% */
        if (pages > best_pages) {       /* remember the cache whose free slabs cover the most pages */
            best_cachep = searchp;
            best_len = full_free;       /* number of fully free slabs */
            best_pages = pages;         /* (scaled) pages held by those slabs */
            if (full_free >= REAP_PERFECT) {
                clock_searchp = list_entry(searchp->next.next,
                                           kmem_cache_t, next);
                goto perfect;
            }
        }
next_unlock:
        spin_unlock_irq(&searchp->spinlock);
next:
        searchp = list_entry(searchp->next.next, kmem_cache_t, next);
    } while (--scan && searchp != clock_searchp);   /* walk the cache_chain list for a suitable cache */

    clock_searchp = searchp;    /* where this scan ended is where the next one starts */

    if (!best_cachep)
        /* couldn't find anything to reap */
        goto out;

    spin_lock_irq(&best_cachep->spinlock);
perfect:
    /* free only 80% of the free slabs */
    best_len = (best_len*4 + 1)/5;      /* 80% of the number of free slabs */
    for (scan = 0; scan < best_len; scan++) {   /* release the free slabs */
        struct list_head *p;

        if (best_cachep->growing)       /* must not be in the growing state */
            break;
        p = best_cachep->slabs.prev;    /* freed slabs are already unlinked, so this
                                           again points at a (new) free slab */
        if (p == &best_cachep->slabs)   /* every slab has been visited: stop */
            break;
        slabp = list_entry(p, slab_t, list);
        if (slabp->inuse)               /* not a free slab: stop */
            break;
        list_del(&slabp->list);         /* unlink the free slab */
        if (best_cachep->firstnotfull == &slabp->list)  /* if firstnotfull was this slab, no
                                           partial or free slab is left to allocate from */
            best_cachep->firstnotfull = &best_cachep->slabs;    /* so point it at the list head */
        STATS_INC_REAPED(best_cachep);

        /* Safe to drop the lock. The slab is no longer linked to the
         * cache.
         */
        spin_unlock_irq(&best_cachep->spinlock);
        kmem_slab_destroy(best_cachep, slabp);  /* same as above */
        spin_lock_irq(&best_cachep->spinlock);
    }
    spin_unlock_irq(&best_cachep->spinlock);
out:
    up(&cache_chain_sem);
    return;
}
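The same rounding expression (x*4 + 1)/5 appears three times: twice to penalize caches with constructors or high-order slabs when scoring candidates, and once to trim best_len so only about 80% of the chosen cache's free slabs are destroyed. The +1 makes the result round up, so a cache with a single free slab still yields 1 rather than 0. A tiny sketch of the arithmetic:

#include <stdio.h>

/* The scaling used both for the scoring penalty and for trimming best_len. */
static unsigned int four_fifths(unsigned int x)
{
    return (x*4 + 1) / 5;
}

int main(void)
{
    unsigned int x;

    for (x = 1; x <= 10; x++)
        printf("%2u -> %u\n", x, four_fifths(x));   /* 1->1, 5->4, 10->8 */
    return 0;
}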
