Memory Management---The Slab Mechanism for Destroying Objects


In the Linux kernel, objects are released back to the slab allocator through kfree() or kmem_cache_free(). Both functions end up calling __cache_free().
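For orientation, here is a minimal usage sketch (not from the original article; struct foo, foo_cache, and demo() are invented names) showing the two release paths that both funnel into __cache_free():

	#include <linux/slab.h>
	#include <linux/errno.h>

	struct foo { int a, b; };

	static struct kmem_cache *foo_cache;

	static int demo(void)
	{
		struct foo *f;
		void *buf;

		foo_cache = kmem_cache_create("foo_cache", sizeof(struct foo),
					      0, 0, NULL);
		if (!foo_cache)
			return -ENOMEM;

		f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
		if (f)
			kmem_cache_free(foo_cache, f);	/* release to its own cache */

		buf = kmalloc(128, GFP_KERNEL);
		if (buf)
			kfree(buf);			/* release via the general caches */

		kmem_cache_destroy(foo_cache);
		return 0;
	}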

The cache recycles an object according to the following rules:

1. If the local (per-CPU) cache still has room for free objects, the object is put straight back into the local cache.

2. When the local cache is full, batchcount objects are moved from the local cache back to the slab. The transfer follows the FIFO principle: the first batchcount entries of the entry array are moved, because those objects have sat in the array the longest and are therefore unlikely to still be resident in the CPU cache (see the sketch below).
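To make the two rules concrete, here is a userspace toy model (an illustration only, not kernel code; all names are invented) of the local-cache free path with its FIFO batch flush:

	#include <stdio.h>
	#include <string.h>

	#define LIMIT      8
	#define BATCHCOUNT 4

	struct toy_array_cache {
		unsigned int avail;
		void *entry[LIMIT];
	};

	/* stand-in for free_block(): give objects back to the "slab" */
	static void toy_free_block(void **objpp, int nr)
	{
		for (int i = 0; i < nr; i++)
			printf("returning %p to slab lists\n", objpp[i]);
	}

	static void toy_cache_free(struct toy_array_cache *ac, void *objp)
	{
		if (ac->avail >= LIMIT) {
			/* flush the oldest entries: front of the array, FIFO */
			toy_free_block(ac->entry, BATCHCOUNT);
			ac->avail -= BATCHCOUNT;
			memmove(ac->entry, &ac->entry[BATCHCOUNT],
				sizeof(void *) * ac->avail);
		}
		ac->entry[ac->avail++] = objp;	/* ac_put_obj() equivalent */
	}

	int main(void)
	{
		struct toy_array_cache ac = { 0 };
		char objs[12];

		for (int i = 0; i < 12; i++)
			toy_cache_free(&ac, &objs[i]);
		printf("%u pointers still cached locally\n", ac.avail);
		return 0;
	}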



As noted, when there are too many objects in the local cache (avail greater than or equal to the limit), a batch of objects must be released back into the slab three chains (the slabs_full, slabs_partial, and slabs_free lists of kmem_list3). This is implemented by cache_flusharray(), which proceeds as follows:

1) If the three-chain structure has a shared local cache, objects are released into the shared local cache first, as many as it has room for;

2) If there is no shared local cache, the objects are released back into the slabs of the three chains; this is implemented by free_block(). Inside free_block(), if the number of free objects in the three chains grows too large, the slab is destroyed outright; otherwise the slab is added to the free list. Recall that at allocation time the slab structure was detached from the cache's list, so here the slab is located again through the page descriptor's lru field and re-inserted into the appropriate three-chain list.
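As a concrete walkthrough (the numbers are hypothetical, chosen only for illustration): with ac->limit = 120 and ac->batchcount = 60, the free that finds avail already at 120 triggers cache_flusharray(). If the shared local cache has room for only 20 more pointers, batchcount is clamped to 20 and the 20 oldest entries are copied there; if there were no shared local cache, all 60 would go to free_block() instead. Either way, the surviving entries are then shifted to the front of the array and the new object is appended.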

	/*
	 * Release an obj back to its cache. If the obj has a constructed state,
	 * it must be in this state _before_ it is released. Called with
	 * disabled ints.
	 */
	static inline void __cache_free(struct kmem_cache *cachep, void *objp,
					void *caller)
	{
		/* get this CPU's local cache */
		struct array_cache *ac = cpu_cache_get(cachep);

		check_irq_off();
		kmemleak_free_recursive(objp, cachep->flags);
		objp = cache_free_debugcheck(cachep, objp, caller);

		kmemcheck_slab_free(cachep, objp, cachep->object_size);

		/*
		 * Skip calling cache_free_alien() when the platform is not NUMA.
		 * This will avoid cache misses that happen while accessing slabp
		 * (which is per page memory reference) to get nodeid. Instead use
		 * a global variable to skip the call, which is mostly likely to be
		 * present in the cache.
		 */
		if (nr_online_nodes > 1 && cache_free_alien(cachep, objp))
			return;

		/*
		 * If the number of free objects in the local cache is below the
		 * limit, the object's address is simply recorded in the entry
		 * array; otherwise a batch of free objects is first moved from
		 * the local cache back to the slab.
		 */
		if (likely(ac->avail < ac->limit)) {
			STATS_INC_FREEHIT(cachep);
		} else {
			STATS_INC_FREEMISS(cachep);
			cache_flusharray(cachep, ac);
		}

		/* effectively executes ac->entry[ac->avail++] = objp; */
		ac_put_obj(cachep, ac, objp);
	}
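One NUMA detail in __cache_free() above, summarized from the general behavior of mm/slab.c of this era rather than from the original article: cache_free_alien() returns nonzero when the object being freed actually belongs to a slab on a different node. In that case the object is handed to that node's "alien" cache and the local fast path is skipped entirely, which keeps each node's three chains free of foreign objects.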
	/*
	 * The local cache has too many objects; release a batch of them back
	 * to the slab three chains.
	 */
	static void cache_flusharray(struct kmem_cache *cachep, struct array_cache *ac)
	{
		int batchcount;
		struct kmem_list3 *l3;
		int node = numa_node_id();

		/* release batchcount objects at a time */
		batchcount = ac->batchcount;
	#if DEBUG
		BUG_ON(!batchcount || batchcount > ac->avail);
	#endif
		check_irq_off();
		l3 = cachep->nodelists[node];
		spin_lock(&l3->list_lock);
		if (l3->shared) {
			/*
			 * A shared local cache is enabled: release objects
			 * into it first.
			 */
			struct array_cache *shared_array = l3->shared;
			/* how many free objects the shared local cache can still hold */
			int max = shared_array->limit - shared_array->avail;
			if (max) {
				if (batchcount > max)
					batchcount = max;
				/* move batchcount objects into the shared local cache */
				memcpy(&(shared_array->entry[shared_array->avail]),
				       ac->entry, sizeof(void *) * batchcount);
				shared_array->avail += batchcount;
				goto free_done;
			}
		}

		/*
		 * No shared local cache: put the first batchcount objects of
		 * the local cache back into the slab three chains.
		 */
		free_block(cachep, ac->entry, batchcount, node);
	free_done:
	#if STATS
		{
			int i = 0;
			struct list_head *p;

			p = l3->slabs_free.next;
			while (p != &(l3->slabs_free)) {
				struct slab *slabp;

				slabp = list_entry(p, struct slab, list);
				BUG_ON(slabp->inuse);

				i++;
				p = p->next;
			}
			STATS_SET_FREEABLE(cachep, i);
		}
	#endif
		spin_unlock(&l3->list_lock);
		/* update the local cache's avail count */
		ac->avail -= batchcount;
		/*
		 * The first batchcount slots of the entry array are now free;
		 * move the remaining objects forward by batchcount positions.
		 */
		memmove(ac->entry, &(ac->entry[batchcount]), sizeof(void *) * ac->avail);
	}

	static void free_block(struct kmem_cache *cachep, void **objpp, int nr_objects,
			       int node)
	{
		int i;
		struct kmem_list3 *l3;

		for (i = 0; i < nr_objects; i++) {
			void *objp = objpp[i];
			struct slab *slabp;

			/*
			 * Get the slab descriptor from the object's virtual
			 * address: virtual address -> page -> slab.
			 */
			slabp = virt_to_slab(objp);
			/* get the node's kmem_list3 */
			l3 = cachep->nodelists[node];
			/* first remove the slab from whichever list it is on */
			list_del(&slabp->list);
			check_spinlock_acquired_node(cachep, node);
			check_slabp(cachep, slabp);
			/* put the object back onto its slab */
			slab_put_obj(cachep, slabp, objp, node);
			STATS_DEC_ACTIVE(cachep);
			/* one more free object in the kmem_list3 */
			l3->free_objects++;
			check_slabp(cachep, slabp);

			/* fixup slab chains */
			if (slabp->inuse == 0) {
				/* every object in this slab is now free */
				if (l3->free_objects > l3->free_limit) {
					/*
					 * Too many free objects in the three
					 * chains: subtract one slab's worth of
					 * objects and destroy the slab.
					 */
					l3->free_objects -= cachep->num;
					/*
					 * No need to drop any previously held
					 * lock here, even if we have an off-slab
					 * slab descriptor it is guaranteed to
					 * come from a different cache; refer to
					 * the comments before alloc_slabmgmt.
					 */
					slab_destroy(cachep, slabp);
				} else {
					/* otherwise add the slab to the free list */
					list_add(&slabp->list, &l3->slabs_free);
				}
			} else {
				/*
				 * Otherwise add it to the partial list.
				 * Unconditionally move a slab to the end of the
				 * partial list on free - maximum time for the
				 * other objects to be freed, too.
				 */
				list_add_tail(&slabp->list, &l3->slabs_partial);
			}
		}
	}
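How large is "too large"? In kernels of this era, l3->free_limit was derived from the node's CPU count and the batch size, roughly free_limit = (1 + nr_cpus_node(node)) * cachep->batchcount + cachep->num (treat the exact formula as an assumption from memory of mm/slab.c, not a statement from the original article). For example, with 4 CPUs on the node, batchcount = 60 and num = 30 objects per slab, free_limit would be 5 * 60 + 30 = 330 free objects before fully idle slabs start being destroyed.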
slab_put_obj() releases an object back into its slab:

	static void slab_put_obj(struct kmem_cache *cachep, struct slab *slabp,
				 void *objp, int nodeid)
	{
		/* get the object's index in the kmem_bufctl_t array */
		unsigned int objnr = obj_to_index(cachep, slabp, objp);

	#if DEBUG
		/* Verify that the slab belongs to the intended node */
		WARN_ON(slabp->nodeid != nodeid);

		if (slab_bufctl(slabp)[objnr] + 1 <= SLAB_LIMIT + 1) {
			printk(KERN_ERR "slab: double free detected in cache "
					"'%s', objp %p\n", cachep->name, objp);
			BUG();
		}
	#endif
		/*
		 * The next two steps are the insert operation of a static
		 * linked list: the freed slot points at the previous first
		 * free object...
		 */
		slab_bufctl(slabp)[objnr] = slabp->free;
		/* ...and the freed object becomes the new first free object */
		slabp->free = objnr;
		/* one fewer allocated object */
		slabp->inuse--;
	}
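The kmem_bufctl_t array behaves like a static linked list threaded through a single array of indices. A small userspace sketch (invented names; BUFCTL_END stands in for the kernel's end-of-list marker) of the same put/get operations:

	#include <assert.h>

	#define NUM_OBJS   4
	#define BUFCTL_END 0xffffffffU

	static unsigned int bufctl[NUM_OBJS];
	static unsigned int free_head;

	static void init_slab(void)
	{
		for (unsigned int i = 0; i < NUM_OBJS - 1; i++)
			bufctl[i] = i + 1;	/* each slot points at the next */
		bufctl[NUM_OBJS - 1] = BUFCTL_END;
		free_head = 0;
	}

	static unsigned int get_obj(void)	/* like slab_get_obj() */
	{
		unsigned int objnr = free_head;

		assert(objnr != BUFCTL_END);
		free_head = bufctl[objnr];
		return objnr;
	}

	static void put_obj(unsigned int objnr)	/* like slab_put_obj() */
	{
		bufctl[objnr] = free_head;	/* point at the old first free object */
		free_head = objnr;		/* freed object becomes the new head */
	}

	int main(void)
	{
		init_slab();
		unsigned int a = get_obj();	/* a == 0 */
		unsigned int b = get_obj();	/* b == 1 */
		put_obj(a);			/* 0 is the first free object again */
		assert(get_obj() == a);
		(void)b;
		return 0;
	}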

	/* get the page from the virtual address, then get the slab from the page */
	static inline struct slab *virt_to_slab(const void *obj)
	{
		struct page *page = virt_to_head_page(obj);

		return page_get_slab(page);
	}
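For completeness: the page-to-slab association that virt_to_slab() relies on is established when the slab is created, by stashing pointers in the otherwise unused lru field of the page descriptor. The helpers in mm/slab.c of this era looked roughly like this (reconstructed from memory, so treat the exact bodies as an assumption):

	/* at slab creation: remember which slab each page belongs to */
	static inline void page_set_slab(struct page *page, struct slab *slab)
	{
		page->lru.prev = (struct list_head *)slab;
	}

	/* at free time: recover the slab descriptor from the page */
	static inline struct slab *page_get_slab(struct page *page)
	{
		BUG_ON(!PageSlab(page));
		return (struct slab *)page->lru.prev;
	}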


