Linux Memory Management--slab



The kmem_cache_create() function touches architecture-related structures and is not found where the generic code would suggest (3.10.98 kernel version); I looked under arch/x86/kernel/.

Parameter description:

const char *name: the name of the slab, shown in /proc/slabinfo

size_t size: the size of each object

size_t align: the alignment of each object

unsigned long flags: SLAB flags, i.e. the flags used when the cache has to request more memory because it has run out of objects

void (*ctor)(void *): constructor for the objects

struct kmem_cache *
kmem_cache_create(const char *name, size_t size, size_t align,
                  unsigned long flags, void (*ctor)(void *))
{
    return kmem_cache_create_memcg(NULL, name, size, align, flags, ctor, NULL);
}
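
Before digging into the implementation, here is a minimal usage sketch. The object type my_obj, the cache name "my_obj_cache" and the flag choices are made up for illustration and do not come from the article; only the kmem_cache_* calls themselves are the real API.

#include <linux/module.h>
#include <linux/init.h>
#include <linux/slab.h>

struct my_obj {
    int id;
    char payload[100];
};

static struct kmem_cache *my_cache;

/* run once for each object when the cache populates a new slab page */
static void my_obj_ctor(void *ptr)
{
    struct my_obj *obj = ptr;

    obj->id = -1;
}

static void my_use(void)
{
    struct my_obj *obj = kmem_cache_alloc(my_cache, GFP_KERNEL);

    if (obj) {
        obj->id = 42;
        kmem_cache_free(my_cache, obj);
    }
}

static int __init my_init(void)
{
    my_cache = kmem_cache_create("my_obj_cache",        /* name shown in /proc/slabinfo */
                                 sizeof(struct my_obj), /* size of each object */
                                 0,                     /* 0: let the allocator choose the alignment */
                                 SLAB_HWCACHE_ALIGN,    /* align objects to a hardware cache line */
                                 my_obj_ctor);          /* constructor, may be NULL */
    if (!my_cache)
        return -ENOMEM;

    my_use();
    return 0;
}

static void __exit my_exit(void)
{
    /* a module must destroy its cache before it is unloaded */
    kmem_cache_destroy(my_cache);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");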


Tracing into the 3.10.98 kmem_cache_create() there are quite a few differences from older kernels, so let's look at the 2.6.32 version first.

/**
 * kmem_cache_create - Create a cache.
 * @name: A string which is used in /proc/slabinfo to identify this cache.
 * @size: The size of objects to be created in this cache.
 * @align: The required alignment for the objects.
 * @flags: SLAB flags
 * @ctor: A constructor for the objects.
 *
 * Returns a ptr to the cache on success, NULL on failure.
 * Cannot be called within a int, but can be interrupted.
 * The @ctor is run when new pages are allocated by the cache.
 *
 * @name must be valid until the cache is destroyed. This implies that
 * the module calling this has to destroy the cache before getting unloaded.
 * Note that kmem_cache_name() is not guaranteed to return the same pointer,
 * therefore applications must manage it themselves.
 *
 * The flags are
 *
 * %SLAB_POISON - Poison the slab with a known test pattern (a5a5a5a5)
 * to catch references to uninitialised memory.
 *
 * %SLAB_RED_ZONE - Insert `Red' zones around the allocated memory to check
 * for buffer overruns.
 *
 * %SLAB_HWCACHE_ALIGN - Align the objects in this cache to a hardware
 * cacheline.  This can be beneficial if you're counting cycles as closely
 * as davem.
 */
struct kmem_cache *
kmem_cache_create (const char *name, size_t size, size_t align,
    unsigned long flags, void (*ctor)(void *))
{
    size_t left_over, slab_size, ralign;
    struct kmem_cache *cachep = NULL, *pc;
    gfp_t gfp;

    /*
     * Sanity checks... these are all serious usage bugs.
     */
    /* Basic checks.  Memory has to be allocated so that @name can show up in
     * /proc/slabinfo, and that may sleep, so we must not be in interrupt
     * context. */
    if (!name || in_interrupt() || (size < BYTES_PER_WORD) ||
        size > KMALLOC_MAX_SIZE) {
        printk(KERN_ERR "%s: Early error in slab %s\n", __func__, name);
        BUG();
    }

    /*
     * We use cache_chain_mutex to ensure a consistent view of
     * cpu_online_mask as well.  Please see cpuup_callback
     */
    /* If the slab allocator is already up we must take the lock; during early
     * initialisation only one CPU is setting slab up, so no locking is
     * needed. */
    if (slab_is_available()) {
        get_online_cpus();
        mutex_lock(&cache_chain_mutex);
    }

    /* Walk every cache on cache_chain; all slab caches hang off this global
     * list. */
    list_for_each_entry(pc, &cache_chain, next) {
        char tmp;
        int res;

        /*
         * This happens when the module gets unloaded and doesn't
         * destroy its slab cache and no-one else reuses the vmalloc
         * area of the module.  Print a warning.
         */
        res = probe_kernel_address(pc->name, tmp);  /* check that the cache still has a readable name */
        if (res) {
            printk(KERN_ERR
                   "SLAB: cache with size %d has lost its name\n",
                   pc->buffer_size);
            continue;
        }

        if (!strcmp(pc->name, name)) {  /* the requested name is already on the list */
            printk(KERN_ERR
                   "kmem_cache_create: duplicate cache %s\n", name);
            dump_stack();
            goto oops;
        }
    }

#if DEBUG
    WARN_ON(strchr(name, ' '));     /* It confuses parsers */
#if FORCED_DEBUG
    /*
     * Enable redzoning and last user accounting, except for caches with
     * large objects, if the increased size would increase the object size
     * above the next power of two: caches with object sizes just above a
     * power of two have a significant amount of internal fragmentation.
     */
    if (size < 4096 || fls(size - 1) == fls(size - 1 + REDZONE_ALIGN +
                        2 * sizeof(unsigned long long)))
        flags |= SLAB_RED_ZONE | SLAB_STORE_USER;
    if (!(flags & SLAB_DESTROY_BY_RCU))
        flags |= SLAB_POISON;
#endif
    if (flags & SLAB_DESTROY_BY_RCU)
        BUG_ON(flags & SLAB_POISON);
#endif
    /*
     * Always checks flags, a caller might be expecting debug support which
     * isn't available.
     */
    BUG_ON(flags & ~CREATE_MASK);

    /*
     * Check that size is in terms of words.  This is needed to avoid
     * unaligned accesses for some archs when redzoning is used, and makes
     * sure any on-slab bufctl's are also correctly aligned.
     */
    /* Word alignment -- equivalent to
     * size = (size + (BYTES_PER_WORD - 1)) & ~(BYTES_PER_WORD - 1). */
    if (size & (BYTES_PER_WORD - 1)) {
        size += (BYTES_PER_WORD - 1);
        size &= ~(BYTES_PER_WORD - 1);
    }

    /* calculate the final buffer alignment: */

    /* 1) arch recommendation: can be overridden for debug */
    if (flags & SLAB_HWCACHE_ALIGN) {   /* align to the hardware cache line */
        /*
         * Default alignment: as specified by the arch code.  Except if
         * an object is really small, then squeeze multiple objects into
         * one cacheline.
         */
        ralign = cache_line_size();     /* alignment value provided by the architecture */
        while (size <= ralign / 2)      /* small objects: pack several into one cache line */
            ralign /= 2;
    } else {
        ralign = BYTES_PER_WORD;        /* default is word alignment */
    }

    /*
     * Redzoning and user store require word alignment or possibly larger.
     * Note this will be overridden by architecture or caller mandated
     * alignment if either is greater than BYTES_PER_WORD.
     */
    if (flags & SLAB_STORE_USER)
        ralign = BYTES_PER_WORD;

    if (flags & SLAB_RED_ZONE) {
        ralign = REDZONE_ALIGN;
        /* If redzoning, ensure that the second redzone is suitably
         * aligned, by adjusting the object size accordingly. */
        size += REDZONE_ALIGN - 1;
        size &= ~(REDZONE_ALIGN - 1);
    }
    /* everything above is for debugging */

    /* 2) arch mandated alignment */
    if (ralign < ARCH_SLAB_MINALIGN) {
        ralign = ARCH_SLAB_MINALIGN;    /* minimum alignment required by the architecture */
    }
    /* 3) caller mandated alignment */
    if (ralign < align) {
        ralign = align;
    }
    /* disable debug if necessary */
    if (ralign > __alignof__(unsigned long long))
        flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
    /*
     * 4) Store it.
     */
    align = ralign;

    if (slab_is_available())    /* slab is up: the allocation below may sleep */
        gfp = GFP_KERNEL;
    else                        /* early initialisation: must not sleep */
        gfp = GFP_NOWAIT;

    /* Get cache's description obj. */
    /* Allocate the new cachep from the cache_cache slab; cache_cache is the
     * slab that holds struct kmem_cache objects themselves. */
    cachep = kmem_cache_zalloc(&cache_cache, gfp);
    if (!cachep)
        goto oops;

#if DEBUG
    cachep->obj_size = size;

    /*
     * Both debugging options require word-alignment which is calculated
     * into align above.
     */
    if (flags & SLAB_RED_ZONE) {
        /* add space for red zone words */
        cachep->obj_offset += sizeof(unsigned long long);
        size += 2 * sizeof(unsigned long long);
    }
    if (flags & SLAB_STORE_USER) {
        /* user store requires one word storage behind the end of
         * the real object. But if the second red zone needs to be
         * aligned to 64 bits, we must allow that much space.
         */
        if (flags & SLAB_RED_ZONE)
            size += REDZONE_ALIGN;
        else
            size += BYTES_PER_WORD;
    }
#if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
    if (size >= malloc_sizes[INDEX_L3 + 1].cs_size
        && cachep->obj_size > cache_line_size()
        && ALIGN(size, align) < PAGE_SIZE) {
        cachep->obj_offset += PAGE_SIZE - ALIGN(size, align);
        size = PAGE_SIZE;
    }
#endif
#endif

    /*
     * Determine if the slab management is 'on' or 'off' slab.
     * (bootstrapping cannot cope with offslab caches so don't do
     * it too early on.)
     */
    /* Here we start dealing with the slab management header: is it stored on
     * the slab itself or somewhere outside the slab?  Larger objects
     * (>= PAGE_SIZE/8, i.e. 512 bytes) go off-slab; during early
     * initialisation the header is always kept on-slab. */
    if ((size >= (PAGE_SIZE >> 3)) && !slab_early_init)
        /*
         * Size is large, assume best to place the slab management obj
         * off-slab (should allow better packing of objs).
         */
        flags |= CFLGS_OFF_SLAB;    /* the management structure lives off-slab */

    size = ALIGN(size, align);      /* align the object size */

    /* Compute the left-over fragment; see the analysis of
     * calculate_slab_order() below. */
    left_over = calculate_slab_order(cachep, size, align, flags);

    if (!cachep->num) {     /* no objects fit: error */
        printk(KERN_ERR
               "kmem_cache_create: couldn't create cache %s.\n", name);
        kmem_cache_free(&cache_cache, cachep);
        cachep = NULL;
        goto oops;
    }
    /* size of the slab management header (struct slab plus the
     * kmem_bufctl_t array) */
    slab_size = ALIGN(cachep->num * sizeof(kmem_bufctl_t)
                      + sizeof(struct slab), align);

    /*
     * If the slab has been placed off-slab, and we have enough space then
     * move it on-slab. This is at the expense of any extra colouring.
     */
    /* Make full use of the fragment: if it is large enough to hold the slab
     * header (including the kmem_bufctl_t array), move the header back
     * on-slab. */
    if (flags & CFLGS_OFF_SLAB && left_over >= slab_size) {
        flags &= ~CFLGS_OFF_SLAB;   /* becomes on-slab */
        left_over -= slab_size;     /* the fragment shrinks accordingly */
    }

    if (flags & CFLGS_OFF_SLAB) {
        /* really off slab. No need for manual alignment */
        /* the header stays off-slab, so it does not need to be aligned */
        slab_size =
            cachep->num * sizeof(kmem_bufctl_t) + sizeof(struct slab);

#ifdef CONFIG_PAGE_POISONING
        /* If we're going to use the generic kernel_map_pages()
         * poisoning, then it's going to smash the contents of
         * the redzone and userword anyhow, so switch them off.
         */
        if (size % PAGE_SIZE == 0 && flags & SLAB_POISON)
            flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
#endif
    }

    cachep->colour_off = cache_line_size();     /* one colour step = one L1 cache line */
    /* Offset must be a multiple of the alignment. */
    if (cachep->colour_off < align)
        cachep->colour_off = align;
    cachep->colour = left_over / cachep->colour_off;
    cachep->slab_size = slab_size;
    cachep->flags = flags;
    cachep->gfpflags = 0;
    if (CONFIG_ZONE_DMA_FLAG && (flags & SLAB_CACHE_DMA))
        cachep->gfpflags |= GFP_DMA;
    cachep->buffer_size = size;
    cachep->reciprocal_buffer_size = reciprocal_value(size);

    if (flags & CFLGS_OFF_SLAB) {
        cachep->slabp_cache = kmem_find_general_cachep(slab_size, 0u);
        /*
         * This is a possibility for one of the malloc_sizes caches.
         * But since we go off slab only for object size greater than
         * PAGE_SIZE/8, and malloc_sizes gets created in ascending order,
         * this should not happen at all.
         * But leave a BUG_ON for some lucky dude.
         */
        BUG_ON(ZERO_OR_NULL_PTR(cachep->slabp_cache));
    }
    cachep->ctor = ctor;
    cachep->name = name;

    if (setup_cpu_cache(cachep, gfp)) {
        __kmem_cache_destroy(cachep);
        cachep = NULL;
        goto oops;
    }

    /* cache setup completed, link it into the list */
    list_add(&cachep->next, &cache_chain);
oops:
    if (!cachep && (flags & SLAB_PANIC))
        panic("kmem_cache_create(): failed to create slab `%s'\n", name);
    if (slab_is_available()) {
        mutex_unlock(&cache_chain_mutex);
        put_online_cpus();
    }
    return cachep;
}
EXPORT_SYMBOL(kmem_cache_create);
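
The tail of the function boils down to a few arithmetic decisions: whether the management header is kept off-slab (objects of PAGE_SIZE/8 = 512 bytes and above), how large that header is, and how many colour steps fit into the left-over bytes. The stand-alone sketch below replays just that arithmetic with assumed constants (4096-byte page, 64-byte cache line, 4-byte kmem_bufctl_t, 32-byte struct slab) and with num/left_over fed in by hand; the particular inputs correspond to 192-byte objects in a single page, as worked out in the cache_estimate sketch further down. It is a model of the logic above, not kernel code.

/* Stand-alone model of the on/off-slab and colouring decisions; all the
 * constants below are assumptions for illustration, not kernel values. */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE   4096u
#define CACHE_LINE  64u     /* assumed L1 cache line size */
#define BUFCTL_SIZE 4u      /* assumed sizeof(kmem_bufctl_t) */
#define SLAB_HDR    32u     /* assumed sizeof(struct slab) */

static size_t align_up(size_t x, size_t a)
{
    return (x + a - 1) & ~(a - 1);
}

int main(void)
{
    size_t size = 192, align = 64;  /* example object */
    unsigned int num = 20;          /* pretend calculate_slab_order() found 20 objects */
    size_t left_over = 128;         /* ...with 128 bytes of fragment left */

    /* object >= PAGE_SIZE/8 (512 bytes) -> management header kept off-slab */
    int off_slab = (size >= (PAGE_SIZE >> 3));

    /* size of the management header: struct slab + one bufctl per object */
    size_t slab_size = align_up(num * BUFCTL_SIZE + SLAB_HDR, align);

    /* if off-slab but the fragment can hold the header, move it back on-slab */
    if (off_slab && left_over >= slab_size) {
        off_slab = 0;
        left_over -= slab_size;
    }

    /* colouring: how many cache-line-sized offsets fit in the fragment */
    size_t colour_off = CACHE_LINE < align ? align : CACHE_LINE;
    size_t colour = left_over / colour_off;

    printf("off_slab=%d slab_size=%zu colour_off=%zu colour=%zu\n",
           off_slab, slab_size, colour_off, colour);
    return 0;
}

For 192-byte objects this prints off_slab=0, a 128-byte header, and 2 colours of 64 bytes each.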



Analysis of the fragment calculation function

    left_over = calculate_slab_order(cachep, size, align, flags);

/**
 * calculate_slab_order - calculate size (page order) of slabs
 * @cachep: pointer to the cache that is being created
 * @size: size of objects to be created in this cache.
 * @align: required alignment for the objects.
 * @flags: slab allocation flags
 *
 * Also calculates the number of objects per slab.
 *
 * This could be made much more intelligent.  For now, try to avoid using
 * high order pages for slabs.  When the gfp() functions are more friendly
 * towards high-order requests, this should be changed.
 */
static size_t calculate_slab_order(struct kmem_cache *cachep,
            size_t size, size_t align, unsigned long flags)
{
    unsigned long offslab_limit;
    size_t left_over = 0;
    int gfporder;

    for (gfporder = 0; gfporder <= KMALLOC_MAX_ORDER; gfporder++) { /* try orders 0..10 */
        unsigned int num;
        size_t remainder;

        cache_estimate(gfporder, size, align, flags, &remainder, &num);
        if (!num)   /* object too large: 2^gfporder pages cannot hold even one, try a larger order */
            continue;

        if (flags & CFLGS_OFF_SLAB) {
            /*
             * Max number of objs-per-slab for caches which
             * use off-slab slabs. Needed to avoid a possible
             * looping condition in cache_grow().
             */
            /* There are many explanations of this online; my own reading is
             * that it takes one object of this size, pretends it holds a
             * struct slab plus a kmem_bufctl_t array, and checks how many
             * array entries would fit. */
            offslab_limit = size - sizeof(struct slab);
            offslab_limit /= sizeof(kmem_bufctl_t);

            if (num > offslab_limit)    /* the number of objects cannot be too large */
                break;
        }

        /* Found something acceptable - save it away */
        cachep->num = num;              /* fill in the cache members */
        cachep->gfporder = gfporder;
        left_over = remainder;

        /*
         * A VFS-reclaimable slab tends to have most allocations
         * as GFP_NOFS and we really don't want to have to be allocating
         * higher-order pages when we are unable to shrink dcache.
         */
        if (flags & SLAB_RECLAIM_ACCOUNT)   /* reclaimable pages: skip the checks below */
            break;

        /*
         * Large number of objects is good, but very large slabs are
         * currently bad for the gfp()s.
         */
        if (gfporder >= slab_break_gfp_order)   /* reached the maximum order */
            break;

        /*
         * Acceptable internal fragmentation?
         */
        /* stop once the wasted space is below 1/8 of (PAGE_SIZE << gfporder) */
        if (left_over * 8 <= (PAGE_SIZE << gfporder))
            break;
    }
    return left_over;
}
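
To see the loop's acceptance rule in action, here is a stand-alone sketch that walks the page orders for one example object size and stops at the first order whose waste is at most 1/8 of the slab. It ignores the off-slab limit, SLAB_RECLAIM_ACCOUNT and slab_break_gfp_order checks, and the page size and maximum order are assumed values.

/* Stand-alone trace of the order-selection rule in calculate_slab_order():
 * pick the first order whose internal fragmentation is at most 1/8 of the
 * slab.  Management overhead is ignored here (as for an off-slab cache). */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE           4096u
#define KMALLOC_MAX_ORDER   10

int main(void)
{
    size_t size = 700;  /* example object size in bytes */
    int order;

    for (order = 0; order <= KMALLOC_MAX_ORDER; order++) {
        size_t slab = (size_t)PAGE_SIZE << order;
        size_t num = slab / size;               /* objects that fit */
        size_t left_over = slab - num * size;   /* wasted bytes */

        if (!num)                               /* object larger than the slab */
            continue;

        printf("order %d: %zu objects, %zu bytes wasted\n",
               order, num, left_over);

        if (left_over * 8 <= slab) {            /* waste <= 1/8 of the slab: accept */
            printf("-> accept order %d\n", order);
            break;
        }
    }
    return 0;
}

For 700-byte objects this rejects order 0 (5 objects, 596 bytes wasted, more than 1/8 of 4096) and accepts order 1 (11 objects, 492 bytes wasted, less than 1/8 of 8192).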


As noted in the comments above, the next function calculates how many objects and how many left-over (fragment) bytes a given buffer size yields:

cache_estimate(gfporder, size, align, flags, &remainder, &num);

/*
 * Calculate the number of objects and left-over bytes for a given buffer size.
 */
static void cache_estimate(unsigned long gfporder, size_t buffer_size,
                           size_t align, int flags, size_t *left_over,
                           unsigned int *num)
{
    int nr_objs;
    size_t mgmt_size;
    size_t slab_size = PAGE_SIZE << gfporder;   /* total size of the allocated pages */

    /*
     * The slab management structure can be either off the slab or
     * on it. For the latter case, the memory allocated for a
     * slab is used for:
     *
     * - The struct slab
     * - One kmem_bufctl_t for each object
     * - Padding to respect alignment of @align
     * - @buffer_size bytes for each object
     *
     * If the slab management structure is off the slab, then the
     * alignment will already be calculated into the size. Because
     * the slabs are all pages aligned, the objects will be at the
     * correct alignment when allocated.
     */
    if (flags & CFLGS_OFF_SLAB) {
        /* off-slab management structure: the simple case */
        mgmt_size = 0;
        nr_objs = slab_size / buffer_size;  /* simply divide by the object size */

        if (nr_objs > SLAB_LIMIT)           /* cap on the number of objects */
            nr_objs = SLAB_LIMIT;
    } else {
        /* on-slab management structure: a bit more involved */
        /*
         * Ignore padding for the initial guess. The padding
         * is at most @align-1 bytes, and @buffer_size is at
         * least @align. In the worst case, this result will
         * be one greater than the number of objects that fit
         * into the memory allocation when taking the padding
         * into account.
         */
        /* On-slab there is exactly one struct slab, plus as many kmem_bufctl_t
         * entries as objects, because kmem_bufctl_t is what tracks whether an
         * object is free. */
        nr_objs = (slab_size - sizeof(struct slab)) /
                  (buffer_size + sizeof(kmem_bufctl_t));

        /*
         * This calculated number is either the right
         * amount, or one greater than what we want.
         */
        /* The guess above ignored alignment; redo the check with the aligned
         * management size and drop one object if it would overflow the slab. */
        if (slab_mgmt_size(nr_objs, align) + nr_objs * buffer_size
               > slab_size)
            nr_objs--;

        if (nr_objs > SLAB_LIMIT)
            nr_objs = SLAB_LIMIT;

        mgmt_size = slab_mgmt_size(nr_objs, align); /* aligned size of struct slab + nr_objs * sizeof(kmem_bufctl_t) */
    }
    *num = nr_objs;
    /* total size - space taken by the objects - aligned management size */
    *left_over = slab_size - nr_objs * buffer_size - mgmt_size;
}
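
Below is a stand-alone replay of the on-slab branch of this calculation, with assumed sizes for struct slab (32 bytes) and kmem_bufctl_t (4 bytes) and a 4096-byte page. It prints how many 192-byte objects fit in one page and how much is left over for colouring, the same figures used in the earlier on/off-slab sketch.

/* Stand-alone re-implementation of the on-slab branch of cache_estimate(),
 * with assumed structure sizes; only for illustrating the arithmetic. */
#include <stdio.h>
#include <stddef.h>

#define PAGE_SIZE   4096u
#define SLAB_HDR    32u     /* assumed sizeof(struct slab) */
#define BUFCTL_SIZE 4u      /* assumed sizeof(kmem_bufctl_t) */

static size_t slab_mgmt_size(size_t nr_objs, size_t align)
{
    /* struct slab plus one bufctl per object, rounded up to @align */
    size_t raw = SLAB_HDR + nr_objs * BUFCTL_SIZE;

    return (raw + align - 1) & ~(align - 1);
}

static void estimate_on_slab(unsigned gfporder, size_t buffer_size, size_t align,
                             size_t *left_over, unsigned *num)
{
    size_t slab_size = (size_t)PAGE_SIZE << gfporder;

    /* first guess, ignoring alignment padding of the management area */
    size_t nr_objs = (slab_size - SLAB_HDR) / (buffer_size + BUFCTL_SIZE);

    /* the guess may be one too high once the padding is accounted for */
    if (slab_mgmt_size(nr_objs, align) + nr_objs * buffer_size > slab_size)
        nr_objs--;

    *num = nr_objs;
    *left_over = slab_size - nr_objs * buffer_size
                 - slab_mgmt_size(nr_objs, align);
}

int main(void)
{
    size_t left_over;
    unsigned num;

    /* example: 192-byte objects, 64-byte alignment, one page per slab */
    estimate_on_slab(0, 192, 64, &left_over, &num);
    printf("%u objects per slab, %zu bytes left over\n", num, left_over);
    return 0;
}

With these assumptions the answer is 20 objects per slab and 128 left-over bytes, which is where the colour = 2 figure in the earlier sketch comes from.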


For more on the slab colouring problem, see: http://blog.csdn.net/zqy2000zqy/article/details/1137895
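
The colour and colour_off values computed above are what that link is about: each new slab starts its objects at a different multiple of colour_off, so objects from different slabs do not all land on the same cache lines. A minimal sketch of how the starting offset cycles, reusing the assumed colour = 2 and colour_off = 64 from the examples above; the wrap-around behaviour is my reading of how the allocator advances the colour cursor when it grows a cache, not a quote of kernel code.

/* Minimal sketch of slab colouring: successive slabs start their first
 * object at colour_next * colour_off, and colour_next wraps after "colour"
 * steps.  colour = 2 and colour_off = 64 are the assumed example values. */
#include <stdio.h>

int main(void)
{
    unsigned colour = 2;        /* number of distinct colours (left_over / colour_off) */
    unsigned colour_off = 64;   /* one colour step, e.g. one cache line */
    unsigned colour_next = 0;   /* per-cache cursor, advanced for every new slab */
    int slab;

    for (slab = 0; slab < 5; slab++) {
        unsigned offset = colour_next * colour_off; /* first object offset in this slab */

        printf("slab %d: objects start at offset %u\n", slab, offset);

        colour_next++;
        if (colour_next >= colour)  /* wrap: only offsets 0 .. (colour-1)*colour_off are used */
            colour_next = 0;
    }
    return 0;
}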


