Deep understanding of memcached principles


1. Why use Memcache

Because websites face highly concurrent reads and writes, traditional relational databases start to hit bottlenecks, for example:

1) Highly concurrent reads and writes to the database:

A relational database is a heavyweight system, and processing each request is time-consuming (parsing SQL statements, transactions, and so on). Under highly concurrent reads and writes (tens of thousands of accesses per second), a relational database cannot keep up.

2) Processing massive amounts of data:

Large SNS sites (such as Twitter and Sina Weibo) generate huge volumes of data every day. For a relational database, finding a single record in a table with hundreds of billions of rows is very inefficient.


Memcache solves these problems well.

In practice, database query results are usually saved in Memcache; subsequent requests read the data directly from Memcache instead of querying the database again, which greatly reduces the load on the database (see the PHP example in section 4).

The objects saved in Memcache are held entirely in memory, which is why Memcache is so fast.


2. Installation and use of Memcache

There are plenty of tutorials on the Internet, so installation is not covered here.

3. Event handling based on Libevent
Libevent is a library that wraps event mechanisms such as Linux's epoll and the kqueue of BSD-like operating systems behind a unified interface, so event handling stays O(1) even as the number of connections to the server grows.

Memcached uses libevent to achieve high performance on Linux, BSD, Solaris, and other operating systems.
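As a minimal illustration of that unified interface (this is a sketch, not memcached's actual event loop, which dispatches connections to per-thread event loops), the code below watches a single file descriptor with libevent 2.x; stdin stands in for a listening socket:

#include <event2/event.h>
#include <stdio.h>
#include <unistd.h>

/* Called by libevent whenever the watched fd becomes readable. */
static void on_readable(evutil_socket_t fd, short events, void *arg)
{
    char buf[1024];
    ssize_t n = read(fd, buf, sizeof(buf));
    (void) events; (void) arg;
    printf("fd %d readable, read %zd bytes\n", (int) fd, n);
}

int main(void)
{
    /* event_base_new() picks the best backend available: epoll, kqueue, ... */
    struct event_base *base = event_base_new();
    /* Watch stdin (fd 0) here; a server would watch its listening socket instead. */
    struct event *ev = event_new(base, 0, EV_READ | EV_PERSIST, on_readable, NULL);
    event_add(ev, NULL);            /* NULL timeout: wait indefinitely */
    event_base_dispatch(base);      /* run the event loop */
    event_free(ev);
    event_base_free(base);
    return 0;
}

Compile with -levent; the same code runs unchanged on top of epoll, kqueue, or whichever backend libevent selects.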

References:
Libevent: http://www.monkey.org/~provos/libevent/
The C10K problem: http://www.kegel.com/c10k.html


4. Memcache usage example

<?php
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);

$uid = (int) $_GET['uid'];
$sql = "SELECT * FROM users WHERE uid = $uid";
$key = md5($sql);
if (!($data = $mc->get($key))) {
    // Cache miss: query the database and fill the cache
    $conn = mysql_connect('localhost', 'test', 'test');
    mysql_select_db('test');
    $result = mysql_query($sql);
    while ($row = mysql_fetch_object($result)) {
        $data[] = $row;
    }
    $mc->add($key, $data);
}
var_dump($data);
?>


5. How Memcache supports high concurrency (further research is needed here)

Memcache uses multiplexed I/O models (epoll, select, and so on). With traditional blocking I/O, if a user connection is not yet ready for I/O, the system waits until it becomes ready; in the meantime, other users connecting to the server are likely to be blocked waiting for a response.

Multiplexed I/O is instead a readiness-notification model: when a connection becomes ready for I/O, the system notifies us that it can be read or written, so the server never blocks on a single user. This is why Memcache can support high concurrency (see the sketch below).
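To make the readiness-notification idea concrete, here is a rough Linux-only sketch that uses epoll directly (memcached itself goes through libevent rather than calling epoll by hand); listen_fd is assumed to be an already-created, non-blocking listening socket:

#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Serve many connections from one thread: block only in epoll_wait(),
 * never on an individual client. */
void event_loop(int listen_fd)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event ready[64];
    for (;;) {
        int n = epoll_wait(epfd, ready, 64, -1);   /* wait until some fd is ready */
        for (int i = 0; i < n; i++) {
            if (ready[i].data.fd == listen_fd) {
                /* New client: register it with epoll as well. */
                int conn = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &cev);
            } else {
                /* This connection is ready: read it without blocking the others. */
                char buf[4096];
                ssize_t r = read(ready[i].data.fd, buf, sizeof(buf));
                if (r <= 0) { close(ready[i].data.fd); continue; }
                /* ...parse the memcached protocol and write a response here... */
            }
        }
    }
}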

In addition, Memcache is multithreaded, so it can process multiple requests at the same time. The number of worker threads (memcached's -t option) is generally set to the number of CPU cores, which is usually the most efficient configuration.


6. Save data using the Slab allocation algorithm

The principle of the slab allocation algorithm is to divide a fixed-size (1 MB) block of memory into n small chunks, as shown in the following illustration:



The slab allocation algorithm calls each 1 MB block of memory a slab page. It requests a slab page from the system, divides the page into several small chunks (as shown above), and then hands those chunks out to callers. The splitting algorithm is as follows (in slabs.c):

(Note: the memcached source referenced here is on GitHub at https://github.com/wusuopubupt/memcached)

/**
 * Determines the chunk sizes and initializes the slab class descriptors
 * accordingly.
 */
void slabs_init(const size_t limit, const double factor, const bool prealloc) {
    int i = POWER_SMALLEST - 1;
    unsigned int size = sizeof(item) + settings.chunk_size;

    mem_limit = limit;
    if (prealloc) {
        /* Allocate everything in a big chunk with malloc (request the memory via malloc) */
        mem_base = malloc(mem_limit);
        if (mem_base != NULL) {
            mem_current = mem_base;
            mem_avail = mem_limit;
        } else {
            fprintf(stderr, "Warning: Failed to allocate requested memory in"
                    " one large chunk.\nWill allocate in smaller chunks\n");
        }
    }

    memset(slabclass, 0, sizeof(slabclass));
    while (++i < POWER_LARGEST && size <= settings.item_size_max / factor) {
        /* Make sure items are always n-byte aligned (note the byte alignment here) */
        if (size % CHUNK_ALIGN_BYTES)
            size += CHUNK_ALIGN_BYTES - (size % CHUNK_ALIGN_BYTES);
        slabclass[i].size = size;
        slabclass[i].perslab = settings.item_size_max / slabclass[i].size;
        size *= factor;   /* grow the chunk size by the factor (1.25 by default) */
        if (settings.verbose > 1) {
            fprintf(stderr, "slab class %3d: chunk size %9u perslab %7u\n",
                    i, slabclass[i].size, slabclass[i].perslab);
        }
    }

    power_largest = i;
    slabclass[power_largest].size = settings.item_size_max;
    slabclass[power_largest].perslab = 1;
    if (settings.verbose > 1) {
        fprintf(stderr, "slab class %3d: chunk size %9u perslab %7u\n",
                i, slabclass[i].size, slabclass[i].perslab);
    }

    /* for the test suite: faking of how much we've already malloc'd */
    {
        char *t_initial_malloc = getenv("T_MEMD_INITIAL_MALLOC");
        if (t_initial_malloc) {
            mem_malloced = (size_t)atol(t_initial_malloc);
        }
    }

    if (prealloc) {
        slabs_preallocate(power_largest);
    }
}



The slabclass array in the code above is an array of slabclass_t structures, defined as follows:

typedef struct {
    unsigned int size;          /* sizes of items */
    unsigned int perslab;       /* how many items per slab */
    void **slots;               /* list of item ptrs */
    unsigned int sl_total;      /* size of previous array */
    unsigned int sl_curr;       /* first free slot */
    void *end_page_ptr;         /* pointer to next free item at end of page, or 0 */
    unsigned int end_page_free; /* number of items remaining at end of last alloced page */
    unsigned int slabs;         /* how many slabs were allocated for this class */
    void **slab_list;           /* array of slab pointers */
    unsigned int list_size;     /* size of prev array */
    unsigned int killing;       /* index+1 of dying slab, or zero if none */
    size_t requested;           /* the number of requested bytes */
} slabclass_t;

Borrowing a diagram from elsewhere to illustrate the slabclass_t structure:



As the splitting code shows, the slab algorithm divides slab pages into chunks of different sizes, where each successive chunk size is the previous one multiplied by the growth factor (1.25 by default).

Run memcached -u root -vv to view the chunk sizes actually allocated (note the 8-byte alignment):
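As a rough illustration (not memcached's code), the following standalone sketch reproduces that progression using the same loop structure as slabs_init() above; the 96-byte starting size, 1.25 growth factor, 8-byte alignment, and 1 MB page are assumed defaults, not values read from a running server:

#include <stdio.h>

#define SLAB_PAGE_SIZE  (1024 * 1024)  /* one slab page: 1 MB */
#define CHUNK_ALIGN     8              /* 8-byte alignment, as in the -vv output */

int main(void)
{
    double factor = 1.25;              /* assumed default growth factor */
    unsigned int size = 96;            /* assumed starting chunk size */
    int cls = 1;

    while (size <= SLAB_PAGE_SIZE / factor) {
        /* round up to the alignment boundary, just like slabs_init() does */
        if (size % CHUNK_ALIGN)
            size += CHUNK_ALIGN - (size % CHUNK_ALIGN);
        printf("slab class %3d: chunk size %9u perslab %7u\n",
               cls, size, SLAB_PAGE_SIZE / size);
        size *= factor;
        cls++;
    }
    return 0;
}

With these assumptions the first classes come out as 96, 120, 152, 192, 240, ... bytes, each 1.25 times the previous one rounded up to an 8-byte boundary.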




Finding the most appropriate chunk size (slab class) for a requested item:

/*
 * Figures out which slab class (chunk size) is required to store an item of
 * a given size.
 *
 * Given object size, return id to use when allocating/freeing memory for object
 * 0 means error: can't store such a large object
 */
unsigned int slabs_clsid(const size_t size) {
    int res = POWER_SMALLEST;   /* start from the smallest chunk */

    if (size == 0)
        return 0;
    while (size > slabclass[res].size)   /* walk up the classes until the first chunk large enough is found */
        if (res++ == power_largest)      /* won't fit in the biggest slab */
            return 0;
    return res;
}
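As a worked example of that lookup (using a made-up table of the first few chunk sizes, not memcached's real slabclass array), a 130-byte item does not fit in the 96- or 120-byte classes and therefore lands in the 152-byte class:

#include <stdio.h>

int main(void)
{
    /* Hypothetical chunk sizes for the first few slab classes (factor 1.25, 8-byte aligned) */
    unsigned int class_size[] = { 96, 120, 152, 192, 240, 304, 384, 480 };
    int nclasses = sizeof(class_size) / sizeof(class_size[0]);

    unsigned int item_size = 130;   /* example: a 130-byte item */
    int res = 0;
    while (res < nclasses && item_size > class_size[res])
        res++;                      /* walk up until the chunk is large enough */

    if (res == nclasses)
        printf("item of %u bytes is too large for this toy table\n", item_size);
    else
        printf("item of %u bytes -> class %d (chunk size %u, %u bytes unused)\n",
               item_size, res + 1, class_size[res], class_size[res] - item_size);
    return 0;
}

The 22 unused bytes of the 152-byte chunk illustrate the trade-off in slab allocation: some memory is lost to internal fragmentation in exchange for fast allocation and freedom from general-purpose heap fragmentation.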
  


Memory allocation:

(Reference: http://slowsnail.com.cn/?p=20)

