High-Concurrency Server Design: Memory Pool Design

Source: Internet
Author: User
Different businesses call for different designs, but they all share common goals, performance above all. I have been developing servers for many years, and I am sometimes asked what server performance really means and how different servers compare. The simple answer is QPS and concurrency, but that answer can be misleading: QPS and concurrency are only comparable for the same service, and different services put different pressure on the same server.

Think of the server as a ship; its performance is the ship's capacity, speed, and stability. Performance means using resources sparingly: don't touch I/O when memory will do, and use as little CPU as possible. At the same QPS, the server that uses less CPU and memory performs better; and the server that sustains a higher QPS performs better, even if it uses more CPU and memory.

What guarantees performance? An efficient event model, a simple and clear business architecture, unified and stable resource management, and skilled people. Let's start with resources. Most resources are I/O related. If you have read the previous article, the connection pool will be familiar: a connection is an I/O resource of the system. Now let's look at another I/O-adjacent resource: memory. If you have read the code of servers such as Apache and Nginx, or plan to, memory management is the best place to start, because it is closely tied to server performance. A good memory pool design is both fast and stable.

Memory falls into three lifecycle classes: global memory, which holds process-wide state for the lifetime of the process; conn memory, which holds per-connection state from the moment a connection is created until it is closed; and busi memory, which holds business state and is released when each piece of business processing ends. A simple memory pool can be defined as follows:
```c
typedef struct yumei_mem_buf_s  yumei_mem_buf_t;
typedef struct yumei_mem_pool_s yumei_mem_pool_t;

struct yumei_mem_buf_s {
    int                size;
    char              *pos;
    char              *start;
    yumei_mem_pool_t  *pool;
};

struct yumei_mem_pool_s {
    int                size;
    char              *data;
    char              *last;
    yumei_mem_pool_t  *next;
    yumei_mem_pool_t  *current;
};

yumei_mem_pool_t *yumei_mem_pool_create( int block_size, int block_num );
int               yumei_mem_pool_free( yumei_mem_pool_t *pool );
yumei_mem_buf_t  *yumei_mem_buf_malloc( yumei_mem_pool_t *pool, int size );
int               yumei_mem_buf_free( yumei_mem_buf_t *buf );
```
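The article only declares the interface, so as a rough illustration here is a minimal sketch of what might sit behind these four functions. This is my own bump-allocator interpretation, not the original implementation: blocks are chained through next, current skips blocks already treated as exhausted, and individual buf frees are no-ops until the whole pool is destroyed.

```c
#include <stdlib.h>
#include <string.h>

typedef struct yumei_mem_buf_s  yumei_mem_buf_t;
typedef struct yumei_mem_pool_s yumei_mem_pool_t;

struct yumei_mem_buf_s {
    int               size;   /* usable bytes in this buffer                  */
    char             *pos;    /* current read/write position                  */
    char             *start;  /* first usable byte of the allocation          */
    yumei_mem_pool_t *pool;   /* owning pool, so a buf can find its way home  */
};

struct yumei_mem_pool_s {
    int               size;     /* bytes per block                       */
    char             *data;     /* start of this block's storage         */
    char             *last;     /* next free byte inside this block      */
    yumei_mem_pool_t *next;     /* following block in the chain          */
    yumei_mem_pool_t *current;  /* first block that still has free room  */
};

/* Create a chain of block_num blocks, each with block_size bytes of storage. */
yumei_mem_pool_t *yumei_mem_pool_create( int block_size, int block_num )
{
    yumei_mem_pool_t *head = 0, *p, **link = &head;
    int i;

    for( i = 0; i < block_num; i++ ){
        p = malloc( sizeof( yumei_mem_pool_t ) + block_size );
        if( !p ) return 0;  /* a real pool would unwind the partial chain */
        p->size = block_size;
        p->data = ( char * )( p + 1 );  /* storage lives right after the header */
        p->last = p->data;
        p->next = 0;
        *link = p;
        link = &p->next;
    }
    for( p = head; p; p = p->next ) p->current = head;
    return head;
}

/* Bump-allocate a buf header plus size bytes from the first block with room. */
yumei_mem_buf_t *yumei_mem_buf_malloc( yumei_mem_pool_t *pool, int size )
{
    yumei_mem_pool_t *p;
    yumei_mem_buf_t  *buf;
    int need = ( int )sizeof( yumei_mem_buf_t ) + size;
    need = ( need + 7 ) & ~7;  /* keep every allocation 8-byte aligned */

    for( p = pool->current; p; p = p->next ){
        if( p->data + p->size - p->last >= need ){
            buf = ( yumei_mem_buf_t * )p->last;
            p->last += need;
            buf->size  = size;
            buf->start = ( char * )( buf + 1 );
            buf->pos   = buf->start;
            buf->pool  = pool;
            return buf;
        }
        pool->current = p->next;  /* simplification: treat this block as full */
    }
    return 0;  /* out of blocks; a fuller design would fall back to large */
}

/* Bump allocators reclaim nothing per-buf; memory returns at pool_free. */
int yumei_mem_buf_free( yumei_mem_buf_t *buf )
{
    return buf ? 0 : -1;
}

int yumei_mem_pool_free( yumei_mem_pool_t *pool )
{
    yumei_mem_pool_t *p, *n;
    if( !pool ) return -1;
    for( p = pool; p; p = n ){ n = p->next; free( p ); }
    return 0;
}
```

The trade-off shown here is typical of pool allocators: individual frees cost nothing because they do nothing, and the whole lifetime's memory is released in one pass when the pool is destroyed.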

When each connection begins, create a memory pool dedicated to that connection to hold its I/O data. When a new piece of business starts, create a business memory pool, and release it when the business processing completes:

```c
typedef struct yumei_busi_s yumei_busi_t;

struct yumei_busi_s {
    yumei_mem_pool_t *pool;
    ......
};

#define YUMEI_BUSI_MEM_BLOCK_SIZE 512
#define YUMEI_BUSI_MEM_BLOCK_NUM  32

yumei_busi_t *yumei_busi_create()
{
    yumei_busi_t     *busi;
    yumei_mem_pool_t *pool;
    yumei_mem_buf_t  *buf;
    int               size;

    pool = yumei_mem_pool_create( YUMEI_BUSI_MEM_BLOCK_SIZE, YUMEI_BUSI_MEM_BLOCK_NUM );
    if( !pool ){
        return 0;
    }

    size = sizeof( yumei_busi_t );
    buf = yumei_mem_buf_malloc( pool, size );
    if( !buf ){
        yumei_mem_pool_free( pool );
        return 0;
    }

    busi = ( yumei_busi_t * )buf->start;
    busi->pool = pool;
    return busi;
}

#define YUMEI_BUSI_ERROR -1
#define YUMEI_BUSI_OK     0

int yumei_busi_free( yumei_busi_t *busi )
{
    if( !busi ){
        return YUMEI_BUSI_ERROR;
    }

    yumei_mem_pool_free( busi->pool );
    return YUMEI_BUSI_OK;
}
```

In some cases the business is relatively simple: a connection corresponds to only one business at a time, or to several that never run in parallel. Then a separate business memory pool is unnecessary, and the business can allocate directly from the connection's memory pool:

```c
yumei_busi_t *yumei_busi_create( yumei_conn_t *conn )
{
    yumei_busi_t     *busi;
    yumei_mem_pool_t *pool;
    yumei_mem_buf_t  *buf;
    int               size;

    pool = conn->pool;
    if( !pool ){
        return 0;
    }

    size = sizeof( yumei_busi_t );
    buf = yumei_mem_buf_malloc( pool, size );
    if( !buf ){
        /* the pool belongs to the connection; do not free it here */
        return 0;
    }

    busi = ( yumei_busi_t * )buf->start;
    busi->pool = pool;
    return busi;
}

#define YUMEI_CONN_ERROR -1
#define YUMEI_CONN_OK     0

int yumei_conn_close( yumei_conn_t *conn )
{
    if( !conn ){
        return YUMEI_CONN_ERROR;
    }

    yumei_mem_pool_free( conn->pool );
    return YUMEI_CONN_OK;
}
```

Now that we have seen how the memory pool is used, let's look at its internal design. In the pool structure, size corresponds to block_size; data and last are the block's starting address and next allocatable address; next and current point to the next memory pool block and the currently available one. In many production servers you will see one more field: large, a list used to allocate oversized blocks. When it is unclear in advance how much memory a business will need, large is often necessary, so the memory pool structure becomes:

```c
typedef struct yumei_mem_large_s yumei_mem_large_t;

struct yumei_mem_large_s {
    char              *data;
    int                size;
    yumei_mem_large_t *next;
};

struct yumei_mem_pool_s {
    int                size;
    char              *data;
    char              *last;
    yumei_mem_pool_t  *next;
    yumei_mem_pool_t  *current;
    yumei_mem_large_t *large;
};
```
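The article does not show how the large list is used, so here is a hedged sketch of the usual pattern: requests bigger than a block go straight to the heap, get linked into the pool's large list, and are released together when the pool is destroyed. The names pool_like_t, large_alloc, and large_free_all are my own illustrative inventions; the stand-in struct carries only the fields this path touches.

```c
#include <stdlib.h>

typedef struct yumei_mem_large_s yumei_mem_large_t;

struct yumei_mem_large_s {
    char              *data;
    int                size;
    yumei_mem_large_t *next;
};

/* Only the fields the large path touches; the real pool has more. */
typedef struct {
    int                size;   /* block_size: requests above this go to large */
    yumei_mem_large_t *large;  /* head of the oversized-allocation list       */
} pool_like_t;

/* Allocate an oversized request from the heap and link it into the
   pool's large list so it is released together with the pool. */
char *large_alloc( pool_like_t *pool, int size )
{
    yumei_mem_large_t *l = malloc( sizeof( yumei_mem_large_t ) );
    if( !l ) return 0;
    l->data = malloc( size );
    if( !l->data ){ free( l ); return 0; }
    l->size = size;
    l->next = pool->large;  /* push onto the singly linked list */
    pool->large = l;
    return l->data;
}

/* Called during pool destruction: free every oversized allocation. */
void large_free_all( pool_like_t *pool )
{
    yumei_mem_large_t *l, *n;
    for( l = pool->large; l; l = n ){
        n = l->next;
        free( l->data );
        free( l );
    }
    pool->large = 0;
}
```

Keeping oversized allocations on a per-pool list preserves the pool's key property: one call tears down everything the lifetime allocated, with no per-allocation bookkeeping by the business code.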

For some special businesses, where the memory each business uses is fixed and of similar size, the memory pool degenerates into fixed-size memory management, which is very simple. Such a pool can be bound to a connection and need not be released when the connection ends: the pool is kept and reused by the next connection, saving even more.
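The keep-and-reuse idea above can be sketched as a small idle stack of pools. This is my own single-threaded illustration (a real server would shard or lock the idle list); fixed_pool_t, conn_pool_acquire, and conn_pool_release are hypothetical names.

```c
#include <stdlib.h>

/* A trivial stand-in for a connection's fixed-size pool. */
typedef struct fixed_pool_s {
    struct fixed_pool_s *next;  /* link used while parked on the idle stack */
    /* ... fixed-size slabs would live here ... */
} fixed_pool_t;

static fixed_pool_t *idle_pools = 0;  /* pools parked between connections */

/* On connection start: reuse a parked pool if one exists, else make one. */
fixed_pool_t *conn_pool_acquire( void )
{
    fixed_pool_t *p = idle_pools;
    if( p ){
        idle_pools = p->next;  /* pop from the idle stack */
        return p;
    }
    return calloc( 1, sizeof( fixed_pool_t ) );
}

/* On connection close: do not free; park the pool for the next
   connection, avoiding repeated create/destroy cost. */
void conn_pool_release( fixed_pool_t *p )
{
    p->next = idle_pools;
    idle_pools = p;
}
```

Because every pool is the same fixed size, any parked pool fits any new connection, which is exactly what makes this reuse trick safe.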
