Nginx Source Code Analysis: Memory Pool


 

Nginx's memory pool implementation is exquisite, and the code is very concise. All memory pools share the same basic goal: request a large block of memory up front, instead of dribbling out many small allocations to the system one at a time.

 

1. Creating a memory pool

The nginx memory pool is maintained mainly by the following two structures, which describe the pool header and the data part respectively. The data part is where small blocks of memory are handed out to users.

 

// This structure maintains the data block of the memory pool, from which user allocations are made.
typedef struct {
    u_char              *last;    // end of the memory allocated so far, i.e. where the next allocation starts
    u_char              *end;     // end of the memory pool
    ngx_pool_t          *next;    // link to the next memory pool
    ngx_uint_t           failed;  // number of times this pool failed to satisfy an allocation request
} ngx_pool_data_t;

// This structure maintains the header information of the whole memory pool.
struct ngx_pool_s {
    ngx_pool_data_t      d;        // the data block
    size_t               max;      // size of the data block, i.e. the largest "small" allocation
    ngx_pool_t          *current;  // the memory pool currently used for small allocations
    ngx_chain_t         *chain;    // a buffer chain can be attached here
    ngx_pool_large_t    *large;    // large allocations, i.e. requests larger than max
    ngx_pool_cleanup_t  *cleanup;  // resources to be released together with the pool
    ngx_log_t           *log;
};

 

With these two structures, a memory pool can be created. The interface nginx provides is ngx_pool_t *ngx_create_pool(size_t size, ngx_log_t *log) (in src/core/ngx_palloc.c); calling it creates a memory pool of the given size. The layout is easiest to see from a diagram of the pool structure, so the code is not analyzed line by line here.

 

The ngx_create_pool interface allocates one block of memory of the requested size and then initializes the header fields. The first four fields (last, end, next, failed) come from the first structure and maintain the data part: last marks the position from which the next user allocation will start, and end marks the end of the pool; no allocation may go past end. The max field equals the length of the data part. When a user requests more than max bytes, the request is a large allocation and is handled separately through the large field. When the request is not larger than max, it is a small allocation served directly from the data part, which simply advances the last pointer.
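For reference, the creation logic looks roughly like the following (a simplified sketch of ngx_create_pool based on the structures above; details may differ slightly between nginx versions):

ngx_pool_t *
ngx_create_pool(size_t size, ngx_log_t *log)
{
    ngx_pool_t  *p;

    /* allocate the whole pool (header + data part) as one block */
    p = ngx_memalign(NGX_POOL_ALIGNMENT, size, log);
    if (p == NULL) {
        return NULL;
    }

    /* the data part starts right after the header */
    p->d.last = (u_char *) p + sizeof(ngx_pool_t);
    p->d.end = (u_char *) p + size;
    p->d.next = NULL;
    p->d.failed = 0;

    /* max is capped, so very large pools still treat huge requests as "large" */
    size = size - sizeof(ngx_pool_t);
    p->max = (size < NGX_MAX_ALLOC_FROM_POOL) ? size : NGX_MAX_ALLOC_FROM_POOL;

    p->current = p;
    p->chain = NULL;
    p->large = NULL;
    p->cleanup = NULL;
    p->log = log;

    return p;
}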

 

2. Allocating small blocks of memory (size <= max)

With a usable memory pool created as above, small allocations can be made from it. The allocation interfaces nginx offers to users are:

void *ngx_palloc(ngx_pool_t *pool, size_t size);
void *ngx_pnalloc(ngx_pool_t *pool, size_t size);
void *ngx_pcalloc(ngx_pool_t *pool, size_t size);
void *ngx_pmemalign(ngx_pool_t *pool, size_t size, size_t alignment);

 

ngx_palloc and ngx_pnalloc both allocate size bytes from the memory pool; whether the request is treated as small or large depends on size. The difference between them is that the memory returned by ngx_palloc is aligned, while the memory returned by ngx_pnalloc is not. ngx_pcalloc simply calls ngx_palloc and then zero-initializes the returned memory. ngx_pmemalign allocates size bytes aligned to the given alignment, but always mounts the block on the large field and handles it as a large allocation. The following diagram shows the model for allocating small blocks of memory:
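Before turning to that diagram, the entry point itself is worth a quick look. Below is a simplified sketch of ngx_palloc's dispatch and small-allocation path; the real code also has an unaligned variant used by ngx_pnalloc, and the helpers ngx_palloc_block and ngx_palloc_large are factored slightly differently across nginx versions:

void *
ngx_palloc(ngx_pool_t *pool, size_t size)
{
    u_char      *m;
    ngx_pool_t  *p;

    if (size <= pool->max) {
        /* small allocation: walk the pool list starting from "current" */
        p = pool->current;

        do {
            /* align the candidate start, then check the remaining room */
            m = ngx_align_ptr(p->d.last, NGX_ALIGNMENT);

            if ((size_t) (p->d.end - m) >= size) {
                p->d.last = m + size;
                return m;
            }

            p = p->d.next;

        } while (p);

        /* no existing pool block has enough room: chain a new one */
        return ngx_palloc_block(pool, size);
    }

    /* requests larger than max go to the large-allocation path */
    return ngx_palloc_large(pool, size);
}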

 

The memory pool model above consists of three small memory pools chained together. The second pool was created because the first one no longer had enough free space to satisfy an allocation request, and the third was created because neither of the first two could. As the figure shows, all the small memory pools are maintained in a singly linked list. Two other fields deserve attention: failed and current. failed counts how many times a pool's remaining memory could not satisfy an allocation request: when a request arrives and a pool cannot serve it, that pool's failed counter is incremented and the request is passed to the next pool; if that one cannot serve it either, its failed counter is incremented as well, and so on until the request is satisfied (if no existing pool can satisfy it, a new pool is created). current moves forward as failed grows: once the failed count of the pool that current points to exceeds 4, current advances to the next pool, so later allocations skip pools that have repeatedly failed. The threshold 4 is presumably an empirical or statistical value chosen by the author.
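The growth and current-advancing logic lives in ngx_palloc_block; roughly (a simplified sketch, with debug logging omitted):

static void *
ngx_palloc_block(ngx_pool_t *pool, size_t size)
{
    u_char      *m;
    size_t       psize;
    ngx_pool_t  *p, *new;

    /* the new block has the same total size as the first pool */
    psize = (size_t) (pool->d.end - (u_char *) pool);

    m = ngx_memalign(NGX_POOL_ALIGNMENT, psize, pool->log);
    if (m == NULL) {
        return NULL;
    }

    new = (ngx_pool_t *) m;

    new->d.end = m + psize;
    new->d.next = NULL;
    new->d.failed = 0;

    /* follow-up blocks only need ngx_pool_data_t as a header */
    m += sizeof(ngx_pool_data_t);
    m = ngx_align_ptr(m, NGX_ALIGNMENT);
    new->d.last = m + size;

    /* every pool that had to be skipped gets its failed counter bumped;
       once a pool has failed more than 4 times, current moves past it */
    for (p = pool->current; p->d.next; p = p->d.next) {
        if (p->d.failed++ > 4) {
            pool->current = p->d.next;
        }
    }

    p->d.next = new;

    return m;
}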

 

3. Allocating large blocks of memory (size > max)

Large allocation requests are not served from the pool's data part; instead the memory is requested directly from the operating system (just as with a plain malloc), and the resulting block is then mounted on the large field in the pool header. The whole point of the memory pool is to avoid frequent small allocations; for large blocks, going straight to the system is acceptable. The following diagram shows the model for large allocations:
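Before the diagram, here is a simplified sketch of the large-allocation path. Note how the ngx_pool_large_t header is itself taken from the pool as a small allocation, and how a handful of existing headers whose block has already been freed are reused:

static void *
ngx_palloc_large(ngx_pool_t *pool, size_t size)
{
    void              *p;
    ngx_uint_t         n;
    ngx_pool_large_t  *large;

    /* the data itself comes straight from the system allocator */
    p = ngx_alloc(size, pool->log);
    if (p == NULL) {
        return NULL;
    }

    n = 0;

    /* reuse a header left behind by ngx_pfree, but only check the first few */
    for (large = pool->large; large; large = large->next) {
        if (large->alloc == NULL) {
            large->alloc = p;
            return p;
        }

        if (n++ > 3) {
            break;
        }
    }

    /* otherwise allocate a new header from the pool as a small allocation */
    large = ngx_palloc(pool, sizeof(ngx_pool_large_t));
    if (large == NULL) {
        ngx_free(p);
        return NULL;
    }

    large->alloc = p;
    large->next = pool->large;
    pool->large = large;

    return p;
}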

 

Note that each large block has a corresponding header structure (next & alloc), which is used to chain all the large blocks into a linked list. This header is not requested from the operating system; it is allocated from the pool itself as a small allocation (the header is only a few bytes). A large block may need to be released as soon as it is no longer used, to save memory, so nginx provides the interface ngx_int_t ngx_pfree(ngx_pool_t *pool, void *p), which releases a single large block from the pool; p is the address of the large block. ngx_pfree frees only the large block, not its header structure; after all, the header was allocated from the pool as small memory, and the leftover header will be reused by the next large allocation.
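The implementation of ngx_pfree is short; roughly (debug logging omitted):

ngx_int_t
ngx_pfree(ngx_pool_t *pool, void *p)
{
    ngx_pool_large_t  *l;

    /* find the matching header and free only the data block */
    for (l = pool->large; l; l = l->next) {
        if (p == l->alloc) {
            ngx_free(l->alloc);
            l->alloc = NULL;     /* the header stays behind and can be reused */
            return NGX_OK;
        }
    }

    return NGX_DECLINED;
}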

 

4. Cleaning up resources

As can be seen, all the resources attached to a memory pool are chained together in a linked list; once again, the seemingly simple linked list is the data structure used everywhere here. Each resource to be cleaned up has a header structure whose key field, handler, is a function pointer: when a resource is attached to the pool, a function that releases that resource is registered in handler. When the memory pool is cleaned up, each handler is called to release the corresponding resource. For example, an open file descriptor can be attached to the pool as a resource, with a function that closes the file registered as its handler; when the pool is released, that function is called and the descriptor is closed.
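The cleanup header is declared in src/core/ngx_palloc.h; registration goes through ngx_pool_cleanup_add. The file-closing code below is a hypothetical sketch of how a caller might use it (nginx also ships ready-made file cleanup handlers), assuming the usual nginx headers, which pull in <unistd.h>:

typedef void (*ngx_pool_cleanup_pt)(void *data);

struct ngx_pool_cleanup_s {
    ngx_pool_cleanup_pt   handler;  // called when the pool is destroyed
    void                 *data;     // argument passed to handler
    ngx_pool_cleanup_t   *next;     // next cleanup item in the list
};

/* hypothetical example: attach an open fd to the pool so it is closed
   automatically when the pool is destroyed */
static void
my_close_file(void *data)
{
    int fd = *(int *) data;

    close(fd);
}

static ngx_int_t
attach_fd_to_pool(ngx_pool_t *pool, int fd)
{
    ngx_pool_cleanup_t  *cln;

    /* ngx_pool_cleanup_add also allocates "size" bytes for cln->data */
    cln = ngx_pool_cleanup_add(pool, sizeof(int));
    if (cln == NULL) {
        return NGX_ERROR;
    }

    *(int *) cln->data = fd;
    cln->handler = my_close_file;

    return NGX_OK;
}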

 

5. Releasing memory

Nginx only provides interfaces for requesting memory from a pool; it does not provide an interface for returning individual small allocations. So how does nginx ever free memory? It cannot keep allocating forever. Here nginx exploits the particular usage pattern of a web server: a web server continuously accepts connections and requests, so nginx organizes its memory pools by level. There is a process-level pool, a connection-level pool, and a request-level pool. When a worker process is created, a memory pool is created for it; when a new connection arrives, a pool is created for that connection from the worker's pool; when a request arrives on the connection, a pool is created for that request from the connection's pool. When the request has been processed, the request's entire pool is released; when the connection is closed, the connection's pool is released. In this way, memory is both allocated and, eventually, released.
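Destroying a pool releases everything attached to it in one pass; roughly (a simplified sketch of ngx_destroy_pool, with debug logging omitted):

void
ngx_destroy_pool(ngx_pool_t *pool)
{
    ngx_pool_t          *p, *n;
    ngx_pool_large_t    *l;
    ngx_pool_cleanup_t  *c;

    /* 1. run the registered cleanup handlers (close files, etc.) */
    for (c = pool->cleanup; c; c = c->next) {
        if (c->handler) {
            c->handler(c->data);
        }
    }

    /* 2. free the large blocks that were obtained straight from the system */
    for (l = pool->large; l; l = l->next) {
        if (l->alloc) {
            ngx_free(l->alloc);
        }
    }

    /* 3. free the chain of small-memory pool blocks themselves */
    for (p = pool, n = pool->d.next; /* void */; p = n, n = n->d.next) {
        ngx_free(p);

        if (n == NULL) {
            break;
        }
    }
}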

 

Summary: as the allocation and release behavior shows, nginx simply batches small allocations together and releases them all at once. This avoids frequent requests for small blocks of memory and reduces memory fragmentation.

 
