Previous article: Implementation of a C Language Interpreter, Part 0
Contents:
1. Memory Pool
2. Stack
3. Hash table
1. Memory Pool
A small program usually has no need for its own memory management module. But if the code does a lot of memory operations, adding your own memory management is worthwhile. It has at least these advantages: it speeds up memory allocation and release; it makes memory leaks easy to track down; and it allows accurate statistics on the memory consumption of the whole program, which is a great help for later optimization. Therefore, in my interpreter I added a simple memory management module that simulates a memory pool.
The main idea is as follows:
A. Record every block of memory that is allocated
B. When a block is released, record it for later reuse
C. When allocating memory, reuse a previously released block directly
To achieve this, I group requested sizes into granularities. For example, with granularities {16, 32, 64, 128, ...}, a request for 17 bytes is served with a 32-byte block, which makes management easier. For each granularity a doubly linked list of available memory is maintained. Allocation takes a node straight from the head of the corresponding free list and inserts it into the used list; releasing memory is the reverse process. The storage structures are as follows:
(Figure 1.1 storage structure of the Memory Pool)
typedef struct _pool_block {
    int size;                               /* usable size of this block      */
    void *data;                             /* pointer to the user buffer     */
    struct _pool_block *next;
    struct _pool_block *pre;
} pool_block_t;

typedef struct _pool {
    int num_all;                            /* blocks currently in use        */
    int num_free;                           /* released blocks kept for reuse */
    pool_block_t *list_all;                 /* list of blocks in use          */
    pool_block_t *list_free[pool_atom_num]; /* one free list per granularity  */
} pool_t;

int pool_atom_tab[pool_atom_num] = {
    32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, -1
};
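As a sketch of how a requested size might be mapped onto one of these granularities, the following helper walks the table until it finds a slot large enough. The name pool_atom_index is mine, not from the original code.

/* Hypothetical helper: find the granularity slot that can hold `size`.
 * Returns the slot index, or -1 if the request is larger than the
 * biggest granularity (that case falls back to plain malloc). */
static int pool_atom_index(int size)
{
    int i;
    for (i = 0; pool_atom_tab[i] != -1; i++) {
        if (size <= pool_atom_tab[i])
            return i;            /* e.g. size 10 -> slot 0 (32 bytes)      */
    }
    return -1;                   /* bigger than 8192: use malloc directly  */
}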
Note:
A. Memory requests are aligned to the sizes in the pool_atom_tab array. For example, a request for 10 bytes is served with a 32-byte block.
B. A doubly linked list of released memory is kept for each granularity. If the requested size exceeds 8192 bytes, the system's malloc is called directly.
C. Allocation: check whether the free list for the matching granularity (list_free) has an available block. If so, move it directly from the list_free list to the list_all list.
D. Release: the block being released must be in list_all; based on its size, move it to the corresponding list_free list.
E. Because the pool_block_t structure is placed immediately in front of the allocated buffer, its address can be computed directly from the buffer pointer at release time. This gives access to next and pre, so the block can be moved between lists quickly.
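Putting these notes together, here is a minimal sketch of what allocation and release could look like under these assumptions. The function names, the list helpers, and the malloc fallback details are illustrative rather than the original implementation; it reuses the pool_atom_index helper sketched above.

#include <stdlib.h>

/* Illustrative only: allocation takes a node from the matching free list
 * (or mallocs a new block); release puts it back.  List handling is
 * simplified to head insertion/removal. */

static void list_push(pool_block_t **head, pool_block_t *b)
{
    b->pre = NULL;
    b->next = *head;
    if (*head)
        (*head)->pre = b;
    *head = b;
}

static pool_block_t *list_pop(pool_block_t **head)
{
    pool_block_t *b = *head;
    if (b) {
        *head = b->next;
        if (*head)
            (*head)->pre = NULL;
    }
    return b;
}

void *pool_alloc(pool_t *pool, int size)
{
    int idx = pool_atom_index(size);          /* granularity slot, see above */
    pool_block_t *b;

    if (idx >= 0 && (b = list_pop(&pool->list_free[idx])) != NULL) {
        pool->num_free--;                     /* reuse a released block      */
    } else {
        int real = (idx >= 0) ? pool_atom_tab[idx] : size;
        b = malloc(sizeof(pool_block_t) + real);  /* header before the data  */
        if (!b)
            return NULL;
        b->size = real;
        b->data = (char *)b + sizeof(pool_block_t);
    }
    list_push(&pool->list_all, b);            /* track as "in use"           */
    pool->num_all++;
    return b->data;
}

void pool_free(pool_t *pool, void *ptr)
{
    /* The header sits directly in front of the buffer, so it can be
     * recovered from the pointer itself (note E). */
    pool_block_t *b = (pool_block_t *)((char *)ptr - sizeof(pool_block_t));
    int idx = pool_atom_index(b->size);

    /* Unlink from the "in use" list via pre/next. */
    if (b->pre)  b->pre->next = b->next; else pool->list_all = b->next;
    if (b->next) b->next->pre = b->pre;
    pool->num_all--;

    if (idx >= 0) {
        list_push(&pool->list_free[idx], b);  /* keep for the next request   */
        pool->num_free++;
    } else {
        free(b);                              /* oversized block: give back  */
    }
}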
2. Stack
The stack is used in many places in the interpreter: expression parsing, code-block parsing, type parsing, and so on, so it has to be implemented, but among the data structures here it is the simplest. It is nothing more than allocating a space, then pushing data one node at a time and popping data one node at a time. There is no special trick to it, but I want the stack space to grow and shrink automatically, so that the stack is limited only by available memory. The disadvantage is that whenever the stack space is resized, the data in the stack must be copied to the new space, which certainly costs some efficiency. There is no way around it; it is the usual trade-off between time and space.
The stack storage structure is as follows:
(Figure 1.2 stack storage structure)
typedef struct _stack {
    int item_len;
    int item_num;
    int stack_size;
    char *p;
} stack_t;
Note:
item_len: length in bytes of each node
item_num: number of nodes currently in the stack
stack_size: number of nodes the allocated space can hold
p: pointer to the stack space
A. When item_num grows beyond stack_size, a new, larger space must be allocated and the original data copied into it.
B. When the number of nodes drops below a certain threshold, a smaller space can be allocated and the original large space released.
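A minimal sketch of push and pop with this automatic resizing follows. It assumes a doubling-on-grow and halve-at-quarter-full policy that I picked for illustration; the original growth policy is not specified.

#include <stdlib.h>
#include <string.h>

/* Illustrative push/pop with automatic grow and shrink.
 * Returns 0 on success, -1 on failure. */

int stack_push(stack_t *s, const void *item)
{
    if (s->item_num >= s->stack_size) {
        int new_size = s->stack_size ? s->stack_size * 2 : 8;
        char *q = malloc((size_t)new_size * s->item_len);
        if (!q)
            return -1;
        if (s->p) {
            memcpy(q, s->p, (size_t)s->item_num * s->item_len); /* copy old data */
            free(s->p);
        }
        s->p = q;
        s->stack_size = new_size;
    }
    memcpy(s->p + (size_t)s->item_num * s->item_len, item, s->item_len);
    s->item_num++;
    return 0;
}

int stack_pop(stack_t *s, void *item)
{
    if (s->item_num == 0)
        return -1;                                          /* stack empty */
    s->item_num--;
    memcpy(item, s->p + (size_t)s->item_num * s->item_len, s->item_len);

    /* Shrink when usage falls to a quarter of the capacity. */
    if (s->stack_size > 8 && s->item_num <= s->stack_size / 4) {
        int new_size = s->stack_size / 2;
        char *q = malloc((size_t)new_size * s->item_len);
        if (q) {
            memcpy(q, s->p, (size_t)s->item_num * s->item_len);
            free(s->p);
            s->p = q;
            s->stack_size = new_size;
        }
    }
    return 0;
}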
3. Hash table
A hash table is famous for fast lookup, but it wastes memory, so I use it sparingly: only for function calls. Function calls are frequent, and searching for a function from the beginning of the function list every time would waste a lot of time, so it is worth introducing a hash table here.
#define hh_tab_size 128

typedef struct _hh_node {
    unsigned int hash, klen, dlen;   /* cached hash, key length, data length */
    void *key;
    void *data;
    struct _hh_node *next;           /* next node in the same bucket         */
} hh_node_t;

typedef struct _hh_head {
    unsigned int node_num;           /* number of nodes in this bucket       */
    hh_node_t *node_list;            /* head of the bucket's chain           */
} hh_head_t;

/* User-supplied callbacks: compare keys, compute a hash,
 * allocate and free keys and data. */
typedef struct _hh_opts {
    int (*cmp_key)(void *key1, void *key2);
    unsigned int (*get_hash)(void *key);
    void *(*new_key)(int);
    void *(*new_data)(int);
    void (*del_key)(void *key);
    void (*del_data)(void *data);
} hh_opts_t;

typedef struct _hh_hash {
    hh_opts_t opts;
    hh_head_t tabs[hh_tab_size];     /* the bucket array                     */
} hh_hash_t;
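To show how these pieces fit together, here is a minimal lookup sketch. hh_find is a name I made up for illustration, and it assumes cmp_key follows the strcmp convention of returning 0 when two keys are equal.

/* Illustrative lookup: hash the key, pick a bucket, walk the chain.
 * Returns the stored data pointer or NULL if the key is not present. */
void *hh_find(hh_hash_t *h, void *key)
{
    unsigned int hash = h->opts.get_hash(key);
    hh_head_t *head = &h->tabs[hash % hh_tab_size];
    hh_node_t *node;

    for (node = head->node_list; node != NULL; node = node->next) {
        /* Compare the full hash first (cheap), then the key itself. */
        if (node->hash == hash && h->opts.cmp_key(node->key, key) == 0)
            return node->data;
    }
    return NULL;
}

In the interpreter the key would presumably be the function name and data the function's definition, so resolving a call only touches one short bucket chain instead of scanning the whole function list.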