Many people have already described the solution I want to present today. I emphasize it here because, even though it is well known, the problem it addresses still comes up frequently in our design work and is rarely taken seriously or solved well.
In many cases we need to allocate and free many small blocks of memory. As we all know, frequent allocation and release of small blocks is very likely to produce memory fragmentation. Moreover, each allocation costs more than the size we requested: the allocator must also store a bookkeeping record, such as the block's size, and under paged memory management every allocation occupies at least one minimum-size memory page. A large number of small allocations therefore inevitably wastes a great deal of memory. In a non-paged memory management system, they instead generate a large number of fragments; in severe cases, actual memory usage may be low while the system reports that no memory is available.
On a PC, resources are plentiful enough that this problem is not so obvious. Embedded systems, however, are resource-constrained: memory is a very precious resource, and because resources are so limited, the memory management algorithms are usually quite simple. When writing application code for such systems, managing and using memory effectively is therefore an important issue.
Of course, no solution completely eliminates both memory fragmentation and memory waste; the two usually trade off against each other. If no fragments are generated, memory is wasted; if no memory is wasted, fragments are easily generated. Here I introduce a common compromise for your reference.
Problems caused by allocating a large number of small memory blocks:
1. Memory fragmentation
2. Serious memory waste
Two memory allocation scenarios:
1. Every allocation requests the same size.
We often need to allocate many blocks of the same size; a linked list is the typical case. If the list nodes are small, repeated operations on the list can produce a great deal of wasted memory and many fragments. Paged memory management avoids the fragmentation, but the waste may be even more serious:
Suppose a list node is 8 bytes.
In a typical non-paged memory management system, each allocation also adds an entry to the allocator's bookkeeping list: a 4-byte start address plus a 4-byte size. Each node therefore actually occupies 16 bytes, twice the size we asked for.
In a paged memory management system, suppose the minimum memory page is 32 bytes (already very small). Each node then occupies at least one page, plus a 4-byte page identifier (a page number or page start address) in the bookkeeping list; since pages are fixed-size, no size field is needed. Each node therefore actually occupies 32 + 4 = 36 bytes, more than four times the memory we actually need.
If list operations are frequent, the paged system produces no fragments but consumes a large amount of memory. A simple non-paged memory manager fares even worse: the system may accumulate a large number of freed 8-byte fragments that are too small to be useful for anything else.
2. Every allocation requests a different size.
In some scenarios we need a large amount of small memory but cannot know in advance exactly how small each block will be; the requirements vary from case to case.
Memory Management Solution:
In embedded systems, the operating system's memory management algorithm is generally not optimal for any specific application. To manage memory more efficiently, application modules that use memory heavily usually perform a second level of memory management on top of the system allocator.
Because we are designing for a specific application, we should be able to predict the memory usage pattern: whether we need a large number of small blocks, whether the block size is fixed, and, if it is not fixed, what the minimum and maximum sizes will be.
For fixed-size requirements, we can request one large block from the system and carve it into small blocks of the fixed size, linked into a free list. The application obtains blocks from the list and returns them to it. When every block in the list has been handed out, we request another large block, break it into small blocks, and add them to the list, and so on. When all the small blocks carved from one large block have been returned, that large block can be released back to the system to reduce memory usage.
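The fixed-size scheme above can be sketched roughly as follows. This is a minimal illustration, not a production allocator: the names (`pool_t`, `pool_alloc`, and so on) are invented for this sketch, it is single-threaded, and it manages a single large block rather than a growable chain of them. The key point is that a free chunk stores the "next" pointer inside itself, so allocation and release are O(1) with no per-block header.

```c
/* Minimal fixed-size block pool sketch. One large allocation is
 * carved into equal-sized chunks threaded onto a free list; each
 * free chunk's first bytes hold the pointer to the next free chunk. */
#include <stddef.h>
#include <stdlib.h>

typedef struct pool {
    void  *free_list;   /* head of the singly linked free list */
    void  *arena;       /* the one large block backing the pool */
    size_t chunk_size;  /* fixed chunk size (>= sizeof(void *)) */
} pool_t;

static int pool_init(pool_t *p, size_t chunk_size, size_t nchunks)
{
    if (chunk_size < sizeof(void *))
        chunk_size = sizeof(void *);  /* free chunks must hold a pointer */
    p->arena = malloc(chunk_size * nchunks);
    if (!p->arena)
        return -1;
    p->chunk_size = chunk_size;
    p->free_list = NULL;
    /* thread every chunk onto the free list */
    for (size_t i = 0; i < nchunks; i++) {
        void *chunk = (char *)p->arena + i * chunk_size;
        *(void **)chunk = p->free_list;
        p->free_list = chunk;
    }
    return 0;
}

static void *pool_alloc(pool_t *p)
{
    void *chunk = p->free_list;
    if (chunk)
        p->free_list = *(void **)chunk;  /* pop the head */
    return chunk;                        /* NULL when exhausted */
}

static void pool_free(pool_t *p, void *chunk)
{
    *(void **)chunk = p->free_list;      /* push back onto the head */
    p->free_list = chunk;
}
```

In a full implementation, pool_alloc would grab another large block from the system when the list runs dry, and a per-block in-use counter would let a fully returned large block be released.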
For requirements whose size is not fixed, we can divide the range from the minimum to the maximum requirement into several size classes, use the upper bound of each class as that class's fixed block size, maintain one free list per class, and manage the set of lists together. When the application requests memory, the request is matched automatically to the appropriate class.
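The class-matching step might look like the sketch below. The class sizes are illustrative; in practice they would be configured from the predicted minimum and maximum requirements. Each class index would map to its own fixed-size pool of the kind described above.

```c
/* Sketch of size-class matching: a request is rounded up to the
 * smallest class that fits it. The class sizes here are examples. */
#include <stddef.h>

static const size_t class_sizes[] = { 8, 16, 32, 64, 128 };
#define NUM_CLASSES (sizeof(class_sizes) / sizeof(class_sizes[0]))

/* Return the index of the smallest class that can hold `size`,
 * or -1 if the request exceeds every class. */
static int size_class(size_t size)
{
    for (size_t i = 0; i < NUM_CLASSES; i++)
        if (size <= class_sizes[i])
            return (int)i;
    return -1;  /* caller falls through to the system allocator */
}
```

Rounding a 9-byte request up to the 16-byte class wastes 7 bytes, which is exactly the deliberate trade described earlier: a bounded amount of waste in exchange for no fragmentation within a class.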
Of course, some scenarios mostly need small blocks but occasionally, and suddenly, need a large one, for which no list in the set has a suitable block. What then? To handle this unexpected case, when a request exceeds our expectations and the list set cannot satisfy it, we can forward the request to the system and let its memory management algorithm handle it; that is, we call the system's allocation and release interfaces directly.
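One way to route oversized requests to the system, sketched below with invented names (`app_alloc`, `app_free`, `header_t`): a small header in front of each block records which path served it, so the release call can send the block back the same way. For brevity, malloc stands in for the pool path here; a full implementation would call into the matching size-class pool instead.

```c
/* Sketch of the oversized-request fallback. Each block carries a
 * one-word header saying which allocator owns it, so app_free()
 * can route the release correctly. */
#include <stddef.h>
#include <stdlib.h>

#define MAX_CLASS_SIZE 128   /* largest size the pools handle */

enum source { FROM_POOL, FROM_SYSTEM };

typedef struct header {
    int source;              /* which path served this block */
} header_t;

void *app_alloc(size_t size)
{
    header_t *h = malloc(sizeof(header_t) + size);
    if (!h)
        return NULL;
    /* in a full implementation, requests within MAX_CLASS_SIZE would
     * come from the matching size-class pool instead of malloc */
    h->source = (size > MAX_CLASS_SIZE) ? FROM_SYSTEM : FROM_POOL;
    return h + 1;            /* hand the caller the payload */
}

void app_free(void *ptr)
{
    header_t *h = (header_t *)ptr - 1;
    /* a full implementation would call pool_free() for FROM_POOL */
    free(h);
}
```

The header costs a few bytes per block, which is why some designs instead decide ownership by checking whether the pointer falls inside a pool's arena.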
Combining the three cases above, we can build a flexible, configurable second-level memory management algorithm tailored to the specific application: the number of free lists is configurable, the block size of each list is configurable, and requests that fall outside the range covered by the list set are detected and satisfied automatically.
There are many such management algorithms available online; I will publish my own implementation next time.