CStyle notes: FreeRTOS kernel details, article 1

This article describes a linked-list implementation of a dynamic memory management (malloc/free) service, that is, dynamic memory allocation and recycling. Its core is a singly linked list, and the data structure is defined below. A whole region of memory (SRAM, SDRAM or DRAM) is centrally managed by the system's memory management module; here that mainly means heap management:
    typedef struct A_BLOCK_LINK
    {
        struct A_BLOCK_LINK *pxNextFreeBlock;   /*<< The next free block in the list. */
        size_t xBlockSize;                      /*<< The size of the free block. */
    } BlockLink_t;
    High address: 0xFFFFFFFF
    +--------------------+
    |   Stack top (SP)   |
    |       ......       |   the stack grows from top to bottom
    +--------------------+
    |    Stack bottom    |
    +--------------------+
    |       ......       |
    +====================+
    |     Heap bottom    |
    +====================+
    |  pxNextFreeBlock   |
    +--------------------+
    |        data        |
    +====================+
    |       ......       |
    +--------------------+
    |      Heap top      |
    +--------------------+
    Low address: 0x00000000
1. The first thing is the definition of the heap size: static uint8_t ucHeap[ configTOTAL_HEAP_SIZE ]; defines the whole heap. configTOTAL_HEAP_SIZE can be customized. Of course, in some systems we may want to detect the size of the system RAM dynamically; for example, the SPD of a DIMM records the size of the system RAM. At the same time, in some systems the bootloader/BIOS occupies part of the memory space, or the loader finds while probing memory that some segments are unstable or faulty and blocks that part of memory so the system cannot use it, which keeps the system stable but further reduces the RAM left for the OS. Such systems generally provide a table from which the OS can read the currently available memory, such as the e820 table among the ACPI tables, or the various tag tables in U-Boot. For the moment we will not consider these complex situations, only the simple case of static allocation.

2. Memory alignment. Different processor architectures have different requirements on heap alignment; 8-, 4-, 2- or 1-byte alignment may all occur, and the exact requirement depends on the processor architecture and on the calling convention defined by the compiler. For example, x64, IA-32, ARM64, ARM32 and MCS-51 differ, as do GCC and MSC. Here we will work with 8 bytes. Since it is 8-byte alignment, we need to adjust the predefined heap size so that the start and end addresses of the heap fall exactly on 8-byte boundaries; of course, this loses a small amount of memory. (A small alignment sketch follows point 5 below.)

3. Stack growth direction. In general, the growth directions of the stack and the heap are exactly opposite. If the stack grows downward (from high addresses to low addresses), every push decreases the stack pointer; whether the pointer is decremented before or after the value is stored depends on the processor architecture. Some processors support several modes and provide several assembly instructions to choose from, but in the end it is the C compiler that makes the choice. If the stack grows upward, the opposite holds. In the Microsoft VS environment the default x86 configuration requires 8-byte alignment while x64 requires 16-byte alignment; for details refer to the relevant documentation, such as MSDN or the GCC manuals.

4. Heap growth direction. It is just the opposite of the stack growth direction mentioned above: the heap and the stack sit at the two ends of the available memory and grow toward the middle. This creates a problem: we must make sure the two never overlap, that is, the stack must not overflow (neither at its top nor at its bottom).

5. Where the heap and the stack live in system memory. Different implementations differ. Because this part is complicated in a large OS, there is no chance to study it thoroughly here, and there may well be cleverer ways to implement it. Here we take the simplest approach: the .bss segment holds the heap and the .stack segment holds the stack.
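To make the alignment requirement in point 2 concrete, here is a minimal sketch, not taken from any FreeRTOS heap file, of how the start of the statically declared heap array can be rounded up to the next 8-byte boundary before the allocator uses it. ucHeap and configTOTAL_HEAP_SIZE follow the names used in this article; the function name, the concrete heap size and the values of the two port macros are only illustrative.

    #include <stdint.h>
    #include <stddef.h>

    #define configTOTAL_HEAP_SIZE     ( 10 * 1024 )              /* illustrative size only */
    #define portBYTE_ALIGNMENT        8
    #define portBYTE_ALIGNMENT_MASK   ( portBYTE_ALIGNMENT - 1 )

    static uint8_t ucHeap[ configTOTAL_HEAP_SIZE ];

    /* Round the start of ucHeap up to the next 8-byte boundary and shrink the
       usable size accordingly; a few bytes at either end may be wasted. */
    static void prvExampleAlignHeap( uint8_t **ppucAlignedStart, size_t *pxUsableSize )
    {
        uintptr_t uxAddress = ( uintptr_t ) ucHeap;
        size_t xTotalSize = configTOTAL_HEAP_SIZE;

        if( ( uxAddress & portBYTE_ALIGNMENT_MASK ) != 0 )
        {
            uxAddress += ( portBYTE_ALIGNMENT - 1 );
            uxAddress &= ~( ( uintptr_t ) portBYTE_ALIGNMENT_MASK );
            xTotalSize -= ( size_t ) ( uxAddress - ( uintptr_t ) ucHeap );
        }

        *ppucAlignedStart = ( uint8_t * ) uxAddress;
        *pxUsableSize = xTotalSize & ~( ( size_t ) portBYTE_ALIGNMENT_MASK );
    }

With 8-byte alignment this adjustment costs at most 7 bytes at the start plus a few bytes at the end, which is the small loss mentioned in point 2.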
6. About the .stack, .bss, .data, .relocate and .text segments. System code compiled from C and running without an OS generally contains these segments. Their functions are roughly as follows. .text segment: generally holds the executable code, that is, our real code, and is usually placed in read-only memory. .relocate segment: not a required part; it is generally the portion of the .text segment that gets loaded into RAM. .data segment: holds initialized data and static data. .bss segment: uninitialized data is kept here. .stack segment: our stack is kept here. We can see that every piece of our code has its final destination, and we can find each of them in the corresponding area of the memory map. Of course, each compiler names these parts differently, and not every system has all of these segments; the specific arrangement depends on the processor architecture and the compiler. From here we get a general idea of where the stack and the heap exist.

7. Default heap contents. The heap pool ucHeap is a static array. In standard C, an uninitialized static array is stored in the .bss segment and its default value is zero, so the heap defaults to zero. Of course, we can also write our own tag value during heap initialization, for example 0x55AA, as a sanity check on the heap (a small sketch of this follows point 8 below). Note that the .bss segment only has its memory set up at run time, by crt0 and the rest of the C runtime; at build time only its size and related information are recorded in the ROM image, which saves a lot of storage space.

8. About algorithms. There are many memory management algorithms, such as the buddy algorithm, fixed-size block allocation, and so on. Their efficiency and time/space complexity differ. Some systems must take real-time behaviour into account, that is, guarantee that memory is allocated within a predictable amount of time; some need to keep memory fragmentation to a minimum; some want the most concise code; some want the highest efficiency; and so on. Different systems use different policies, and some provide configuration interfaces so that you can choose the allocation algorithm that fits your actual usage. Here we discuss a general algorithm that does not involve many optimizations; after all, an ordinary system does not need to be that strict, and for us amateurs too much complexity is only a burden. So only a singly linked list is used. The memory block is defined as above: each block manages one contiguous region of memory and records its address and length, and all blocks are chained together with the singly linked list. We can start from the head of the list to find the block we need and use it. When the malloc service is called, we search the list for a suitable block and mark that segment of memory as in use; when the free service is called, we mark the block as released and put it back into the memory pool. At the same time we also have to manage fragmentation by merging the blocks adjacent to a released block. This completes a simple memory manager.
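As a small illustration of the tag idea in point 7, here is a sketch, not part of FreeRTOS, that fills the heap array with an alternating 0x55/0xAA pattern before the allocator is initialized and later counts how many bytes at the top of the array still hold the pattern. It assumes the allocator tends to hand out memory from the low end of the array first, so the untouched tail gives only a rough high-water estimate; the function names are made up.

    #include <stdint.h>
    #include <stddef.h>

    /* Fill the whole heap array with the 0x55AA pattern before any allocation. */
    static void vExampleHeapFillPattern( uint8_t *pucHeap, size_t xSize )
    {
        for( size_t x = 0; x < xSize; x++ )
        {
            pucHeap[ x ] = ( ( x & 1 ) == 0 ) ? 0x55 : 0xAA;
        }
    }

    /* Count how many bytes at the top of the heap array still hold the fill
       pattern; memory that was never written keeps the pattern. */
    static size_t xExampleHeapUntouchedBytes( const uint8_t *pucHeap, size_t xSize )
    {
        size_t xCount = 0;

        while( xCount < xSize )
        {
            size_t xIndex = xSize - 1 - xCount;
            uint8_t ucExpected = ( ( xIndex & 1 ) == 0 ) ? 0x55 : 0xAA;

            if( pucHeap[ xIndex ] != ucExpected )
            {
                break;
            }

            xCount++;
        }

        return xCount;
    }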
9. Alignment algorithm. The following expression ensures that all of our block headers are 8-byte aligned. Although I do not know who first came up with it, it does make the final result 8-byte aligned.

    /* The size of the structure placed at the beginning of each allocated
       memory block must be correctly byte aligned. */
    static const uint16_t heapSTRUCT_SIZE =
        ( ( sizeof( BlockLink_t ) + ( portBYTE_ALIGNMENT - 1 ) ) & ~( ( size_t ) portBYTE_ALIGNMENT_MASK ) );

10. Heap head and tail. In addition to the heap pool definition, we also need index pointers into the heap space. Here two pointers to block nodes are used to index the list (see the initialization sketch after point 14 below).

    /* Create a couple of list links to mark the start and end of the list. */
    static BlockLink_t xStart, *pxEnd = NULL;

11. Free heap counter. A counter is defined so that the system can keep track of the remaining free heap. Note that the space lost to memory alignment is not accounted for here.

    /* Keeps track of the number of free bytes remaining, but says nothing
       about fragmentation. */
    static size_t xFreeBytesRemaining =
        ( ( size_t ) configADJUSTED_HEAP_SIZE ) & ( ( size_t ) ~portBYTE_ALIGNMENT_MASK );

12. Heap block owner flag. A system can have many kinds of memory. For example, when calling malloc we could specify the type of memory to allocate and then customize different behaviour for each type; RW, RO and other memory states are common in UEFI. For example, the GCD defines various states for memory and I/O and controls their conversion between states, such as EfiLoaderCode, EfiLoaderData, EfiBootServicesCode, EfiBootServicesData, EfiUnusableMemory, EfiMemoryMappedIO and so on. In that case we would have to add an attribute field to every block. Because the memory in our system is small and there is no requirement for multiple memory types, we only need to record whether a given segment of memory is in use. In addition, the system memory generally does not exceed 4 GB, so the high bit of xBlockSize is normally 0 and would otherwise be wasted; we therefore use the top bit of xBlockSize to indicate whether the memory has been handed out, which is equivalent to a memory attribute flag: bit 31 = 1 means the memory is in use, bit 31 = 0 means the memory is free. The value has to be updated on every malloc and free.

    /* Gets set to the top bit of an size_t type.  When this bit in the
       xBlockSize member of an BlockLink_t structure is set then the block
       belongs to the application.  When the bit is free the block is still
       part of the free heap space. */
    static size_t xBlockAllocatedBit = 0;

13. About reentrancy. Because the heap pool is shared globally, the data touched by each operation must be accessed under mutual exclusion, otherwise corruption may occur. There are many ways to guarantee mutual exclusion, with different implementations for each hardware platform and system requirement; a mutex, for example, is a good method. In essence, the non-atomic list operations must not be interrupted. I think there are several ways to do this: A. disable interrupts; B. disable task switching. Here the second method, vTaskSuspendAll(), is used.

14. About exception handling. The most robust code is code that anticipates exceptional cases in advance and handles them properly, so exception handling is also a very important part: inserting checks in the right places is what a robust, reasonable program does. Macro definitions can be used to insert the checks into the appropriate places and then hook them up to the real exception handling module; assert(x), for example, is a good tool.
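To tie points 10 to 12 together, here is a minimal initialization sketch. It follows the shape of the description above rather than any particular FreeRTOS heap file: the heap passed in is assumed to have been aligned as in point 2, the function name is made up, and BlockLink_t, heapSTRUCT_SIZE, xStart, pxEnd, xFreeBytesRemaining and xBlockAllocatedBit are the names already introduced in this article.

    /* One-time heap initialization: build a free list that initially contains
       a single block covering the whole (already aligned) heap. */
    static void prvExampleHeapInit( uint8_t *pucAlignedHeap, size_t xTotalHeapSize )
    {
        BlockLink_t *pxFirstFreeBlock;

        /* xStart is a dummy node whose only job is to point at the first real
           free block. */
        xStart.pxNextFreeBlock = ( BlockLink_t * ) pucAlignedHeap;
        xStart.xBlockSize = ( size_t ) 0;

        /* pxEnd sits at the very end of the heap and terminates the list. */
        pxEnd = ( BlockLink_t * ) ( pucAlignedHeap + xTotalHeapSize - heapSTRUCT_SIZE );
        pxEnd->xBlockSize = ( size_t ) 0;
        pxEnd->pxNextFreeBlock = NULL;

        /* Initially the whole heap, minus the end marker, is one free block. */
        pxFirstFreeBlock = ( BlockLink_t * ) pucAlignedHeap;
        pxFirstFreeBlock->xBlockSize = xTotalHeapSize - heapSTRUCT_SIZE;
        pxFirstFreeBlock->pxNextFreeBlock = pxEnd;

        xFreeBytesRemaining = pxFirstFreeBlock->xBlockSize;

        /* The top bit of a size_t marks a block as owned by the application. */
        xBlockAllocatedBit = ( ( size_t ) 1 ) << ( ( sizeof( size_t ) * 8 ) - 1 );
    }

In a pvPortMalloc-style allocation the bit is then OR-ed into the block's xBlockSize when the block is handed to the application, and cleared again in free before the block is linked back into the list.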
15. Search algorithm. As mentioned above, xStart and pxEnd point to the head and the tail of the free list respectively, and the blocks are sorted by xBlockSize from small to large. The search algorithm is therefore very simple: starting from the head, compare the requested size with the size of each free block in the heap pool. If a block fits, it is marked and returned, and at that point the memory can be used; this corresponds to the void pointer returned by malloc. If the block found is larger than needed, it is split into two parts: one part is returned and the other is inserted back into the free list (a sketch of that split step follows point 17 below). The search loop is implemented as follows.

    while( ( pxBlock->xBlockSize < xWantedSize ) &&
           ( pxBlock->pxNextFreeBlock != NULL ) )
    {
        pxPreviousBlock = pxBlock;
        pxBlock = pxBlock->pxNextFreeBlock;
    }

16. About memory fragmentation. Every use of the free service releases a block, so fragmentation will inevitably appear. The approach used here is: every time malloc is called, first return a block of the required size, then create a new block for the remaining memory and insert it into the free list; and when a block is released, let it merge with the blocks before and after it to form a larger block, reducing the creation of fragments. In theory this limits fragmentation, but with such simple logic, repeatedly calling malloc and free with blocks of different sizes while some blocks stay allocated and are never released still produces many fragments of different sizes, lowering the efficiency of the malloc and free services. It can only be said that this is the most direct and simple implementation, and it still completes its task correctly in systems that do not need to allocate and release memory too frequently.

17. What can be improved: A. Add allocated-memory attributes and access permission management (for example, add a field to BlockLink_t, or split xBlockSize into more bit fields according to the platform features to control attributes and permissions; after all, a typical small system does not use all 32 bits, it usually has only tens of KB of memory, so about 10 to 20 bits are enough for the size). B. Optimization algorithms, such as defragmentation: move memory blocks that are not released for a long time next to each other so that more free blocks can be merged, avoiding malloc failures caused by memory fragmentation. C. Under the current memory management mechanism, when calling the service, try to group the allocations that do not change frequently: place the calls that will not release their memory for a long time together as much as possible, so that the blocks they allocate stay contiguous, greatly reducing the creation of fragments.
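As referenced in point 15, the following sketch shows the split step, written to match the structures in this article rather than copied from any FreeRTOS heap file. prvInsertBlockIntoFreeList() is assumed to be a helper that links a block back into the free list, and heapMINIMUM_BLOCK_SIZE is an assumed threshold below which splitting off a remainder is not worthwhile; both are assumptions, not names taken from this article.

    /* Sketch: pxBlock has already been unlinked from the free list and is at
       least xWantedSize bytes long (xWantedSize includes the header size and
       is already aligned).  Split off the remainder if it is big enough to be
       useful, then hand the caller the bytes just past the header. */
    static void *prvExampleTakeFromBlock( BlockLink_t *pxBlock, size_t xWantedSize )
    {
        if( ( pxBlock->xBlockSize - xWantedSize ) > heapMINIMUM_BLOCK_SIZE )
        {
            /* The remainder becomes a new free block placed immediately after
               the memory being handed out. */
            BlockLink_t *pxNewBlockLink =
                ( BlockLink_t * ) ( ( ( uint8_t * ) pxBlock ) + xWantedSize );

            pxNewBlockLink->xBlockSize = pxBlock->xBlockSize - xWantedSize;
            pxBlock->xBlockSize = xWantedSize;

            prvInsertBlockIntoFreeList( pxNewBlockLink );   /* assumed helper */
        }

        xFreeBytesRemaining -= pxBlock->xBlockSize;

        /* Mark the block as owned by the application (point 12). */
        pxBlock->xBlockSize |= xBlockAllocatedBit;
        pxBlock->pxNextFreeBlock = NULL;

        return ( void * ) ( ( ( uint8_t * ) pxBlock ) + heapSTRUCT_SIZE );
    }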
When reprinting, please indicate the source: http://blog.csdn.net/CStyle_0x007
