Driver Memory Allocation

Source: Internet
Author: User

Reprinted from: http://hi.baidu.com/zhanghuikl/blog/item/845478096f6878c53bc763ae.html


One major aspect of driver programming is allocating storage. Unfortunately, a driver cannot simply call malloc and free, or new and delete. The driver must ensure that it allocates the correct type of memory, and it must release that memory after use, because kernel-mode code has no automatic cleanup mechanism.

Memory available for the driver

A driver has three storage allocation options. The choice among them depends on the intended lifetime of the allocation, its size, and the IRQL at which it will be used. The options are:

1. Kernel stack: the kernel-mode stack provides a limited amount of nonpaged storage for local variables while a driver routine executes.

2. Paged pool: routines running at IRQL below DISPATCH_LEVEL can use a heap called the paged pool. Memory in this area is pageable, and page faults may occur when it is accessed.

3. Nonpaged pool: routines running at elevated IRQL must allocate temporary storage from a heap called the nonpaged pool. The system guarantees that the virtual addresses of the nonpaged pool are always backed by physical memory. The device and controller extensions created by the I/O Manager come from this area.

Because a driver must be reentrant, it should have no writable global variables; read-only data is the only safe exception. Otherwise, while one thread is storing into a global variable, another thread may be reading or writing the same data.

Local static variables in a driver are just as bad. Driver state must be stored elsewhere, for example in the device extension introduced earlier.

Use the kernel stack

On the 80x86 platform the kernel stack is only 12 KB; on other platforms it is 16 KB. The kernel stack is therefore a precious resource, and overflowing it causes a fatal exception. Follow these guidelines to avoid kernel stack overflow:

1. Avoid deeply nested calls among internal routines; keep the call tree as flat as possible.

2. Avoid recursion where possible, and when it is necessary, limit its depth. A driver is not the place to compute the Fibonacci series.

3. Do not create large data structures on the kernel stack. Large data structures belong in the pool.

Another property of the kernel stack is that it lives in cached memory, so it cannot be used for DMA operations. DMA buffers should come from the nonpaged pool.

Use the pool

The kernel routines ExAllocatePool and ExFreePool allocate and release storage in the pool areas.

These functions can allocate the following kinds of storage:

1. NonPagedPool: memory usable by driver routines running at IRQL greater than or equal to DISPATCH_LEVEL.

2. NonPagedPoolMustSucceed: important temporary storage the driver needs to keep operating. Use this storage only in emergencies and release it as soon as possible. If the allocation cannot be satisfied, the system raises an exception rather than returning failure.

3. NonPagedPoolCacheAligned: memory guaranteed to be aligned on a natural CPU cache-line boundary. A driver might use this for a permanent I/O buffer.

4. NonPagedPoolCacheAlignedMustS: cache-aligned temporary storage for critical driver operations. The S suffix stands for "succeed": like a MustSucceed allocation, a request for this storage is expected never to fail.

5. PagedPool: memory usable by routines running at IRQL below DISPATCH_LEVEL. Typically this means driver initialization, cleanup, and dispatch routines, as well as any kernel-mode threads.

6. PagedPoolCacheAligned: cache-aligned pageable memory, used by file systems as I/O buffer storage.
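As a sketch of the pool calls above, the following fragment allocates and frees a per-device structure from the nonpaged pool. The structure name, routine names, and the pool tag are invented for this example; the tagged variants ExAllocatePoolWithTag and ExFreePoolWithTag are shown because the tag makes allocations visible to pool-tracking tools.

```c
#include <wdm.h>

/* Hypothetical per-device structure; any layout would do. */
typedef struct _DEVICE_DATA {
    ULONG State;
    UCHAR Buffer[256];
} DEVICE_DATA, *PDEVICE_DATA;

#define DEVICE_DATA_TAG 'dDxE'   /* arbitrary example tag */

NTSTATUS AllocateDeviceData(PDEVICE_DATA *Out)
{
    /* NonPagedPool: safe to touch at or above DISPATCH_LEVEL. */
    PDEVICE_DATA p = (PDEVICE_DATA)ExAllocatePoolWithTag(
        NonPagedPool, sizeof(DEVICE_DATA), DEVICE_DATA_TAG);
    if (p == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    RtlZeroMemory(p, sizeof(DEVICE_DATA));
    *Out = p;
    return STATUS_SUCCESS;
}

VOID FreeDeviceData(PDEVICE_DATA p)
{
    /* The tag must match the one used at allocation. */
    ExFreePoolWithTag(p, DEVICE_DATA_TAG);
}
```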

Keep the following points in mind when using system memory:

1. The pool is a precious system resource; do not be extravagant with it, especially in the nonpaged area.

2. A driver must allocate or release nonpaged memory at IRQL less than or equal to DISPATCH_LEVEL, and must allocate or release paged memory at IRQL less than or equal to APC_LEVEL.

3. Release memory that is no longer in use as soon as possible; otherwise system performance will degrade as memory runs low. In particular, make sure every allocation is returned to the pool when the driver is unloaded.

Suballocation

In general, drivers should avoid allocating and releasing pool blocks smaller than PAGE_SIZE bytes, because doing so fragments the pool and leaves memory unusable by other kernel-mode code. When small blocks are needed, allocate one large region and provide a suballocation routine to parcel it out.

In fact, a C programmer might write his own routines for allocating and releasing pieces of one large pool block, and a C++ programmer might overload the new and delete operators.
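As an illustration of the C approach, here is a minimal user-mode sketch of such a suballocator: one large allocation is carved into fixed-size blocks threaded onto a free list. All names and sizes are invented for this example; a real driver would carve up a pool block rather than a malloc'd one, and would need locking if multiple threads share the allocator.

```c
#include <stddef.h>
#include <stdlib.h>

#define BLOCK_SIZE  64     /* must hold at least one pointer */
#define BLOCK_COUNT 128

typedef struct SubAllocator {
    void *segment;         /* the single large allocation  */
    void *free_list;       /* head of the free-block chain */
} SubAllocator;

static int sub_init(SubAllocator *sa)
{
    sa->segment = malloc((size_t)BLOCK_SIZE * BLOCK_COUNT);
    if (sa->segment == NULL)
        return -1;
    sa->free_list = NULL;
    /* Thread every block onto the free list; the first pointer-sized
       bytes of each free block store the link to the next one. */
    for (int i = 0; i < BLOCK_COUNT; i++) {
        char *block = (char *)sa->segment + (size_t)i * BLOCK_SIZE;
        *(void **)block = sa->free_list;
        sa->free_list = block;
    }
    return 0;
}

static void *sub_alloc(SubAllocator *sa)
{
    void *block = sa->free_list;
    if (block != NULL)
        sa->free_list = *(void **)block;   /* pop the head */
    return block;                          /* NULL when exhausted */
}

static void sub_free(SubAllocator *sa, void *block)
{
    *(void **)block = sa->free_list;       /* push onto the head */
    sa->free_list = block;
}

static void sub_destroy(SubAllocator *sa)
{
    free(sa->segment);     /* releases every block at once */
    sa->segment = sa->free_list = NULL;
}
```

Because only one large block is ever requested from the underlying allocator, the small allocations cause no external fragmentation.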

Some drivers need to manage many small fixed-size storage blocks. For example, a SCSI driver must provide SCSI request blocks (SRBs), which it uses to send commands to SCSI devices. The kernel provides two allocation mechanisms for this kind of use:

Zone buffers

A zone buffer is a pool block, allocated by the driver, that executive routines then manage as a collection of fixed-size blocks; the zone can live in either paged or nonpaged memory.

Pay attention to synchronization when using a zone buffer. In particular, if interrupt service, DPC, and dispatch routines all need access to the same zone, an executive spin lock should be used for synchronization. If all accesses occur at the same IRQL, a fast mutex can be used instead.

Before setting up a zone buffer, you must understand the ZONE_HEADER data structure, and the spin lock or fast mutex object guarding the zone must be declared and initialized. Zone management proceeds as follows:

1. Call ExAllocatePool to obtain the memory for the zone, then call ExInitializeZone to initialize it. This step is usually performed in the DriverEntry routine.

2. Call ExAllocateFromZone or ExInterlockedAllocateFromZone to take a block from the zone. The latter uses a spin lock to synchronize access to the zone; with the former, synchronization is left to the driver code.

3. Call ExFreeToZone or ExInterlockedFreeToZone to release an allocated block.

4. In the driver's Unload routine, call ExFreePool to release the entire zone. Before releasing the zone, make sure that all blocks in it have been returned.
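The four steps above might look like the following sketch. The sizes, names, and block count are illustrative assumptions, and the exact segment-size requirements of ExInitializeZone (which places a ZONE_SEGMENT_HEADER at the start of the segment) should be checked against the DDK documentation.

```c
#include <wdm.h>

/* Illustrative sizes: 32 blocks of 64 bytes each. BlockSize should be
   a multiple of 8, and the segment must leave room for the
   ZONE_SEGMENT_HEADER at its start. */
#define MY_BLOCK_SIZE   64
#define MY_SEGMENT_SIZE (sizeof(ZONE_SEGMENT_HEADER) + 32 * MY_BLOCK_SIZE)

ZONE_HEADER MyZone;        /* the zone being managed            */
KSPIN_LOCK  MyZoneLock;    /* guards the interlocked zone calls */
PVOID       MySegment;     /* the pool block backing the zone   */

/* Step 1 -- usually in DriverEntry. */
NTSTATUS SetupZone(VOID)
{
    KeInitializeSpinLock(&MyZoneLock);

    MySegment = ExAllocatePool(NonPagedPool, MY_SEGMENT_SIZE);
    if (MySegment == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    return ExInitializeZone(&MyZone, MY_BLOCK_SIZE,
                            MySegment, MY_SEGMENT_SIZE);
}

/* Steps 2 and 3 -- wherever the driver needs a block. */
VOID UseZone(VOID)
{
    PVOID block = ExInterlockedAllocateFromZone(&MyZone, &MyZoneLock);
    if (block != NULL) {
        /* ... use the fixed-size block ... */
        ExInterlockedFreeToZone(&MyZone, block, &MyZoneLock);
    }
}

/* Step 4 -- in Unload, once every block has been returned. */
VOID TeardownZone(VOID)
{
    ExFreePool(MySegment);
}
```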

A zone should be no larger than necessary. MmQuerySystemSize reports the total amount of memory available in the system. Another executive function, MmIsThisAnNtAsSystem, reports whether the current platform is running a Server edition of the operating system; a driver running on a Server edition can afford to allocate a somewhat larger zone.

If allocation from the zone fails, the driver can fall back on the standard pool to obtain the requested block. This strategy requires a field in the block structure recording whether the block came from the zone or from the pool, so that the appropriate routine can be called to release it.

ExExtendZone or ExInterlockedExtendZone can be used to enlarge a zone, but these functions are rarely used; the system does not seem to handle the additional zone segments properly. In fact, Microsoft has considered retiring the whole zone buffer abstraction: Windows 2000 provides a more efficient replacement, the lookaside list.

Lookaside lists

A lookaside list is a linked list of fixed-size storage blocks. Unlike a zone buffer, a lookaside list can grow and shrink dynamically with system conditions, so a well-tuned lookaside list wastes less storage.

The synchronization mechanism for lookaside lists is also more efficient than that of zone buffers. If the CPU architecture provides an 8-byte compare-exchange instruction, the Executive uses it for lock-free access to the list. On platforms without such an instruction, a spin lock is used for nonpaged lists and a fast mutex for paged lists.

Before using a lookaside list, you must declare an NPAGED_LOOKASIDE_LIST or PAGED_LOOKASIDE_LIST structure (depending on whether the storage is pageable). Lookaside list management proceeds as follows:

1. Call ExInitializeNPagedLookasideList or ExInitializePagedLookasideList to initialize the list header structure. Typically the DriverEntry or AddDevice routine performs this step.

2. Call ExAllocateFromNPagedLookasideList or ExAllocateFromPagedLookasideList to take a block from the list. These routines can be called from anywhere in the driver, subject to the usual IRQL restrictions.

3. Call ExFreeToNPagedLookasideList or ExFreeToPagedLookasideList to release a block.

4. Call ExDeleteNPagedLookasideList or ExDeletePagedLookasideList to release any resources associated with the list. This is usually done in the driver's Unload or RemoveDevice routine.
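For a nonpaged list, the four steps might be sketched like this. The structure, tag, and routine names around the Ex calls are invented for the example; passing NULL for the allocate and free callbacks asks the Executive to use its default pool routines.

```c
#include <wdm.h>

/* Hypothetical fixed-size request packet; name and layout illustrative. */
typedef struct _MY_REQUEST {
    LIST_ENTRY Link;
    ULONG      Opcode;
} MY_REQUEST, *PMY_REQUEST;

NPAGED_LOOKASIDE_LIST MyLookaside;

/* Step 1 -- typically in DriverEntry or AddDevice. */
VOID SetupLookaside(VOID)
{
    ExInitializeNPagedLookasideList(&MyLookaside,
                                    NULL,   /* default pool allocator */
                                    NULL,   /* default pool free      */
                                    0,      /* flags                  */
                                    sizeof(MY_REQUEST),
                                    'qRxE', /* arbitrary example tag  */
                                    0);     /* depth: system-chosen   */
}

/* Steps 2 and 3 -- anywhere in the driver. */
VOID UseLookaside(VOID)
{
    PMY_REQUEST req = (PMY_REQUEST)
        ExAllocateFromNPagedLookasideList(&MyLookaside);
    if (req != NULL) {
        req->Opcode = 1;
        /* ... use the block ... */
        ExFreeToNPagedLookasideList(&MyLookaside, req);
    }
}

/* Step 4 -- typically in Unload or RemoveDevice. */
VOID TeardownLookaside(VOID)
{
    ExDeleteNPagedLookasideList(&MyLookaside);
}
```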

The lookaside list initialization function merely sets up the list header; it does not actually allocate any blocks. One of its parameters specifies the maximum number of blocks the list may hold, called the depth of the list.

When the allocation function is called and the list is empty, the system allocates fresh storage. When a block is released, it is pushed onto the lookaside list until the maximum allowed depth is reached; any block released after that point is returned to the system. Over time, the number of blocks on the list tends to approach its depth.

Choose the depth of a lookaside list carefully. If it is too shallow, the system will frequently perform expensive allocate and release operations; if it is too deep, storage will sit idle and be wasted. The usage statistics kept in the list header structure can help you determine the right depth.
