Share memory between drivers and applications


Original: http://www.cnblogs.com/lzjsky/archive/2010/11/19/1881911.html


From time to time, driver writers need to share memory between a driver and a user-mode program. Two of the easiest techniques are:

1. The application sends an IOCTL to the driver and provides a pointer to the memory; thereafter the driver and the application can both access that memory. (The application allocates the shared memory.)

2. The driver allocates the memory pages and maps them into the address space of a specified user-mode process, returning the address to the application. (The driver allocates the shared memory.)

Sharing a buffer with an IOCTL:

Using a buffer described by an IOCTL is the simplest way to share memory between a driver and a user program. After all, the IOCTL is also the classic way for a driver to support I/O requests beyond ordinary reads and writes. The application calls the Win32 function DeviceIoControl(), placing the base address and length of the buffer to be shared in the output-buffer parameters (lpOutBuffer and nOutBufferSize). What a driver writer using this buffer-sharing method must decide is which buffering method to use for that particular IOCTL: METHOD_xxx_DIRECT or METHOD_NEITHER.
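For illustration, here is a minimal user-mode sketch of this pattern. The device name and the IOCTL code IOCTL_SHARE_BUFFER are hypothetical placeholders, and the call is shown synchronously for brevity, even though the discussion later recommends FILE_FLAG_OVERLAPPED so the IOCTL can remain pending while the buffer is shared.

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

// Hypothetical IOCTL code; METHOD_OUT_DIRECT matches the recommendation below.
#define IOCTL_SHARE_BUFFER CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, METHOD_OUT_DIRECT, FILE_ANY_ACCESS)

int main(void)
{
    HANDLE hDevice;
    BYTE*  sharedBuffer;
    DWORD  bytesReturned;

    // Hypothetical device name.
    hDevice = CreateFileW(L"\\\\.\\MySharedMemDevice", GENERIC_READ | GENERIC_WRITE,
                          0, NULL, OPEN_EXISTING, 0, NULL);
    if (hDevice == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    // The application allocates the memory to be shared.
    sharedBuffer = (BYTE*)VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

    // The buffer's base address and length go in the output-buffer parameters.
    if (!DeviceIoControl(hDevice, IOCTL_SHARE_BUFFER,
                         NULL, 0,
                         sharedBuffer, 4096,
                         &bytesReturned, NULL)) {
        printf("DeviceIoControl failed: %lu\n", GetLastError());
    }

    VirtualFree(sharedBuffer, 0, MEM_RELEASE);
    CloseHandle(hDevice);
    return 0;
}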

(PS: In the METHOD_xxx_DIRECT case, the I/O Manager creates an MDL for the output buffer specified by the application (OutputBuffer) and locks the user-mode buffer in memory. In the kernel we can then use MmGetSystemAddressForMdlSafe to obtain the kernel-mode address corresponding to the application's output buffer. The MDL address is placed in Irp->MdlAddress.)

If a METHOD_xxx_DIRECT method is used, the user buffer is probed for proper access and then locked into memory. The driver calls MmGetSystemAddressForMdlSafe to map the buffer into kernel address space. One advantage of this approach is that the driver can access the shared memory buffer in any process context and at any IRQL. Use METHOD_IN_DIRECT if you only need to pass data to the driver; use METHOD_OUT_DIRECT if you return data from the driver to the application or exchange data in both directions.

(PS: These checks are handled by the I/O Manager, which builds the MDL for the user buffer at that point. Because this happens before the request reaches the driver, the current context is still that of the process issuing the DeviceIoControl call, so the user-mode buffer is valid. By the time the I/O Manager sends the IRP down to lower drivers the context can no longer be guaranteed, but fortunately the I/O Manager has already created the MDL for us, so we can obtain the corresponding kernel address in the driver and read or write the data freely.)
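As a rough sketch of the driver side, assuming a METHOD_OUT_DIRECT IOCTL, a WDM IRP_MJ_DEVICE_CONTROL handler might obtain the kernel-mode alias of the user buffer like this. The function name is illustrative, and the pending/queueing logic discussed later is only hinted at in a comment.

#include <ntddk.h>

// Sketch of a METHOD_OUT_DIRECT IOCTL handler: the I/O Manager has already
// probed and locked the user buffer and placed the MDL in Irp->MdlAddress.
NTSTATUS HandleShareBufferIoctl(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    PIO_STACK_LOCATION irpSp = IoGetCurrentIrpStackLocation(Irp);
    ULONG length = irpSp->Parameters.DeviceIoControl.OutputBufferLength;
    PVOID kernelVa;

    UNREFERENCED_PARAMETER(DeviceObject);

    // Map the locked user pages into system address space; the resulting
    // address remains usable in any process context until the IRP completes.
    kernelVa = MmGetSystemAddressForMdlSafe(Irp->MdlAddress, NormalPagePriority);
    if (kernelVa == NULL)
    {
        Irp->IoStatus.Status = STATUS_INSUFFICIENT_RESOURCES;
        Irp->IoStatus.Information = 0;
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    // ... use the shared buffer through kernelVa and length here. For a
    // long-lived share, mark the IRP pending, set a cancel routine, queue
    // the IRP, and return STATUS_PENDING instead of completing it below ...

    Irp->IoStatus.Status = STATUS_SUCCESS;
    Irp->IoStatus.Information = length;
    IoCompleteRequest(Irp, IO_NO_INCREMENT);
    return STATUS_SUCCESS;
}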

Using METHOD_NEITHER to describe a shared memory buffer carries many inherent limitations and points of caution (essentially the same ones that apply any time a driver uses this method). The most important rule is that the driver may only access the buffer in the context of the process that originated the request, because it accesses the shared memory through the buffer's user-mode virtual address. This means the driver must sit at the top of the device stack and be called directly by the user application via the I/O Manager: there can be no intermediate drivers or file system drivers layered above our driver. In practice, a WDM driver is strictly limited to capturing the user buffer in its dispatch routine, and a KMDF driver must do so in its EvtIoInCallerContext event callback.

Another important inherent limitation is that a driver accessing a user buffer with METHOD_NEITHER must be running at PASSIVE_LEVEL, because the I/O Manager does not lock the buffer into memory, so the pages may be paged out when the driver touches them. If the driver cannot meet this requirement, it must create an MDL itself and lock the shared buffer into memory, as sketched below.

(PS: METHOD_NEITHER is not recommended; it is usually better to use direct I/O.)
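If METHOD_NEITHER must be used after all, the following is only a rough sketch of capturing the output buffer in the caller's context, at PASSIVE_LEVEL, building and locking the driver's own MDL as described above. The function name is illustrative and error handling is abbreviated.

#include <ntddk.h>

// Capture a METHOD_NEITHER output buffer in the requesting process's context.
NTSTATUS CaptureNeitherBuffer(PIRP Irp, ULONG Length, PMDL* LockedMdl, PVOID* SystemVa)
{
    PMDL mdl = NULL;

    *LockedMdl = NULL;
    *SystemVa  = NULL;

    __try
    {
        // Validate the raw user address passed in Irp->UserBuffer.
        ProbeForWrite(Irp->UserBuffer, Length, sizeof(UCHAR));

        mdl = IoAllocateMdl(Irp->UserBuffer, Length, FALSE, FALSE, NULL);
        if (mdl == NULL)
        {
            return STATUS_INSUFFICIENT_RESOURCES;
        }

        // Lock the user pages so they stay resident while we use them.
        MmProbeAndLockPages(mdl, UserMode, IoWriteAccess);
    }
    __except (EXCEPTION_EXECUTE_HANDLER)
    {
        if (mdl)
        {
            IoFreeMdl(mdl);
        }
        return GetExceptionCode();
    }

    // Map the locked pages into system space for use in arbitrary context.
    *SystemVa = MmGetSystemAddressForMdlSafe(mdl, NormalPagePriority);
    if (*SystemVa == NULL)
    {
        MmUnlockPages(mdl);
        IoFreeMdl(mdl);
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    *LockedMdl = mdl;
    return STATUS_SUCCESS;
}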

In addition, regardless of the transfer type chosen, an indirect but significant limitation of this approach is that the shared memory must be allocated by the user-mode application. Quota limits restrict how much memory can be allocated, and the user application cannot allocate physically contiguous or non-cached memory. Of course, if all the driver and the user-mode application need is a reasonably sized buffer for passing data in and out, this technique may be the simplest and most practical.

For all its simplicity, sharing memory between a driver and a user-mode application via an IOCTL is also the most commonly misused approach. A new Windows driver developer using this scheme often makes the mistake of completing the IOCTL as soon as the driver has retrieved the buffer address. That is a very bad idea. Why? What happens if the application suddenly exits, intentionally or otherwise? Another problem arises with METHOD_xxx_DIRECT: once the IRP carrying the MDL is completed, the buffer is no longer mapped into system address space, and any attempt to access the previously valid kernel virtual address (obtained from MmGetSystemAddressForMdlSafe) will crash the system. This must be avoided.

One solution to this problem is for the application to open the device with FILE_FLAG_OVERLAPPED and issue the IOCTL using an OVERLAPPED structure. The driver can then set a cancel routine for the IRP (IoSetCancelRoutine), mark the IRP as pending (IoMarkIrpPending), put the IRP on an internal queue, and return STATUS_PENDING to the caller. A KMDF driver handles this kind of problem easily by simply keeping the request in progress on a WDFQUEUE, which takes care of cancellation, as sketched below.
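A minimal KMDF sketch of that pending-request pattern, assuming a manual queue configured with an EvtIoCanceledOnQueue callback; the queue-creation helper and the comments about where to tear down the mapping are illustrative, not part of the original article.

#include <ntddk.h>
#include <wdf.h>

// Called by the framework if the application exits or cancels the I/O while
// the request is parked on the manual queue.
VOID SharedMemEvtIoCanceledOnQueue(WDFQUEUE Queue, WDFREQUEST Request)
{
    UNREFERENCED_PARAMETER(Queue);

    // Unmap/free the shared buffer here (see the cleanup code later in the
    // article), then complete the request.
    WdfRequestComplete(Request, STATUS_CANCELLED);
}

NTSTATUS CreatePendingQueue(WDFDEVICE Device, WDFQUEUE* Queue)
{
    WDF_IO_QUEUE_CONFIG queueConfig;

    // A manual queue: requests parked here stay pending until the driver
    // retrieves them or the framework cancels them.
    WDF_IO_QUEUE_CONFIG_INIT(&queueConfig, WdfIoQueueDispatchManual);
    queueConfig.EvtIoCanceledOnQueue = SharedMemEvtIoCanceledOnQueue;

    return WdfIoQueueCreate(Device, &queueConfig, WDF_NO_OBJECT_ATTRIBUTES, Queue);
}

// In EvtIoDeviceControl, after obtaining/mapping the buffer:
//     status = WdfRequestForwardToIoQueue(Request, PendingQueue);
//     if (!NT_SUCCESS(status)) {
//         WdfRequestComplete(Request, status);
//     }
//     // do not complete the request here; it remains pending on the queue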

(PS: Handle the MDL carefully; if the user-mode program exits unexpectedly, the virtual memory described by the MDL becomes invalid.)

There are two advantages of using this method:

1. When the application gets ERROR_IO_PENDING back from the IOCTL call, it knows the buffer is mapped; and when the IOCTL finally completes, it knows the buffer has been unmapped.

2. Via a cancel routine (WDM) or an EvtIoCanceledOnQueue event callback (KMDF), the driver is notified when the application exits or cancels the I/O request, so it can perform the operations needed to complete the IOCTL, including unmapping the memory described by the MDL.

Allocating and mapping pages:

 

Now for the second method mentioned earlier: allocating memory pages in the driver and mapping them into the user virtual address space of a particular process. This method is surprisingly easy, uses APIs familiar to most Windows driver writers, and gives the driver maximum control over the type of memory allocated.

The driver can allocate the memory to be shared using any of the standard methods. For example, if the driver needs a device (logical) address suitable for DMA, as well as a kernel virtual address for the memory block, it can allocate the memory with AllocateCommonBuffer. If no special memory characteristics are required and the amount of memory to share is modest, the driver can allocate zero-filled, non-paged physical pages for the buffer.

Zero-filled, non-paged pages are allocated from main memory using MmAllocatePagesForMdl or MmAllocatePagesForMdlEx. These functions return an MDL describing the allocated memory. The driver uses MmGetSystemAddressForMdlSafe to map the pages described by the MDL into kernel virtual address space. Allocating pages directly from main memory in this way is also a better choice than drawing the memory from the paged or non-paged pool.

PS: Here the kernel allocates the memory space: the pages are allocated from main memory with MmAllocatePagesForMdl, which returns an MDL. For the driver to use the shared memory, call MmGetSystemAddressForMdlSafe to get its kernel address. For the application to use the shared memory, MmMapLockedPagesSpecifyCache maps it into the application process's address space and returns the starting user-mode address, which is passed back to the user application in the IOCTL.

With an MDL describing the shared memory, the driver is now ready to map those pages into the user process's address space. This is done with MmMapLockedPagesSpecifyCache. The tricks to calling this function are:

You must call this function in the context of the process where you want to map the buffer.

PS: If it is called in some other process's context, the buffer ends up mapped into that other process. So how can we make sure it is called in the context of the process we want the buffer mapped into?

Set the AccessMode parameter to UserMode. The return value of the MmMapLockedPagesSpecifyCache call is the user virtual address at which the pages described by the MDL are mapped. The driver can place it in the corresponding IOCTL output buffer and return it to the user application.

You also need a way to clean up the allocated memory when it is no longer needed. In other words, you must call MmFreePagesFromMdl to free the memory pages and IoFreeMdl to release the MDL created by MmAllocatePagesForMdl(Ex). You will almost always do this in your driver's IRP_MJ_CLEANUP processing routine (WDM) or EvtFileCleanup event callback (KMDF).

Putting this all together, the code to carry out this process is shown below.

NTSTATUS CreateAndMapMemory(OUT PMDL* PMemMdl, OUT PVOID* UserVa)
{
    PMDL             Mdl;
    PVOID            UserVAToReturn;
    PHYSICAL_ADDRESS LowAddress;
    PHYSICAL_ADDRESS HighAddress;
    SIZE_T           TotalBytes;

    // Initialize the physical address range required by MmAllocatePagesForMdl
    LowAddress.QuadPart  = 0;
    HighAddress.QuadPart = -1;          // no upper limit on physical address
    TotalBytes           = PAGE_SIZE;

    // Allocate 4K of shared buffer
    Mdl = MmAllocatePagesForMdl(LowAddress,
                                HighAddress,
                                LowAddress,   // SkipBytes
                                TotalBytes);
    if (!Mdl)
    {
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    // Map the shared buffer into user address space. With AccessMode ==
    // UserMode this call raises an exception on failure, so wrap it.
    UserVAToReturn = NULL;
    __try
    {
        UserVAToReturn = MmMapLockedPagesSpecifyCache(Mdl,
                                                      UserMode,
                                                      MmCached,
                                                      NULL,
                                                      FALSE,
                                                      NormalPagePriority);
    }
    __except (EXCEPTION_EXECUTE_HANDLER)
    {
        UserVAToReturn = NULL;
    }

    if (!UserVAToReturn)
    {
        MmFreePagesFromMdl(Mdl);
        IoFreeMdl(Mdl);
        return STATUS_INSUFFICIENT_RESOURCES;
    }

    // Return the MDL and the user-mode virtual address to the caller
    *UserVa  = UserVAToReturn;
    *PMemMdl = Mdl;

    return STATUS_SUCCESS;
}


Of course, this approach also has a drawback: the call to MmMapLockedPagesSpecifyCache must be made in the context of the process into which you want the pages mapped. In that respect it appears no more flexible than the METHOD_NEITHER IOCTL approach. Unlike that approach, however, only a single function call (MmMapLockedPagesSpecifyCache) has to be made in the target process context. Because many special-purpose device drivers have a single device attached directly to the bus driver (that is, no other drivers on the device stack above them), this condition is easy to meet. For those few drivers that live deep in a device stack yet need to share a buffer directly with a user-mode application, an enterprising driver writer can usually find a safe place to make the call in the context of the requesting process.

After the pages are mapped, the shared memory can be accessed in any process context, just as with the METHOD_xxx_DIRECT IOCTL method, and at elevated IRQL (because the shared memory is non-paged).

PS: One thing we need to work out is where we can safely call MmMapLockedPagesSpecifyCache in the context of the target process. The other point is that the shared memory is non-paged, so it can be accessed at elevated IRQL.

If you use this method, there is one critically important point to keep in mind: you must make sure your driver provides a way to unmap the pages from user space whenever the user process exits, however that happens. Failing to do so will crash the system when the application exits. We have found that an easy way to handle this is to unmap the pages whenever the application closes its handle to the device. Because the handle is always closed when the application exits, expectedly or otherwise, the driver receives an IRP_MJ_CLEANUP for the file object the application opened, so you can be confident this will work. Do this work in cleanup, not close, because the cleanup IRP is guaranteed to arrive in the context of the requesting thread. The following code shows how the allocated resources are released.

VOID UnmapAndFreeMemory(PMDL PMdl, PVOID UserVa)
{
    if (!PMdl)
    {
        return;
    }

    // Unmap the pages from the user address space
    MmUnmapLockedPages(UserVa, PMdl);

    // Free the physical pages described by the MDL
    MmFreePagesFromMdl(PMdl);

    // Free the MDL itself
    IoFreeMdl(PMdl);
}


Other challenges:

Whichever mechanism is used, the driver and the application will need an agreed-upon way to synchronize access to the shared memory, and this can be done in many ways. Perhaps the simplest mechanism is to share one or more named events. The simplest way for the application and driver to share an event is for the application to create the event and pass its handle to the driver, which then references the event object from the application's context (as sketched below). If you use this method, do not forget to dereference the event object in the driver's cleanup code.

PS: Be sure to take a reference on the event object passed in from the application.
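As a rough sketch of the driver side of that handshake: the handle must be referenced in the context of the process that created the event. The IOCTL plumbing that delivers UserEventHandle is omitted, and the global variable is only for illustration (a real driver would keep the pointer in its device or file context).

#include <ntddk.h>

// Illustrative global; in practice store this in a device/file context.
PKEVENT SharedEvent = NULL;

// Must be called in the context of the process that owns the handle, e.g.
// from the dispatch routine that receives the IOCTL carrying the handle.
NTSTATUS ReferenceUserEvent(HANDLE UserEventHandle)
{
    return ObReferenceObjectByHandle(UserEventHandle,
                                     EVENT_MODIFY_STATE,
                                     *ExEventObjectType,
                                     UserMode,
                                     (PVOID*)&SharedEvent,
                                     NULL);
}

// Signal the application, for example when new data is in the shared buffer:
//     KeSetEvent(SharedEvent, IO_NO_INCREMENT, FALSE);
//
// And in the cleanup path, drop the reference taken above:
//     if (SharedEvent) { ObDereferenceObject(SharedEvent); SharedEvent = NULL; }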

Summary:

We looked at two ways to share memory between a driver and a user-mode application:

1. The user-mode application creates a buffer and passes it to the driver via an IOCTL.

2. The driver allocates memory pages with MmAllocatePagesForMdl, obtains the MDL, maps the memory described by the MDL into the user-mode address space (MmMapLockedPagesSpecifyCache), and returns the resulting user-mode starting address to the application via the IOCTL.

Translator Note:

When you use named events to synchronize driver and application access to a shared buffer, you generally should not have the driver create a named event and have the application open it by name. Although this approach lets the driver signal the event so that all interested applications are woken up, which is convenient for application development, it has two problems. First, a named event can only be created correctly after the Win32 subsystem has started, which constrains driver initialization. The more serious problem is that an event created in the driver requires a high access level: under Windows XP, only an application run by a user in the Administrators group can access it, and the problem is exacerbated by the hardened security features in Vista. So try to use events created by the application, or use other synchronization methods.
