A D3D Solution for YUV Playback in WPF (the wpfyuvd3d sample solution)
When building video playback and monitoring systems, you often need to display raw YUV data. Playback controls and SDKs typically take a window handle and render directly into that window with DirectDraw. If the user interface is built with WPF, however, such a control can usually only be embedded through WindowsFormsHost, and this runs into the well-known airspace problem: the WinForms control always floats above the WPF content, covering any WPF element, and resizing or dragging gives a poor user experience. The root cause is that WPF and WinForms use different rendering technologies.
To support YUV display natively in WPF, the common solution is to first convert the YUV data into RGB data that WPF can consume, and then display it through something like WriteableBitmap. The main problem with this approach is that the RGB conversion takes a lot of CPU and is relatively inefficient. One optimization is to use swscale from FFmpeg or Intel's IPP library; both are well optimized and provide limited hardware acceleration. The following is an example using WriteableBitmap.
WriteableBitmap imageSource = new WriteableBitmap(videoWidth, videoHeight,
    DPI_X, DPI_Y, System.Windows.Media.PixelFormats.Bgr32, null);
...
int rgbSize = width * height * 4; // Bgr32: 4 bytes per pixel
IntPtr rgbPtr = Marshal.AllocHGlobal(rgbSize);
YV12ToRgb(yv12Ptr, rgbPtr, width, height); // CPU color-space conversion
// update the image
imageSource.Lock();
Interop.Memcpy(imageSource.BackBuffer, rgbPtr, rgbSize);
imageSource.AddDirtyRect(this.imageSourceRect); // Int32Rect covering the whole frame
imageSource.Unlock();
Marshal.FreeHGlobal(rgbPtr);
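YV12ToRgb and Interop.Memcpy above are the author's own helpers. For completeness, here is a minimal sketch of what such a Memcpy helper might look like, implemented as a P/Invoke wrapper over the Win32 RtlMoveMemory export (the class and method names are just illustrative assumptions, not part of the original project):
using System;
using System.Runtime.InteropServices;

internal static class Interop
{
    // Copies 'count' bytes from 'src' to 'dest' using the native RtlMoveMemory routine.
    [DllImport("kernel32.dll", EntryPoint = "RtlMoveMemory")]
    private static extern void Memcpy(IntPtr dest, IntPtr src, UIntPtr count);

    // Convenience overload matching the int-sized buffer lengths used above.
    public static void Memcpy(IntPtr dest, IntPtr src, int count)
    {
        Memcpy(dest, src, (UIntPtr)count);
    }
}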
Another solution is to use D3DImage as a bridge between WPF and the video card: D3DImage lets us hand a rendered D3D surface directly to WPF for display. A good reference is how VMR9 is used in WPF. VMR9 (Video Mixing Renderer 9) is a renderer filter that Microsoft provides for DirectShow. After studying the VMR9-related code in WPFMediaKit, the core idea is this: when the VMR9 renderer is initialized, DirectShow is made to output a Direct3D 9 surface, and D3DImage uses that surface as its back buffer. Whenever a new video frame has been rendered onto the surface, VMR9 raises an event, and on receiving it D3DImage refreshes its back buffer. The following code shows the core idea.
private VideoMixingRenderer9 CreateRenderer()
{
    var result = new VideoMixingRenderer9();
    var cfg = result as IVMRFilterConfig9;
    cfg.SetNumberOfStreams(1);
    cfg.SetRenderingMode(VMR9Mode.Renderless);
    var notify = result as IVMRSurfaceAllocatorNotify9;
    var allocator = new Vmr9Allocator();
    notify.AdviseSurfaceAllocator(m_userId, allocator);
    allocator.AdviseNotify(notify);
    // Register for the event raised when a new video frame has been rendered
    allocator.NewAllocatorFrame += new Action(allocator_NewAllocatorFrame);
    // Register to receive events when a new D3D surface is created
    allocator.NewAllocatorSurface += new NewAllocatorSurfaceDelegate(allocator_NewAllocatorSurface);
    return result;
}
void allocator_NewAllocatorSurface(object sender, IntPtr pSurface)
{
    // For clarity, only the core part is kept; other parts are omitted
    ...
    // Set pSurface as the BackBuffer of the D3DImage
    this.m_d3dImage.Lock();
    this.m_d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, pSurface);
    this.m_d3dImage.Unlock();
    ...
}
void allocator_NewAllocatorFrame()
{
    ...
    // redraw
    this.m_d3dImage.Lock();
    this.m_d3dImage.AddDirtyRect(new Int32Rect(0, /* Left */
        0, /* Top */
        this.m_d3dImage.PixelWidth, /* Width */
        this.m_d3dImage.PixelHeight /* Height */));
    this.m_d3dImage.Unlock();
    ...
}
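On the WPF side, the m_d3dImage used above is just an ordinary D3DImage assigned as the Source of an Image element. A minimal sketch follows; videoImage is an assumed x:Name from the hosting XAML and the method names are illustrative, not taken from the original code:
private void InitializeD3DImage()
{
    // XAML: <Image x:Name="videoImage" Stretch="Uniform" />
    this.m_d3dImage = new D3DImage();
    this.m_d3dImage.IsFrontBufferAvailableChanged += this.OnIsFrontBufferAvailableChanged;
    this.videoImage.Source = this.m_d3dImage;
}

private void OnIsFrontBufferAvailableChanged(object sender, DependencyPropertyChangedEventArgs e)
{
    // If the front buffer is lost (e.g. remote desktop or a lost device), rendering should
    // pause; once it becomes available again, re-create the surface and call SetBackBuffer.
}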
As a result, as long as playback goes through DirectShow, VMR9 lets you display video in WPF perfectly. However, DirectShow cannot solve every problem. For example, when doing interactive video effects or video overlay, a fixed DirectShow filter graph is hard to bend to the requirements, and sometimes you still need a convenient way to render directly.
The VMR9 example shows that the key is to obtain a D3D9 surface and render onto it. The remaining problem is how to render YUV data onto a D3D9 surface.
D3D does not directly support YUV image formats, so we need a way to get D3D to render YUV data. While rewriting the code in C#, I found that D3D already provides a simpler way to perform the YUV-to-RGB color-space conversion for us, directly supported by the graphics hardware and therefore very efficient. The key is the StretchRectangle method of the D3D device.
public void StretchRectangle(
Surface sourceSurface,
Rectangle sourceRectangle,
Surface destSurface,
Rectangle destRectangle,
TextureFilter filter
);
StretchRectangle copies the contents of a region of one surface to a specified region of another surface. During the copy, as long as the source format is supported by the video card (for example YV12 or YUY2), D3D performs the pixel-format conversion automatically. So we only need to create a D3D offscreen plain surface with the desired pixel format, fill it with the raw frame data, and call StretchRectangle to copy it to the target surface; the result is the surface we want, and the rest is handed over to D3DImage. The following is the core part of the sample code.
public void Render(IntPtr imgBuffer)
{
    lock (this.renderLock)
    {
        // Fill the raw image data into the offscreen surface
        this.FillBuffer(imgBuffer);
        // Call StretchRectangle to copy (and convert) the frame into the texture surface
        this.StretchSurface();
        // Perform the rendering operations
        this.CreateScene();
    }
    // Tell D3DImage to refresh the image
    this.InvalidateImage();
}
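FillBuffer and StretchSurface are helpers from the sample project. The sketch below shows, under some simplifying assumptions, what the surface creation, fill, and stretch steps might look like with a SlimDX-style Direct3D9 wrapper (using SlimDX.Direct3D9 and System.Drawing); the field and method names (device, offscreenSurface, textureSurface, videoWidth, videoHeight) are illustrative and not taken from the original project, and the copy assumes the locked pitch equals the frame width, which real code must not assume:
// YV12 is a FourCC format, expressed by casting MAKEFOURCC('Y','V','1','2') to Format.
private static readonly Format Yv12Format = (Format)0x32315659;

private void CreateSurfaces()
{
    // Staging surface that holds the raw YV12 frame.
    // (Driver support can be queried up front with IDirect3D9::CheckDeviceFormatConversion.)
    this.offscreenSurface = Surface.CreateOffscreenPlain(this.device,
        this.videoWidth, this.videoHeight, Yv12Format, Pool.Default);

    // RGB render target that D3DImage will ultimately display.
    this.textureSurface = Surface.CreateRenderTarget(this.device,
        this.videoWidth, this.videoHeight, Format.X8R8G8B8,
        MultisampleType.None, 0, false);
}

private void FillBuffer(IntPtr yv12Buffer)
{
    // A YV12 frame is width*height luma bytes followed by two quarter-size chroma planes.
    int frameSize = this.videoWidth * this.videoHeight * 3 / 2;
    DataRectangle rect = this.offscreenSurface.LockRectangle(LockFlags.None);
    // Simplification: assumes rect.Pitch == videoWidth; real code must copy row by row
    // when the pitch differs. Memcpy is the P/Invoke helper sketched earlier.
    Interop.Memcpy(rect.Data.DataPointer, yv12Buffer, frameSize);
    this.offscreenSurface.UnlockRectangle();
}

private void StretchSurface()
{
    // The video card converts YV12 to RGB during this copy.
    var frameRect = new Rectangle(0, 0, this.videoWidth, this.videoHeight);
    this.device.StretchRectangle(this.offscreenSurface, frameRect,
        this.textureSurface, frameRect, TextureFilter.Linear);
}
CreateScene and InvalidateImage are not shown here; per the Render method above, they draw the converted surface and then tell the D3DImage to refresh, which corresponds to the Lock/AddDirtyRect/Unlock sequence from the VMR9 example.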
The above is a D3D solution for YUV playback in WPF. I hope it is helpful; if you have any questions, please leave a comment and I will reply as soon as I can.