On the PC, YUV video formats such as YV12 and YUY2 have traditionally been displayed with DirectDraw, using the graphics card's overlay surface. Overlay is a hardware technique, implemented on the graphics card, that was originally designed to solve the problem of playing VCDs on a PC. Early PCs had limited processing power: when playing a VCD, the CPU had to do not only the video decoding but also the YUV-to-RGB color space conversion, and doing that conversion in software was very expensive. The YUV overlay surface was introduced to move color space conversion onto the graphics card, which has a natural advantage at this kind of work.
As graphics technology developed, the limitations of overlay became increasingly apparent. Most graphics cards support only a single overlay surface, so multi-picture display is hard to achieve with overlay, overlaying text on video is awkward, and many other effects are harder still. More importantly, overlay belongs to the card's 2D module, while, driven by demanding 3D games, the function and performance of today's graphics cards are concentrated in the 3D module; that is where vendors invest most of the GPU. Overlay cannot exploit the 3D performance of the GPU at all. Microsoft also stopped developing DirectDraw long ago and encourages developers to move to Direct3D, so overlay cannot benefit from the newer APIs.
Early 3D rendering was done mainly on the CPU, with the graphics card contributing little. As GPU processing power grew, the card took over more and more of the rendering work, at first through a fixed-function pipeline: all rendering algorithms were built into the card, and programmers could only combine them. Today's cards have a programmable pipeline, which means we can write our own rendering algorithms, download them to the card, and run them in place of the fixed-function algorithms, greatly improving flexibility. With the growth of GPU performance, graphics cards are now widely used in image processing, video processing, scientific computing, and other fields.
YUV video can be rendered with D3D using either D3D surfaces or D3D textures. Surface rendering is simpler but more limited, so only texture-based video rendering is discussed below. Texture rendering means filling the video data into two-dimensional textures and, together with a pixel shader we write ourselves, sending it through the GPU's rendering pipeline to be drawn.
The following assumes that the reader is familiar with the details of the video formats discussed.
For YV12 video data, create three textures and fill them with the Y, U, and V planes respectively. The missing U and V samples can be interpolated using the card's built-in bilinear filtering, which is the simplest approach. We can also use a better image interpolation algorithm for higher image quality; that requires implementing the interpolation ourselves in a shader and rendering the U and V planes to textures first, which gives better interpolation than bilinear filtering at the cost of higher complexity and lower efficiency. After U and V interpolation, the video data has been upsampled from YUV420 to YUV444; the color space conversion from YUV444 to RGB32 is then a very simple piece of shader code, just a matrix multiplication. I420 data is simply YV12 with the U and V planes swapped in memory; everything else is handled the same way as YV12.
In the NV12 format, the Y plane is the same as in YV12, but the UV part is packed (interleaved). The UV part therefore needs separate handling: fill it into one texture, then perform two simple render-to-texture passes on it, so that the U and V data end up in two separate textures. After this processing the data is the same as YV12 data, and the rest can follow the YV12 procedure and methods.
For the packed YUY2 format, the missing U and V samples must be interpolated at render time. In YUY2 only the odd pixels lack U and V samples, so the processing has to distinguish odd from even pixels: even pixels need no chroma interpolation and can be color-converted directly, while odd pixels interpolate from the neighboring U and V samples, using either the Catmull-Rom or the linear interpolation algorithm. After interpolation, the color space conversion can be applied. UYVY is handled similarly to YUY2.
As for RGB formats, the NVIDIA GeForce 9800 GT card I used supports RGB32, RGB555, and RGB565, but not RGB24. For RGB24, therefore, the shader code has to extract the R, G, and B components and reassemble them into RGB32 for output. RGB32, RGB555, and RGB565 need no extra processing and can be rendered directly.
For text overlay, I used the CD3DFont class provided by Microsoft, but that class does not support Chinese, so I made some modifications on top of it and implemented a text overlay class that does. The two classes work together and can implement text overlay on video efficiently.
The methods discussed above were verified with Direct3D 9.0c. I do not know much about OpenGL, but I believe they should work with OpenGL as well; after all, the two 3D architectures offer similar functionality on the PC.
Rendering video with 3D makes a variety of effects easy to implement with very good performance: multi-picture display, picture-in-picture, text overlay, and scaling, for example. Combined with other 3D techniques, you can create impressive effects limited only by your imagination.
From: http://blog.csdn.net/dengzikun/article/details/5807694
Rendering YUV video data using D3D