Author: Lu Qiming
Compiled on: 2004/12/27
As you know, the Video Renderer (VR) filter receives raw RGB/YUV data and displays it on the screen. To improve drawing performance, VR prefers to use DirectDraw and overlay surfaces, depending on the capabilities of your graphics card; if the card does not support these features, VR falls back to drawing with GDI functions. When an upstream filter connects to VR, VR first requires an RGB format matching the color depth of the current display. For example, if your machine is set to 24-bit color, VR first asks for a connection media type of RGB24. If your video card supports YUV overlay surfaces, VR dynamically changes the connection media type while the filter graph runs and asks the upstream filter to output a suitable YUV format. The IVideoWindow interface is implemented on the VR filter; the filter graph manager uses this interface to control the video window.
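To see which path VR actually took on a given machine, you can inspect the media type negotiated on its input pin after the graph is built. The following is only a minimal sketch: the file path is a placeholder, error handling is omitted, and the choice to add the legacy Video Renderer explicitly (so we keep a pointer to it) is my assumption, not something prescribed by the original text.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

int main()
{
    CoInitialize(NULL);
    IGraphBuilder *pGraph = NULL;
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void**)&pGraph);

    // Add the legacy Video Renderer explicitly so we hold a pointer to it;
    // RenderFile will then connect the decoded video stream to this instance.
    IBaseFilter *pVR = NULL;
    CoCreateInstance(CLSID_VideoRenderer, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&pVR);
    pGraph->AddFilter(pVR, L"Video Renderer");
    pGraph->RenderFile(L"C:\\test.avi", NULL);   // placeholder path

    // Find the renderer's input pin and read its connection media type.
    IEnumPins *pEnum = NULL;
    IPin *pPin = NULL;
    pVR->EnumPins(&pEnum);
    while (pEnum->Next(1, &pPin, NULL) == S_OK)
    {
        PIN_DIRECTION dir;
        pPin->QueryDirection(&dir);
        if (dir == PINDIR_INPUT)
        {
            AM_MEDIA_TYPE mt;
            if (SUCCEEDED(pPin->ConnectionMediaType(&mt)))
            {
                // mt.subtype shows what was negotiated at connection time,
                // e.g. MEDIASUBTYPE_RGB24 vs. MEDIASUBTYPE_YUY2 / YV12.
                if (mt.pbFormat) CoTaskMemFree(mt.pbFormat);
                if (mt.pUnk) mt.pUnk->Release();
            }
            pPin->Release();
            break;
        }
        pPin->Release();
    }
    pEnum->Release();
    pVR->Release();
    pGraph->Release();
    CoUninitialize();
    return 0;
}
```

Note that, as described above, VR may still renegotiate to a YUV format dynamically once the graph runs, so the type seen at connection time is only the starting point.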
So what does the Overlay Mixer do? Put simply, the Overlay Mixer is a filter that can combine several video streams and output them together. It was designed mainly for DVD playback (where sub-picture or line-21 data must be superimposed on the video) and for broadcast video streams (which also carry line-21 data). It also supports the Video Port Extensions used by hardware decoders, i.e. bypassing the PCI bus and sending hardware-decoded data directly to the video card for display. This filter likewise relies on the DirectDraw capabilities of the video card and requires an overlay surface. The Overlay Mixer has one output pin, whose media type is MEDIATYPE_Video with subtype MEDIASUBTYPE_Overlay, and a Video Renderer is usually connected downstream. When the filter graph runs, the actual image display is done by the Overlay Mixer, while the Video Renderer only manages the video window. A related and more common filter is Overlay Mixer 2, which provides the same functionality; the two filters differ only in the format types they support and in their merit values.
The Overlay Mixer composes several videos using color keying: it sends the color key and the sub-picture (or line-21) data to the primary surface and the primary video data to the overlay surface; the video card then combines the two surfaces and sends the result to the frame buffer for display. In the typical case the Overlay Mixer uses three input pins: pin 0 receives the primary video, while pin 1 and pin 2 receive sub-picture and line-21 data. Internally, the Overlay Mixer creates the overlay surface based on the data arriving on pin 0. Upstream, the Overlay Mixer connects to a video decoder. If this is a software decoder, data transfer on pin 0 uses the standard IMemInputPin interface; if hardware acceleration is used, pin 0 must use the IAMVideoAccelerator interface. (Note that these two interfaces are never used at the same time!) If the upstream filter is a wrapper filter for a hardware decoder and outputs through a VP (video port) pin, the decoder and the Overlay Mixer coordinate their work through the IVPConfig and IVPNotify interfaces. The Overlay Mixer does not support capture devices on 1394 or USB interfaces. Downstream, the Overlay Mixer is usually connected to the Video Renderer, which in this configuration acts only as a video window manager; the two filters coordinate their work through the IOverlay and IOverlayNotify interfaces. (The Video Renderer's input pin has two connection modes: when VR draws the images itself, it receives video stream data through the IMemInputPin interface; when the Overlay Mixer draws the images, VR communicates with the upstream filter through the IOverlay interface, and no video data is transferred between the Overlay Mixer and VR. Note that these two interfaces are not used at the same time!)
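The decoder → Overlay Mixer → Video Renderer chain described above can also be built by hand. Below is a minimal sketch under my own assumptions (pDecoder is a video decoder already added to the graph with one output pin; the GetPin helper and all names are illustrative; error handling is omitted); it is not a definitive implementation.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

// Return the first pin of the requested direction (caller must Release it).
IPin* GetPin(IBaseFilter *pFilter, PIN_DIRECTION dirWanted)
{
    IEnumPins *pEnum = NULL;
    IPin *pPin = NULL;
    pFilter->EnumPins(&pEnum);
    while (pEnum->Next(1, &pPin, NULL) == S_OK)
    {
        PIN_DIRECTION dir;
        pPin->QueryDirection(&dir);
        if (dir == dirWanted)
        {
            pEnum->Release();
            return pPin;
        }
        pPin->Release();
    }
    pEnum->Release();
    return NULL;
}

void BuildOverlayChain(IGraphBuilder *pGraph, IBaseFilter *pDecoder)
{
    IBaseFilter *pMixer = NULL, *pVR = NULL;
    CoCreateInstance(CLSID_OverlayMixer, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&pMixer);
    CoCreateInstance(CLSID_VideoRenderer, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&pVR);
    pGraph->AddFilter(pMixer, L"Overlay Mixer");
    pGraph->AddFilter(pVR, L"Video Renderer");

    // Decoder output -> Overlay Mixer pin 0 (primary video).
    IPin *pDecOut = GetPin(pDecoder, PINDIR_OUTPUT);
    IPin *pMixIn  = GetPin(pMixer,   PINDIR_INPUT);
    pGraph->Connect(pDecOut, pMixIn);

    // Overlay Mixer output (MEDIATYPE_Video / MEDIASUBTYPE_Overlay) -> VR input.
    IPin *pMixOut = GetPin(pMixer, PINDIR_OUTPUT);
    IPin *pVRIn   = GetPin(pVR,    PINDIR_INPUT);
    pGraph->Connect(pMixOut, pVRIn);

    pDecOut->Release(); pMixIn->Release();
    pMixOut->Release(); pVRIn->Release();
    pMixer->Release();  pVR->Release();
}
```

Because the Overlay Mixer's merit value is MERIT_DO_NOT_USE (see question 4 below), adding it explicitly like this is the usual way to get it into a graph outside of DVD playback.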
As you can see, the Video Renderer and the Overlay Mixer overlap in functionality. The Video Renderer was designed first, and many use cases were not anticipated at the time, so the Overlay Mixer was added as a kind of "patch". So why not integrate the two? That is exactly what Microsoft did. Windows XP (Home and Professional) introduced a new filter, also registered under the name "Video Renderer" but with a different CLSID and a different merit value, which replaces the original default Video Renderer. This new filter, called the Video Mixing Renderer Filter 7 (VMR-7), uses DirectDraw 7 internally. The VMR is thus the new generation of video renderer on Windows. Note that this filter ships only with Windows XP and is not available in any DirectX redistributable. The main features of the VMR-7 are: alpha blending of up to 16 input streams; access to the composited image before it is displayed; and support for plugging in third-party video effects and transition components. In addition, the VMR does not require an RGB media type when connecting, because it never falls back to drawing with GDI.
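To use the VMR-7's mixing capability, the application adds the filter to the graph itself and enables mixing before its input pins are connected. The sketch below is only illustrative (placeholder file names, no error handling), assuming Windows XP where the VMR-7 is available.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

void PlayTwoStreamsWithVMR7(IGraphBuilder *pGraph)
{
    IBaseFilter *pVMR = NULL;
    CoCreateInstance(CLSID_VideoMixingRenderer, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&pVMR);
    pGraph->AddFilter(pVMR, L"VMR-7");

    // Mixing must be enabled before the VMR's input pins are connected.
    IVMRFilterConfig *pConfig = NULL;
    pVMR->QueryInterface(IID_IVMRFilterConfig, (void**)&pConfig);
    pConfig->SetNumberOfStreams(2);     // up to 16 streams are supported
    pConfig->Release();

    // Both files now render onto the same (already added) VMR-7 instance.
    pGraph->RenderFile(L"C:\\clip1.avi", NULL);   // placeholder paths
    pGraph->RenderFile(L"C:\\clip2.avi", NULL);

    pVMR->Release();
}
```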
With the release of DirectX 9 comes another new video renderer, the VMR-9, which uses Direct3D 9 internally. The VMR-9 and VMR-7 are two distinct filters, and the VMR-9 is the more capable of the two. Note that, to preserve backward compatibility, the VMR-9 is registered with a low merit value and is not the system's default video renderer; if your application needs only basic video display control, it is recommended that you simply use the default video renderer of your platform.
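Because of its low merit value, intelligent connect will not pick up the VMR-9 on its own; the application has to put it into the graph before building. A minimal sketch (placeholder path, no error handling):

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

void RenderWithVMR9(IGraphBuilder *pGraph)
{
    IBaseFilter *pVMR9 = NULL;
    CoCreateInstance(CLSID_VideoMixingRenderer9, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void**)&pVMR9);
    pGraph->AddFilter(pVMR9, L"VMR-9");

    // Because the VMR-9 is already in the graph, intelligent connect prefers
    // it over the platform's default video renderer when rendering the file.
    pGraph->RenderFile(L"C:\\test.avi", NULL);   // placeholder path
    pVMR9->Release();
}
```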
The following are some frequently asked questions about using the Video Renderer:
1. When writing a DirectShow-based application, you will almost certainly use the IVideoWindow interface of the filter graph manager. This interface is actually implemented on the Video Renderer; the graph manager merely forwards the calls. Note that the methods of this interface can only be called after the Video Renderer has been connected; otherwise the calls will always fail.
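A minimal sketch of point 1 (hwndApp and the file path are placeholders; error handling omitted): the graph is built first, and only then are the IVideoWindow methods called.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

void AttachVideoToWindow(IGraphBuilder *pGraph, HWND hwndApp)
{
    // Build the graph first; only then is the Video Renderer connected.
    pGraph->RenderFile(L"C:\\test.avi", NULL);   // placeholder path

    IVideoWindow *pVW = NULL;
    pGraph->QueryInterface(IID_IVideoWindow, (void**)&pVW);

    // These calls would fail if made before the renderer was connected.
    pVW->put_Owner((OAHWND)hwndApp);
    pVW->put_WindowStyle(WS_CHILD | WS_CLIPSIBLINGS);
    RECT rc;
    GetClientRect(hwndApp, &rc);
    pVW->SetWindowPosition(0, 0, rc.right, rc.bottom);
    pVW->put_Visible(OATRUE);

    pVW->Release();
}
```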
2. Use IVideoWindow::put_FullScreenMode to switch to full-screen mode. On newer graphics cards, VR can simply stretch the image to fill the screen before displaying it, with little performance loss. If the graphics card does not handle this well, the filter graph manager automatically replaces VR with the Full Screen Renderer filter. In fact, when the application calls this interface method to switch to full-screen mode, the filter graph manager's control logic is as follows: first, prefer a video renderer in the graph that natively supports full-screen mode (determined via IVideoWindow::get_FullScreenMode); otherwise, use a video renderer that can stretch its image to full screen without much performance loss; otherwise, swap in the Full Screen Renderer filter; if all of the above fail, pick any video renderer in the graph that supports the IVideoWindow interface. Except on some older graphics cards, the second step generally succeeds.
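A minimal sketch of point 2: toggling full-screen mode through IVideoWindow; the filter graph manager decides internally which of the strategies listed above it uses to honour the request.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

void ToggleFullScreen(IGraphBuilder *pGraph)
{
    IVideoWindow *pVW = NULL;
    if (FAILED(pGraph->QueryInterface(IID_IVideoWindow, (void**)&pVW)))
        return;

    long mode = OAFALSE;
    pVW->get_FullScreenMode(&mode);

    // OATRUE enters full-screen mode, OAFALSE restores windowed playback.
    pVW->put_FullScreenMode(mode == OAFALSE ? OATRUE : OAFALSE);

    pVW->Release();
}
```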
3. Use IBasicVideo::GetCurrentImage to obtain the current image data. This method is unreliable on the Video Renderer: because VR uses DirectDraw acceleration, the call can fail, and VR must be in the paused state when the method is called. The VMR has no such restrictions. Therefore, if you want to grab a frame from a video stream displayed by the Video Renderer, it is recommended that you write an in-place transform filter and insert it directly upstream of the Video Renderer; this is easy to implement.
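A minimal sketch of point 3, showing the two-call pattern of IBasicVideo::GetCurrentImage with the graph paused first. As noted above, the call may still fail on the legacy Video Renderer when DirectDraw acceleration is in use; error handling is reduced to the essentials.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

BYTE* GrabCurrentFrame(IGraphBuilder *pGraph)   // caller frees with CoTaskMemFree
{
    IMediaControl *pMC = NULL;
    IBasicVideo   *pBV = NULL;
    pGraph->QueryInterface(IID_IMediaControl, (void**)&pMC);
    pGraph->QueryInterface(IID_IBasicVideo,   (void**)&pBV);

    pMC->Pause();                                // VR requires the paused state

    long cbBuffer = 0;
    BYTE *pDib = NULL;
    if (SUCCEEDED(pBV->GetCurrentImage(&cbBuffer, NULL)))        // 1st call: size
    {
        pDib = (BYTE*)CoTaskMemAlloc(cbBuffer);
        if (FAILED(pBV->GetCurrentImage(&cbBuffer, (long*)pDib)))  // 2nd call: data
        {
            CoTaskMemFree(pDib);
            pDib = NULL;
        }
        // On success pDib holds a BITMAPINFOHEADER followed by the DIB bits.
    }

    pBV->Release();
    pMC->Release();
    return pDib;
}
```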
4. Why is the output pin of a decoder sometimes automatically rendered to the Overlay Mixer 2 filter? Or, how do you get a decoder you have written yourself to connect to Overlay Mixer 2? This is determined mainly by the format type of the media types supported on the decoder's output pin. Note that Overlay Mixer 2 supports only FORMAT_VideoInfo2, while Overlay Mixer supports both FORMAT_VideoInfo and FORMAT_VideoInfo2; however, the Overlay Mixer's merit value is MERIT_DO_NOT_USE, so it is never added to a filter graph automatically.
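A minimal sketch of point 4, assuming a decoder written with the DirectShow base classes: offering a FORMAT_VideoInfo2 media type on the output pin is what allows the pin to connect to Overlay Mixer 2. The helper name, the width/height parameters and the YV12 subtype are illustrative assumptions; in a real CTransformFilter you would do this inside your GetMediaType override.

```cpp
#include <streams.h>     // DirectShow base classes (CMediaType)
#include <dvdmedia.h>    // VIDEOINFOHEADER2

HRESULT OfferVideoInfo2Type(int iPosition, CMediaType *pmt,
                            int width, int height)   // hypothetical helper
{
    if (iPosition < 0) return E_INVALIDARG;
    if (iPosition > 0) return VFW_S_NO_MORE_ITEMS;

    VIDEOINFOHEADER2 *pvi =
        (VIDEOINFOHEADER2*)pmt->AllocFormatBuffer(sizeof(VIDEOINFOHEADER2));
    if (!pvi) return E_OUTOFMEMORY;
    ZeroMemory(pvi, sizeof(VIDEOINFOHEADER2));

    pvi->bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    pvi->bmiHeader.biWidth       = width;
    pvi->bmiHeader.biHeight      = height;
    pvi->bmiHeader.biPlanes      = 1;
    pvi->bmiHeader.biBitCount    = 12;                       // YV12
    pvi->bmiHeader.biCompression = MAKEFOURCC('Y','V','1','2');
    pvi->bmiHeader.biSizeImage   = width * height * 12 / 8;
    pvi->dwPictAspectRatioX      = 4;                        // picture aspect ratio
    pvi->dwPictAspectRatioY      = 3;

    pmt->SetType(&MEDIATYPE_Video);
    pmt->SetSubtype(&MEDIASUBTYPE_YV12);
    pmt->SetFormatType(&FORMAT_VideoInfo2);   // the key point for Overlay Mixer 2
    pmt->SetTemporalCompression(FALSE);
    pmt->SetSampleSize(pvi->bmiHeader.biSizeImage);
    return S_OK;
}
```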