DirectShow Filter Development Notes (Author: Lu Qiming)


1. Filter Overview
A filter is a COM component consisting of one or more pins; a pin is also a COM component. A filter file usually has the extension .ax, although .dll also works. Based on the pins a filter has (that is, its position in the filter graph), filters fall roughly into three types: source filters (output pins only), transform filters (both input and output pins), and renderer filters (input pins only).

A filter is usually built as a plain Win32 DLL project, and filter projects generally do not use MFC. In that case the application creates filter instances with the CoCreateInstance function, and the filter and the application cooperate at the binary (COM) level. Alternatively, a filter can be built inside an MFC application project. The filter then does not need to be registered as a COM component, and the filter and the application cooperate at the source-code level. Instead of calling CoCreateInstance, the application creates a filter object directly, for example:
m_pFilterObject = new CFilterClass();
// Make the initial refcount 1 to match COM creation
m_pFilterObject->AddRef();
Because the filter base classes implement object reference counting, even in the second case the created filter object can be used entirely according to COM conventions.
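For comparison, here is a minimal sketch of the first (binary-level) approach, in which the registered filter is created through CoCreateInstance and added to a graph. CLSID_MyFilter and AddMyFilterToGraph are hypothetical names used only for illustration.

// Sketch of the binary-level approach: the filter is registered as a COM
// component and created with CoCreateInstance. CLSID_MyFilter is a
// placeholder for your own filter's class ID.
#include <dshow.h>

HRESULT AddMyFilterToGraph(IGraphBuilder *pGraph)
{
    IBaseFilter *pFilter = NULL;
    HRESULT hr = CoCreateInstance(CLSID_MyFilter, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IBaseFilter, (void **)&pFilter);
    if (FAILED(hr))
        return hr;

    hr = pGraph->AddFilter(pFilter, L"My Filter");   // add it to the graph
    pFilter->Release();                              // the graph holds its own reference
    return hr;
}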
A filter should be a self-contained functional module; it is best not to depend on other third-party DLLs. Because of COM's location transparency, a filter file can be stored anywhere on the hard disk, as long as it is re-registered after being moved. If the filter depends on another DLL, however, it also takes on the problem of locating that DLL, and location transparency is lost.

A filter cannot be used independently of a filter graph. If you want to bypass the filter graph and use the functionality implemented in your filter directly, port the filter to a DMO (DirectX Media Object). Also, DirectShow application developers should not forget to initialize COM, for example with OleInitialize.

2. Filter Registration
A filter is a COM component, so it must be registered before use. Filters are registered with regsvr32.exe. The command-line switch /u unregisters; /s suppresses the dialog box that reports whether registration or unregistration succeeded. If you want the filter to be registered automatically when the project is built, add the following settings on the Custom Build page of Project Settings in VC:
Description: Register filter
Commands: regsvr32 /s /c $(TargetPath)
          echo regsvr32 exec. time > $(TargetDir)\$(TargetName).trg
Outputs:  $(TargetDir)\$(TargetName).trg

A filter's registration information consists of basic COM information plus filter-specific information, both stored in the registry. The former goes under HKEY_CLASSES_ROOT\CLSID\{filter CLSID}\, and the latter under HKEY_CLASSES_ROOT\CLSID\{category CLSID}\Instance\{filter CLSID}\. The COM information marks the filter as a standard COM component that can be created through CoCreateInstance. The filter information is what tools such as GraphEdit display. If you do not want GraphEdit to see the filter you write (or do not want the filter enumerator to find it), simply do not register the filter information; doing so does not affect the filter's functionality at all.
Suppressing the registration of filter information is also very easy. CBaseFilter implements the two methods of the IAMovieSetup interface, Register and Unregister; we only need to override these two functions and return S_OK directly.
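A minimal sketch of this override, assuming a filter class CMyFilter derived from CBaseFilter (the class name is illustrative):

// Requires the DirectShow base classes.
#include <streams.h>

// Override IAMovieSetup so that no filter information is written to the
// registry; the COM registration of the class itself is unaffected.
class CMyFilter : public CBaseFilter
{
public:
    // ... constructor, pins, etc. ...
    STDMETHODIMP Register()   { return S_OK; }   // do not write filter info
    STDMETHODIMP Unregister() { return S_OK; }   // nothing to remove
};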

Filter information also includes the filter's merit value. This value is used by Microsoft's "intelligent connect" feature: in GraphEdit, when we add a source filter and execute "Render" on one of its pins, some filters get connected automatically. The defined merit values are:
MERIT_PREFERRED     = 0x800000,
MERIT_NORMAL        = 0x600000,
MERIT_UNLIKELY      = 0x400000,
MERIT_DO_NOT_USE    = 0x200000,
MERIT_SW_COMPRESSOR = 0x100000,
MERIT_HW_COMPRESSOR = 0x100050
Only filters whose merit value is greater than MERIT_DO_NOT_USE are considered by intelligent connect, and the larger the merit value, the greater the chance that the filter will be tried.
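The merit value is normally supplied in the AMOVIESETUP_FILTER structure used by the base-class registration code. A sketch, with illustrative filter and pin descriptions (CLSID_MyFilter is a placeholder):

// Illustrative filter setup data; the merit value here keeps the filter out
// of intelligent connect.
const AMOVIESETUP_MEDIATYPE sudPinTypes =
    { &MEDIATYPE_Video, &MEDIASUBTYPE_NULL };

const AMOVIESETUP_PIN sudPins[] =
{
    { L"Input",  FALSE, FALSE, FALSE, FALSE, &CLSID_NULL, NULL, 1, &sudPinTypes },
    { L"Output", FALSE, TRUE,  FALSE, FALSE, &CLSID_NULL, NULL, 1, &sudPinTypes }
};

const AMOVIESETUP_FILTER sudMyFilter =
{
    &CLSID_MyFilter,        // filter CLSID (placeholder)
    L"My Filter",           // filter name shown in GraphEdit
    MERIT_DO_NOT_USE,       // merit: excluded from intelligent connect
    2,                      // number of pins
    sudPins                 // pin descriptions
};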

3. Pin Connection Process Between Filters
A filter can only be used after it has been added to a filter graph and connected with other filters into a complete chain. The connection between filters (that is, between pins) is essentially a media-type negotiation between the two sides, and the connection direction is always from an output pin to an input pin. The general procedure is as follows. If a complete media type was specified when the connection function was called, that media type is used; if the connection succeeds with it, the process ends. If no media type, or only a partially specified one, was given, the following enumeration takes place. First, all media types on the input pin being connected are enumerated and tried against the output pin one by one (if the connection function supplied a partial media type, each enumerated type must first match it); if the output pin accepts one of them, the pin connection is declared successful. If none of the input pin's media types is accepted by the output pin, all media types on the output pin are enumerated and tried against the input pin one by one; if the input pin accepts one of them, the connection also succeeds. If none of the output pin's media types is accepted either, the connection between the two pins fails.

Each pin can implement the GetMediaType function to provide all the preferred media types it supports (but this is usually implemented only on output pins; input pins mainly implement CheckMediaType to verify whether the media type currently offered is acceptable). The media types enumerated from a pin during connection all come from here.
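A minimal sketch of these two functions on a custom output pin (CMyOutputPin is an illustrative class name), assuming the pin offers a single RGB24 video type:

// Offer one preferred media type on the output pin (index 0 only).
HRESULT CMyOutputPin::GetMediaType(int iPosition, CMediaType *pMediaType)
{
    if (iPosition < 0) return E_INVALIDARG;
    if (iPosition > 0) return VFW_S_NO_MORE_ITEMS;   // only one type offered

    pMediaType->SetType(&MEDIATYPE_Video);
    pMediaType->SetSubtype(&MEDIASUBTYPE_RGB24);
    pMediaType->SetFormatType(&FORMAT_VideoInfo);
    // ... fill in the VIDEOINFOHEADER format block here ...
    return S_OK;
}

// Accept only the kind of type we offer.
HRESULT CMyOutputPin::CheckMediaType(const CMediaType *pmt)
{
    if (*pmt->Type() == MEDIATYPE_Video && *pmt->Subtype() == MEDIASUBTYPE_RGB24)
        return S_OK;
    return VFW_E_TYPE_NOT_ACCEPTED;
}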

The CBasePin class has a protected member variable m_bTryMyTypesFirst, whose default value is false. Setting this variable to true in the output pin of a custom filter customizes the connection process: the media types on the output pin are enumerated first.

When a pin connection succeeds, the CompleteConnect function is called on the pin. Here we can obtain media-type information about the connection and perform some calculations. In the output pin's implementation of CompleteConnect, another important task is to negotiate the memory allocator used for sample delivery once the filter graph runs. This is also an interactive process: the output pin first asks the input pin for its allocator requirements. If the input pin provides an allocator, that allocator is used preferentially; otherwise, the allocator created by the output pin is used. We usually need to implement DecideBufferSize to determine the size of the buffers that store the samples. Note: when negotiation finishes, the actual memory has not yet been allocated; allocation happens when the Active function is called on the output pin.
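A minimal sketch of DecideBufferSize on a filter derived from CTransformFilter (the single-buffer request is just an example value):

// Ask the allocator for one buffer large enough for one output sample.
HRESULT CMyFilter::DecideBufferSize(IMemAllocator *pAlloc,
                                    ALLOCATOR_PROPERTIES *pProp)
{
    pProp->cBuffers = 1;                        // number of buffers (example)
    pProp->cbBuffer = m_pOutput->CurrentMediaType().GetSampleSize();
    if (pProp->cbAlign == 0) pProp->cbAlign = 1;

    ALLOCATOR_PROPERTIES actual;
    HRESULT hr = pAlloc->SetProperties(pProp, &actual);
    if (FAILED(hr)) return hr;

    // The allocator may give us less than we asked for; check it.
    return (actual.cBuffers < 1 || actual.cbBuffer < pProp->cbBuffer)
               ? E_FAIL : S_OK;
}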

4. Overview of Filter Media Types
A media type is expressed either as AM_MEDIA_TYPE or as CMediaType; the former is a struct, the latter a class that inherits from that struct.
Each media type consists of three parts: major type, subtype, and format type, each uniquely identified by a GUID. The major type describes the broad kind of media, such as video, audio, or a byte stream. The subtype refines it further; for video, it can specify UYVY, YUY2, RGB24, RGB32, and so on. The format type identifies a struct that refines the media type further still.
If all three parts of a media type are set to specific GUID values, the media type is completely specified. If any of the three parts is GUID_NULL, the media type is only partially specified; GUID_NULL acts as a wildcard.
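A small sketch of completely specifying a media type with CMediaType, assuming a 320x240 RGB24 video format (the dimensions are illustrative):

// Build a fully specified media type: major type, subtype, and format type
// all carry concrete GUIDs, plus a VIDEOINFOHEADER format block.
CMediaType mt;
mt.SetType(&MEDIATYPE_Video);
mt.SetSubtype(&MEDIASUBTYPE_RGB24);
mt.SetFormatType(&FORMAT_VideoInfo);

VIDEOINFOHEADER *pvi =
    (VIDEOINFOHEADER *)mt.AllocFormatBuffer(sizeof(VIDEOINFOHEADER));
ZeroMemory(pvi, sizeof(VIDEOINFOHEADER));
pvi->bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
pvi->bmiHeader.biWidth       = 320;
pvi->bmiHeader.biHeight      = 240;
pvi->bmiHeader.biPlanes      = 1;
pvi->bmiHeader.biBitCount    = 24;
pvi->bmiHeader.biCompression = BI_RGB;

mt.SetSampleSize(pvi->bmiHeader.biWidth * pvi->bmiHeader.biHeight * 3);
mt.SetTemporalCompression(FALSE);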

Common major types:
MEDIATYPE_Video
MEDIATYPE_Audio
MEDIATYPE_AnalogVideo    // analog capture
MEDIATYPE_AnalogAudio
MEDIATYPE_Text
MEDIATYPE_Midi
MEDIATYPE_Stream
MEDIATYPE_Interleaved    // DV camcorder
MEDIATYPE_MPEG1SystemStream
MEDIATYPE_MPEG2_PACK
MEDIATYPE_MPEG2_PES
MEDIATYPE_DVD_ENCRYPTED_PACK
MEDIATYPE_DVD_NAVIGATION

Common subtypes:
MEDIASUBTYPE_YUY2
MEDIASUBTYPE_YVYU
MEDIASUBTYPE_YUYV
MEDIASUBTYPE_UYVY
MEDIASUBTYPE_YVU9
MEDIASUBTYPE_Y411
MEDIASUBTYPE_RGB4
MEDIASUBTYPE_RGB8
MEDIASUBTYPE_RGB565
MEDIASUBTYPE_RGB555
MEDIASUBTYPE_RGB24
MEDIASUBTYPE_RGB32
MEDIASUBTYPE_ARGB32    // contains an alpha channel
MEDIASUBTYPE_Overlay
MEDIASUBTYPE_MPEG1Packet
MEDIASUBTYPE_MPEG1Payload         // video payload
MEDIASUBTYPE_MPEG1AudioPayload    // audio payload
MEDIASUBTYPE_MPEG1System          // A/V payload
MEDIASUBTYPE_MPEG1VideoCD
MEDIASUBTYPE_MPEG1Video
MEDIASUBTYPE_MPEG1Audio
MEDIASUBTYPE_Avi
MEDIASUBTYPE_Asf
MEDIASUBTYPE_QTMovie
MEDIASUBTYPE_PCM
MEDIASUBTYPE_WAVE
MEDIASUBTYPE_dvsd    // DV
MEDIASUBTYPE_dvhd
MEDIASUBTYPE_dvsl
MEDIASUBTYPE_MPEG2_VIDEO
MEDIASUBTYPE_MPEG2_PROGRAM
MEDIASUBTYPE_MPEG2_TRANSPORT
MEDIASUBTYPE_MPEG2_AUDIO
MEDIASUBTYPE_DOLBY_AC3
MEDIASUBTYPE_DVD_SUBPICTURE
MEDIASUBTYPE_DVD_LPCM_AUDIO
MEDIASUBTYPE_DVD_NAVIGATION_PCI
MEDIASUBTYPE_DVD_NAVIGATION_DSI
MEDIASUBTYPE_DVD_NAVIGATION_PROVIDER

Common format types (format GUID and the struct it identifies):
FORMAT_None          (no format block)
FORMAT_DvInfo        DVINFO
FORMAT_MPEGVideo     MPEG1VIDEOINFO
FORMAT_MPEG2Video    MPEG2VIDEOINFO
FORMAT_VideoInfo     VIDEOINFOHEADER
FORMAT_VideoInfo2    VIDEOINFOHEADER2
FORMAT_WaveFormatEx  WAVEFORMATEX

5. Data Transmission Between Filters
Data between filters is delivered in samples. A sample is a COM component that carries its own data buffer, and samples are managed collectively by an allocator.

Data transfer between filters occurs in one of two modes: push mode or pull mode. In push mode, the source filter generates data itself and has a separate worker thread on its output pin that sends the data; a typical example is a live source filter for a WDM capture card. In pull mode, the source filter cannot deliver data on its own; it is usually followed by a parser filter or splitter filter, which generally runs a separate worker thread on its input pin that keeps fetching data from the source filter, processes it, and passes it downstream. A typical example is a file source. In push mode the source filter is active; in pull mode it is passive. In fact, if you regard the source filter plus the splitter filter in pull mode as a single virtual source filter, data transfer between filters is exactly the same as in push mode.

So how is data actually passed across a connected pair of pins? Consider push mode first. The input pin of the filter downstream of the source filter must implement the IMemInputPin interface; the upstream filter delivers data by calling this interface's Receive method. It is worth noting that passing data from the output pin to the input pin through Receive involves no memory copy; it is merely a "notification" that data has arrived. Now consider pull mode. The output pin of the source filter must implement the IAsyncReader interface; the splitter filter obtains data by calling this interface's Request or SyncRead method. The splitter filter then, just as in push mode, calls the Receive method of the IMemInputPin interface on the next filter's input pin to pass the data downstream.
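A minimal sketch of push-mode delivery from a custom output pin's worker thread, assuming the pin derives from CBaseOutputPin; FillFrame is a hypothetical helper that writes data into the buffer:

// One iteration of a push-mode worker thread: get an empty sample from the
// allocator, fill it, and deliver it to the downstream input pin.
HRESULT CMyOutputPin::DeliverOneFrame()
{
    IMediaSample *pSample = NULL;
    // GetDeliveryBuffer blocks until the allocator has a free sample.
    HRESULT hr = GetDeliveryBuffer(&pSample, NULL, NULL, 0);
    if (FAILED(hr)) return hr;

    BYTE *pData = NULL;
    pSample->GetPointer(&pData);
    long cb = pSample->GetSize();
    FillFrame(pData, cb);                 // hypothetical: produce the data
    pSample->SetActualDataLength(cb);

    // Deliver() calls IMemInputPin::Receive on the connected input pin.
    hr = Deliver(pSample);
    pSample->Release();
    return hr;
}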

A DirectShow application has at least two threads: the application (main) thread and the streaming thread(s) the filters use to deliver data. With multiple threads, synchronization issues inevitably arise. Filter state changes happen on the application thread, while the filter's data-related calls happen on the streaming thread. The main calls made on each thread are as follows:
Streaming thread(s): IMemInputPin::Receive, IMemInputPin::ReceiveMultiple, IPin::EndOfStream, IMemAllocator::GetBuffer.
Application thread: IMediaFilter::Pause, IMediaFilter::Run, IMediaFilter::Stop, IMediaSeeking::SetPositions, IPin::BeginFlush, IPin::EndFlush.
Either thread: IPin::NewSegment.
These calls must not be mixed across threads carelessly, or thread deadlocks may occur. It is also worth noting that BeginFlush and EndFlush are called by the application thread, not by the streaming thread.

6. Differences Between the Transform Filter and the Trans-In-Place Filter
First, the two filters have much in common, because the trans-in-place filter class itself inherits from the transform filter class. Next, understand that the trans-in-place filter "tries its best" to make its input pin and output pin use the same allocator, so that the sample data does not have to be copied (memcpy) one more time. We say "tries its best" because the trans-in-place filter may not achieve this intention. (If the allocator used by the trans-in-place filter is read-only while the filter needs to modify the sample data, the input pin and output pin of the trans-in-place filter must use different allocators.)
The trans-in-place filter has a protected member variable m_bModifiesData, whose default value is true. If you are sure that your custom trans-in-place filter does not need to modify the sample data, set m_bModifiesData to false to help ensure that the input pin and output pin use the same allocator.

The implementation of the trans-in-place filter is mainly reflected in three functions: CTransInPlaceFilter::CompleteConnect, CTransInPlaceInputPin::GetAllocator, and CTransInPlaceInputPin::NotifyAllocator. CompleteConnect issues a reconnect when necessary to ensure that the input pin and output pin of the trans-in-place filter use the same media type. GetAllocator obtains the allocator from the input pin of the filter downstream of the trans-in-place filter. NotifyAllocator "tries" to make the input pin and output pin of the trans-in-place filter use the same allocator.
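For a custom trans-in-place filter, the per-sample work usually goes into the Transform override, which modifies the buffer in place; a minimal sketch (inverting the bytes is just an illustrative operation):

// In-place processing: the same buffer delivered by the upstream filter is
// modified and then passed downstream without copying.
HRESULT CMyInPlaceFilter::Transform(IMediaSample *pSample)
{
    BYTE *pData = NULL;
    HRESULT hr = pSample->GetPointer(&pData);
    if (FAILED(hr)) return hr;

    long cb = pSample->GetActualDataLength();
    for (long i = 0; i < cb; i++)
        pData[i] = (BYTE)(255 - pData[i]);   // example operation: invert bytes

    return S_OK;
}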

7. Implementation of IMediaSeeking
IMediaSeeking is implemented on filters, but the application should obtain this interface from the filter graph manager. At the filter level, the filter graph manager starts at the renderer filter and asks whether the output pin of the upstream filter supports the IMediaSeeking interface; if it does, that interface is returned. If not, it keeps asking further upstream until the source filter is reached. Normally the IMediaSeeking interface is implemented on the output pin of the source filter. (For a file source, the interface is usually implemented on the parser filter or splitter filter.) For filter developers: if you write a source filter, implement the IMediaSeeking interface on the filter's output pin; if you write a transform filter, its output pin only needs to pass the interface request on to the output pin of the upstream filter; if you write a renderer filter, the filter needs to pass the interface request on to the output pin of the upstream filter. Note: to keep the streams synchronized after a seek, if the filter that implements IMediaSeeking has multiple output pins, only one pin should support seeking. If a custom transform filter has multiple input pins, you need to decide which path the request continues along when the output pin receives an IMediaSeeking request.

The application can perform seek operations on the filter graph at any time (running, paused, or stopped). However, if the filter graph is running, the filter graph manager pauses it, performs the seek, and then runs it again.

IMediaSeeking supports the following seek time formats:
TIME_FORMAT_FRAME        video frames
TIME_FORMAT_SAMPLE       samples in the stream
TIME_FORMAT_FIELD        interlaced video fields
TIME_FORMAT_BYTE         byte offset within the stream
TIME_FORMAT_MEDIA_TIME   reference time (100-nanosecond units)
However, a filter that implements this interface does not necessarily support all of these formats. Filters generally support TIME_FORMAT_MEDIA_TIME; when using any other format, it is best to call IMediaSeeking::IsFormatSupported first to confirm.
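A minimal application-side sketch of seeking through the filter graph manager, assuming pGraph is an already-built IGraphBuilder; the 5-second target is illustrative:

// Query IMediaSeeking from the filter graph manager and seek to 5 seconds.
IMediaSeeking *pSeeking = NULL;
HRESULT hr = pGraph->QueryInterface(IID_IMediaSeeking, (void **)&pSeeking);
if (SUCCEEDED(hr))
{
    // Reference time is in 100-ns units: 5 s = 50,000,000 units.
    LONGLONG llPos = 5 * 10000000LL;
    hr = pSeeking->SetPositions(&llPos, AM_SEEKING_AbsolutePositioning,
                                NULL,  AM_SEEKING_NoPositioning);
    pSeeking->Release();
}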

Filters are not recommended to implement the IMediaPosition interface. IMediaPosition exists to support Automation (for example, using DirectShow from VB); IMediaSeeking does not support Automation.

8. Filter State Transitions
A filter has three states: Stopped, Paused, and Running. Paused is an intermediate state: the transition from Stopped to Running must pass through Paused. Paused can be understood as a data-ready state, designed so that the switch to Running can happen quickly. In the Paused state, the streaming thread is already started, but it is blocked by the renderer filter.

The transition between Paused and Running is negligible for source filters and transform filters, but renderer filters (especially the video renderer and audio renderer) behave slightly differently. The renderer first processes the sample it was holding in the Paused state. When a new sample arrives, the renderer checks its timestamp; if the presentation time has not yet arrived, the renderer holds the sample and waits.

The filter graph manager changes filter states from downstream to upstream, that is, it works back from the renderer filter toward the source filter. This order effectively avoids losing samples and deadlocking the filter graph.
Stopped to Paused: starting from the renderer, each filter switches to Paused. During this transition the filter calls the Active function of all its pins for initialization (pins generally allocate sample memory in Active; a source filter also starts its streaming thread here), putting the filter into a ready state. The source filter is the last filter to reach the ready state; it then starts its streaming thread and delivers samples downstream. When the renderer receives the first sample, it holds it and blocks. Only when all renderers have changed state does the filter graph manager consider the state transition complete.
Paused to Stopped: when a filter enters the Stopped state, it calls the Inactive function of all its pins (pins generally release sample memory in Inactive; a source filter terminates its streaming thread here). Releasing all held samples unblocks any upstream filter waiting in GetBuffer, and terminating all waits in Receive lets the upstream filter's Receive call return. A filter in the Stopped state rejects all samples. In this way, starting from the renderer filter, each filter is decoupled from its upstream neighbor; when the transition reaches the source filter, its streaming thread can be safely terminated.

9. EndOfStream Issues
When the source filter has delivered all its data, it calls IPin::EndOfStream on the input pin of the next filter, and the call propagates downstream until it reaches the renderer filter. When all input pins of the renderer filter have received EndOfStream, it sends an EC_COMPLETE event to the filter graph manager. The filter graph manager forwards the event to the application only after every stream in the filter graph has sent EC_COMPLETE.

In a custom filter, receiving EndOfStream from the upstream filter means that all upstream data has been delivered and the Receive method will get no more data. If we buffer data internally, we must make sure all buffered data is processed and delivered downstream before calling EndOfStream on the next filter. In pull mode, EndOfStream is usually issued by the splitter filter or parser filter and propagates downstream only; the source filter does not receive such a notification.
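A minimal sketch of handling this in a filter derived from CTransformFilter that buffers data internally; FlushPendingData is a hypothetical helper that delivers whatever is still buffered:

// Deliver any internally buffered data before propagating EndOfStream.
HRESULT CMyFilter::EndOfStream()
{
    FlushPendingData();                      // hypothetical: deliver buffered samples

    if (m_pOutput != NULL)
        return m_pOutput->DeliverEndOfStream();   // propagate downstream
    return S_OK;
}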

10. BeginFlush, EndFlush, and NewSegment Issues
In the typical case, BeginFlush and EndFlush are called after a media seek. These two functions are generally implemented on input pins.
For filter developers, when BeginFlush is called on your input pin you must do the following (a sketch of both functions appears after these lists):
· Call BeginFlush on the downstream filter so that it no longer accepts new samples;
· Refuse data from the upstream filter, including Receive calls and EndOfStream calls;
· If the upstream filter is blocked waiting for an empty sample, release it from the block (through the allocator);
· Make sure the streaming thread comes out of any blocked state.
When EndFlush is called, you must do the following:
· Make sure all samples waiting in queues are discarded;
· Make sure any data cached inside the filter is discarded;
· Clear any EC_COMPLETE notification that has not yet been sent (if this is a rendered input pin);
· Call EndFlush on the downstream filter.
One more point: if your custom filter stamps a time stamp on each sample, remember that after a media seek the sample time stamps should restart from 0.
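A minimal sketch of these two functions on a custom input pin; m_pMyFilter, m_pOutputPin, and DiscardPendingData are illustrative names, while m_csFilter, m_bFlushing, and the Deliver* helpers come from the DirectShow base classes:

// BeginFlush: stop accepting new data, then propagate the flush downstream.
HRESULT CMyInputPin::BeginFlush()
{
    CAutoLock lock(&m_pMyFilter->m_csFilter);
    // The base class sets m_bFlushing, so Receive and EndOfStream are rejected.
    HRESULT hr = CBaseInputPin::BeginFlush();
    if (FAILED(hr)) return hr;
    // Propagate the flush to the downstream filter's input pin.
    return m_pMyFilter->m_pOutputPin->DeliverBeginFlush();
}

// EndFlush: discard anything buffered, propagate downstream, then clear the flag.
HRESULT CMyInputPin::EndFlush()
{
    CAutoLock lock(&m_pMyFilter->m_csFilter);
    m_pMyFilter->DiscardPendingData();              // hypothetical: drop buffered data
    HRESULT hr = m_pMyFilter->m_pOutputPin->DeliverEndFlush();
    CBaseInputPin::EndFlush();                      // clears m_bFlushing
    return hr;
}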

A segment is a group of samples that share the same playback rate over a period of the stream; the NewSegment call marks the start of such a segment. NewSegment is generally issued by the source filter (in push mode) or the parser/splitter filter (in pull mode) when a new stream starts or after the user performs a media seek, and it is propagated filter by filter down to the renderer filter.
We can make use of the information delivered with NewSegment in our custom filters, which is especially useful for decoders. The audio renderer is also a typical example: it drives the sound card based on the playback rate together with the actual audio sampling frequency.

11. Quality Control Issues
Data transmission between filters sometimes runs too fast or too slow. DirectShow handles this with quality control, namely the two methods of the IQualityControl interface (SetSink and Notify). The renderer filter generally implements this interface on the filter itself, while other filters implement it on their output pins. Quality messages travel upstream: the filter that can actually adjust the delivery rate, generally the source filter (or the parser/splitter filter in pull mode), acts on them, while a transform filter simply passes the quality message on to the upstream filter.
The application can implement its own quality-control manager and install it on a filter by calling the SetSink method. The flow then changes: quality messages are sent directly to the custom manager. This is not the usual arrangement, however. It is also worth noting that the concrete quality-control behavior depends on the actual filter; it may adjust the delivery rate or drop part of the data, so use quality messages with caution.

12. Support for Media Type Changes at Run Time
Looking at CBaseInputPin::Receive, we can see that each time before accepting a sample, the input pin calls CheckStreaming (rejecting the sample if the filter is stopped, flushing, or in a runtime-error state) and saves the current sample properties into the protected member variable m_SampleProps. The sample properties are described by the AM_SAMPLE2_PROPERTIES structure, which has a flag indicating whether the media type of the current sample has changed. If the media type has changed, the pin calls CheckMediaType to see whether our filter still supports it; if not, a runtime error is raised and EndOfStream is sent.
A robust filter should be able to handle media type changes at run time. In our Receive implementation, we can test if (pProps->dwSampleFlags & AM_SAMPLE_TYPECHANGED) to determine whether the media type has changed; if it has, we perform whatever re-initialization the new media type requires.
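A minimal sketch of this check inside a custom Receive override on an input pin derived from CBaseInputPin (CMyInputPin and ReinitializeForMediaType are illustrative names):

// Detect a dynamic media type change on the incoming sample and re-initialize.
HRESULT CMyInputPin::Receive(IMediaSample *pSample)
{
    HRESULT hr = CBaseInputPin::Receive(pSample);  // checks streaming, fills m_SampleProps
    if (hr != S_OK) return hr;

    AM_SAMPLE2_PROPERTIES *pProps = SampleProps();
    if (pProps->dwSampleFlags & AM_SAMPLE_TYPECHANGED)
    {
        // The new media type travels with the sample in pProps->pMediaType.
        ReinitializeForMediaType(pProps->pMediaType);  // hypothetical helper
    }

    // ... hand the sample to the filter's processing code here ...
    return S_OK;
}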

A typical case: with camcorder input, the audio media type may change. For example, the connection is negotiated with MEDIASUBTYPE_PCM but MEDIASUBTYPE_WAVE arrives at run time; or the audio sampling rate is 44.1 kHz at connection time but becomes 48 kHz at run time; or the camcorder tape itself contains a mix of 44.1 kHz and 48 kHz audio.
