DirectShow Technology Introduction (lengthy)-5

Source: Internet
Author: User

3.4. Data flow in the Filter graph

This section describes how media data flows through the filter graph. If you are only writing DirectShow applications, you do not strictly need to know these details, although knowing them is still helpful. If you want to write a DirectShow filter, however, you must master this material.

3.4.1. DirectShow Data Flow Overview

In this section we first give a brief overview of how data flows in DirectShow.

The data is first stored in buffers, where it is simply an array of bytes. Each buffer is contained in a COM object called a media sample, which exposes the IMediaSample interface. Media samples are created by another COM object called the allocator, which exposes the IMemAllocator interface. Each pin connection is assigned an allocator, although two or more pin connections can also share the same allocator.

Each allocator creates a pool of media samples and allocates a buffer for each sample. Whenever a filter needs a buffer to fill with data, it calls the IMemAllocator::GetBuffer method to request a sample. As long as the allocator has a sample that is not in use by any filter, GetBuffer returns immediately with a pointer to that sample. If all of the allocator's samples are in use, the method blocks until a sample becomes available. After GetBuffer returns a sample, the filter writes data into the sample's buffer, sets the appropriate properties on the sample (such as a timestamp), and then delivers it to the next filter.
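A rough sketch (not from the original article) of this request-fill-deliver cycle on an output pin, assuming the pin obtained the allocator and the downstream pin's IMemInputPin when it was connected; the buffer contents below are only a placeholder:

#include <dshow.h>

HRESULT DeliverOneSample(IMemAllocator *pAllocator,
                         IMemInputPin  *pDownstreamInput,
                         REFERENCE_TIME rtStart,
                         REFERENCE_TIME rtStop)
{
    IMediaSample *pSample = NULL;

    // Blocks until a free sample is available in the allocator's pool.
    HRESULT hr = pAllocator->GetBuffer(&pSample, NULL, NULL, 0);
    if (FAILED(hr))
        return hr;

    BYTE *pData = NULL;
    hr = pSample->GetPointer(&pData);              // the underlying byte buffer
    if (SUCCEEDED(hr))
    {
        long cb = pSample->GetSize();              // capacity of the buffer
        ZeroMemory(pData, cb);                     // placeholder: a real filter writes media data here
        pSample->SetActualDataLength(cb);
        pSample->SetTime(&rtStart, &rtStop);       // timestamp the sample

        // Deliver to the downstream input pin; Receive may block.
        hr = pDownstreamInput->Receive(pSample);
    }

    pSample->Release();   // drop our reference; the sample returns to the pool
                          // only when no filter holds a reference any more
    return hr;
}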

When a renderer filter receives a sample, it checks the timestamp and holds the sample until the filter graph's reference clock indicates that the data should be rendered. Once the filter has rendered the data, it releases the sample. The sample does not return to the allocator's pool immediately; it returns only when its reference count drops to zero, indicating that every filter has released it.

The upstream filters may run ahead of the renderer, meaning they may fill buffers faster than the renderer consumes them. However, the samples will not be rendered too early, because the renderer holds them until the proper time to render; nor will the upstream filters accidentally overwrite their buffers, because GetBuffer only returns samples that are not in use. The number of samples an upstream filter can work ahead on is bounded by the number of samples in the allocator's pool.

The previous diagram shows only one allocator, but typically there are several allocators in each stream. Consequently, when the renderer releases a sample, a cascade effect occurs. As the following illustration shows, a decoder holding a compressed video frame waits for the renderer to release a sample, while the parser filter waits for the decoder to release a sample.

When the renderer releases a sample, the decoder's pending GetBuffer call completes. The decoder then decodes the compressed video frame and releases the sample it was holding, allowing the parser to complete its own GetBuffer call.

3.4.2. Transport Protocols (Transports)

For media data to move through the filter graph, DirectShow filters must support one of several protocols known as transports. When two filters connect, they must agree on the same transport; otherwise they cannot exchange data. Typically, a transport requires one pin to support a particular interface, which the other pin calls once the filters are connected.

Most DirectShow filters hold media data in main memory and deliver it to other filters across pin connections; this is called the local memory transport. Although this transport is the most common one in DirectShow, not every filter uses it. For example, some filters send data along a hardware path and use pins only to pass control information, such as through the IOverlay interface.

DirectShow defines two mechanisms for the local memory transport: push mode and pull mode. In push mode, the source filter generates the data and delivers it to the downstream filter; the downstream filter passively receives the data, processes it, and passes it on to its own downstream filter. In pull mode, the source filter is connected to a parser filter; the parser requests data from the source, and the source responds to each request by delivering the data. Push mode uses the IMemInputPin interface, while pull mode uses the IAsyncReader interface.

Push mode is more widely used.
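As a rough illustration of pull mode, the sketch below assumes a parser input pin that has obtained the IAsyncReader interface from the source filter's output pin and performs a blocking read:

#include <dshow.h>

HRESULT ReadChunk(IAsyncReader *pReader, LONGLONG llPos, LONG lLength, BYTE *pBuffer)
{
    // The parser drives the data flow: it asks the source for the bytes it
    // needs instead of waiting for the source to push samples to it.
    return pReader->SyncRead(llPos, lLength, pBuffer);
}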

3.4.3. Media Samples and Allocators

When a pin delivers media data to another pin, it does not pass a raw pointer to a memory buffer. Instead, it passes a pointer to a COM object that manages the buffer, called a media sample, which exposes the IMediaSample interface. The receiving pin accesses the buffer by calling IMediaSample methods such as IMediaSample::GetPointer, IMediaSample::GetSize, and IMediaSample::GetActualDataLength.
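For example, a receiving pin might examine a delivered sample along these lines (a minimal sketch, assuming pSample arrived through Receive):

#include <dshow.h>

HRESULT InspectSample(IMediaSample *pSample)
{
    BYTE *pData = NULL;
    HRESULT hr = pSample->GetPointer(&pData);         // start of the byte array
    if (FAILED(hr))
        return hr;

    long cbCapacity = pSample->GetSize();             // total size of the buffer
    long cbData     = pSample->GetActualDataLength(); // bytes actually filled

    // ... process cbData bytes at pData, never reading past cbCapacity ...
    return S_OK;
}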

Samples always travel downstream, from an output pin to an input pin. In push mode, the output pin delivers a sample by calling the IMemInputPin::Receive method on the input pin. The input pin either processes the data synchronously inside Receive or starts a worker thread to process it asynchronously. The input pin is allowed to block inside Receive if it needs to wait for resources.

Another COM object, the allocator, manages the media samples and exposes the IMemAllocator interface. Whenever a filter needs a free media sample, it calls IMemAllocator::GetBuffer to obtain a pointer to one. Each pin connection shares one allocator; when two pins connect, they negotiate which filter will provide it. The pins can set the allocator's properties, such as the number of buffers and the size of each buffer.
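A sketch of that property negotiation: an output pin might request a pool configuration through ALLOCATOR_PROPERTIES and then check what the allocator actually granted (the numbers below are arbitrary examples):

#include <dshow.h>

HRESULT ConfigureAllocator(IMemAllocator *pAllocator)
{
    ALLOCATOR_PROPERTIES request = {0};
    ALLOCATOR_PROPERTIES actual  = {0};

    request.cBuffers = 4;         // number of samples in the pool
    request.cbBuffer = 64 * 1024; // size of each buffer, in bytes
    request.cbAlign  = 1;         // buffer alignment
    request.cbPrefix = 0;         // prefix bytes before each buffer

    // The allocator reports what it actually granted in 'actual'; the pin
    // should check that this still meets its requirements.
    HRESULT hr = pAllocator->SetProperties(&request, &actual);
    if (SUCCEEDED(hr) && actual.cbBuffer < request.cbBuffer)
        hr = E_FAIL;
    return hr;
}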

The following figure shows the relationship between the allocator, media samples, and filters:

Media Sample Reference Counts

An allocator creates a pool containing a limited number of samples. At any moment, some samples are in use and others are available to GetBuffer. The allocator tracks samples with reference counts: GetBuffer returns a sample with a reference count of 1, and when the count drops to 0 the sample returns to the allocator's pool, where GetBuffer can hand it out again. While the reference count is greater than 0, the sample is not available to GetBuffer. If every sample belonging to the allocator is in use, GetBuffer blocks until one becomes available.

For example, suppose an input pin receives a sample. If it processes the sample synchronously inside Receive, it does not increment the reference count; when Receive returns, the output pin releases the sample, the count drops to 0, and the sample returns to the pool. In the other case, if the input pin processes the sample asynchronously, it increments the reference count before Receive returns, making the count 2. When the output pin releases the sample, the count drops to 1, and the sample cannot return to the pool until the worker thread finishes its asynchronous work and calls Release, bringing the count to 0.
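The sketch below illustrates that reference-count bookkeeping for a hypothetical input pin that queues samples for a worker thread rather than processing them inside Receive (the queue itself is omitted):

#include <dshow.h>

// Called on the upstream filter's streaming thread, from inside Receive.
HRESULT QueueForWorker(IMediaSample *pSample)
{
    pSample->AddRef();    // count goes from 1 to 2: this pin now holds it too
    // ... push pSample onto a thread-safe queue for the worker (omitted) ...
    return S_OK;          // Receive returns; the upstream pin calls Release,
                          // leaving the count at 1 until the worker finishes
}

// Called later, on the worker thread, once processing is finished.
void WorkerDone(IMediaSample *pSample)
{
    // ... process the sample's data (omitted) ...
    pSample->Release();   // count drops to 0; the sample returns to the pool
}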

When a pin receives a sample, it can copy the data into another sample, or modify the original sample and pass it to the next filter. A sample may travel the whole length of the graph, with each filter calling AddRef and Release in turn. The output pin must therefore never reuse the same sample after calling Receive, because a downstream filter may still be using it; it must call GetBuffer to obtain a new sample.

This mechanism reduces the total amount of memory allocation, because filters reuse the same buffers. It also prevents data from being accidentally overwritten before it has been processed.

If a filter's processing makes the data larger (as when decoding), the filter can assign different allocators to its input and output pins. If the output data is no larger than the input data, the filter can process the data in place without copying it into a new sample; in that case two or more pin connections share one allocator.

Committing (Commit) and Decommitting (Decommit) the Allocator

When a filter first creates an allocator, the allocator has not yet allocated any memory buffers, and any GetBuffer call fails. When the stream starts flowing, the output pin calls IMemAllocator::Commit to commit the allocator, which then allocates its memory. From that point on, the pins can call GetBuffer.

When the stream stops, the pin calls IMemAllocator::Decommit to decommit the allocator. All subsequent GetBuffer calls fail until the allocator is committed again, and any GetBuffer call currently blocked waiting for a sample returns a failure code immediately. Whether Decommit actually frees the memory depends on the implementation; the CMemAllocator class, for example, does not free the memory until its destructor runs.
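A minimal sketch of this lifecycle, assuming the calls are driven from an output pin when streaming starts and stops:

#include <dshow.h>

HRESULT OnStreamingStart(IMemAllocator *pAllocator)
{
    // Allocates the sample pool's memory; until this call, GetBuffer fails.
    return pAllocator->Commit();
}

HRESULT OnStreamingStop(IMemAllocator *pAllocator)
{
    // Releases (or marks for release) the pool; any blocked GetBuffer call
    // returns a failure code immediately.
    return pAllocator->Decommit();
}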

3.4.4. Filter States

A filter has three possible states: stopped, paused, and running. The purpose of the paused state is to prepare the graph in advance so that it can respond immediately when the Run command is issued. The Filter Graph Manager controls all state transitions. When an application calls IMediaControl::Run, IMediaControl::Pause, or IMediaControl::Stop, the Filter Graph Manager calls the corresponding IMediaFilter method on every filter. A transition between the stopped and running states always passes through the paused state; that is, if an application runs a stopped graph, the Filter Graph Manager pauses it first and then runs it.
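From the application side, these transitions are driven through IMediaControl; a minimal sketch, assuming the graph in pGraph has already been built:

#include <dshow.h>

HRESULT RunThenStop(IGraphBuilder *pGraph)
{
    IMediaControl *pControl = NULL;
    HRESULT hr = pGraph->QueryInterface(IID_IMediaControl, (void**)&pControl);
    if (FAILED(hr))
        return hr;

    hr = pControl->Run();       // the Filter Graph Manager pauses the graph
    if (SUCCEEDED(hr))          // first, then switches it to running
    {
        // ... playback in progress ...
        hr = pControl->Stop();  // goes back through paused to stopped
    }
    pControl->Release();
    return hr;
}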

For most filters, the running state and the paused state are equivalent. Consider the graph below:

Source > Transform > Renderer

Assume the source filter is not a live capture source. When the source filter is paused, it creates a thread that generates new data as fast as possible and writes it into media samples. The thread "pushes" the samples downstream by calling IMemInputPin methods on the transform filter's input pin. The transform filter receives the data on the source filter's thread; it may use a worker thread of its own to deliver samples to the renderer, but usually it delivers them on the same thread. If the renderer is paused, it waits to receive a sample; when it receives one, it either blocks or holds the sample. If it is a video renderer, it displays the sample as a still picture and repaints it only when necessary.

At this point the stream is fully cued for rendering. If the graph is still paused, samples accumulate behind each filter until every filter is blocked in Receive or GetBuffer. No data is lost; once the source thread is unblocked, it simply resumes from where it stopped.

The source and transform filters ignore the transition from paused to running: they simply continue to process data as fast as possible. But when the renderer runs, it starts to render samples. First it renders the sample it held while paused; then, each time it receives a new sample, it calculates that sample's presentation time and holds the sample until that time arrives before rendering it. While waiting for the right presentation time, it either blocks inside Receive or receives the data on a worker thread and queues it. The filters upstream of the renderer do not need to care about any of this.

Live sources, such as capture devices, are the exception. With a live source it is not appropriate to prepare data in advance: an application might pause the graph and wait a long time before running it, and the graph should not render samples from the paused period, so a live source does not produce new samples while paused. To notify the Filter Graph Manager of this, the source filter returns VFW_S_CANT_CUE from its IMediaFilter::GetState method. This return value indicates that the filter has switched to the paused state even though the renderer has not yet received any data.
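A minimal sketch of how a live source built on the DirectShow base classes might report this; it assumes a filter class derived from CSource, where m_State is the protected member of CBaseFilter that holds the current state:

#include <streams.h>

class CMyLiveSource : public CSource
{
public:
    // ... constructor and pin setup omitted ...

    STDMETHODIMP GetState(DWORD dwMSecs, FILTER_STATE *pState)
    {
        if (pState == NULL)
            return E_POINTER;
        *pState = m_State;

        // Tell the Filter Graph Manager that the transition to paused is
        // complete even though no data has been cued for the renderer.
        if (m_State == State_Paused)
            return VFW_S_CANT_CUE;
        return S_OK;
    }
};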

When a filter is stopped, it no longer accepts any data delivered to it. Source filters shut down their streaming threads, other filters shut down any worker threads they created, and the pins decommit their allocators.

State transitions

The Filter Graph Manager performs all state transitions in order from downstream to upstream, starting at the renderers and working back to the source filters. This order is necessary to prevent data loss or graph deadlock. The most important transitions are those between the paused and stopped states:

* Stopped to paused: as each filter is paused, it becomes ready to receive samples from the previous filter. The source filter is the last one to be paused; it creates its streaming thread and begins delivering samples. Because all downstream filters are already paused, no filter will refuse to receive samples. The Filter Graph Manager does not consider the transition complete until every renderer in the graph has received a sample (live feeds excepted).

* Paused to stopped: when a filter is stopped, it releases all of the samples it is holding, which unblocks any upstream filters waiting in GetBuffer. If the filter ...
