New FFmpeg: avfilter

Are you still using libswscale for pixel format conversion of images with FFmpeg? Hey, it's out of date!
FFmpeg has something new: libavfilter. It can completely replace libswscale and can automatically perform some fairly complex conversion operations. libavfilter is all good, except that it is rather complicated...
If all you do is convert image pixel formats, libswscale is quite simple to use. Just look at the latest ffplay.c: the avfilter-related code there is wrapped in #if CONFIG_AVFILTER ... #endif, and it is a large and confusing amount of code. But to keep up with the trend, we still have to learn it...
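For comparison, here is a minimal libswscale sketch for converting a single frame's pixel format. It is not taken from ffplay.c; the function name convert_to_yuv420p, the target format PIX_FMT_YUV420P and the assumption that the destination frame is already allocated are all illustrative.

#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

/* Illustrative helper: convert src (w x h, src_fmt) into dst (PIX_FMT_YUV420P).
   dst's buffers are assumed to be allocated already. */
static int convert_to_yuv420p(AVFrame *src, AVFrame *dst,
                              int w, int h, enum PixelFormat src_fmt)
{
    struct SwsContext *sws = sws_getContext(w, h, src_fmt,
                                            w, h, PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws)
        return -1;
    sws_scale(sws, (const uint8_t * const *)src->data, src->linesize,
              0, h, dst->data, dst->linesize);
    sws_freeContext(sws);
    return 0;
}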
First, let's sort out several related concepts in avfilter. (Note: if you have no DirectShow background, it is worth learning the basic DirectShow concepts first):
1. AVFilterGraph: almost identical to a filter graph in DirectShow; it represents a chain of connected filters.
2. AVFilter: a single filter.
3. AVFilterPad: an input or output port of a filter, equivalent to a pin in DirectShow. A filter that has only output pads is called a source, and a filter that has only input pads is called a sink.
4. AVFilterLink: the connection between two linked filters.
In fact, libavfilter really does look a lot like DirectShow.
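Roughly, these concepts map onto the following libavfilter C types; this is only a sketch for orientation, and the variable names are illustrative:

#include <libavfilter/avfilter.h>

AVFilterGraph   *graph;    /* the whole chain of connected filters (the filter graph)   */
AVFilter        *filter;   /* a filter definition, e.g. the one behind "buffersink"     */
AVFilterContext *instance; /* one instantiated filter placed inside a graph             */
AVFilterLink    *link;     /* the connection between an output pad and an input pad     */
/* AVFilterPad describes a single input/output port ("pin") of a filter */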

Next we will take ffplay.c as an example and analyze the avfilter-related code.
1. Generate the graph:
AVFilterGraph *graph = avfilter_graph_alloc();
2. Create the source
AVFilterContext *filt_src;
avfilter_graph_create_filter(&filt_src, &input_filter, "src", NULL, is, graph);
The first parameter receives the created filter (a source); the second is an AVFilter structure instance; the third is the name to give the created filter; the fourth is the filter's options string (NULL here); the fifth is user data (the caller's private data); and the sixth is the graph pointer. The AVFilter instance passed as the second parameter must be implemented by the caller, since it is what feeds frames into the graph.
3. Create the sink
AVFilterContext *filt_out;
ret = avfilter_graph_create_filter(&filt_out, avfilter_get_by_name("buffersink"), "out", NULL, pix_fmts, graph);
The parameters are the same as above. The sink created here is a buffersink; have a look at the libavfilter source file sink_buffer.c to see what it is. sink_buffer is simply a sink that hands the processed frames out through a buffer. Its output does not go through a pad, of course, because there is no filter after it. Using it as the sink lets the code outside the graph easily retrieve the frames the graph has processed.
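For reference, the pix_fmts passed above is a list of pixel formats the caller is willing to accept from the sink, terminated by PIX_FMT_NONE and handed to the buffersink as its opaque data. A minimal sketch, with the choice of PIX_FMT_YUV420P purely illustrative:

/* formats acceptable to the caller, terminated by PIX_FMT_NONE */
enum PixelFormat pix_fmts[] = { PIX_FMT_YUV420P, PIX_FMT_NONE };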
4. Connect the source and the sink
avfilter_link(filt_src, 0, filt_out, 0);
The first parameter is the upstream filter, the second is the index of the output pad on that filter to connect, the third is the downstream filter, and the fourth is the index of the input pad to connect on that filter.
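Since the whole point is to replace libswscale, note that additional filters can be created the same way and linked in between the source and the sink. A sketch, assuming this build registers the standard "format" filter and that it accepts a pixel-format name string as its argument; the filter instance name "fmt" and the chosen format are illustrative:

AVFilterContext *filt_fmt;
/* create a pixel-format conversion filter that outputs yuv420p */
ret = avfilter_graph_create_filter(&filt_fmt, avfilter_get_by_name("format"),
                                   "fmt", "yuv420p", NULL, graph);
if (ret < 0)
    return ret;
/* wire up source -> format -> sink instead of source -> sink */
ret = avfilter_link(filt_src, 0, filt_fmt, 0);
if (ret >= 0)
    ret = avfilter_link(filt_fmt, 0, filt_out, 0);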
5. Perform the final check on the graph
avfilter_graph_config(graph, NULL);
We will retrieve the processed frames from the sink, so it is best to keep a reference to it, for example:
AVFilterContext *out_video_filter = filt_out;
6. Implement input_filter

Because input_filter is a source, only output pads are assigned to it, and there is just one of them.

static AVFilter input_filter = {
    .name          = "ffplay_input",

    .priv_size     = sizeof(FilterPriv),

    .init          = input_init,
    .uninit        = input_uninit,

    .query_formats = input_query_formats,

    .inputs        = (AVFilterPad[]) {{ .name = NULL }},
    .outputs       = (AVFilterPad[]) {{ .name          = "default",
                                        .type          = AVMEDIA_TYPE_VIDEO,
                                        .request_frame = input_request_frame,
                                        .config_props  = input_config_props, },
                                      { .name = NULL }},
};

Next come the AVFilter callback functions that have to be implemented: init() and uninit(), which initialize and destroy the resources the filter uses.
Let's take a look at the implementation in ffplay.c:

static int input_init(AVFilterContext *ctx, const char *args, void *opaque)
{
    FilterPriv *priv = ctx->priv;
    AVCodecContext *codec;
    if (!opaque) return -1;

    priv->is = opaque;
    codec    = priv->is->video_st->codec;
    codec->opaque = ctx;
    if (codec->capabilities & CODEC_CAP_DR1) {
        av_assert0(codec->flags & CODEC_FLAG_EMU_EDGE);
        priv->use_dr1 = 1;
        codec->get_buffer     = input_get_buffer;
        codec->release_buffer = input_release_buffer;
        codec->reget_buffer   = input_reget_buffer;
        codec->thread_safe_callbacks = 1;
    }

    priv->frame = avcodec_alloc_frame();

    return 0;
}

FilterPriv is the private data structure of the filter (input_filter) implemented by ffplay. The main task here is to allocate an AVFrame to hold the frames obtained from the decoder. uninit() is even simpler, so there is no need to go through it.
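For context, FilterPriv looks roughly like this; the fields are inferred from the code above and the real definition in ffplay.c may contain more members:

#include <libavcodec/avcodec.h>

struct VideoState;                 /* ffplay's global player state, defined in ffplay.c  */

typedef struct FilterPriv {
    struct VideoState *is;         /* passed in as the opaque user data                  */
    AVFrame           *frame;      /* holds the most recently decoded frame              */
    int                use_dr1;    /* whether direct-rendering buffers are in use        */
} FilterPriv;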
You also need to implement request_frame() on the output pad, so that the filter connected after input_filter can obtain frames from it.

static int input_request_frame(AVFilterLink *link)
{
    FilterPriv *priv = link->src->priv;
    AVFilterBufferRef *picref;
    int64_t pts = 0;
    AVPacket pkt;
    int ret;

    while (!(ret = get_video_frame(priv->is, priv->frame, &pts, &pkt)))
        av_free_packet(&pkt);
    if (ret < 0)
        return -1;

    if (priv->use_dr1 && priv->frame->opaque) {
        picref = avfilter_ref_buffer(priv->frame->opaque, ~0);
    } else {
        picref = avfilter_get_video_buffer(link, AV_PERM_WRITE, link->w, link->h);
        av_image_copy(picref->data, picref->linesize,
                      priv->frame->data, priv->frame->linesize,
                      picref->format, link->w, link->h);
    }
    av_free_packet(&pkt);

    avfilter_copy_frame_props(picref, priv->frame);
    picref->pts = pts;

    avfilter_start_frame(link, picref);
    avfilter_draw_slice(link, 0, link->h, 1);
    avfilter_end_frame(link);

    return 0;
}

The caller obtains the processed frame from the sink:
av_buffersink_get_buffer_ref(filt_out, &picref, 0);
The acquired frame is saved in picref. This call makes each filter in the graph, from back to front, call request_frame() on the output pad of the filter before it, until request_frame() of the source, i.e. input_request_frame(), is finally called. input_request_frame() calls get_video_frame() (see ffplay.c) to obtain a frame (decoding it if necessary) and copies the frame data into picref; frames passing through the filters are represented by AVFilterBufferRef. It then copies some of the frame's properties into picref and calls avfilter_start_frame(link, picref), avfilter_draw_slice(link, 0, link->h, 1) and avfilter_end_frame(link) to push the frame through the graph. These three functions correspond to three function pointers on a pad: start_frame, draw_slice and end_frame. Taking start_frame as an example, the call sequence is: the source's start_frame is called first; after any necessary processing it calls start_frame of the filter connected to the source, and inside this function the output pad of each filter is responsible for passing the call further down. Once the sink's start_frame() has run, the calls return layer by layer back to the source's output pad. After all three functions have been called from the source's output pad, the final result for this frame is ready and can be picked up at the sink.
Compared with DirectShow, avfilter has no notion of push mode versus pull mode, and it does not run a thread on the source's output pad; the whole graph is driven by the caller.
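To make the caller-driven model concrete, here is a minimal sketch of how the code outside the graph might pull frames from the sink. The loop structure, the error handling and the display_frame() consumer are illustrative, not taken from ffplay.c:

AVFilterBufferRef *picref = NULL;

for (;;) {
    /* pulling from the sink drives the whole graph, ending in input_request_frame() */
    int ret = av_buffersink_get_buffer_ref(filt_out, &picref, 0);
    if (ret < 0)
        break;                      /* no more frames, or an error occurred */

    /* picref->data / picref->linesize / picref->pts now describe the processed frame */
    display_frame(picref);          /* hypothetical consumer of the frame */

    avfilter_unref_buffer(picref);  /* release the reference when done */
    picref = NULL;
}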
