FFmpeg 0.11.1: an analysis of the FFmpeg structure (briefly touched on) and the code flow (data flow, multithreading). The code is fairly messy; readers are advised to go straight to the original articles at the links contained below.


1. "Data Flow" is omitted here, memory management, only the simple process of recording, do not say where each packet is born, how to flow, where destroyed. 】

Reference link: http://www.rosoo.net/a/201207/16135.html

Bitstream / stream:

This refers to the URL and I/O side. "Bitstream" or "stream" is just a name people give it; there is no special meaning.

Packet:

The front end is the demuxer. The file is opened as a stream at the I/O layer; opt_input_file → avformat_open_input calls s->iformat->read_header(s) to parse the header information. Then, during transcode, the format context is fetched from input_files[file_index]->ctx, and av_read_frame(is, &pkt) reads data from the file one packet at a time; a packet may carry a whole frame of data or just a slice, depending on the container format (in RMVB a single packet can carry more than one frame). Next, in output_packet(InputStream *ist, const AVPacket *pkt), the current packet is copied and passed to the decoding side.
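As a minimal sketch of that front end (assuming the 0.11-era libavformat API; the filename is a hypothetical example), the demux loop looks roughly like this:

    #include <libavformat/avformat.h>

    int demux_example(const char *filename)
    {
        AVFormatContext *ic = NULL;
        AVPacket pkt;

        av_register_all();
        /* internally calls ic->iformat->read_header() to parse the header */
        if (avformat_open_input(&ic, filename, NULL, NULL) < 0)
            return -1;
        avformat_find_stream_info(ic, NULL);

        /* each av_read_frame() yields one packet: a whole frame or a
           slice, depending on the container format */
        while (av_read_frame(ic, &pkt) >= 0) {
            /* hand pkt to the decoding side, as output_packet() does */
            av_free_packet(&pkt);
        }
        avformat_close_input(&ic);
        return 0;
    }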

The back end is the muxer. For transcoding, the audio and video codec information is partly set manually and partly left at defaults. In transcode_init → avformat_write_header, AVOutputFormat.write_header is called to write the file's header information (for example, FLV, AVI and 3GP store a lot of information at the start of the file: some tags describe the audio/video codec parameters, others indicate how the media data and other data are arranged in the rest of the file, each following its own file-format specification). Until the back-end encoding has completely finished, some file types' header information also cannot be completed; FLV is not one of them, since its header information is independent of the media payload.

Packet data is written via output_packet → do_streamcopy → write_frame → av_interleaved_write_frame → s->oformat->write_packet; the I/O layer then sits one level below the muxer, which is logical.
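A minimal sketch of the mux side under the same 0.11-era API assumptions (oc is an already-configured output AVFormatContext, pkt an encoded packet):

    /* writes the header via AVOutputFormat.write_header, as in transcode_init */
    avformat_write_header(oc, NULL);

    /* per packet; ends in s->oformat->write_packet at the I/O layer below the muxer */
    av_interleaved_write_frame(oc, &pkt);

    /* finalizes any header fields that could not be known until encoding ended */
    av_write_trailer(oc);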


Frame:

The decoder is opened with avcodec_open2 in init_input_stream, which transcode_init invokes. AVCodecContext.get_buffer/release_buffer point at static int codec_get_buffer(AVCodecContext *s, AVFrame *frame) and static void codec_release_buffer(AVCodecContext *s, AVFrame *frame); get_buffer takes a frame from InputStream.buffer_pool. InputStream is the structure that carries the data after decoding, and buffer_pool is a linked list of frame buffers.
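To illustrate the callback pair (this is not ffmpeg.c's actual pool logic; the hypothetical sketch just delegates to the library defaults):

    /* hypothetical pass-through callbacks; ffmpeg.c's codec_get_buffer()
       additionally recycles frames through InputStream.buffer_pool */
    static int my_get_buffer(AVCodecContext *s, AVFrame *frame)
    {
        return avcodec_default_get_buffer(s, frame);
    }

    static void my_release_buffer(AVCodecContext *s, AVFrame *frame)
    {
        avcodec_default_release_buffer(s, frame);
    }

    /* installed before avcodec_open2() */
    avctx->get_buffer     = my_get_buffer;
    avctx->release_buffer = my_release_buffer;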

decode_video(InputStream *ist, AVPacket *pkt, int *got_output) decodes into the InputStream. Single-threaded and multi-threaded slice decoding go through avctx->codec->decode; multi-threaded frame decoding goes through ff_thread_decode_frame.
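A minimal decode loop in the 0.11-era API (avcodec_decode_video2; ic and a previously located video_idx are assumed from the demux sketch above):

    AVCodecContext *avctx = ic->streams[video_idx]->codec;
    AVCodec *dec = avcodec_find_decoder(avctx->codec_id);
    AVFrame *frame = avcodec_alloc_frame();
    AVPacket pkt;
    int got_frame;

    avcodec_open2(avctx, dec, NULL);
    while (av_read_frame(ic, &pkt) >= 0) {
        if (pkt.stream_index == video_idx) {
            /* dispatches to avctx->codec->decode, or to
               ff_thread_decode_frame under frame threading */
            avcodec_decode_video2(avctx, frame, &got_frame, &pkt);
            if (got_frame) {
                /* frame now holds a decoded picture */
            }
        }
        av_free_packet(&pkt);
    }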

In the original article's diagram, arrows denote pointer-reference relationships and plain lines denote parallel relationships. You can verify these relationships in the code; the drawing is there simply to show that the structures are connected.

In addition, we should pay attention to the synchronization problem.

There is a useful diagram at this point in the original article, from: http://www.rosoo.net/a/201207/16135.html

Decoded data passes through pre_process_video_frame (edge handling; decoding involves some processing at the picture edges). OutputStream manages the data that is about to be encoded; ost->sync_ist = input_streams[source_index] in new_output_stream ties the two together, and in transcode → init_simple_filtergraph, ist->filters[ist->nb_filters-1] = fg->inputs[0] makes them meet. Next let us look at the filters, otherwise it is hard to untangle the thread; this is a new mechanism in FFmpeg.

For the filter mechanism in the newer versions of FFmpeg, refer to: http://blog.csdn.net/nkmnkm/article/details/7219641

AVFilterGraph: almost identical to the filter graph in DirectShow; it represents a chain of connected filters. (It is easy to understand and quick to get started with.)
AVFilter: represents a single filter.
AVFilterPad: represents an input or output of a filter, equivalent to a pin in DirectShow. A filter with only output pads is called a source; a filter with only input pads is called a sink.
AVFilterLink: represents the bond between two connected filters.

1. Create the graph: AVFilterGraph *graph = avfilter_graph_alloc();
2. Create the source:
   AVFilterContext *filt_src;
   avfilter_graph_create_filter(&filt_src, &input_filter, "src", NULL, is, graph);
3. Create the sink:
   AVFilterContext *filt_out;
   ret = avfilter_graph_create_filter(&filt_out, avfilter_get_by_name("buffersink"), "out", NULL, pix_fmts, graph);
4. Connect source and sink: avfilter_link(filt_src, 0, filt_out, 0);
5. Final check of the graph: avfilter_graph_config(graph, NULL);
   We pull processed frames out of the sink, so it is best to keep a reference to it, for example:
   AVFilterContext *out_video_filter = filt_out;
6. Implement input_filter (a fuller sketch follows this list).
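Putting the steps together, a minimal end-to-end sketch (assuming the 0.11-era libavfilter API; the colon-separated "buffer" argument string is version-dependent, and dec_ctx stands for the decoder context):

    #include <libavfilter/avfiltergraph.h>
    #include <libavfilter/buffersink.h>

    AVFilterGraph *graph = avfilter_graph_alloc();
    AVFilterContext *filt_src = NULL, *filt_out = NULL;
    char args[256];
    int ret;

    /* the "buffer" source is fed decoded frames by the caller */
    snprintf(args, sizeof(args), "%d:%d:%d:%d:%d:%d:%d",
             dec_ctx->width, dec_ctx->height, dec_ctx->pix_fmt,
             dec_ctx->time_base.num, dec_ctx->time_base.den,
             dec_ctx->sample_aspect_ratio.num, dec_ctx->sample_aspect_ratio.den);
    ret = avfilter_graph_create_filter(&filt_src, avfilter_get_by_name("buffer"),
                                       "src", args, NULL, graph);
    ret = avfilter_graph_create_filter(&filt_out, avfilter_get_by_name("buffersink"),
                                       "out", NULL, NULL, graph);
    ret = avfilter_link(filt_src, 0, filt_out, 0);  /* source pad 0 -> sink pad 0 */
    ret = avfilter_graph_config(graph, NULL);       /* validate the whole graph */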

configure_simple_filtergraph → configure_video_filters is set up in transcode_init. poll_filters calls av_buffersink_read or av_buffersink_get_buffer_ref; scaling is now implemented through this mechanism, replacing the earlier direct use of swscale, though this version of the code is a bit messy.
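The pull side that poll_filters performs looks roughly like this (AVFilterBufferRef was the filtered-frame type in this era; the exact flags and return conventions are version-dependent):

    AVFilterBufferRef *picref = NULL;

    /* returns AVERROR(EAGAIN) once no filtered frame is pending */
    while (av_buffersink_get_buffer_ref(filt_out, &picref, 0) >= 0) {
        /* ... encode or display the filtered picture ... */
        avfilter_unref_buffer(picref);
        picref = NULL;
    }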

The encoder is opened in transcode_init: avcodec_open2(ost->st->codec, codec, &ost->opts) runs the avctx->codec->init initialization. In output_packet, the test if (!check_output_constraints(ist, ost) || ost->encoding_needed) decides between stream copy and re-encoding; poll_filters → do_video_out → avcodec_encode_video2 does the encoding, and write_frame then packs the result into the container.
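A minimal sketch of that encode step in the 0.11-era API (enc_ctx, frame and oc are assumed from the setup above):

    AVPacket pkt;
    int got_packet = 0;

    av_init_packet(&pkt);
    pkt.data = NULL;   /* let the encoder allocate the output buffer */
    pkt.size = 0;

    /* avcodec_encode_video2 is what do_video_out reaches */
    if (avcodec_encode_video2(enc_ctx, &pkt, frame, &got_packet) >= 0 && got_packet) {
        /* write_frame then hands the packet to the container */
        av_interleaved_write_frame(oc, &pkt);
    }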

2. "Multithreading" (-threads N, decoding multithreading) (if you want to really support multi-threading, need to compile, add line libraries Pthread)

Suggested Reference: http://www.360doc.com/content/12/0416/11/474846_204064235.shtml

Decoding Multithreading:

In avcodec_open2, note the ff_lockmgr_cb mutex; the related pieces are av_lockmgr_register, avpriv_lock_avformat, avpriv_unlock_avformat and the entangled_thread_counter variable, of which avcodec_open2 makes simple use.

Note validate_thread_parameters, reached from ff_thread_init. The H.264 decoder, for example, declares AVCodec.capabilities = /*CODEC_CAP_DRAW_HORIZ_BAND |*/ CODEC_CAP_DR1 | CODEC_CAP_DELAY | CODEC_CAP_SLICE_THREADS | CODEC_CAP_FRAME_THREADS, which states the threading types it can support; the if/else selection gives FF_THREAD_FRAME priority over FF_THREAD_SLICE.
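A typical lock-manager callback registered through av_lockmgr_register looks like this (a pthread-based sketch; the callback contract is the library's, the function body is illustrative):

    #include <pthread.h>
    #include <stdlib.h>

    static int lockmgr_cb(void **mtx, enum AVLockOp op)
    {
        switch (op) {
        case AV_LOCK_CREATE:
            *mtx = malloc(sizeof(pthread_mutex_t));
            if (!*mtx)
                return 1;
            return !!pthread_mutex_init(*mtx, NULL);
        case AV_LOCK_OBTAIN:
            return !!pthread_mutex_lock(*mtx);
        case AV_LOCK_RELEASE:
            return !!pthread_mutex_unlock(*mtx);
        case AV_LOCK_DESTROY:
            pthread_mutex_destroy(*mtx);
            free(*mtx);
            return 0;
        }
        return 1;
    }

    /* registered once, before any concurrent avcodec/avformat open calls */
    av_lockmgr_register(lockmgr_cb);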

(On Linux) the thread-related functions live in pthread.c; you can refer to the following. ff_thread_init, covered below, calls frame_thread_init, which spawns the frame_worker_thread worker threads.

Pay attention to the assignment of AVCodecContext.thread_opaque and of execute and execute2.

Slice threading:

In FFmpeg, dvvideo_decoder, ffv1_decoder, h264_decoder, mpeg2_video_decoder and mpeg_video_decoder all support FF_THREAD_SLICE (a slice can be an entire frame or a segment of one). If the thread type is slice, ff_thread_init calls thread_init(AVCodecContext *avctx) to initialize. (Reading this function, by the way, shows how automatic thread selection works: it first obtains the CPU count and takes FFMIN(nb_cpus + 1, MAX_AUTO_THREADS), then pthread_create(&c->workers[i], NULL, worker, avctx) creates the threads, avcodec_thread_park_workers parks them, and finally execute = avcodec_thread_execute and execute2 = avcodec_thread_execute2 are installed.)

Here, after the AVCodecContext multithreading function execute has been registered, the codec's decoding process calls avctx->execute() for each slice; execute wakes the slice-decoding worker threads to decode in parallel, and returns quickly so the next slice can be parsed and dispatched (a sketch of the execute contract follows). The synchronization between the frame-threading main thread and the decoding threads is the subject of the next section.
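The execute contract can be sketched like this (SliceArg and decode_one_slice are hypothetical names; real codecs pass their own per-slice context):

    /* hypothetical per-slice argument */
    typedef struct SliceArg {
        int first_mb_row;
        int last_mb_row;
    } SliceArg;

    /* worker run on the slice-thread pool, one call per slice */
    static int decode_one_slice(AVCodecContext *avctx, void *arg)
    {
        SliceArg *s = arg;
        /* ... decode macroblock rows s->first_mb_row .. s->last_mb_row ... */
        return 0;
    }

    /* fan nb_slices jobs out to the workers; returns when all have finished */
    avctx->execute(avctx, decode_one_slice, slice_args, NULL,
                   nb_slices, sizeof(*slice_args));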

Frame threading:

The decoders that currently support frame threading are h264_decoder, huffyuv_decoder, ffvhuff_decoder, mdec_decoder, mimic_decoder, mpeg4_decoder, theora_decoder, vp3_decoder and vp8_decoder.

Frame threading has the following limitations: the user callback draw_horiz_band() must be thread-safe; for good performance the user should supply a thread-safe get_buffer() callback to the codec; and the user must be able to tolerate the extra delay that multithreading introduces. In addition, a codec that supports frame threading requires every packet to contain a complete frame, and once ff_thread_report_progress() has been called for a buffer, that buffer's contents must no longer be written.

Each thread has four states (shown in Figure 2 of the original article). To guarantee thread safety, if the codec does not implement update_thread_context() and a thread-safe get_buffer(), it must finish decoding before its state may change to STATE_SETUP_FINISHED; in other words, the next thread can only start decoding after the current thread has finished.


As shown in Figure 3 of the original article, if the codec does implement update_thread_context() and a thread-safe get_buffer(), the thread's state can change to STATE_SETUP_FINISHED before decoding begins, so the next thread can run in parallel with the current one.
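The progress handshake between neighbouring frame threads can be sketched as follows (internal API from libavcodec/thread.h; in this era it took an AVFrame pointer, and the variable names here are illustrative):

    /* in the thread decoding frame N: announce how many rows are finished */
    ff_thread_report_progress(cur_frame, last_finished_row, 0);

    /* in the thread decoding frame N+1: block until the reference rows
       needed for motion compensation have been reported */
    ff_thread_await_progress(ref_frame, needed_row, 0);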


The decoding main thread hands the bitstream to the matching decoding thread by calling submit_packet. The synchronization of the main thread and the decoding threads is shown in Figure 4 of the original article.


As the diagram shows, the main thread submits packets in order, and the worker threads process them.

"The article has been marked out reference articles, suggested that we look at the original text, this article is only the fragments of the thought record"
