FFmpeg Tutorial 4: Spawning threads

Source: Internet
Author: User

Summary
Previously, we added audio support using SDL's audio functions: we defined an audio callback, and SDL started a thread of its own that calls back into that function whenever it needs audio data. Now we will do the same kind of thing for video playback. This makes the code more modular and easier to work with, which matters especially once we get to audio/video synchronization. So where do we start?
First, notice that our main function is doing too much: it runs the event loop, reads packets, and decodes the video. What we want to do is split these responsibilities apart: one thread reads packets and puts the audio and video packets onto their respective queues, and the corresponding audio and video threads read from those queues. We have already created the audio thread we need; the video thread will be a bit more involved, because we have to display the video data ourselves (audio is played for us by SDL). Rather than putting the display code in the main loop, we will integrate video playback with the event loop: we decode the video, put the decoded frame onto another queue, then create a custom event (FF_REFRESH_EVENT) and push it into the event system. Every time the event loop sees this event, it displays the next frame. The hand-drawn character diagram below sketches this flow.
 ________ audio  _______      _____
|        | pkts |       |    |     | to spkr
| DECODE |----->| AUDIO |--->| SDL |-->
|________|      |_______|    |_____|
    |  video     _______
    |   pkts    |       |
    +---------->| VIDEO |
 ________       |_______|    _______
|       |          |        |       |
| EVENT |          +------->| VIDEO | to mon.
| LOOP  |------------------>| DISP. |-->
|_______|<---FF_REFRESH-----|_______|

The main reason for moving video display into the event loop is that, using an SDL timer thread, we can control exactly when the next video frame shows up on the screen. When we finally get to audio/video synchronization in the next chapter, it will be straightforward to add the code that refreshes the right picture at the right time.
Simplifying the code
We also need to clean up the code a bit. We have all this audio and video codec information, and we are going to be adding queues, buffers, and who knows what else. All of this serves one logical unit: the movie. So we create one large struct, called VideoState, that holds all of this information:
typedef struct VideoState {

  AVFormatContext *pFormatCtx;
  int             videoStream, audioStream;
  AVStream        *audio_st;
  PacketQueue     audioq;
  uint8_t         audio_buf[(AVCODEC_MAX_AUDIO_FRAME_SIZE * 3) / 2];
  unsigned int    audio_buf_size;
  unsigned int    audio_buf_index;
  AVPacket        audio_pkt;
  uint8_t         *audio_pkt_data;
  int             audio_pkt_size;
  AVStream        *video_st;
  PacketQueue     videoq;

  VideoPicture    pictq[VIDEO_PICTURE_QUEUE_SIZE];
  int             pictq_size, pictq_rindex, pictq_windex;
  SDL_mutex       *pictq_mutex;
  SDL_cond        *pictq_cond;
  SDL_Thread      *parse_tid;
  SDL_Thread      *video_tid;

  char            filename[1024];
  int             quit;
} VideoState;
Let's take a quick look at this struct. First the basic information: the format context pFormatCtx, the indices of the audio and video streams, and their corresponding AVStream pointers. We have moved the audio-related buffers (audio_buf, audio_buf_size, and so on) into this struct. For the video we add a packet queue (videoq) and a buffer for decoded frames (pictq, used as a simple queue rather than a real one).
VideoPicture is a struct of our own creation; we will look at what is in it when we get there. The struct also holds pointers for the two extra threads we are going to create, the quit flag, and the name of the movie file. Now let's go back to the main function and see how this changes the program. First we initialize the VideoState struct:
int main(int argc, char *argv[]) {

  SDL_Event  event;

  VideoState *is;

  is = av_mallocz(sizeof(VideoState));
av_mallocz() is a handy function that allocates the memory for us and zeroes it out.
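Its behavior is roughly that of the sketch below (a simplification for illustration only; my_mallocz is a made-up name, not an FFmpeg function):

/* Rough equivalent of av_mallocz(), for illustration only */
void *my_mallocz(size_t size) {
  void *ptr = av_malloc(size);   /* FFmpeg's aligned allocator */
  if(ptr)
    memset(ptr, 0, size);        /* zero the whole block */
  return ptr;
}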
Next we initialize the mutex and condition variable for the display buffer (pictq). The event loop calls the display function, which pulls decoded frames out of pictq, while the video decoding thread puts frames into it; the two would conflict, which is a classic race condition. So we allocate the locks before we start any threads. We also copy the movie filename into our VideoState:
pstrcpy(is->filename, sizeof(is->filename), argv[1]);

is->pictq_mutex = SDL_CreateMutex();
is->pictq_cond  = SDL_CreateCond();
pstrcpy() is a function from FFmpeg; compared with strncpy, it does extra bounds checking and always NUL-terminates the destination.
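A rough sketch of what it guarantees follows (my_pstrcpy is a made-up name used only to illustrate the difference from strncpy):

/* Sketch of pstrcpy-style copying, for illustration only */
void my_pstrcpy(char *dst, int dst_size, const char *src) {
  if(dst_size <= 0)
    return;
  while(--dst_size > 0 && *src)
    *dst++ = *src++;
  *dst = '\0';   /* always NUL-terminate, unlike strncpy */
}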
The first thread
Now we can finally create our threads and get some real work done:
schedule_refresh(is, 40);

is->parse_tid = SDL_CreateThread(decode_thread, is);
if(!is->parse_tid) {
  av_free(is);
  return -1;
}
schedule_refresh() is a function we will define later. What it does is tell the system to push an FF_REFRESH_EVENT after the specified number of milliseconds, which in turn calls the video refresh function from the event loop. For now, let's look at SDL_CreateThread(). It spawns a new thread that has complete access to all the memory of the original process, starts it running on the function we give it, and passes that function user-defined data. Here we call decode_thread() with our VideoState struct attached. The first half of that function has nothing new: it simply opens the file and finds the indices of the audio and video streams. The only difference is that we save the format context in our big struct. After finding the stream indices, we call another function that we will define, stream_component_open(). This is a natural way to split things up, and since setting up the audio and video codecs is very similar, putting it in one function lets us reuse a lot of code.
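That first half of decode_thread() is not reproduced in this article; a minimal sketch, based on the steps just described and on the same old FFmpeg API used in the rest of this code (error handling and a few details are abbreviated), might look like this:

int decode_thread(void *arg) {
  VideoState *is = (VideoState *)arg;
  AVFormatContext *pFormatCtx;
  int video_index = -1, audio_index = -1, i;

  is->videoStream = -1;
  is->audioStream = -1;

  // Open the media file and read its header
  if(av_open_input_file(&pFormatCtx, is->filename, NULL, 0, NULL) != 0)
    return -1;
  is->pFormatCtx = pFormatCtx;   /* save the format context in our big struct */

  // Retrieve stream information
  if(av_find_stream_info(pFormatCtx) < 0)
    return -1;

  // Find the first video and audio streams
  for(i = 0; i < pFormatCtx->nb_streams; i++) {
    if(pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_VIDEO &&
       video_index < 0)
      video_index = i;
    if(pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_AUDIO &&
       audio_index < 0)
      audio_index = i;
  }
  if(audio_index >= 0)
    stream_component_open(is, audio_index);
  if(video_index >= 0)
    stream_component_open(is, video_index);

  /* ... the packet-reading loop shown further below follows here ... */
}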
In stream_component_open() we find our codec and decoder, set up our audio options, save the important information in our VideoState, and launch our audio and video threads. This is also where we would insert other options, such as forcing a particular codec instead of autodetecting it. Here is the function:
int stream_component_open(VideoState *is, int stream_index) {

  AVFormatContext *pFormatCtx = is->pFormatCtx;
  AVCodecContext *codecCtx;
  AVCodec *codec;
  SDL_AudioSpec wanted_spec, spec;

  if(stream_index < 0 || stream_index >= pFormatCtx->nb_streams) {
    return -1;
  }

  // Get a pointer to the codec context for the stream
  codecCtx = pFormatCtx->streams[stream_index]->codec;

  if(codecCtx->codec_type == CODEC_TYPE_AUDIO) {
    // Set audio settings from codec info
    wanted_spec.freq = codecCtx->sample_rate;
    /* ... remaining fields (format, channels, samples, etc.) are set
       as in the previous tutorial ... */
    wanted_spec.callback = audio_callback;
    wanted_spec.userdata = is;

    if(SDL_OpenAudio(&wanted_spec, &spec) < 0) {
      fprintf(stderr, "SDL_OpenAudio: %s\n", SDL_GetError());
      return -1;
    }
  }
  codec = avcodec_find_decoder(codecCtx->codec_id);
  if(!codec || (avcodec_open(codecCtx, codec) < 0)) {
    fprintf(stderr, "Unsupported codec!\n");
    return -1;
  }

  switch(codecCtx->codec_type) {
  case CODEC_TYPE_AUDIO:
    is->audioStream = stream_index;
    is->audio_st = pFormatCtx->streams[stream_index];
    is->audio_buf_size = 0;
    is->audio_buf_index = 0;
    memset(&is->audio_pkt, 0, sizeof(is->audio_pkt));
    packet_queue_init(&is->audioq);
    SDL_PauseAudio(0);
    break;
  case CODEC_TYPE_VIDEO:
    is->videoStream = stream_index;
    is->video_st = pFormatCtx->streams[stream_index];
    packet_queue_init(&is->videoq);
    is->video_tid = SDL_CreateThread(video_thread, is);
    break;
  default:
    break;
  }
}

This code is basically the same as what we had before, except that it now handles both audio and video. Note that instead of the original codecCtx, we set up our big VideoState struct as the userdata for the audio callback, and we save the streams themselves as audio_st and video_st. We also created a video queue and initialized it the same way we did the audio queue. Most importantly, though, we launch the audio playback and the video thread:
    SDL_PauseAudio(0);
    break;

    is->video_tid = SDL_CreateThread(video_thread, is);
SDL_PauseAudio() was discussed in the previous section, and we will get to video_thread() shortly. Before that, let's look at the second half of the decode_thread() function. It is basically a for loop that reads in a packet and puts it on the right queue:
  for(;;) {
    if(is->quit) {
      break;
    }
    // seek stuff goes here
    if(is->audioq.size > MAX_AUDIOQ_SIZE ||
       is->videoq.size > MAX_VIDEOQ_SIZE) {
      SDL_Delay(10);
      continue;
    }
    if(av_read_frame(is->pFormatCtx, packet) < 0) {
      if(url_ferror(&pFormatCtx->pb) == 0) {
        SDL_Delay(100); /* no read error; wait and try again */
        continue;
      } else {
        break;
      }
    }
    // Is this a packet from the video stream?
    if(packet->stream_index == is->videoStream) {
      packet_queue_put(&is->videoq, packet);
    } else if(packet->stream_index == is->audioStream) {
      packet_queue_put(&is->audioq, packet);
    } else {
      av_free_packet(packet);
    }
  }

Nothing new here, except that we now have a maximum size for our audio and video queues, and we have added a check for read errors. The format context contains a ByteIOContext struct called pb, which holds low-level file information; the url_ferror() function checks that struct to see whether there was an error reading from our file.

After the loop, we have the code that waits for the rest of the program to end and then notifies it that we have finished. This code is instructive because it shows how we push events, which is something we will have to do later to display the video.

  while(!is->quit) {
    SDL_Delay(100);
  }

 fail:
  if(1) {
    SDL_Event event;
    event.type = FF_QUIT_EVENT;
    event.user.data1 = is;
    SDL_PushEvent(&event);
  }
  return 0;

We get values for user events by using the SDL constant SDL_USEREVENT. The first user event should be assigned the value SDL_USEREVENT, the next SDL_USEREVENT + 1, and so on. In our program, FF_QUIT_EVENT is defined as SDL_USEREVENT + 2. We can also pass user data if we like; here we pass our pointer to the big struct. Finally we call SDL_PushEvent(). In our event loop switch, we just put this next to the SDL_QUIT_EVENT section we had before. We will see our event loop in more detail later; for now, just be assured that when we push the FF_QUIT_EVENT, we will catch it and set our quit flag.
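Putting together the values mentioned in this article, the custom event definitions look like this (FF_ALLOC_EVENT and FF_REFRESH_EVENT are introduced further below):

#define FF_ALLOC_EVENT   (SDL_USEREVENT)
#define FF_REFRESH_EVENT (SDL_USEREVENT + 1)
#define FF_QUIT_EVENT    (SDL_USEREVENT + 2)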

Getting the frame: video_thread

After we have our decoder prepared, we start the video thread. This thread reads packets from the video queue, decodes the video into frames, and then calls a queue_picture function to put the processed frame onto a picture queue:

int video_thread(void *arg) {

  VideoState *is = (VideoState *)arg;
  AVPacket pkt1, *packet = &pkt1;
  int len1, frameFinished;
  AVFrame *pFrame;

  pFrame = avcodec_alloc_frame();

  for(;;) {
    if(packet_queue_get(&is->videoq, packet, 1) < 0) {
      // means we quit getting packets
      break;
    }
    // Decode video frame
    len1 = avcodec_decode_video(is->video_st->codec, pFrame, &frameFinished,
                                packet->data, packet->size);

    // Did we get a video frame?
    if(frameFinished) {
      if(queue_picture(is, pFrame) < 0) {
        break;
      }
    }
    av_free_packet(packet);
  }

  av_free(pFrame);
  return 0;
}

Most of the functions here should be familiar by now. We moved the avcodec_decode_video call over here and just replaced some of its arguments; for example, the AVStream is now stored in our big struct, so we get the codec from there. We just keep getting packets from the video queue until someone tells us to quit or we hit an error.

Queue Frames

Let's look at the function that stores our decoded frame, pFrame, on the picture queue. Since our picture queue is an SDL overlay (presumably so that the video display function has as little computation to do as possible), we need to convert the frame into that format. The data we store on the picture queue is a struct of our own making:

typedef struct VideoPicture {
  SDL_Overlay *bmp;
  int width, height;  /* source height and width */
  int allocated;
} VideoPicture;

Our big struct has a buffer of these in which we can store them. However, we need to allocate the SDL_Overlay ourselves (notice the allocated flag, which indicates whether we have done so yet).

To use this queue, we keep two indices: a writing index and a reading index. We also keep track of how many actual pictures are in the buffer. To write to the queue, we first wait for a slot to clear out so we have space to store our VideoPicture. Then we check whether we have already allocated an overlay at our writing index. If not, we have to allocate one. We also have to reallocate the buffer if the window size has changed. However, to avoid locking problems, we avoid doing the allocation here (I am not entirely sure of the reason; I believe it is to avoid calling the SDL overlay functions from threads other than the main one).

int queue_picture(VideoState *is, AVFrame *pFrame) {

  VideoPicture *vp;
  int dst_pix_fmt;
  AVPicture pict;

  /* wait until we have space for a new picture */
  SDL_LockMutex(is->pictq_mutex);
  while(is->pictq_size >= VIDEO_PICTURE_QUEUE_SIZE &&
        !is->quit) {
    SDL_CondWait(is->pictq_cond, is->pictq_mutex);
  }
  SDL_UnlockMutex(is->pictq_mutex);

  if(is->quit)
    return -1;

  // windex is set to 0 initially
  vp = &is->pictq[is->pictq_windex];

  /* allocate or resize the overlay if needed */
  if(!vp->bmp ||
     vp->width != is->video_st->codec->width ||
     vp->height != is->video_st->codec->height) {

    SDL_Event event;

    vp->allocated = 0;
    /* the allocation has to happen in the main thread */
    event.type = FF_ALLOC_EVENT;
    event.user.data1 = is;
    SDL_PushEvent(&event);

    /* wait until we have a picture allocated */
    SDL_LockMutex(is->pictq_mutex);
    while(!vp->allocated && !is->quit) {
      SDL_CondWait(is->pictq_cond, is->pictq_mutex);
    }
    SDL_UnlockMutex(is->pictq_mutex);
    if(is->quit) {
      return -1;
    }
  }

The event mechanism here is the same as the one we saw when we wanted to quit. We have defined FF_ALLOC_EVENT as SDL_USEREVENT. We push the event onto the queue and then wait on the condition variable for the allocation function to signal us.

Let's take a look at how to modify the event loop:

for(;;) {
  SDL_WaitEvent(&event);
  switch(event.type) {
  case FF_ALLOC_EVENT:
    alloc_picture(event.user.data1);
    break;

Remember that event.user.data1 is our big struct. That's simple enough. Let's look at the alloc_picture() function:

void alloc_picture(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  vp = &is->pictq[is->pictq_windex];
  if(vp->bmp) {
    // we already have one, make another, bigger/smaller
    SDL_FreeYUVOverlay(vp->bmp);
  }
  // Allocate a place to put our YUV image on that screen
  vp->bmp = SDL_CreateYUVOverlay(is->video_st->codec->width,
                                 is->video_st->codec->height,
                                 SDL_YV12_OVERLAY,
                                 screen);
  vp->width = is->video_st->codec->width;
  vp->height = is->video_st->codec->height;

  SDL_LockMutex(is->pictq_mutex);
  vp->allocated = 1;
  SDL_CondSignal(is->pictq_cond);
  SDL_UnlockMutex(is->pictq_mutex);
}

You can see that we have moved the SDL_CreateYUVOverlay call from the main loop into this function. The code should be fairly self-explanatory by now. Remember that we save the width and height in the VideoPicture structure so that we can later check whether the video size has changed.

Good, we are almost all settled: the YUV overlay is allocated and ready to receive a picture. Let's go back to queue_picture and look at the code that copies the frame into the overlay. You should recognize parts of it:

int queue_picture(VideoState *is, AVFrame *pFrame) {

  /* ... the allocation code shown above ... */

  /* We have a place to put our picture on the queue */
  if(vp->bmp) {

    SDL_LockYUVOverlay(vp->bmp);

    dst_pix_fmt = PIX_FMT_YUV420P;

    /* point pict at the overlay's planes */
    pict.data[0] = vp->bmp->pixels[0];
    pict.data[1] = vp->bmp->pixels[2];
    pict.data[2] = vp->bmp->pixels[1];

    pict.linesize[0] = vp->bmp->pitches[0];
    pict.linesize[1] = vp->bmp->pitches[2];
    pict.linesize[2] = vp->bmp->pitches[1];

    // Convert the image into the YUV format that SDL uses
    img_convert(&pict, dst_pix_fmt,
                (AVPicture *)pFrame, is->video_st->codec->pix_fmt,
                is->video_st->codec->width, is->video_st->codec->height);

    SDL_UnlockYUVOverlay(vp->bmp);

    /* now tell the display thread that we have a picture ready */
    if(++is->pictq_windex == VIDEO_PICTURE_QUEUE_SIZE) {
      is->pictq_windex = 0;
    }
    SDL_LockMutex(is->pictq_mutex);
    is->pictq_size++;
    SDL_UnlockMutex(is->pictq_mutex);
  }

  return 0;
}

The main part of this is simply the code we used earlier to fill the YUV overlay with our frame. The last bit just "adds" the picture to the queue. The queue is written to until it is full and read from as long as there is something on it, so everything depends on the is->pictq_size value, which means we have to lock it. What we do here is increment the write index (wrapping it around when necessary), then lock the queue and increase its size. Now the reader will know there is more on the queue, and if this makes the queue full, the writer will know about it too.

Show video

That's it for our video thread. We have now seen all the threads except one: remember that we called the schedule_refresh() function earlier? Let's see what it actually does:

/* schedule a video refresh in 'delay' ms */
static void schedule_refresh(VideoState *is, int delay) {
  SDL_AddTimer(delay, sdl_refresh_timer_cb, is);
}

SDL_AddTimer() is an SDL function that makes a callback to a user-specified function after the given number of milliseconds (optionally carrying some user data). We use it to schedule video updates: every time we call it, it sets a timer that triggers an event, which in turn makes our main function pull a frame from the picture queue and display it on the screen.
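One usage note, shown as a small sketch of something not reproduced in this article: SDL only delivers timer callbacks if the timer subsystem was initialized, so the SDL_Init() call needs to include SDL_INIT_TIMER:

// Assumed initialization (not shown in this article): the timer subsystem
// must be enabled for SDL_AddTimer() to work.
if(SDL_Init(SDL_INIT_VIDEO | SDL_INIT_AUDIO | SDL_INIT_TIMER)) {
  fprintf(stderr, "Could not initialize SDL - %s\n", SDL_GetError());
  exit(1);
}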

But first things first: let's trigger that event.

static Uint32 sdl_refresh_timer_cb(Uint32 interval, void *opaque) {
  SDL_Event event;
  event.type = FF_REFRESH_EVENT;
  event.user.data1 = opaque;
  SDL_PushEvent(&event);
  return 0; /* 0 means stop the timer */
}

Here is the now-familiar event push. FF_REFRESH_EVENT is defined as SDL_USEREVENT + 1. One thing to note is that when we return 0, SDL stops the timer, so the callback is not made again.
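For contrast, a hypothetical variant (not what this player does): if the callback returned the interval instead of 0, SDL would keep firing the timer at a fixed rate without us having to re-arm it through schedule_refresh() each time.

/* Hypothetical periodic variant, for illustration only */
static Uint32 periodic_refresh_cb(Uint32 interval, void *opaque) {
  SDL_Event event;
  event.type = FF_REFRESH_EVENT;
  event.user.data1 = opaque;
  SDL_PushEvent(&event);
  return interval; /* reschedule at the same rate */
}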

Now that we have an FF_REFRESH_EVENT, we need to handle it in the event loop:

for(;;) {
  SDL_WaitEvent(&event);
  switch(event.type) {
  case FF_REFRESH_EVENT:
    video_refresh_timer(event.user.data1);
    break;

and that sends us into this function, where the data is actually pulled from our picture queue:

void video_refresh_timer(void *userdata) {

  VideoState *is = (VideoState *)userdata;
  VideoPicture *vp;

  if(is->video_st) {
    if(is->pictq_size == 0) {
      schedule_refresh(is, 1);
    } else {
      vp = &is->pictq[is->pictq_rindex];
      /* timing code goes here */

      schedule_refresh(is, 80);

      /* show the picture! */
      video_display(is);

      /* update queue for next picture */
      if(++is->pictq_rindex == VIDEO_PICTURE_QUEUE_SIZE) {
        is->pictq_rindex = 0;
      }
      SDL_LockMutex(is->pictq_mutex);
      is->pictq_size--;
      SDL_CondSignal(is->pictq_cond);
      SDL_UnlockMutex(is->pictq_mutex);
    }
  } else {
    schedule_refresh(is, 100);
  }
}

For now, this is a pretty simple function: when there is something on the queue, it pulls a picture from it, sets the timer for when the next video frame should be shown, calls video_display() to actually put the picture on the screen, then increments the read index and decreases the queue's size. You may notice that we don't actually do anything with vp in this function, and here's why: we will, later. We are going to use it to access timing information when we start synchronizing video to audio. See the comment that says "timing code goes here"? In that spot we will work out how soon the next video frame should be shown and pass that value to schedule_refresh(). For now we just put in a dummy value of 80. Technically, you could guess and check this value and recompile the program for every movie you watch, but 1) it would drift after a while and 2) it's quite silly. We will come back to this later.
