Android Audio and Video Deep Dive, Part 18: Playing Video with Sound via FFmpeg (source code download)
Project address
https://github.com/979451341/AudioVideoStudyCodeTwo/tree/master/FFmpegv%E6%92%AD%E6%94%BE%E8%A7%86%E9%A2%91%E6%9C%89%E5%A3%B0%E9%9F%B3%EF%BC%8C%E6%9A%82%E5%81%9C%EF%BC%8C%E9%87%8A%E6%94%BE%E3%80%81%E5%BF%AB%E8%BF%9B%E3%80%81%E9%80%80%E5%90%8E
This project was written by Jianshu user 2012lc. It plays video fine; it's just that the other features aren't finished yet... ah, and I can't write a good player myself either.
Back to the topic.
First, this code follows the producer-consumer pattern. The producer keeps reading the mp4 and hands each frame of data to the consumers; the consumers are the audio and video playback classes. In other words there is one producer and two consumers, each running in its own thread started via pthread and kept in step through a mutex lock and condition variable.
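For orientation, here is a minimal sketch of that skeleton, assuming each consumer owns a std::vector<AVPacket *> queue guarded by a pthread mutex and condition variable (the names are illustrative, not the project's exact ones):

#include <pthread.h>
#include <vector>
extern "C" {
#include <libavcodec/avcodec.h>
}

std::vector<AVPacket *> queue;                       // shared packet queue
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

// Producer side: push a copy of the packet and wake a waiting consumer
void put(AVPacket *packet) {
    AVPacket *copy = (AVPacket *) av_mallocz(sizeof(AVPacket));
    av_packet_ref(copy, packet);                     // take our own reference
    pthread_mutex_lock(&mutex);
    queue.push_back(copy);
    pthread_cond_signal(&cond);                      // wake the consumer
    pthread_mutex_unlock(&mutex);
}

// Consumer side: block until a packet is available, then pop it
AVPacket *take() {
    pthread_mutex_lock(&mutex);
    while (queue.empty()) {
        pthread_cond_wait(&cond, &mutex);            // releases the mutex while waiting
    }
    AVPacket *packet = queue.front();
    queue.erase(queue.begin());
    pthread_mutex_unlock(&mutex);
    return packet;
}

This is also why the producer can call av_packet_unref right after put in the read loop further below: the queue holds its own reference to the packet data.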
1. The producer: handing out frame data
The first step is to initialize the components, verify that the video file can be opened, and fetch the stream information that later code will need.
void init() {
    LOGE("Start the decoding thread")
    // 1. Register all components
    av_register_all();
    avformat_network_init();
    // Allocate the format context
    pFormatCtx = avformat_alloc_context();
    // 2. Open the input video file
    if (avformat_open_input(&pFormatCtx, inputPath, NULL, NULL) != 0) {
        LOGE("%s", "Failed to open the input video file");
    }
    // 3. Get the stream information
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
        LOGE("%s", "Failed to get the stream information");
    }
    // Get the total duration
    if (pFormatCtx->duration != AV_NOPTS_VALUE) {
        duration = pFormatCtx->duration;  // microseconds
    }
}
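Note that pFormatCtx->duration is expressed in AV_TIME_BASE units, i.e. microseconds: a value of 120000000 means a 120-second file, and dividing by AV_TIME_BASE gives the length in seconds.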
Initialize the audio and video classes and hand the SurfaceView to the video class.
ffmpegVideo = new FFmpegVideo;
ffmpegMusic = new FFmpegMusic;
ffmpegVideo->setPlayCall(call_video_play);
Start the producer thread
pthread_create(&p_tid, NULL, begin, NULL);  // start the begin thread
From the stream information, find the video stream and the audio stream, copy a decoder context into each of the two consumer classes, and give each class its stream index and time base.
// Find the video stream and the audio stream
for (int i = 0; i < pFormatCtx->nb_streams; ++i) {
    // Get the decoder
    AVCodecContext *avCodecContext = pFormatCtx->streams[i]->codec;
    AVCodec *avCodec = avcodec_find_decoder(avCodecContext->codec_id);
    // Copy the decoder context
    AVCodecContext *codecContext = avcodec_alloc_context3(avCodec);
    avcodec_copy_context(codecContext, avCodecContext);
    if (avcodec_open2(codecContext, avCodec, NULL) < 0) {
        LOGE("Failed to open the decoder")
        continue;
    }
    // If it is the video stream
    if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
        ffmpegVideo->index = i;
        ffmpegVideo->setAvCodecContext(codecContext);
        ffmpegVideo->time_base = pFormatCtx->streams[i]->time_base;
        if (window) {
            ANativeWindow_setBuffersGeometry(window, ffmpegVideo->codec->width,
                                             ffmpegVideo->codec->height,
                                             WINDOW_FORMAT_RGBA_8888);
        }
    }
    // If it is the audio stream
    else if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
        ffmpegMusic->index = i;
        ffmpegMusic->setAvCodecContext(codecContext);
        ffmpegMusic->time_base = pFormatCtx->streams[i]->time_base;
    }
}
Start the two consumer threads
ffmpegVideo->setFFmepegMusic(ffmpegMusic);
ffmpegMusic->play();
ffmpegVideo->play();
Then read the file packet by packet and push each packet into the vector that the corresponding consumer class uses to store its data. Once the whole file has been read, playback is not finished as long as either vector still holds data, so the thread waits for the queues to drain.
while (isPlay) {
    ret = av_read_frame(pFormatCtx, packet);
    if (ret == 0) {
        if (ffmpegVideo && ffmpegVideo->isPlay && packet->stream_index == ffmpegVideo->index) {
            // Push the video packet into the queue
            ffmpegVideo->put(packet);
        } else if (ffmpegMusic && ffmpegMusic->isPlay && packet->stream_index == ffmpegMusic->index) {
            ffmpegMusic->put(packet);
        }
        av_packet_unref(packet);
    } else if (ret == AVERROR_EOF) {
        // Reading is finished, but playback may not be
        while (isPlay) {
            if (ffmpegVideo->queue.empty() && ffmpegMusic->queue.empty()) {
                break;
            }
            // LOGE("Waiting for playback to finish");
            av_usleep(10000);
        }
    }
}
After playback ends, stop the two consumer threads and release the resources.
isPlay = 0;
if (ffmpegMusic && ffmpegMusic->isPlay) {
    ffmpegMusic->stop();
}
if (ffmpegVideo && ffmpegVideo->isPlay) {
    ffmpegVideo->stop();
}
// Release resources
av_free_packet(packet);
avformat_free_context(pFormatCtx);
pthread_exit(0);
2. The consumer: the audio class
Start the thread
pthread_create(&playId, NULL, MusicPlay, this);  // start the playback thread
The next step is to configure OpenSL ES to play the audio. Where that audio data comes from is determined by this line:
(*bqPlayerBufferQueue)->RegisterCallback(bqPlayerBufferQueue, bqPlayerCallback, this);
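Two details worth knowing here: every buffer-queue callback must match a fixed signature, and the callback only fires after a previously enqueued buffer finishes playing, so the player has to prime the pump once by hand. A sketch of how that typically looks (bqPlayerPlay is the SLPlayItf obtained from the same player object; this is the standard OpenSL ES pattern, not code quoted from the project):

// Prototype every SLAndroidSimpleBufferQueueItf callback must match
void bqPlayerCallback(SLAndroidSimpleBufferQueueItf bq, void *context);

// Start playback, then call the callback once by hand: it enqueues the
// first buffer, and each completed buffer then fires the callback again,
// keeping the chain running.
(*bqPlayerPlay)->SetPlayState(bqPlayerPlay, SL_PLAYSTATE_PLAYING);
bqPlayerCallback(bqPlayerBufferQueue, this);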
Let's take a look at bqPlayerCallback; it pulls its data from the getPcm function.
FFmpegMusic *musicplay = (FFmpegMusic *) context;
int datasize = getPcm(musicplay);
if (datasize > 0) {
    // Playback time of this buffer = bytes / (sample rate * channels * bytes per sample)
    // (use floating-point math here; integer division would truncate to 0)
    double time = datasize / (44100.0 * 2 * 2);
    musicplay->clock = time + musicplay->clock;
    LOGE("Current frame duration %f, playback clock %f", time, musicplay->clock);
    (*bq)->Enqueue(bq, musicplay->out_buffer, datasize);
    LOGE("play %d", musicplay->queue.size());
}
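As a sanity check on that formula: at 44100 Hz, 2 channels, and 2 bytes per sample the stream consumes 44100 × 2 × 2 = 176400 bytes per second, so a 7056-byte buffer represents 7056 / 176400 = 0.04 s of sound, and the audio clock advances by exactly that amount on each callback.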
Then, inside getPcm, the get function is used to fetch one packet of data.
agrs->get(avPacket);
If the vector holds data, a packet is taken out of it; if not, the thread waits on the condition variable until the producer signals that data has arrived.
// Pop a packet off the queue
int FFmpegMusic::get(AVPacket *avPacket) {
    LOGE("Take from the queue")
    pthread_mutex_lock(&mutex);
    while (isPlay) {
        LOGE("Take from the queue xxxxxx")
        if (!queue.empty() && isPause) {
            LOGE("ispause %d", isPause);
            // If there is data in the queue, take it out
            if (av_packet_ref(avPacket, queue.front())) {
                break;
            }
            // On success, pop the front of the queue and destroy that packet
            AVPacket *packet2 = queue.front();
            queue.erase(queue.begin());
            av_free(packet2);
            break;
        } else {
            LOGE("Audio thread waiting")
            LOGE("ispause %d", isPause);
            pthread_cond_wait(&cond, &mutex);
        }
    }
    pthread_mutex_unlock(&mutex);
    return 0;
}
Note that what we get back is an AVPacket; it still has to be decoded into an AVFrame.
if (avPacket->pts != AV_NOPTS_VALUE) {
    agrs->clock = av_q2d(agrs->time_base) * avPacket->pts;
}
// Decode the mp3-encoded frame into a pcm frame
LOGE("Decoding")
avcodec_decode_audio4(agrs->codec, avFrame, &gotframe, avPacket);
if (gotframe) {
    swr_convert(agrs->swrContext, &agrs->out_buffer, 44100 * 2,
                (const uint8_t **) avFrame->data, avFrame->nb_samples);
    // Work out the buffer size
    size = av_samples_get_buffer_size(NULL, agrs->out_channer_nb,
                                      avFrame->nb_samples, AV_SAMPLE_FMT_S16, 1);
    break;
}
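The swrContext used by swr_convert is created once during setup, which the excerpt does not show. Here is a minimal sketch of a resampler matching the 44.1 kHz stereo 16-bit output assumed above (field names follow the article's class; the exact setup in the project may differ):

// Assumed one-time setup, e.g. inside setAvCodecContext():
swrContext = swr_alloc_set_opts(NULL,
        AV_CH_LAYOUT_STEREO,        // output: stereo
        AV_SAMPLE_FMT_S16,          // output: 16-bit interleaved pcm
        44100,                      // output: 44.1 kHz
        codec->channel_layout,      // input: whatever the stream uses
        codec->sample_fmt,
        codec->sample_rate,
        0, NULL);
swr_init(swrContext);
out_channer_nb = av_get_channel_layout_nb_channels(AV_CH_LAYOUT_STEREO);
out_buffer = (uint8_t *) av_malloc(44100 * 2 * 2);  // one second of output pcm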
Back in the OpenSL ES callback, the data is enqueued into the player so it can be heard.
// Playback time of this buffer = bytes / (sample rate * channels * bytes per sample)
double time = datasize / (44100.0 * 2 * 2);
musicplay->clock = time + musicplay->clock;
LOGE("Current frame duration %f, playback clock %f", time, musicplay->clock);
(*bq)->Enqueue(bq, musicplay->out_buffer, datasize);
LOGE("play %d", musicplay->queue.size());
3. The consumer: the video class
The video class works much the same way as the audio class, so I will only skim it here.
Start the thread
// Allocate an AVFrame to hold the decoded raw frame
AVFrame *frame = av_frame_alloc();
// Allocate an AVFrame to hold the frame after conversion to rgba
AVFrame *rgb_frame = av_frame_alloc();
AVPacket *packet = (AVPacket *) av_mallocz(sizeof(AVPacket));
// Output FILE
// FILE *fp = fopen(outputPath, "wb");
// Buffer for the rgba data
uint8_t *out_buffer = (uint8_t *) av_mallocz(
        avpicture_get_size(AV_PIX_FMT_RGBA,
                           ffmpegVideo->codec->width,
                           ffmpegVideo->codec->height));
// Bind the buffer to rgb_frame
avpicture_fill((AVPicture *) rgb_frame, out_buffer, AV_PIX_FMT_RGBA,
               ffmpegVideo->codec->width, ffmpegVideo->codec->height);
LOGE("Converting to rgba format")
ffmpegVideo->swsContext = sws_getContext(ffmpegVideo->codec->width,
                                         ffmpegVideo->codec->height,
                                         ffmpegVideo->codec->pix_fmt,
                                         ffmpegVideo->codec->width,
                                         ffmpegVideo->codec->height,
                                         AV_PIX_FMT_RGBA, SWS_BICUBIC,
                                         NULL, NULL, NULL);
Get a frame of data
ffmpegVideo->get(packet);
This again takes the data out of the vector, just like the audio class's get.
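Putting the pieces together, the body of the video thread presumably looks something like this (a sketch built from the variables allocated above, not code quoted from the project):

int got_frame;
while (isPlay) {
    // Blocks until the producer has supplied a packet
    ffmpegVideo->get(packet);
    avcodec_decode_video2(ffmpegVideo->codec, frame, &got_frame, packet);
    if (got_frame) {
        // Convert the decoded frame from its native pixel format to rgba
        sws_scale(ffmpegVideo->swsContext,
                  (const uint8_t *const *) frame->data, frame->linesize,
                  0, ffmpegVideo->codec->height,
                  rgb_frame->data, rgb_frame->linesize);
        // ...A/V sync delay goes here (next snippet)...
        video_call(rgb_frame);  // hand the rgba frame to the renderer
    }
    av_packet_unref(packet);
}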
Adjust the frame delay so the video stays in sync with the audio
diff = ffmpegVideo->clock - audio_clock;
// If the video lags, speed up; if it leads, slow down
sync_threshold = (delay > 0.01 ? 0.01 : delay);
if (fabs(diff) < 10) {
    if (diff <= -sync_threshold) {
        delay = 0;
    } else if (diff >= sync_threshold) {
        delay = 2 * delay;
    }
}
start_time += delay;
actual_delay = start_time - av_gettime() / 1000000.0;
if (actual_delay < 0.01) {
    actual_delay = 0.01;
}
av_usleep(actual_delay * 1000000.0 + 6000);
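To make the logic concrete: suppose the nominal per-frame delay is 40 ms, so sync_threshold clamps to 0.01 s. If the video clock trails the audio clock by 0.05 s (diff <= -0.01), delay becomes 0 and the frame is shown immediately so the picture catches up; if the video leads by 0.05 s, delay doubles to 80 ms so the sound catches up. The final av_usleep then sleeps only for the part of that delay not already consumed by decoding, and never less than 10 ms.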
Play video
video_call(rgb_frame);
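video_call ultimately draws the rgba frame into the ANativeWindow configured with ANativeWindow_setBuffersGeometry earlier. A minimal sketch of that native rendering path (assumed, not quoted from the project):

#include <android/native_window.h>
#include <string.h>

void draw_frame(ANativeWindow *window, AVFrame *rgb_frame, int width, int height) {
    ANativeWindow_Buffer windowBuffer;
    if (ANativeWindow_lock(window, &windowBuffer, NULL) < 0) {
        return;  // surface not ready
    }
    uint8_t *dst = (uint8_t *) windowBuffer.bits;
    int dstStride = windowBuffer.stride * 4;  // stride is in pixels; rgba = 4 bytes
    uint8_t *src = rgb_frame->data[0];
    int srcStride = rgb_frame->linesize[0];
    // Copy row by row: the window stride and the frame stride usually differ
    for (int h = 0; h < height; h++) {
        memcpy(dst + h * dstStride, src + h * srcStride, (size_t) width * 4);
    }
    ANativeWindow_unlockAndPost(window);
}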
Release resources and exit the thread
LOGE("free packet"); av_free(packet); LOGE("free packet ok"); LOGE("free packet"); av_frame_free(&frame); av_frame_free(&rgb_frame); sws_freeContext(ffmpegVideo->swsContext); size_t size = ffmpegVideo->queue.size(); for (int i = 0; i < size; ++i) { AVPacket *pkt = ffmpegVideo->queue.front(); av_free(pkt); ffmpegVideo->queue.erase(ffmpegVideo->queue.begin()); } LOGE("VIDEO EXIT"); pthread_exit(0);
That's it. Next, I'll try to write a player of my own.