Project address (give it a star):
https://github.com/979451341/Audio-and-video-learning-materials/tree/master/FFmpeg(mp4%e8%bd%acyuv%ef%bc%89
This time we decode an MP4 file into a YUV file. First, a quick introduction to YUV.
YUV is a family of pixel formats in which the luminance component (Y) and the chrominance components (U, V) are stored separately. The advantage of this separation is not only that the components don't interfere with each other, but also that the chroma sampling rate can be reduced without noticeably hurting image quality. "YUV" is a fairly general term; depending on the exact sample arrangement, it breaks down into a number of concrete formats.
Now for the main course: how to decode an MP4 file into a YUV file.
First, in Java, call the native decode function through the NDK, passing the path of the input MP4 file and the path of the output YUV file:
decode(inputUrl, outputUrl);
Next comes the actual explanation of what the relevant FFmpeg functions do.
Let's start with a picture to sort out the order of the calls.
Then look at the comments in the code. I can't explain every function in detail (nor would I ever finish if I tried); the emphasis is on getting familiar with the overall flow.
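Since the original diagram is not reproduced here, the order of the calls can be summarized roughly like this (a sketch of the flow below, not the complete source):

```
av_register_all()                // register all muxers/demuxers/codecs
avformat_network_init()          // optional for local files
avformat_alloc_context()
avformat_open_input()            // open the MP4 file
avformat_find_stream_info()
(find the video stream index)
avcodec_find_decoder() -> avcodec_open2()
sws_getContext()                 // prepare pixel-format conversion
loop: av_read_frame() -> avcodec_decode_video2() -> sws_scale() -> fwrite()
(free resources)
```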
1. Register all components
av_register_all();
Initialize networking (needed for network streams)
avformat_network_init();
The format context is the global structure that holds information about the container format of the video file
pFormatCtx = avformat_alloc_context();
2. Open the input video file
if (avformat_open_input(&pFormatCtx, input_str, NULL, NULL) != 0) {
    LOGE("Couldn't open input stream.\n");
    return -1;
}
3. Get the video file's stream information
if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
    LOGE("Couldn't find stream information.\n");
    return -1;
}
Get the index of the video stream: traverse all the streams (audio, video, subtitle) and find the video one.
videoindex = -1;
for (i = 0; i < pFormatCtx->nb_streams; i++)
    if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
        videoindex = i;
        break;
    }
if (videoindex == -1) {
    LOGE("Couldn't find a video stream.\n");
    return -1;
}
Only by knowing how the video is encoded can we pick the matching decoder, so get the codec context from the video stream:
pCodecCtx = pFormatCtx->streams[videoindex]->codec;
4. Find the decoder that matches the codec ID in the codec context, then open it
pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
if (pCodec == NULL) {
    LOGE("Couldn't find codec.\n");
    return -1;
}
if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
    LOGE("Couldn't open codec.\n");
    return -1;
}
pFrame = av_frame_alloc();
pFrameYUV = av_frame_alloc();
out_buffer = (unsigned char *) av_malloc(av_image_get_buffer_size(AV_PIX_FMT_YUV420P,
        pCodecCtx->width, pCodecCtx->height, 1));
av_image_fill_arrays(pFrameYUV->data, pFrameYUV->linesize, out_buffer,
        AV_PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height, 1);
// Prepare to read
// AVPacket stores one frame of compressed data (e.g. H.264)
// Allocate space for the buffer
packet = (AVPacket *) av_malloc(sizeof(AVPacket));
5. Set the parameters for conversion (scaling): width/height before conversion, width/height after conversion, pixel formats, and so on
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
        pCodecCtx->width, pCodecCtx->height, AV_PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);
Output the video information:
sprintf(info, "[Input] %s\n", input_str);
sprintf(info, "%s[Output] %s\n", info, output_str);
sprintf(info, "%s[Format] %s\n", info, pFormatCtx->iformat->name);
sprintf(info, "%s[Codec] %s\n", info, pCodecCtx->codec->name);
sprintf(info, "%s[Resolution] %dx%d\n", info, pCodecCtx->width, pCodecCtx->height);
fp_yuv = fopen(output_str, "wb+");
if (fp_yuv == NULL) {
    printf("Cannot open output file.\n");
    return -1;
}
frame_cnt = 0;
time_start = clock();
6. Read the compressed data one frame at a time
while (av_read_frame(pFormatCtx, packet) >= 0) {
Only handle the packet if it is compressed video data (judged by the stream index)
if (packet->stream_index == videoindex) {
7. Decode one frame of compressed video data into video pixel data
ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
if (ret < 0) {
    LOGE("Decode error.\n");
    return -1;
}
A non-zero got_picture means a frame was successfully decoded; zero means no frame was output
if (got_picture) {
Convert the AVFrame to pixel format YUV420 at the given width/height:
parameters 2 and 6 are the input and output data;
parameters 3 and 7 are the per-line sizes of the input and output (an AVFrame is converted line by line);
parameter 4 is the first row of the input from which to start converting (0 here);
parameter 5 is the height of the input picture.
sws_scale(img_convert_ctx, (const uint8_t *const *) pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
        pFrameYUV->data, pFrameYUV->linesize);
Output to the YUV file: write the AVFrame's pixel data.
data holds the decoded image pixel data (or, for audio, the decoded sample data).
Y is luminance; U and V are (compressed) chrominance. The human eye is more sensitive to luminance,
so the number of U samples and the number of V samples are each 1/4 of the number of Y samples.
y_size = pCodecCtx->width * pCodecCtx->height;
fwrite(pFrameYUV->data[0], 1, y_size, fp_yuv);     // Y
fwrite(pFrameYUV->data[1], 1, y_size / 4, fp_yuv); // U
fwrite(pFrameYUV->data[2], 1, y_size / 4, fp_yuv); // V
Output the frame info:
char pictype_str[10] = {0};
switch (pFrame->pict_type) {
    case AV_PICTURE_TYPE_I: sprintf(pictype_str, "I"); break;
    case AV_PICTURE_TYPE_P: sprintf(pictype_str, "P"); break;
    case AV_PICTURE_TYPE_B: sprintf(pictype_str, "B"); break;
    default: sprintf(pictype_str, "Other"); break;
}
LOGI("Frame Index: %5d. Type: %s", frame_cnt, pictype_str);
frame_cnt++;
}
}
Free the packet's resources
av_free_packet(packet);
}
This post only covers decoding the video; the audio is not decoded here, but the process is similar and even simpler.
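For comparison, the audio path under the same (old) API mainly swaps in a few calls. This is just pseudocode from my understanding, not code from the project:

```
avcodec_find_decoder(audio stream's codec_id)
avcodec_open2(...)
while av_read_frame(...):
    if packet->stream_index == audioindex:
        avcodec_decode_audio4(pCodecCtx, pFrame, &got_frame, packet)
        swr_convert(...)          // resample to the target PCM layout
        fwrite(PCM samples to the output file)
```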
Reference article:
https://www.cnblogs.com/CoderTian/p/6791638.html
Android Audio and Video In Depth, Part 9: FFmpeg decoding video to a YUV file (with source download)