This article was reproduced from: http://blog.csdn.net/gubenpeiyuan/article/details/19548019?utm_source=tuicool
This article covers the following:

It introduces how to decode an H.264 stream with the well-known open-source audio/video codec library FFmpeg, describing the H.264 stream input process, the decoding principle, and the decoding workflow in detail. In most applications, displaying the video at its original stream size is not ideal, so developers need not only to decode the video stream but also to scale the decoded images for display in different forms.

In summary, this article explains how to decode H.264 with FFmpeg and illustrates how to scale the video stream with swscale.
The development environment used in this article is Ubuntu 12.04. Contact e-mail: leoluopy@gmail.com. When reprinting, please credit the source (CSDN).

FFmpeg introduction:
FFmpeg is a free, open-source, cross-platform audio and video streaming solution, licensed under the LGPL or GPL (depending on the components you choose). It provides a complete solution for recording, converting, and streaming audio and video. It contains the highly advanced audio/video codec library libavcodec; to ensure high portability and codec quality, many of libavcodec's codecs were developed from scratch.
Start decoding
Well, without further ado, let's go straight to the project and the code. (Note that when linking the project, the libraries must be referenced in a particular order because of their interdependencies; if the order is wrong, the project will not link.)
Libraries that need to be linked (Visual Studio code is as follows):

#pragma comment (lib, "..\\FFMPEG_lib\\avformat.lib")
#pragma comment (lib, "..\\FFMPEG_lib\\avutil.lib")
#pragma comment (lib, "..\\FFMPEG_lib\\swscale.lib")
#pragma comment (lib, "..\\FFMPEG_lib\\avcodec.lib")
#pragma comment (lib, "..\\FFMPEG_lib\\avdevice.lib")
#pragma comment (lib, "..\\FFMPEG_lib\\avfilter.lib")
Required header files:

#include "libavcodec/avcodec.h"
#include "libswscale/swscale.h"
Environment initialization code (see api-example.c; the FFmpeg version used on Ubuntu is 0.6):

avcodec_init();          /* must be called before any other libavcodec function */
avcodec_register_all();  /* register all codecs, parsers and bitstream filters;
                            the individual register functions can be used instead
                            to register only the formats you want to support */

AVCodec *codec;
AVCodecContext *c = NULL;
int frame, size, got_picture, len;
FILE *fin, *fout;
AVFrame *picture, *dst_picture;
uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE], *inbuf_ptr;
char buf[1024];

/* set end of buffer to 0 (this ensures that no overreading
   happens for damaged mpeg streams) */
memset(inbuf + INBUF_SIZE, 0, FF_INPUT_BUFFER_PADDING_SIZE);

printf("video decoding\n");

/* find the H.264 video decoder */
codec = avcodec_find_decoder(CODEC_ID_H264);
if (!codec) {
    fprintf(stderr, "codec not found\n");
    exit(1);
}

c = avcodec_alloc_context();
picture = avcodec_alloc_frame();

if (codec->capabilities & CODEC_CAP_TRUNCATED)
    c->flags |= CODEC_FLAG_TRUNCATED; /* we do not send complete frames */

/* for some codecs, such as msmpeg4 and mpeg4, width and height
   MUST be initialized here because this info is not available
   in the bitstream */

/* open it */
if (avcodec_open(c, codec) < 0) {
    fprintf(stderr, "could not open codec\n");
    exit(1);
}
avcodec_init and avcodec_register_all initialize the relevant decoders and allocate the space needed for decoding.

The other structures needed for decoding are AVCodecContext, AVCodec, and AVFrame.

AVCodecContext is the decoding context; it stores information such as width and height, the codec algorithm, the pixel format, and so on.

AVCodec is the codec you choose, indexed by an enumeration; after its space is allocated it is used together with the decoding functions.

AVFrame and AVPicture both store decoded bitmap information.
Decoding:
avcodec_decode_video requires as input the AVCodecContext, an AVFrame, the address of the data buffer, and the data length. An int pointer (got_picture) is also passed in to record whether a frame was decoded successfully.

The return value, len, records the number of bytes consumed by this decoding call.
len = avcodec_decode_video(c, picture, &got_picture, inbuf_ptr, size);

Note: do not clean up the context or the decoder during the decoding process; it can also be worthwhile to keep the byte-stream buffer, because an H.264 stream carries both PTS (presentation timestamps) and DTS (decoding timestamps). When the playback time and the decoding time differ, data that arrives first may need to be stored until its decoding time is reached.

At the same time, an H.264 stream is divided into I, P, and B frames, and only I-frames carry complete image information. If the decoding context is cleared after an I-frame has been decoded, subsequent decoding will keep returning errors until the next I-frame appears. The author has verified this by testing, and hopes that readers of this article will not take this detour when implementing their decoders.
With that, the decoding section is complete.

Scaling:
When using FFmpeg for image format conversion and image scaling, three functions from the swscale.h header are mainly used, namely:
struct SwsContext *sws_getContext(int srcW, int srcH, enum AVPixelFormat srcFormat,
                                  int dstW, int dstH, enum AVPixelFormat dstFormat,
                                  int flags, SwsFilter *srcFilter,
                                  SwsFilter *dstFilter, const double *param);

int sws_scale(struct SwsContext *c, const uint8_t *const srcSlice[],
              const int srcStride[], int srcSliceY, int srcSliceH,
              uint8_t *const dst[], const int dstStride[]);

void sws_freeContext(struct SwsContext *swsContext);
The sws_getContext function can be considered an initialization function; its parameters are:

int srcW, int srcH: the width and height of the source image data;

int dstW, int dstH: the width and height of the output image data;

enum AVPixelFormat srcFormat and dstFormat: the pixel formats of the input and output image data, e.g. PIX_FMT_YUV420P, PIX_FMT_RGB24;

int flags: the scaling algorithm, e.g. SWS_BICUBIC, SWS_BICUBLIN, SWS_POINT, SWS_SINC;

SwsFilter *srcFilter, SwsFilter *dstFilter, const double *param can be left alone and all set to NULL.

The sws_scale function is the execution function; its parameters are:

struct SwsContext *c: the value returned by sws_getContext;

const uint8_t *const srcSlice[], uint8_t *const dst[]: arrays of buffer pointers, one per color plane, for the input and output image data;

const int srcStride[], const int dstStride[]: arrays giving the number of bytes stored per row for each color plane of the input and output image data;

int srcSliceY: the row of the input image at which the scan starts, usually 0;

int srcSliceH: the number of rows to scan, usually the height of the input image.

The sws_freeContext function is the cleanup function; its parameter is the value returned by sws_getContext.
A practical example that wraps YUV420P scaling into a function is as follows:
int ScaleImg(AVCodecContext *pCodecCtx, AVFrame *src_picture,
             AVFrame *dst_picture, int nDstH, int nDstW)
{
    int i;
    int nSrcStride[3];
    int nDstStride[3];
    int nSrcH = pCodecCtx->height;
    int nSrcW = pCodecCtx->width;
    struct SwsContext *m_pSwsContext;

    uint8_t *pSrcBuff[3] = {src_picture->data[0],
                            src_picture->data[1],
                            src_picture->data[2]};

    nSrcStride[0] = nSrcW;
    nSrcStride[1] = nSrcW / 2;
    nSrcStride[2] = nSrcW / 2;

    dst_picture->linesize[0] = nDstW;
    dst_picture->linesize[1] = nDstW / 2;
    dst_picture->linesize[2] = nDstW / 2;

    printf("nSrcW %d\n", nSrcW);

    m_pSwsContext = sws_getContext(nSrcW, nSrcH, PIX_FMT_YUV420P,
                                   nDstW, nDstH, PIX_FMT_YUV420P,
                                   SWS_BICUBIC, NULL, NULL, NULL);
    if (NULL == m_pSwsContext) {
        printf("ffmpeg get context error!\n");
        exit(-1);
    }

    sws_scale(m_pSwsContext, src_picture->data, src_picture->linesize,
              0, pCodecCtx->height, dst_picture->data, dst_picture->linesize);

    printf("line0: %d line1: %d line2: %d\n",
           dst_picture->linesize[0], dst_picture->linesize[1],
           dst_picture->linesize[2]);

    sws_freeContext(m_pSwsContext);

    return 1;
}
The function is simple: initialize the context pointer, scale, and free the context afterwards. Readers of this article can copy this function and use it directly. If in doubt, you can leave a comment or e-mail: leoluopy@gmail.com
RGB scaling can be done as follows:

int ScaleYUVImgToRGB(int nSrcW, int nSrcH, uint8_t *src_data[],
                     int *linesize, int nDstW, int nDstH)
{
    int i;
    int ret;
    FILE *nRGB_file;
    AVFrame *nDst_picture;
    struct SwsContext *m_pSwsContext;

    nDst_picture = avcodec_alloc_frame();
    if (!nDst_picture) {
        printf("nDst_picture avcodec_alloc_frame failed\n");
        exit(1);
    }
    if (avpicture_alloc((AVPicture *)nDst_picture, PIX_FMT_RGB24,
                        nDstW, nDstH) < 0) {
        printf("dst_picture avpicture_alloc failed\n");
        exit(1);
    }

    m_pSwsContext = sws_getContext(nSrcW, nSrcH, PIX_FMT_YUV420P,
                                   nDstW, nDstH, PIX_FMT_RGB24,
                                   SWS_BICUBIC, NULL, NULL, NULL);
    if (NULL == m_pSwsContext) {
        printf("ffmpeg get context error!\n");
        exit(-1);
    }

    ret = sws_scale(m_pSwsContext, src_data, linesize, 0, nSrcH,
                    nDst_picture->data, nDst_picture->linesize);

    nRGB_file = fopen("..\\YUV_STREAM\\RGBFile.rgb", "ab+");
    fwrite(nDst_picture->data[0], nDstW * nDstH * 3, 1, nRGB_file);
    fclose(nRGB_file);

    sws_freeContext(m_pSwsContext);
    avpicture_free((AVPicture *)nDst_picture);

    return 0;
}
The parameters src_data and linesize refer to the YUV plane pointers and strides of the source frame (e.g. src_picture->data and src_picture->linesize).
At the same time, if you do not want to use the AVPicture structure, you can refer to the following (note that for a different image type, linesize must be set correctly):

char *H264Decoder_C::ScaleYUVImgToRGB(int nSrcW, int nSrcH, uint8_t **src_data,
                                      int *linesize, int nDstW, int nDstH)
{
    int i;
    int ret;
    FILE *nRGB_file;
    struct SwsContext *m_pSwsContext;

    char *out_img[3];
    int out_linesize[3];
    out_linesize[0] = 2 * nDstW;   /* RGB565: 2 bytes per pixel */
    //out_linesize[1] = nDstW;
    //out_linesize[2] = nDstW;
    out_img[0] = g_RGBImg;

    m_pSwsContext = sws_getContext(nSrcW, nSrcH, PIX_FMT_YUV420P,
                                   nDstW, nDstH, PIX_FMT_RGB565,
                                   SWS_BICUBIC, NULL, NULL, NULL);
    if (NULL == m_pSwsContext)