FFmpeg Hands-On Tutorial (i): Decoding MP4, MKV and Other Formats to H.264 and YUV Data


FFmpeg has very powerful features, including video capture, video format conversion, video cropping, video watermarking, and more. Most material on the Internet covers these features only through the command line, which does not help much if you want to study FFmpeg in depth or customize it. I will therefore write a series of tutorials that implement these features in code. The first parts of this series are implemented on Windows; later parts will port the code to Android.
Implementing this in code presupposes some familiarity with the FFmpeg source. If you are not familiar with it, see:
A Brief Analysis of the FFmpeg Source Code (i): Structure Overview

Now to the topic: using FFmpeg to decode MP4, MKV and other formats into H.264 and YUV data and write them to files.

First, the result of a run: two files are generated from the decoded data.
The H.264 file is much smaller than the YUV file; H.264 compression really is impressive. The next-generation codec, H.265, will be introduced in a later article.
First the overall flow is described; then the complete source code is given.

1. Copy a video named ws.mp4 into the project directory, then create the two output files for the decoded data:

    char filepath[] = "ws.mp4";

    FILE *fp_yuv = fopen("output.yuv", "wb+");
    FILE *fp_h264 = fopen("output.h264", "wb+");

2. Initialize the components:

    av_register_all();                     // register all components
    avformat_network_init();               // initialize networking
    pFormatCtx = avformat_alloc_context(); // allocate an AVFormatContext

3. Open the video file, read the stream information, and find the decoder:

    avformat_open_input(&pFormatCtx, filepath, NULL, NULL);

    avformat_find_stream_info(pFormatCtx, NULL);

    avcodec_find_decoder(pCodecCtx->codec_id);

4. Open the decoder and start decoding

    avcodec_open2(pCodecCtx, pCodec, NULL);

    avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);

Note: when the av_read_frame() loop exits, the decoder may still hold a few buffered frames.
These remaining frames must be drained by "flushing the decoder": keep calling avcodec_decode_video2() to fetch AVFrames without feeding new AVPackets to the decoder, until no more pictures come out. The code is as follows:

    while (1) {
        ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
        if (ret < 0)
            break;
        if (!got_picture)
            break;
        sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
            pFrameYUV->data, pFrameYUV->linesize);

        int y_size = pCodecCtx->width * pCodecCtx->height;
        fwrite(pFrameYUV->data[0], 1, y_size, fp_yuv);     // Y
        fwrite(pFrameYUV->data[1], 1, y_size / 4, fp_yuv); // U
        fwrite(pFrameYUV->data[2], 1, y_size / 4, fp_yuv); // V

        printf("Flush decoder: succeed to decode 1 frame!\n");
    }
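As an aside (not part of the original tutorial): avcodec_decode_video2() is deprecated since FFmpeg 3.1 in favour of the send/receive API, where flushing is done by sending a NULL packet to enter draining mode. A sketch, reusing the tutorial's pCodecCtx and pFrame variables, not a drop-in replacement for the loop above:

```c
/* Draining sketch for the send/receive API (FFmpeg >= 3.1).
   Assumes the same pCodecCtx and pFrame as in the tutorial code. */
avcodec_send_packet(pCodecCtx, NULL);  /* NULL packet enters draining mode */
while (avcodec_receive_frame(pCodecCtx, pFrame) == 0) {
    /* convert with sws_scale() and fwrite() the planes, as above */
}
```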

After running the project, you can see the two files produced from the decoded data.
The complete source code is as follows:

    #include <stdio.h>

    #define __STDC_CONSTANT_MACROS

    #ifdef _WIN32
    // Windows
    extern "C"
    {
    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
    #include "libswscale/swscale.h"
    };
    #else
    // Linux
    #ifdef __cplusplus
    extern "C"
    {
    #endif
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libswscale/swscale.h>
    #ifdef __cplusplus
    };
    #endif
    #endif

    int main(int argc, char* argv[])
    {
        AVFormatContext *pFormatCtx;
        int i, videoindex;
        AVCodecContext *pCodecCtx;
        AVCodec *pCodec;
        AVFrame *pFrame, *pFrameYUV;
        uint8_t *out_buffer;
        AVPacket *packet;
        int y_size;
        int ret, got_picture;

        struct SwsContext *img_convert_ctx;

        char filepath[] = "ws.mp4";

        FILE *fp_yuv = fopen("output.yuv", "wb+");
        FILE *fp_h264 = fopen("output.h264", "wb+");

        av_register_all();                     // register all components
        avformat_network_init();               // initialize networking
        pFormatCtx = avformat_alloc_context(); // allocate an AVFormatContext

        if (avformat_open_input(&pFormatCtx, filepath, NULL, NULL) != 0) { // open the input video file
            printf("Couldn't open input stream.\n");
            return -1;
        }
        if (avformat_find_stream_info(pFormatCtx, NULL) < 0) { // read the video file information
            printf("Couldn't find stream information.\n");
            return -1;
        }
        videoindex = -1;
        for (i = 0; i < pFormatCtx->nb_streams; i++)
            if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
                videoindex = i;
                break;
            }
        if (videoindex == -1) {
            printf("Didn't find a video stream.\n");
            return -1;
        }

        pCodecCtx = pFormatCtx->streams[videoindex]->codec;
        pCodec = avcodec_find_decoder(pCodecCtx->codec_id); // find the decoder
        if (pCodec == NULL) {
            printf("Codec not found.\n");
            return -1;
        }
        if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) { // open the decoder
            printf("Could not open codec.\n");
            return -1;
        }

        pFrame = av_frame_alloc();
        pFrameYUV = av_frame_alloc();
        out_buffer = (uint8_t *)av_malloc(avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height));
        avpicture_fill((AVPicture *)pFrameYUV, out_buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
        packet = (AVPacket *)av_malloc(sizeof(AVPacket));

        // Output Info -----------------------------
        printf("--------------- File Information ----------------\n");
        av_dump_format(pFormatCtx, 0, filepath, 0);
        printf("-------------------------------------------------\n");

        img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
            pCodecCtx->width, pCodecCtx->height, PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);

        while (av_read_frame(pFormatCtx, packet) >= 0) { // read one packet of compressed data
            if (packet->stream_index == videoindex) {
                fwrite(packet->data, 1, packet->size, fp_h264); // write the H.264 data to fp_h264
                ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet); // decode one frame
                if (ret < 0) {
                    printf("Decode Error.\n");
                    return -1;
                }
                if (got_picture) {
                    sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
                        pFrameYUV->data, pFrameYUV->linesize);

                    y_size = pCodecCtx->width * pCodecCtx->height;
                    fwrite(pFrameYUV->data[0], 1, y_size, fp_yuv);     // Y
                    fwrite(pFrameYUV->data[1], 1, y_size / 4, fp_yuv); // U
                    fwrite(pFrameYUV->data[2], 1, y_size / 4, fp_yuv); // V
                    printf("Succeed to decode 1 frame!\n");
                }
            }
            av_free_packet(packet);
        }

        // Flush decoder
        /* When the av_read_frame() loop exits, the decoder may still hold a few
           remaining frames of data. They are drained by calling
           avcodec_decode_video2() to fetch AVFrames without feeding new
           AVPackets to the decoder. */
        while (1) {
            ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
            if (ret < 0)
                break;
            if (!got_picture)
                break;
            sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
                pFrameYUV->data, pFrameYUV->linesize);

            int y_size = pCodecCtx->width * pCodecCtx->height;
            fwrite(pFrameYUV->data[0], 1, y_size, fp_yuv);     // Y
            fwrite(pFrameYUV->data[1], 1, y_size / 4, fp_yuv); // U
            fwrite(pFrameYUV->data[2], 1, y_size / 4, fp_yuv); // V
            printf("Flush decoder: succeed to decode 1 frame!\n");
        }

        sws_freeContext(img_convert_ctx);

        // close the files and free the memory
        fclose(fp_yuv);
        fclose(fp_h264);

        av_frame_free(&pFrameYUV);
        av_frame_free(&pFrame);
        avcodec_close(pCodecCtx);
        avformat_close_input(&pFormatCtx);

        return 0;
    }
Compile and Run

1. VC++: after configuring the FFmpeg environment under VC++, copy the code into a source file in the project. (Search the web for how to set up the environment.)

2. MinGW: Android developers may not have VC++ installed, so here is a way to compile with MinGW.
We previously used MinGW to compile FFmpeg itself: http://blog.csdn.net/king1425/article/details/70338674
Execute the following command under MinGW:

    g++ ffmpeg_decoder.cpp -g -o ffmpeg_decoder.exe \
        -I/usr/local/include -L/usr/local/lib \
        -lmingw32 -lSDL2main -lSDL2 -lavformat -lavcodec -lavutil -lswscale

Note: MinGW must be configured before executing the command.
(1) Download the latest shared and dev builds of FFmpeg from the FFmpeg Windows builds site (http://ffmpeg.zeranoe.com/).

(2) Create a "local" folder under the MSYS installation directory, and "include" and "lib" folders under that "local" folder.

(3) Copy the include directory from the dev build of FFmpeg to {msys}/local/include, and the lib directory to {msys}/local/lib.

(4) Copy the DLLs from the shared build of FFmpeg to {mingw}/bin.

3. gcc: run the following command on the Linux or macOS command line:

    gcc ffmpeg_decoder.cpp -g -o ffmpeg_decoder.out \
        -I/usr/local/include -L/usr/local/lib \
        -lSDL2main -lSDL2 -lavformat -lavcodec -lavutil -lswscale
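If FFmpeg was installed with its pkg-config files (the default for most package managers and for `make install`), the include and library flags can be generated instead of hard-coded. This is a sketch, not from the original tutorial; note the SDL2 flags above are only needed when the program actually uses SDL2, which this decoder does not:

```shell
gcc ffmpeg_decoder.cpp -g -o ffmpeg_decoder.out \
    $(pkg-config --cflags --libs libavformat libavcodec libswscale libavutil)
```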
