1. Go to http://sourceforge.net/projects/mingw/ and download mingw-get-inst-20120426.exe (the installer requires an Internet connection).
2. Install MinGW.
3. Add the MinGW bin directory (e.g. D:\MinGW\bin) to the PATH environment variable.
4. Edit the file D:\MinGW\msys\1.0\msys.bat and add the line:
call "D:\Program Files\Microsoft Visual Studio 9.0\VC\bin\vcvars32.bat"
5. Go to http://yasm.tortall.net/Download.html, download a yasm binary, and put it in D:\MinGW\msys\1.0\bin.
6. Go to http://www.gtk.org/download/win32.php and download GLib (run-time), gettext-runtime (run-time), and pkg-config (tool). Decompress the downloaded archives, put the resulting *.dll and *.exe files in the directory D:\MinGW\bin, then re-run ./configure --enable-shared.
7. Compile and install:
./configure --enable-shared
make
make install
8. Test with the following program:
#include <stdio.h>

#define __STDC_CONSTANT_MACROS    /* required before the FFmpeg headers in C++ */
#ifdef _STDINT_H
#undef _STDINT_H
#endif

#ifdef __cplusplus
extern "C" {
#endif
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#ifdef __cplusplus
}
#endif

#pragma comment(lib, "avutil.lib")
#pragma comment(lib, "avformat.lib")
#pragma comment(lib, "avcodec.lib")
#pragma comment(lib, "swscale.lib")

int main()
{
    av_register_all();   /* register all available muxers, demuxers and codecs */

    AVFormatContext *pContex = NULL;
    int nRet = avformat_open_input(&pContex, "test.mp4", NULL, NULL);
    if (nRet < 0) {
        printf("open mp4 file failure\n");
        return -1;
    }

    nRet = avformat_find_stream_info(pContex, NULL);
    if (nRet < 0) {
        return -2;
    }

    /* find the audio stream */
    AVCodecContext *pAVcodeContex = NULL;
    for (unsigned int i = 0; i < pContex->nb_streams; i++) {
        pAVcodeContex = pContex->streams[i]->codec;
        if (pAVcodeContex->codec_type == AVMEDIA_TYPE_AUDIO) {
            break;
        }
    }

    /* look up a decoder that can handle this stream */
    AVCodec *pAVCodec = avcodec_find_decoder(pAVcodeContex->codec_id);

    return 0;
}
9. During compilation, stdint.h and inttypes.h must be copied from the MinGW\include directory into the ffmpeg\libavutil\ directory; then compile and run.
References:
http://chfj007.blog.163.com/blog/static/17314504420121144223910/
http://blog.chinaunix.net/uid-20718335-id-2980793.html
Library introduction:
libavcodec - encoding and decoding (the codec library)
libavdevice - input/output device support
libavfilter - audio and video filters
libavformat - audio/video container format parsing (muxing/demuxing)
libavutil - common utility library
libpostproc - post-processing effects
libswscale - image color-space and size conversion
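For orientation, the sketch below maps each library to the public header it installs (include paths follow the standard FFmpeg layout, matching the test program in step 8):

/* one include per library, with what each one provides */
#include "libavcodec/avcodec.h"       /* encoders and decoders */
#include "libavdevice/avdevice.h"     /* input/output device support */
#include "libavfilter/avfilter.h"     /* audio and video filters */
#include "libavformat/avformat.h"     /* container muxing/demuxing */
#include "libavutil/avutil.h"         /* common utilities */
#include "libpostproc/postprocess.h"  /* post-processing effects */
#include "libswscale/swscale.h"       /* color-space and size conversion */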
Video and audio basics:
1. What are encoding and decoding (codec)? Why do we need them?
A: CODEC = COde (encode) + DECode (decode).
Suppose the display refreshes 60 times per second (a 60 Hz refresh rate) at a resolution of 1024*768. The video card then processes 60*1024*768 pixels per second, so you can imagine how terrifyingly large a raw video file would be: without compression, a 1 GB file could store only about 37 seconds of video.
Therefore we compress (encode) a video before storing it, and decompress (decode) it when we want to play it. Sacrificing some time in exchange for a large amount of space is worthwhile, and our hardware is capable of it.
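As a sanity check on the ~37-second figure, here is a small worked calculation. The frame format and rate are my assumptions, not stated above: the number works out if the raw frames are YUV420 (1.5 bytes per pixel) at 24 frames per second.

#include <stdio.h>

int main(void)
{
    /* assumed raw-video parameters (hypothetical, chosen to reproduce ~37 s):
       1024x768 pixels, YUV420 = 1.5 bytes per pixel, 24 frames per second */
    const double bytes_per_frame = 1024.0 * 768.0 * 1.5;
    const double bytes_per_sec   = bytes_per_frame * 24.0;
    const double one_gib         = 1024.0 * 1024.0 * 1024.0;

    printf("raw data rate:   %.1f MiB/s\n", bytes_per_sec / (1024.0 * 1024.0));
    printf("seconds per GiB: %.1f\n", one_gib / bytes_per_sec);  /* ~37.9 */
    return 0;
}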
2. How do we encode and decode? What are the prerequisites?
A: A raw, uncompressed video file is huge, but some of its content can be removed without affecting what we see, just as the exact amount of minerals in drinking water does not change how it tastes to you. Anything that does not affect our viewing experience can be deleted. That is:
                                   encoder
original video file (very large) -----------------> encoded video file (much smaller)
The two differ in size; the latter is smaller, and of course its quality is somewhat lower as well.
When viewing:
                               decoder
encoded video file (smaller) -----------------> original video file (larger) -------------> displayed by the video card
Therefore there must be an agreed set of specifications, and matching algorithms that implement those specifications.
The example above is compression that does not affect what we watch; here is another example. A video file for an ordinary computer may be more than 1 GB, which is wasteful on a mobile device, so we can compress it further for the mobile screen. Play that mobile version full-screen on a 22-inch display, however, and you will surely be unhappy and think the quality is terrible; yet the trade-off is sometimes necessary. Compressing HD video down to half-HD or SD is likewise a kind of encoding, one that loses quality.
3. Basic concepts: Container, Stream, Frame, Codec, and mux/demux (multiplexing/demultiplexing).
A: A container is a file, and a container format is a file format. For example, xxx.flv and yyy.mkv are two files; we can say they are two containers, and containers of two different kinds.
Container, stream, and mux/demux:
Look at the xxx.flv file first. It contains two kinds of stream, an audio stream and a video stream, both packed into the .flv container in the layout the FLV format specifies.
Now look at yyy.mkv. Assume it contains three streams: an audio stream, a video stream, and a third kind, a subtitle stream, all packed into the .mkv container in the MKV format.
That is the relationship between containers and streams. Parsing (or extracting) the different streams out of a container (file) according to that container's rules is called demultiplexing (demux) and is done by a demuxer. Conversely, packing different streams into a container according to its rules (a file in some format must be produced in the end) is called multiplexing (mux) and is done by a muxer.
These four concepts are abstractions, and different concrete structures implement them. Each container has its own packing rules: mkv, rm, flv, and mp4, for example, each require a different demultiplexing method. We could even implement these demuxers ourselves, provided we clearly understand each container's internal format (see the sketch below).
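To make demuxing concrete, here is a minimal sketch using the same era of libavformat API as the test program in step 8 ("movie.mkv" is a placeholder file name). The demuxer parses the container and reports which streams it found:

#include <stdio.h>
#include "libavformat/avformat.h"

int main(void)
{
    av_register_all();

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, "movie.mkv", NULL, NULL) < 0)
        return -1;
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return -2;

    /* the demuxer has parsed the container; every stream is now visible */
    for (unsigned int i = 0; i < fmt->nb_streams; i++) {
        enum AVMediaType t = fmt->streams[i]->codec->codec_type;
        printf("stream %u: %s\n", i,
               t == AVMEDIA_TYPE_VIDEO    ? "video" :
               t == AVMEDIA_TYPE_AUDIO    ? "audio" :
               t == AVMEDIA_TYPE_SUBTITLE ? "subtitle" : "other");
    }

    avformat_close_input(&fmt);
    return 0;
}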
Stream, frame, and codec:
If you read the above carefully, you can probably guess the pattern of this section: it is the same as the last one. Wherever there is a conversion between two different things, there is a pair of tools, one for each direction.
Put directly: frames are contained in the stream!
When you pull a stream out of a container (or obtain one any other way), treat it as having been produced by some encoder; to get at the frames inside it, you must decode it with the matching decoder. That is the reverse direction; now look at it from the other side.
What is a video? It is really a group of (many) images displayed at very short intervals, so that people perceive the figures in them as moving. That is film: the essence of a movie is a collection of N images. So how does an image relate to a frame? If we stored every image of a video as-is, the space needed would be enormous; instead, some algorithm (which one is not the point here) compresses (encodes) each image into a frame. The frames are joined into streams, and the streams are muxed into a container, and that is the movie file we usually see. For example, why might a movie file carry a name like xxx.h264.aac.mkv? The .mkv says the container is MKV, and the name suggests it holds at least two streams: an H.264 video stream and an AAC audio stream. This is a typical case of sacrificing time in exchange for space.
Now it should be clear. Back to the earlier topic: after getting a stream, we find the frames in it, restore them with the decoder, and play them; or we can re-encode the frames with a different encoder into another format (which is exactly the step that so-called format-conversion software performs).
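Concretely, pulling frames out of a stream looks like the sketch below (same-era API as above; the caller is assumed to have already opened the container with avformat_open_input() and the decoder with avcodec_open2()):

#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"

/* Decode every frame of one stream. fmt must already be opened and
   dec_ctx opened with avcodec_open2(). */
static void decode_stream(AVFormatContext *fmt, AVCodecContext *dec_ctx,
                          int stream_index)
{
    AVPacket pkt;
    AVFrame *frame = avcodec_alloc_frame(); /* av_frame_alloc() in newer FFmpeg */
    int got_frame = 0;

    while (av_read_frame(fmt, &pkt) >= 0) {     /* demux: one packet at a time */
        if (pkt.stream_index == stream_index) {
            /* decode: compressed packet -> raw frame */
            if (avcodec_decode_video2(dec_ctx, frame, &got_frame, &pkt) >= 0
                && got_frame) {
                /* frame now holds a decoded picture: display it, or feed it
                   to another encoder to convert the format */
            }
        }
        av_free_packet(&pkt);
    }
    av_free(frame);
}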
4. What is ffmpeg?
A: The following is excerpted from the ffmpeg official website.
FFmpeg is an open-source, free, cross-platform audio and video streaming solution. It is free software, licensed under the LGPL or GPL (depending on which components you choose). It provides a complete solution for recording, converting, and streaming audio and video, and it contains the highly advanced audio/video codec library libavcodec. To ensure high portability and codec quality, much of libavcodec was developed from scratch.
5. What do we learn when we learn ffmpeg?
A: ffmpeg is a solution for encoding/decoding and muxing/demuxing, and it provides many APIs for us to use. Of course, some work still falls to us, such as synchronization. First, understand the whole pipeline from opening a video file, through demuxing and decoding, to playback. Next, learn which APIs ffmpeg provides and in what order to call them (sketched below). Then study how those APIs are implemented, and once the implementation details are mastered, modify them as needed to produce a customized version.
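The sketch below lists that call order for the FFmpeg version of this article's era (newer releases rename some of these functions, e.g. the decode calls):

/* typical playback pipeline, era-appropriate API names:
 *
 *   av_register_all();             register demuxers and codecs
 *   avformat_open_input();         open the container
 *   avformat_find_stream_info();   discover the streams
 *   avcodec_find_decoder();        pick a decoder for a stream
 *   avcodec_open2();               open that decoder
 *   loop {
 *       av_read_frame();           demux one packet
 *       avcodec_decode_video2();   decode packet -> frame
 *                                  (audio: avcodec_decode_audio4())
 *       av_free_packet();
 *   }
 *   avcodec_close();               clean up
 *   avformat_close_input();
 */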
6. After compiling the ffmpeg source package, I found that several binaries had been built, such as ffplay, ffmpeg, and ffserver. What are they?
A: ffplay is a real player, like vlc and mplayer, with a graphical display window.
ffmpeg is a command-line tool that uses the APIs of the ffmpeg solution, plus some extra work, to implement transcoding and other functions.
ffserver, as the name implies, is a server; it delivers streams by unicast or multicast.