Transferred from: http://blog.csdn.net/simongyley/article/details/9984167
1. Decode an H.264 file to a YUV file:
ffmpeg -i file.h264 file.yuv
FFmpeg conversion
D:\ffmpeg\bin>ffmpeg.exe -i C:\Users\pc\Desktop\sp.mp4 -vf scale=500:-1 -t 100 ss.flv
C:\Users\pc\Desktop\sp.mp4 is the path of the file you need to convert.
scale=500:-1 sets the width of the converted video to 500px, with -1 letting the height follow the source aspect ratio; it can also be written as 500:500, which sets both width and height to 500px.
-t 100 means only the first 100 seconds of the video are converted.
ss.flv is the name of the new file; since the command runs from D:\ffmpeg\bin, the file ends up on the D drive.
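The height that scale=500:-1 ends up choosing follows directly from the source aspect ratio. A quick sketch of that calculation (the 1280x720 source size is made up for illustration; FFmpeg itself may round slightly differently, and scale=500:-2 forces an even height for formats that need one):

```python
def scaled_height(src_w, src_h, target_w):
    """Height a scale=target_w:-1 filter would pick: keep the aspect ratio."""
    return round(src_h * target_w / src_w)

# Hypothetical 1280x720 source scaled to width 500
print(scaled_height(1280, 720, 500))  # 281
```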
FFmpeg screenshot
ffmpeg -i demo.mp4 -ss 10.1 -t 0.001 1.jpg
Seek to 10.1 seconds into demo.mp4 (-t 0.001 grabs roughly a single frame) and save that frame as 1.jpg.
FFmpeg cropping
ffmpeg -i demo.mp4 -filter:v "crop=10:20:100:100" out.mp4
The crop parameters are, in order: output width : output height : x offset from the left edge : y offset from the top edge (crop=w:h:x:y).
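Center-crop coordinates follow directly from that w:h:x:y layout. A small helper to build the filter string (the 1920x1080 frame size is assumed for illustration):

```python
def center_crop(frame_w, frame_h, out_w, out_h):
    """Build a crop=w:h:x:y filter string that crops from the frame center."""
    x = (frame_w - out_w) // 2
    y = (frame_h - out_h) // 2
    return f"crop={out_w}:{out_h}:{x}:{y}"

# Cut a 640x360 region out of the middle of a 1920x1080 frame
print(center_crop(1920, 1080, 640, 360))  # crop=640:360:640:360
```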
FFmpeg remux to FLV at the same resolution
ffmpeg -i demo.mp4 -vcodec copy -acodec copy out.flv
An iPad can play MP4 directly, so you can use the HTML5 video tag:
<video width="555" height="315" controls preload="auto" src="demo.mp4"></video>
FFmpeg video Synthesis
Because FFmpeg can slice MP4 videos, I took it for granted that it could also merge them. Not until a colleague asked me for a method today did I discover I was wrong: MP4 does not support direct concatenation (embarrassing...). So I hurriedly read up on it and gathered several working approaches from around the web.
Note: MP4 here means the H.264+AAC streams in an MPEG-4 container, the combination most commonly seen on the web.
FFmpeg + TS
The idea is to first remux each MP4 into a TS stream with the same encoding, because TS streams can be concatenated: remux the MP4s into TS, concatenate the TS streams, then remux the result back into MP4.
ffmpeg -i 1.mp4 -vcodec copy -acodec copy -vbsf h264_mp4toannexb 1.ts
ffmpeg -i 2.mp4 -vcodec copy -acodec copy -vbsf h264_mp4toannexb 2.ts
ffmpeg -i "concat:1.ts|2.ts" -acodec copy -vcodec copy -absf aac_adtstoasc output.mp4
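The concat: input in the last command is just a pipe-separated list of segment files, so it generalizes to any number of parts. A trivial helper to build it (the file names are illustrative):

```python
def concat_input(ts_files):
    """Build the 'concat:a.ts|b.ts|...' pseudo-URL that ffmpeg's concat protocol expects."""
    return "concat:" + "|".join(ts_files)

print(concat_input(["1.ts", "2.ts", "3.ts"]))  # concat:1.ts|2.ts|3.ts
```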
Convert an AIFF file to 16-bit signed little-endian PCM at an 8000 Hz sample rate:
ffmpeg -i test.aif -f s16le -ar 8000 test.pcm
Encode PCM data stored as 44.1 kHz stereo 16-bit signed little-endian into AAC:
ffmpeg -f s16le -ar 44100 -ac 2 -i test.pcm -acodec aac -strict experimental test.aac
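The -f s16le -ar 44100 -ac 2 flags fully determine the raw PCM data rate, which is handy for sanity-checking file sizes before and after encoding. A quick calculation:

```python
def pcm_bytes_per_second(sample_rate, channels, bits_per_sample):
    """Raw PCM data rate: sample rate * channels * bytes per sample."""
    return sample_rate * channels * bits_per_sample // 8

# 44.1 kHz, stereo, 16-bit signed little-endian (s16le)
print(pcm_bytes_per_second(44100, 2, 16))  # 176400
```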
Encode a 4:2:0 YUV file into an H.264 ES stream (FFmpeg's libx264 component must be enabled, i.e. configure FFmpeg with --enable-libx264):
ffmpeg -pix_fmt yuv420p -s 176x144 -i test.yuv -f h264 test.264
or
ffmpeg -pix_fmt yuv420p -s 176x144 -i test.yuv test.h264
Decode an H.264 ES stream into a YUV file (FFmpeg's rawvideo component must be enabled, i.e. configure FFmpeg with --enable-encoder=rawvideo):
ffmpeg -i test.264 test.yuv
Convert a 4:2:0 QCIF-size YUV file to a 4:2:2 CIF-size YUV file:
ffmpeg -pix_fmt yuv420p -s 176x144 -i foreman_qcif.yuv -pix_fmt yuv422p -s 352x288 test.yuv
Convert a 4:2:0 QCIF-size YUV image to a CIF-size BMP file:
ffmpeg -pix_fmt yuv420p -s 176x144 -i foreman_qcif.yuv -pix_fmt rgb24 -s 352x288 test.bmp
Convert a y4m-format image sequence to a 4:2:0 YUV image sequence:
ffmpeg -f yuv4mpegpipe -i test.y4m -pix_fmt yuv420p test.yuv
Convert an AVI file to an MP4 file with H.264 video + AC3 audio, a video bitrate of 4096 kbps, and a quantizer range of 10 to 45 (FFmpeg's libx264 component must be enabled, i.e. configure FFmpeg with --enable-libx264):
ffmpeg -i test.avi -vcodec libx264 -b 4096000 -qmin 10 -qmax 45 -acodec ac3 test.mp4
Encode YUV and PCM files together and output an MPEG PS (VOB) file:
ffmpeg -pix_fmt yuv420p -s 720x576 -r 25 -b 8000000 -i test.yuv -f s16le -ac 2 -ar 48000 -ab 384000 -i test.pcm -f vob test.vob
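FFmpeg can only read a headerless .yuv file because -pix_fmt and -s fix the frame size exactly. For yuv420p, each frame is one full-resolution luma plane plus two quarter-size chroma planes, which is easy to verify against the raw file's length:

```python
def yuv420p_frame_bytes(w, h):
    """Bytes per yuv420p frame: full-res Y plane + two half-res U/V planes."""
    return w * h + 2 * (w // 2) * (h // 2)

# The 720x576 PAL frame used in the command above
print(yuv420p_frame_bytes(720, 576))  # 622080
```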
Capture video under Linux and encode it as an H.263 ES stream:
ffmpeg -f video4linux2 -s 352x288 -r 25 -t 30 -i /dev/video0 -vcodec h263 -f h263 test.263
Recording audio (MP3 or AMR):
ffmpeg -f oss -i /dev/dsp wheer.mp3
ffmpeg -f oss -i /dev/dsp -ar 8000 -ab 10200 wheer.amr
Of course you can also set a pile of parameters, e.g. adjust the volume with -vol 1024 (256 is the default), set the sample rate with -ar 8000, set the bitrate with -ab 122000, and so on. As for switching the input between the microphone and the sound card, you can turn to aumix.
In addition, you can define alias amrec='ffmpeg -f oss -vol 1024 -i /dev/dsp -ar 8000 -ab 10200' in ~/.bashrc and then record with amrec file.amr. The compression ratio is very high: one hour of audio is a bit over 5 MB :)
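The "one hour is a bit over 5 MB" figure checks out against the requested bitrate. Assuming the encoder settles on AMR-NB's 12.2 kbps mode (the mode closest to the requested -ab 10200; this mapping is my assumption, not stated in the source):

```python
def audio_megabytes(bitrate_bps, seconds):
    """Approximate compressed audio size: bitrate * duration, converted to MB."""
    return bitrate_bps * seconds / 8 / 1024 / 1024

# One hour of AMR-NB at 12.2 kbps
print(round(audio_megabytes(12200, 3600), 1))  # 5.2
```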
Screen recording:
ffmpeg -f x11grab -s xga -r 60 -i :0.0+0+0 wheer.avi
Here -f x11grab selects screen capture (the build must be configured with --enable-x11grab); -s sets the size, written either as an abbreviation like xga or as 1024x768; -r sets the fps; -i :0.0 is your X11 display, and +0+0 is the offset. If you want to record a small window, use xwininfo -frame to find its exact coordinates.
A pile of other parameters can be set as well, such as the bitrate -b 200000 and the video codec -vcodec. You can also record audio at the same time with -f oss -i /dev/dsp, switching the recording source to the microphone with aumix to add a voiceover; if the voice is too quiet, adjust it with -vol.
Video clipping:
ffmpeg -ss 01:02:30 -t 00:10:00 -i test.mov -vcodec copy -acodec copy out.mov
Cut 10 minutes of test.mov starting at 1 hour 2 minutes 30 seconds, i.e. the content of test.mov between 01:02:30 and 01:12:30.
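The end point of the cut is just the -ss start plus the -t duration. A hypothetical helper (not part of FFmpeg) for that timestamp arithmetic:

```python
def add_timecodes(start, duration):
    """Add two HH:MM:SS timecodes, e.g. to find where a -ss/-t cut ends."""
    def to_seconds(tc):
        h, m, s = map(int, tc.split(":"))
        return h * 3600 + m * 60 + s
    total = to_seconds(start) + to_seconds(duration)
    return f"{total // 3600:02d}:{total % 3600 // 60:02d}:{total % 60:02d}"

print(add_timecodes("01:02:30", "00:10:00"))  # 01:12:30
```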
How to capture a webcam input
https://trac.ffmpeg.org/wiki/How%20to%20capture%20a%20webcam%20input
Linux
On Linux, we can use the video4linux2 (or, for short, "v4l2") input device to capture live input such as a webcam, like this:
ffmpeg -f video4linux2 -r 25 -s 640x480 -i /dev/video0 out.avi
or
ffmpeg -f v4l2 -r 25 -s 640x480 -i /dev/video0 out.avi
If you need to set some specific parameters of your camera, you can do that using the v4l2-ctl tool.
You can find it in the Fedora/Ubuntu/Debian package named v4l-utils.
Most probably you will want to know what frame sizes and frame rates your camera supports, which you can check with: v4l2-ctl --list-formats-ext
You might also want to correct brightness, zoom, focus, etc. with:
v4l2-ctl -l
Streaming a simple RTP audio stream from FFmpeg
Https://trac.ffmpeg.org/wiki/StreamingGuide
FFmpeg can send a single stream using the RTP protocol. To avoid buffering problems on the receiving end, the streaming should be done with the -re option, which means the stream will be sent in real time (i.e. slowed down to the input's native rate) to simulate a live streaming source.
For example, the following command will generate a signal and stream it to port 1234 on localhost:
ffmpeg -re -f lavfi -i aevalsrc="sin(400*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp://127.0.0.1:1234
To play the stream with ffplay, run:
ffplay rtp://127.0.0.1:1234
Note that RTP uses UDP by default, which, for large streams, can cause packet loss. See the "point to point" section of that document for hints if this ever happens.
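The aevalsrc expression sin(400*2*PI*t) above is just a 400 Hz test tone. The same samples can be generated directly, which helps when checking what the stream should contain (a sketch, unrelated to FFmpeg itself):

```python
import math

def sine_samples(freq_hz, sample_rate, n):
    """First n samples of sin(freq * 2 * pi * t) sampled at sample_rate."""
    return [math.sin(freq_hz * 2 * math.pi * i / sample_rate)
            for i in range(n)]

# One second of the 400 Hz tone at the 8000 Hz rate used in the command
samples = sine_samples(400, 8000, 8000)
# The period is 8000 / 400 = 20 samples, so sample 5 hits the peak
print(round(samples[5], 6))  # 1.0
```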
A few notes on web-page video playback
1. If the file is relatively large, only streaming-media formats such as FLV can play while still downloading (progressive playback), so if the file is not in FLV format you need to convert it using the methods described above.
2. The file's bitrate affects how fast it downloads. If the current network connection is 1 Mbps, then for a video file to play smoothly its bitrate must stay below 1024 Kbps; otherwise it needs to be converted to achieve the best playback experience.
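The rule of thumb in point 2 is simply that the video bitrate must fit within the link bandwidth. A trivial check, treating "1M broadband" as 1024 Kbps:

```python
def plays_smoothly(video_kbps, link_kbps):
    """Progressive download keeps up only if the video bitrate fits the link."""
    return video_kbps <= link_kbps

print(plays_smoothly(800, 1024))   # True
print(plays_smoothly(1500, 1024))  # False
```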
More about FFmpeg http://ffmpeg.org/
More about Jplayer http://www.jplayer.org/
FFmpeg useful Commands (reproduced)