Intel Sandy Bridge hardware H.264 encoder GOP (Group of Pictures) settings
You can set the H.264 encoder parameters by referring to the sample code provided by the Intel Media SDK:

Intel(R) Media SDK encoding sample. Usage: sample_encode.exe h264|mpeg2 [Options] -i InputYUVFile -o OutputEncodedFile -w width -h height. Options: [-nv12] input is in NV12 color format, if not specified
http://bbs.chinavideo.org/viewthread.php?tid=7575
I believe many of you, like me, want to play H.264 streaming video. A newcomer, however, often does not know where to start; searching Baidu, Google, and other sources turns up a wealth of material. After several weeks of study I made some progress, though a lot of effort was wasted: I spent a week reading the English specification, only to learn later that a Chinese translation existed. In addition, the g
It seems the problem can only be solved this way. More testing is needed now to catch new problems. For the moment it does not affect the existing code, and the bad frame is simply dropped rather than displayed.

Idea: I asked how the set-top box decodes H.264. It uses hardware decoding, and simply sets an interface exposed by the hardware decoder: the error-handling mode. I suspect this error-handling mode silently drops corrupted frames, so
Awesome video conferencing website: http://wmnmtm.blog.163.com/blog/#m =0
++++++++++++++++++++++++++++++++++++++++++++++++++++
http://wmnmtm.blog.163.com/blog/static/38245714201192491746701/
When transmitting H.264 over RTP, the stream must be described with SDP, and two of the required items are the Sequence Parameter Set (SPS) and the Picture Parameter Set (PPS). So where are these two items obtained? The answer is to get them from the
Because I was recently reading H.264 files, I ran into the problem of how to read a full frame of data. Using the Elecard Stream Analyzer tool, together with the book "New Generation Video Compression Coding Standard: H.264/AVC" (second edition), I summarized my findings online as follows:

First, the NAL syntax, header syntax, and nal_unit_type semantics must be known:

The above two figures are
A recent project needed to record video captured from a camera, encoded with H.264. Testing showed that H.264-encoded 2K (1920x1080) video played smoothly, but 4K (3840x2160) video showed noticeable lag, so I considered using H.264 NVENC hardware encoding. The original code:

AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);

Switching to the hardware encoder:

avcod
The base_clock of the PTS here is calculated in units of 1000 (milliseconds); if the stream is reused in TS, the base_clock is 90 kHz, so the value should be multiplied by 90. As for the frame rate recorded in the H.264 SPS being twice the actual frame rate, with pic_order_cnt_lsb inside the slice also incrementing by two, I guess the encoder codes per field (top field, bottom field).
H264 es
Using MP4v2 to mux H.264 + AAC into MP4 files
A new feature was added to the recording program: recording CMMB TV shows. Our card sends out an RTP stream (H.264 video and AAC audio), and the recording program does the following:

(1) Receive and parse the RTP packets, separating out the H.264 and AAC data streams;

(2)
==================== Problem Description ====================
I use VideoView for display. Playing one stream works fine, with no stutter and good real-time behavior, but playing more streams fails and an error is reported: a dialog box pops up saying the video cannot be played. The same happens with SurfaceView. I hear this is a limitation of the decoding support in the Android lower layers. Do I have to port FFmpeg? But that is software decoding; the efficiency is too low, and it seems quite complex. Does anyone have a better approach?
------------------------------------------------------------------------------------------------
H.264 video streams are transmitted in NAL units... However, one NAL unit may contain an I slice (or a P slice or B slice), and may at the same time contain other information about the picture. Does that mean that, to tell I, P, and B frames apart, we first extract the VCL information from the received NAL unit and then classify the frame by its content? However, we can only identify the data type
is equal to 1, a new num_ref_idx_l0_active_minus1 */
int b_num_ref_idx_override;
int i_num_ref_idx_l0_active;
int i_num_ref_idx_l1_active;
/* semantics of reference picture list reordering */
/* indicates whether the reordering operation is performed; when this syntax element equals 1, a series of syntax elements follows describing the reordering of the reference frame queue */
int b_ref_pic_list_reordering_l0;
int b_ref_pic_list_reordering_l1;
struct { int idc; int arg; } ref_pic_list_order[
http://blog.chinaunix.net/space.php?uid=20751538&do=blog&id=165746
1. The H.264 start code. When transmitting H.264 data over the network, one UDP packet carries one NALU, and the decoder can conveniently detect NAL boundaries and decode. However, if the encoded data is stored as a file, the decoder cannot by itself determine the start and end position of each NAL in the data stream. Therefore, H.264 uses the start code to solve this prob
The content in this article is original. For more information, see the source.
Decoding H.264 data with FFmpeg is actually much easier than encoding video with x264, because FFmpeg provides a decoding_encoding.c file containing simple examples of video and audio encoding/decoding with FFmpeg. However, some people may not find this example, so I will describe a modified version of it here, with some explanations added.

Note that, when decoding with FFm
I am responsible for custom development of a SIP/IMS video client, supporting access to SIP softswitches and the IMS core network, with voice, video, and instant-messaging functions. The video formats support H.263, H.264, and MPEG-4 software encoding, and a hardware encoding/decoding interface is provided for interconnection with devices and servers. If you are interested, contact me.
Csdn lidp http://blog.csdn.net/perfectpdl
Some video terminals o
The following is a summary of some problems encountered when using H.264 + AAC for video playback on the iPad, iPhone, iPod touch, and other terminals.

1: Audio/video synchronization problems

The cause is mainly the timestamp problem in the TS file. Sometimes problems also appear after converting the file container format;
for example, converting FLV to TS can cause audio and video to lose sync. The base time (reference clock) used by FLV: one second =
protocol, it may cause delayed playback on the receiving player, or even failure to play at all. Therefore, NALUs larger than the MTU must go through a fragmentation process. RFC 3984 gives three different RTP packetization schemes: (1) Single NALU Packet: only one NALU is encapsulated in one RTP packet; this mode is used in this article for NALUs smaller than 1400 bytes. (2) Aggregation Packet: multiple NALUs are encapsulated in one RTP packet; this scheme can be used for smal
frame is displayed. DTS: Decoding Time Stamp. The DTS mainly identifies when a bitstream read into memory starts being fed into the decoder. Without B frames, the order of DTS and the order of PTS are the same. VCL, NAL, NALU: in the H.264/AVC video coding standard, the entire system framework is divided into two layers: the Video Coding Layer (VCL) and the Network Abstraction Layer (NAL). The former is responsible for effectively representing the c
1. NAL: full name Network Abstraction Layer. In the H.264/AVC video coding standard, the whole system framework is divided into two layers: the Video Coding Layer (VCL) and the Network Abstraction Layer (NAL). The former is responsible for effectively representing the content of the video data, while the latter is responsible for formatting the data and providing header information, ensuring that the data is suitable for transmission over various channels and storage me
System environment:
Linux inbank-gz 2.6.24-16-generic #1 SMP Thu Apr 13:23:42 UTC i686 GNU/Linux
Ubuntu 8.04
Genuine Intel(R) CPU 1250 @ 1.73GHz x 2, 1G memory
Target: play H264-format HD video with MPlayer

Download the related software:
MPlayer 1.1 (includes FFmpeg)
various decoder packages
x264
Yasm + FAAC + Faad
You can download all the packages here http://down.51cto.com/data/1861780
Three threads: the video encoding thread, the audio encoding thread, and the main (send) thread.

Video encoding thread:
Calculate the interval from the frame rate, then save the video-metadata RTMP packet for repeated sending. Get frames from the video source and determine whether each is a key frame; if it is the first one, send the metadata first, then send the data after encoding and packaging, and finally compare the elapsed time against the interval to decide whether to wait.
const int interval = 1000/fps; int bytes; char *buf, *frame;
Rtmppa