IX. H.264 RTP transmission details (1)
In the previous chapters' introduction to the server, one important issue was not explored carefully: how a file is opened and its SDP information obtained. Let's start from there.
When the RTSPServer receives a DESCRIBE request for a media session, it finds the corresponding ServerMediaSession and calls ServerMediaSession::generateSDPDescription(). Inside generateSDPDescription(), each ServerMediaSubsession is asked for its SDP lines, which are concatenated into the session description.
PTS = 1000 (milliseconds) * (frame display index) / frame rate
Method 2: pic_order_cnt_lsb is recorded in each slice header, giving the display order of the current frame within its group. With pic_order_cnt_lsb, the PTS of the current frame can be computed directly; this method requires no frame caching.
Calculation formula:
PTS = 1000 * (I_frame_counter + pic_order_cnt_lsb) * (time_scale / num_units_in_tick)
where I_frame_counter is the display index of the most recent I frame.
The MP4 recording program is adapted from mpeg4ip's mpeg4ip-1.5.0.1\Server\mp4live\file_mp4_recorder.cpp. It supports writing raw H.264 + AAC streams and uses the dynamic library mp4v2-2.0.0. Do not use the older mp4v2 bundled with mpeg4ip, because it becomes inefficient when recording large files. Some mp4v2 interfaces are described below.
MP4FileHandle MP4Create(const char *fileName, uint32_t flags)
Function: creates a new MP4 file and returns a handle to it.
H.264 RTP header parsing process, combined with an analysis of naldecoder.c
Protocol analysis: each RTP datagram consists of a header and a payload. The meaning of the first 12 bytes of the header is fixed; the payload can be audio or video data.
The active sequence parameter set remains unchanged throughout a coded video sequence, and the active picture parameter set remains unchanged within a coded picture.
The H.264 encoder must therefore emit the relevant parameter sets before the slices that refer to them.
H.264 specifies three main profiles, each supporting a particular set of coding tools and aimed at a particular class of applications.
1. Baseline profile: I and P slices, intra- and inter-frame coding, and context-based adaptive variable-length coding (CAVLC). Mainly used for real-time video communication, such as videophones, videoconferencing, and wireless communication.
2. Main profile: additionally supports interlaced video, B-frame coding, and CABAC; aimed at broadcast and storage applications.
Source code of an H.264 decoder: the H.264 decoding part of FFmpeg was ported to Android, heavily stripped down and optimized, and verified in the emulator (320x480).
The program uses a JNI architecture. The interface, file reading, and video display are all done in Java; the underlying video decoding is done in C to meet the speed requirements.
In this version, the NAL units are split from the bitstream...
In the previous article, I talked about using JNI for H.264 encoding, but the efficiency was not practical.
After several days of work, and with reference to http://www.javaeye.com/problems/27244, the real-time encoding problem on Android was solved.
Two important classes: MediaRecorder and ParcelFileDescriptor.
MediaRecorder is the capture-and-encode class provided by Android, while ParcelFileDescriptor can wrap a socket so that MediaRecorder's setOutputFile() writes the encoded stream to it in real time.
When recently using FFmpeg for H.264 encoding, I found that the video was encoded with a delay rather than in real time. After some research, it turned out to be related to the avcodec_open2 function.
When opening the encoder, set the options through an AVDictionary; the key code is as follows.
The avcodec_open2 function:
int avcodec_open2(AVCodecContext *avctx, const AVCodec *codec, AVDictionary **options);
Solution:
AVDictionary *param = NULL;
// libx264 low-latency options (the usual fix for the encoder's buffering delay):
av_dict_set(&param, "preset", "ultrafast", 0);
av_dict_set(&param, "tune", "zerolatency", 0);
// then pass &param as the options argument of avcodec_open2()
Many people who first encounter FFmpeg find its API cumbersome: the codebase is large, there are many modules and many APIs, and the bundled examples drive the codec from a file, so implementing a simple pure-data-stream decoder can feel hard to start. This article describes decoding a complete H.264 stream into YUYV format.
FFmpeg version: ffmpeg-3.1.2
The FFmpeg libraries used include libavformat, among others.
In the previous article, because the USB camera used on the PC only supported the MJPEG image format while H.264 encoding needs YUV data, I found an ARM development board to run the test. I assumed that moving the code from the PC to the board would be simple; who knew that differences in the platform and the underlying V4L2 driver would take considerable effort to resolve. Without further ado, straight to the point.
Note carefully in the figure above that pic_order_cnt_lsb in each slice increments by 2. The frame rate recorded in the SPS of an H.264 stream is usually twice the actual frame rate:

time_scale / num_units_in_tick = fps * 2

Therefore the actual calculation formula should be:

PTS = 1000 * (I_frame_counter * 2 + pic_order_cnt_lsb) * (time_scale / num_units_in_tick)

or equivalently:

PTS = 1000 * (I_frame_counter + pic_order_cnt_lsb / 2) * (time_scale / num_units_in_tick)
1. There are many methods online for using ffmpeg to extract the H.264 stream from an MP4 file and write it to a file; if you are not familiar with them, see Raytheon's (Lei Xiaohua's) blog: http://blog.csdn.net/leixiaohua1020/article/details/11800877
2. But the resulting file has a problem: it will play, but there is a high chance of corrupted (garbled) frames.
A. First, the solution.
It is actually very simple: use the av_bitstream_filter_filter() method (with the h264_mp4toannexb bitstream filter) to process each AVPacket before writing it out.
Packet loss is inevitable when transmitting H.264 over packet networks, especially in poor network conditions, and losses cause corrupted pictures and severe mosaic artifacts at the decoder. The main countermeasures are FEC and NACK: the former is forward error correction, the latter is retransmission on negative acknowledgement. Working together, the two can largely resolve the visible errors.
Today, while browsing the Internet, I happened to find the answer to my question.
H.264 video types
The following media subtypes are defined for H.264 video.
Subtype             FOURCC   Description
MEDIASUBTYPE_AVC1   'AVC1'   H.264 bitstream without start codes.
MEDIASUBTYPE_H264   'H264'   H.264 bitstream with start codes.
MEDIASUBTYPE_h264   'h264'   Equivalent to MEDIASUBTYPE_H264.
Today I ran into a strange problem. Calling the SIP client on the target device from the Linphone client on the host PC works normally, but the reverse direction fails. Packet capture showed Linphone sending a large number of IP fragments; some searching revealed that IP fragmentation occurs when a datagram is larger than the MTU. But I had already split the stream into RTP packets, so this should not normally happen.
Linphone uses RTP to packetize H.264.
Generally, there is an access unit delimiter NAL at the beginning, and the data between two delimiters is one picture; 00 00 01 09 is this delimiter. However, the standard does not require that a NAL stream be organized this way; other organizations are possible. My H.264 files happen to be organized like this.
The H.264 byte stream itself has no frame concept, so please study the Annex B byte-stream format carefully.
A Symbian version of an H.264 / MPEG-4 player is provided:
(1) watch a remote camera image in real time, synchronized between phone and computer;
(2) instant capture on the phone, to get real-time pictures anytime, anywhere;
(3) video recording at any time, with playback;
(4) download and play video clips from remote servers (AVI and 3GP formats supported);
(5) support for most currently available Symbian 3rd- and 5th-edition smartphones;
(6) applicable to the ...