Live555 Study Notes 12 - H.264 RTP Packet Timestamps

Source: Internet
Author: User

12. The H.264 RTP Packet Timestamp

Let's take H.264 as an example.

void H264VideoRTPSink::doSpecialFrameHandling(unsigned /*fragmentationOffset*/,
                                              unsigned char* /*frameStart*/,
                                              unsigned /*numBytesInFrame*/,
                                              struct timeval framePresentationTime,
                                              unsigned /*numRemainingBytes*/)
{
  // Set the RTP 'M' (marker) bit iff
  // 1/ The most recently delivered fragment was the end of (or the only fragment of) an NAL unit, and
  // 2/ This NAL unit was the last NAL unit of an 'access unit' (i.e. video frame).
  if (fOurFragmenter != NULL) {
    H264VideoStreamFramer* framerSource
      = (H264VideoStreamFramer*)(fOurFragmenter->inputSource());
    // This relies on our fragmenter's source being a "H264VideoStreamFramer".
    if (fOurFragmenter->lastFragmentCompletedNALUnit()
        && framerSource != NULL && framerSource->pictureEndMarker()) {
      setMarkerBit();
      framerSource->pictureEndMarker() = False;
    }
  }

  setTimestamp(framePresentationTime);
}

The function first checks whether this is the last packet of a frame; if so, it sets the 'M' (marker) bit and then sets the timestamp. Where does this timestamp come from? That depends on who calls doSpecialFrameHandling(). A search shows it is called by MultiFramedRTPSink::afterGettingFrame1(), whose presentationTime parameter is passed straight through to doSpecialFrameHandling(). And MultiFramedRTPSink::afterGettingFrame1() was handed to the source as the completion callback when the sink called the source's getNextFrame(). Which source? H264FUAFragmenter — remember the sleight of hand by which the fragmenter was quietly substituted as the sink's source? So after H264FUAFragmenter obtains an NAL unit, it invokes MultiFramedRTPSink::afterGettingFrame1(); that is, H264FUAFragmenter::afterGettingFrame1() calls MultiFramedRTPSink::afterGettingFrame1().

H264FUAFragmenter::afterGettingFrame1() is in turn called from the afterGettingFrame1() of its own source. And who is H264FUAFragmenter's source? It is H264VideoStreamFramer, which was passed to H264FUAFragmenter's constructor during that same behind-the-scenes substitution.
H264VideoStreamFramer has no afterGettingFrame1() of its own; MPEGVideoStreamFramer::continueReadProcessing() plays that role instead — it is quietly passed to the StreamParser constructor by MPEGVideoStreamParser. Therefore, after StreamParser has finished reading a chunk of data, it calls MPEGVideoStreamFramer::continueReadProcessing(). Here is the proof. (Clarification: the functions below do not themselves analyze a frame (or NAL unit); the parser uses ByteStreamFileSource to obtain the raw data, and then MPEGVideoStreamFramer calls the parser's parse() function to analyze it.)

void StreamParser::afterGettingBytes(void* clientData,
                                     unsigned numBytesRead,
                                     unsigned /*numTruncatedBytes*/,
                                     struct timeval presentationTime,
                                     unsigned /*durationInMicroseconds*/)
{
  StreamParser* parser = (StreamParser*)clientData;
  if (parser != NULL)
    parser->afterGettingBytes1(numBytesRead, presentationTime);
}

void StreamParser::afterGettingBytes1(unsigned numBytesRead,
                                      struct timeval presentationTime)
{
  // Sanity check: make sure we didn't get too many bytes for our bank:
  if (fTotNumValidBytes + numBytesRead > BANK_SIZE) {
    fInputSource->envir()
      << "StreamParser::afterGettingBytes() warning: read "
      << numBytesRead << " bytes; expected no more than "
      << BANK_SIZE - fTotNumValidBytes << "\n";
  }

  fLastSeenPresentationTime = presentationTime;

  unsigned char* ptr = &curBank()[fTotNumValidBytes];
  fTotNumValidBytes += numBytesRead;

  // Continue our original calling source where it left off:
  restoreSavedParserState();
  // Sigh... this is a crock; things would have been a lot simpler
  // here if we were using threads, with synchronous I/O...
  fClientContinueFunc(fClientContinueClientData, ptr, numBytesRead,
                      presentationTime);
}

fClientContinueFunc is MPEGVideoStreamFramer::continueReadProcessing(), and we can see that the timestamp is passed into fClientContinueFunc.
However, this timestamp is ignored in MPEGVideoStreamFramer::continueReadProcessing(), because it was calculated by ByteStreamFileSource and is not necessarily the correct presentation time.

void MPEGVideoStreamFramer::continueReadProcessing(void* clientData,
                                                   unsigned char* /*ptr*/,
                                                   unsigned /*size*/,
                                                   struct timeval /*presentationTime*/)
{
  MPEGVideoStreamFramer* framer = (MPEGVideoStreamFramer*)clientData;
  framer->continueReadProcessing();
}

It would seem that the real timestamp is calculated in MPEGVideoStreamFramer, but H264VideoStreamFramer does not use the timestamp-calculation functions in MPEGVideoStreamFramer. In fact, H264VideoStreamFramer does not calculate the timestamp itself either; that is done by H264VideoStreamParser. In which function? In parse()!

unsigned H264VideoStreamParser::parse()
{
  try {
    // The stream must start with a 0x00000001:
    if (!fHaveSeenFirstStartCode) {
      // Skip over any input bytes that precede the first 0x00000001:
      u_int32_t first4Bytes;
      while ((first4Bytes = test4Bytes()) != 0x00000001) {
        get1Byte();
        setParseState(); // ensures that we progress over bad data
      }
      skipBytes(4); // skip this initial code

      setParseState();
      fHaveSeenFirstStartCode = True; // from now on
    }

    if (fOutputStartCodeSize > 0) {
      // Include a start code in the output:
      save4Bytes(0x00000001);
    }

    // Then save everything up until the next 0x00000001 (4 bytes) or 0x000001 (3 bytes), or we hit EOF.
    // Also make note of the first byte, because it contains the "nal_unit_type":
    u_int8_t firstByte;
    if (haveSeenEOF()) {
      // We hit EOF the last time that we tried to parse this data,
      // so we know that the remaining unparsed data forms a complete NAL unit:
      unsigned remainingDataSize = totNumValidBytes() - curOffset();
      if (remainingDataSize == 0) (void)get1Byte(); // forces another read, which will cause EOF to get handled for real this time
      if (remainingDataSize == 0) return 0;
      firstByte = get1Byte();
      saveByte(firstByte);

      while (--remainingDataSize > 0) {
        saveByte(get1Byte());
      }
    } else {
      u_int32_t next4Bytes = test4Bytes();
      firstByte = next4Bytes >> 24;
      while (next4Bytes != 0x00000001
             && (next4Bytes & 0xFFFFFF00) != 0x00000100) {
        // We save at least some of "next4Bytes".
        if ((unsigned)(next4Bytes & 0xFF) > 1) {
          // Common case: 0x00000001 or 0x000001 definitely doesn't begin anywhere in "next4Bytes", so we save all of it:
          save4Bytes(next4Bytes);
          skipBytes(4);
        } else {
          // Save the first byte, and continue testing the rest:
          saveByte(next4Bytes >> 24);
          skipBytes(1);
        }
        next4Bytes = test4Bytes();
      }
      // Assert: next4Bytes starts with 0x00000001 or 0x000001, and we've saved all previous bytes (forming a complete NAL unit).
      // Skip over these remaining bytes, up until the start of the next NAL unit:
      if (next4Bytes == 0x00000001) {
        skipBytes(4);
      } else {
        skipBytes(3);
      }
    }

    u_int8_t nal_ref_idc = (firstByte & 0x60) >> 5;
    u_int8_t nal_unit_type = firstByte & 0x1F;

    switch (nal_unit_type) {
    case 6: { // Supplemental enhancement information (SEI)
      analyze_sei_data();
      // Later, perhaps adjust "fPresentationTime" if we saw a "pic_timing" SEI payload??? #####
      break;
    }
    case 7: { // Sequence parameter set
      // First, save a copy of this NAL unit, in case the downstream object wants to see it:
      usingSource()->saveCopyOfSPS(fStartOfFrame + fOutputStartCodeSize,
                                   fTo - fStartOfFrame - fOutputStartCodeSize);

      // Parse this NAL unit to check whether frame rate information is present:
      unsigned num_units_in_tick, time_scale, fixed_frame_rate_flag;
      analyze_seq_parameter_set_data(num_units_in_tick, time_scale,
                                     fixed_frame_rate_flag);
      if (time_scale > 0 && num_units_in_tick > 0) {
        usingSource()->fFrameRate = time_scale / (2.0 * num_units_in_tick);
      } else {
      }
      break;
    }
    case 8: { // Picture parameter set
      // Save a copy of this NAL unit, in case the downstream object wants to see it:
      usingSource()->saveCopyOfPPS(fStartOfFrame + fOutputStartCodeSize,
                                   fTo - fStartOfFrame - fOutputStartCodeSize);
    }
    }

    // Update the timestamp variable:
    usingSource()->setPresentationTime();

    // If this NAL unit is a VCL NAL unit, we also scan the start of the next NAL unit, to determine whether this NAL unit
    // ends the current 'access unit'. We need this information to figure out when to increment "fPresentationTime".
    // (RTP streamers also need to know this in order to figure out whether or not to set the "M" bit.)
    Boolean thisNALUnitEndsAccessUnit = False; // until we learn otherwise
    if (haveSeenEOF()) {
      // There is no next NAL unit, so we assume that this one ends the current 'access unit':
      thisNALUnitEndsAccessUnit = True;
    } else {
      Boolean const isVCL = nal_unit_type <= 5 && nal_unit_type > 0; // would need to include type 20 for SVC and MVC #####
      if (isVCL) {
        u_int32_t first4BytesOfNextNALUnit = test4Bytes();
        u_int8_t firstByteOfNextNALUnit = first4BytesOfNextNALUnit >> 24;
        u_int8_t next_nal_ref_idc = (firstByteOfNextNALUnit & 0x60) >> 5;
        u_int8_t next_nal_unit_type = firstByteOfNextNALUnit & 0x1F;
        if (next_nal_unit_type >= 6) {
          // The next NAL unit is not a VCL; therefore, we assume that this NAL unit ends the current 'access unit':
          thisNALUnitEndsAccessUnit = True;
        } else {
          // The next NAL unit is also a VCL. We need to examine it a little to figure out if it's a different 'access unit'.
          // (We use many of the criteria described in section 7.4.1.2.4 of the H.264 specification.)
          Boolean IdrPicFlag = nal_unit_type == 5;
          Boolean next_IdrPicFlag = next_nal_unit_type == 5;
          if (next_IdrPicFlag != IdrPicFlag) {
            // IdrPicFlag differs in value
            thisNALUnitEndsAccessUnit = True;
          } else if (next_nal_ref_idc != nal_ref_idc
                     && next_nal_ref_idc * nal_ref_idc == 0) {
            // nal_ref_idc differs in value with one of the nal_ref_idc values being equal to 0
            thisNALUnitEndsAccessUnit = True;
          } else if ((nal_unit_type == 1 || nal_unit_type == 2
                      || nal_unit_type == 5)
                     && (next_nal_unit_type == 1 || next_nal_unit_type == 2
                         || next_nal_unit_type == 5)) {
            // Both this and the next NAL units begin with a "slice_header".
            // Parse this (for each), to get parameters that we can compare:

            // Current NAL unit's "slice_header":
            unsigned frame_num, pic_parameter_set_id, idr_pic_id;
            Boolean field_pic_flag, bottom_field_flag;
            analyze_slice_header(fStartOfFrame + fOutputStartCodeSize, fTo,
                                 nal_unit_type, frame_num, pic_parameter_set_id,
                                 idr_pic_id, field_pic_flag, bottom_field_flag);

            // Next NAL unit's "slice_header":
            u_int8_t next_slice_header[NUM_NEXT_SLICE_HEADER_BYTES_TO_ANALYZE];
            testBytes(next_slice_header, sizeof next_slice_header);
            unsigned next_frame_num, next_pic_parameter_set_id, next_idr_pic_id;
            Boolean next_field_pic_flag, next_bottom_field_flag;
            analyze_slice_header(next_slice_header,
                                 &next_slice_header[sizeof next_slice_header],
                                 next_nal_unit_type, next_frame_num,
                                 next_pic_parameter_set_id, next_idr_pic_id,
                                 next_field_pic_flag, next_bottom_field_flag);

            if (next_frame_num != frame_num) {
              // frame_num differs in value
              thisNALUnitEndsAccessUnit = True;
            } else if (next_pic_parameter_set_id != pic_parameter_set_id) {
              // pic_parameter_set_id differs in value
              thisNALUnitEndsAccessUnit = True;
            } else if (next_field_pic_flag != field_pic_flag) {
              // field_pic_flag differs in value
              thisNALUnitEndsAccessUnit = True;
            } else if (next_bottom_field_flag != bottom_field_flag) {
              // bottom_field_flag differs in value
              thisNALUnitEndsAccessUnit = True;
            } else if (next_IdrPicFlag == 1 && next_idr_pic_id != idr_pic_id) {
              // IdrPicFlag is equal to 1 for both and idr_pic_id differs in value
              // Note: we already know that IdrPicFlag is the same for both.
              thisNALUnitEndsAccessUnit = True;
            }
          }
        }
      }
    }

    // Note! Note! Note! The timestamp is calculated here!!
    if (thisNALUnitEndsAccessUnit) {
      usingSource()->fPictureEndMarker = True;
      ++usingSource()->fPictureCount;

      // Note that the presentation time for the next NAL unit will be different:
      struct timeval& nextPT = usingSource()->fNextPresentationTime; // alias
      nextPT = usingSource()->fPresentationTime;
      double nextFraction = nextPT.tv_usec / 1000000.0
          + 1 / usingSource()->fFrameRate;
      unsigned nextSecsIncrement = (long)nextFraction;
      nextPT.tv_sec += (long)nextSecsIncrement;
      nextPT.tv_usec = (long)((nextFraction - nextSecsIncrement) * 1000000);
    }
    setParseState();

    return curFrameSize();
  } catch (int /*e*/) {
    return 0; // the parsing got interrupted
  }
}

Each time a new frame starts, a new timestamp is calculated. It is saved in fNextPresentationTime and then copied into fPresentationTime by usingSource()->setPresentationTime().
Wow — the call relationships among the live555 classes really are convoluted and hard to maintain! I got dizzy reading my own analysis, so if it made your head spin too, that's entirely normal!

fPresentationTime is a 64-bit struct timeval. It is converted to a 32-bit RTP timestamp by convertToRTPTimestamp(). See the following function:

u_int32_t RTPSink::convertToRTPTimestamp(struct timeval tv)
{
  // Begin by converting from "struct timeval" units to RTP timestamp units:
  u_int32_t timestampIncrement = (fTimestampFrequency * tv.tv_sec);
  timestampIncrement += (u_int32_t)(
      (2.0 * fTimestampFrequency * tv.tv_usec + 1000000.0) / 2000000);
  // Note: rounding

  // Then add this to our 'timestamp base':
  if (fNextTimestampHasBeenPreset) {
    // Make the returned timestamp the same as the current "fTimestampBase",
    // so that timestamps begin with the value that was previously preset:
    fTimestampBase -= timestampIncrement;
    fNextTimestampHasBeenPreset = False;
  }

  u_int32_t const rtpTimestamp = fTimestampBase + timestampIncrement;

  return rtpTimestamp;
}

In essence, the conversion rescales the time from units of seconds to units of the timestamp frequency: after the conversion, the unit is no longer one second but 1/fTimestampFrequency of a second — for H.264, 1/90000 of a second. The result is then truncated to 32 bits.
