Live555 Study Notes 9: H.264 RTP Transmission (1)


IX. H.264 RTP Transmission Details (1)

In the previous chapters' introduction to the server, one very important question was never explored carefully: how is a file opened and its SDP information obtained? Let's start from there.

When the RTSPServer receives a DESCRIBE request for a media session, it finds the corresponding ServerMediaSession and calls ServerMediaSession::generateSDPDescription(). In generateSDPDescription(), all ServerMediaSubsessions contained in the ServerMediaSession are traversed, the SDP fragment of each subsession is obtained through subsession->sdpLines(), and these are merged into one complete SDP description, which is returned.
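That traversal can be sketched roughly as follows. This is a simplified illustration of the idea, not live555's actual implementation; the `Subsession` type and the session-header parameter are made up for the sketch:

```cpp
#include <string>
#include <vector>

// Hypothetical minimal model of a subsession: it just hands back
// the media-level SDP fragment that sdpLines() would produce.
struct Subsession {
    std::string sdp;
    const std::string& sdpLines() const { return sdp; }
};

// Sketch of generateSDPDescription(): session-level lines first
// (v=, o=, s=, ...), then each subsession's media-level block appended.
std::string generateSDPDescription(const std::string& sessionHeader,
                                   const std::vector<Subsession>& subs) {
    std::string sdp = sessionHeader;
    for (const auto& s : subs)
        sdp += s.sdpLines();
    return sdp;
}
```

The important point is only the shape: the session's SDP is a concatenation, so each subsession must be able to produce its own fragment on demand.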
We can almost conclude that opening and analyzing the file must happen inside each subsession's sdpLines() function. Let's look at this function:

```cpp
char const* OnDemandServerMediaSubsession::sdpLines() {
  if (fSDPLines == NULL) {
    // We need to construct a set of SDP lines that describe this
    // subsession (as a unicast stream).  To do so, we first create
    // dummy (unused) source and "RTPSink" objects,
    // whose parameters we use for the SDP lines:
    unsigned estBitrate;
    FramedSource* inputSource = createNewStreamSource(0, estBitrate);
    if (inputSource == NULL) return NULL; // file not found

    struct in_addr dummyAddr;
    dummyAddr.s_addr = 0;
    Groupsock dummyGroupsock(envir(), dummyAddr, 0, 0);
    unsigned char rtpPayloadType = 96 + trackNumber() - 1; // if dynamic
    RTPSink* dummyRTPSink
      = createNewRTPSink(&dummyGroupsock, rtpPayloadType, inputSource);

    setSDPLinesFromRTPSink(dummyRTPSink, inputSource, estBitrate);
    Medium::close(dummyRTPSink);
    closeStreamSource(inputSource);
  }

  return fSDPLines;
}
```

The subsession caches the SDP fragment for its media file in fSDPLines, but fSDPLines is NULL the first time through, so it has to be computed first. The method is rather roundabout: a temporary source and RTPSink are created and connected, data is "played" through them for a while, and only then can fSDPLines be extracted. createNewStreamSource() and createNewRTPSink() are both virtual functions, so the concrete source and sink are chosen by the derived class. Since we are analyzing H.264, they are supplied by H264VideoFileServerMediaSubsession. Let's look at these two functions:

```cpp
FramedSource* H264VideoFileServerMediaSubsession::createNewStreamSource(
    unsigned /*clientSessionId*/, unsigned& estBitrate) {
  estBitrate = 500; // kbps, estimate

  // Create the video source:
  ByteStreamFileSource* fileSource
    = ByteStreamFileSource::createNew(envir(), fFileName);
  if (fileSource == NULL) return NULL;
  fFileSize = fileSource->fileSize();

  // Create a framer for the video elementary stream:
  return H264VideoStreamFramer::createNew(envir(), fileSource);
}

RTPSink* H264VideoFileServerMediaSubsession::createNewRTPSink(
    Groupsock* rtpGroupsock, unsigned char rtpPayloadTypeIfDynamic,
    FramedSource* /*inputSource*/) {
  return H264VideoRTPSink::createNew(envir(), rtpGroupsock,
                                     rtpPayloadTypeIfDynamic);
}
```

We can see that an H264VideoStreamFramer and an H264VideoRTPSink are created, respectively. H264VideoStreamFramer is certainly also a source, but internally it uses another source, a ByteStreamFileSource. Why it is structured this way will be analyzed later; do not worry about it here. We still have not seen where the SDP is actually assembled, so let's continue exploring:

```cpp
void OnDemandServerMediaSubsession::setSDPLinesFromRTPSink(
    RTPSink* rtpSink, FramedSource* inputSource, unsigned estBitrate) {
  if (rtpSink == NULL) return;

  char const* mediaType = rtpSink->sdpMediaType();
  unsigned char rtpPayloadType = rtpSink->rtpPayloadType();
  struct in_addr serverAddrForSDP;
  serverAddrForSDP.s_addr = fServerAddressForSDP;
  char* const ipAddressStr = strDup(our_inet_ntoa(serverAddrForSDP));
  char* rtpmapLine = rtpSink->rtpmapLine();
  char const* rangeLine = rangeSDPLine();
  char const* auxSDPLine = getAuxSDPLine(rtpSink, inputSource);
  if (auxSDPLine == NULL) auxSDPLine = "";

  char const* const sdpFmt =
      "m=%s %u RTP/AVP %d\r\n"
      "c=IN IP4 %s\r\n"
      "b=AS:%u\r\n"
      "%s"
      "%s"
      "%s"
      "a=control:%s\r\n";
  unsigned sdpFmtSize = strlen(sdpFmt)
    + strlen(mediaType) + 5 /* max short len */
    + 3 /* max char len */
    + strlen(ipAddressStr) + 20 /* max int len */
    + strlen(rtpmapLine) + strlen(rangeLine) + strlen(auxSDPLine)
    + strlen(trackId());
  char* sdpLines = new char[sdpFmtSize];
  sprintf(sdpLines, sdpFmt,
          mediaType,       // m= <media>
          fPortNumForSDP,  // m= <port>
          rtpPayloadType,  // m= <fmt list>
          ipAddressStr,    // c= address
          estBitrate,      // b=AS:<bandwidth>
          rtpmapLine,      // a=rtpmap:... (if present)
          rangeLine,       // a=range:... (if present)
          auxSDPLine,      // optional extra SDP line
          trackId());      // a=control:<track-id>
  delete[] (char*)rangeLine;
  delete[] rtpmapLine;
  delete[] ipAddressStr;

  fSDPLines = strDup(sdpLines);
  delete[] sdpLines;
}
```

This function assembles the subsession's SDP fragment and saves it into fSDPLines. So is the file opened inside rtpSink->rtpmapLine(), or even earlier, when the source was created?
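For concreteness, here is a small sketch of what that sprintf produces for an H.264 track. All the values (address, track id, bitrate) are made-up examples, not taken from live555:

```cpp
#include <cstdio>
#include <string>

// Illustrative only: format an SDP media block the way
// setSDPLinesFromRTPSink() does, with hypothetical example values.
std::string buildMediaSDP() {
    char buf[512];
    std::snprintf(buf, sizeof buf,
                  "m=%s %u RTP/AVP %d\r\n"
                  "c=IN IP4 %s\r\n"
                  "b=AS:%u\r\n"
                  "%s"            // a=rtpmap:... (if present)
                  "%s"            // a=range:... (if present)
                  "%s"            // optional extra (aux) SDP line
                  "a=control:%s\r\n",
                  "video", 0u, 96,               // port 0: chosen later via SETUP
                  "192.168.0.10",                // hypothetical server address
                  500u,                           // estimated bitrate, kbps
                  "a=rtpmap:96 H264/90000\r\n",
                  "",                             // no range line in this sketch
                  "",                             // aux line unknown until the file is read
                  "track1");                      // hypothetical track id
    return buf;
}
```

Note that the aux line slot is empty here; as we are about to see, filling it is exactly the hard part for H.264.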
Let's set that question aside for now and first make the SDP acquisition process clear, so focus on getAuxSDPLine():

```cpp
char const* OnDemandServerMediaSubsession::getAuxSDPLine(
    RTPSink* rtpSink, FramedSource* /*inputSource*/) {
  // Default implementation:
  return rtpSink == NULL ? NULL : rtpSink->auxSDPLine();
}
```

Simple enough: it just calls rtpSink->auxSDPLine(), so next we would look at H264VideoRTPSink::auxSDPLine(). Actually there is no need to read it in detail: it takes the SPS and PPS saved in the source and forms an a=fmtp line. But in reality it is not that simple, because H264VideoFileServerMediaSubsession overrides getAuxSDPLine()! If it were not overridden, that would mean the aux SDP line could be obtained before the file is analyzed. Since it is overridden, the line is not available that early and has to be produced inside this function instead. Look at H264VideoFileServerMediaSubsession:
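For reference, the aux SDP line for H.264 has roughly the form `a=fmtp:96 packetization-mode=1;profile-level-id=...;sprop-parameter-sets=<base64 SPS>,<base64 PPS>` (per RFC 6184). A minimal sketch of assembling such a line from raw SPS/PPS NAL units follows; the helper names and the hand-rolled base64 encoder are illustrative, not live555 code:

```cpp
#include <cstdio>
#include <cstdint>
#include <string>
#include <vector>

// Minimal base64 encoder (standard alphabet) -- enough for a sketch.
std::string base64(const std::vector<uint8_t>& in) {
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    std::string out;
    size_t i = 0;
    while (i + 2 < in.size()) {          // full 3-byte groups
        uint32_t n = (in[i] << 16) | (in[i+1] << 8) | in[i+2];
        out += tbl[(n >> 18) & 63]; out += tbl[(n >> 12) & 63];
        out += tbl[(n >> 6) & 63];  out += tbl[n & 63];
        i += 3;
    }
    if (i + 1 == in.size()) {            // one trailing byte -> "XX=="
        uint32_t n = in[i] << 16;
        out += tbl[(n >> 18) & 63]; out += tbl[(n >> 12) & 63]; out += "==";
    } else if (i + 2 == in.size()) {     // two trailing bytes -> "XXX="
        uint32_t n = (in[i] << 16) | (in[i+1] << 8);
        out += tbl[(n >> 18) & 63]; out += tbl[(n >> 12) & 63];
        out += tbl[(n >> 6) & 63];  out += '=';
    }
    return out;
}

// Hypothetical sketch: build an H.264 a=fmtp line from SPS/PPS NAL units.
std::string fmtpLine(unsigned pt, const std::vector<uint8_t>& sps,
                     const std::vector<uint8_t>& pps) {
    char plid[7];
    // profile-level-id is bytes 1..3 of the SPS NAL unit, in hex.
    std::snprintf(plid, sizeof plid, "%02X%02X%02X", sps[1], sps[2], sps[3]);
    return "a=fmtp:" + std::to_string(pt)
         + " packetization-mode=1;profile-level-id=" + std::string(plid)
         + ";sprop-parameter-sets=" + base64(sps) + "," + base64(pps)
         + "\r\n";
}
```

The point is that every field of this line is derived from the SPS/PPS bytes themselves, which is exactly why nothing can be emitted until those NAL units have been read from the stream.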

```cpp
char const* H264VideoFileServerMediaSubsession::getAuxSDPLine(
    RTPSink* rtpSink, FramedSource* inputSource) {
  if (fAuxSDPLine != NULL)
    return fAuxSDPLine; // it's already been set up (for a previous client)

  if (fDummyRTPSink == NULL) { // we're not already setting it up (for another, concurrent stream)
    // Note: For H264 video files, the 'config' information ("profile-level-id" and
    // "sprop-parameter-sets") isn't known until we start reading the file.
    // This means that "rtpSink"s "auxSDPLine()" will be NULL initially,
    // and we need to start reading data from our file until this changes.
    fDummyRTPSink = rtpSink;

    // Start reading the file:
    fDummyRTPSink->startPlaying(*inputSource, afterPlayingDummy, this);

    // Check whether the sink's 'auxSDPLine()' is ready:
    checkForAuxSDPLine(this);
  }

  envir().taskScheduler().doEventLoop(&fDoneFlag);

  return fAuxSDPLine;
}
```

The comment explains it clearly: for H.264, the SPS/PPS cannot be read from a file header, because a raw elementary stream has no header; they only become known after playback starts. In other words, they cannot be obtained from the RTPSink up front. To guarantee that the aux SDP line is available before this function returns, the event loop is run right here: doEventLoop() blocks until fDoneFlag is set. afterPlayingDummy() is executed when the dummy playback ends, i.e. after the aux SDP line has been obtained. So what does checkForAuxSDPLine() do before the event loop starts?

```cpp
void H264VideoFileServerMediaSubsession::checkForAuxSDPLine1() {
  char const* dasl;

  if (fAuxSDPLine != NULL) {
    // Signal the event loop that we're done:
    setDoneFlag();
  } else if (fDummyRTPSink != NULL
             && (dasl = fDummyRTPSink->auxSDPLine()) != NULL) {
    fAuxSDPLine = strDup(dasl);
    fDummyRTPSink = NULL;

    // Signal the event loop that we're done:
    setDoneFlag();
  } else {
    // Try again after a brief delay:
    int uSecsToDelay = 100000; // 100 ms
    nextTask() = envir().taskScheduler().scheduleDelayedTask(uSecsToDelay,
        (TaskFunc*)checkForAuxSDPLine, this);
  }
}
```

It checks whether the aux SDP line has already been obtained: if so, it sets the done flag and returns. Otherwise it asks the sink via fDummyRTPSink->auxSDPLine(); if the sink has it, it copies the line, sets the done flag, and returns. If not, it reschedules itself as a delayed task, so the check (and the call to fDummyRTPSink->auxSDPLine()) repeats every 100 milliseconds. The event loop stops as soon as fDoneFlag is set, i.e. once the aux SDP line has been obtained. If the end of file is reached without obtaining it, afterPlayingDummy() runs and stops the event loop as well. The temporary source and sink are then closed back in the parent subsession class's sdpLines(), and are re-created when actual playback begins.
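Note that live555 implements this polling single-threaded, by rescheduling itself through the event loop with scheduleDelayedTask(). Stripped of live555, the underlying pattern is just "check, and if not ready, try again after a delay". Here is an illustrative thread-sleep sketch of that pattern (this is not how live555 actually schedules; the function and its parameters are made up):

```cpp
#include <chrono>
#include <functional>
#include <thread>

// Illustrative stand-in for the scheduleDelayedTask()-based polling:
// re-check a condition every `delay` until it holds, or give up after
// `maxTries` attempts (analogous to reaching end-of-file without SPS/PPS).
bool pollUntil(const std::function<bool()>& ready,
               std::chrono::microseconds delay, int maxTries) {
    for (int i = 0; i < maxTries; ++i) {
        if (ready()) return true;           // analogous to setDoneFlag()
        std::this_thread::sleep_for(delay); // analogous to the 100 ms delayed task
    }
    return false;
}
```

The design point carries over: the caller of getAuxSDPLine() gets a synchronous-looking interface, while underneath the work is driven by repeated scheduled checks rather than a blocking read.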
