RTP Reference Document
References: RFC 3550 / RFC 3551
RTP (Real-Time Transport Protocol) is a transport-layer protocol for multimedia data streams on the Internet. RTP defines the standard packet format for delivering audio and video over IP networks. It is widely used in streaming media systems (together with RTCP), video conferencing, and push-to-talk systems (with H.323 or SIP), making it a technical foundation of the IP telephony industry. RTP is used together with its control protocol, RTCP, and is typically carried over UDP.
RTP itself does not provide a timely-delivery mechanism or other quality-of-service (QoS) guarantees; it relies on lower-layer services for that. RTP neither guarantees delivery nor prevents out-of-order delivery, and it does not assume the underlying network is reliable. Instead, the sequence number in each RTP packet allows the receiver to reconstruct the sender's packet order, and it can also be used to locate a packet's proper position, for example in video decoding, where packets need not be decoded strictly in sequence.
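To make the header layout and the role of the sequence number concrete, here is a minimal sketch (not a full implementation) that parses the fixed 12-byte RTP header described in RFC 3550 and reorders packets by sequence number, tolerating 16-bit wraparound:

```python
import struct

def parse_rtp_header(packet: bytes) -> dict:
    """Parse the fixed 12-byte RTP header (RFC 3550, section 5.1)."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,          # always 2 for RTP
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,
        "csrc_count": b0 & 0x0F,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,
        "sequence": seq,             # 16-bit, wraps at 65536
        "timestamp": timestamp,
        "ssrc": ssrc,
    }

def reorder(packets):
    """Sort packets by sequence number, tolerating 16-bit wraparound."""
    base = parse_rtp_header(packets[0])["sequence"]
    # Distance from the first packet, modulo 2**16, is a wrap-safe sort key.
    return sorted(packets,
                  key=lambda p: (parse_rtp_header(p)["sequence"] - base) & 0xFFFF)
```

A real receiver would also use a jitter buffer rather than sorting a complete batch, but the modular-distance trick shown here is the standard way to compare sequence numbers across the 65535 → 0 wrap.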
RTP consists of two closely linked parts: RTP itself, which carries data with real-time properties, and the RTP Control Protocol (RTCP), which monitors quality of service and conveys information about the participants in an ongoing session.
RTCP
The Real-time Transport Control Protocol (RTP Control Protocol, or RTCP) is the sister protocol of the Real-time Transport Protocol (RTP). RTCP provides out-of-band control for an RTP media stream. RTCP does not carry any media data itself, but works alongside RTP, which packages and delivers the multimedia data. RTCP periodically transmits control packets among the participants in a streaming multimedia session. Its primary function is to provide feedback on the quality of service being delivered by RTP.
RTCP collects statistics about the media connection, such as bytes sent, packets sent, packets lost, jitter, and one-way and round-trip network latency. Applications can use this information to improve service quality, for example by limiting traffic or switching to a lower-bitrate codec. RTCP itself provides neither encryption nor authentication; SRTCP can be used for those purposes.
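The jitter statistic mentioned above has a precise definition in RFC 3550, section 6.4.1: a running estimate of the variation in packet transit time, smoothed with a gain of 1/16. A minimal sketch of that calculation (units are RTP timestamp units, e.g. 1/8000 s for 8 kHz audio):

```python
class JitterEstimator:
    """Interarrival jitter estimate per RFC 3550, section 6.4.1."""

    def __init__(self):
        self.jitter = 0.0
        self.prev_transit = None

    def update(self, rtp_timestamp: int, arrival_timestamp: int) -> float:
        # Relative transit time = arrival time - RTP timestamp,
        # both expressed in RTP timestamp units.
        transit = arrival_timestamp - rtp_timestamp
        if self.prev_transit is not None:
            d = abs(transit - self.prev_transit)
            # Running estimate with gain 1/16, as specified in the RFC.
            self.jitter += (d - self.jitter) / 16.0
        self.prev_transit = transit
        return self.jitter
```

The receiver reports this value in each RTCP Receiver Report, which is how a sender learns about network jitter without any extra measurement traffic.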
SRTP & SRTCP
References: RFC 3711
The Secure Real-time Transport Protocol (SRTP) is a protocol defined on top of the Real-time Transport Protocol (RTP); its aim is to provide encryption, message authentication, integrity assurance, and replay protection for RTP traffic in unicast and multicast applications. It was developed by David Oran (Cisco) and Rolf Blom (Ericsson) and first published by the IETF as RFC 3711 in March 2004.
Because RTP is closely tied to the RTP Control Protocol (RTCP), which can be used to control RTP sessions, SRTP likewise has a companion protocol, the Secure Real-time Transport Control Protocol (SRTCP). SRTCP provides the same security-related features to RTCP that SRTP provides to RTP.
Using SRTP or SRTCP is optional when RTP or RTCP is used. Moreover, even when SRTP or SRTCP is in use, every feature they provide (such as encryption and authentication) is optional and can be enabled or disabled independently. The only exception is that message authentication is mandatory when SRTCP is used.
RTSP
References: RFC 2326
RTSP was jointly proposed by RealNetworks and Netscape. The protocol defines how one-to-many applications can efficiently deliver multimedia data over IP networks. RTSP provides an extensible framework for real-time data, such as controlled audio/video and video on demand; data sources can be live feeds or stored clips. The protocol is intended to control multiple data-delivery connections: it offers a way to choose delivery channels such as UDP, multicast UDP, and TCP, and a way to select delivery mechanisms based on RTP.
RTSP (Real Time Streaming Protocol) is a multimedia streaming protocol used to control sound or video and to handle multiple concurrent stream requests; the transport protocol used to carry the media is outside its scope, and the server may deliver the stream over TCP or UDP. Its syntax and operation resemble HTTP/1.1, but strict time synchronization is not emphasized, so network latency can be tolerated. Multicast support reduces the server's network load and enables multi-party video conferencing by serving many stream requests at once. Because RTSP works much like HTTP/1.1, proxy caching applies to it as well, and requests can be redirected to different servers based on actual load, avoiding the delays caused by concentrating load on one server.
Relationship between RTSP and RTP
Unlike HTTP and FTP, RTP does not download the complete video file. It sends data over the network at a fixed rate, and the client watches the video at that same rate. Once a portion of the video has been played, it cannot be replayed unless the client requests the data from the server again.
The biggest difference between RTSP and RTP is that RTSP is a bidirectional, real-time control protocol: it lets the client send requests to the server for operations such as play, fast-forward, and rewind. RTSP can deliver data over RTP, and it can also select TCP, UDP, multicast UDP, and other channels, giving it good extensibility; it is an application-layer protocol similar to HTTP. One practical scenario: a server captures, encodes, and sends two live video streams, and the client receives and displays both. Because the client never needs to seek or rewind the video data, plain UDP + RTP + multicast can be used directly.
RTP (Real-Time Transport Protocol)
RTP/RTCP is the actual data transmission protocol.
RTP carries the audio/video data. For a PLAY request, the server sends data to the client; for a RECORD request, the client sends data to the server.
The RTP protocol as a whole consists of two closely related parts: the RTP data protocol and the RTP Control Protocol (RTCP).
RTSP (Real Time Streaming Protocol)
RTSP requests mainly include DESCRIBE, SETUP, PLAY, PAUSE, TEARDOWN, and OPTIONS. As the names suggest, their role is session dialogue and control.
During an RTSP session, SETUP negotiates the ports used by RTP/RTCP, while PLAY, PAUSE, and TEARDOWN start or stop the sending of RTP packets.
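Because RTSP's request syntax mirrors HTTP/1.1, building a request is mostly string formatting. A minimal sketch of how a client might format these requests (the URL and Transport header values below are illustrative, not tied to any real server):

```python
def build_rtsp_request(method: str, url: str, cseq: int, headers=None) -> str:
    """Format an RTSP/1.0 request line plus headers (RFC 2326 syntax).

    Every request carries a CSeq header that the server echoes back,
    which is how requests and responses are matched up.
    """
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    # Like HTTP, the head ends with an empty line (CRLF CRLF).
    return "\r\n".join(lines) + "\r\n\r\n"

# A typical session walks through:
#   OPTIONS -> DESCRIBE -> SETUP -> PLAY -> ... -> TEARDOWN
setup = build_rtsp_request(
    "SETUP", "rtsp://example.com/stream/track1", 3,
    {"Transport": "RTP/AVP;unicast;client_port=8000-8001"},
)
```

The Transport header in the SETUP request is where the client proposes the RTP/RTCP client ports; the server's reply confirms them and adds its own server ports.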
RTCP:
RTCP carries Sender Reports (SR) and Receiver Reports (RR), used for audio/video synchronization and other purposes. It is a control protocol.
SDP
The Session Description Protocol (SDP) provides multimedia session descriptions for session notifications, session invitations, and other forms of multimedia session initialization.
A session directory assists in announcing multimedia conferences and conveys the session settings participants need; SDP is used to carry this information to the receivers. SDP is purely a session description format, not a transport protocol; it is carried by various suitable transport protocols, including the Session Announcement Protocol (SAP), the Session Initiation Protocol (SIP), the Real Time Streaming Protocol (RTSP), email with MIME extensions, and the Hypertext Transfer Protocol (HTTP).
SDP is designed to be general-purpose: it applies to a wide range of network environments and applications, not only multicast session directories. SDP does not, however, support negotiation of session content or media encodings.
On the Internet multicast backbone (Mbone), a session directory tool is used to announce multimedia conferences and to distribute the conference addresses and conference-specific tool information participants need; this is done with SDP. Once connected, SDP gives the session participants enough information to join. The Session Announcement Protocol (SAP) is used to carry SDP messages: it periodically multicasts announcement packets to a well-known multicast address and port. Each announcement is a UDP packet containing a SAP header and a text payload, where the text payload is the SDP session description. The information can also be distributed by email or over the World Wide Web (WWW).
SDP text information includes:
- Session name and intent;
- Session duration;
- Media that constitutes a session;
- Information needed to receive the media (addresses, etc.).
Protocol Structure
SDP information is text encoded in the UTF-8 encoding of the ISO 10646 character set. An SDP session description is structured as follows (fields marked with * are optional):
v= (protocol version)
o= (owner/creator and session identifier)
s= (session name)
i=* (session information)
u=* (URI of description)
e=* (email address)
p=* (phone number)
c=* (connection information; not required if included in all media)
b=* (bandwidth information)
One or more time descriptions (see below)
z=* (time zone adjustments)
k=* (encryption key)
a=* (zero or more session attribute lines)
Zero or more media descriptions (see below)
Time description
t= (time the session is active)
r=* (zero or more repeat times)
Media description
m= (media name and transport address)
i=* (media title)
c=* (connection information; optional if included at session level)
b=* (bandwidth information)
k=* (encryption key)
a=* (zero or more media attribute lines)
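As an illustration, here is a hypothetical session description using these fields, together with a minimal sketch of a parser that splits it into (type, value) pairs. The host address, username, and media values are invented for the example:

```python
# An invented session description exercising the common field types.
SAMPLE_SDP = """\
v=0
o=alice 2890844526 2890844526 IN IP4 192.0.2.10
s=Example Session
c=IN IP4 192.0.2.10
t=0 0
m=audio 49170 RTP/AVP 0
a=rtpmap:0 PCMU/8000
"""

def parse_sdp(text: str):
    """Split an SDP description into (type, value) pairs.

    Each non-empty line has the form "<type>=<value>" with a
    single-character type, as the field list above shows.
    """
    fields = []
    for line in text.splitlines():
        if not line:
            continue
        key, _, value = line.partition("=")
        fields.append((key, value))
    return fields
```

A real parser would additionally enforce the field ordering (v, o, s, ... before time and media sections) and group m= lines with the i/c/b/k/a lines that follow them; this sketch only shows the line-level syntax.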
RTMP/RTMPS
RTMP (Real Time Messaging Protocol) is an open protocol developed by Adobe Systems for transmitting audio, video, and data between Flash Player and a server.
It has three variants:
1) plain RTMP works over TCP and uses port 1935;
2) RTMPT encapsulates RTMP in HTTP requests and can pass through firewalls;
3) RTMPS is similar to RTMPT but uses HTTPS connections.
Flash uses the RTMP protocol for object, video, and audio transmission. The protocol runs on top of TCP, or over HTTP via polling.
RTMP acts like a container for data packets; the payload can be AMF-encoded data or FLV audio/video data. A single connection can carry multiple streams over different channels, and the packets in these channels are transmitted as fixed-size chunks.
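Each chunk begins with a basic header that identifies the channel (chunk stream id) and the header format that follows. A sketch of decoding that basic header, per the RTMP chunking rules (it assumes well-formed input; a real demuxer would also decode the message header that follows):

```python
def parse_chunk_basic_header(data: bytes):
    """Decode the RTMP chunk basic header (1, 2, or 3 bytes).

    The top 2 bits of the first byte are fmt (message-header format);
    the low 6 bits carry the chunk stream id (csid). Values 0 and 1
    in those 6 bits signal the 2- and 3-byte encodings used for
    larger csid values.

    Returns (fmt, csid, header_length).
    """
    fmt = data[0] >> 6
    csid = data[0] & 0x3F
    if csid == 0:
        # 2-byte form: csid = second byte + 64 (range 64..319)
        return fmt, data[1] + 64, 2
    if csid == 1:
        # 3-byte form: csid = third*256 + second + 64 (range 64..65599)
        return fmt, data[2] * 256 + data[1] + 64, 3
    # 1-byte form: csid in 2..63
    return fmt, csid, 1
```

The multi-byte forms are what let one connection multiplex many logical streams: a server can interleave audio, video, and control chunks on separate chunk stream ids.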
MMS
MMS (Microsoft Media Server protocol) is a protocol for accessing and streaming .asf files on Windows Media servers. MMS is used to access unicast content on a Windows Media publishing point and is the default method for connecting to the Windows Media unicast service. If viewers connect to content by typing a URL into Windows Media Player, rather than through a hyperlink, they must reference the stream using the MMS protocol. The default MMS port is 1755.
When connecting to a publishing point with the MMS protocol, protocol rollover is used to obtain the best connection. Rollover begins with an attempt to connect to the client via MMSU, which is the MMS protocol combined with UDP data transport. If the MMSU connection fails, the server tries MMST, which is the MMS protocol combined with TCP data transport.
To connect to an indexed .asf file and fast-forward, rewind, pause, start, and stop the stream, you must use MMS; you cannot fast-forward or rewind over a UNC path. When connecting to a publishing point from a standalone Windows Media Player, you must specify the URL of the unicast content. If the content is published on demand at the home publishing point, the URL consists of the server name and the .asf file name. Example: mms://windows_media_server/sample.asf, where windows_media_server is the name of the Windows Media server and sample.asf is the name of the .asf file you want to stream.
To publish live content as a broadcast unicast, the URL consists of the server name and the publishing point alias. For example: mms://windows_media_server/liveevents, where windows_media_server is the name of the Windows Media server and liveevents is the name of the publishing point.
HLS
HTTP Live Streaming (HLS) is Apple's HTTP-based streaming media protocol, supporting both live and on-demand streaming. It is used mainly on iOS and provides live and on-demand audio/video solutions for iOS devices (such as the iPhone and iPad). HLS video on demand is essentially ordinary segmented HTTP VOD; the difference is that its segments are very small.
Compared with common live-streaming protocols such as RTMP, RTSP, and MMS, the biggest difference with HLS is that the client does not receive a single complete data stream. Instead, the server stores the live stream as a sequence of continuous, very short media files (in MPEG-TS format), and the client continuously downloads and plays these small files. Because the server keeps generating new files from the latest live data, the client effectively watches live simply by playing the files it fetches from the server, in order. HLS live streaming is therefore achieved, in essence, with video-on-demand techniques. Since the data travels over HTTP, firewalls and proxies are not a concern, and because the segment files are very short, the client can quickly select and switch bit rates to adapt to different bandwidth conditions. This same design, however, means that HLS latency is generally higher than that of common live-streaming protocols.
To achieve HTTP live streaming live broadcast, you need to study and implement the following key technical points:
- Capture data from the video and audio sources
- Encode the raw data with H.264 and AAC
- Encapsulate the audio and video data into MPEG-TS packets
- Implement the HLS segmentation policy and generate the m3u8 index file
- Serve the segments over HTTP
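A sketch of the index-file side of the segmentation step: generating a minimal live media playlist (.m3u8). The segment file names and durations below are placeholders, and real deployments add further tags (e.g. for encryption or variant streams):

```python
def write_media_playlist(segments, target_duration, media_sequence=0):
    """Render a minimal HLS media playlist (.m3u8).

    segments is a list of (filename, duration_seconds) pairs.
    For live streaming, the server rewrites this file as new
    segments are produced, advancing media_sequence as old
    segments drop off the front.
    """
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
    ]
    for name, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(name)
    # A live playlist omits #EXT-X-ENDLIST so clients keep polling
    # for new segments; a VOD playlist would append it here.
    return "\n".join(lines) + "\n"

playlist = write_media_playlist([("seg0.ts", 10.0), ("seg1.ts", 10.0)], 10)
```

The absence of the end-list tag is what signals "live" to the client: the player re-fetches the playlist on roughly the target-duration interval, which is also why HLS latency is at least a few segment durations.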
Basic knowledge about streaming media protocols