The similarities and differences between RTSP, RTMP, and HTTP
Similarities:
1: RTSP, RTMP, and HTTP are all application-layer protocols.
2: In theory RTSP, RTMP, and HTTP can all be used for both live and on-demand streaming, but in practice RTSP and RTMP are generally used for live streaming and HTTP for on-demand. Video conferencing originally used the SIP protocol, which by now has largely been replaced by RTMP.
Differences:
1: HTTP: Hypertext Transfer Protocol (by analogy, FTP is the File Transfer Protocol).
RTSP: Real Time Streaming Protocol, a live-streaming protocol.
RTMP: Real Time Messaging Protocol.
2: HTTP treats all data as files; the HTTP protocol is not a streaming-media protocol.
RTMP and RTSP are streaming-media protocols.
3: RTMP is Adobe's proprietary protocol and is not fully published, whereas RTSP and HTTP are open protocols maintained by dedicated standards bodies.
4: RTMP generally carries streams in FLV or F4V format, while RTSP generally carries streams in TS or MP4 format. HTTP has no specific stream format.
5: RTSP transmission generally requires 2-3 channels, with the command and data channels kept separate, whereas HTTP and RTMP generally carry both commands and data over a single TCP connection.
Differences between RTSP, RTCP, and RTP
1: RTSP (Real Time Streaming Protocol)
As an application-level protocol, RTSP provides an extensible framework for controlling the on-demand delivery of real-time streaming media. In essence, RTSP is a streaming-media presentation protocol used primarily to control the transport of data with real-time characteristics; it does not carry the data itself, but relies on services provided by an underlying transport protocol. RTSP offers streaming operations such as play, pause, and fast-forward, and it defines the specific control messages, methods, and status codes, as well as the interaction with RTP (RFC 2326).
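To make the "control only" nature of RTSP concrete, here is a minimal sketch of the command channel, assuming a hypothetical server at rtsp://example.com/live.sdp and a hypothetical track name trackID=0. The media itself would flow separately over RTP/RTCP on the UDP ports negotiated in SETUP.

```python
# A minimal sketch of the RTSP control channel (hypothetical server and track).
# RTSP only carries these text commands; the media travels over RTP/RTCP.
import socket

SERVER = "example.com"                 # hypothetical host
URL = f"rtsp://{SERVER}/live.sdp"      # hypothetical stream URL

def send_request(sock, method, url, cseq, extra=""):
    """Send one RTSP request and return the raw text response."""
    req = f"{method} {url} RTSP/1.0\r\nCSeq: {cseq}\r\n{extra}\r\n"
    sock.sendall(req.encode("ascii"))
    return sock.recv(4096).decode("ascii", errors="replace")

with socket.create_connection((SERVER, 554)) as sock:
    print(send_request(sock, "OPTIONS", URL, 1))
    print(send_request(sock, "DESCRIBE", URL, 2, "Accept: application/sdp\r\n"))
    # SETUP asks the server to send RTP on UDP port 50000 and RTCP on 50001.
    print(send_request(sock, "SETUP", URL + "/trackID=0", 3,
                       "Transport: RTP/AVP;unicast;client_port=50000-50001\r\n"))
    # A real client would parse the Session header out of the SETUP reply and
    # echo it here; PLAY then starts the RTP stream on the UDP ports above.
    print(send_request(sock, "PLAY", URL, 4,
                       "Session: 12345678\r\nRange: npt=0.000-\r\n"))
```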
2: RTCP (RTP Control Protocol)
RTCP must be used together with the RTP data protocol: when an application starts an RTP session, it occupies two ports, one for RTP and one for RTCP. RTP itself provides no guarantee of in-order delivery, nor does it provide flow control or congestion control; those functions rely on RTCP. Typically, RTCP uses the same distribution mechanism as RTP to periodically send control information to all session members. From the received reports an application learns about the other participants and gets feedback such as network conditions and packet-loss rates, which it can use for quality-of-service control or to diagnose the network.
The functions of RTCP are carried out by different types of RTCP packets, mainly the following (see the sketch after this list):
SR: Sender report. The sender is an application or terminal that emits RTP packets; a sender may also be a receiver. (The server sends its timing information to the client.)
RR: Receiver report. The receiver is an application or terminal that receives RTP packets but does not send them. (The server receives this report from the client.)
SDES: Source description. Its main function is to carry identity information about session members, such as user name, e-mail address, and telephone number; it can also convey session-control information to the members.
BYE: Departure notification. Its main function is to indicate that one or more sources are no longer active, i.e. to tell the other session members that the sender is leaving the session.
APP: Application-defined. It addresses RTCP's extensibility and gives protocol implementations a great deal of flexibility.
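As a rough illustration of how these packet types are told apart on the wire, here is a small sketch that reads the payload-type byte of each packet in a compound RTCP datagram (type values 200-204 are from RFC 3550). It is a simplified walker, not a full RTCP implementation.

```python
# Classify RTCP packets by the payload-type byte of each packet header.
import struct

RTCP_TYPES = {200: "SR", 201: "RR", 202: "SDES", 203: "BYE", 204: "APP"}

def walk_rtcp(datagram: bytes):
    """Yield (type_name, packet_bytes) for each RTCP packet in a compound datagram."""
    offset = 0
    while offset + 4 <= len(datagram):
        first_byte, pt, length_words = struct.unpack_from("!BBH", datagram, offset)
        assert first_byte >> 6 == 2, "RTCP version must be 2"
        size = (length_words + 1) * 4          # length counts 32-bit words minus one
        yield RTCP_TYPES.get(pt, f"unknown({pt})"), datagram[offset:offset + size]
        offset += size

# Example: a minimal receiver report (RR) followed by a BYE, each 8 bytes long.
rr = struct.pack("!BBHI", 0x80, 201, 1, 0x11223344)   # V=2, PT=RR,  length=1, SSRC
bye = struct.pack("!BBHI", 0x81, 203, 1, 0x11223344)  # V=2, SC=1, PT=BYE, SSRC leaving
for name, packet in walk_rtcp(rr + bye):
    print(name, len(packet), "bytes")
```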
3: RTP Data Protocol
The RTP data protocol is responsible for packetizing streaming-media data and transmitting the media stream in real time. Each RTP packet consists of a header and a payload; the meaning of the first 12 bytes of the header is fixed, while the payload can be audio or video data.
RTP comes into play after the RTSP PLAY request: the server transmits data to the client over UDP, and RTP prepends a 12-byte header (descriptive information) to the transmitted data.
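As a rough sketch of that fixed 12-byte header, the following packs the RFC 3550 field layout (version, padding, extension, CSRC count, marker, payload type, sequence number, timestamp, SSRC). Payload type 96 and the SSRC value are arbitrary choices for the example.

```python
# Pack the fixed 12-byte RTP header that precedes every payload (RFC 3550 layout).
import struct

def build_rtp_header(payload_type: int, seq: int, timestamp: int, ssrc: int,
                     marker: bool = False) -> bytes:
    vpxcc = 2 << 6                          # version=2, no padding/extension, CC=0
    m_pt = (int(marker) << 7) | (payload_type & 0x7F)
    return struct.pack("!BBHII", vpxcc, m_pt,
                       seq & 0xFFFF, timestamp & 0xFFFFFFFF, ssrc)

header = build_rtp_header(payload_type=96, seq=1, timestamp=90000, ssrc=0xDEADBEEF)
assert len(header) == 12                    # the fixed part is always 12 bytes
```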
RTP payload packaging: network transmission here is based on the IP protocol, so the maximum transmission unit (MTU) is at most 1500 bytes. With the IP/UDP/RTP protocol stack, each packet carries at least a 20-byte IP header, an 8-byte UDP header, and a 12-byte RTP header. The headers therefore take at least 40 bytes, leaving a maximum RTP payload of 1460 bytes. Taking H.264 as an example, if one frame is larger than 1460 bytes it must be fragmented into several packets and then reassembled into a complete frame at the receiving end before it can be decoded and played.
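The arithmetic above and the fragmentation step can be sketched as follows. Note that this simply slices a frame into payload-sized chunks; a real H.264 packetizer would use the FU-A format from RFC 6184 to mark the first and last fragments.

```python
# Payload budget per packet and naive fragmentation of an oversized H.264 frame.
MTU = 1500
IP_HEADER, UDP_HEADER, RTP_HEADER = 20, 8, 12
MAX_RTP_PAYLOAD = MTU - IP_HEADER - UDP_HEADER - RTP_HEADER   # 1460 bytes

def fragment_nal(nal: bytes, max_payload: int = MAX_RTP_PAYLOAD):
    """Split one NAL unit into payload-sized chunks, one per RTP packet."""
    if len(nal) <= max_payload:
        return [nal]                        # small frame: a single RTP packet suffices
    return [nal[i:i + max_payload] for i in range(0, len(nal), max_payload)]

frame = bytes(5000)                         # pretend this is one encoded H.264 frame
chunks = fragment_nal(frame)
print(MAX_RTP_PAYLOAD, len(chunks))         # 1460, 4 packets (1460*3 + 620)
```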
For live-streaming applications, RTMP and HLS together can cover essentially all viewing clients.
HLS's main drawback is its relatively large delay; RTMP's main advantage is its low delay.
First, application scenarios
Low latency application scenarios include:
. Interactive live broadcast: for example, the beauty-anchor and game live streams that were all the rage around 2013,
where all kinds of hosts push streams that are distributed to users to watch, and users can interact with the host through text chat.
. Video conferencing: for example, when a colleague is on a business trip, we hold internal meetings by video conference.
In fact a 1-second delay in a meeting does not matter much, because after someone speaks the others need time to think,
and that thinking delay is itself about 1 second. Of course, if you are arguing over video conference, that no longer holds.
. Others: surveillance and some other live scenarios also have latency requirements;
the latency of the RTMP protocol over the Internet basically satisfies them.
Second, RTMP and latency
1. The features of RTMP are as follows:
1) Well supported by Adobe:
RTMP is effectively the industry-standard protocol for encoder output; basically all encoders (cameras and so on) support RTMP output.
The reason is that the PC market is huge, PCs mainly run Windows, Windows browsers basically all support Flash,
and Flash in turn supports RTMP very well.
2) Suitable for long playback sessions:
Because RTMP support is so mature, Flash can play an RTMP stream for a very long time;
in testing it played continuously for a million seconds, that is, more than 10 days.
For commercial streaming-media applications, client stability is of course essential as well; if the end user cannot watch, what is the point of streaming at all?
I know of one education customer whose player originally played HTTP streams and had to switch between different files, which caused no end of problems.
If the server instead converts the different files into an RTMP stream, the client can simply keep playing continuously;
after that customer switched to the RTMP scheme and distributed via a CDN, no further client failures were reported.
3) Low latency:
Compared with proprietary UDP protocols such as YY's, RTMP's latency (1-3 seconds) is large;
compared with the latency of HTTP streams (typically more than 10 seconds), RTMP's latency is low.
For ordinary live-streaming applications, RTMP latency is acceptable as long as the use case is not phone-call-style conversation.
RTMP latency is also acceptable for typical video-conferencing applications, because while others are speaking we are generally listening;
in practice a 1-second delay does not matter, since we need time to think anyway (not everyone's brain processes that fast).
4) There is a cumulative delay:
Every technology has weaknesses to be aware of; RTMP's weakness is cumulative delay, because RTMP is based on TCP and never drops packets.
So when network conditions are poor, the server buffers the packets, which builds up delay;
when the network recovers, the backlog is sent to the client all at once.
The countermeasure is for the client to disconnect and reconnect when its buffer grows large.
2. HLS Low Latency
People keep asking the same question: how do I reduce HLS latency?
Trying to solve latency with HLS is like climbing a tree to catch fish; oddly enough, some people still shout, "Look over there, a fish!"
What is going on there?
I can only say you have been taken in by a magic trick, an illusion.
If you are convinced otherwise, please show it with an actual measurement screenshot, using the latency-measurement method below.
3. RTMP Delay Measurement
Measuring latency is a tricky problem,
but there is an effective method: put a mobile-phone stopwatch in the frame, which lets you compare the delay quite accurately.
Measured results, under good network conditions:
. RTMP latency can be brought down to about 0.8 seconds.
. Multiple levels of edge nodes do not increase latency (CDN edge servers derived from SRS can achieve this).
. nginx-rtmp's latency is somewhat larger, presumably caused by its cache handling or multi-process communication.
. The GOP is a hard constraint, but SRS can disable its GOP cache to avoid this effect.
. If server performance is too low, latency also grows, because the server cannot send data out in time.
. The client's buffer length also affects latency;
for example, if the Flash client's NetStream.bufferTime is set to 10 seconds, the delay is at least 10 seconds.
4. GOP Cache
What is a GOP? It is the time between two I-frames in the video stream.
What effect does the GOP have?
Flash (the decoder) can only start decoding and playing once it receives the start of a GOP;
in other words, the server normally has to give Flash an I-frame first.
Unfortunately, here is the problem: suppose the GOP is 10 seconds, i.e. there is a keyframe every 10 seconds.
What happens if a user starts playing at the 5th second?
The first option: wait for the next I-frame,
that is, wait 5 seconds before starting to send the client any data.
The delay is then very low; the client always stays on the real-time stream.
The problem is that during those 5 seconds the screen is black: the player just sits there showing nothing,
and some users will assume it has died and refresh the page.
In short, some customers consider waiting for a keyframe an unforgivable sin. Who cares about delay?
They just want the video to start playing quickly, ideally instantly!
The second option: start sending immediately.
Send what?
You have surely guessed: send the previous I-frame.
In other words, the server must always cache one GOP,
so that the client can start playing from the previous I-frame and playback starts quickly.
The problem is that the delay is then naturally larger.
Is there a better plan?
Yes! There are at least two:
Have the encoder lower the GOP, for example to 0.5 seconds per GOP, so the delay is also very low and there is no waiting.
The drawback is that the encoder's compression efficiency drops, so the image quality is not as good.
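As an illustration of the trade-off between the two options above, here is a sketch of a server-side GOP cache. The class and method names are made up for the example and are not taken from any particular server.

```python
# A server-side GOP cache: keep every frame since the most recent keyframe.
# A client joining mid-GOP either waits for the next keyframe (low latency,
# black screen) or is fed the cached GOP at once (fast start, up to one GOP behind).
class GopCache:
    def __init__(self):
        self.frames = []                    # frames since the last keyframe, inclusive

    def on_frame(self, frame: bytes, is_keyframe: bool):
        if is_keyframe:
            self.frames = [frame]           # a new GOP starts: drop the old cache
        elif self.frames:
            self.frames.append(frame)

    def frames_for_new_client(self, fast_start: bool):
        # fast_start=True  -> send the cached GOP so playback begins immediately
        # fast_start=False -> send nothing; the client waits for the next keyframe
        return list(self.frames) if fast_start else []
```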
5. Cumulative delay
Besides the GOP cache, another factor is related to latency: cumulative delay.
The server can be configured with a live-queue length; incoming data is placed in this live queue,
and if the queue grows beyond that length it is emptied back to the last I-frame.
Of course this cannot be configured too small;
for example, if the GOP is 1 second and queue_length is 1 second, then 1 second of data keeps getting flushed, which makes playback jump.
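A sketch of that queue-trimming behaviour, assuming frames are simple (is_keyframe, data) tuples and a fixed frame duration; a real server would track timestamps instead.

```python
# Trim a live queue back to the last keyframe once it exceeds the configured length.
def trim_live_queue(queue, queue_length_seconds, frame_duration):
    max_frames = int(queue_length_seconds / frame_duration)
    if len(queue) <= max_frames:
        return queue                        # within the configured budget: keep everything
    # Over budget: find the last keyframe and keep only the frames from it onward,
    # which caps accumulated delay at the cost of a jump in playback.
    for i in range(len(queue) - 1, -1, -1):
        if queue[i][0]:                     # queue[i] is (is_keyframe, data)
            return queue[i:]
    return queue[-max_frames:]              # no keyframe found: fall back to a hard cut
```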
Is there a better way? Yes.
Latency is basically equal to the client's buffer length, because the delay mostly arises when network bandwidth drops:
the server's cached data is then delivered to the client in a burst, and the visible symptom is that the client's buffer grows.
For example, with NetStream.bufferLength = 5 seconds, there is at least 5 seconds of data sitting in the buffer.
The best way to handle cumulative delay is for the client to detect that its buffer holds too much data and, when possible, reconnect to the server.
Of course, if the network stays bad, there is nothing to be done.
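As a sketch of that client-side countermeasure, the loop below watches the playback buffer and reconnects once it grows well past the configured target. Here get_buffer_seconds and reconnect are hypothetical hooks into whatever player is in use, and the threshold is an arbitrary choice for the example.

```python
# Watch the playback buffer and reconnect when it grows far beyond the target,
# which discards the accumulated backlog and drops latency back to roughly the
# buffer length. The two callbacks are hypothetical hooks into the player.
import time

TARGET_BUFFER = 3.0                     # seconds the player is configured to buffer
RECONNECT_THRESHOLD = 3 * TARGET_BUFFER # reconnect once the backlog triples the target

def watch_buffer(get_buffer_seconds, reconnect, poll_interval=1.0):
    while True:
        if get_buffer_seconds() > RECONNECT_THRESHOLD:
            reconnect()                 # dropping the connection flushes the backlog
        time.sleep(poll_interval)
```

This is the same idea stated earlier for RTMP's cumulative-delay weakness: trade a brief interruption for a return to near-real-time playback.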