RTMP Live application and delay analysis


In a live-streaming app, RTMP and HLS together cover essentially all client players.
HLS's main drawback is its relatively large delay; RTMP's main advantage is its low delay.

One: Application Scenarios

Low-latency application scenarios include:
. Interactive live broadcasts: for example, the beauty-anchor shows that took off around 2013, game streaming, and so on.
All kinds of hosts push streams that are distributed to viewers, and viewers can interact with the host via text chat.
. Video conferencing: when a colleague is on a business trip, we hold internal meetings over video conferencing.
In practice, a 1-second delay in a meeting does not matter: after someone finishes speaking, the others need time to think,
and that thinking pause is itself about 1 second. Of course, if you are arguing over video conferencing, that no longer holds.
. Others: surveillance and other live scenarios that have delay requirements.
The latency of the RTMP protocol over the Internet basically satisfies these requirements.


Two: RTMP and Delay

1. The features of RTMP are as follows:

1) Well supported by Adobe:
   RTMP is in effect the industry-standard protocol for encoder output; basically all encoders (cameras and so on) support RTMP output.
   The reason is that the PC market is huge, PCs mainly run Windows, Windows browsers basically all support Flash,
   and Flash's support for RTMP is very good.
2) Suitable for long playback sessions:
    Because Flash's RTMP support is so complete, Flash can play an RTMP stream for a very long time;
    tests ran to 1,000,000 seconds, i.e. more than 10 days of continuous playback.
    For commercial streaming applications, client stability is of course essential; otherwise, how can end users keep watching?
    I knew of an education customer whose player originally played HTTP streams and had to switch between different files, which caused no end of problems;
    once the server side converted the different files into a single RTMP stream, the client could play continuously.
    After that customer switched to the RTMP scheme, distributed by CDN, no further client failures were heard of.
3) Low latency:
    Compared with YY-style private UDP protocols, RTMP's delay is large (1-3 seconds);
    compared with HTTP streams (typically more than 10 seconds of delay), RTMP's delay is low.
    For ordinary live applications, RTMP latency is acceptable as long as the app is not phone-call-like.
    RTMP delay is also acceptable in general video conferencing, because while someone else is speaking we are mostly listening;
    a 1-second delay really does not matter, since we need time to think anyway (few people's brains process speech that fast).
4) Cumulative delay:
    Every technology has weaknesses to be aware of, and RTMP's is cumulative delay: because RTMP runs over TCP, no packets are dropped.
    So when the network is in poor condition, the server buffers the data, and the delay accumulates;
    when the network recovers, the backlog is sent to the client in a burst.
    The countermeasure is for the client to disconnect and reconnect when its buffer grows too large.


2. HLS Low Latency

People keep asking this question: how do you reduce HLS delay?
Solving delay with HLS is like climbing a tree to catch fish, and yet, oddly enough, someone is always shouting: look over there, fish!
What is that supposed to mean? What is going on?


I can only say you have been taken in by a conjuring trick, an illusion.
If you are sure of it, please demonstrate it with actual measurements and screenshots; refer to the delay measurement below.


3. RTMP Delay Measurement

Measuring delay is a hard problem.
There is, however, an effective method: film a phone's stopwatch at the source and compare it with the stopwatch shown in the player, which gives a fairly accurate delay comparison.


Measurements show that, when the network is in good condition:
. RTMP delay can reach about 0.8 seconds.
. Multi-level edge nodes do not increase latency (SRS-homologous CDN edge servers can achieve this).
. nginx-rtmp's latency is somewhat larger, presumably due to its caching and multi-process communication?
. The GOP is a hard nut to crack, but SRS can turn off its GOP cache to avoid the effect.
. Low server performance can also increase delay, when the server cannot send data out in time.
. The client's buffer length also affects latency:
for example, if the Flash client's NetStream.bufferTime is set to 10 seconds, the delay is at least 10 seconds.


4. Gop-cache

What is a GOP? It is the interval between two I-frames in a video stream.
Why does the GOP matter?
Flash (the decoder) can only start decoding and playing from the beginning of a GOP;
in other words, the server must first give Flash an I-frame.
Unfortunately, here comes the problem: suppose the GOP is 10 seconds, i.e. there is a keyframe every 10 seconds.
What happens if a user starts playing at the 5th second?
Option one: wait for the next I-frame,
that is, wait 5 seconds before giving the client any data.
The delay is then very low: the client is always on the real-time stream.
The problem: during those 5 seconds the screen is black; the player just sits there showing nothing,
and some users will think it is broken and refresh the page.
In short, some customers consider waiting for a keyframe an unforgivable sin. Who cares about delay?
They just want the video to start playing fast, ideally the instant they open it!


Option two: start sending immediately.
Send what?
You must have guessed: the previous I-frame.
In other words, the server must always cache one GOP,
so that the client can start playing from the previous I-frame and start up quickly.
The problem: the delay is naturally larger.


Is there a good plan?
Yes! There are at least two:
the first is to lower the GOP at the encoder, for example to 0.5 seconds per GOP; the delay is then also very low, with no waiting.
The disadvantage is that the encoder's compression ratio drops, so image quality suffers at the same bitrate.
The second is option one above: disable the server's GOP cache (SRS supports turning it off), accepting slower startup in exchange for staying at the live edge.
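As a rough illustration of option two, here is a minimal Python sketch of a server-side GOP cache. The `Frame` and `GopCache` names are illustrative, not SRS's actual implementation: the server keeps every frame since the last I-frame and hands the whole cached GOP to a newly joined client.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    ts: float        # frame timestamp in seconds
    keyframe: bool   # True for an I-frame

class GopCache:
    """Keep every frame since the most recent I-frame, so a newly
    joined client can start decoding immediately."""

    def __init__(self):
        self.cache = []

    def on_frame(self, frame):
        if frame.keyframe:
            self.cache = [frame]      # a new GOP starts: drop the old one
        else:
            self.cache.append(frame)

    def frames_for_new_client(self):
        return list(self.cache)       # send from the previous I-frame onward

# A 1 fps stream with a keyframe every 10 seconds; a viewer joins at t = 15 s.
cache = GopCache()
for t in range(16):
    cache.on_frame(Frame(ts=float(t), keyframe=(t % 10 == 0)))

startup = cache.frames_for_new_client()
print(startup[0].ts, len(startup))   # 10.0 6 -- starts ~5 s behind the live edge
```

The trade-off described above is visible here: the viewer starts instantly at the cached I-frame (ts = 10.0) but is about 5 seconds behind the live edge; with the cache disabled (option one), the viewer would instead wait up to a full GOP for the next I-frame.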


5. Cumulative delay

Besides the GOP cache, there is another factor: cumulative delay.
The server can configure a live-queue length; it puts outgoing data into this queue,
and when the queue exceeds that length it is emptied up to the last I-frame:


Of course this cannot be configured too small.
For example, if the GOP is 1 second and queue_length is 1 second, then 1 second of data keeps getting emptied, which causes playback to jump.
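A minimal Python sketch of such a live queue (names like `LiveQueue` and `max_len` are illustrative, not SRS configuration directives): when the buffered duration exceeds the limit, everything before the last I-frame is dropped.

```python
class LiveQueue:
    """Sketch of a server-side live queue: when the buffered duration
    exceeds max_len seconds, drop everything before the last I-frame."""

    def __init__(self, max_len):
        self.max_len = max_len
        self.frames = []           # list of (ts, is_keyframe)

    def duration(self):
        return self.frames[-1][0] - self.frames[0][0] if self.frames else 0.0

    def push(self, ts, keyframe):
        self.frames.append((ts, keyframe))
        if self.duration() > self.max_len:
            # empty the queue up to the last I-frame
            for i in range(len(self.frames) - 1, -1, -1):
                if self.frames[i][1]:
                    self.frames = self.frames[i:]
                    break

# A 1 fps stream with a 2-second GOP (keyframe every 2 s), queue limited to 3 s.
q = LiveQueue(max_len=3.0)
for t in range(8):
    q.push(float(t), keyframe=(t % 2 == 0))
print(q.duration())   # 3.0 -- old frames were dropped back to the last I-frame
```

This also shows the caveat above: if max_len is close to the GOP size, the queue gets trimmed on almost every keyframe, and the viewer sees playback jump.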


Is there a better way? There is.
Latency is basically equal to the client's buffer length, because latency mostly arises when network bandwidth dips:
the server's cached data is then sent to the client in a burst, and the visible symptom is that the client's buffer grows.
For example, if NetStream.bufferLength reads 5 seconds, there are at least 5 seconds of data in the buffer.


The best way to handle cumulative delay is for the client to detect that its buffer holds a lot of data and, when possible, disconnect and reconnect to the server.
Of course, if the network stays bad all along, nothing helps.
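A minimal client-side sketch of that countermeasure in Python (the callbacks and threshold are hypothetical; a real Flash or native player would poll its own buffer API): watch the buffer length and reconnect once it passes a threshold.

```python
def watch_buffer(get_buffer_len, reconnect, max_buffer=5.0, polls=1):
    """Poll the player's buffered seconds `polls` times; call
    reconnect() whenever the backlog exceeds max_buffer seconds.
    Returns the number of reconnects triggered."""
    reconnects = 0
    for _ in range(polls):
        if get_buffer_len() > max_buffer:
            reconnect()       # rejoin at the live edge, shedding the backlog
            reconnects += 1
    return reconnects

# Simulated readings: the network hiccups and the server's backlog
# bursts in, so the client buffer jumps from about 1 s to 8 s.
readings = iter([1.0, 1.2, 8.0, 0.5])
events = []
n = watch_buffer(lambda: next(readings),
                 lambda: events.append("reconnect"),
                 max_buffer=5.0, polls=4)
print(n, events)   # 1 ['reconnect']
```

As the text says, this only sheds accumulated delay; if bandwidth stays below the stream bitrate, the buffer will simply fill up again.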

