"Video Broadcast Technology Details" Series 5: Latency Optimization


There are many technical articles on live streaming, but few that treat it systematically. Across seven articles, we will give a systematic introduction to the key technologies behind today's popular live video streaming, helping live-streaming entrepreneurs gain a more comprehensive and in-depth understanding of live video technology and make better technology choices.

This series is outlined as follows:

(1) Collection

(2) Processing

(3) Encoding and encapsulation

(4) Stream pushing and transmission

(5) Latency optimization

(6) Principles of modern players

(7) SDK performance test model

In the previous article on stream pushing and transmission, we introduced in detail the key factors of the "first kilometer" of live broadcasting. This article is Part 5 of the series: latency optimization.

 

On many online and offline occasions we have shared how to optimize the live streaming experience, explaining in detail the causes of latency and stuttering at each stage and the corresponding optimization principles. A live audio/video system is in fact a complex engineering system: achieving truly low-latency live streaming requires systematic, end-to-end optimization and familiarity with every component. Here we share some simple and commonly used tuning techniques.

Encoding optimization

1. Make sure the codec's minimum-latency setting is enabled. Codecs usually have a low-latency optimization switch, especially for H.264. Many people may not know that an H.264 decoder normally buffers a certain number of video frames before display: for QCIF-resolution video (176x144), 16 frames are typically buffered; for 720p video, 5 frames. This adds significant delay before the first frame can be shown. Also ensure that B-frames are not used: decoding a B-frame depends on frames both before and after it, which further increases latency.

2. Encoders also introduce latency through rate control, sometimes called initialization latency, governed by the size of the Video Buffering Verifier (VBV) buffer. It acts as a buffer for the bitstream between encoder and decoder; set it as small as you can without degrading video quality to reduce latency.

3. If you optimize only for startup latency, you can insert many keyframes into the stream so that a client can begin decoding as soon as it receives data. But to optimize cumulative latency during transmission, use as few keyframes (I-frames) as possible, i.e., a larger GOP. At the same video quality, more I-frames means a higher bit rate and more network bandwidth for transmission, which means more accumulated latency. This effect may not be noticeable in systems with seconds of latency, but it becomes significant at 100 ms or lower. At the same time, prefer the AAC-LC codec for audio: HE-AAC and HE-AAC v2 encode more efficiently, but they take longer to encode, and the extra transmission delay from AAC-LC's somewhat larger audio has comparatively little impact on the video stream.

4. Do not use the MJPEG video compression format. Use at least the MPEG-4 Simple Profile (no B-frames), or better, the H.264 Baseline Profile (x264 also has a "-tune zerolatency" optimization switch). Such simple optimizations reduce latency because they can encode full-frame-rate video at a lower bit rate.

5. If you use FFmpeg, reduce the values of the "-probesize" and "-analyzeduration" parameters, which control how much data and how much time FFmpeg spends probing the input to detect stream information; the larger these values, the greater the added latency. In live streaming scenarios, the analyzeduration parameter is often not even needed for the video stream.

6. Constant bit rate (CBR) encoding can smooth out network jitter to some extent, but variable bit rate (VBR) encoding saves unnecessary network bandwidth and reduces latency. We therefore recommend using VBR encoding whenever possible.
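As a rough sketch, the encoder-side settings above map onto FFmpeg command-line flags as follows. The input file, bit rates, GOP length, and RTMP URL below are illustrative placeholders, not values from this article:

```python
# Sketch: assemble an FFmpeg command applying the low-latency settings
# discussed above. Bit rates, GOP length, and URLs are placeholders.

def low_latency_ffmpeg_args(src, dst, fps=30, gop_seconds=2,
                            v_bitrate="800k", vbv_bufsize="800k"):
    """Build an FFmpeg argument list for low-latency RTMP pushing."""
    return [
        "ffmpeg",
        "-probesize", "32",            # probe as little input data as possible (item 5)
        "-analyzeduration", "0",       # skip long stream analysis (item 5)
        "-i", src,
        "-c:v", "libx264",
        "-profile:v", "baseline",      # no B-frames (items 1 and 4)
        "-tune", "zerolatency",        # x264 low-latency switch (item 4)
        "-g", str(fps * gop_seconds),  # keyframe interval / GOP size (item 3)
        "-b:v", v_bitrate,
        "-maxrate", v_bitrate,
        "-bufsize", vbv_bufsize,       # small VBV buffer (item 2)
        "-c:a", "aac",                 # AAC-LC audio (item 3)
        "-f", "flv", dst,
    ]

args = low_latency_ffmpeg_args("camera.mp4", "rtmp://example.com/live/stream")
print(" ".join(args))
```

Tuning the GOP (`-g`) and VBV (`-bufsize`) values is the balance discussed in items 2 and 3: smaller values cut latency but cost quality or bandwidth.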

Transmission protocol optimization

1. For transmission between server nodes, prefer RTMP over the HTTP-based HLS protocol; this reduces overall transmission latency. This mainly targets the case where end users play back over HLS.

2. If end users play back over RTMP, try to transcode at the ingest node close to the pushing end. That way, the stream transmitted onward is smaller than the original.

3. If necessary, a custom UDP-based protocol can replace TCP, avoiding packet-loss retransmission on weak networks and reducing latency. Its main drawback is poor interoperability: CDN vendors support only standard transport protocols, so custom UDP-based stream distribution is not universally deployable. Another drawback is visible corruption or artifacts when packets are lost (missing keyframe decoding references), which requires the protocol designer to implement loss control on top of UDP.
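The UDP trade-off in item 3 can be illustrated with a minimal sketch (not a real streaming protocol): the sender fires numbered datagrams and never waits for acknowledgements, so the receiver must detect gaps itself, exactly the loss-control burden the article describes:

```python
import socket

# Minimal sketch of the UDP trade-off: the sender fires numbered datagrams
# and never retransmits; the receiver detects sequence gaps itself.
# A real protocol would add its own loss control on top (FEC, NACK, etc.).

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))       # let the OS pick a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in (1, 2, 4):                  # pretend datagram 3 was lost in transit
    send_sock.sendto(seq.to_bytes(4, "big") + b"frame-data", addr)

received, expected, gaps = [], 1, 0
recv_sock.settimeout(1.0)
for _ in range(3):
    data, _ = recv_sock.recvfrom(2048)
    seq = int.from_bytes(data[:4], "big")
    if seq != expected:
        gaps += seq - expected         # application, not transport, sees the loss
    expected = seq + 1
    received.append(seq)

print(received, "gaps:", gaps)
```

With TCP, the missing datagram would have been retransmitted transparently, at the cost of head-of-line blocking and extra latency on a lossy link.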

Transmission network optimization

1. We have previously introduced the real-time streaming network, a new type of self-organizing mesh of nodes. It is well suited to optimizing transmission across China's multiple-carrier network conditions, and also to the needs of many overseas broadcasts.

2. Cache the current GOP on server nodes and work with the player to optimize video startup time.

3. Have servers record, in real time, the per-second frame rate and bit rate of each video stream at every hop, and monitor fluctuations in both.

4. Have clients (both pushing and playing) query the server in near real time (e.g., once every 5 seconds) for the current optimal node, and drop faulty nodes and links in near real time.

Stream pushing and playback optimization

1. Check the size of the network send buffer on the pushing system; the OS may buffer data before sending it. Optimizing this parameter also requires finding a balance point.

2. Player-side buffer control also has a great impact on startup latency. If you optimize only for startup, you can use zero buffering and decode as soon as data arrives. In a weak network environment, however, some buffering is needed to absorb network jitter. You therefore need to find a balance between playback stability and startup latency, and tune the buffer size accordingly.

3. A dynamic buffer policy on the player, an improved version of the buffer control above. If you must pick one fixed balance between zero buffering and a fixed-size buffer, you end up with a single fixed size, which cannot suit hundreds of millions of mobile internet users whose network conditions all differ. Instead, consider a dynamic buffer policy: start playback with a very small or even zero buffer; set the buffer size for the next time slice based on how long the first segment took to download; then monitor the network continuously during playback and adjust the buffer size in real time. This achieves a very low startup time while absorbing network jitter as much as possible.

4. A dynamic bit rate playback strategy. Besides dynamically adjusting the buffer size, you can use the same real-time network measurements to adjust the bit rate during playback: when network bandwidth is insufficient, switch to a lower bit rate to keep playing and reduce latency.
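The dynamic buffer policy in item 3 can be sketched as a simple feedback rule: if a segment downloads slower than it plays, the network is the bottleneck, so grow the buffer; if it downloads with plenty of headroom, shrink toward zero to cut latency. The thresholds and step sizes below are illustrative assumptions, not values from the article:

```python
# Sketch of the dynamic buffer policy in item 3: start with (near-)zero
# buffering, then grow or shrink the target buffer from observed segment
# download times. Thresholds and steps are illustrative assumptions.

def next_buffer_ms(current_ms, download_ms, duration_ms,
                   min_ms=0, max_ms=4000, step_ms=250):
    """Adjust the target playback buffer for the next time slice."""
    if download_ms > duration_ms:          # can't keep up: buffer more
        return min(current_ms + step_ms, max_ms)
    if download_ms < duration_ms * 0.5:    # ample headroom: buffer less
        return max(current_ms - step_ms, min_ms)
    return current_ms                      # borderline: hold steady

buf = 0  # start playback with zero buffer for a fast first frame
for dl in (300, 1200, 1100, 400):          # 1000 ms segments, varying network
    buf = next_buffer_ms(buf, dl, 1000)
print(buf)
```

The same download-time measurements can drive the bit rate switch in item 4: when the buffer keeps growing, step down to a lower-bit-rate rendition instead of buffering further.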

The above are some of our techniques for low-latency optimization. In practice, we do not focus on "low latency" alone; we pursue low latency while ensuring the rest of the user experience is not degraded, so the topic is broader than it may appear. Live streaming optimization touches every part of the pipeline; here we have shared only some of our practices, and as experience accumulates we will continue to share more, both online and offline.
