"Video broadcast technology detailed" series of five: push stream and transmission


Author: Qiniu Cloud

Push protocols

Let's look at the common push (ingest) protocols, their current status, and their pros and cons in the live-streaming field: RTMP, WebRTC, and UDP-based private protocols.

1. RTMP

RTMP stands for Real Time Messaging Protocol. It is based on TCP and is actually a protocol family, including the base RTMP protocol and variants such as RTMPT, RTMPS, and RTMPE. RTMP is a network protocol designed for real-time data communication, used primarily to exchange audio, video, and data between the Flash/AIR platform and streaming/interactive servers that support RTMP. Software supporting the protocol includes Adobe Media Server, Ultrant Media Server, Red5, and so on.

RTMP is currently the mainstream streaming-media transport protocol and is widely used for stream pushing in live broadcasting; it is fair to say that the vast majority of live-streaming products on the market use it. (A minimal push example follows the pros and cons below.)

Advantages:
  • Good CDN support; mainstream CDN vendors all support it
  • The protocol is simple and easy to implement on each platform

Disadvantages:
  • Based on TCP, so transmission cost is high and it performs poorly in weak-network environments with high packet loss
  • Browsers cannot push RTMP streams
  • It is an Adobe private protocol, and Adobe no longer updates it
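As a concrete illustration of pushing over RTMP, the sketch below shells out to ffmpeg, which is a common way to publish an RTMP stream; the ingest URL and stream key are placeholders, ffmpeg must be installed locally, and this is a minimal example rather than a production pipeline.

    import subprocess

    # Hypothetical ingest URL; replace with the address and stream key issued by your provider.
    RTMP_URL = "rtmp://live.example.com/live/your-stream-key"

    def push_rtmp(source_file: str, rtmp_url: str = RTMP_URL) -> None:
        """Push a local media file to an RTMP server by invoking ffmpeg.

        -re       read the input at its native frame rate (simulates a live source)
        -c:v/-c:a transcode to H.264 + AAC, the codecs RTMP/FLV players expect
        -f flv    RTMP carries an FLV-muxed stream
        """
        cmd = [
            "ffmpeg", "-re", "-i", source_file,
            "-c:v", "libx264", "-preset", "veryfast",
            "-c:a", "aac",
            "-f", "flv", rtmp_url,
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        push_rtmp("sample.mp4")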

2. WebRTC

WebRTC, whose name is an abbreviation of Web Real-Time Communication, is an API that enables real-time voice and video conversations in web browsers. It was open-sourced on June 1, 2011 and, with the support of Google, Mozilla, and Opera, was incorporated into the World Wide Web Consortium's recommended standards.

At present it is mainly used for video conferencing and co-streaming (connecting guest hosts); its protocol stack is layered as shown in the figure below.

Advantages:
  • A W3C standard with a high degree of support in mainstream browsers
  • Backed by Google, with reference implementations available on every platform
  • Built on SRTP over UDP, leaving room for weak-network optimization
  • Supports point-to-point communication with low latency between the two parties

Disadvantages:
  • Requires ICE, STUN, and TURN infrastructure, and traditional CDNs do not offer comparable services

3. UDP-based private protocols

Some live-streaming applications build their own private protocols on top of UDP as the underlying transport. Because UDP behaves better in weak-network environments, customized tuning can achieve better weak-network optimization; but precisely because the protocol is private, it also brings a very real set of problems (a minimal framing sketch follows the lists below):

Advantages:
  • More room for customized optimization

Disadvantages:
  • High development cost
  • CDN-unfriendly: you must either build your own CDN or negotiate protocol support with CDN vendors
  • Unable to evolve together with the community
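To make the "room for customized optimization" concrete, here is a minimal, purely illustrative sketch of the kind of framing a UDP-based private protocol typically starts from: each datagram carries a sequence number and a send timestamp so the receiver can detect loss and reordering, which is the hook that weak-network tuning (retransmission, FEC, jitter buffering) builds on. The wire format is invented here and is not any vendor's actual protocol.

    import socket
    import struct
    import time

    # Hypothetical wire format: 4-byte sequence number + 8-byte send timestamp (ms), then payload.
    HEADER = struct.Struct("!IQ")

    def send_chunks(chunks, addr=("127.0.0.1", 9000)):
        """Send media chunks over UDP, tagging each packet so the receiver can
        detect loss/reordering and estimate delay."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for seq, chunk in enumerate(chunks):
            header = HEADER.pack(seq, int(time.time() * 1000))
            sock.sendto(header + chunk, addr)

    def receive(port=9000):
        """Receive packets and report gaps in the sequence numbers; a real protocol
        would add retransmission requests, FEC, and a jitter buffer here."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", port))
        expected = 0
        while True:
            data, _ = sock.recvfrom(65535)
            seq, sent_ms = HEADER.unpack(data[:HEADER.size])
            if seq != expected:
                print(f"gap detected: expected {expected}, got {seq}")
            expected = seq + 1
            yield data[HEADER.size:]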

Transport network

The streams we push out need to be delivered to the audience, and the whole link in between is the transport network. By analogy with freight logistics, it is the entire road from origin to destination: if road capacity is insufficient there will be traffic jams, which corresponds to network congestion, and changing the route in response is what we call intelligent scheduling. A transport network can schedule from a global vantage point, which beats any scheduling done in the physical world: imagine a god in the sky who sees, in real time, all the traffic between origin and destination and hands you a clear route. It sounds magical, but this is exactly what we have already implemented in LiveNet.

Here's a look at the traditional content distribution network.

1. Why content distribution networks exist: the origin of the CDN

The Internet originated in an internal U.S. military network. Tim Berners-Lee, the inventor of the World Wide Web, foresaw early on that network congestion would become the biggest obstacle to the Internet's growth, so he posed an academic challenge: invent a fundamentally new way to distribute Internet content without congestion. That challenge eventually spawned an innovative Internet service, the CDN. Tom Leighton, a professor of applied mathematics at MIT whose office was next door to Dr. Berners-Lee's, was intrigued by the challenge. Leighton eventually solved the problem, drew up his own business plan, and founded Akamai, the world's first CDN company.

2. Architecture of a traditional CDN

The figure above shows a typical three-level CDN deployment. The node is the most basic deployment unit of a CDN system, and it is deployed in three levels: central nodes at the top, regional nodes in the middle, and geographically dispersed edge nodes that serve users' content requests from the nearest location.

CDN nodes fall into two broad categories, backbone nodes and POP nodes, with backbone nodes further divided into central and regional nodes:
  • Backbone nodes: central nodes and regional nodes
  • POP nodes: edge nodes

Logically, backbone nodes are mainly responsible for content distribution and for serving back-to-source requests when an edge node misses, while POP nodes are mainly responsible for serving content to users from the nearest location. When the CDN network grows large, however, having every edge node go back to the central node on a miss would put too much pressure on the core equipment, so regional nodes are introduced as a physical middle tier, each managing a geographic area and caching some of its hot data.
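A minimal sketch of that back-to-origin path, assuming a toy in-memory cache at each tier (edge, regional, central, origin); real CDN cache nodes are of course far more elaborate.

    class CacheNode:
        """One CDN cache tier: serve from the local cache, otherwise go back to
        the next tier toward the origin and populate the cache on the way back."""

        def __init__(self, name, parent):
            self.name = name
            self.parent = parent       # next tier up: edge -> regional -> central -> origin
            self.cache = {}            # url -> content

        def fetch(self, url):
            if url in self.cache:
                print(f"{self.name}: hit")
                return self.cache[url]
            print(f"{self.name}: miss, back to source")
            content = self.parent.fetch(url)
            self.cache[url] = content
            return content

    class Origin:
        def fetch(self, url):
            return f"content of {url}"

    central = CacheNode("central", Origin())
    regional = CacheNode("regional", central)
    edge = CacheNode("edge", regional)
    edge.fetch("/live/recording.flv")   # first request walks the whole chain
    edge.fetch("/live/recording.flv")   # second request is an edge hit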

3. Pain points: how a live transmission network differs from a traditional CDN

With the arrival of the live-streaming era, live broadcast has become another major battlefield for CDN vendors. So what kind of service does a CDN need to provide in this era?
  • Support for streaming protocols, including RTMP, HLS, HTTP-FLV, and so on.
  • First frame within seconds: from the moment the user clicks play, the delay is kept at the second level.
  • End-to-end latency from the pushing side to the playing side kept within 1 to 3 seconds.
  • Global intelligent routing: any node in the entire CDN network can serve a single user, without geographic restrictions. As globalization progresses, cross-region, cross-country, and cross-continent broadcasts are becoming the norm; the host may be in Europe or America while the viewers are in Asia.
  • Day-level node provisioning: with Chinese companies going overseas, CDNs need more overseas nodes, and the competition now is over how quickly they can be deployed. Going from a request for a new node to that node serving traffic needs to happen within a day, which places very high demands on CDN operations and planning; the original month-level planning and build-out cannot meet this requirement.

4. Link routing in a traditional CDN

A CDN is built on a tree-like network topology, with GSLB (Global Server Load Balancing) at each layer to balance load across the CDN nodes within that tier. What is the benefit of this?

In the classic CDN scenarios mentioned earlier (web acceleration, video acceleration, and file-transfer acceleration), everything depends on the GSLB and the cache system, and the cache system accounts for most of the cost of a CDN. A tree structure minimizes the capital investment in caching: only the central node needs to keep a copy of all content, and moving down the tree, an edge node only needs a small set of hot content to serve the majority of requests. This greatly reduces the cost of the CDN network while still meeting users' needs; it is a win-win.

In the live era, however, live streaming is a flow-through business that rarely involves the cache system; it mostly frees up storage resources. Even where storage is required for policy reasons, cold storage is enough, the investment is relatively cheap, storage is not needed at every node, and it is sufficient that the data can be traced back when needed.

Looking again at the tree topology, the user's choice of links is limited. In the figure below, the number of links a user in a given region can choose from is 2 × 5 = 10.

Within a region, the GSLB (usually smart DNS at the edge layer) routes the user to an edge node in that region; the layer above routes to a regional node (where the GSLB is usually an internal load balancer); finally the request goes back to the central node, and the central node connects to the origin (source station).
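A toy sketch of this statically planned routing, assuming a hand-maintained region-to-node table of exactly the kind the next paragraphs criticize; the region and node names are made up.

    import random

    # Hypothetical, human-maintained planning tables (the static part criticized below).
    EDGE_NODES = {"east": ["edge-e1", "edge-e2"], "west": ["edge-w1"]}
    REGIONAL_OF = {"edge-e1": "regional-east", "edge-e2": "regional-east",
                   "edge-w1": "regional-west"}
    CENTRAL = "central-1"

    def resolve(user_region):
        """Mimic smart-DNS GSLB: pick an edge node in the user's region, then the
        rest of the path is fixed entirely by the static tables."""
        edge = random.choice(EDGE_NODES[user_region])
        return [edge, REGIONAL_OF[edge], CENTRAL]   # the only path this user can take

    print(resolve("east"))   # e.g. ['edge-e2', 'regional-east', 'central-1']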

This routing scheme rests on several assumptions: the fastest node a user can access must be an edge node in the user's region (and if the region has no edge node, the fastest must be an edge node in a logically adjacent region); the fastest node an edge node can reach must be a regional node in the same region, never a node in another region; and the link from a regional node to the central node must be the fastest, with optimal speed and bandwidth.

But is that actually true? Are all of the assumptions we just introduced really valid?

In practice, even if we could prove theoretically that the assumptions above hold, node planning and region configuration still depend largely on human design, and we know humans are not always reliable. Even if the regional planning is correct, who can guarantee that this static network plan does not change because a new fiber line is laid or because some IDC comes under too much pressure? So we can step out of the tree topology and explore a new network topology suited to live-streaming acceleration.

To break free of the limited link choices and unlock the network's ability to organize itself, we can turn the nodes above into a mesh network topology:

Once the network becomes a mesh, the links a user can choose from become all the simple paths between the two given points of an undirected graph; anyone who has studied graph theory knows that this number is staggering.
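To put a number on the contrast, the sketch below counts simple (loop-free) paths between two nodes by depth-first search, first over a small tree-like topology and then over the same set of nodes fully meshed; the graphs are invented for illustration.

    def count_simple_paths(graph, src, dst, visited=None):
        """Count loop-free paths from src to dst by depth-first search."""
        visited = visited or {src}
        if src == dst:
            return 1
        return sum(count_simple_paths(graph, nxt, dst, visited | {nxt})
                   for nxt in graph[src] if nxt not in visited)

    nodes = ["pusher", "a", "b", "c", "d", "e", "viewer"]

    # Tree-like topology: each node only forwards "down" to a couple of children.
    tree = {"pusher": ["a", "b"], "a": ["c", "d"], "b": ["d", "e"],
            "c": ["viewer"], "d": ["viewer"], "e": ["viewer"], "viewer": []}

    # Mesh topology: every node is connected to every other node.
    mesh = {n: [m for m in nodes if m != n] for n in nodes}

    print("tree paths:", count_simple_paths(tree, "pusher", "viewer"))   # 4
    print("mesh paths:", count_simple_paths(mesh, "pusher", "viewer"))   # 326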

Through intelligent routing, the system can pick the fastest of these links at any moment, without relying on manual planning that goes stale the day the system is deployed. Whether a fiber line is added to some link or some IDC comes under pressure, the change is reflected in the mesh network in real time, and the optimal link is computed for the user in real time. At this point we can discard the earlier assumptions and plan the network's link routing in real time with machines rather than humans; real-time, large-scale computation is simply not a human strength, and we should hand it to a species better suited to it.
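A minimal sketch of that idea, assuming each link's latency is measured continuously and fed into a shortest-path computation; the node names and numbers are invented, and a production scheduler would of course weigh far more than latency.

    import heapq

    def best_link(latency_ms, src, dst):
        """Dijkstra over currently measured per-link latencies: return the
        lowest-latency path from src to dst as the mesh looks right now."""
        dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                break
            if d > dist.get(node, float("inf")):
                continue
            for nxt, ms in latency_ms.get(node, {}).items():
                nd = d + ms
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt], prev[nxt] = nd, node
                    heapq.heappush(heap, (nd, nxt))
        path, node = [dst], dst
        while node != src:
            node = prev[node]
            path.append(node)
        return list(reversed(path)), dist[dst]

    # Invented measurements; refreshing them in real time simply overwrites this dict.
    latency_ms = {"pusher": {"node-a": 20, "node-b": 35},
                  "node-a": {"node-b": 5, "node-c": 50},
                  "node-b": {"node-c": 15},
                  "node-c": {"viewer": 10}}
    print(best_link(latency_ms, "pusher", "viewer"))
    # (['pusher', 'node-a', 'node-b', 'node-c', 'viewer'], 50)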

5. CDN expansion

As mentioned earlier, Chinese companies going overseas has become a general trend, and demand for overseas CDN nodes keeps growing, which means CDN vendors must deploy new backbone and edge nodes in new regions and do detailed network planning each time. Times have changed: the original CDN customers were enterprises whose business lines iterated slowly and planned far in advance, leaving CDN vendors plenty of time. Internet companies, by contrast, value speed, and two-week iterations have become the norm. This creates a tension between cost and responsiveness: deploying nodes ahead of demand serves these companies better but brings heavy cost pressure, while not doing so means failing to respond to fast-growing Internet companies.

Ideally, the customer raises a requirement, the CDN vendor evaluates it internally, gives feedback the same day, deploys the same day, and the customer can test the new node in the new region that very day. How can this be achieved?

The answer is a peer network based on the mesh topology. Every node in the mesh is a peer: logically, every node provides equivalent service, so no complex network topology has to be designed and there is no elaborate bring-up process when a node comes online. A node simply registers its information when it starts and can begin serving users. Combined with virtualization, the end-to-end time can in theory be kept within a day.
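A rough sketch, under the assumption of a central registry with register/heartbeat calls, of what "come online, register, and serve" can look like; the class and field names are hypothetical, not LiveNet's actual interface.

    import time

    class NodeRegistry:
        """In-memory stand-in for a registration service: a peer node only has to
        announce itself and keep heartbeating; no topology-specific bring-up."""

        def __init__(self, heartbeat_timeout=10.0):
            self.nodes = {}                 # node_id -> {"address", "last_seen"}
            self.timeout = heartbeat_timeout

        def register(self, node_id, address):
            self.nodes[node_id] = {"address": address, "last_seen": time.time()}

        def heartbeat(self, node_id):
            self.nodes[node_id]["last_seen"] = time.time()

        def serviceable(self):
            """Nodes currently eligible to serve traffic."""
            now = time.time()
            return [nid for nid, info in self.nodes.items()
                    if now - info["last_seen"] < self.timeout]

    registry = NodeRegistry()
    registry.register("node-tokyo-1", "203.0.113.10:1935")   # a new node can serve immediately
    print(registry.serviceable())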

6. Back to basics: LiveNet

The early Internet was itself a mesh topology; backbones were added later to solve various problems. It is time to go back to that essence with the next-generation live distribution network: LiveNet. Summarizing the discussion so far, the content distribution network we need for the live era has these characteristics:
  • Cache requirements are much lower than before.
  • Demands on node operations are very high: operations must be smarter, with as little human intervention as possible.
  • Expansion, as an operational event, must be responded to very quickly.

To achieve this, we need:
  • Decentralization and a mesh topology
  • Global, whole-network scheduling
  • Stateless, equivalent nodes
  • Intelligent operations and maintenance

These were the goals when LiveNet was designed: make operations more automated, let the system run with a high degree of autonomy, and rely on machine computation rather than human judgment. Each is introduced below.

1) Decentralization and mesh topology

The network topology is the root and foundation of the design; once we saw that the cache requirements had dropped, the mesh topology became the more advantageous choice.

2) Global, whole-network scheduling

Scheduling is based on the whole global network rather than being constrained to a region; its scope expands from a regional network to the entire globe. Every node in the network can respond to user requests and participate in link routing, and nodes are no longer pre-selected for routing by human assumptions. Removing human intervention makes the whole system more intelligent.

3) Stateless, equivalent nodes

LiveNet's stateless, equivalent nodes exist to make operations easier. A decentralized, fully meshed network already makes the overall topology unusually complex; if on top of that each node had chained dependencies on others, operations would become a nightmare: a dedicated service-orchestration system would be required, and expansion would be difficult, with operators designing complex expansion plans and rehearsing them many times before daring to expand within such a topology. When the nodes themselves are equivalent and stateless, both operations and expansion become far simpler.

The system as a whole still has some state and data to maintain while running, such as live content that needs to be available for playback; that is stored in Qiniu's proven object storage.

4) Intelligent operation and maintenance

Building intelligent operations on top of the mesh-topology peer network described above is much easier. Problematic nodes can be taken offline conveniently without affecting the rest of the LiveNet network, new nodes can be brought online quickly to increase system capacity, and analyzing node data gives a better picture of the state of the entire network.

The following are some of the intelligent-operations measures adopted in LiveNet to upgrade the content distribution network once more for the live era (a simplified sketch of the monitoring loop follows the list):
  • Monitor node health and take problematic nodes offline in real time
  • A failover mechanism that keeps the service available
  • Rapid expansion
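A simplified sketch of such a monitoring loop, assuming a per-node health probe and a standby pool to fail over to; the probe here is simulated and the node names are invented.

    import random

    def probe(node):
        """Stand-in health probe; a real check would measure reachability,
        packet loss, and load on the node."""
        return random.random() > 0.1          # ~10% simulated failure rate

    def reconcile(active, standby):
        """Take unhealthy nodes out of rotation and promote standbys so the
        service keeps its capacity without manual intervention."""
        healthy = [n for n in active if probe(n)]
        removed = set(active) - set(healthy)
        while len(healthy) < len(active) and standby:
            healthy.append(standby.pop(0))    # failover: promote a standby node
        return healthy, removed

    active, standby = ["edge-1", "edge-2", "edge-3"], ["edge-4", "edge-5"]
    active, removed = reconcile(active, standby)
    print("taken offline:", removed, "now serving:", active)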

7. LiveNet vs. P2P

Finally, let's compare LiveNet with peer-to-peer (P2P) networks:

We find that in pure P2P solutions there is still plenty of room for improvement in node control and link stability; they are better suited to scenarios with looser real-time requirements and to long-tail demand. Live streaming, by contrast, serves users with strong real-time requirements who cannot tolerate frequent failover or the network jitter caused by uneven node quality. For file distribution, however, a hybrid scheme of this kind is a good fit: it can effectively reduce CDN vendors' costs and, through a sharing-economy model, improve resource utilization.
