The continued rise of live video has quietly turned the bandwidth business of CDNs and video clouds into a brutally contested market. For a while now, cloud computing, CDN, and video cloud providers of every stripe have been investing heavily in video, and in live streaming in particular, while a crowd of newcomers rushes onto the same battlefield. All of them claim to offer platform-level, interface-level, and product-level services at the IaaS, PaaS, and SaaS layers, and the flashy battle slogans are dizzying; "melee" may be the most fitting description of the current video solutions market. In this situation, what exactly are video clouds and CDNs competing on technically? And as a video platform, or an operator about to enter the field, how do you avoid falling into the big pits when selecting and building a technology platform?

Everyone knows that what matters most to a viewer is smooth, high-definition video, but behind that polished surface stand the twin thresholds of technology and cost: many hard-to-accumulate technical links and a great deal of grinding operational work. A single live broadcast from a host travels through capture, encoding, stream pushing, transcoding, distribution, stream pulling, decoding, and playback, and all of this must complete within a few seconds; on top of that come recording, flow control, security, auditing, and many other complex functional requirements. And that is before considering the audience. What kind of access network is a given viewer on? What system environment are they using? Even for a single viewer the unknowns pile up; guaranteeing smooth viewing at the scale of millions is difficult to imagine.

So where does smooth, high-definition playback actually get hard? For some video operators and new entrants, the push end and the player end may still feel daunting, but on the whole, apart from the mobile side, push and playback technology on the PC is relatively mature. The difficulty lies mostly in transmission and distribution. Normally, as long as the network at the push end is in good condition, transmission and distribution determine whether a live stream plays smoothly. Transmission and distribution involve the most core video technologies, enormous server and bandwidth costs, and an extremely complex domestic network environment. For exactly this reason, video platforms generally hand the transmission and distribution links over to professional third-party video cloud or CDN service providers.

From the seven network layers, we pull out the four related to video transmission and distribution:

L2 resource layer: for video clouds and CDNs the resources do differ, but within what each can afford the difference can be considered small;

L4 transport layer: the transport layer can be tailored to different business scenarios, for example a private UDP-based protocol for ultra-low latency. This article focuses on safeguarding video fluency; support for different application scenarios will be introduced in later articles;

L3 network layer: at this layer, video cloud and CDN companies implement interconnection across ISPs, multi-tier cache system design, and scheduling users to nearby nodes. The design and optimization of this layer matter greatly for access quality, and as CDN technology matures, architectures may differ but all players can basically keep network routing operating normally;

L7 application layer: details aside, the main line of work on a video stream is input, transmission, and output, and the component that carries this work is the video platform's core component, the streaming media server. Distribution is the most essential function of live video; it requires a dedicated streaming media server, and every video cloud and CDN must deploy streaming media servers at both the hub and the edge tiers.
Through the above analysis we know that when the gaps at the resource and network layers are small, the performance of the streaming media server determines the effect and quality of video stream distribution. The streaming media server is therefore the commanding height of the technical contest between video clouds and CDNs.

The main streaming media servers on the market

Comparing today's mainstream streaming media servers: the first generation of products, represented by Adobe FMS, Real Helix, and Wowza, is characterized by a single process with multiple threads. The second generation, multi-process with a single thread, is built on the epoll mechanism of Linux 2.6; nginx-rtmp and crtmpd are its best representatives (a sketch of this epoll event-loop model appears at the end of this section). There is also Red5, the Java-based ancestor of open-source streaming media.

The open-source media server SRS (Simple RTMP Server) from Papawaqa Cloud is popular with video practitioners at home and abroad for being powerful, lightweight, and easy to use, and for being especially well suited to interactive live broadcast. ChinaCache has used SRS to carry its live edge-distribution business; another CDN provider has built its streaming media base platform on SRS; and others such as VeryCDN, VeryCloud, and Yumbo also apply SRS in their own businesses. Video clouds and cloud computing platforms likewise attach great importance to SRS support when interfacing with origin stations. As a purely home-grown open-source server, SRS is very valuable to China's streaming media industry.

Papawaqa Cloud's origin-station cluster BMS (Bravo Media Server) is the commercial version of SRS: on the basis of SRS it enhances 11 major functions and adds 9 new major features.

There is no shortage of streaming media servers. Listed above are the mainstream streaming servers currently on the market; there is also Red5, a veritable martyr, and the ill-timed crtmpd, neither of which achieved large-scale commercial use, so they get little further discussion here. The most widely used is nginx-rtmp, and several factors made it so:

around 2012 the CDN business began to grow rapidly and demand for live broadcasting rose with it, while the industry did not yet have a widely recognized, genuinely satisfying streaming media server;

Nginx holds absolute supremacy in the HTTP domain, and everyone (CDN operations teams especially) knows it intimately, so it is easy to pick up and maintain;

being based on Nginx means live and on-demand can run on one set of servers, which is very tempting: one stack to manage is simpler than two;

CDN is an operations-driven business, and operations runs on faith built up over years: Nginx is so excellent at delivering images, text, and pages that nginx-rtmp surely cannot be bad.

nginx-rtmp was indeed born wearing a halo, and its performance is indeed high, higher than crtmpd's.
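To make that second-generation architecture concrete, here is a minimal sketch of the epoll event loop such servers are built around: one worker process, one thread, many thousands of sockets. This is an illustration only (an echo loop, not code from nginx-rtmp or crtmpd); a real streaming server replaces the echo with the RTMP handshake, chunk demuxing, and relay.

// Minimal single-threaded epoll accept/echo loop, illustrating the
// event-driven model of second-generation streaming servers.
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <fcntl.h>

int main() {
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(1935);                   // the RTMP port
    bind(lfd, (sockaddr*)&addr, sizeof(addr));
    listen(lfd, 512);

    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = lfd;
    epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);

    epoll_event events[1024];
    for (;;) {
        int n = epoll_wait(ep, events, 1024, -1);  // one thread watches all sockets
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == lfd) {                       // a new client connection
                int cfd = accept(lfd, nullptr, nullptr);
                fcntl(cfd, F_SETFL, O_NONBLOCK);
                epoll_event cev{};
                cev.events = EPOLLIN;
                cev.data.fd = cfd;
                epoll_ctl(ep, EPOLL_CTL_ADD, cfd, &cev);
            } else {                               // data from an existing client
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0) { close(fd); continue; }
                write(fd, buf, r);                 // a real server demuxes RTMP here
            }
        }
    }
}

nginx-rtmp runs one such loop in each of its worker processes, which is exactly the "multi-process, single-thread" shape described above.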
However, as time passes and the era of interactive and mobile live broadcasting rises with force, is choosing nginx-rtmp in the end a blessing or a curse? Below, the author compares the current mainstream streaming media servers across seven dimensions: protocol support, architecture, core functions, configuration and operations, performance, server logs, and data, so that video practitioners can make their choice according to the characteristics of their own business scenarios.

1 Network protocol comparison

BMS supports distribution over HDS, DASH, RTMPE/S/T, and other protocols, which lets it cover more business scenarios, and its Flash P2P support can significantly reduce network bandwidth costs.

2 Architecture comparison

Against nginx-rtmp's 160,000 lines of code, SRS delivers 230% of nginx-rtmp's functionality in 65,000 lines; nginx-rtmp's comment rate is 3%, while SRS's is 23.7%. This shows how light and simple the SRS architecture is. On top of SRS, Papawaqa Cloud's BMS adds multi-process support, origin-station clustering, dynamic configuration, traceable logging, and other capabilities. The origin-cluster subsystem solves the distributed, cross-network deployment of origin stations; the dynamic-configuration subsystem reads configuration from the business system and applies updates on the fly, so configuration changes never interrupt a broadcast; and the end-to-end traceable logging and monitoring/debugging subsystems cut live-fault location time down to the minute level.

3 Core function comparison

BMS supports functions that the other streaming media systems lack, such as live stream pulling, real-time transcoding, large-scale recording, second-level low latency, HLS+, and concurrent back-source. HLS+ builds a "virtual connection" (a UUID identifier) for streaming on top of each play request, which brings many advantages in reducing back-source traffic, error reporting, anti-leeching, and lowering latency on the mobile web; a minimal sketch of the idea follows below. Concurrent back-source can greatly improve back-source quality when the back-source network is in poor condition, for example when cross-border transmission suffers serious packet loss.
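Plain HLS is just a series of stateless HTTP requests, so the server normally cannot tell which requests belong to one player. Below is a minimal sketch of the "virtual connection" idea: redirect the first playlist request to a UUID-stamped URL, then key every later request on that UUID. The handler and field names are invented for illustration; this is a toy model of the mechanism, not BMS's actual implementation.

// Sketch: per-player "virtual connections" over stateless HLS requests.
// All names here are hypothetical; this is not BMS code.
#include <chrono>
#include <cstdio>
#include <random>
#include <string>
#include <unordered_map>

struct HlsSession {
    std::string stream;                                   // e.g. "live/livestream"
    std::chrono::steady_clock::time_point last_seen;
    long segments_served;                                 // per-player stats
};

static std::unordered_map<std::string, HlsSession> g_sessions;  // uuid -> session

static std::string make_uuid() {                          // toy random id, not RFC 4122
    static std::mt19937_64 rng(std::random_device{}());
    char buf[17];
    snprintf(buf, sizeof(buf), "%016llx", (unsigned long long)rng());
    return buf;
}

// Decide how to answer one HLS request (playlist or segment).
std::string handle_hls(const std::string& path, const std::string& uuid_param) {
    if (uuid_param.empty()) {
        // First playlist request: mint a session, then 302-redirect the player
        // to a UUID-stamped URL, turning stateless HTTP into a virtual connection.
        std::string id = make_uuid();
        g_sessions[id] = HlsSession{path, std::chrono::steady_clock::now(), 0};
        return "302 Location: " + path + "?session=" + id;
    }
    auto it = g_sessions.find(uuid_param);
    if (it == g_sessions.end())
        return "403 Forbidden";              // unknown uuid: anti-leech rejection
    it->second.last_seen = std::chrono::steady_clock::now();
    it->second.segments_served++;            // every request is tied to one player
    return "200 OK";                         // serve playlist/segment from cache
}

Once every request carries a session UUID, the server can count real viewers, expire stale sessions, reject leeched links, and answer all of a player's requests from one cached copy of the stream instead of going back to the origin again and again, which is where the back-source and latency gains come from.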
4 Configuration and operations comparison

The following are only a few of the most common configuration examples from day-to-day streaming operations; in real work the number of configurations to handle is far greater.

(1) vhost configuration

FMS: copy the default vhost directory:

sudo cp -r conf/_defaultRoot_/_defaultVHost_ conf/_defaultRoot_/bravo.sina.com

nginx-rtmp: not supported.

SRS/BMS: fetch the configuration file dynamically:

vhost bravo.sina.com {}

Conclusion: BMS, which acquires configuration dynamically, is the simplest.

(2) app configuration

FMS: copy the default app directory:

cp -r applications/live applications/mylive

nginx-rtmp: modify the configuration file and add the following:

application mylive { live on; }

SRS/BMS: no configuration required.

Conclusion: BMS needs no configuration at all, the simplest.

(3) HTTP configuration

To output HTTP-based live formats such as HLS and http-flv, an HTTP service must be configured.

FMS: FMS ships a built-in Apache server; edit the file apache2.2/conf/httpd.conf and modify the following fields:

HttpStreamingEnabled true
HttpStreamingLiveEventPath "../applications"
HttpStreamingContentPath "../applications"
HttpStreamingF4MMaxAge 2
HttpStreamingBootstrapMaxAge 2
HttpStreamingFragMaxAge -1
Options -Indexes FollowSymLinks

5 Performance comparison

Ranked by performance: SRS > nginx-rtmp > crtmpd > Wowza > FMS > Red5.

A few reasons why SRS performance is so high:

1. st-load. This is the most important reason SRS reaches such high performance: a single st-load instance can simulate 2000+ clients. Without st-load, how would you find out where the system's performance bottleneck is? You can hardly open 3,000 Flash pages to play RTMP streams, or launch 3,000 ffmpeg processes to pull them. High performance is not imagined or guessed; it is tested, measured, and improved, over and over.
2. gperf/gprof profiling. SRS can be compiled with the gperf or gprof performance-analysis options, which makes profiling data very easy to obtain and shortens the improve-and-optimize development cycle.
3. Reference-counted messages, which avoid memory copies.
4. writev for sending chunked packets, which avoids copying the message into chunk packets (see the sketch after this list).
5. MW (merged-write) technology, which sends multiple messages at once.
6. Reduced-timeout recv; each connection is serviced by one st-thread.
7. Fast buffer and cache.
8. vector or list? vector! vector performs about 10% better than list.
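Points 3, 4, and 5 all come down to scatter-gather output. The sketch below shows the general idea under some simplifying assumptions (one prebuilt chunk header per message and a fixed 128-byte chunk size); it illustrates the technique, not SRS's actual code. An iovec is pointed at each chunk-sized slice of a shared, reference-counted payload, with a one-byte continuation header between slices, and the whole batch goes out in one writev(2) call, so the payload bytes are never copied.

// Sketch: zero-copy chunking plus merged-write using writev().
#include <sys/uio.h>
#include <unistd.h>
#include <memory>
#include <vector>

struct Message {                             // reference-counted payload: forwarding
    std::shared_ptr<std::vector<char>> payload;  // to N viewers never copies bytes
};

static const char kC3 = '\xC3';              // RTMP continuation header (fmt=3, csid=3)

// Append one message to the iovec array as (header, payload-slice) pairs.
// 'full_header' is the prebuilt first-chunk header for this message.
void chunk_onto_iovecs(const Message& msg, const char* full_header,
                       size_t header_len, size_t chunk_size,
                       std::vector<iovec>& iov) {
    const char* p = msg.payload->data();
    size_t left = msg.payload->size();
    bool first = true;
    while (left > 0) {
        size_t n = left < chunk_size ? left : chunk_size;
        iov.push_back({ first ? (void*)full_header : (void*)&kC3,
                        first ? header_len : size_t(1) });
        iov.push_back({ (void*)p, n });      // slice of the shared payload: no memcpy
        p += n; left -= n; first = false;
    }
}

// Merged-write: chunk a whole batch of messages, then flush them all
// with a single writev() system call. (A real server must also respect
// IOV_MAX and handle partial writes.)
ssize_t send_batch(int fd, const std::vector<Message>& msgs,
                   const char* hdr, size_t hdr_len) {
    std::vector<iovec> iov;
    for (const Message& m : msgs)
        chunk_onto_iovecs(m, hdr, hdr_len, 128, iov);
    return writev(fd, iov.data(), iov.size());
}

Merged-write trades a little latency for throughput: collecting messages over a short window cuts the number of system calls per viewer dramatically, which is one reason a single process can feed so many connections.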
6 Server log comparison

Logs are the only way to locate faults, and they determine how fast a fault gets located. Put it this way: for live broadcasting, ten minutes spent troubleshooting already feels long to everyone; but with today's video clouds and CDNs, who can actually do it within ten minutes? Let's look at the logs.

An FMS log looks like this. Forgive the author's slowness, but can you read any information out of it?

2015-03-24 12:23:58 3409 (s)2641173 Accepted a connection from IP: 192.168.1.141, referrer: http://www.ossrs.net/players/srs_player/release/srs_player.swf?_version=1.23, pageurl: http://www.ossrs.net/players/srs_player.html?vhost=dev&stream=livestream&server=dev&port=1935 - 702111234525315439 3130 3448 normal livestream - rtmp://192.168.1.185:1935/live/livestream rtmp://192.168.1.185:1935/live/livestream - flv - - 0 - 0 0 - - http://www.ossrs.net/players/srs_player.html?vhost=dev&stream=livestream&server=dev&port=1935 - 1 - -1.000000

The crtmpd log is detailed, but again the author is slow: with thousands of users online, can you pick out anything useful from it?

/home/winlin/tools/crtmpserver.20130514.794/sources/thelib/src/netio/epoll/iohandlermanager.cpp:120 Handlers count changed: 15->16 IOHT_TCP_CARRIER
/home/winlin/tools/crtmpserver.20130514.794/sources/thelib/src/netio/epoll/tcpacceptor.cpp:185 Client connected: 192.168.1.141:54823 -> 192.168.1.173:1935
/home/winlin/tools/crtmpserver.20130514.794/sources/applications/appselector/src/rtmpappprotocolhandler.cpp:83 Selected application: flvplayback (live)
/home/winlin/tools/crtmpserver.20130514.794/sources/thelib/src/application/baseclientapplication.cpp:246 Protocol CTCP(+) <-> TCP <-> [IR] unregistered from application: appselector
/home/winlin/tools/crtmpserver.20130514.794/sources/thelib/src/application/baseclientapplication.cpp:257 Stream NR(5) with name `' registered to application `flvplayback' from protocol IR(+)
/home/winlin/tools/crtmpserver.20130514.794/sources/thelib/src/application/baseclientapplication.cpp:268 Stream NR(5) with name `' unregistered from application `flvplayback' from protocol IR(+)
/home/winlin/tools/crtmpserver.20130514.794/sources/thelib/src/application/baseclientapplication.cpp:257 Stream NR(6) with name `' registered to application `flvplayback' from protocol IR(+)
/home/winlin/tools/crtmpserver.20130514.794/sources/thelib/src/protocols/rtmp/basertmpappprotocolhandler.cpp:1043 Play request for stream name `livestream'. Start: -2000; Length: -1000
/home/winlin/tools/crtmpserver.20130514.794/sources/thelib/src/application/baseclientapplication.cpp:268 Stream NR(6) with name `' unregistered from application `flvplayback' from protocol IR(19)

With nginx-rtmp the logs have finally progressed: they can be separated by connection. Unfortunately an nginx log only knows about that one connection; the path the connection takes through the CDN's multi-layer network, and the network status of the connection itself, remain unknowable.

2015/03/24 11:42:01 [INFO] 7992#0: ** client connected '192.168.1.141'
2015/03/24 11:42:01 [INFO] 7992#0: ** connect: app='live' args='' flashver='MAC 17,0,0,134' swf_url='http://www.ossrs.net/players/srs_player/release/srs_player.swf?_version=1.23' tc_url='rtmp://192.168.1.173:1935/live' page_url='http://www.ossrs.net/players/srs_player.html?vhost=dev&stream=livestream&server=dev&port=1935' acodecs=3575 vcodecs=252 object_encoding=3, client: 192.168.1.141, server: 0.0.0.0:1935
2015/03/24 11:42:01 [INFO] 7992#0: ** createStream, client: 192.168.1.141, server: 0.0.0.0:1935
2015/03/24 11:42:01 [INFO] 7992#0: ** play: name='livestream' args='' start=0 duration=0 reset=0 silent=0, client: 192.168.1.141, server: 0.0.0.0:1935

In SRS, and above all in BMS, streaming media finally gets traceable logs: from a play connection you can chase back to the corresponding push connection, and print summary information about the connections involved. This is also why Papawaqa Cloud can keep fault-location time at the minute level. The author has previously written specifically about Papawaqa Cloud's traceable logs; interested readers can refer to "Traceable Logs: a New Operations Weapon for the Video Cloud Era". Here is a traceable-log run:

Play the stream rtmp://dev:1935/live/livestream. The client displays its ID, from which the SrsIp, that is, the server IP, can be seen to be 192.168.1.107, which tells us which edge node is serving this stream. SrsPid is 12665 and SrsId is 114, so log on to that server and grep for the keyword "[12665][114]". Successive greps trace the stream back until it turns out to have been pushed up by the encoder with source_id=149. Grep the logs of 149, and the logs of the entire stream quickly appear before you. Encoder => origin => edge => player: the logs along Papawaqa Cloud's whole distribution chain can be assembled within minutes.
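The mechanism behind this traceability is simple in principle: stamp every log line with a [pid][context-id] prefix, and make every downstream connection record the context id of the connection it pulls from (its source_id), so a handful of greps can walk the whole chain. Here is a minimal sketch with invented names, not BMS code:

// Sketch of traceable logging: every line carries [pid][context-id],
// and each consumer remembers its producer's id as source_id.
#include <unistd.h>
#include <cstdio>

static int next_context_id = 100;

struct Connection {
    int cid;          // this connection's context id
    int source_id;    // context id of the connection it pulls from (0 = none)
};

void trace(const Connection& c, const char* msg) {
    // grep "[12665][114]" then finds every line of one connection's lifetime.
    std::printf("[%d][%d] %s (source_id=%d)\n",
                (int)getpid(), c.cid, msg, c.source_id);
}

int main() {
    Connection encoder{next_context_id++, 0};            // e.g. cid=100
    trace(encoder, "publish live/livestream");

    Connection player{next_context_id++, encoder.cid};   // player records its source
    trace(player, "play live/livestream");
    // Starting from the player's source_id, grep "[<pid>][100]" to jump to the
    // publisher's log lines, and repeat hop by hop across edge and origin servers.
}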
7 Data comparison

The author does not fully appreciate just how much data is worth to a video operating platform. What the author does know: at least for live video, the more real-time the data you can rely on, the faster you can locate and resolve user faults, and the better you can guarantee the viewing experience across different payment tiers, terminals, regions, and content, reconcile CDN billing data, target advertising precisely, and so on. Papawaqa Cloud's BMS is already able to provide some of this data, nearly in real time. A few screenshots show: real-time online viewer counts along with region and ISP distribution, viewing fluency, real-time bandwidth load, and so on; it can also target a specific user and monitor that user's terminal environment (operating system, browser, player version) and viewing experience. As for other systems: sorry, the author has not seen anything comparable.

Finally, money will keep burning in live video. Behind this great era in which everyone broadcasts stand video technologies and the cloud technologies that support them; with panoramic and VR live broadcasting ahead, platforms will need to pay even more attention to technical evolution and stability. For video operators, the best policy is to choose a reliable cloud platform, drastically reduce investment in infrastructure and R&D, and move the focus forward to business and products. For video cloud platforms and CDN providers, once the live broadcast market calms down, video technology will ultimately be the core competitiveness, and a controllable, manageable streaming media server cluster is the first priority. Whether you do IaaS, PaaS, or SaaS, the last S is always Service.
Toutiao account / New Media Megatrends
Link: http://toutiao.com/i6272689973091107329/
Source: Toutiao (Jinri Toutiao creation platform)
Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.
Streaming media: is choosing Nginx a blessing or a curse?