I. HTTP (Web Server)
Progressive-download playback over HTTP is only a minor improvement on the download-then-play model. Unlike plain download playback, which must wait until the entire file has arrived before playback can begin, a progressive-download client waits only a short time while it downloads and buffers the data at the front of the media file, then plays while it continues downloading. This small buffer built up before playback formally starts, typically covering a few tens of seconds or even hundreds of seconds of media, is meant to keep playback running uninterrupted even when the network becomes congested. In this mode the client requests data from the server as fast as the client, the Web server, and the network allow, regardless of the actual bitrate of the compressed stream being played. Only media files that satisfy specific packaging conditions support progressive-download playback: for example, the encoding parameters needed to initialize the decoder must be placed at the beginning of the media file, and the audio and video data must be interleaved in strict chronological order.
Progressive-download playback uses the standard HTTP protocol to deliver media data between the Web server and the client, and HTTP in turn runs over TCP. TCP was originally designed for non-real-time data: its optimization goal is to maximize the data transmission rate while keeping the whole network stable and its overall throughput high. To achieve this, TCP uses a slow-start algorithm, first sending data at a low rate and then gradually increasing it until a packet-loss report comes back from the receiving side; at that point TCP assumes it has reached the bandwidth limit or that the network is congested, drops back to a lower rate, ramps up again, and the process repeats. TCP achieves reliable delivery by retransmitting lost packets, but for streaming media it cannot guarantee that all retransmitted data will reach the client before its scheduled playback time. When that happens, the client cannot simply skip the lost or late data and play the media that follows; it must stop and wait, so the picture pauses and stutters. In progressive-download mode the client also has to cache all previously downloaded media data on its hard disk, which demands considerable local storage. During playback, the user can only seek, fast-forward, or rewind within the time range already downloaded, not across the whole file.
Strictly speaking, HTTP-based VOD is not true streaming; in English it is called "progressive downloading" or "pseudo streaming". Because HTTP lacks the basic flow control of a streaming protocol, implementing fast forward, fast rewind, and pause on top of HTTP is awkward. So how does a typical media player use HTTP to do it?
As far as HTTP is concerned, a media file of any size is just a single HTTP entity: the client sends one HTTP request, and the Web server keeps pushing the media stream to the client regardless of whether the client can absorb it, because HTTP itself has no flow control. What are the consequences of that?
If the server's push rate matches the client's consumption rate, there is basically no problem; if it is lower than the rate the client needs, playback stutters; and if it is higher, what happens then? In all of our ISTV projects, wherever VOD is carried over HTTP, it is without exception the third case. Our VOD runs on a LAN, where bandwidth is plentiful, so the server pushes far faster than the player plays. In such an extremely unbalanced situation, who limits the server's sending rate?
The answer is: the TCP protocol stack. Our VOD uses HTTP over TCP. TCP is safe and reliable and loses no packets; when the server detects that the client's receive buffer is full, it shrinks the send-side sliding window. So the flow control of HTTP is actually performed by the TCP stack, not by HTTP itself. Imagine the pressure this puts on the server!
The following analyzes how seek and pause can be implemented on top of the HTTP protocol.
1. SEEK (fast forward and rewind)

Close the previous TCP connection, reconnect, and send a new HTTP request carrying the desired offset into the media. In other words, every fast forward or rewind amounts to starting playback over again, just from a different position each time.
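A minimal sketch of this idea in Python, using the third-party requests library and the standard HTTP Range header. The URL and the time-to-byte mapping are illustrative assumptions only: a real player derives the byte offset from the container's index tables, not from a constant average bitrate.

import requests

def seek(url, seconds, avg_bytes_per_sec):
    # Drop the old connection and re-request the file from the new offset.
    offset = int(seconds * avg_bytes_per_sec)  # crude time-to-byte estimate
    resp = requests.get(url, headers={"Range": f"bytes={offset}-"},
                        stream=True, timeout=10)
    # 206 Partial Content means the server honored the Range header.
    if resp.status_code not in (200, 206):
        raise RuntimeError(f"unexpected status {resp.status_code}")
    return resp.iter_content(chunk_size=64 * 1024)

for chunk in seek("http://media.example.com/movie.ts", 120, 250000):
    pass  # hand each chunk to the demuxer/decoder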
2. PAUSE

This one is more interesting. The client pauses playback, that is, it stops reading data from its buffer, but the server has no idea the client stopped playing; it keeps sending until the client's receive buffer fills up and no more data can be delivered. In theory the server-side sliding window shrinks to an estimated size of 0, yet because this is TCP and data must not be lost, the protocol stack keeps trying to send. Implementing pause this way leaves the protocol stack in tears. Unfortunately, that is exactly what MPlayer does, which is why long pauses easily run into problems.
Although HTTP has no native pause support, pause can be optimized. The optimization is to split the media file into fragments slightly smaller than the TCP stack's buffer size, and to have each HTTP request fetch only one fragment at a time; once playback is paused, the client simply stops sending fragment requests. This keeps the server healthy over long runs, and the player's pause can in theory last indefinitely.
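A hedged sketch of this fragment-at-a-time strategy in Python (again with the requests library; the fragment size, URL handling, and pause callback are assumptions for illustration):

import time
import requests

def fetch(url, total_size, is_paused, fragment=48 * 1024):
    # One fragment per HTTP Range request; while paused, simply stop
    # asking, so nothing piles up in the TCP buffers on either side.
    offset = 0
    while offset < total_size:
        if is_paused():
            time.sleep(0.2)
            continue
        end = min(offset + fragment, total_size) - 1
        resp = requests.get(url, headers={"Range": f"bytes={offset}-{end}"},
                            timeout=10)
        yield resp.content
        offset = end + 1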
II. HTTP Live Streaming

HTTP Live Streaming (HLS) is Apple's implementation of an HTTP-based streaming protocol. It supports both live and on-demand streaming and is used mainly on iOS, delivering live and on-demand audio and video programs to iOS devices such as the iPhone and iPad. HLS on demand is essentially ordinary segmented HTTP on demand, except that its segments are very small. The key to implementing HLS on demand is segmenting the media files, and many open-source tools can do that, so I will not discuss it here and will talk only about HLS live technology. A typical HTTP Live Streaming system consists of content preparation, content distribution, and client software, as shown in the figure.
1. Content Preparation
The content preparation component is responsible for converting the input audio and video content into a format suitable for delivery by the content distribution component. For live video, the encoder first captures the camera's audio and video and compresses them in real time into elementary streams conforming to specific standards (Apple's system currently supports only H.264 video and AAC audio), then multiplexes and encapsulates them into the MPEG-2 system-layer transport stream (TS) format for output. The stream segmenter divides the encoder's MPEG-2 TS output into a series of consecutive small TS files of equal duration (suffix .ts), which are sent in turn to the Web server of the content distribution component for storage.

At the same time, so that the availability and current position of the media files can be tracked during playback, the stream segmenter also creates an index file containing pointers to these small TS files, and this index file is likewise placed on the Web server. The index file can be viewed as a sliding-window playlist over the continuous media stream: whenever the stream segmenter generates a new TS file, the index file is updated, the new file's URI (Uniform Resource Identifier) is appended to the end of the sliding window, and the oldest URI is removed. The index file therefore always lists a fixed number of the most recent segments, as shown in the figure. The stream segmenter can also encrypt each small TS file it generates and produce the corresponding key files.
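A minimal sketch of such a sliding-window index writer in Python (the function name, tuple layout, and window handling are illustrative assumptions, not part of Apple's tools):

def write_live_playlist(path, recent_segments, target_duration=10):
    # recent_segments: list of (sequence_number, duration_seconds, uri)
    # tuples, already trimmed to the sliding window (e.g. the newest 5).
    lines = ["#EXTM3U",
             f"#EXT-X-MEDIA-SEQUENCE:{recent_segments[0][0]}",
             f"#EXT-X-TARGETDURATION:{target_duration}"]
    for _seq, duration, uri in recent_segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(uri)
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")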
The encoded media stream is uniformly encapsulated in the MPEG-2 TS format because TS interleaves and multiplexes the audio and video strictly in chronological order, so that after arbitrary cutting and segmentation, each segment can be decoded and played independently of the segments before it. For this to work, each TS file must contain only one MPEG-2 program; each file should begin with a Program Association Table (PAT) and a Program Map Table (PMT); and a file containing video must also contain at least one keyframe and enough other information (such as a sequence header) to complete decoder initialization.

The index file uses an extended M3U playlist format, with the suffix .m3u8. An M3U playlist is a text file consisting of lines, each of which is a URI, a blank line, or a line beginning with the comment character "#". Each URI line points to a segmented media file or to a derived index (playlist) file. Lines beginning with "#EXT" are tags; other lines beginning with "#" are comments and should be ignored. The following is an example of a simple .m3u8 index file describing a media stream made up of three unencrypted TS files, each 10 seconds long:
#EXTM3U
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-TARGETDURATION:10
#EXTINF:10,
http://media.example.com/segment1.ts
#EXTINF:10,
http://media.example.com/segment2.ts
#EXTINF:10,
http://media.example.com/segment3.ts
#EXT-X-ENDLIST
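A small, hedged sketch of a parser for playlists of exactly this shape (Python; it ignores tags the example does not use):

def parse_m3u8(text):
    segments, duration, ended = [], 0.0, False
    for line in (l.strip() for l in text.splitlines()):
        if line == "#EXT-X-ENDLIST":
            ended = True                       # static/VOD playlist
        elif line.startswith("#EXTINF:"):
            duration = float(line.split(":", 1)[1].rstrip(","))
        elif line and not line.startswith("#"):
            segments.append((duration, line))  # URI line
    return segments, ended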
For video on demand (VOD), the file segmenter first transcodes the encoded media file into an MPEG-2 TS file (this step is skipped if the file is already TS-encapsulated), then divides the TS file into a series of small TS files of equal duration. The file segmenter likewise generates an index file containing pointers to these small files. Unlike the live case, the index file here is static: it is not updated over time, it lists the URIs of all segments of the program from beginning to end, and it ends with an #EXT-X-ENDLIST tag. A live event can be converted into a VOD program source for later use: do not delete the expired segment files from the server, do not remove the corresponding URI entries from the index file, and when the broadcast ends, append the #EXT-X-ENDLIST tag to the end of the index file.
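Apple ships its own segmentation tools, but as an illustrative, non-authoritative alternative, a recent FFmpeg build can produce this kind of VOD segmentation with its hls muxer (exact flags vary by FFmpeg version; the file names here are assumptions):

ffmpeg -i movie.mp4 -c copy -f hls \
       -hls_time 10 -hls_list_size 0 -hls_playlist_type vod \
       -hls_segment_filename 'segment%d.ts' playlist.m3u8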
2. Content Distribution

The content distribution system delivers the segmented small media files and their index files to the client player over the HTTP protocol; it can be either an ordinary Web server or a Web caching system. The Web server needs almost no special configuration and no additional custom modules; the recommended configuration is limited to associating MIME types for the .m3u8 and .ts files (typically application/vnd.apple.mpegurl and video/MP2T, respectively), as shown in the table.
Because the index file must be updated and re-downloaded frequently, the TTL (time-to-live) value of the .m3u8 file should be tuned carefully in the Web cache configuration, so that the client can download the latest version of the file on every request.

3. Client Software
Typically, the client software obtains the index file of a streaming session by following a URL link in a Web page. The index file in turn specifies the locations on the server of the currently available TS media files, decryption keys, and alternate streams. For the selected media stream, the client downloads each listed media file in turn; once enough of them are buffered, it reassembles them in order into a coherent TS stream and feeds it to the player for decoding and display. For encrypted media files, the client is also responsible for fetching the decryption keys as directed by the index file, presenting any user-authentication interface, and decrypting on demand. For video on demand, this process continues until the client encounters the #EXT-X-ENDLIST tag in the index file. For live video, the #EXT-X-ENDLIST tag is absent; instead, the client periodically requests an updated version of the index file from the Web server, then looks in it for new media files and decryption keys and appends their URIs to its download queue.

Note that HTTP Live Streaming is not a true real-time streaming system, because the size and duration of the media segments impose an inherent latency. On the client side, playback can begin only after at least one segmented media file has been completely downloaded, and in practice two segments are usually required before playback starts, to guarantee a seamless join between consecutive segments of audio and video. In addition, before the client can start downloading at all, it must wait for the server-side encoder and stream segmenter to generate at least one TS file, which adds further latency. Under the recommended configuration, an HTTP Live Streaming system typically incurs a delay of around 30 seconds.
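A hedged sketch of that live client loop in Python (requests library; it assumes absolute segment URIs, as in the examples above, and a fixed polling interval):

import time
import requests

def play_live(index_url, handle_segment, poll_seconds=5):
    seen = set()
    while True:
        body = requests.get(index_url, timeout=10).text
        for line in (l.strip() for l in body.splitlines()):
            if line and not line.startswith("#") and line not in seen:
                seen.add(line)  # new segment URI: fetch and queue it
                handle_segment(requests.get(line, timeout=10).content)
        if "#EXT-X-ENDLIST" in body:
            break                 # the event ended (or this is VOD)
        time.sleep(poll_seconds)  # then re-request the updated index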
4. Network-Adaptive Stream Switching and Failover

In an HTTP Live Streaming system, the server can prepare several alternate streams of the same program source, encoded at different bitrates and quality levels, and generate a derived index file for each alternate stream. The master index file then points to the alternates through a series of URIs referencing the other derived index files, as shown in the figure.
In the mobile Internet environment, a mobile terminal may switch among different wireless access networks (such as 3G, EDGE, GPRS, and WiFi) at any time as coverage and signal strength change. The client software can then switch at any moment, following network and bandwidth changes, to the alternate stream pointed to by a different derived index file, adaptively giving the user close to the best audio and video QoS experience the current network condition allows. Beyond dynamic stream switching driven by bandwidth fluctuation, the alternate-stream and derived-index mechanism can also be used for server failover. To do this, first generate a media stream, or a set of alternate streams, together with the corresponding index files on one server, and then generate a parallel set of backup media streams and index files on another server. Next, add the indexes of the backup streams to the master index file, so that for each bandwidth value there is one primary media stream and one backup media stream. For example, assuming the primary and backup servers are ALPHA and BETA respectively, the contents of the master index file might look like the following:
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=200000
http://ALPHA.example.com/lo/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=200000
http://BETA.example.com/lo/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=500000
http://ALPHA.example.com/md/prog_index.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=500000
http://BETA.example.com/md/prog_index.m3u8
In the example above, when the client fails to connect to the primary server ALPHA, it attempts to connect to the backup server BETA, obtains the derived index file of the alternate stream with the highest bandwidth it can sustain, and then downloads the corresponding media stream files according to that index.
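A minimal sketch of that failover order in Python (requests library; the grouping of primary-then-backup URIs per bandwidth is an assumption matching the playlist above):

import requests

def first_reachable(variant_uris):
    # variant_uris: derived-index URIs for one bandwidth, primary (ALPHA)
    # first and backup (BETA) second, in master-playlist order.
    for uri in variant_uris:
        try:
            resp = requests.get(uri, timeout=5)
            if resp.ok:
                return uri, resp.text
        except requests.RequestException:
            continue  # connection failed: fall through to the backup
    raise RuntimeError("no server for this variant is reachable")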
Compared with common live-streaming protocols such as RTMP, RTSP, and MMS, the biggest difference is that the HLS client does not receive one complete data stream. Instead, the HLS server stores the live stream as a sequence of consecutive, very short media files (MPEG-TS format), and the client plays the stream by downloading these small files one after another, while the server keeps generating new small files from the latest live data. The client achieves live playback simply by fetching and playing the files from the server in order; in effect, one can say HLS implements live broadcast by on-demand means. Because the data travels over the HTTP protocol, firewalls and proxies are not a concern, and because the segment files are very short, the client can quickly select and switch bitrates to adapt to playback under different bandwidth conditions. The price of these technical characteristics, however, is that HLS latency is always higher than that of ordinary streaming protocols.
To implement HTTP Live Streaming live broadcast along the lines described above, the following key technical points must be studied and implemented (a hedged end-to-end sketch follows the list):
1. Capture of the video and audio source data.
2. H.264 encoding of the raw video and AAC encoding of the raw audio.
3. Encapsulation of the encoded audio and video into MPEG-TS packets.
4. The HLS segmentation strategy and m3u8 index file generation.
5. Delivery over the HTTP transport protocol.
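As a non-authoritative illustration, points 1 through 5 can be approximated in one pass with a recent FFmpeg build (the input source, segment length, window size, and output path are assumptions; exact flags vary by version):

ffmpeg -re -i <input-source> \
       -c:v libx264 -c:a aac \
       -f hls -hls_time 10 -hls_list_size 5 -hls_flags delete_segments \
       /var/www/live/stream.m3u8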
III. RTSP (Real Time Streaming Protocol)

Progressive-download streaming playback as described above supports only on demand, not live broadcast; the rate at which media data reaches the client cannot be precisely controlled; the client must maintain a buffer as large as the media file on the server and wait through a long buffering period before playback can begin, so real-time performance is poor; during playback, bandwidth fluctuation or packet loss can freeze the picture or cause intermittent waiting; and VCR operations such as seeking, fast forward, and fast rewind across the whole time range are not supported. Overcoming these problems requires introducing a dedicated streaming media server and the corresponding real-time streaming transport and control protocols. RTSP/RTP is the most popular and widely deployed real-time streaming protocol family in the industry. It is actually a set of protocols standardized in the IETF, including RTSP (the real-time streaming session protocol), SDP (the Session Description Protocol), RTP (the Real-time Transport Protocol), and the RTP payload formats for the various codec standards, which together form a streaming protocol stack, as shown in the figure. Extensions based on this protocol stack have been adopted by ISMA (the Internet Streaming Media Alliance) and 3GPP (the 3rd Generation Partnership Project) as streaming standards for the Internet and the 3G mobile Internet.
RTSP is a session protocol used to establish and control one or more time-synchronized continuous audio and video streams. By exchanging RTSP session commands between the client and the server, VCR-style control actions such as requesting playback, start, pause, seek, fast forward, and rewind can be carried out. Although RTSP sessions are typically carried over a reliable TCP connection, connectionless protocols such as UDP can also be used to transport RTSP session commands. The key commands in the RTSP protocol include the following (a sample exchange is sketched after the list):
1) SETUP: makes the server allocate resources for a media stream and starts an RTSP session.
2) PLAY and RECORD: start data transmission on a stream for which SETUP has created the session and allocated resources.
3) PAUSE: temporarily halts data transmission on a stream without releasing server resources.
4) TEARDOWN: releases the stream's resources on the server and ends the RTSP session.
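For illustration, a minimal SETUP/PLAY exchange might look like the following (the URL, ports, and session ID are hypothetical; the message format follows the RTSP standard):

C->S: SETUP rtsp://example.com/stream/track1 RTSP/1.0
      CSeq: 2
      Transport: RTP/AVP;unicast;client_port=8000-8001

S->C: RTSP/1.0 200 OK
      CSeq: 2
      Session: 12345678
      Transport: RTP/AVP;unicast;client_port=8000-8001;server_port=9000-9001

C->S: PLAY rtsp://example.com/stream RTSP/1.0
      CSeq: 3
      Session: 12345678
      Range: npt=0-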
The SDP protocol is used to describe multimedia sessions. Its main function is to advertise descriptive information about all the media streams in a multimedia session, so that receivers can learn these descriptions and join the session on that basis. SDP session descriptions are usually carried within the RTSP command exchange, and the media-level information mainly includes the following (a small example follows the list):
1) the media type (video, audio, etc.);
2) the transport protocol (RTP/UDP/IP, RTP/TCP/IP, etc.);
3) the media encoding format (H.264 video, AVS video, etc.);
4) the IP address and port number on which the streaming media server receives the media stream.
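A small illustrative SDP fragment covering those four items (the addresses, ports, and payload numbers are made up for the example):

v=0
o=- 2890844526 2890842807 IN IP4 192.0.2.10
s=Example Live Channel
c=IN IP4 192.0.2.10
t=0 0
m=audio 49170 RTP/AVP 97
a=rtpmap:97 MPEG4-GENERIC/44100/2
m=video 51372 RTP/AVP 96
a=rtpmap:96 H264/90000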
RTP, the Real-time Transport Protocol, is used to carry the media data itself and to provide end-to-end transport services for data with real-time characteristics, such as payload type identification, sequence numbering, timestamping, and delivery monitoring. Applications typically choose to run RTP on top of UDP, to use UDP's multiplexing and checksum services and to improve the effective throughput of network transmission; however, RTP can also be used with other transport protocols (such as TCP). An RTP packet consists of an RTP header and an RTP payload. Through the sequence number and timestamp fields in the RTP header, the upper-layer application implements synchronized, properly timed playback and QoS control of the media data the packets carry. The RTP payload carries the actual audio and video media data the client needs. Different audio and video coding standards may require different RTP payload formats, such as the RTP payload format standard for H.264 and the one for AVS video; streaming servers and client players pack and unpack the media streams according to these payload format standards.

Using the RTSP/RTP streaming protocol stack requires a dedicated streaming media server. Unlike the indiscriminate bursts of media data in progressive download, in server-mediated delivery the media data is sent actively and intelligently, at a rate matched to the compressed audio and video bitrate. Throughout delivery the server stays in close contact with the client and can respond to client feedback. RTP is a genuinely real-time transport protocol: the client needs to maintain only a small decoding buffer holding the few reference frames video decoding requires, which greatly shortens the initial playback delay, usually to within 1 second. Carrying RTP packets over UDP improves both the real-time behavior and the throughput of media data transfer. When RTP packets are lost to network congestion, the server can retransmit selectively and intelligently based on the characteristics of the media encoding, deliberately discarding less important packets, and the client can keep playing without waiting for data that did not arrive on time, preserving the fluency of media playback.
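As an illustration of the header fields just mentioned, here is a minimal Python parser for the fixed 12-byte RTP header (per the RTP specification; real packets may also carry CSRC entries and extensions, which this sketch ignores):

import struct

def parse_rtp_header(packet):
    # Fixed RTP header: V/P/X/CC byte, M/PT byte, 16-bit sequence
    # number, 32-bit timestamp, 32-bit SSRC.
    if len(packet) < 12:
        raise ValueError("packet shorter than an RTP header")
    b0, b1, seq, timestamp, ssrc = struct.unpack("!BBHII", packet[:12])
    return {
        "version": b0 >> 6,
        "payload_type": b1 & 0x7F,  # identifies the codec/payload format
        "marker": bool(b1 & 0x80),
        "sequence": seq,            # loss detection and reordering
        "timestamp": timestamp,     # synchronized, timed playback
        "ssrc": ssrc,               # identifies the media source
    }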
RTSP (Real Time Streaming Protocol) is an application-layer protocol in the TCP/IP protocol suite, submitted to the IETF as an RFC standard by Columbia University, Netscape, and RealNetworks. The protocol defines how one-to-many applications can efficiently deliver multimedia data over IP networks. It is a flow-control protocol similar in spirit to HTTP: both send information as plain text, and RTSP's syntax resembles HTTP's. The difference is that RTSP is stateful while HTTP is stateless; RTSP maintains a session to track its state transitions. The default port for RTSP is 554, and its default carrying protocol is TCP.
At present, RTSP playback on iOS clients is mainly implemented with FFmpeg plus a player layer built on top of it.

IV. Architecture
For ease of management and scalability, and with bandwidth throttling and concurrent users in mind, commercial solutions adopt a streaming server + Web server + relay server + mobile client structure. The streaming server captures the video source, compresses and encodes it, and waits for RTSP connection requests from clients. The Web server makes it easy to publish and manage video information. The relay (transmission) server is optional: it forwards clients' RTSP requests to the streaming server and forwards the server's real-time stream to the clients, with the benefit of supporting more simultaneous viewers within the same bandwidth. The mobile client can use the phone's built-in player (such as RealPlayer on Nokia) or its own standalone player: the former lowers the barrier for users and suits large-scale deployment, while the latter is easier to extend and customize to satisfy more features.
The streaming server is the core of the whole solution. The current mainstream streaming server options are as follows.
Helix Server: backed by Real's considerable strength, this is the most popular option; it supports all audio and video formats, performs stably, is the only streaming service platform that spans Windows, Mac, Linux, Solaris, and HP-UX, and supports playback with the player bundled on phones. The free version of Helix Server supports only 1M of traffic, and the enterprise edition is expensive; then again, you get what you pay for.
Darwin Streaming Server: Apple's open-source streaming solution; not as capable as Helix, but being open source and free it gives developers plenty of room to build on.
LIVE555 Media Server: stable, but supports fewer formats (only a handful of streams such as MP3, AMR, AAC, and MPEG-4 ES); it is rarely used on its own and generally serves as a component of a larger system.
Windows Media Server: Microsoft platforms only, not considered here.
The client-side framework flow is as follows:
Two transport protocols are commonly used between the mobile client and the server: HTTP and RTSP.
Early mobile TV mostly used HTTP. HTTP's advantages are that no special server software is needed (IIS is enough) and that firewalls and NAT are not a concern; but HTTP does not support real-time streaming and also wastes bandwidth.
RTSP is the current mainstream streaming standard; even Microsoft has abandoned MMS in favor of RTSP. RTSP lets the client pause, resume, and stop playback, and audio/video synchronization is essentially a non-issue (because audio and video are read into their buffers from separate RTP ports). It is worth noting that once the RTSP handshake succeeds, RTP transmission begins, either as RTP over TCP or RTP over UDP. The former guarantees that every packet is received, retransmitting anything lost, and is not troubled by firewalls or NAT; the latter delivers only on a best-effort basis, does not retransmit lost frames, offers better real-time behavior, and must deal with firewall and NAT traversal. For mobile TV, where frame rate matters more, UDP transport is recommended: retransmitted data arrives too late to be meaningful to the user and is better discarded.
The networking part can implement the RTSP/RTP protocols with the powerful open-source library live555, which performs stably and supports transport of most audio and video formats. (FFmpeg also implements a network transmission part, which could be used after modification.) live555 was then ported to Symbian and Windows Mobile; debugging this part on real Symbian hardware was fairly time-consuming.
Video decoding of course still uses FFmpeg, with the MPEG-4 SP and H.264 decoders ported over. Without any optimization this already supports 32K, CIF, 5-10 fps, which is enough for ordinary streaming applications; algorithmic and assembly optimization can follow later. After decoding, the frames still need YUV-to-RGB conversion and scaling. One pitfall to note: FFmpeg's decoder has hidden padding, so for a QCIF image the linesize is 192 rather than 176. If the decoded image comes out green, run it through img_convert() (with PIX_FMT_YUV420P as the destination format as well). On Symbian, DSA (Direct Screen Access) can be used to write to the screen directly; on Windows Mobile, SDL can be used.
Audio decoding mainly involves AAC, AMR-NB, and AMR-WB. AAC and AMR-NB are the audio formats that GPRS and EDGE bandwidth can support (AAC sounds better than AMR-NB), while AMR-WB is the 3G audio format. The fixed-point AMR-NB/WB decoders already supported in the FFmpeg 0.5 release are powerful enough.

V. Analysis and Comparison
As the simplest and most primitive streaming solution, HTTP progressive download has only one significant advantage: it needs nothing more than a standard Web server, which is already ubiquitous on the Internet, so its installation and maintenance workload and complexity are far lower than those of a dedicated streaming media server. Its shortcomings and deficiencies, however, are many: first, it suits only on demand and cannot support live broadcast; second, it lacks flexible session control and any intelligent rate-regulation mechanism; third, the client needs enough hard disk space to cache the entire file, which is unsuitable for embedded devices. Streaming systems based on RTSP/RTP are designed specifically for large-scale streaming applications such as live broadcast and VOD, and they require dedicated streaming server support. Compared with HTTP progressive download, they have the following advantages:
Real-time playback. Whereas a progressive-download client must buffer a certain amount of media data before playback can begin, an RTSP/RTP client can start playing almost as soon as it receives the first frame of media data.
Support for progress-bar seeking, fast forward, fast rewind, and other advanced VCR control functions.
A smooth, fluent audio and video playback experience. Throughout an RTSP streaming session, the client and server stay in session contact, and the server can respond dynamically to client feedback. When congestion leaves too little bandwidth available, the server can intelligently adjust its sending rate, for example by reducing the frame rate appropriately. Moreover, with the UDP transport protocol, when the client detects packet loss it can ask the server to selectively retransmit only the important data, such as keyframes, while ignoring other lower-priority data, so that the client can still play continuously and smoothly under poor network conditions.
Support for large-scale user growth. The typical Web server is optimized for downloads of large numbers of small HTML files and has no performance edge when delivering huge media files, whereas a professional streaming server is optimized for high-volume media file disk reads, memory buffering, and network sending, and can support large-scale user access.

Support for network-layer multicast. Multicast lets a single media stream share one network path while being delivered to multiple clients, which can greatly reduce network bandwidth requirements; this functionality is achievable only with a dedicated streaming media server.

Content copyright protection. In progressive-download mode, the downloaded file is cached in a temporary directory on the client's hard disk, and the user can copy it elsewhere for later playback. In an RTSP/RTP streaming system, the client keeps only a small decoding buffer in memory, media data is discarded as soon as it is consumed, and interception or copying by the user is much harder.
In addition, DRM and other copyright protection systems can be layered on for encryption. Nevertheless, RTSP/RTP streaming systems still run into many problems in practical deployments, especially in mobile Internet applications, mainly the following:
Compared with Web servers, streaming servers are more complex to install, configure, and maintain; in particular, operators that already run infrastructure such as a CDN (content delivery network) face a great deal of work to make it support RTSP/RTP streaming servers.
The logic of the RTSP/RTP protocol stack is relatively complicated, so supporting RTSP/RTP in client hardware and software is harder than supporting HTTP, especially on embedded terminals.
The network port used by the RTSP protocol (554) may be blocked by firewalls and NAT in some users' networks, making the service unusable. Although some streaming servers can be configured to tunnel RTSP over HTTP port 80, this is not especially convenient to deploy in practice.
Apple's HTTP Live Streaming was designed precisely to address these issues. Its main characteristics are abandoning the dedicated streaming server and returning to a standard Web server for media delivery; segmenting large volumes of continuous media data into many small files to transfer, which suits the file-delivery character of Web servers; and using a continuously updated lightweight index file to steer the download and playback of the segmented small media files, supporting live broadcast and on demand alike, as well as VCR-style session control operations. Using the HTTP protocol reduces the difficulty of deploying an HTTP Live Streaming system and simplifies the development of client software, especially for embedded mobile terminals. In addition, the file segmentation and index file mechanisms make bandwidth-adaptive stream switching, server failover, and media encryption more convenient. Compared with RTSP/RTP, the biggest drawback of HTTP Live Streaming is that it is not a true real-time streaming system: there is some inherent startup delay on both the server and the client. It is also aimed mainly at mobile multimedia applications for now, the recommended maximum video bitrate remains low, and support for higher bitrates, especially high-definition video, needs further exploration and validation. A comprehensive comparison of the three streaming protocols is shown in the table.
This article has introduced the basic methods and characteristics of three streaming protocols, HTTP progressive download, RTSP/RTP, and HTTP Live Streaming, describing the HTTP Live Streaming protocol in the most detail, and has compared and analyzed the three on that basis. Overall:
an HTTP progressive-download system is the easiest to deploy, but suits only small-scale, short-video on-demand applications;
the RTSP/RTP protocol stack suits large-scale, scalable, interactive real-time streaming applications, but requires dedicated streaming server support and is more complex to install and maintain;
HTTP Live Streaming can be deployed directly on the broadly established Web server network environment, without upgrading the network infrastructure, and is especially suitable for consumer-grade mobile Internet streaming applications whose real-time requirements are not too strict.
Text by rideronthewheel (a Jianshu author).
Original link: http://www.jianshu.com/p/5b0fa403b3ce
Copyright belongs to the author. Please contact the author for authorization before reprinting, and credit the Jianshu author.