HTTP Live Streaming (iOS live): technical analysis and implementation

Source: Internet
Author: User

I spent some time studying HTTP Live Streaming (HLS) and implemented an HLS encoder, HlsLiveEncoder, written in C++. It captures the camera and microphone, encodes video to H.264 and audio to AAC in real time, and, following the HLS protocol specification, generates standard segmented TS files and an m3u8 index file. Using HlsLiveEncoder together with a third-party HTTP server (for example, Nginx), I successfully implemented HTTP live streaming and verified playback on an iPhone. I am writing down some of what I learned here.

Key points of HLS technology

HTTP Live Streaming (HLS) is Apple Inc.'s HTTP-based streaming media protocol. It supports both live streaming and on-demand playback, and is used mainly on iOS to deliver live and on-demand audio and video to iOS devices such as the iPhone and iPad.

HLS on-demand is essentially ordinary segmented HTTP on-demand, except that its segments are very short. Implementing HLS on-demand comes down to segmenting the media files; many open-source tools already do this, so I will not discuss it further and will only talk about HLS live streaming.

Compared with common streaming live protocols such as RTMP, RTSP, and MMS, the biggest difference in HLS live streaming is that the client does not receive a single continuous data stream. Instead, the HLS server stores the live stream as a sequence of short media files (in MPEG-TS format), and the client continuously downloads and plays these small files, while the server keeps producing new files containing the latest live data. The client achieves live playback simply by playing the files obtained from the server in order. In essence, HLS implements live streaming using on-demand techniques.
Because the data is transmitted over HTTP, firewalls and proxies are not a concern at all, and because each segment file is very short, the client can quickly select and switch bitrates to adapt to varying bandwidth conditions. However, this same characteristic of HLS means that its latency will always be higher than that of ordinary streaming live protocols.

Based on the above understanding, implementing HTTP live streaming requires researching and implementing the following technical points:

1. Capturing data from the video and audio sources
2. H.264 encoding and AAC encoding of the raw data
3. Encapsulating the encoded video and audio into MPEG-TS packets
4. The HLS segmentation strategy and the m3u8 index file
5. The HTTP transmission protocol

I covered points 1 and 2 in a previous article, and point 5 can be handled by an existing HTTP server, so the implementation of points 3 and 4 is the key.

Program framework and implementation

From the above analysis, the logic and flow of the HlsLiveEncoder live encoder are basically clear: separate audio and video threads capture data via DirectShow (or another capture technology), then call libx264 and libfaac respectively for video and audio encoding. The two encoding threads encode audio and video data in real time, the output is written into MPEG-TS segment files according to a custom shard policy, and whenever a segment file is completed, the m3u8 index file is updated. (The original article includes a flow diagram here.)

When HlsLiveEncoder receives video and audio data, it must first determine whether the current shard should end and a new shard be created, so that TS shards are generated continuously. It is important that each new shard starts with a keyframe; otherwise the player may fail to decode. (The original article shows the core code here; the TsMuxer interface is also fairly simple.)
HLS segmentation strategy and m3u8

1. Segmentation strategy

a. The commonly recommended HLS segment length is 10 seconds per shard; the exact value recorded in the index depends on the actual length of each shard after splitting.
b. In general, for caching reasons, the index file keeps only the addresses of the three most recent shards, updated in a "sliding window" fashion.

2. The m3u8 file

m3u8 is the index file format used by HTTP Live Streaming. An m3u8 file can basically be thought of as an .m3u file, except that it uses UTF-8 character encoding. The main tags are:

#EXTM3U — m3u file header; must be on the first line
#EXT-X-MEDIA-SEQUENCE — the sequence number of the first TS shard in the playlist
#EXT-X-TARGETDURATION — the maximum duration of each TS shard
#EXT-X-ALLOW-CACHE — whether caching is allowed
#EXT-X-ENDLIST — m3u8 file terminator
#EXTINF — extra info: per-shard information such as duration and bandwidth

(The original article shows a simple m3u8 index file here.)

Running results

Start HlsLiveEncoder in the Nginx working directory and connect with the VLC player. Playback also works on the iPhone. (The original article includes screenshots of the playback here.)
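Since the original post's sample index file did not survive, here is a hand-written example of what a live playlist with a three-shard sliding window might look like (segment names and durations are illustrative, not output of HlsLiveEncoder). Note that a live playlist has no #EXT-X-ENDLIST tag, because new shards keep being appended:

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:21
#EXT-X-ALLOW-CACHE:NO
#EXTINF:9.8,
21.ts
#EXTINF:10.0,
22.ts
#EXTINF:10.1,
23.ts
```

On the next update, 21.ts would drop out of the window, 24.ts would be appended, and #EXT-X-MEDIA-SEQUENCE would advance to 22.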

