Analysis and Implementation of HTTP Live Streaming Technology
I suddenly realized that I had not written a blog post for more than half a year, which is a little embarrassing. In fact, things kept happening at home in the second half of 2012 and there was simply no time. After Chinese New Year I finally found some time at work, so I am summarizing some recent technical work in this article to share with you.
A few days ago, driven by a project requirement, I spent some time studying HTTP Live Streaming (HLS) and implemented an HLS encoder, HlsLiveEncoder, written in C++ of course. Its function is to capture the camera and microphone, perform H.264 video encoding and AAC audio encoding in real time, and generate standard TS segment files and an m3u8 index file according to the HLS protocol specification. Combined with a third-party HTTP server (such as nginx), my HlsLiveEncoder provides HTTP Live Streaming live broadcast, and it passed testing on an iPhone. I will write down some of what I learned here.
Analysis of HLS Technical Points
HTTP Live Streaming (HLS) is an HTTP-based streaming media transmission protocol implemented by Apple Inc. It supports both live and on-demand streaming, and is mainly used to provide audio/video live broadcast and on-demand solutions for iOS devices (such as the iPhone and iPad). HLS on-demand is basically ordinary segmented HTTP on-demand; the difference is that the segments are very small. Implementing HLS on-demand mainly comes down to splitting the media file into segments, and many open-source tools for this already exist, so I will not discuss it here. I will only talk about HLS live streaming technology.
Compared with common streaming live protocols such as RTMP, RTSP, and MMS, the biggest difference of HLS live broadcasting is that the client does not obtain a complete data stream. Instead, the HLS protocol stores the live data stream on the server as a sequence of continuous, short media files (in MPEG-TS format), and the client continuously downloads and plays these small files. Because the server always generates new small files from the latest live data, the client achieves live playback simply by playing the files obtained from the server in order. From this we can see that HLS essentially implements live streaming by means of on-demand (VOD) technology. Because data is transmitted over HTTP, there is no need to worry about firewalls or proxies, and because the segment files are very short, the client can quickly select and switch bit rates to adapt to different bandwidth conditions. However, these same characteristics mean that the latency of HLS is generally higher than that of ordinary live streaming protocols.
Based on the above, implementing HTTP Live Streaming live broadcast requires researching and implementing the following key technical points:
- Collecting data from the video and audio sources
- Encoding the raw data with H.264 and AAC
- Encapsulating the encoded video and audio data into MPEG-TS packets
- The HLS segmentation strategy and m3u8 index file generation
- The HTTP transmission protocol
Of these, the first two (capture and encoding) have been covered in my earlier articles, and the last point can be handled by an existing HTTP server. The key, therefore, is to implement the TS encapsulation and the HLS segmentation and m3u8 generation.
Program Framework and Implementation
Through the above analysis, the logic and flow of the HlsLiveEncoder live encoder are basically clear: start separate audio and video encoding threads, capture audio and video with DirectShow (or similar) technology, and call libx264 and libfaac for video and audio encoding respectively. The two encoding threads encode the data in real time and store it into MPEG-TS segment files; each time a segment file is completed, the m3u8 index file is updated. [Figure: HlsLiveEncoder architecture diagram, shown in the original post]
After HlsLiveEncoder receives video and audio data, it must first determine whether the current segment should end and a new segment should be created, so that TS segments are generated continuously. Note that a new segment should start with a key frame, otherwise the player may fail to decode. [The core code appeared here in the original post.]
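The original code block did not survive in this copy. As a hedged sketch (my own illustration with assumed names, not the author's implementation), the decision to cut over to a new segment could look like this, using a 10-second target duration:

```cpp
#include <cassert>

// Illustrative sketch only: the names and the 10-second target are my
// assumptions, not taken from the author's code.
const double TARGET_DURATION_SEC = 10.0;

// Decide whether an incoming video frame should close the current TS
// segment and open a new one. A new segment must begin with a key frame,
// so we only cut when the target duration has elapsed AND the frame is
// an IDR/key frame.
bool shouldStartNewSegment(double segmentStartPts,
                           double framePts,
                           bool isKeyFrame)
{
    return isKeyFrame &&
           (framePts - segmentStartPts) >= TARGET_DURATION_SEC;
}
```

The encoder would evaluate this for every encoded video frame; when it returns true, the current TS file is closed, a new one is opened starting with that key frame, and the m3u8 index is updated.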
The TsMuxer interface is also relatively simple. [Its declaration appeared here in the original post.]
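The declaration was not preserved here; the following is a hypothetical sketch of what such an interface might look like (names, signatures, and stubbed bodies are my assumptions, not the author's actual TsMuxer):

```cpp
#include <cstdint>
#include <string>

// Hypothetical TS muxer interface sketch. Bodies are stubbed; a real
// implementation would emit 188-byte TS packets: PAT/PMT tables first,
// then PES-wrapped H.264 and AAC frames on their respective PIDs.
class TsMuxer {
public:
    // Open a new .ts segment file and write the PAT/PMT tables.
    bool open(const std::string& tsFileName) {
        fileName_ = tsFileName;  // stub: real code opens the file here
        return true;
    }

    // Pack one encoded H.264 frame into PES packets, then TS packets.
    void writeVideoFrame(const uint8_t* /*data*/, int /*size*/,
                         int64_t /*pts*/) { /* stub */ }

    // Pack one encoded AAC frame the same way, on the audio PID.
    void writeAudioFrame(const uint8_t* /*data*/, int /*size*/,
                         int64_t /*pts*/) { /* stub */ }

    // Flush remaining packets and close the segment file.
    void close() { /* stub */ }

private:
    std::string fileName_;
};
```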
HLS Segmentation Strategy and m3u8

1. Segmentation Strategy
- The HLS segmentation strategy basically follows the recommendation of one segment every 10 seconds; of course, the duration recorded for each segment must be based on its actual length after splitting.
- Generally, for caching and similar reasons, the index file keeps the addresses of only the latest three segments and is updated in the form of a sliding window.
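To make the sliding-window update concrete, here is a small illustrative sketch (my own code, not from the original post) that builds a live m3u8 playlist holding the latest three segments:

```cpp
#include <deque>
#include <sstream>
#include <string>

// Illustrative sliding-window m3u8 generator (my own sketch).
// `segments` holds the latest segment file names; `firstSequence` is the
// media sequence number of the oldest segment still listed.
std::string buildM3u8(const std::deque<std::string>& segments,
                      int firstSequence,
                      int targetDuration)
{
    std::ostringstream out;
    out << "#EXTM3U\n";
    out << "#EXT-X-TARGETDURATION:" << targetDuration << "\n";
    out << "#EXT-X-MEDIA-SEQUENCE:" << firstSequence << "\n";
    for (const std::string& seg : segments) {
        // Simplification: real code writes each segment's measured
        // duration in its #EXTINF line, not the target duration.
        out << "#EXTINF:" << targetDuration << ",\n";
        out << seg << "\n";
    }
    // No #EXT-X-ENDLIST: a live playlist is still growing.
    return out.str();
}
```

When a new segment completes, the oldest name is popped from the front of the deque and the sequence number is incremented, so the three-entry window slides forward.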
2. Introduction to m3u8 files
m3u8 is the index file format used by HTTP Live Streaming live broadcast. An m3u8 file can basically be regarded as a .m3u file; the difference is that m3u8 files use UTF-8 character encoding.
The main tags in a live m3u8 index file are:
- #EXTM3U: M3U file header; must be on the first line
- #EXT-X-MEDIA-SEQUENCE: sequence number of the first TS segment in the playlist
- #EXT-X-TARGETDURATION: maximum duration of each TS segment
- #EXT-X-ALLOW-CACHE: whether caching is allowed
- #EXT-X-ENDLIST: m3u8 file terminator
- #EXTINF: extra info; per-segment information such as duration and bandwidth
A simple m3u8 index file
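The example file itself was lost in this copy; a minimal live playlist (with hypothetical segment names and values) might look like this:

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:3
#EXT-X-ALLOW-CACHE:NO
#EXTINF:10,
seg3.ts
#EXTINF:10,
seg4.ts
#EXTINF:10,
seg5.ts
```

Note that there is no #EXT-X-ENDLIST tag, because the live playlist is still being appended to.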
Running Effect
Start HlsLiveEncoder in the nginx working directory, then connect with the VLC player for playback.
iPhone playback effect
Haibindev network technology. Contact via QQ for cooperation. (When reprinting, please credit the author and the source.)
Posted on Haibindev