Streaming Media Basics


1. Composition of the live streaming media system

A live streaming media system is generally composed of cameras, encoders, image servers, and image display terminals. The camera captures images, the encoder converts and compresses the captured image data, the image server stores the image data, and the display terminal reconstructs images from that data.

2. Working modes of the live streaming media system

A live streaming media system can work in one of two modes: server relay mode or encoder direct mode.

In server relay mode, the encoder sends the transformed and compressed image data to the server; the server stores the received data and then forwards it to the display terminals. This is the method most online live broadcasts use today. However, because the image data is stored before it is forwarded, playback lags the source by tens of seconds.

In encoder direct mode, the encoder sends the transformed and compressed image data to the server and to the display terminals at the same time; the server stores the received data for later use. Because the data reaches the display terminals directly, playback is not delayed by storage and forwarding.

3. Data Transmission Mode of the live streaming media system

In relay mode, image data can be transmitted from the server to the display terminals by either unicast or multicast. Likewise, in direct mode, image data can be transmitted from the encoder to the display terminals by either unicast or multicast.

Unicast means that the sender transmits a separate copy of the data to each receiver: a one-to-one transmission mode. As the number of receivers grows, the load on the server (relay mode) or encoder (direct mode) and on the routers along the path grows with it, which places high demands on sender performance and network bandwidth. Unicast is therefore better suited to video on demand (VoD).
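The scaling argument above can be made concrete with a little arithmetic. The stream bitrate and audience size below are made-up illustration numbers, not figures from the article:

```python
# Sketch: sender uplink bandwidth under unicast vs. multicast.
# The 4 Mbps stream and 1000 viewers are hypothetical example values.

def unicast_uplink_mbps(stream_mbps: float, receivers: int) -> float:
    """Under unicast the sender emits one full copy of the stream per receiver."""
    return stream_mbps * receivers

def multicast_uplink_mbps(stream_mbps: float, receivers: int) -> float:
    """Under multicast the sender emits a single copy regardless of audience size."""
    return stream_mbps if receivers > 0 else 0.0

peak_unicast = unicast_uplink_mbps(4.0, 1000)      # 4000.0 Mbps of uplink
peak_multicast = multicast_uplink_mbps(4.0, 1000)  # 4.0 Mbps of uplink
```

This is why the article recommends unicast only for on-demand delivery, where audiences per stream are small, and multicast for large live audiences.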

Multicast refers to a one-to-N transmission mode in which the sender transmits data to a specific group of receivers at the same time. Because the sender emits only one copy of each packet, network traffic does not increase as the number of receivers grows.

Multicast therefore imposes low performance requirements on the server (relay mode) or encoder (direct mode) and uses network bandwidth sparingly, which makes it possible to transmit large, high-definition images smoothly in real time. However, because multicast runs over UDP rather than TCP, packet loss is inevitable; FEC (forward error correction) and DCCP (Datagram Congestion Control Protocol) can be used to reduce the impact of lost data.
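At the receiver, joining a multicast group is a socket-level operation. The sketch below shows a minimal IPv4 receiver-side join using the standard `IP_ADD_MEMBERSHIP` option; the group address and port are arbitrary example values, not ones from the article:

```python
import socket
import struct

# Example values only: an administratively scoped IPv4 multicast group.
GROUP = "239.1.1.1"
PORT = 5004

def open_multicast_receiver(group: str, port: int) -> socket.socket:
    """Open a UDP socket bound to `port` and join `group` on it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    # IP_ADD_MEMBERSHIP takes the group address plus the local interface
    # address; 0.0.0.0 lets the kernel choose the interface.
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock
```

After the join, every datagram sent to the group arrives via an ordinary `sock.recvfrom()` call, exactly as with unicast UDP.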

4. Problems with multicast over IPv4

On the IPv4 Internet, multicast addresses are Class D addresses. Only organizations that have obtained an AS number can use the globally scoped multicast addresses defined by GLOP addressing (RFC 2770), and the IPv4 multicast address space is narrow. The IPv4 Internet is therefore poorly suited to commercial use of multicast.

However, multicast can still be used on an IPv4 intranet. Suppose an enterprise has several departments, each with its own LAN, and the departments are connected to one another over the Internet. To achieve cross-region multicast within the intranet, protocol conversion is performed wherever traffic crosses the Internet: a protocol-conversion server is placed in each LAN, multicast traffic is converted to unicast before it crosses the Internet, and converted back from unicast to multicast on the far side. In this way the whole intranet can carry multicast.
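The gateway logic described above can be sketched as two pure addressing functions. The peer-gateway and group addresses below are hypothetical, and real gateways would also manage sockets, membership, and framing; this only illustrates the fan-out and re-addressing step:

```python
# Sketch of the multicast<->unicast conversion described above.
# All addresses are illustrative (203.0.113.0/24 is a documentation range).

REMOTE_GATEWAYS = [("203.0.113.10", 5004)]  # peer conversion servers, one per LAN
LOCAL_GROUP = ("239.1.1.1", 5004)           # this LAN's multicast group

def multicast_to_unicast(payload: bytes) -> list:
    """Outbound: one packet heard on the local group fans out to one
    unicast packet per peer gateway across the Internet."""
    return [(payload, peer) for peer in REMOTE_GATEWAYS]

def unicast_to_multicast(payload: bytes) -> tuple:
    """Inbound: a packet arriving from a peer gateway is re-addressed
    to the local multicast group."""
    return (payload, LOCAL_GROUP)
```

Note that the end hosts on each LAN see ordinary multicast; only the gateways know the stream crossed the Internet as unicast.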

5. IPv6 makes multicast at your fingertips

The multicast address space of IPv6 is very large, and a multicast address can be derived from the prefix of a global unicast address (RFC 3306), so a global multicast address is easy to obtain. With a global multicast address, video services can be offered on the IPv6 Internet in the form of multicast.
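The RFC 3306 derivation is mechanical enough to show directly. The sketch below builds a unicast-prefix-based multicast address from a /64 and a 32-bit group ID; the prefix used is the IPv6 documentation prefix and the group ID is an arbitrary example:

```python
import ipaddress

def rfc3306_multicast(prefix: str, group_id: int,
                      scope: int = 0xE) -> ipaddress.IPv6Address:
    """Build a unicast-prefix-based IPv6 multicast address (RFC 3306).

    Layout: ff3S (flags P=1,T=1; S = scope nibble, 0xE = global),
    one reserved zero byte, one prefix-length byte, the 64-bit network
    prefix, then a 32-bit group ID.
    """
    net = ipaddress.IPv6Network(prefix)
    if net.prefixlen > 64:
        raise ValueError("RFC 3306 allows prefixes up to /64")
    raw = bytearray(16)
    raw[0] = 0xFF
    raw[1] = 0x30 | scope                       # flags=0011, scope
    raw[2] = 0x00                               # reserved
    raw[3] = net.prefixlen                      # prefix length in bits
    raw[4:12] = net.network_address.packed[:8]  # 64-bit network prefix
    raw[12:16] = group_id.to_bytes(4, "big")    # 32-bit group ID
    return ipaddress.IPv6Address(bytes(raw))

# Example: 2001:db8::/64 with group ID 0x1234 -> ff3e:40:2001:db8::1234
addr = rfc3306_multicast("2001:db8::/64", 0x1234)
```

Because the prefix is globally unique to its owner, the derived multicast address is globally unique too, with no registry involvement.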

The Real-Time Streaming Protocol (RTSP) is an application-level protocol for controlling the delivery of real-time data. RTSP provides an extensible framework for controlled, on-demand delivery of real-time data such as audio and video; data sources can be live feeds or stored clips. The protocol controls multiple data-delivery sessions, provides a means of choosing the delivery channel (such as UDP, multicast UDP, or TCP), and provides a means of choosing delivery mechanisms based on RTP.
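RTSP requests are text messages framed much like HTTP. The sketch below serializes a minimal request; the URL, CSeq value, and Transport header are example values, not taken from any particular server:

```python
# Sketch: serializing a minimal RTSP/1.0 request (RFC 2326 framing:
# request line, CRLF-delimited headers, blank line at the end).

def rtsp_request(method: str, url: str, cseq: int, headers=None) -> bytes:
    lines = [f"{method} {url} RTSP/1.0", f"CSeq: {cseq}"]
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

# Example: a SETUP request choosing unicast RTP over UDP as the channel,
# which is exactly the transport-selection role described above.
req = rtsp_request("SETUP", "rtsp://example.com/stream/track1", 2,
                   {"Transport": "RTP/AVP;unicast;client_port=5004-5005"})
```

The `Transport` header is where the channel choice (UDP, multicast UDP, TCP) that the paragraph mentions is actually negotiated.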

 

Generally speaking, playing a video involves four steps:
1. Access: read or obtain the data.
2. Demux: demultiplexing, i.e. separating the audio and video (and possibly subtitles) that are normally packaged together.
3. Decode: decoding, for both the audio and the video streams.
4. Output: likewise split into audio output and video output (aout and vout).
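The four steps above can be sketched as a toy pipeline. The stage names follow the list, but the data format (one-byte `A`/`V` tags on text payloads) is invented purely to make the flow runnable; it has nothing to do with VLC's real module interfaces:

```python
# Toy sketch of the access -> demux -> decode -> output pipeline.
from typing import Iterator

def access(packets: list) -> Iterator[bytes]:
    """Stage 1: obtain raw data (here: iterate an in-memory 'network' feed)."""
    yield from packets

def demux(stream: Iterator[bytes]) -> tuple:
    """Stage 2: split the combined stream into audio and video.
    Toy rule: packets tagged b'A' are audio, anything else is video."""
    audio, video = [], []
    for pkt in stream:
        (audio if pkt[:1] == b"A" else video).append(pkt[1:])
    return audio, video

def decode(frames: list) -> list:
    """Stage 3: turn 'compressed' data into raw samples (toy: decode text)."""
    return [f.decode("ascii") for f in frames]

def output(samples: list) -> str:
    """Stage 4: render (toy: join samples into one string)."""
    return " ".join(samples)

audio, video = demux(access([b"Ahello", b"Vframe1", b"Aworld"]))
```

In a real player each stage runs concurrently with buffering between stages, but the data dependency is exactly this chain.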
Take a UDP-multicast MPEG-TS stream as an example. The access part receives the multicast stream from the network and places it in VLC's memory buffer; the access module handles IP-level details such as IPv6, the multicast address and protocol, and the port. If RTP is detected (RTP simply adds a 12-byte fixed header on top of UDP), it also parses the RTP header information. For this part, see the VLC source file modules/access/udp.c.
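The 12-byte fixed RTP header mentioned above has a documented layout (RFC 3550): version, flags, payload type, sequence number, timestamp, and SSRC. A minimal parser, offered as an illustration rather than VLC's actual code:

```python
import struct

def parse_rtp_header(pkt: bytes) -> dict:
    """Parse the 12-byte fixed RTP header (RFC 3550). CSRC entries and
    extension headers, if present, follow and are not handled here."""
    if len(pkt) < 12:
        raise ValueError("packet shorter than the fixed RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", pkt[:12])
    return {
        "version": b0 >> 6,            # always 2 for current RTP
        "padding": bool(b0 & 0x20),
        "extension": bool(b0 & 0x10),
        "csrc_count": b0 & 0x0F,
        "marker": bool(b1 & 0x80),
        "payload_type": b1 & 0x7F,     # 33 (MP2T) for MPEG-TS over RTP
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }
```

For an MPEG-TS stream carried over RTP, the payload type is 33 (MP2T) and the TS packets begin immediately after the header.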

The demux part first parses the TS stream information. The TS format is part of the MPEG-2 standard; in brief, a TS stream is a sequence of (usually) 188-byte packets. One TS stream can carry multiple programs, and one program can contain several elementary streams (ES) of video, audio, and text, each with its own PID. So that receivers can find these ES, TS reserves some fixed PIDs on which the program and ES information tables, the PAT and PMT, are sent at intervals. For more information about the TS format, search the web.
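The 188-byte packet structure above is easy to see in code: every packet starts with a 0x47 sync byte, and the 13-bit PID sits in the next two bytes. A minimal header parser, as an illustration (not VLC's or libdvbpsi's code):

```python
def parse_ts_header(packet: bytes) -> dict:
    """Parse the 4-byte header of a 188-byte MPEG-TS packet."""
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a sync-aligned 188-byte TS packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]  # 13-bit PID
    return {
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid": pid,                 # PID 0x0000 carries the PAT;
                                    # the PAT in turn lists the PMT PIDs
        "continuity_counter": packet[3] & 0x0F,
    }
```

A demuxer loops over packets, watches PID 0 for the PAT, follows it to the PMT, and from there learns which PIDs carry each program's audio and video ES.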

VLC uses an independent library, libdvbpsi, to parse and encode TS streams; for the code that calls it, see the VLC source file modules/demux/ts.c.
In fact, demuxing is needed because audio and video are encoded independently during production, yielding separate data streams. To ease transmission, they must be combined in some defined way, and that is where the various container (encapsulation) formats, and hence demuxers, come in.

The audio and video streams separated by the demuxer are sent to the audio decoder and video decoder respectively. Because raw audio and video occupy a great deal of space and are highly redundant, the data is usually compressed at creation time using one of the well-known audio/video coding formats, such as MPEG-1 (VCD), MPEG-2 (DVD), MPEG-4, H.264, or RMVB. The decoders restore the compressed data to raw audio and video. VLC decodes MPEG-2 with an independent library, libmpeg2; the source file that calls it is modules/codec/libmpeg2.c. VLC's codec modules all live under the modules/codec directory, including the famous, huge FFmpeg.

 
