Reprint: please credit the source: http://www.cnblogs.com/fangkm/p/4401075.html
The previous two posts covered the WebRTC audio and video capture modules; the next step is the key audio and video encoding modules. Before getting to encoding, however, we need to introduce the channel concept: every WebRTC data transmission flow is encapsulated in a channel object. The detailed UML diagram is as follows:
MediaChannel and its derived classes encapsulate the codec logic and the RTP/RTCP packing and unpacking for the media being transmitted; the concrete objects are created by the corresponding media engine classes. The final implementation class of the video channel, WebRtcVideoChannel2, is created by WebRtcVideoEngine2, and the final implementation class of the audio channel, WebRtcVoiceMediaChannel, is created by WebRtcVoiceEngine.
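To make the engine-as-factory relationship concrete, here is a minimal, hypothetical C++ sketch. The class names follow the ones mentioned above, but the methods and signatures are illustrative only and do not reproduce the real WebRTC API.

// Illustrative sketch only -- not the actual WebRTC class definitions.
#include <memory>

class VideoMediaChannel {  // stands in for the video-side MediaChannel role
 public:
  virtual ~VideoMediaChannel() = default;
  // Encode an outgoing frame and packetize it into RTP (per the description above).
  virtual bool SendFrame(/* captured frame */) = 0;
};

class WebRtcVideoChannel2 : public VideoMediaChannel {
 public:
  bool SendFrame(/* captured frame */) override {
    // 1. hand the frame to the encoder
    // 2. wrap the encoded payload into RTP packets
    // 3. push the packets to the network interface (see the next sketch)
    return true;
  }
};

class WebRtcVideoEngine2 {
 public:
  // The engine acts as the factory: each call yields one media channel
  // that owns the codec and RTP/RTCP logic for a single stream.
  std::unique_ptr<VideoMediaChannel> CreateChannel() {
    return std::make_unique<WebRtcVideoChannel2>();
  }
};

The audio side follows the same pattern, with WebRtcVoiceEngine producing WebRtcVoiceMediaChannel objects.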
The operating interfaces the channel exposes to the outside, and which the ChannelManager class manages, are BaseChannel and its derived classes. Through these classes an external module can set the audio/video capture source (such as VideoCapturer) and specify the renderer (such as AudioRenderer/VideoRenderer) for audio and video data received over the network. They wrap a layer on top of MediaChannel and its derived classes: BaseChannel implements MediaChannel's NetworkInterface to carry out the sending of the packed RTP/RTCP packets, and the actual network send request is ultimately delegated to a TransportChannel object. The TransportChannel logic will be covered later when the network layer is introduced.
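The delegation chain described above can be sketched as follows. Again this is a simplified, hypothetical illustration: the class and method names echo the text, but the signatures are not the real WebRTC API.

// Illustrative sketch only -- not the actual WebRTC class definitions.
#include <cstddef>
#include <cstdint>
#include <vector>

class TransportChannel {
 public:
  // Performs the actual network send (ICE/DTLS details are out of scope here).
  bool SendPacket(const uint8_t* data, size_t len) {
    return data != nullptr && len > 0;  // placeholder for the real send
  }
};

// The interface through which MediaChannel hands RTP/RTCP packets to "the network".
class NetworkInterface {
 public:
  virtual ~NetworkInterface() = default;
  virtual bool SendRtp(const std::vector<uint8_t>& packet) = 0;
  virtual bool SendRtcp(const std::vector<uint8_t>& packet) = 0;
};

// BaseChannel wraps a MediaChannel, fulfils its NetworkInterface,
// and delegates the real send to a TransportChannel.
class BaseChannel : public NetworkInterface {
 public:
  explicit BaseChannel(TransportChannel* transport) : transport_(transport) {}

  bool SendRtp(const std::vector<uint8_t>& packet) override {
    return transport_->SendPacket(packet.data(), packet.size());
  }
  bool SendRtcp(const std::vector<uint8_t>& packet) override {
    return transport_->SendPacket(packet.data(), packet.size());
  }

 private:
  TransportChannel* transport_;  // not owned; belongs to the network layer
};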
The next article introduces WebRtcVideoEngine2 and the video channel class WebRtcVideoChannel2 it creates.