WebRTC Introductory Article

Source: Internet
Author: User
Tags: node, server
What is WebRTC?
As we all know, browsers cannot directly establish channels with each other for communication; everything is relayed through a server. For example, suppose there are two clients, A and B, that want to communicate. First A and the server, and B and the server, must each establish a channel. When A sends a message to B, A first sends it to the server, and the server relays it on to B, and vice versa. A message between A and B thus passes through two channels, and the efficiency of the communication is constrained by the bandwidth of both. At the same time, such channels are not suited to streaming data. How to establish point-to-point transmission between browsers has long troubled developers. That is why WebRTC was born.

WebRTC is an open-source project designed to let browsers provide a simple JavaScript interface for real-time communication (RTC). Put simply, it lets the browser offer a JS API for instant communication. Unlike WebSocket, where the channel runs between a browser and a WebSocket server, WebRTC uses a series of signaling steps to create a channel between browser and browser (peer-to-peer), which can carry any data without going through the server. And through its MediaStream implementation, WebRTC lets the browser access the device's camera and microphone, so browsers can transmit audio and video.

Three APIs
WebRTC implements three APIs:
* MediaStream: through the MediaStream API, synchronized video and audio streams can be obtained from the device's camera and microphone
* RTCPeerConnection: the WebRTC component used to build stable, efficient stream transfer between two points
* RTCDataChannel: establishes a high-throughput, low-latency channel between browsers (point-to-point) for transmitting arbitrary data
Here is a general introduction to each of the three APIs:

MediaStream (getUserMedia)
The MediaStream API gives WebRTC the ability to obtain video and audio stream data from the device's camera and microphone.

How to invoke it:
navigator.getUserMedia() accepts three parameters:
1. A constraints object (covered in its own section below)
2. A success callback, which is passed a stream object if the call succeeds
3. A failure callback, which is passed an error object if the call fails
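A minimal sketch of the three-parameter call. The `nav` argument and the function name are illustrative (in a real page you would simply pass `navigator`); the sketch only makes the shape of the call explicit:

```javascript
// Sketch of the callback-style navigator.getUserMedia() call.
// `requestMedia` and `nav` are illustrative names, not part of the API.
function requestMedia(nav, onSuccess, onError) {
    var constraints = { "video": true, "audio": true }; // 1. constraints object
    nav.getUserMedia(constraints,
                     onSuccess,   // 2. success callback, receives a stream
                     onError);    // 3. failure callback, receives an error
    return constraints;
}
```

In modern browsers, the promise-based navigator.mediaDevices.getUserMedia(constraints) is the preferred form.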

Browser compatibility:

Because browsers implemented the method before the standard version settled, they often ship it behind a vendor prefix, so a compatible version looks like this:
var getUserMedia = (navigator.getUserMedia ||
                    navigator.webkitGetUserMedia ||
                    navigator.mozGetUserMedia ||
                    navigator.msGetUserMedia);

A super simple example

Here is a super simple example showing the getUserMedia effect:

<!doctype html>  

Save this content in an HTML file and serve it from a server. Open it with a newer version of Opera, Firefox, or Chrome; the browser will pop up a prompt asking whether to allow access to the camera and microphone. Choose allow, and the camera's footage will appear in the page.

Note that the HTML file must be served from a server, otherwise you will get a NavigatorUserMediaError showing PermissionDeniedError. The simplest way is to cd to the directory containing the HTML file and run python -m SimpleHTTPServer (with Python installed), then open http://localhost:8000/{file name}.html in the browser.

After obtaining the stream with getUserMedia, you need to output it, typically by binding it to a video tag. Use window.URL.createObjectURL(localMediaStream) to create a blob URL that can be played via the video tag's src attribute. Note that the autoplay attribute must be added, otherwise only a single frame is displayed.
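The binding step can be sketched as a small helper. All names here are illustrative; `urlImpl` would be window.URL in a browser, and the video element must carry the autoplay attribute as noted above:

```javascript
// Bind a media stream to a video element via a blob URL (a sketch;
// bindStreamToVideo, videoElement and urlImpl are illustrative names).
function bindStreamToVideo(videoElement, stream, urlImpl) {
    videoElement.src = urlImpl.createObjectURL(stream); // a blob: URL
    return videoElement.src;
}

// In a browser:
// bindStreamToVideo(document.querySelector("video"), localMediaStream, window.URL);
```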

After the stream is created, its unique identifier can be obtained from the label property, and the getAudioTracks() and getVideoTracks() methods return arrays of the stream's track objects (if a kind of track is not enabled for a stream, the corresponding array is empty).

The constraints object (Constraints)

The constraints object can be set in getUserMedia() and in RTCPeerConnection's addStream method. It is how WebRTC specifies what kind of stream to accept, and it can define the following properties:
* video: whether to accept a video stream
* audio: whether to accept an audio stream
* minWidth: minimum width of the video stream
* maxWidth: maximum width of the video stream
* minHeight: minimum height of the video stream
* maxHeight: maximum height of the video stream
* minAspectRatio: minimum aspect ratio of the video stream
* maxAspectRatio: maximum aspect ratio of the video stream
* minFrameRate: minimum frame rate of the video stream
* maxFrameRate: maximum frame rate of the video stream
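Putting these properties together, a hypothetical constraints object might look like the following. The mandatory/optional grouping follows the legacy form; exact property support varies by browser, and the specific values are illustrative:

```javascript
// A hypothetical constraints object in the legacy mandatory/optional form.
var constraints = {
    "audio": true,
    "video": {
        "mandatory": {
            "minWidth": 320, "minHeight": 240,   // smallest acceptable frame
            "maxWidth": 1280, "maxHeight": 720   // largest acceptable frame
        },
        "optional": [{ "minFrameRate": 30 }]     // preferred, not required
    }
};
```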

RTCPeerConnection

WebRTC uses RTCPeerConnection to pass streaming data between browsers. This channel is point-to-point and does not need to be relayed through a server. But that does not mean we can abandon the server: we still need it to pass the signaling that sets up the channel. WebRTC does not define the protocol used to establish the channel; signaling is not part of the RTCPeerConnection API.

Signaling

Since there is no fixed signaling protocol, we can pass the signaling and establish the channel in any way (AJAX, WebSocket), using any protocol (SIP, XMPP). My demo, for example, uses the Node ws module and carries the signaling over WebSocket.

There are three kinds of messages that need to be exchanged through signaling:
* Session information: used to initialize the communication and to report errors
* Network configuration: such as IP addresses and ports
* Media capabilities: which encoders and resolutions the sending and receiving browsers can accept
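Since signaling is just application-defined messages, it can be carried as plain JSON text over whatever transport you chose. A sketch (the event names and helper are illustrative):

```javascript
// Wrap a signaling event as a JSON string for the transport channel.
// makeSignal and the "__offer"-style event names are illustrative.
function makeSignal(event, data) {
    return JSON.stringify({ "event": event, "data": data });
}

// e.g. over WebSocket: socket.send(makeSignal("__offer", { "sdp": desc }));
```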

The exchange of this information should be completed before the point-to-point stream transfer begins; a general architecture diagram is as follows:

Establishing a channel through the server

To repeat: even though WebRTC provides a point-to-point channel between browsers for data transmission, establishing this channel necessarily involves a server. WebRTC needs the server to support four functions:
1. User discovery and communication
2. Signaling transmission
3. NAT/firewall traversal
4. Acting as a relay server in case point-to-point communication cannot be established

NAT/firewall traversal technology

One of the common problems in establishing a point-to-point channel is NAT traversal. NAT traversal techniques are required to establish connections between hosts in private TCP/IP networks that sit behind NAT devices. In the past, this problem came up frequently in the VoIP field. Many NAT traversal techniques already exist, but none of them is perfect, because NAT behavior is non-standardized. Most of these techniques rely on a public server with an IP address reachable from anywhere in the world. In RTCPeerConnection, the ICE framework is used to ensure NAT traversal.

ICE, in full Interactive Connectivity Establishment, is a comprehensive NAT traversal framework that integrates various traversal techniques such as STUN and TURN (Traversal Using Relays around NAT, which penetrates NAT by relaying). ICE first uses STUN to try to establish a UDP-based connection; if that fails, it falls back to TCP (trying HTTP first, then HTTPS); if that still fails, ICE uses a TURN relay server.

We can use Google's STUN server, stun:stun.l.google.com:19302, so an architecture incorporating the ICE framework looks like this:

Browser compatibility

Again there is the prefix problem, handled the same way as above:

var PeerConnection = (window.PeerConnection ||
                      window.webkitPeerConnection00 ||
                      window.webkitRTCPeerConnection ||
                      window.mozRTCPeerConnection);
Create and use:

// Use Google's STUN server
var iceServer = {
    "iceServers": [{ "url": "stun:stun.l.google.com:19302" }]
};
// Browser-compatible getUserMedia
var getUserMedia = (navigator.getUserMedia ||
                    navigator.webkitGetUserMedia ||
                    navigator.mozGetUserMedia ||
                    navigator.msGetUserMedia);
// Browser-compatible PeerConnection
var PeerConnection = (window.PeerConnection ||
                      window.webkitPeerConnection00 ||
                      window.webkitRTCPeerConnection ||
                      window.mozRTCPeerConnection);
// WebSocket connection to the backend server
var socket = __createWebSocketChannel();
// Create a PeerConnection instance
var pc = new PeerConnection(iceServer);
// Send ICE candidates to the other client
pc.onicecandidate = function (event) {
    socket.send(JSON.stringify({ "event": "__ice_candidate", "data": { "candidate": event.candidate } }));
};
// When a remote media stream arrives, bind it to a video tag for output
pc.onaddstream = function (event) {
    someVideoElement.src = URL.createObjectURL(event.stream);
};
// Get the local media stream, bind it to a video tag for output,
// and send this media stream to the other client
getUserMedia.call(navigator, { "audio": true, "video": true }, function (stream) {
    // Offer and answer senders: set the local session description,
    // then send it over the signaling channel
    var sendOfferFn = function (desc) {
        pc.setLocalDescription(desc);
        socket.send(JSON.stringify({ "event": "__offer", "data": { "sdp": desc } }));
    };
    var sendAnswerFn = function (desc) {
        pc.setLocalDescription(desc);
        socket.send(JSON.stringify({ "event": "__answer", "data": { "sdp": desc } }));
    };
    // Bind the local media stream to a video tag for output
    myselfVideoElement.src = URL.createObjectURL(stream);
    // Add the stream that needs to be sent to the PeerConnection
    pc.addStream(stream);
    // The caller sends an offer; otherwise send an answer
    if (isCaller) {
        pc.createOffer(sendOfferFn);
    } else {
        pc.createAnswer(sendAnswerFn);
    }
}, function (error) {
    // Handle media stream creation failure
});
// Handle incoming signaling messages
socket.onmessage = function (event) {
    var json = JSON.parse(event.data);
    // If it is an ICE candidate, add it to the PeerConnection;
    // otherwise set the received description as the remote session description
    if (json.event === "__ice_candidate") {
        pc.addIceCandidate(new RTCIceCandidate(json.data.candidate));
    } else {
        pc.setRemoteDescription(new RTCSessionDescription(json.data.sdp));
    }
};
Example

Because signaling transmission is relatively complex and flexible, there is no short example here; you can jump straight to the comprehensive demo at the end.

RTCDataChannel
Since we can establish a point-to-point channel to transmit real-time video and audio streams, why not use the same channel to transmit other data as well? The RTCDataChannel API is used to do exactly that; based on it, we can transfer arbitrary data between browsers. A DataChannel is built on top of a PeerConnection and cannot be used on its own.

Using DataChannel

We can use channel = pc.createDataChannel("someLabel") to create a data channel on a PeerConnection instance and give it a label.

DataChannel is used much like WebSocket and offers several events:
* onopen
* onclose
* onmessage
* onerror

It also has several states, which can be read from readyState:
* connecting: the browser is trying to establish the channel
* open: established successfully; data can be sent with the send method
* closing: the browser is closing the channel
* closed: the channel has been closed

It exposes two methods:
* close(): closes the channel
* send(): sends data to the other side through the channel
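The events and methods above can be exercised with a small wiring sketch. The channel itself would come from pc.createDataChannel("someLabel") in a browser; the `wireChannel` helper and the `log` array are purely illustrative:

```javascript
// Attach handlers for the DataChannel's events (a sketch; wireChannel
// and log are illustrative, the event names are the real API).
function wireChannel(channel, log) {
    channel.onopen = function () {
        log.push("open");
        channel.send("hello");   // send is usable once the channel is open
    };
    channel.onmessage = function (event) {
        log.push("message: " + event.data);
    };
    channel.onerror = function () { log.push("error"); };
    channel.onclose = function () { log.push("close"); };
}
```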

Sending a file through the channel: the general idea

JavaScript already provides the File API to extract files from input[type='file'] elements, and FileReader to convert a file into a data URL. This also means that we can split the data URL into chunks and transfer a file through the channel.
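The chunking step can be sketched as follows. The helper name and chunk size are illustrative; the receiver simply concatenates the chunks back into the original data URL:

```javascript
// Split a data URL into fixed-size chunks so each piece fits in a
// DataChannel message (chunkDataURL and chunkSize are illustrative).
function chunkDataURL(dataURL, chunkSize) {
    var chunks = [];
    for (var i = 0; i < dataURL.length; i += chunkSize) {
        chunks.push(dataURL.slice(i, i + chunkSize));
    }
    return chunks;
}
```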

A comprehensive demo

Skyrtc-demo is a demo I wrote. It sets up a video chat room and can broadcast files; single-file transfers are also supported. The code is still rough and will be improved over time.

How to use it:
1. Download it, unzip, and cd into the directory
2. Run npm install to install the dependent libraries (express, ws, node-uuid)
3. Run node server.js, visit localhost:3000, and allow camera and microphone access
4. On another computer, open {server ip}:3000 in the browser (Chrome and Opera; not compatible with Firefox) and allow camera and microphone access
5. Broadcast a file: select a file in the lower-left corner and click the "Send File" button
6. Broadcast a message: type into the input box in the lower-left corner and click Send

Errors may occur; check the F12 console. A refresh (F5) usually resolves them.

Features: video/audio chat (with camera and microphone connected; at least a camera is required), file broadcasting (files can also be sent individually, and APIs are provided; broadcasting is built on individual transfers, multiple files can be sent at once; small files are fine, large files will eat up memory), and broadcasting chat messages.

From: http://segmentfault.com/a/1190000000436544#articleheader19
