Using WebRTC to build a front-end video chat room: an introductory article

Source: Internet
Author: User
Tags: node, server


What is WebRTC?

As is well known, browsers cannot establish communication channels with each other directly; they must relay through a server. Suppose there are two clients, A and B, that want to communicate. First, A and the server, and B and the server, must each establish a channel. For A to send a message to B, A first sends the message to the server, which relays it on to B; the reverse direction works the same way. A message between A and B therefore passes through two channels, and the efficiency of the communication is limited by the bandwidth of both. Such a setup is also unsuitable for transmitting streaming data. How to establish point-to-point transmission between browsers had long troubled developers, and that is the problem WebRTC was born to solve.

WebRTC is an open-source project designed to let browsers provide a simple JavaScript interface for real-time communication (RTC). Put plainly, it lets the browser expose a JS API for instant communication. The channel created through this API is not like WebSocket, which connects a browser to a WebSocket server; instead, through a series of signaling steps, it establishes a browser-to-browser (peer-to-peer) channel that can carry arbitrary data without going through the server. WebRTC also implements MediaStream, through which the browser can access the device's camera and microphone, so the browser can transmit audio and video.

WebRTC is already in our browsers.

Naturally, such a useful feature has not been ignored by the major browser vendors. WebRTC can already be used in newer versions of Chrome, Opera, and Firefox, and Caniuse, the well-known browser compatibility site, provides a detailed compatibility table.

In addition, according to earlier news from 36Kr, Google has released Chrome 29 for Android with support for WebRTC and Web Audio, and the Android version of Opera has also begun to support WebRTC, allowing users to make voice and video calls without any plugins. In other words, Android now supports WebRTC as well.

Three interfaces

WebRTC implements three APIs, namely:
* MediaStream: through the MediaStream API you can obtain synchronized video and audio streams from the device's camera and microphone
* RTCPeerConnection: the component WebRTC uses to build a stable and efficient point-to-point stream between peers
* RTCDataChannel: establishes a high-throughput, low-latency point-to-point channel between browsers for transmitting arbitrary data

Here is a general introduction to each of these three APIs.

MediaStream (getUserMedia)

The MediaStream API gives WebRTC the ability to obtain video and audio streaming data from the device's camera and microphone.

Standard

The W3C getUserMedia standard (see the Resources section below).

How to Invoke

The stream can be obtained by calling navigator.getUserMedia(), which accepts three parameters:
1. A constraints object, which is covered separately below
2. A success callback, which is passed a stream object if the call succeeds
3. A failure callback, which is passed an Error object if the call fails

Browser compatibility

Because browser implementations differ, vendors usually prefix the method before shipping the standardized version, so a compatible version looks like this:

```javascript
var getUserMedia = (navigator.getUserMedia ||
                    navigator.webkitGetUserMedia ||
                    navigator.mozGetUserMedia ||
                    navigator.msGetUserMedia);
```
A super-simple example

Here is a very simple example to show the effect of getUserMedia:

```html
<!DOCTYPE html>
<html lang="zh-CN">
<head>
    <meta charset="UTF-8">
    <title>getUserMedia example</title>
</head>
<body>
    <video id="video" autoplay></video>
    <script type="text/javascript">
        var getUserMedia = (navigator.getUserMedia ||
                            navigator.webkitGetUserMedia ||
                            navigator.mozGetUserMedia ||
                            navigator.msGetUserMedia);
        getUserMedia.call(navigator, { video: true, audio: true }, function (localMediaStream) {
            var video = document.getElementById('video');
            video.src = window.URL.createObjectURL(localMediaStream);
            video.onloadedmetadata = function (e) {
                console.log('Label: ' + localMediaStream.label);
                console.log('AudioTracks', localMediaStream.getAudioTracks());
                console.log('VideoTracks', localMediaStream.getVideoTracks());
            };
        }, function (e) {
            console.log('reeeejected!', e);
        });
    </script>
</body>
</html>
```

Save this content in an HTML file and serve it from a server. Open it with a newer version of Opera, Firefox, or Chrome; the browser will pop up a dialog asking whether to allow access to the camera and microphone. Choose Allow, and the camera's picture will appear on the screen.

Note that the HTML file must be served from a server; otherwise you will get a NavigatorUserMediaError showing PermissionDeniedError. The simplest way is to cd into the directory containing the HTML file and run python -m SimpleHTTPServer (with Python installed), then open http://localhost:8000/{file name}.html in the browser.

After obtaining the stream with getUserMedia, you need to output it, typically by binding it to a video tag. Use window.URL.createObjectURL(localMediaStream) to create a Blob URL that the video tag can play via its src attribute. Note that the autoplay attribute must be added to the video tag, otherwise only a single frame will be captured.

Once the stream has been created, its unique identifier can be obtained from the label property, and the getAudioTracks() and getVideoTracks() methods return arrays of the stream's track objects (if a kind of stream is not enabled, its track array will be empty).

Constraints object

A constraints object can be passed to getUserMedia() and to RTCPeerConnection's addStream method. WebRTC uses it to specify what kind of stream to accept. It can define the following properties:
* video: whether to accept a video stream
* audio: whether to accept an audio stream
* minWidth: minimum width of the video stream
* maxWidth: maximum width of the video stream
* minHeight: minimum height of the video stream
* maxHeight: maximum height of the video stream
* minAspectRatio: minimum aspect ratio of the video stream
* maxAspectRatio: maximum aspect ratio of the video stream
* minFrameRate: minimum frame rate of the video stream
* maxFrameRate: maximum frame rate of the video stream

For details see Resolution Constraints in Web Real Time Communications, draft-alvestrand-constraints-resolution-00.
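As an illustration, here is a sketch of a constraints object in the mandatory/optional form used by that draft. The concrete numeric values are arbitrary examples, and exact support varies by browser (later versions of the spec changed this syntax):

```javascript
// A sketch of a constraints object; the numeric values are arbitrary examples.
// "mandatory" constraints must be satisfied; "optional" ones are best-effort.
var constraints = {
    audio: true,
    video: {
        mandatory: {
            minWidth: 320,
            minHeight: 240,
            maxWidth: 1280,
            maxHeight: 720
        },
        optional: [
            { minFrameRate: 15 },
            { maxFrameRate: 30 }
        ]
    }
};
```

Such an object would be passed as the first argument of getUserMedia in place of the simple { video: true, audio: true } used earlier.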

Rtcpeerconnection

WebRTC uses RTCPeerConnection to pass streaming data between browsers. This channel is point-to-point and does not need the server to relay the data itself. But that does not mean we can abandon the server: we still need it to pass signaling in order to set up the channel. WebRTC does not define a protocol for the signaling that establishes the channel; signaling is not part of the RTCPeerConnection API.

Signaling

Since no specific signaling protocol is defined, we can choose any transport (AJAX, WebSocket) and any protocol (SIP, XMPP) to pass the signaling and establish the channel. For example, the demo I wrote uses Node's ws module to pass signaling over WebSocket.

Three types of information need to be exchanged via signaling:
* Session information: used to initialize the communication and report errors
* Network configuration: such as IP addresses and ports
* Media capabilities: which codecs and resolutions the sender's and receiver's browsers can accept
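To make this concrete, the demo code later in this article wraps each signaling message in a plain JSON envelope sent over the WebSocket. The event names (__offer, __answer, __ice_candidate) are the ones that demo uses; the helper functions here are only an illustrative sketch, not part of any WebRTC API:

```javascript
// Build a signaling envelope: an event name plus an arbitrary data payload.
function makeSignal(event, data) {
    return JSON.stringify({ "event": event, "data": data });
}

// The receiver parses the envelope and dispatches on the event field.
function signalEvent(raw) {
    return JSON.parse(raw).event;
}

// Example envelopes, shaped like the ones exchanged in the demo code below
// (the payload strings are placeholders, not real SDP or ICE data):
var offerMsg = makeSignal("__offer", { "sdp": "<session description here>" });
var candidateMsg = makeSignal("__ice_candidate", { "candidate": "<ICE candidate here>" });
```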

The exchange of this information must be completed before the point-to-point stream starts; a general architecture diagram is as follows:

Establishing a channel through a server

To repeat: even though WebRTC provides a point-to-point channel between browsers for data transmission, a server must still be involved in establishing that channel. WebRTC needs the server to support four functions:
1. User discovery and communication
2. Signaling transmission
3. NAT/firewall traversal
4. Relaying, as a fallback if point-to-point communication fails to establish

NAT/firewall traversal technology

A common problem in establishing a point-to-point channel is NAT traversal. NAT traversal techniques are required to establish a connection between hosts on private TCP/IP networks that sit behind NAT devices; this problem has long been encountered in the VoIP field. Many NAT traversal techniques already exist, but none is perfect, because NAT behavior is not standardized. Most of them rely on a public server with an IP address reachable from anywhere in the world. In RTCPeerConnection, the ICE framework is used to achieve NAT traversal.

ICE (Interactive Connectivity Establishment) is a comprehensive NAT traversal framework that integrates techniques such as STUN and TURN (Traversal Using Relays around NAT). ICE first uses STUN to try to establish a UDP-based connection; if that fails, it falls back to TCP (trying HTTP first, then HTTPS); and if that also fails, ICE uses a TURN relay server.

We can use Google's STUN server, stun:stun.l.google.com:19302, so an architecture that integrates the ICE framework looks like this.

Browser compatibility

Again the vendor prefixes differ, so use an approach similar to the one above:

```javascript
var PeerConnection = (window.PeerConnection ||
                      window.webkitPeerConnection00 ||
                      window.webkitRTCPeerConnection ||
                      window.mozRTCPeerConnection);
```
Create and use
```javascript
// Use Google's STUN server
var iceServer = {
    "iceServers": [{
        "url": "stun:stun.l.google.com:19302"
    }]
};

// Browser-compatible getUserMedia
var getUserMedia = (navigator.getUserMedia ||
                    navigator.webkitGetUserMedia ||
                    navigator.mozGetUserMedia ||
                    navigator.msGetUserMedia);

// Browser-compatible PeerConnection
var PeerConnection = (window.PeerConnection ||
                      window.webkitPeerConnection00 ||
                      window.webkitRTCPeerConnection ||
                      window.mozRTCPeerConnection);

// WebSocket connection to the signaling server
var socket = __createWebSocketChannel();

// Create a PeerConnection instance
var pc = new PeerConnection(iceServer);

// Send ICE candidates to the other client
pc.onicecandidate = function (event) {
    socket.send(JSON.stringify({
        "event": "__ice_candidate",
        "data": { "candidate": event.candidate }
    }));
};

// When a remote media stream arrives, bind it to a video element for output
pc.onaddstream = function (event) {
    someVideoElement.src = URL.createObjectURL(event.stream);
};

// Get the local media stream, bind it to a video element for output,
// and send this media stream to the other client
getUserMedia.call(navigator, { "audio": true, "video": true }, function (stream) {
    // Functions that send the local session description as an offer or answer
    var sendOfferFn = function (desc) {
            pc.setLocalDescription(desc);
            socket.send(JSON.stringify({
                "event": "__offer",
                "data": { "sdp": desc }
            }));
        },
        sendAnswerFn = function (desc) {
            pc.setLocalDescription(desc);
            socket.send(JSON.stringify({
                "event": "__answer",
                "data": { "sdp": desc }
            }));
        };

    // Bind the local media stream to a video element for output
    myselfVideoElement.src = URL.createObjectURL(stream);

    // Add the stream to the PeerConnection so it is sent
    pc.addStream(stream);

    // The caller sends an offer; otherwise send an answer
    if (isCaller) {
        pc.createOffer(sendOfferFn);
    } else {
        pc.createAnswer(sendAnswerFn);
    }
}, function (error) {
    // Handle media stream creation failure
});

// Handle incoming signaling
socket.onmessage = function (event) {
    var json = JSON.parse(event.data);
    // If it is an ICE candidate, add it to the PeerConnection;
    // otherwise set the passed session description as the remote description
    if (json.event === "__ice_candidate") {
        pc.addIceCandidate(new RTCIceCandidate(json.data.candidate));
    } else {
        pc.setRemoteDescription(new RTCSessionDescription(json.data.sdp));
    }
};
```
Example

Because signaling transmission is relatively complex and flexible, there is no short example for this part; you can go straight to the full demo at the end of the article.

Rtcdatachannel

Now that a point-to-point channel can be set up to deliver real-time video and audio streams, why not use it to pass other data too? The RTCDataChannel API does exactly that: on top of it, we can transfer arbitrary data between browsers. A DataChannel is built on a PeerConnection and cannot be used on its own.

Using Datachannel

We can use channel = pc.createDataChannel("someLabel") to create a data channel on a PeerConnection instance and give it a label.

DataChannel is used in almost the same way as WebSocket. It has several events:
* onopen
* onclose
* onmessage
* onerror

It also has several states, obtainable from readyState:
* connecting: the browser is trying to establish the channel
* open: established successfully; data can be sent with the send method
* closing: the browser is closing the channel
* closed: the channel has been closed

It exposes two methods:
* close(): closes the channel
* send(): sends data to the other side through the channel
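Because send() only works while readyState is open, a common pattern is to queue messages until the onopen event fires. The helper below is a sketch of that pattern, not part of the WebRTC API; it only assumes the channel exposes the onopen/send/readyState surface described above:

```javascript
// Queue sends until the channel reports "open"; flush the queue on open.
// "channel" is assumed to expose onopen, send(), and readyState as described above.
function makeSafeSender(channel) {
    var queue = [];
    channel.onopen = function () {
        while (queue.length > 0) {
            channel.send(queue.shift());
        }
    };
    return function (data) {
        if (channel.readyState === "open") {
            channel.send(data);       // channel is ready: send immediately
        } else {
            queue.push(data);         // not ready yet: hold until onopen fires
        }
    };
}
```

In a real page, channel would come from pc.createDataChannel(...) as shown above.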

General idea of sending files via DataChannel

JavaScript already provides the File API to extract files from input[type='file'] elements, and FileReader to convert a file into a data URL. This means we can split the data URL into multiple fragments and transfer a file over the channel.
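The splitting itself is plain string slicing. Here is a minimal sketch; the chunk size is an arbitrary choice (real code must stay below the channel's message size limit, and some metadata is needed so the receiver knows when a file is complete):

```javascript
// Split a data URL into fixed-size fragments for sending over a DataChannel.
// chunkSize is an arbitrary example value, not a WebRTC requirement.
function chunkDataURL(dataURL, chunkSize) {
    var chunks = [];
    for (var i = 0; i < dataURL.length; i += chunkSize) {
        chunks.push(dataURL.slice(i, i + chunkSize));
    }
    return chunks;
}

// The receiver reassembles the file by concatenating fragments in order.
function joinChunks(chunks) {
    return chunks.join("");
}
```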

A comprehensive demo

Skyrtc-demo is a demo I wrote. It sets up a video chat room and can broadcast files; of course, it also supports one-to-one file transfer. The code is still rough and will be improved over time.

How to use
    1. Download, unzip, and cd into the directory
    2. Run npm install to install the dependencies (express, ws, node-uuid)
    3. Run node server.js, visit localhost:3000, and allow camera access
    4. On another computer, open {server ip}:3000 in the browser (Chrome and Opera; not compatible with Firefox) and allow camera and microphone access
    5. Broadcast a file: select a file in the lower-left corner and click the "Send File" button
    6. Broadcast a message: enter a message in the lower-left input box and click Send
    7. Errors may occur; check the F12 console. Usually F5 (refresh) resolves them
Features

Video and audio chat (a camera and microphone must be connected, or at minimum a webcam), file broadcasting (files can also be sent individually, and an API is provided; broadcasting is implemented on top of individual transfers, so several can run at once; small files are fine, but large files will eat up memory), and broadcasting chat messages.

Resources
    • WebRTC official website
    • W3C - getUserMedia
    • W3C - WebRTC
    • Capturing Audio & Video in HTML5 (HTML5 Rocks)
    • Getting Started with WebRTC (HTML5 Rocks)
    • Caniuse
    • ICE (Interactive Connectivity Establishment)
    • WebRTC
    • WebSocket
    • Node.js
    • JavaScript
    • WebIM
