Original address: http://segmentfault.com/a/1190000000436544

What is WebRTC?

As is well known, browsers cannot establish a communication channel with each other directly; they must relay through a server. For example, suppose two clients A and B want to communicate. First A and the server, and B and the server, each establish a channel. When A sends a message to B, A first sends it to the server, which relays it on to B; the reverse direction works the same way. A message between A and B thus passes through two channels, and the efficiency of communication is limited by the bandwidth of both. Channels like this are also poorly suited to transmitting streaming data. How to establish peer-to-peer transmission between browsers had long troubled developers, and WebRTC was born to solve this.

WebRTC is an open source project that aims to let browsers provide a simple JavaScript interface for real-time communication (RTC). Put simply, it lets the browser expose a JavaScript API for instant communication. The channel created through this API is not the same as WebSocket, which connects a browser to a WebSocket server; instead, through a series of signaling steps, it establishes a browser-to-browser (peer-to-peer) channel that can carry arbitrary data without going through a server. WebRTC also implements MediaStream, through which the browser can access the device's camera and microphone, so it can transmit audio and video.
WebRTC is already in our browsers.

Such a useful capability naturally did not escape the major browser vendors. WebRTC can already be used in newer versions of Chrome, Opera and Firefox, and the well-known browser compatibility site Caniuse provides detailed compatibility tables.

In addition, according to an earlier report on 36Kr, Google has released an Android version of Chrome that supports WebRTC and Web Audio, and the Android version of Opera has also begun to support WebRTC, allowing users to make voice and video calls without any plugins. So Android, too, now supports WebRTC.
Three APIs

WebRTC implements three APIs:

* MediaStream: through the MediaStream API you can obtain synchronized video and audio streams from the device's camera and microphone
* RTCPeerConnection: RTCPeerConnection is the component WebRTC uses to build a stable, efficient stream between peers
* RTCDataChannel: RTCDataChannel establishes a high-throughput, low-latency channel between browsers (peer-to-peer) for transmitting arbitrary data

These three APIs are introduced in general below.
MediaStream (getUserMedia)

The MediaStream API gives WebRTC the ability to obtain video and audio stream data from the device's camera and microphone.
Standard
Standard Portal
How to Invoke
It is invoked by calling navigator.getUserMedia(), which accepts three parameters:

1. A constraints object (described separately below)
2. A success callback, which is passed a stream object if the call succeeds
3. A failure callback, which is passed an Error object if the call fails
Browser compatibility
Because browser implementations differ, vendors often prefix the method before implementing the standard version, so a compatible version looks like this:

```javascript
var getUserMedia = (navigator.getUserMedia ||
                    navigator.webkitGetUserMedia ||
                    navigator.mozGetUserMedia ||
                    navigator.msGetUserMedia);
```
A super-simple example
Here is a very simple example showing the effect of getUserMedia:
```html
<!DOCTYPE html>
<html lang="zh-CN">
<head>
    <meta charset="UTF-8">
    <title>getUserMedia example</title>
</head>
<body>
    <video id="video" autoplay></video>
    <script type="text/javascript">
        var getUserMedia = (navigator.getUserMedia ||
                            navigator.webkitGetUserMedia ||
                            navigator.mozGetUserMedia ||
                            navigator.msGetUserMedia);
        getUserMedia.call(navigator, {video: true, audio: true}, function(localMediaStream) {
            var video = document.getElementById('video');
            video.src = window.URL.createObjectURL(localMediaStream);
            video.onloadedmetadata = function(e) {
                console.log("Label: " + localMediaStream.label);
                console.log("AudioTracks", localMediaStream.getAudioTracks());
                console.log("VideoTracks", localMediaStream.getVideoTracks());
            };
        }, function(e) {
            console.log('Rejected!', e);
        });
    </script>
</body>
</html>
```
Save this content to an HTML file and put it on a server. Open it with a newer version of Opera, Firefox or Chrome; the browser will pop up a dialog asking whether to allow access to the camera and microphone. Choose allow, and the camera's picture will appear on the screen.

Note that the HTML file must be served from a server, otherwise you will get a NavigatorUserMediaError showing PermissionDeniedError. The simplest approach is to cd into the directory containing the HTML file and run python -m SimpleHTTPServer (assuming Python is installed), then open http://localhost:8000/{filename}.html in the browser.
After obtaining a stream with getUserMedia, it needs to be output, typically by binding it to a video element. To do this, use window.URL.createObjectURL(localMediaStream) to create a blob URL that can be placed in the video element's src attribute for playback. Note that the video element must have the autoplay attribute, otherwise only a single frame is captured.

Once the stream has been created, its unique identifier can be obtained through the label property, and the getAudioTracks() and getVideoTracks() methods return arrays of the stream's track objects (if a kind of stream is not enabled, its track array will be empty).
Constraint Object (Constraints)

The constraints object can be set in getUserMedia() and in RTCPeerConnection's addStream method. WebRTC uses this object to specify what kind of stream to accept. The following properties can be defined:
* video: whether to accept a video stream
* audio: whether to accept an audio stream
* minWidth: minimum width of the video stream
* maxWidth: maximum width of the video stream
* minHeight: minimum height of the video stream
* maxHeight: maximum height of the video stream
* minAspectRatio: minimum aspect ratio of the video stream
* maxAspectRatio: maximum aspect ratio of the video stream
* minFrameRate: minimum frame rate of the video stream
* maxFrameRate: maximum frame rate of the video stream
For details, see Resolution Constraints in Web Real Time Communications, draft-alvestrand-constraints-resolution-00
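As an illustration of the properties listed above, a constraints object might look like this. The exact structure (for example, wrapping the size and frame-rate limits in a "mandatory" block) varied between draft versions and browser implementations, so treat this as a sketch rather than a definitive form:

```javascript
// A sketch of a constraints object using the properties described above.
// The "mandatory" wrapper and the concrete numbers are illustrative only;
// the exact shape differed between spec drafts and browsers.
var constraints = {
    video: {
        mandatory: {
            minWidth: 320,
            maxWidth: 1280,
            minFrameRate: 15,
            maxFrameRate: 30
        }
    },
    audio: true
};
```

Such an object would then be passed as the first argument to getUserMedia().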
RTCPeerConnection

WebRTC uses RTCPeerConnection to pass stream data between browsers. These channels are peer-to-peer and do not need to be relayed by a server. But this does not mean we can abandon the server: we still need it to pass the signaling that builds the channel. WebRTC does not define a protocol for the signaling used to establish a channel; signaling is not part of the RTCPeerConnection API.
Signaling
Since no protocol defines the specific signaling, we can choose any transport (AJAX, WebSocket) and any protocol (SIP, XMPP) to pass the signaling and establish the channel. For example, in the demo I wrote, signaling is passed over WebSocket using the Node ws module.

Signaling needs to exchange three types of information:

* Session information: used to initialize communication and report errors
* Network configuration: such as IP addresses and ports
* Media capabilities: which codecs and resolutions the sender's and receiver's browsers can accept

The exchange of this information should be completed before the peer-to-peer stream starts; a rough architecture diagram is as follows:
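In practice, signaling messages are often encoded as small JSON envelopes sent over whatever transport you chose. The helpers below are a minimal sketch of that idea; the envelope shape and the event names (such as "__offer" and "__ice_candidate", matching the convention used in the code later in this article) are just a convention, not part of any WebRTC standard:

```javascript
// Build a signaling message: a JSON envelope with an event name and a payload.
// The {event, data} shape is an illustrative convention, not a standard.
function makeSignal(event, data) {
    return JSON.stringify({ event: event, data: data });
}

// Parse a received signaling message back into an object.
function parseSignal(raw) {
    return JSON.parse(raw);
}

// For example, wrapping a session description before sending it over WebSocket:
var offerMsg = makeSignal("__offer", { sdp: { type: "offer", sdp: "v=0..." } });
var parsed = parseSignal(offerMsg);
```

The receiver inspects parsed.event to decide how to handle the payload.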
Establishing a channel through a server
To emphasize again: even though WebRTC provides a peer-to-peer channel between browsers for data transmission, a server still has to be involved in establishing that channel. WebRTC needs the server to support four kinds of functionality:
1. User discovery and communication
2. Signaling transmission
3. NAT/firewall traversal
4. Acting as a relay server if peer-to-peer communication cannot be established
NAT/Firewall Traversal Technology

A common problem in establishing a peer-to-peer channel is NAT traversal. NAT traversal techniques are required to establish a connection between hosts on private TCP/IP networks that sit behind NAT devices. This problem was frequently encountered in the VoIP field in the past. Many NAT traversal techniques exist, but none is perfect, because NAT behavior is not standardized. Most of these techniques rely on a public server whose IP address is reachable from anywhere in the world. RTCPeerConnection uses the ICE framework to achieve NAT traversal.

ICE (Interactive Connectivity Establishment) is a comprehensive NAT traversal framework that integrates various techniques such as STUN and TURN (Traversal Using Relays around NAT). ICE first uses STUN to try to establish a UDP-based connection; if that fails, it falls back to TCP (trying HTTP first, then HTTPS); and if that also fails, ICE uses a TURN relay server.

We can use Google's STUN server: stun:stun.l.google.com:19302
So an architecture that integrates the ICE framework looks roughly like this:
Browser compatibility

The prefixes differ here too, handled the same way as above:

```javascript
var PeerConnection = (window.PeerConnection ||
                      window.webkitPeerConnection00 ||
                      window.webkitRTCPeerConnection ||
                      window.mozRTCPeerConnection);
```
Create and use

```javascript
// Use Google's STUN server
var iceServer = {
    "iceServers": [{
        "url": "stun:stun.l.google.com:19302"
    }]
};

// Browser-compatible getUserMedia
var getUserMedia = (navigator.getUserMedia ||
                    navigator.webkitGetUserMedia ||
                    navigator.mozGetUserMedia ||
                    navigator.msGetUserMedia);

// Browser-compatible PeerConnection
var PeerConnection = (window.PeerConnection ||
                      window.webkitPeerConnection00 ||
                      window.webkitRTCPeerConnection ||
                      window.mozRTCPeerConnection);

// WebSocket connection to the signaling server
var socket = __createWebSocketChannel();

// Create a PeerConnection instance
var pc = new PeerConnection(iceServer);

// Send ICE candidates to the other client
pc.onicecandidate = function(event) {
    socket.send(JSON.stringify({
        "event": "__ice_candidate",
        "data": {
            "candidate": event.candidate
        }
    }));
};

// When the remote media stream arrives, bind it to a video element for output
pc.onaddstream = function(event) {
    someVideoElement.src = URL.createObjectURL(event.stream);
};

// Get the local media stream, bind it to a video element for output,
// and send it to the other client
getUserMedia.call(navigator, {"audio": true, "video": true}, function(stream) {
    // Functions that send the local session description as an offer or answer
    var sendOfferFn = function(desc) {
            pc.setLocalDescription(desc);
            socket.send(JSON.stringify({
                "event": "__offer",
                "data": {
                    "sdp": desc
                }
            }));
        },
        sendAnswerFn = function(desc) {
            pc.setLocalDescription(desc);
            socket.send(JSON.stringify({
                "event": "__answer",
                "data": {
                    "sdp": desc
                }
            }));
        };

    // Bind the local media stream to a video element for output
    myselfVideoElement.src = URL.createObjectURL(stream);

    // Add the stream to be sent to the PeerConnection
    pc.addStream(stream);

    // The caller sends an offer, the callee sends an answer
    if (isCaller) {
        pc.createOffer(sendOfferFn);
    } else {
        pc.createAnswer(sendAnswerFn);
    }
}, function(error) {
    // Handle media stream creation failure
});

// Handle incoming signaling
socket.onmessage = function(event) {
    var json = JSON.parse(event.data);
    // If it is an ICE candidate, add it to the PeerConnection;
    // otherwise set the remote session description
    if (json.event === "__ice_candidate") {
        pc.addIceCandidate(new RTCIceCandidate(json.data.candidate));
    } else {
        pc.setRemoteDescription(new RTCSessionDescription(json.data.sdp));
    }
};
```
Example

Because signaling transmission is relatively complex and flexible, there is no short example here; you can go straight to the complete demo at the end.
RTCDataChannel

Now that a peer-to-peer channel can be set up to deliver real-time video and audio, why not use it to transfer other data too? The RTCDataChannel API does exactly that: with it we can transfer arbitrary data between browsers. A DataChannel is built on top of a PeerConnection and cannot be used on its own.
Using DataChannel

We can create a data channel on a PeerConnection instance and give it a label:

```javascript
var channel = pc.createDataChannel("someLabel");
```
DataChannel is used in almost the same way as WebSocket. It has several events:

* onopen
* onclose
* onmessage
* onerror
It also has a readyState property with several possible values:

* connecting: the browser is trying to establish the channel
* open: established successfully; the send method can be used to send data
* closing: the browser is closing the channel
* closed: the channel has been closed
It exposes two methods:

* close(): closes the channel
* send(): sends data to the other side through the channel
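The events and methods above can be wired together as in the minimal sketch below. The setup function and the logging callback are hypothetical names for illustration; the channel argument is assumed to be whatever pc.createDataChannel returns, with the WebSocket-like interface just described:

```javascript
// Hypothetical helper: attach handlers to a DataChannel-like object.
// It assumes the channel has the WebSocket-like interface described above:
// onopen / onmessage / onerror / onclose events, a readyState property,
// and send() / close() methods.
function setupChannel(channel, log) {
    channel.onopen = function() {
        // readyState should be "open" once the channel is established
        log("channel open, readyState: " + channel.readyState);
        channel.send("hello");
    };
    channel.onmessage = function(event) {
        log("received: " + event.data);
    };
    channel.onerror = function(e) {
        log("error");
    };
    channel.onclose = function() {
        log("channel closed");
    };
}
```

In a real page you would call setupChannel(pc.createDataChannel("someLabel"), console.log) after the PeerConnection is created.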
General idea of sending files via the data channel

JavaScript provides the File API to extract files from input[type='file'] elements, and FileReader can convert a file to a data URL. This also means we can split the data URL into multiple fragments and transfer the file through the channel.
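The chunking step itself is just string slicing, since a data URL is a plain string. A minimal sketch (the function names and the chunk size are illustrative, not from the demo):

```javascript
// Split a data URL (or any string) into fixed-size fragments so each
// fragment fits comfortably in a single DataChannel message.
function chunkDataURL(dataURL, chunkSize) {
    var chunks = [];
    for (var i = 0; i < dataURL.length; i += chunkSize) {
        chunks.push(dataURL.slice(i, i + chunkSize));
    }
    return chunks;
}

// The receiver simply concatenates the fragments back together
// (assuming they arrive in order, which an in-order channel provides).
function joinChunks(chunks) {
    return chunks.join("");
}
```

Each fragment is then sent with channel.send(), typically together with some bookkeeping (file name, fragment index, total count) so the receiver knows when the file is complete.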
A comprehensive demo
Skyrtc-demo is a demo I wrote. It sets up a video chat room and can broadcast files; it also supports one-to-one file transfer. The code is still quite rough and will be improved over time.
How to use
- Download, unzip, and cd into the directory
- Run npm install to install the dependencies (express, ws, node-uuid)
- Run node server.js, visit localhost:3000, and allow camera access
- On another computer, open {server IP}:3000 in the browser (Chrome and Opera; not yet compatible with Firefox) and allow camera and microphone access
- Broadcast a file: select a file in the lower left corner and click the "Send File" button
- Broadcast a message: enter text in the lower left input box and click Send
- Errors may occur; check the F12 console. A refresh (F5) usually solves it
Features

Video and audio chat (requires a camera and microphone, or at least a camera), broadcasting files (single-peer transfer is also supported and an API is provided; broadcasting is implemented on top of single-peer transfer, so multiple transfers can run at once; small files are fine, but large files will eat up memory), and broadcasting chat messages.
Resources

- WebRTC official website
- W3C - getUserMedia
- W3C - WebRTC
- Capturing Audio & Video in HTML5
- Getting Started with WebRTC
- Caniuse
- ICE (Interactive Connectivity Establishment)

Using WebRTC to build a front-end video chat room: introductory article