Original: http://www.cnblogs.com/jscode/p/3601648.html?comefrom=http://blogread.cn/news/
1. Overview
WebRTC stands for "Web Real-Time Communication". It is used primarily to let the browser capture and exchange video, audio, and data in real time.
WebRTC consists of three APIs:
- MediaStream (also known as getUserMedia)
- RTCPeerConnection
- RTCDataChannel
getUserMedia is mainly used to obtain video and audio streams; the latter two APIs handle data exchange between browsers.
2. getUserMedia

2.1 Introduction

First, check whether the browser supports the getUserMedia method:

navigator.getUserMedia || (navigator.getUserMedia =
    navigator.mozGetUserMedia ||
    navigator.webkitGetUserMedia ||
    navigator.msGetUserMedia);

if (navigator.getUserMedia) {
    // do something
} else {
    console.log('Your browser does not support getUserMedia');
}
Chrome 21, Opera 18, and Firefox 17 support this method; IE currently does not. The msGetUserMedia line above is only there for future compatibility.
The getUserMedia method accepts three parameters:

getUserMedia(streams, success, error);

Their meanings are as follows:
- streams: an object specifying which media devices to request
- success: callback invoked when the media devices are acquired successfully
- error: callback invoked when acquiring the media devices fails
It is used as follows:

navigator.getUserMedia({ video: true, audio: true }, onSuccess, onError);

The code above requests real-time streams from the camera and the microphone.
If a web page calls getUserMedia, the browser asks the user whether to grant access. If the user refuses, the onError callback is invoked.
When an error occurs, the callback receives an error object whose code property takes one of the following values:
- PERMISSION_DENIED: the user refused to provide access.
- NOT_SUPPORTED_ERROR: the browser does not support the specified media type.
- MANDATORY_UNSATISFIED_ERROR: no media stream could satisfy the specified constraints.
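The error branch can be sketched as a small helper that maps these code values to readable messages. The helper name and the message strings are illustrative, not part of any API:

```javascript
// Hypothetical helper: map a getUserMedia error code to a readable message.
function describeGetUserMediaError(code) {
  switch (code) {
    case 'PERMISSION_DENIED':
      return 'The user refused to provide access.';
    case 'NOT_SUPPORTED_ERROR':
      return 'The browser does not support the specified media type.';
    case 'MANDATORY_UNSATISFIED_ERROR':
      return 'No media stream satisfied the specified constraints.';
    default:
      return 'Unknown error: ' + code;
  }
}

// Browser wiring (skipped outside a browser environment):
if (typeof navigator !== 'undefined' && navigator.getUserMedia) {
  navigator.getUserMedia(
    { video: true, audio: true },
    function (stream) { /* display the stream */ },
    function (error) { console.log(describeGetUserMediaError(error.code)); }
  );
}
```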
2.2 Displaying camera images
To display the image captured by the user's webcam on a web page, first place a video element on the page; the captured image is displayed inside it.

<video id="webcam"></video>
Then, get this element in code:

function onSuccess(stream) {
    var video = document.getElementById('webcam');
    // more code
}
Finally, bind the element's src attribute to the media stream, and the image captured by the camera is displayed:

function onSuccess(stream) {
    var video = document.getElementById('webcam');
    if (window.URL) {
        video.src = window.URL.createObjectURL(stream);
    } else {
        video.src = stream;
    }
    video.autoplay = true; // or video.play();
}
A typical use is to let users take a picture of themselves with the camera.
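One way to take such a picture is to copy the current video frame onto a canvas. This is a minimal sketch assuming a canvas element with id "photo" exists next to the video element; the element ids, button id, and function name are illustrative:

```javascript
// Sketch: copy the current frame of a <video> element onto a <canvas>
// and return it as a base64-encoded PNG data URL.
function takeSnapshot(video, canvas) {
  canvas.width = video.videoWidth || 640;   // fall back to a default size
  canvas.height = video.videoHeight || 480;
  canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
  return canvas.toDataURL('image/png');
}

// Browser wiring (skipped outside a browser environment):
if (typeof document !== 'undefined') {
  var video = document.getElementById('webcam');
  var canvas = document.getElementById('photo');
  // e.g. trigger on a (hypothetical) shutter button:
  // document.getElementById('shutter').onclick = function () {
  //   var dataUrl = takeSnapshot(video, canvas);
  // };
}
```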
2.3 Capturing Microphone Sound
Capturing sound through the browser is relatively complex and requires the Web Audio API.
function onSuccess(stream) {
    // Create an audio context
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    context = new AudioContext();

    // Feed the stream into the audio context
    audioInput = context.createMediaStreamSource(stream);

    // Set up a volume (gain) node
    volume = context.createGain();
    audioInput.connect(volume);

    // Create a buffer for the sound
    var bufferSize = 2048;

    // Create a script node for the sound; the second and third arguments of
    // createJavaScriptNode mean two input channels and two output channels.
    recorder = context.createJavaScriptNode(bufferSize, 2, 2);

    // The recording callback: copy the left and right channels
    // into their respective caches.
    recorder.onaudioprocess = function (e) {
        console.log('recording');
        var left = e.inputBuffer.getChannelData(0);
        var right = e.inputBuffer.getChannelData(1);
        // clone the samples
        leftChannel.push(new Float32Array(left));
        rightChannel.push(new Float32Array(right));
        recordingLength += bufferSize;
    };

    // Connect the volume node to the recorder node; in other words, the
    // volume node is the intermediate link between input and output.
    volume.connect(recorder);

    // Connect the recorder node to the output destination,
    // whether that is the speakers or an audio file.
    recorder.connect(context.destination);
}
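After recording stops, the chunks pushed into leftChannel and rightChannel are usually merged into one continuous buffer before being encoded (for example to WAV). A minimal sketch of that merging step, with an illustrative function name:

```javascript
// Merge an array of Float32Array chunks into a single Float32Array.
// totalLength is the sum of all chunk lengths (recordingLength above).
function flattenChannel(chunks, totalLength) {
  var result = new Float32Array(totalLength);
  var offset = 0;
  for (var i = 0; i < chunks.length; i++) {
    result.set(chunks[i], offset);
    offset += chunks[i].length;
  }
  return result;
}
```

Calling flattenChannel(leftChannel, recordingLength) would then yield the full left-channel recording as one buffer.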
3. Real-time data exchange
WebRTC's other two APIs are RTCPeerConnection, used for point-to-point connections between browsers, and RTCDataChannel, used for point-to-point data communication.
RTCPeerConnection carries a browser prefix: it is webkitRTCPeerConnection in Chrome and mozRTCPeerConnection in Firefox. Google maintains a library, adapter.js, that abstracts away the differences between browsers.
var dataChannelOptions = {
    ordered: false,           // do not guarantee order
    maxRetransmitTime: 3000   // in milliseconds
};

var peerConnection = new RTCPeerConnection();

// Establish your peer connection using your signaling channel here
var dataChannel =
    peerConnection.createDataChannel('myLabel', dataChannelOptions);

dataChannel.onerror = function (error) {
    console.log('Data Channel Error:', error);
};

dataChannel.onmessage = function (event) {
    console.log('Got Data Channel Message:', event.data);
};

dataChannel.onopen = function () {
    dataChannel.send('Hello World!');
};

dataChannel.onclose = function () {
    console.log('The Data Channel is Closed');
};
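The "signaling channel" mentioned in the comment above is left entirely to the application: WebRTC does not specify how offers, answers, and ICE candidates are exchanged (a WebSocket is a common choice). Below is a minimal sketch of routing incoming signaling messages; the message shape ({ sdp } or { candidate }) and the function names are assumptions, not part of any standard:

```javascript
// Route a message received from the (application-defined) signaling channel
// to the appropriate RTCPeerConnection call. Returns the message kind.
function routeSignal(pc, msg) {
  if (msg.sdp) {
    pc.setRemoteDescription(msg);  // a remote offer or answer
    return 'description';
  }
  if (msg.candidate) {
    pc.addIceCandidate(msg);       // a remote ICE candidate
    return 'candidate';
  }
  return 'ignored';
}

// Browser wiring (skipped outside a browser environment):
if (typeof RTCPeerConnection !== 'undefined') {
  var pc = new RTCPeerConnection();
  pc.onicecandidate = function (e) {
    // if (e.candidate) sendSignal({ candidate: e.candidate }); // hypothetical transport
  };
  // Messages arriving on the signaling channel would be passed to:
  // routeSignal(pc, message);
}
```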
4. Reference links
[1] Andi Smith, Get Started with WebRTC
[2] Thibault Imbert, From microphone to .WAV with getUserMedia and Web Audio
[3] Ian Devlin, Using the getUserMedia API with the HTML5 video and canvas elements
[4] Eric Bidelman, Capturing Audio & Video in HTML5
[5] Sam Dutton, Getting Started with WebRTC
[6] Dan Ristic, WebRTC data channels
[7] ruanyf, WebRTC