Introduction to WebRTC on Android


    • Original link: Introduction to WebRTC on Android
    • Original author: Dag-Inge Aas
    • Originally published at: appear.in
    • Translator: Dorisminmin
    • Status: Complete

WebRTC is hailed as a new milestone in the long history of open web development, and one of the most important innovations in web development in recent years. WebRTC allows web developers to add video chat or peer-to-peer data transfer to their web applications without complicated code or expensive configuration. It is currently supported by Chrome, Firefox, and Opera, with more browsers to follow, giving it the ability to reach billions of devices.

However, WebRTC is often misunderstood as being suitable only for browsers. In fact, one of its most important features is interoperability between native and web applications, and few people take advantage of it.

This article explores how to build WebRTC into your Android app, using the native libraries provided by the WebRTC Initiative. It does not cover how to establish calls through a signaling mechanism, but instead focuses on the differences and similarities between the Android and browser implementations, and explains the Android interfaces behind some common WebRTC features. If you want to learn the basics of WebRTC, Sam Dutton's Getting Started with WebRTC is highly recommended.

Adding WebRTC to your project

The following explanations are based on version 9127 of the Android WebRTC library.

The first thing to do is add the WebRTC library to your app. The WebRTC Initiative provides a way to compile it yourself, but try to avoid that approach. Instead, use the prebuilt library from pristine.io, which can be obtained from the Maven Central repository.

To add WebRTC to your project, add the following to your dependencies:

compile 'io.pristine:libjingle:9127@aar'

Once the project is synchronized, the WebRTC library is ready to use.

Permissions

As with other Android apps, using certain APIs requires the appropriate permissions, and WebRTC is no exception. The set of permissions you need depends on the application you are building and the features you use, such as audio or video. Make sure you only request what you need! A good permission set for a video chat app looks like this:

<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<uses-feature android:glEsVersion="0x00020000" android:required="true" />

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />

Lights, camera... factories

When using WebRTC in a browser, you have some well-specified and detailed APIs at your disposal. navigator.getUserMedia and RTCPeerConnection cover almost everything you might need. Combined with the <video> tag, you can display any local or remote video stream you want.

Fortunately, the same APIs exist on Android, although they go by different names. The Android equivalents are VideoCapturerAndroid, VideoRenderer, MediaStream, PeerConnection, and PeerConnectionFactory. We will walk through each of them.

Before you can do anything, you need to create a PeerConnectionFactory, the most central API for using WebRTC on Android.

PeerConnectionFactory

This is the core class of WebRTC on Android. Understanding this class, and how it creates everything else, is key to an in-depth understanding of WebRTC on Android. It works differently from what you might expect, so it is worth digging into.

First you need to initialize the PeerConnectionFactory, as follows:

// First, we initiate the PeerConnectionFactory with
// our application context and some options.
PeerConnectionFactory.initializeAndroidGlobals(
    context,
    initializeAudio,
    initializeVideo,
    videoCodecHwAcceleration,
    renderEGLContext);

To understand this method, you need to know what each parameter means:

context

The application context, passed in the same way as anywhere else.

initializeAudio

A boolean value for whether to initialize the audio parts.

initializeVideo

A boolean value for whether to initialize the video parts. Skipping these two lets you skip requesting the corresponding API permissions, for example in applications that only use data channels.

videoCodecHwAcceleration

A boolean value for whether hardware acceleration is allowed.

renderEGLContext

Used to provide support for hardware video decoding: it makes it possible to create a shared EGL context on the video-decoding thread. It can be null, in which case hardware video decoding will produce YUV420 frames instead of texture frames.

initializeAndroidGlobals also returns a boolean value: true means everything is OK, and false means a failure. Aborting when false is returned is the best practice. Refer to the source code for more information.
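
For illustration, a minimal sketch of initializing the globals and handling failure, assuming a typical audio/video app initialized from an Activity (the argument values here are assumptions):

boolean ok = PeerConnectionFactory.initializeAndroidGlobals(
    getApplicationContext(), // context
    true,                    // initializeAudio
    true,                    // initializeVideo
    true,                    // videoCodecHwAcceleration
    null);                   // renderEGLContext: null means YUV420 frames
if (!ok) {
    // Best practice is to abort here; see the library source for details.
    throw new RuntimeException("Failed to initialize WebRTC Android globals");
}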

If everything is OK, you can create your factory with the PeerConnectionFactory constructor, just like any other class.

PeerConnectionFactory peerConnectionFactory = new PeerConnectionFactory();
Action: getting media streams and rendering them

With a PeerConnectionFactory instance, you can get video and audio from the user's device and finally render it to the screen. On the web, getUserMedia and the <video> tag are all you need. On Android it is not quite that simple, but you get more options! On Android we need to deal with VideoCapturerAndroid, VideoSource, VideoTrack, and VideoRenderer. We start with VideoCapturerAndroid.

VideoCapturerAndroid

VideoCapturerAndroid is really a wrapper around a set of camera APIs, providing convenient access to the stream information of the camera devices. It lets you enumerate the camera devices and obtain a front-facing or rear-facing camera.

// Returns the number of camera devices
VideoCapturerAndroid.getDeviceCount();

// Returns the front-facing device name
VideoCapturerAndroid.getNameOfFrontFacingDevice();
// Returns the back-facing device name
VideoCapturerAndroid.getNameOfBackFacingDevice();

// Creates a VideoCapturerAndroid instance for the device name
VideoCapturerAndroid.create(name);

With a VideoCapturerAndroid instance holding the video stream information, you can create a MediaStream that contains the stream from the local device and send it to the other side. But before we do that, let's first look at how to display our own video in the app.

VideoSource/VideoTrack

To get anything useful out of a VideoCapturer instance, whether the end goal is getting the right media stream for the connection or simply rendering it to the user, we need to look at the VideoSource and VideoTrack classes.

VideoSource provides methods to start and stop capturing video from the device. This is useful for disabling video capture to prolong battery life.
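
As a sketch, pausing capture while the app is in the background might look like this (stop() and restart() are the VideoSource method names in this generation of the library; verify against your version):

// Pause capture to save battery, e.g. in onPause().
videoSource.stop();

// Resume capture, e.g. in onResume().
videoSource.restart();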

VideoTrack is simply a wrapper for adding a VideoSource to a MediaStream object.

Let's look at some code to see how they work together. capturer is an instance of VideoCapturer, and videoConstraints is an instance of MediaConstraints.

// Create a VideoSource
VideoSource videoSource =
    peerConnectionFactory.createVideoSource(capturer, videoConstraints);

// Create a VideoTrack
VideoTrack localVideoTrack =
    peerConnectionFactory.createVideoTrack(VIDEO_TRACK_ID, videoSource);

AudioSource/AudioTrack

AudioSource and AudioTrack are similar to VideoSource and VideoTrack, except that no AudioCapturer is required to get the microphone. audioConstraints is an instance of MediaConstraints.

// Create an AudioSource
AudioSource audioSource =
    peerConnectionFactory.createAudioSource(audioConstraints);

// Create an AudioTrack
AudioTrack localAudioTrack =
    peerConnectionFactory.createAudioTrack(AUDIO_TRACK_ID, audioSource);

VideoRenderer

Having used WebRTC in the browser, you are certainly familiar with using the <video> tag to display the MediaStream obtained from getUserMedia. But in native Android there is no <video> tag. Enter the VideoRenderer: the WebRTC library lets you implement your own rendering via VideoRenderer.Callbacks. It also provides a very convenient default, VideoRendererGui. In short, VideoRendererGui is a GLSurfaceView that you can use to draw your own video streams. Let's look at the code to see how it works, and how to add a renderer to a VideoTrack.

// To create our VideoRenderer, we can use the
// included VideoRendererGui for simplicity.
// First we need to set the GLSurfaceView that it should render to.
GLSurfaceView videoView = (GLSurfaceView) findViewById(R.id.glview_call);

// Then we set that view, and pass a Runnable
// to run once the surface is ready.
VideoRendererGui.setView(videoView, runnable);

// Now that VideoRendererGui is ready, we can get our VideoRenderer.
VideoRenderer renderer = VideoRendererGui.createGui(x, y, width, height);

// And finally, with our VideoRenderer ready, we
// can add the renderer to our VideoTrack.
localVideoTrack.addRenderer(renderer);

One thing to note here is that createGui takes four parameters. This was done so that a single GLSurfaceView could render all the video. In practice, however, we used multiple GLSurfaceViews, which means that x and y were always 0 in order to render properly. It at least gives a sense of what each parameter means in an actual implementation.

MediaConstraints

MediaConstraints is the WebRTC library's way of supporting the different constraints that can be loaded onto the audio and video tracks of a MediaStream. See the specification for the list of supported constraints. For most methods that require a MediaConstraints, a simple instance will do.

MediaConstraints constraints = new MediaConstraints();

To add an actual constraint, you define KeyValuePairs and push them onto the constraint's mandatory or optional list.
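
Extending the empty instance created above, a minimal sketch (the constraint keys shown are commonly used ones from this era of the library, assumed here for illustration):

// Push a key/value pair onto the mandatory list...
constraints.mandatory.add(
    new MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"));
// ...or onto the optional list.
constraints.optional.add(
    new MediaConstraints.KeyValuePair("DtlsSrtpKeyAgreement", "true"));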

MediaStream

Now you can see yourself locally; the next step is making the other party see you. From web development, MediaStream is already familiar: getUserMedia returns a MediaStream directly, which you then add to an RTCPeerConnection to send to the other party. This works the same way on Android, except that we need to create the MediaStream ourselves. Next we will look at how to add the local VideoTrack and AudioTrack to create a proper MediaStream.

// We start out with an empty MediaStream object,
// created with the help of our PeerConnectionFactory.
MediaStream mediaStream =
    peerConnectionFactory.createLocalMediaStream(LOCAL_MEDIA_STREAM_ID);

// Now we can add our tracks.
mediaStream.addTrack(localVideoTrack);
mediaStream.addTrack(localAudioTrack);

Hi, is anyone there?

We now have a MediaStream instance containing a video stream and an audio stream, and our beautiful face is showing on the screen. Time to send all of this to the other party. This article will not describe how to set up your own signaling channel; instead we go straight to the corresponding API methods and how they relate to their web counterparts. AppRTC uses autobahn to connect a WebSocket to the signaling server. I suggest downloading that project and studying how to set up your own signaling on Android.

PeerConnection

Now that we have our own MediaStream, we can start connecting to the remote side. Fortunately this part is very similar to how it works on the web, so if you are familiar with WebRTC in the browser, this part will be quite simple. Creating a PeerConnection is easy and requires only the help of the PeerConnectionFactory.

PeerConnection peerConnection = peerConnectionFactory.createPeerConnection(
    iceServers,
    constraints,
    observer);

The parameters work as follows:

iceServers

This parameter is required when connecting to an external device or network. Adding STUN and TURN servers here allows connections to be made even under poor network conditions.

constraints

An instance of MediaConstraints that should contain offerToReceiveAudio and offerToReceiveVideo.

observer

An instance of your PeerConnectionObserver implementation.
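
Putting the three parameters together, a hedged sketch might look like this (the Google STUN server URI is a common public example rather than a requirement, and observer stands for your own PeerConnectionObserver implementation):

// A public STUN server; add TURN servers for restrictive networks.
List<PeerConnection.IceServer> iceServers =
    new LinkedList<PeerConnection.IceServer>();
iceServers.add(new PeerConnection.IceServer("stun:stun.l.google.com:19302"));

// Declare that we want to receive audio and video from the other side.
MediaConstraints constraints = new MediaConstraints();
constraints.mandatory.add(
    new MediaConstraints.KeyValuePair("OfferToReceiveAudio", "true"));
constraints.mandatory.add(
    new MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"));

PeerConnection peerConnection = peerConnectionFactory.createPeerConnection(
    iceServers, constraints, observer);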

PeerConnection is very similar to the corresponding API on the web, with methods such as addStream, addIceCandidate, createOffer, createAnswer, getLocalDescription, and setRemoteDescription. Download WebRTC Getting Started to learn how to coordinate all of this and establish a communication channel between two points, or study AppRTC to see how a feature-complete real-time Android WebRTC app works. Let's take a quick look at the most important of these methods and how they work.

addStream

This is used to add a MediaStream to the PeerConnection, just as the name says. If you want the other party to see your video and hear your audio, this is the method you need.
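
For example, with the mediaStream we created earlier (note that some very old versions of the library also took a MediaConstraints argument here):

peerConnection.addStream(mediaStream);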

addIceCandidate

IceCandidates are created when the internal ICE framework discovers candidates that allow the other party to connect to you. They are handed to you through PeerConnectionObserver.onIceCandidate, and you need to deliver them to the other side over whatever signaling channel you have chosen. When you receive the other side's IceCandidates, use addIceCandidate to add them to the PeerConnection, so that it can attempt to connect to the other party using the information they contain.
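
A sketch of the receiving side, assuming the three candidate fields arrive over your signaling channel (the variable names are illustrative, not a fixed wire format):

// sdpMid, sdpMLineIndex and sdp were received from the remote peer.
IceCandidate candidate = new IceCandidate(sdpMid, sdpMLineIndex, sdp);
peerConnection.addIceCandidate(candidate);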

createOffer/createAnswer

These two methods are used to establish the initial call. In WebRTC there is the notion of a caller and a callee: one side calls, the other answers. createOffer is used by the caller. It takes an SdpObserver, through which the resulting Session Description Protocol (SDP) description can be obtained and transferred to the other party, and a MediaConstraints instance. Once the other party receives the offer, it creates an answer and transmits it back to the caller. The SDP describes to the other side the expected format of the data (video, format, codecs, encryption, resolution, size, and so on). Once the caller receives the answer, both sides have agreed on the requirements of the communication, such as video, audio, and codecs.
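
A hedged sketch of the caller side, where sendOfferToPeer is a hypothetical helper for your signaling channel:

peerConnection.createOffer(new SdpObserver() {
    @Override
    public void onCreateSuccess(SessionDescription sdp) {
        // Apply the offer locally, then ship it to the callee.
        peerConnection.setLocalDescription(this, sdp);
        sendOfferToPeer(sdp); // hypothetical signaling helper
    }
    @Override public void onSetSuccess() {}
    @Override public void onCreateFailure(String error) {}
    @Override public void onSetFailure(String error) {}
}, constraints);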

setLocalDescription/setRemoteDescription

These are used to set the SDP generated by createOffer and createAnswer, including those obtained from the remote side. They allow the internal PeerConnection to configure the connection, so that things actually start working once you begin transmitting audio and video.
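
For instance, when the caller receives the answer SDP string back over signaling, applying it might look like this sketch (answerSdp and sdpObserver are assumptions):

SessionDescription answer = new SessionDescription(
    SessionDescription.Type.ANSWER, answerSdp);
peerConnection.setRemoteDescription(sdpObserver, answer);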

PeerConnectionObserver

This interface provides a way to monitor PeerConnection events, for example when a MediaStream is received, when IceCandidates are found, or when the communication needs to be re-established. These map functionally onto their web counterparts, so they will not be hard to understand if you have done some web development or have read WebRTC Getting Started. This interface must be implemented so that you can handle incoming events effectively, such as sending the IceCandidates you discover to the other party.
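
A sketch of the two callbacks you will almost certainly care about; the interface has several more that are omitted here for brevity, and sendCandidateToPeer is a hypothetical signaling helper:

// Inside your PeerConnectionObserver implementation:
@Override
public void onIceCandidate(IceCandidate candidate) {
    // Send the candidate to the remote peer over your signaling channel.
    sendCandidateToPeer(candidate); // hypothetical helper
}

@Override
public void onAddStream(MediaStream stream) {
    // Attach the remote video to a renderer so the user can see the peer.
    if (!stream.videoTracks.isEmpty()) {
        stream.videoTracks.get(0).addRenderer(remoteRenderer);
    }
}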

Conclusion

As shown above, the Android APIs are very straightforward once you know how they correspond to their web counterparts. With these tools, we can develop a WebRTC product and immediately deploy it to billions of devices.

WebRTC opens up communication between people: free for developers, free for end users. And it provides far more than video chat; it has been used for health services, low-latency file transfer, torrent downloads, and even gaming.

To see a real WebRTC application in action, download the Android or iOS version of appear.in. It works perfectly between the browser and the native app, and is free for up to 8 people in the same room. No registration is required.
