1. Project Background
The company developed a website, and while we were working on changing the user's profile picture, the leadership asked us to add a way to update the picture by taking a photo with the camera. Because our website is built on HTML5, we take the photo directly with H5. At first I thought this feature would be very simple, but it turned out not to be.
Below is an example, in AngularJS, of successfully calling the camera, taking a photo, and returning the captured image for upload:
2. How to call the camera
$scope.photoErr = false;
$scope.photoBtnDiable = true;
var mediaStream = null, track = null;
navigator.getMedia = (navigator.getUserMedia ||
    navigator.webkitGetUserMedia || navigator.mozGetUserMedia ||
    navigator.msGetUserMedia);
if (navigator.getMedia) {
    navigator.getMedia(
        {
            video: true
        },
        // successCallback
        function (stream) {
            var video = document.getElementById('video');
            video.src = window.URL.createObjectURL(stream);
            mediaStream = stream;
            track = stream.getTracks()[0];
            $scope.photoBtnDiable = false;
            $scope.$apply();
        },
        // errorCallback
        function (err) {
            $scope.errorPhoto();
            console.log("The following error occured: " + err);
        });
} else {
    $scope.errorPhoto();
}
Code parsing:
navigator is the browser object that contains information about the browser; it is what we use to open the camera, and $scope is AngularJS syntax. The first step is to declare navigator.getMedia, which picks the camera-opening function the current browser supports: getUserMedia, webkitGetUserMedia, mozGetUserMedia and msGetUserMedia correspond to standards-compliant browsers, Chrome, Firefox and IE respectively, and the browser automatically determines which one to call. The second step is the call that opens the camera. It takes three parameters: the media types we want to use, a success callback that handles the returned stream data, and an error callback that handles the error information returned when the call fails. In this example we request only video, but the microphone can be requested as well. The setting is as follows:
{
    video: true,
    audio: true
}
When the call succeeds, the camera is turned on and a video stream is returned. We assign that stream to the video tag so the live picture is shown on the page. mediaStream records the stream we obtained, and track tracks the camera state in Chrome; both variables are used later to close the camera.
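For reference, here is a minimal sketch of how the stream can be attached to the page. It assumes a video element with id "video" exists in the modal template (the id comes from the example above; autoplay is an assumption), and it adds the srcObject path that newer browsers use instead of createObjectURL:
// Sketch: attach the camera stream to the <video> element.
// Newer browsers deprecate window.URL.createObjectURL(stream); srcObject replaces it.
function attachStream(videoElement, stream) {
    if ('srcObject' in videoElement) {
        videoElement.srcObject = stream;                        // modern path
    } else {
        videoElement.src = window.URL.createObjectURL(stream);  // older fallback, as in the example above
    }
    videoElement.play();
}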
3. Take a photo
$scope.snap = function () {
    var video = document.getElementById('video');
    var canvas = document.createElement('canvas');
    canvas.width = 400;
    canvas.height = 304;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0, 400, 304);
    $scope.closeCamera();
    $uibModalInstance.close(canvas.toDataURL("image/png"));
};
To take a photo we use the canvas tag: create a canvas element, set it to the size of the image we want to capture, and draw the current video frame onto the canvas with the drawImage function. Finally we convert the canvas content to base64, return that data, and close the camera, which completes the photo-taking feature. The $uibModalInstance object here is the object our project uses to open the pop-up layer and control its display.
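If the returned base64 data later needs to be uploaded as a file rather than a string, one common approach (not part of the original code, shown only as a sketch) is to convert the data URL into a Blob first:
// Sketch: convert a "data:image/png;base64,..." string into a Blob for upload.
// dataURLToBlob is a hypothetical helper, not part of the project's code.
function dataURLToBlob(dataURL) {
    var parts = dataURL.split(',');
    var mime = parts[0].match(/:(.*?);/)[1];
    var binary = atob(parts[1]);
    var bytes = new Uint8Array(binary.length);
    for (var i = 0; i < binary.length; i++) {
        bytes[i] = binary.charCodeAt(i);
    }
    return new Blob([bytes], { type: mime });
}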
4. How to close the camera
$scope.closeCamera = function () {
    if (mediaStream != null) {
        if (mediaStream.stop) {
            mediaStream.stop();
        }
        $scope.videosrc = "";
    }
    if (track != null) {
        if (track.stop) {
            track.stop();
        }
    }
};
As mentioned above, the camera is closed through the mediaStream and track variables. The difference is that track can only close the camera in Chrome, and it is also the required way to close it in Chrome 45 and later.
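In newer browsers mediaStream.stop() no longer exists at all, so stopping every track is the safer route; a minimal sketch (an addition to the example above, not part of the original code):
// Sketch: stop every track on the stream; works where mediaStream.stop() has been removed.
function stopAllTracks(stream) {
    if (stream && stream.getTracks) {
        stream.getTracks().forEach(function (t) {
            t.stop();
        });
    }
}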
5. Integrate into AngularJS
In fact, everything above is already implemented inside AngularJS. So far, however, we have only opened the camera and returned the photo data; to use this elsewhere, we need to split this part out. We use the service mechanism in AngularJS to wrap it as a service and inject it into the project, so it can then be called anywhere.
Service registration:
app().registerService("h5TakePhotoService", function ($q, $uibModal) {
    this.photo = function () {
        var deferred = $q.defer();
        require([config.server + "/com/controllers/photo.js"], function () {
            $uibModal.open({
                templateUrl: config.server + "/com/views/modal_take_photo.html",
                controller: "photoModalController",
                windowClass: "modal-photo"
            }).result.then(function (e) {
                deferred.resolve(e);
            });
        });
        return deferred.promise;
    };
});
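The app().registerService wrapper and the require call above are specific to our project. For readers on stock AngularJS, a roughly equivalent registration would look like the sketch below (the module name myApp is an assumption, and the template is assumed to be loadable without the extra require step):
// Sketch: equivalent service registration with plain AngularJS.
angular.module('myApp').service('h5TakePhotoService', ['$uibModal',
    function ($uibModal) {
        this.photo = function () {
            // $uibModal.open(...).result is already a promise, so it can be returned directly.
            return $uibModal.open({
                templateUrl: '/com/views/modal_take_photo.html', // path is an assumption
                controller: 'photoModalController',
                windowClass: 'modal-photo'
            }).result;
        };
    }]);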
Call method:
$scope.takePhoto = function () {
    h5TakePhotoService.photo().then(function (res) {
        if (res != null && res != "") {
            $scope.myImage = res;
        }
    });
};
h5TakePhotoService is the photo service object injected into the controller. It handles the returned image data and assigns it to the scope so the picture can be shown on the page.
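For completeness, a sketch of what the injection looks like in a controller (the module and controller names are assumptions):
// Sketch: injecting the service into a controller with plain AngularJS.
angular.module('myApp').controller('profileController', ['$scope', 'h5TakePhotoService',
    function ($scope, h5TakePhotoService) {
        $scope.takePhoto = function () {
            h5TakePhotoService.photo().then(function (res) {
                if (res) {
                    $scope.myImage = res; // base64 data URL returned by the photo modal
                }
            });
        };
    }]);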
6. Compatibility issues
The main issue is with Chrome. During local testing Chrome works normally, but after the site is deployed to a server the camera cannot be opened and the error [object NavigatorUserMediaError] is reported. This is because Chrome only allows camera access from a secure origin, so the deployed site has to be accessed over https.
Finally, during testing you can only access the page through an http:// URL; access through a file:// URL will not work. In other words the code must be deployed behind a web server, which can be done with Visual Studio, a Java web server, or PHP.
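As a small convenience during testing, the page can warn early when it is not being served from an origin Chrome treats as secure (https, or localhost during development); a sketch, not part of the original project:
// Sketch: warn when the current origin will not be allowed to open the camera in Chrome.
function checkCameraOrigin() {
    var isLocalhost = location.hostname === 'localhost' || location.hostname === '127.0.0.1';
    if (location.protocol !== 'https:' && !isLocalhost) {
        console.warn('Camera access requires https in Chrome; current origin: ' + location.origin);
    }
}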