WebRTC Learning Nine: Camera Capture and Display


The newer WebRTC source code no longer has a VideoEngine structure that parallels VoiceEngine; it has MediaEngine instead. MediaEngine consists of the MediaEngineInterface interface and CompositeMediaEngine, which implements it. CompositeMediaEngine is itself a template class whose two template parameters are an audio engine and a video engine. Its derived class WebRtcMediaEngine supplies WebRtcVoiceEngine and WebRtcVideoEngine2 as those template arguments.
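For readers new to this layering, here is a minimal, self-contained sketch of how that template composition fits together. It is not the actual WebRTC source (the real classes carry a much larger interface); the stub types only stand in for the real engines.

// Simplified sketch (NOT the real WebRTC source) of the structure described above:
// CompositeMediaEngine is a template whose two parameters are an audio engine and
// a video engine, and WebRtcMediaEngine derives from it with the concrete engines.
struct WebRtcVoiceEngine {};      // stand-in for the real audio engine
struct WebRtcVideoEngine2 {};     // stand-in for the real video engine

class MediaEngineInterface {
 public:
  virtual ~MediaEngineInterface() {}
  virtual bool Init() = 0;
};

template <class AUDIO, class VIDEO>
class CompositeMediaEngine : public MediaEngineInterface {
 public:
  bool Init() override { return true; }
 protected:
  AUDIO voice_;   // audio engine held by composition
  VIDEO video_;   // video engine held by composition
};

// The derived class simply fixes the two template parameters.
class WebRtcMediaEngine
    : public CompositeMediaEngine<WebRtcVoiceEngine, WebRtcVideoEngine2> {};

int main() {
  WebRtcMediaEngine engine;
  return engine.Init() ? 0 : 1;
}

The point is simply that the audio and video engines are swapped in as template parameters, which is why WebRtcVideoEngine2 could replace the older WebRtcVideoEngine without changing the composite class.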


The base folder contains the abstract classes, and the engine folder contains their concrete implementations, which call directly into the underlying engines. WebRtcVoiceEngine is in fact another wrapper around VoiceEngine: it uses VoiceEngine to do the audio processing.

Note the naming: WebRtcVideoEngine2 carries a 2, which unsurprisingly marks it as an upgraded version; sure enough, there is also a WebRtcVideoEngine class.

WebRtcVideoEngine2's improvement over WebRtcVideoEngine is that it splits the video stream into a send stream (WebRtcVideoSendStream) and a receive stream (WebRtcVideoReceiveStream), which makes the structure more reasonable and the source code clearer.
This article mainly uses the WebRtcVideoCapturer class from WebRtcVideoEngine2.


One. Environment
Refer to the article WebRTC Learning Three: recording and playback.
Two. Implementation
Open the WebRtcVideoCapturer header file webrtcvideocapturer.h. The public functions are essentially implementations of the VideoCapturer class from the base folder and are used to initialize the device and start capturing. The private functions OnIncomingCapturedFrame and OnCaptureDelayChanged are callbacks invoked by the camera capture module VideoCaptureModule: the captured image is delivered to OnIncomingCapturedFrame, and changes in the capture delay are delivered to OnCaptureDelayChanged.
WebRTC also implements a signal and slot mechanism similar to Qt's, as described in WebRTC Learning Seven: a refined signal and slot mechanism. However, as mentioned in that article, the emit function name in sigslot.h conflicts with the emit macro in Qt, so I renamed emit in sigslot.h; after the change, the rtc_base project needs to be recompiled.


The VideoCapturer class has two signals: sigslot::signal2<VideoCapturer*, CaptureState> SignalStateChange and sigslot::signal2<VideoCapturer*, const CapturedFrame*, sigslot::multi_threaded_local> SignalFrameCaptured. From SignalFrameCaptured we can see that to obtain the CapturedFrame we only need to implement a slot function with a matching signature; the CapturedFrame received in that slot is what gets displayed. The CaptureState carried by SignalStateChange is an enumeration identifying the state of the capture (stopped, starting, running, failed).
The signal SignalFrameCaptured is emitted in the callback function OnIncomingCapturedFrame.
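As a quick illustration of that pattern, here is a minimal sketch of the sigslot mechanism on its own, assuming webrtc/base/sigslot.h is on the include path; FrameReceiver and OnState are made-up names used only for illustration. The receiver derives from sigslot::has_slots<> and connects a member function whose signature matches the signal's arguments.

#include "webrtc/base/sigslot.h"
#include <iostream>

// Made-up receiver class for illustration only.
class FrameReceiver : public sigslot::has_slots<> {
 public:
  // Slot: the signature must match the signal's argument list.
  void OnState(int state) { std::cout << "state = " << state << std::endl; }
};

int main() {
  sigslot::signal1<int> SignalStateChange;  // a one-argument signal
  FrameReceiver receiver;
  SignalStateChange.connect(&receiver, &FrameReceiver::OnState);
  SignalStateChange(42);                    // firing the signal calls OnState(42)
  return 0;
}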

Asynchronous function invocation is used inside OnIncomingCapturedFrame, so the captured frame is handed off to another thread rather than being processed directly in the callback; see WebRTC Learning Eight: asynchronous operation of functions.
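The general idea is shown below with plain C++11 only (this is not the WebRTC helper code used in the real implementation, just an illustration of the hand-off): frames arriving on the capture thread are queued and consumed on the thread that actually emits the signal.

#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

struct Frame { int width; int height; };

std::queue<Frame> frameQueue;
std::mutex frameMutex;
std::condition_variable frameReady;

// Plays the role of OnIncomingCapturedFrame: called on the capture thread.
void OnIncomingFrame(const Frame& frame) {
  std::lock_guard<std::mutex> lock(frameMutex);
  frameQueue.push(frame);
  frameReady.notify_one();
}

int main() {
  // Simulated capture thread producing a few frames.
  std::thread capture([] {
    for (int i = 0; i < 3; ++i) OnIncomingFrame({640, 480});
  });

  // Consumer side: plays the role of the thread that emits SignalFrameCaptured.
  for (int i = 0; i < 3; ++i) {
    std::unique_lock<std::mutex> lock(frameMutex);
    frameReady.wait(lock, [] { return !frameQueue.empty(); });
    Frame f = frameQueue.front();
    frameQueue.pop();
    std::cout << "frame " << f.width << "x" << f.height << " handled" << std::endl;
  }

  capture.join();
  return 0;
}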

mainwindow.h

#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>
#include <QDebug>
#include <map>
#include <memory>
#include <string>
#include "webrtc/base/sigslot.h"
#include "webrtc/modules/video_capture/video_capture.h"
#include "webrtc/modules/video_capture/video_capture_factory.h"
#include "webrtc/media/base/videocapturer.h"
#include "webrtc/media/engine/webrtcvideocapturer.h"
#include "webrtc/media/engine/webrtcvideoframe.h"

namespace Ui {
class MainWindow;
}

class MainWindow : public QMainWindow, public sigslot::has_slots<>
{
    Q_OBJECT

public:
    explicit MainWindow(QWidget *parent = 0);
    ~MainWindow();

    // Slot for SignalFrameCaptured: receives each captured frame
    void OnFrameCaptured(cricket::VideoCapturer* capturer, const cricket::CapturedFrame* frame);
    // Slot for SignalStateChange: receives capture state changes
    void OnStateChange(cricket::VideoCapturer* capturer, cricket::CaptureState state);

private slots:
    void on_pushButtonOpen_clicked();

private:
    void GetDeviceList();

private:
    Ui::MainWindow *ui;
    cricket::WebRtcVideoCapturer *videoCapturer;
    cricket::WebRtcVideoFrame *videoFrame;
    std::unique_ptr<uint8_t[]> videoImage;
    QStringList deviceNameList;
    QStringList deviceIdList;
};

#endif // MAINWINDOW_H
mainwindow.cpp

#include "mainwindow.h" #include "ui_mainwindow.h" Mainwindow::mainwindow (Qwidget *parent): Qmainwindow (parent), UI (n    EW Ui::mainwindow), Videocapturer (New Cricket::webrtcvideocapturer ()), Videoframe (New Cricket::webrtcvideoframe ()) {   UI-&GT;SETUPUI (this); Getdevicelist ();}    Mainwindow::~mainwindow () {Delete UI;    Videocapturer->signalframecaptured.disconnect (this);    Videocapturer->signalstatechange.disconnect (this); Videocapturer->stop ();} void mainwindow::onframecaptured (cricket::videocapturer* capturer,const cricket::capturedframe* frame) {videoFrame-    >init (frame, frame->width, frame->height,true); Convert video image to RGB format videoframe->converttorgbbuffer (Cricket::fourcc_argb, Videoimage.get (                                  ), Videoframe->width () *videoframe->height () *32/8,    Videoframe->width () *32/8); Qimage image (Videoimage.get (), Videoframe->width (), Videoframe->heIght (), QIMAGE::FORMAT_RGB32); Ui->label->setpixmap (qpixmap::fromimage (image));} void Mainwindow::onstatechange (cricket::videocapturer* capturer, cricket::capturestate State) {}void MainWindow::    Getdevicelist () {devicenamelist.clear ();    Deviceidlist.clear ();    Webrtc::videocapturemodule::D eviceinfo *info=webrtc::videocapturefactory::createdeviceinfo (0);    int devicenum=info->numberofdevices ();        for (int i = 0; i < Devicenum; ++i) {const uint32_t ksize = 256;        Char Name[ksize] = {0};        Char Id[ksize] = {0};            if (Info->getdevicename (i, name, ksize, ID, ksize)! =-1) {Devicenamelist.append (QString (name));            Deviceidlist.append (QString (id));        Ui->comboboxdevicelist->additem (QString (name));    }} if (devicenum==0) {ui->pushbuttonopen->setenabled (false);    }}void mainwindow::on_pushbuttonopen_clicked () {static bool flag=true; if (flag) {Ui->pushbuttoNopen->settext (Qstringliteral ("Off"));        Const std::string kdevicename = Ui->comboboxdevicelist->currenttext (). tostdstring ();        Const std::string Kdeviceid = deviceidlist.at (Ui->comboboxdevicelist->currentindex ()). ToStdString ();        Videocapturer->init (Cricket::D evice (Kdevicename, Kdeviceid));        int width=videocapturer->getsupportedformats ()->at (0). width;        int height=videocapturer->getsupportedformats ()->at (0). Height;        Cricket::videoformat format (videocapturer->getsupportedformats ()->at (0)); Start capturing if (cricket::cs_starting = = Videocapturer->start (format)) {Qdebug () << "Capture is        Started ";        }//Connect the WEBRTC signal and slot videocapturer->signalframecaptured.connect (this,&mainwindow::onframecaptured);        Videocapturer->signalstatechange.connect (This,&mainwindow::onstatechange); if (videocapturer->isrunning ()) {qdebug () << Capture is running ";    } videoimage.reset (new UINT8_T[WIDTH*HEIGHT*32/8]);        } else {Ui->pushbuttonopen->settext (qstringliteral ("open"));        Repeated connection will be error, need to disconnect first, the ability to reconnect videocapturer->signalframecaptured.disconnect (this);        Videocapturer->signalstatechange.disconnect (this);        Videocapturer->stop ();        if (!videocapturer->isrunning ()) {qdebug () << "Capture is stoped";    } ui->label->clear (); } Flag=!flag;}
main.cpp

#include "mainwindow.h" #include <qapplication>int main (int argc, char *argv[]) {    qapplication A (argc, argv); C2/>mainwindow W;    W.show ();    while (true)    {        //WEBRTC message loop        rtc::thread::current ()->processmessages (0);        Rtc::thread::current ()->sleepms (1);        QT message loop        a.processevents ();    }}
Note that both the WebRTC message loop and the Qt message loop are driven in the main function; this is the key to capturing and displaying the camera with Qt calling WebRTC.

Three. Effects






