WebRTC Learning Nine: Camera Capture and Display


The newer WebRTC source has no VideoEngine structure corresponding to VoiceEngine; it has been replaced by MediaEngine. MediaEngine contains the MediaEngineInterface interface and its implementation CompositeMediaEngine. CompositeMediaEngine is itself a template class whose two template parameters are the audio engine and the video engine, respectively. Its derived class WebRtcMediaEngine instantiates the template with WebRtcVoiceEngine and WebRtcVideoEngine2.
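As a rough illustration of that structure, here is a simplified, self-contained sketch, not the actual WebRTC source; the Init bodies and the stand-in engine structs are assumptions for illustration only:

// Simplified sketch of the CompositeMediaEngine idea: one template class
// that composes an audio engine and a video engine.
template <class VOICE, class VIDEO>
class CompositeMediaEngine {
 public:
  bool Init() { return voice_.Init() && video_.Init(); }
 protected:
  VOICE voice_;
  VIDEO video_;
};

// Stand-ins so the sketch compiles; in WebRTC these are the real engines.
struct WebRtcVoiceEngine  { bool Init() { return true; } };
struct WebRtcVideoEngine2 { bool Init() { return true; } };

// The derived engine just fixes the two template parameters:
class WebRtcMediaEngine
    : public CompositeMediaEngine<WebRtcVoiceEngine, WebRtcVideoEngine2> {};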
In the mediaengine source tree, the base directory holds the abstract classes and the engine directory holds their implementations; callers invoke the interfaces in the engine directory directly. WebRtcVoiceEngine is essentially a wrapper around VoiceEngine, which it uses for audio processing. Note the naming: WebRtcVideoEngine2 ends in a 2. Do not assume it is simply an upgraded VideoEngine; there is also a WebRtcVideoEngine class. The improvement of WebRtcVideoEngine2 over WebRtcVideoEngine is that it splits the video stream into a send stream (WebRtcVideoSendStream) and a receive stream (WebRtcVideoReceiveStream), which makes the structure more reasonable and the source code clearer.
This article mainly uses the WebRtcVideoCapturer class in WebRtcVideoEngine2 to capture and display camera frames.
First. Environment
Refer to the previous article: WebRTC Learning Three: Recording and Playback.
Second. Implementation
Open the WebRtcVideoCapturer header file webrtcvideocapturer.h. Its public functions are basically implementations of the VideoCapturer class in the base directory, used to initialize the device and start capturing. The private functions OnIncomingCapturedFrame and OnCaptureDelayChanged are called back by the camera capture module VideoCaptureModule: the captured image is passed to OnIncomingCapturedFrame, and changes in the capture delay are passed to OnCaptureDelayChanged.
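For reference, those two callbacks implement the module's VideoCaptureDataCallback interface, which in the WebRTC branches of that era looked approximately like this (a sketch from memory; verify the exact signatures against your checkout):

// Approximate shape of webrtc::VideoCaptureDataCallback
// (declared alongside the video_capture module headers).
class VideoCaptureDataCallback {
 public:
  // Delivers each captured frame, tagged with the module id.
  virtual void OnIncomingCapturedFrame(const int32_t id,
                                       const webrtc::VideoFrame& videoFrame) = 0;
  // Reports changes in the capture pipeline's delay, in milliseconds.
  virtual void OnCaptureDelayChanged(const int32_t id,
                                     const int32_t delay) = 0;
 protected:
  virtual ~VideoCaptureDataCallback() {}
};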
WebRTC also implements a signal and slot mechanism similar to Qt's, as described in WebRTC Learning Seven: the signal and slot mechanism. But as mentioned in that article, the emit function name in sigslot.h conflicts with Qt's emit macro, so I renamed the emit function in sigslot.h; after the change, the rtc_base project must be recompiled.
The VideoCapturer class has two signals: sigslot::signal2<VideoCapturer*, CaptureState> SignalStateChange and sigslot::signal2<VideoCapturer*, const CapturedFrame*, sigslot::multi_threaded_local> SignalFrameCaptured. From the parameters of SignalFrameCaptured we can see that as long as we implement a matching slot function we receive the CapturedFrame, and the slot can then display it. The CaptureState parameter of SignalStateChange is an enumeration identifying the capture state (stopped, starting, running, failed).
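As a minimal sketch of how sigslot connects a signal to a slot (using only the sigslot API itself; the Receiver class and slot name here are made up for illustration):

#include "webrtc/base/sigslot.h"

// A receiver inherits sigslot::has_slots<> so that connections are
// automatically broken when the receiver is destroyed.
class Receiver : public sigslot::has_slots<> {
 public:
  void OnValue(int value, const char* tag) { /* handle the value */ }
};

void Demo() {
  sigslot::signal2<int, const char*> signal;      // a two-argument signal
  Receiver receiver;
  signal.connect(&receiver, &Receiver::OnValue);  // connect the slot
  signal(42, "hello");  // emit via operator(): calls receiver.OnValue(42, "hello")
}

Note that emitting through operator() sidesteps the emit/Qt macro clash described above.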
The signal SignalFrameCaptured is emitted in the callback function OnIncomingCapturedFrame. Internally, OnIncomingCapturedFrame uses asynchronous function execution; see WebRTC Learning Eight: asynchronous function execution.
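Concretely, the capturer hops from the module's capture thread back to the thread that started capture using rtc::AsyncInvoker. A minimal sketch of the pattern, under the assumption of that era's webrtc/base API (the Capturer class is simplified and the frame argument is omitted):

#include "webrtc/base/asyncinvoker.h"
#include "webrtc/base/bind.h"
#include "webrtc/base/thread.h"

class Capturer {
 public:
  void OnIncomingCapturedFrame(/* frame arguments omitted */) {
    // Marshal the work onto start_thread_ instead of running it on the
    // capture module's thread; the signal is then emitted over there.
    invoker_.AsyncInvoke<void>(
        start_thread_,
        rtc::Bind(&Capturer::SignalFrameCapturedOnStartThread, this));
  }

 private:
  void SignalFrameCapturedOnStartThread() { /* emit SignalFrameCaptured here */ }

  rtc::Thread* start_thread_ = rtc::Thread::Current();
  rtc::AsyncInvoker invoker_;
};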


mainwindow.h

#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>
#include <QDebug>

#include <map>
#include <memory>
#include <string>

#include "webrtc/base/sigslot.h"
#include "webrtc/modules/video_capture/video_capture.h"
#include "webrtc/modules/video_capture/video_capture_factory.h"
#include "webrtc/media/base/videocapturer.h"
#include "webrtc/media/engine/webrtcvideocapturer.h"
#include "webrtc/media/engine/webrtcvideoframe.h"

namespace Ui {
class MainWindow;
}

class MainWindow : public QMainWindow, public sigslot::has_slots<>
{
    Q_OBJECT

public:
    explicit MainWindow(QWidget *parent = 0);
    ~MainWindow();
    void OnFrameCaptured(cricket::VideoCapturer* capturer, const cricket::CapturedFrame* frame);
    void OnStateChange(cricket::VideoCapturer* capturer, cricket::CaptureState state);

private slots:
    void on_pushButtonOpen_clicked();

private:
     void getDeviceList();

private:
    Ui::MainWindow *ui;
    cricket::WebRtcVideoCapturer *videoCapturer;
    cricket::WebRtcVideoFrame *videoFrame;
    std::unique_ptr<uint8_t[]> videoImage;
    QStringList deviceNameList;
    QStringList deviceIDList;
};

#endif // MAINWINDOW_H
mainwindow.cpp

#include "mainwindow.h"
#include "ui_mainwindow.h"

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow),
    videoCapturer(new cricket::WebRtcVideoCapturer()),
    videoFrame(new cricket::WebRtcVideoFrame())
{
    ui->setupUi(this);
    getDeviceList();
}

MainWindow::~MainWindow()
{
    // Stop capturing before tearing down the UI, so that no late callback
    // can touch a deleted widget.
    videoCapturer->SignalFrameCaptured.disconnect(this);
    videoCapturer->SignalStateChange.disconnect(this);
    videoCapturer->Stop();
    delete videoCapturer;
    delete videoFrame;
    delete ui;
}

void MainWindow::OnFrameCaptured(cricket::VideoCapturer* capturer, const cricket::CapturedFrame* frame)
{
    videoFrame->Init(frame, frame->width, frame->height, true);
    // Convert the captured frame into a 32-bit ARGB buffer that QImage can wrap.
    videoFrame->ConvertToRgbBuffer(cricket::FOURCC_ARGB,
                                   videoImage.get(),
                                   videoFrame->width() * videoFrame->height() * 32 / 8,
                                   videoFrame->width() * 32 / 8);

    QImage image(videoImage.get(), videoFrame->width(), videoFrame->height(), QImage::Format_RGB32);
    ui->label->setPixmap(QPixmap::fromImage(image));
}


void MainWindow::OnStateChange(cricket::VideoCapturer* capturer, cricket::CaptureState state)
{
    // The capture state (CS_STOPPED / CS_STARTING / CS_RUNNING / CS_FAILED)
    // is not used in this demo.
}

void MainWindow::getDeviceList()
{
    deviceNameList.clear();
    deviceIDList.clear();
    webrtc::VideoCaptureModule::DeviceInfo* info = webrtc::VideoCaptureFactory::CreateDeviceInfo(0);
    int deviceNum = info->NumberOfDevices();

    for (int i = 0; i < deviceNum; ++i)
    {
        const uint32_t kSize = 256;
        char name[kSize] = {0};
        char id[kSize] = {0};
        if (info->GetDeviceName(i, name, kSize, id, kSize) != -1)
        {
            deviceNameList.append(QString(name));
            deviceIDList.append(QString(id));
            ui->comboBoxDeviceList->addItem(QString(name));
        }
    }
    delete info;  // CreateDeviceInfo transfers ownership to the caller

    if (deviceNum == 0)
    {
        ui->pushButtonOpen->setEnabled(false);
    }
}

void MainWindow::on_pushButtonOpen_clicked()
{
    static bool flag = true;
    if (flag)
    {
        ui->pushButtonOpen->setText(QStringLiteral("Close"));

        const std::string kDeviceName = ui->comboBoxDeviceList->currentText().toStdString();
        const std::string kDeviceId = deviceIDList.at(ui->comboBoxDeviceList->currentIndex()).toStdString();

        videoCapturer->Init(cricket::Device(kDeviceName, kDeviceId));
        int width = videoCapturer->GetSupportedFormats()->at(0).width;
        int height = videoCapturer->GetSupportedFormats()->at(0).height;
        cricket::VideoFormat format(videoCapturer->GetSupportedFormats()->at(0));

        // Allocate the RGB buffer and connect the WebRTC signals and slots
        // before starting, so the first frame cannot arrive unprepared.
        videoImage.reset(new uint8_t[width * height * 32 / 8]);
        videoCapturer->SignalFrameCaptured.connect(this, &MainWindow::OnFrameCaptured);
        videoCapturer->SignalStateChange.connect(this, &MainWindow::OnStateChange);

        // Start capturing
        if (cricket::CS_STARTING == videoCapturer->Start(format))
        {
            qDebug() << "Capture is started";
        }

        if (videoCapturer->IsRunning())
        {
            qDebug() << "Capture is running";
        }
    }
    else
    {
        ui->pushButtonOpen->setText(QStringLiteral("Open"));
        // Connecting the same slot twice reports an error, so disconnect
        // before the next connect.
        videoCapturer->SignalFrameCaptured.disconnect(this);
        videoCapturer->SignalStateChange.disconnect(this);
        videoCapturer->Stop();
        if (!videoCapturer->IsRunning())
        {
            qDebug() << "Capture is stopped";
        }
        ui->label->clear();
    }
    flag = !flag;
}
main.cpp

#include "mainwindow.h"
#include <QApplication>

int main (int argc, char *argv[])
{
    Qapplication A (argc, argv);
    MainWindow W;
    W.show ();
    while (true)
    {
        //WEBRTC message loop
        rtc::thread::current ()->processmessages (0);
        Rtc::thread::current ()->sleepms (1);
        QT message loop
        a.processevents ();
    }
}
Note that both the WebRTC and Qt message loops are pumped in the main function; this is the key to capturing and displaying the camera with Qt calling WebRTC.
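If the busy while (true) loop is undesirable, one alternative (my suggestion, not part of the original setup) is to let a QTimer pump the WebRTC message queue from inside Qt's own event loop, roughly like this:

// Alternative sketch: drive WebRTC from a QTimer inside Qt's event loop.
#include "mainwindow.h"
#include <QApplication>
#include <QTimer>

#include "webrtc/base/thread.h"

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;
    w.show();

    QTimer timer;
    QObject::connect(&timer, &QTimer::timeout, []() {
        rtc::Thread::Current()->ProcessMessages(0);  // drain pending WebRTC messages
    });
    timer.start(1);  // fire roughly every millisecond

    return a.exec();  // Qt's event loop now services both sides
}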
Third. Effects

(The original post showed screenshots of the running camera capture here.)