Qt + OpenFace: building a face-recognition machine


Recently, a project required me to use OpenFace from Qt on Ubuntu 14.04. Getting the configuration right took a long time, so I am recording the process here to save others some detours.


Installing OpenFace

Openface's official website: https://github.com/TadasBaltrusaitis/OpenFace
Follow the instructions there to install OpenFace. Note: be sure to follow those steps exactly, otherwise it is easy to make mistakes. Once installed, you can use OpenFace from Qt.
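For reference, the build on Ubuntu 14.04 followed roughly these steps (a sketch based on the OpenFace README of the time; the package names are assumptions that may differ on your system, and OpenCV 3.1 has to be built from source separately on 14.04):

```shell
# Build dependencies (OpenCV 3.1 is built from source separately)
sudo apt-get install build-essential cmake libboost-all-dev libtbb-dev

# Fetch and build OpenFace
git clone https://github.com/TadasBaltrusaitis/OpenFace.git
cd OpenFace
mkdir -p build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE ..
make
```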


Using OpenFace in Qt

In Qt I need to use OpenFace to compute the angle of the face. The sample code that ships with OpenFace is as follows:
FaceLandmarkVid.cpp (path: OpenFace/exe/FaceLandmarkVid/FaceLandmarkVid.cpp)

//////////////////////////////////////////////
// Copyright (C), Carnegie Mellon University and University of Cambridge,
// all rights reserved.
//
// THIS SOFTWARE IS PROVIDED "AS IS" FOR ACADEMIC USE ONLY AND ANY EXPRESS
// OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
// WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
// DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
// LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
// CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
// SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
// CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
// ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
// THE POSSIBILITY OF SUCH DAMAGE.
//
// Notwithstanding the license granted herein, Licensee acknowledges that certain components
// of the Software may be covered by so-called "open source" software licenses ("Open Source
// Components"), which means any software licenses approved as open source licenses by the
// Open Source Initiative or any substantially similar licenses, including without limitation any
// license that, as a condition of distribution of the software licensed under such license,
// requires that the distributor make the software available in source code format. Licensor shall
// provide a list of Open Source Components for a particular version of the Software upon
// Licensee's request. Licensee will comply with the applicable terms of such licenses and to
// the extent required by the licenses covering Open Source Components, the terms of such
// licenses will apply in lieu of the terms of this Agreement. To the extent the terms of the
// licenses applicable to Open Source Components prohibit any of the restrictions in this
// License Agreement with respect to such Open Source Component, such restrictions will not
// apply to such Open Source Component. To the extent the terms of the licenses applicable to
// Open Source Components require Licensor to make an offer to provide source code or
// related information in connection with the Software, such offer is hereby made. Any request
// for source code or related information should be directed to cl-face-tracker-distribution@lists.cam.ac.uk.
// Licensee acknowledges receipt of notices for the Open Source Components for the initial
// delivery of the Software.
//
// * Any publications arising from the use of this software, including but
//   not limited to academic journal and conference publications, technical
//   reports and manuals, must cite at least one of the following works:
//
//   OpenFace: an open source facial behavior analysis toolkit
//   Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency
//   in IEEE Winter Conference on Applications of Computer Vision, 2016
//
//   Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
//   Erroll Wood, Tadas Baltrušaitis, Xucong Zhang, Yusuke Sugano, Peter Robinson, and Andreas Bulling
//   in IEEE International Conference on Computer Vision (ICCV), 2015
//
//   Cross-dataset learning and person-specific normalisation for automatic Action Unit detection
//   Tadas Baltrušaitis, Marwa Mahmoud, and Peter Robinson
//   in Facial Expression Recognition and Analysis Challenge,
//   IEEE International Conference on Automatic Face and Gesture Recognition, 2015
//
//   Constrained Local Neural Fields for robust facial landmark detection in the wild.
//   Tadas Baltrušaitis, Peter Robinson, and Louis-Philippe Morency.
//   in IEEE Int. Conference on Computer Vision Workshops, 300 Faces in-the-Wild Challenge, 2013.
//
//////////////////////////////////////////////
// FaceTrackingVid.cpp : Defines the entry point for the console application for tracking faces in videos.

// Libraries for landmark detection (includes CLNF and CLM modules)
#include "LandmarkCoreIncludes.h"
#include "GazeEstimation.h"

#include <fstream>
#include <sstream>

// OpenCV includes
#include <opencv2/videoio/videoio.hpp>  // Video write
#include <opencv2/videoio/videoio_c.h>  // Video write
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

// Boost includes
#include <filesystem.hpp>
#include <filesystem/fstream.hpp>

#define INFO_STREAM( stream ) \
std::cout << stream << std::endl

#define WARN_STREAM( stream ) \
std::cout << "Warning: " << stream << std::endl

#define ERROR_STREAM( stream ) \
std::cout << "Error: " << stream << std::endl

static void printErrorAndAbort(const std::string & error)
{
    std::cout << error << std::endl;
    abort();
}

#define FATAL_STREAM( stream ) \
printErrorAndAbort(std::string("Fatal error: ") + stream)

using namespace std;

vector<string> get_arguments(int argc, char **argv)
{
    vector<string> arguments;

    for (int i = 0; i < argc; ++i)
    {
        arguments.push_back(string(argv[i]));
    }
    return arguments;
}

// Some globals for tracking timing information for visualisation
double fps_tracker = -1.0;
int64 t0 = 0;

// Visualising the results
void visualise_tracking(cv::Mat& captured_image, cv::Mat_<float>& depth_image, const LandmarkDetector::CLNF& face_model, const LandmarkDetector::FaceModelParameters& det_parameters, cv::Point3f gazeDirection0, cv::Point3f gazeDirection1, int frame_count, double fx, double fy, double cx, double cy)
{
    // Drawing the facial landmarks on the face and the bounding box around it if tracking is successful and initialised
    double detection_certainty = face_model.detection_certainty;
    bool detection_success = face_model.detection_success;

    double visualisation_boundary = 0.2;

    // Only draw if the reliability is reasonable, the value is slightly ad-hoc
    if (detection_certainty < visualisation_boundary)
    {
        LandmarkDetector::Draw(captured_image, face_model);

        double vis_certainty = detection_certainty;
        if (vis_certainty > 1)
            vis_certainty = 1;
        if (vis_certainty < -1)
            vis_certainty = -1;

        vis_certainty = (vis_certainty + 1) / (visualisation_boundary + 1);

        // A rough heuristic for box around the face width
        int thickness = (int)std::ceil(2.0 * ((double)captured_image.cols) / 640.0);

        cv::Vec6d pose_estimate_to_draw = LandmarkDetector::GetCorrectedPoseWorld(face_model, fx, fy, cx, cy);

        // Draw it in reddish if uncertain, blueish if certain
        LandmarkDetector::DrawBox(captured_image, pose_estimate_to_draw, cv::Scalar((1 - vis_certainty) * 255.0, 0, vis_certainty * 255), thickness, fx, fy, cx, cy);

        if (det_parameters.track_gaze && detection_success && face_model.eye_model)
        {
            FaceAnalysis::DrawGaze(captured_image, face_model, gazeDirection0, gazeDirection1, fx, fy, cx, cy);
        }
    }

    // Work out the framerate
    if (frame_count % 10 == 0)
    {
        double t1 = cv::getTickCount();
        fps_tracker = 10.0 / (double(t1 - t0) / cv::getTickFrequency());
        t0 = t1;
    }

    // Write out the framerate on the image before displaying it
    char fpsC[255];
    std::sprintf(fpsC, "%d", (int)fps_tracker);
    string fpsSt("FPS:");
    fpsSt += fpsC;
    cv::putText(captured_image, fpsSt, cv::Point(10, 20), CV_FONT_HERSHEY_SIMPLEX, 0.5, CV_RGB(255, 0, 0));

    if (!det_parameters.quiet_mode)
    {
        cv::namedWindow("tracking_result", 1);
        cv::imshow("tracking_result", captured_image);

        if (!depth_image.empty())
        {
            // Division needed for visualisation purposes
            imshow("depth", depth_image / 2000.0);
        }
    }
}

int main(int argc, char **argv)
{
    vector<string> arguments = get_arguments(argc, argv);

    // Some initial parameters that can be overriden from command line
    vector<string> files, depth_directories, output_video_files, out_dummy;

    // By default try webcam 0
    int device = 0;

    LandmarkDetector::FaceModelParameters det_parameters(arguments);

    // Get the input output file parameters

    // Indicates that rotation should be with respect to world or camera coordinates
    bool u;
    LandmarkDetector::get_video_input_output_params(files, depth_directories, out_dummy, output_video_files, u, arguments);

    // The modules that are being used for tracking
    LandmarkDetector::CLNF clnf_model(det_parameters.model_location);

    // Grab camera parameters, if they are not defined (approximate values will be used)
    float fx = 0, fy = 0, cx = 0, cy = 0;
    // Get camera parameters
    LandmarkDetector::get_camera_params(device, fx, fy, cx, cy, arguments);

    // If cx (optical axis centre) is undefined will use the image size/2 as an estimate
    bool cx_undefined = false;
    bool fx_undefined = false;
    if (cx == 0 || cy == 0)
    {
        cx_undefined = true;
    }
    if (fx == 0 || fy == 0)
    {
        fx_undefined = true;
    }

    // If multiple video files are tracked, use this to indicate if we are done
    bool done = false;
    int f_n = -1;

    det_parameters.track_gaze = true;

    while (!done) // this is not a for loop as we might also be reading from a webcam
    {
        string current_file;

        // We might specify multiple video files as arguments
        if (files.size() > 0)
        {
            f_n++;
            current_file = files[f_n];
        }
        else
        {
            // If we want to read from a webcam
            f_n = 0;
        }

        bool use_depth = !depth_directories.empty();

        // Do some grabbing
        cv::VideoCapture video_capture;
        if (current_file.size() > 0)
        {
            if (!boost::filesystem::exists(current_file))
            {
                FATAL_STREAM("File does not exist");
            }

            current_file = boost::filesystem::path(current_file).generic_string();

            INFO_STREAM("Attempting to read from file: " << current_file);
            video_capture = cv::VideoCapture(current_file);
        }
        else
        {
            INFO_STREAM("Attempting to capture from device: " << device);
            video_capture = cv::VideoCapture(device);

            // Read a first frame often empty in camera
            cv::Mat captured_image;
            video_capture >> captured_image;
        }

        if (!video_capture.isOpened()) FATAL_STREAM("Failed to open video source");
        else INFO_STREAM("Device or file opened");

        cv::Mat captured_image;
        video_capture >> captured_image;

        // If optical centers are not defined just use center of image
        if (cx_undefined)
        {
            cx = captured_image.cols / 2.0f;
            cy = captured_image.rows / 2.0f;
        }
        // Use a rough guess-timate of focal length
        if (fx_undefined)
        {
            fx = 500 * (captured_image.cols / 640.0);
            fy = 500 * (captured_image.rows / 480.0);

            fx = (fx + fy) / 2.0;
            fy = fx;
        }

        int frame_count = 0;

        // Saving the videos
        cv::VideoWriter writerFace;
        if (!output_video_files.empty())
        {
            writerFace = cv::VideoWriter(output_video_files[f_n], CV_FOURCC('D', 'I', 'V', 'X'), 30, captured_image.size(), true);
        }

        // Use for timestamping if using a webcam
        int64 t_initial = cv::getTickCount();

        INFO_STREAM("Starting tracking");
        while (!captured_image.empty())
        {
            // Reading the images
            cv::Mat_<float> depth_image;
            cv::Mat_<uchar> grayscale_image;

            if (captured_image.channels() == 3)
            {
                cv::cvtColor(captured_image, grayscale_image, CV_BGR2GRAY);
            }
            else
            {
                grayscale_image = captured_image.clone();
            }

            // Get depth image
            if (use_depth)
            {
                char* dst = new char[100];
                std::stringstream sstream;

                sstream << depth_directories[f_n] << "\\depth%05d.png";
                sprintf(dst, sstream.str().c_str(), frame_count + 1);
                // Reading in 16-bit png image representing depth
                cv::Mat_<short> depth_image_16_bit = cv::imread(string(dst), -1);

                // Convert to a floating point depth image
                if (!depth_image_16_bit.empty())
                {
                    depth_image_16_bit.convertTo(depth_image, CV_32F);
                }
                else
                {
                    WARN_STREAM("Can't find depth image");
                }
            }

            // The actual facial landmark detection / tracking
            bool detection_success = LandmarkDetector::DetectLandmarksInVideo(grayscale_image, depth_image, clnf_model, det_parameters);

            // Visualising the results
            // Drawing the facial landmarks on the face and the bounding box around it if tracking is successful and initialised
            double detection_certainty = clnf_model.detection_certainty;

            // Gaze tracking, absolute gaze direction
            cv::Point3f gazeDirection0(0, 0, -1);
            cv::Point3f gazeDirection1(0, 0, -1);

            if (det_parameters.track_gaze && detection_success && clnf_model.eye_model)
            {
                FaceAnalysis::EstimateGaze(clnf_model, gazeDirection0, fx, fy, cx, cy, true);
                FaceAnalysis::EstimateGaze(clnf_model, gazeDirection1, fx, fy, cx, cy, false);
            }

            visualise_tracking(captured_image, depth_image, clnf_model, det_parameters, gazeDirection0, gazeDirection1, frame_count, fx, fy, cx, cy);

            // Output the tracked video
            if (!output_video_files.empty())
            {
                writerFace << captured_image;
            }

            video_capture >> captured_image;

            // Detect key presses
            char character_press = cv::waitKey(1);

            // Restart the tracker
            if (character_press == 'r')
            {
                clnf_model.Reset();
            }
            // Quit the application
            else if (character_press == 'q')
            {
                return(0);
            }

            // Update the frame count
            frame_count++;
        }

        frame_count = 0;

        // Reset the model, for the next video
        clnf_model.Reset();

        // Break out of the loop if done with all the files (or using a webcam)
        if (f_n == files.size() - 1 || files.empty())
        {
            done = true;
        }
    }

    return 0;
}
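Since the goal here is computing face angles: the cv::Vec6d returned by LandmarkDetector::GetCorrectedPoseWorld holds [Tx, Ty, Tz, Rx, Ry, Rz], with the rotations expressed in radians. A minimal, OpenCV-free sketch of converting that rotation to degrees (the PoseAngles struct and the helper names are mine, for illustration only):

```cpp
#include <cmath>

// Pitch/yaw/roll of the head, in degrees.
struct PoseAngles { double pitch, yaw, roll; };

double rad_to_deg(double r)
{
    const double kPi = 3.14159265358979323846;
    return r * 180.0 / kPi;
}

// 'pose' stands in for the Vec6d from GetCorrectedPoseWorld:
// indices 3..5 are the rotation around x (pitch), y (yaw), z (roll).
PoseAngles pose_to_degrees(const double pose[6])
{
    PoseAngles a;
    a.pitch = rad_to_deg(pose[3]);
    a.yaw   = rad_to_deg(pose[4]);
    a.roll  = rad_to_deg(pose[5]);
    return a;
}
```

In the tracking loop this would be applied right after GetCorrectedPoseWorld, for example to pose_estimate_to_draw.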


One change is needed in the code above:
change LandmarkDetector::FaceModelParameters det_parameters(arguments); to LandmarkDetector::FaceModelParameters det_parameters;
I am not entirely sure why this change is necessary, but the program only works for me after making it.

Create a new project in Qt, then copy the code above into it, paying special attention to the settings in the project's .pro configuration file.

Here is my .pro file for reference:

QT += core
QT -= gui

TARGET = Openface
CONFIG += console
CONFIG -= app_bundle
CONFIG += c++11

TEMPLATE = app

SOURCES += main.cpp

INCLUDEPATH += /home/qq/document/usr/local/opencv_3.1/so/include \
    /home/qq/document/work/openface/lib/local/LandmarkDetector/include/ \
    /home/qq/document/work/openface/lib/local/FaceAnalyser/include/ \
    /home/qq/document/usr/local/boost/include/ \
    /home/qq/document/usr/local/boost/include/boost \
    /home/qq/document/work/openface/lib/3rdParty/dlib/include \
    /home/qq/document/usr/local/tbb/include/ \
    /usr/local/include \
    /usr/include/boost \
    /home/qq/document/usr/local/cblas/include

LIBS += -L/home/qq/document/work/openface/build/lib/local/FaceAnalyser \
    -lFaceAnalyser

LIBS += -L/home/qq/document/work/openface/build/lib/local/LandmarkDetector \
    -lLandmarkDetector

LIBS += -L/home/qq/document/work/openface/build/lib/3rdParty/dlib \
    -ldlib

LIBS += -L/home/qq/document/usr/local/opencv_3.1/so/lib \
    -lopencv_calib3d \
    -lopencv_core \
    -lopencv_cudaarithm \
    -lopencv_cudabgsegm \
    -lopencv_cudacodec \
    -lopencv_cudafeatures2d \
    -lopencv_cudafilters \
    -lopencv_cudaimgproc \
    -lopencv_cudalegacy \
    -lopencv_cudaobjdetect \
    -lopencv_cudaoptflow \
    -lopencv_cudastereo \
    -lopencv_cudawarping \
    -lopencv_cudev \
    -lopencv_features2d \
    -lopencv_flann \
    -lopencv_highgui \
    -lopencv_imgcodecs \
    -lopencv_imgproc \
    -lopencv_ml \
    -lopencv_objdetect \
    -lopencv_photo \
    -lopencv_shape \
    -lopencv_stitching \
    -lopencv_superres \
    -lopencv_videoio \
    -lopencv_video \
    -lopencv_videostab

LIBS += -L/home/qq/document/usr/local/boost/lib/ \
    -lboost_filesystem \
    -lboost_system

LIBS += -L/home/qq/document/usr/local/tbb/lib/ \
    -ltbb \
    -ltbbmalloc

LIBS += /home/qq/document/usr/local/cblas/lib/cblas_LINUX.a
LIBS += /home/qq/document/usr/local/cblas/lib/libblas.a

LIBS += -L/etc/alternatives \
    -llapack
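With this .pro file in place, the project can also be built from the command line instead of from inside Qt Creator (a sketch, assuming the .pro file is saved as Openface.pro to match the TARGET above):

```shell
qmake Openface.pro
make
./Openface   # run from a directory that contains the model folders (see below)
```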


Issues to be aware of

1. C++ code errors in Qt Creator

For this issue, add the following to the .pro configuration file:
CONFIG += c++11

2. The model folders must be placed in the same directory as the executable

That is, the model, classifiers, and AU_predictors folders from OpenFace.
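A quick existence check at startup gives a clearer error than a crash deep inside model loading when these folders are missing. A minimal sketch without boost (file_exists is my own helper, and "model/main_clnf_general.txt" is the default model path I assume FaceModelParameters uses; adjust to your setup):

```cpp
#include <fstream>
#include <string>

// Cheap stand-in for boost::filesystem::exists when only regular files
// need checking: true if 'path' can be opened for reading.
bool file_exists(const std::string& path)
{
    std::ifstream f(path.c_str());
    return f.good();
}

// Before constructing the CLNF tracker, e.g.:
//   if (!file_exists("model/main_clnf_general.txt"))
//       std::cerr << "Place the model folder next to the executable\n";
```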


Run results

The program runs successfully.
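Since the sample parses its input through LandmarkDetector::get_video_input_output_params, the usual OpenFace command-line flags should work (a sketch; the exact flag names depend on your OpenFace version):

```shell
./Openface               # default: track from webcam 0
./Openface -f video.avi  # track a video file instead
```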


Permanent address of this article: http://www.linuxdiyf.com/linux/21068.html
