Keywords: camera pose estimation, OpenCV solvePnP, LabVIEW, three-dimensional display
Article Type: Application Showcase + Demo
@Author: vshawn ([email protected])
@Date: 2016-12-12
@Lab: [Email protected]
Objective
This article presents a real-time camera pose estimation routine. The underlying principle was covered in earlier articles, and the class built in "Camera Pose Estimation 1_1: A secondary wrapper for OpenCV solvePnP and a performance test" is used here to keep the processing simple. The routine tracks red feature points in HSV space, uses the tracked points to solve the PnP problem, and obtains the camera pose (the camera's world coordinates and its three rotation angles). Finally, LabVIEW's three-dimensional picture control is used to reconstruct the whole scene in 3D.
Processing flow
- First, the industrial camera is initialized, real-time images are captured, and each frame is displayed with imshow.
- In the live camera window, click P1, P2, P3, and P4 (described in the earlier article "Camera Pose Estimation 1: Estimating camera pose from four coplanar points"). The points must be clicked in order, otherwise the correct pose cannot be obtained. Each point is tracked as soon as it is selected.
- Once four feature points are being tracked, the program starts calling the PNPSolver class to estimate the camera pose.
- The resulting pose information is written to TXT files in a directory on the D drive (this is why writing the pose to file was added in the previous article).
- The LabVIEW program, once running, continuously reads the TXT files and applies the pose data to the 3D display, drawing the three-dimensional scene correctly. (Communicating between the two processes through TXT files is very inefficient, but I was lazy and did not write anything better.)
Expressed as a flowchart, the process is very simple: the C++ program computes the pose, and the LabVIEW program handles the display.
(For readers who are not familiar with LabVIEW: the display part can also be implemented with OpenGL.)
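The skeleton below is a minimal sketch of this flow, not the demo's actual code: the window name, the cv::VideoCapture source standing in for the industrial camera driver, and the loop structure are all assumptions. Tracking is the function given in the next section.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

cv::Point2f Tracking(cv::Mat image, const cv::Point2f lastCenter);  // defined in the next section

std::vector<cv::Point2f> points;  // clicked, then tracked, feature points

// collect the four clicks, in the same order as the world coordinates
void OnMouse(int event, int x, int y, int, void *)
{
    if (event == CV_EVENT_LBUTTONDOWN && points.size() < 4)
        points.push_back(cv::Point2f((float)x, (float)y));
}

int main()
{
    cv::VideoCapture cap(0);  // stand-in for the industrial camera driver
    cv::namedWindow("camera");
    cv::setMouseCallback("camera", OnMouse);
    cv::Mat frame;
    while (cap.read(frame))
    {
        // track every selected point and mark it with a green circle
        for (size_t i = 0; i < points.size(); i++)
        {
            points[i] = Tracking(frame, points[i]);
            cv::circle(frame, points[i], 10, cv::Scalar(0, 255, 0), 2);
        }
        if (points.size() == 4)
        {
            // all four points tracked: solve PnP and write the pose
            // to TXT for LabVIEW (see the following sections)
        }
        cv::imshow("camera", frame);
        if (cv::waitKey(1) == 27)  // Esc quits
            break;
    }
    return 0;
}
```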
Feature point tracking method
To keep things simple, feature point tracking here uses the most basic method: tracking by color. The marker I made looks like this:
Each feature point is a red dot drawn with a red marker pen.
In actual operation, the user first clicks the feature points in the display window in order (the order in which their world coordinates are entered in the program) to obtain each point's initial position. An ROI is then selected around that position, the BGR image is converted to an HSV image for color segmentation, and the H channel is thresholded: pixels in the red range are set to 255, producing a binary image. The connected components in the binary image are found, the centroid g of each component is computed, and the coordinates of g are returned as the tracking result, which also serves as the starting point for the next round of tracking.
The green circles in the figure are drawn at the tracked centroids g.
The function is as follows (the thadd constant was garbled in the source text; 10 is a plausible reconstruction):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Track a feature point: find the red region near the input point and use
// its centroid as the point's new position.
// Inputs: 1. the current image; 2. the point's position from the previous round.
// Returns: this round's tracking result.
cv::Point2f Tracking(cv::Mat image, const cv::Point2f lastCenter)
{
	//cv::GaussianBlur(image, image, cv::Size(11, 11), 0);

	/*********** initialize the ROI **********/
	const int r = 100;   // detection radius
	const int r2 = r * 2;
	int startX = lastCenter.x - r;
	int startY = lastCenter.y - r;
	if (startX < 0) startX = 0;
	if (startY < 0) startY = 0;
	int width = r2;
	int height = r2;
	if (startX + width >= image.size().width)
		startX = image.size().width - 1 - width;
	if (startY + height >= image.size().height)
		startY = image.size().height - 1 - height;
	cv::Mat roi = image(cv::Rect(startX, startY, width, height));
	cv::Mat roiHSV;
	cv::cvtColor(roi, roiHSV, CV_BGR2HSV);  // convert the BGR image to HSV
	std::vector<cv::Mat> hsv;
	cv::split(roiHSV, hsv);                 // separate the three HSV channels
	cv::Mat h = hsv[0];
	cv::Mat s = hsv[1];
	cv::Mat v = hsv[2];
	cv::Mat roiBinary = cv::Mat::zeros(roi.size(), CV_8U);  // binary image; 255 marks pixels judged red

	/************* judge the color *************/
	const double ts = 0.5 * 255;          // S threshold; pixels below it are skipped
	const double tv = 0.1 * 255;          // V threshold; pixels below it are skipped
	const double th = 0 * 180 / 360;      // H center
	const double thadd = 10 * 180 / 360;  // H range: values within th±thadd count as red

	// threshold the HSV image into the binary image
	for (int i = 0; i < roi.size().height; i++)
	{
		uchar *ptrH = h.ptr<uchar>(i);
		uchar *ptrS = s.ptr<uchar>(i);
		uchar *ptrV = v.ptr<uchar>(i);
		uchar *ptrBin = roiBinary.ptr<uchar>(i);
		for (int j = 0; j < roi.size().width; j++)
		{
			if (ptrS[j] < ts || ptrV[j] < tv) continue;
			if (th + thadd > 180)
				if (ptrH[j] < th - thadd && ptrH[j] > th + thadd - 180) continue;
			if (th - thadd < 0)
				if (ptrH[j] < th - thadd + 180 && ptrH[j] > th + thadd) continue;
			ptrBin[j] = 255;  // red pixel found: mark 255 at the corresponding position
		}
	}

	/***************** find the centroids of the connected components ****************/
	std::vector<std::vector<cv::Point>> contours;
	cv::findContours(roiBinary.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
	// there may be several connected components; compute the centroid of each
	std::vector<cv::Point2f> gravityCenters;  // set of centroids
	for (int i = 0; i < contours.size(); i++)
	{
		if (contours[i].size() < 10) continue;  // the connected component is too small
		int xsum = 0;
		int ysum = 0;
		for (int j = 0; j < contours[i].size(); j++)
		{
			xsum += contours[i][j].x;
			ysum += contours[i][j].y;
		}
		double gpx = (double)xsum / contours[i].size();
		double gpy = (double)ysum / contours[i].size();
		gravityCenters.push_back(cv::Point2f(gpx + startX, gpy + startY));
	}

	/********************* return the best candidate ******************/
	// find the centroid closest to the previous position
	cv::Point2f ret = lastCenter;
	double dist = 1000000000;
	double distX = 1000000000;
	double distY = 1000000000;
	for (int i = 0; i < gravityCenters.size(); i++)
	{
		if (distX > abs(lastCenter.x - gravityCenters[i].x)
			&& distY > abs(lastCenter.y - gravityCenters[i].y))
		{
			double newDist = sqrt((lastCenter.x - gravityCenters[i].x) * (lastCenter.x - gravityCenters[i].x)
				+ (lastCenter.y - gravityCenters[i].y) * (lastCenter.y - gravityCenters[i].y));
			if (dist > newDist)
			{
				distX = abs(lastCenter.x - gravityCenters[i].x);
				distY = abs(lastCenter.y - gravityCenters[i].y);
				dist = newDist;
				ret = gravityCenters[i];
			}
		}
	}
	return ret;
}
```
Position and pose estimation
When the user has clicked all four feature points, the program begins running the pose estimation part. The details of the pose computation are not repeated here; please refer to the previous posts:
"Camera Pose Estimation 1: Estimating camera pose from four coplanar feature points"
"Camera Pose Estimation 1_1: A secondary wrapper for OpenCV solvePnP and a performance test"
Three-dimensional display
After pose estimation completes, two TXT files are written to record the camera's current pose.
The LabVIEW program reads the information in these two TXT files and then renders the three-dimensional scene. The LabVIEW programming process is hard to describe in text; the idea is to first build the world coordinate system, then create a 3D object in it to serve as the three-dimensional model of the camera. According to the information in the TXT files, the model's position (its three-dimensional coordinates) and its three rotation angles are then set, completing the three-dimensional drawing.
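On the C++ side, the hand-off could look like the sketch below; the file names and the space-separated format are assumptions for illustration (the demo writes to fixed paths on the D drive):

```cpp
#include <opencv2/opencv.hpp>
#include <fstream>

// Sketch of writing the pose for LabVIEW to poll. The file names and the
// format are assumptions, not the demo's actual paths.
void WritePose(const cv::Mat &camPos, double thetaX, double thetaY, double thetaZ)
{
    std::ofstream pos("D:\\pose_position.txt");  // hypothetical path
    pos << camPos.at<double>(0) << " "
        << camPos.at<double>(1) << " "
        << camPos.at<double>(2) << std::endl;

    std::ofstream ang("D:\\pose_angles.txt");    // hypothetical path
    ang << thetaX << " " << thetaY << " " << thetaZ << std::endl;
}
```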
The above drawing process can be tried by running the VI in the project folder:
~\using LabVIEW to reconstruct camera position\World - manually adjust parameters to set the camera pose.vi
to set the parameters by hand and experience the drawing process.
Readers interested in this part can refer to the NI documentation:
http://zone.ni.com/reference/zhs-XX/help/371361J-0118/lvhowto/create_3d_scene/
Effect Demo
This effect was also demonstrated before: the feature points are tracked in real time, and the camera pose is reconstructed on the right.
Program Download
Finally, the routines: the C++ part was developed with VS2013 and uses OpenCV 2.4.x; the three-dimensional reconstruction part was developed with LabVIEW 2012. For configuring OpenCV, see my post "OpenCV2 Primer Series (1): OpenCV 2.4.9 installation and testing"; LabVIEW only needs a direct installation.
After downloading the routine, modify the image acquisition part for your own camera driver, then update the camera intrinsic and distortion parameters, and it is ready to use.
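The parameters to update are the intrinsic matrix and the distortion coefficients in the C++ code. The snippet below only illustrates their standard OpenCV layout with placeholder values; the variable names are not necessarily those used in the routine:

```cpp
#include <opencv2/opencv.hpp>

// Placeholder calibration values: replace with your own camera's results.
// K is the 3x3 intrinsic matrix; distCoeffs holds (k1, k2, p1, p2, k3)
// in OpenCV's order.
cv::Mat K = (cv::Mat_<double>(3, 3) <<
    1200.0,    0.0, 640.0,
       0.0, 1200.0, 480.0,
       0.0,    0.0,   1.0);
cv::Mat distCoeffs = (cv::Mat_<double>(5, 1) << -0.1, 0.05, 0.0, 0.0, 0.0);
```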
Download links:
C++ program: GitHub
LabVIEW program: GitHub
Camera Pose Estimation 2: [Application] Real-time pose estimation and three-dimensional reconstruction of the camera pose