The first step in both monocular distance measurement and binocular (stereo) ranging is camera calibration.
MATLAB's calibration toolbox is recommended; the calibration procedure is described in these two blogs: http://blog.csdn.net/dreamharding/article/details/53700166
and https://blog.csdn.net/heroacool/article/details/51023921
When adding the toolbox to the MATLAB path, be careful to select the second option, Add with Subfolders; otherwise the toolbox may fail to open with the calib command.
The second issue to be aware of when calibrating is the image size. Because the external USB camera is limited by USB bandwidth, it transmits frames at 640*480, so the images used for calibration must also be scaled to 640*480. Otherwise the calibrated intrinsic parameters will come out several times larger than the ones that actually apply to the bandwidth-reduced images.
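As a minimal sketch (the camera index 0 and the use of cv::VideoCapture are my assumptions, not details from the original setup), the capture size can be pinned to 640*480 so the runtime frames match the calibration images:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);                      // hypothetical device index
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 640);       // request the same 640*480 size
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 480);      // that the calibration images used

    cv::Mat frame;
    cap >> frame;
    if (!frame.empty())
        std::cout << frame.cols << "x" << frame.rows << std::endl;  // expect 640x480
    return 0;
}
```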
After calibration gives us the intrinsic matrix and the distortion vector, we can start monocular distance measurement with OpenCV.
Monocular ranging uses the solvePnP function. The first parameter is the input matrix of 3D point coordinates (in cm). Note that these 3D points cannot be arbitrary (PS: some posts on the internet just write the 3D points at random); each 3D point must correspond one-to-one with a point in the second parameter, imgPoints (the input vector of 2D image points).
The third parameter is the camera's intrinsic matrix (3*3) obtained in the previous step.
The fourth parameter is the camera's distortion coefficient vector (1*5).
The fifth parameter is the output rotation vector of the camera.
The sixth parameter is the output translation vector of the camera.
From the rotation vector and the translation vector we can obtain the rotation and translation of the camera coordinate system relative to the world coordinate system.
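A minimal sketch of this call; the object points, image points, and intrinsic values below are placeholders for illustration, not values from my calibration:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    // 3D points in the world coordinate system (cm); here four corners of a
    // 12cm x 8cm rectangle lying on the Z = 0 plane -- placeholder values
    std::vector<cv::Point3f> objPoints = {
        {0, 0, 0}, {12, 0, 0}, {12, 8, 0}, {0, 8, 0}};
    // The matching 2D pixel points, listed in exactly the same order
    std::vector<cv::Point2f> imgPoints = {
        {300, 220}, {400, 222}, {398, 290}, {302, 288}};

    // Intrinsic matrix (3*3) and distortion vector (1*5) from calibration -- placeholders
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        520.0,   0.0, 320.0,
          0.0, 520.0, 240.0,
          0.0,   0.0,   1.0);
    cv::Mat distCoeffs = (cv::Mat_<double>(1, 5) << 0, 0, 0, 0, 0);

    cv::Mat rvec, tvec;
    cv::solvePnP(objPoints, imgPoints, cameraMatrix, distCoeffs, rvec, tvec);

    cv::Mat R;
    cv::Rodrigues(rvec, R);   // rotation vector -> 3*3 rotation matrix
    std::cout << "R = " << R << "\ntvec = " << tvec << std::endl;
    return 0;
}
```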
The principles and formulas relating the world coordinate system and the camera coordinate system are explained at
http://blog.csdn.net/chenmohousuiyue/article/details/78157509?locationnum=9&fps=1
Here's a little bit of my understanding.
From the pinhole imaging principle and similar triangles, x*Z = f*X, where x is the image coordinate in pixels, f is the focal length of the corresponding axis, X is the actual length of the object along that axis, and Z is the distance from the object to the camera.
Then, taking into account the principal point's offset from the upper-left corner (0,0) of the image plane, the pixel coordinate in the x direction can be obtained, and the y direction works the same way.
Then, using a bit of linear algebra, the equations above can be stacked into matrix form.
I did a reverse derivation here to verify that the formula holds.
You can see that the first matrix is the camera's intrinsic matrix.
The equation above gives the relationship between the camera coordinate system and the image (pixel) coordinate system.
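Written out in the usual notation (f_x, f_y are the focal lengths in pixels and (c_x, c_y) the principal point; these symbols are standard shorthand, not labels taken from my derivation image):

```latex
% camera coordinates (X_c, Y_c, Z_c) -> pixel coordinates (u, v)
\[
u = f_x \frac{X_c}{Z_c} + c_x, \qquad v = f_y \frac{Y_c}{Z_c} + c_y
\]
% stacked into matrix form, the first matrix being the intrinsic matrix M
\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
  \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
\]
```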
Because the camera coordinate system and the world coordinate system differ by rotations about the X, Y, and Z axes (yaw, pitch, roll) plus a translation, going from the world coordinate system to the camera coordinate system requires multiplying by R (the rotation matrix) and T (the translation).
The relationship between the pixel coordinate system and the world coordinate system is then obtained by appending the [R|T] matrix to the pixel-to-camera relationship above.
That is: x = M*[R|t]*X, where x is the pixel coordinate, X is the world coordinate, M is the camera's intrinsic matrix, and R, t are the rotation and translation that convert world coordinates into camera coordinates.
In the equation x = M*[R|t]*X we know x, X, and M, so R and t can be solved for. This also shows why scaling the image to 640*480 during calibration matters, and why the pixel points must correspond one-to-one with the world coordinate points.
Finding the R, t matrices is exactly what OpenCV's solvePnP does: given the pixel coordinates (x in the formula), the world coordinates (X in the formula), and the camera's intrinsic matrix and distortion vector (M in the formula), it returns rvec and tvec (the R, t of the formula).
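Putting the two pieces together in homogeneous coordinates (s is the depth scale Z_c; this is the standard form of the model rather than something copied from the original figures):

```latex
\[
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= M \, [\, R \mid t \,]
  \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix},
\qquad
M = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\]
```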
After obtaining R and t (the relationship between the camera and world coordinate systems) from solvePnP, we still need to work out the Euler angles and the distance between the camera coordinate system and the world coordinate system from the R, t matrices.
The derivation of the rotation matrix can be seen in this figure: if a point P has coordinates (x, y) in the original coordinate system, then after rotating the coordinate system by an angle θ its coordinates become (x', y').
Working through the geometry gives x' = x*cosθ - y*sinθ and y' = x*sinθ + y*cosθ.
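In matrix form this is the standard 2D rotation:

```latex
\[
\begin{bmatrix} x' \\ y' \end{bmatrix}
= \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
  \begin{bmatrix} x \\ y \end{bmatrix}
\]
```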
The rotation matrix about a single axis is:
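In the usual right-handed convention the three single-axis matrices are (written out from the standard definitions, not transcribed from the figure):

```latex
\[
R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix},
\quad
R_y(\theta) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix},
\quad
R_z(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
```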
The full rotation matrix R[3][3] is obtained by composing them: R[3][3] = R(roll)*R(pitch)*R(yaw).
Then the formula for solving the angle is
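One common decomposition (it assumes a Z-Y-X rotation order, so treat it as one possible convention rather than the only correct formula) is:

```latex
\[
\theta_x = \operatorname{atan2}(R_{32},\, R_{33}), \qquad
\theta_y = \operatorname{atan2}\!\big({-R_{31}},\, \sqrt{R_{32}^2 + R_{33}^2}\big), \qquad
\theta_z = \operatorname{atan2}(R_{21},\, R_{11})
\]
```

Here R_{ij} is the element of RM in row i, column j.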
where RM is the 3*3 rotation matrix obtained from solvePnP (by converting rvec with Rodrigues). As for the distance conversion formula, I don't know how to derive it myself, so I went straight to code I found online.
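I can't say exactly which snippet that was, but a common approach looks like the sketch below: convert rvec to RM with Rodrigues, read off the angles with the atan2 formulas above, and take the camera position in the world frame as -R^T * t, whose norm is the distance to the world origin (this is my reconstruction, not the exact code used here):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

// Euler angles (Z-Y-X convention) and camera-to-origin distance from solvePnP output.
void poseFromPnP(const cv::Mat& rvec, const cv::Mat& tvec) {
    cv::Mat R;
    cv::Rodrigues(rvec, R);                               // rvec -> 3*3 rotation matrix RM

    double thetaX = std::atan2(R.at<double>(2, 1), R.at<double>(2, 2));
    double thetaY = std::atan2(-R.at<double>(2, 0),
                               std::sqrt(R.at<double>(2, 1) * R.at<double>(2, 1) +
                                         R.at<double>(2, 2) * R.at<double>(2, 2)));
    double thetaZ = std::atan2(R.at<double>(1, 0), R.at<double>(0, 0));

    cv::Mat camPos = -R.t() * tvec;                       // camera position in the world frame
    double distance = cv::norm(camPos);                   // distance to the world origin (cm)

    std::cout << "angles (rad): " << thetaX << " " << thetaY << " " << thetaZ
              << "\ndistance (cm): " << distance << std::endl;
}

int main() {
    // Placeholder rvec/tvec just to exercise the function; in practice they come from solvePnP
    cv::Mat rvec = (cv::Mat_<double>(3, 1) << 0.10, -0.20, 0.05);
    cv::Mat tvec = (cv::Mat_<double>(3, 1) << 5.0, -3.0, 60.0);
    poseFromPnP(rvec, tvec);
    return 0;
}
```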
The final result is a ranging accuracy of around 1cm