About G2O Learning and use



Original reference: A brief introduction to g2o's common modules, http://www.cnblogs.com/gaoxiang12/p/5304272.html



g2o is a C++ project built with CMake. Its GitHub address is https://github.com/RainerKuemmerle/g2o
It is a heavily templated C++ project, and many of its matrix data structures come from Eigen.



The g2o project contains several folders. Setting aside .gitignore and the like, the main ones are:



EXTERNAL: third-party libraries (ceres, csparse, freeglut), which can be selectively compiled;
cmake_modules: CMake files used to locate libraries. When we use g2o in our own projects, we also use files from here, such as FindG2O.cmake;
doc: documentation, including g2o's own manual (a rather hard-to-read document);
g2o: the most important source code is here;
script: scripts for building on other platforms such as Android; not needed under Ubuntu.
To sum up, the most important item is the g2o source folder, so let us take a closer look at it.



Let us also introduce the contents of each folder inside it:



apps: some applications; g2o_viewer lives here, along with other, less commonly used command-line tools.
core: core components, very important. The basic vertex, edge, and graph structures, the algorithms, and the solver interfaces are defined here.
examples: some examples you can refer to when writing your own code, though they carry few comments.
solvers: implementations of the solvers, mainly wrapping CHOLMOD and CSparse. Choose one when you use g2o.
stuff: utility functions, dispensable for the user.
types: definitions of various vertices and edges.



The files in the types folder are important. When building a graph optimization problem, users have to check whether g2o already provides definitions for the vertices and edges they need. If so, use the ones g2o provides; if not, define your own. Choosing the right types is the main concern of a g2o user. As for the contents of core, we should strive to become familiar enough with them that we can respond properly when errors occur.



What is the most basic class structure of g2o? How do we express a graph and select a solver? Let us bring out a diagram:



The figure follows Dr. Gao's blog.
SparseOptimizer is the thing we ultimately maintain. It is an OptimizableGraph, which in turn is a HyperGraph.
A SparseOptimizer contains many vertices (all inheriting from BaseVertex) and many edges (inheriting from BaseUnaryEdge, BaseBinaryEdge, or BaseMultiEdge). These base vertices and edges are abstract base classes; the actual vertices and edges are their derived classes. We add vertices and edges to the graph with SparseOptimizer::addVertex and SparseOptimizer::addEdge, and finally call SparseOptimizer::optimize to complete the optimization.



Before optimizing, you need to specify a solver and an iterative algorithm.
As the lower half of the figure shows, a SparseOptimizer has an OptimizationAlgorithm, which is one of Gauss-Newton, Levenberg-Marquardt, or Powell's dogleg (we usually use GN or LM). This optimization algorithm in turn holds a Solver, which consists of two parts: a SparseBlockMatrix, used for computing the sparse Jacobian and Hessian, and a linear equation solver. The latter, chosen from PCG, CSparse, or CHOLMOD, computes the most critical step of each iteration: solving H Δx = −b.



To sum up, choosing an optimization method in g2o takes three steps:



1) Choose a linear equation solver, from PCG, CSparse, or CHOLMOD; these are the objects defined in the g2o/solvers folder.
2) Choose a BlockSolver.
3) Choose an iteration strategy, from GN, LM, or Dogleg.



Dr. Gao's blog leans toward engineering applications. In practice, most of us want to learn things quickly, and this style suits newcomers well: research careers are short and there are too many gaps to fill, and without posts like his we would rarely find the time to sit down and chew through the mathematical formulas. So we should gradually develop the ability to learn quickly. With the overview above in hand, let us go through the code from Dr. Gao's GitHub and then try it ourselves.
Address: git clone https://github.com/gaoxiang12/g2o_ba_example — Example: bundle adjustment with two views






The goal is to estimate the camera motion between these two images. (This example uses only the two views and the camera intrinsics; no depth map.)



The derivation here is based on the feature-point method.
First, feature detection and matching give us N pairs of matched points between the two images, recorded as (z_1^j, z_2^j), j = 1, ..., N. Together with the camera intrinsic matrix C, we want to solve for the camera motion R, t between the two images.
Note: the superscript of z denotes the feature point, and the subscript denotes which image it is observed in. The value of each z is the pixel coordinate of the corresponding point, a two-dimensional vector: z = [u, v]^T.






Assume that the pose of camera 1 is the identity. For any feature point, let its true coordinate in three-dimensional space be X^j; in the two cameras it is observed as z_1^j and z_2^j. According to the projection relationship (with z written in homogeneous pixel coordinates [u, v, 1]^T), we have:

λ_1 z_1^j = C X^j,    λ_2 z_2^j = C (R X^j + t)

Here C is the camera intrinsic matrix, and λ_1, λ_2 are the depths of the two pixels, i.e. the z-coordinate of X^j in each camera's coordinate frame. Although we do not know the actual X^j, its relationship to z can be written out.



The traditional solution to this problem is to eliminate X^j from the two equations, obtain a relation among z, R, and t, and then optimize it. This path leads to epipolar geometry and the essential matrix; in theory, at least eight matched points are needed to compute R, t. Here, since we are introducing graph optimization, we take another route:
in graph optimization, we construct an optimization problem and solve it by representing it as a graph.



Due to the various noises, the projection relationship does not hold exactly, so we instead minimize the squared norm of the projection errors. For each feature point we can write one squared-error term; summing them up gives the whole optimization problem:

min over X, R, t of Σ_j ( ‖ z_1^j − (1/λ_1) C X^j ‖² + ‖ z_2^j − (1/λ_2) C (R X^j + t) ‖² )






This is called minimizing the reprojection error. Unfortunately, it is a nonlinear, non-convex optimization problem, which means we cannot always solve it, nor are we guaranteed to find the global optimum. In practice, we are adjusting each X^j (and the camera motion) to make them more consistent with every observation z^j, i.e. to make each error term as small as possible. For this reason, it is also called bundle adjustment.

BA is easily described in the form of graph optimization. In this two-view BA there are two kinds of nodes:

Camera pose node: expresses the pose of one of the two cameras; it is an element of SE(3).
Feature point position node: an XYZ coordinate in space.

Correspondingly, the edges mainly represent the projection relation from a spatial point to pixel coordinates.
Implementation



Let's use g2o to implement this BA. The chosen nodes and edges are as follows:

Node 1: camera pose node, g2o::VertexSE3Expmap, from <g2o/types/sba/types_six_dof_expmap.h>;
Node 2: feature point position node, g2o::VertexSBAPointXYZ, from the same header;
Edge: projection edge, g2o::EdgeProjectXYZ2UV, connecting the two kinds of nodes above.

The full program:


/**
 * BA Example
 * Author: Xiang Gao
 * Date: 2016.3
 * Email: gaoxiang12@mails.tsinghua.edu.cn
 *
 * In this program, we read two images and perform feature matching. Then, based on the
 * matched features, we calculate the camera motion and the positions of the feature points.
 * This is a typical bundle adjustment; we use g2o for the optimization.
 */

// for std
#include <iostream>
// for opencv
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <boost/concept_check.hpp>
// for g2o
#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/robust_kernel.h>
#include <g2o/core/robust_kernel_impl.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/solvers/cholmod/linear_solver_cholmod.h>
#include <g2o/types/slam3d/se3quat.h>
#include <g2o/types/sba/types_six_dof_expmap.h>


using namespace std;

// Find corresponding points in two images, in the pixel coordinate system
// Input:  img1, img2 -- the two images
// Output: points1, points2 -- the corresponding 2D points
int findCorrespondingPoints( const cv::Mat& img1, const cv::Mat& img2, vector<cv::Point2f>& points1, vector<cv::Point2f>& points2 );

// camera intrinsics
double cx = 325.5;
double cy = 253.5;
double fx = 518.0;
double fy = 519.0;

int main( int argc, char** argv )
{
    // Call format: command [first image] [second image]
    if ( argc != 3 )
    {
        cout << "Usage: ba_example img1, img2" << endl;
        exit(1);
    }

    // read the images
    cv::Mat img1 = cv::imread( argv[1] );
    cv::Mat img2 = cv::imread( argv[2] );

    // find the corresponding points
    vector<cv::Point2f> pts1, pts2;
    if ( findCorrespondingPoints( img1, img2, pts1, pts2 ) == false )
    {
        cout << "Not enough matching points." << endl;
        return 0;
    }
    cout << "Found " << pts1.size() << " pairs of corresponding feature points." << endl;
    // construct the graph in g2o
    // construct the solver first
    g2o::SparseOptimizer optimizer;
    // use the linear equation solver from CHOLMOD
    g2o::BlockSolver_6_3::LinearSolverType* linearSolver = new g2o::LinearSolverCholmod<g2o::BlockSolver_6_3::PoseMatrixType>();
    // 6x3 parameter blocks (a pose is 6-dimensional, a landmark 3-dimensional)
    g2o::BlockSolver_6_3* block_solver = new g2o::BlockSolver_6_3( linearSolver );
    // use Levenberg-Marquardt descent
    g2o::OptimizationAlgorithmLevenberg* algorithm = new g2o::OptimizationAlgorithmLevenberg( block_solver );

    optimizer.setAlgorithm( algorithm );
    optimizer.setVerbose( false );

    // add the nodes
    // two pose nodes
    for ( int i = 0; i < 2; i++ )
    {
        g2o::VertexSE3Expmap* v = new g2o::VertexSE3Expmap();
        v->setId( i );
        if ( i == 0 )
            v->setFixed( true ); // the first pose is fixed at the origin
        // the default value is the identity pose, since we know no prior information
        v->setEstimate( g2o::SE3Quat() );
        optimizer.addVertex( v );
    }
    // nodes for the many feature points
    // expressed in the frame of the first camera
    for ( size_t i = 0; i < pts1.size(); i++ )
    {
        g2o::VertexSBAPointXYZ* v = new g2o::VertexSBAPointXYZ();
        v->setId( 2 + i );
        // since the depth is unknown, we can only set it to 1
        double z = 1;
        double x = ( pts1[i].x - cx ) * z / fx;
        double y = ( pts1[i].y - cy ) * z / fy;
        v->setMarginalized( true );
        v->setEstimate( Eigen::Vector3d( x, y, z ) );
        optimizer.addVertex( v );
    }

    // prepare the camera parameters
    g2o::CameraParameters* camera = new g2o::CameraParameters( fx, Eigen::Vector2d( cx, cy ), 0 );
    camera->setId( 0 );
    optimizer.addParameter( camera );

    // prepare the edges
    // first frame
    vector<g2o::EdgeProjectXYZ2UV*> edges;
    for ( size_t i = 0; i < pts1.size(); i++ )
    {
        g2o::EdgeProjectXYZ2UV* edge = new g2o::EdgeProjectXYZ2UV();
        edge->setVertex( 0, dynamic_cast<g2o::VertexSBAPointXYZ*>( optimizer.vertex( i + 2 ) ) );
        edge->setVertex( 1, dynamic_cast<g2o::VertexSE3Expmap*>( optimizer.vertex( 0 ) ) );
        edge->setMeasurement( Eigen::Vector2d( pts1[i].x, pts1[i].y ) );
        edge->setInformation( Eigen::Matrix2d::Identity() );
        edge->setParameterId( 0, 0 );
        // robust kernel
        edge->setRobustKernel( new g2o::RobustKernelHuber() );
        optimizer.addEdge( edge );
        edges.push_back( edge );
    }
    // second frame
    for ( size_t i = 0; i < pts2.size(); i++ )
    {
        g2o::EdgeProjectXYZ2UV* edge = new g2o::EdgeProjectXYZ2UV();
        edge->setVertex( 0, dynamic_cast<g2o::VertexSBAPointXYZ*>( optimizer.vertex( i + 2 ) ) );
        edge->setVertex( 1, dynamic_cast<g2o::VertexSE3Expmap*>( optimizer.vertex( 1 ) ) );
        edge->setMeasurement( Eigen::Vector2d( pts2[i].x, pts2[i].y ) );
        edge->setInformation( Eigen::Matrix2d::Identity() );
        edge->setParameterId( 0, 0 );
        // robust kernel
        edge->setRobustKernel( new g2o::RobustKernelHuber() );
        optimizer.addEdge( edge );
        edges.push_back( edge );
    }

    cout << "Start optimization" << endl;
    optimizer.setVerbose( true );
    optimizer.initializeOptimization();
    optimizer.optimize( 10 );
    cout << "Optimization completed" << endl;

    // we are mostly interested in the transformation matrix between the two frames
    g2o::VertexSE3Expmap* v = dynamic_cast<g2o::VertexSE3Expmap*>( optimizer.vertex( 1 ) );
    Eigen::Isometry3d pose = v->estimate();
    cout << "Pose=" << endl << pose.matrix() << endl;

    // and the positions of all feature points
    for ( size_t i = 0; i < pts1.size(); i++ )
    {
        g2o::VertexSBAPointXYZ* v = dynamic_cast<g2o::VertexSBAPointXYZ*>( optimizer.vertex( i + 2 ) );
        cout << "vertex id " << i + 2 << ", pos = ";
        Eigen::Vector3d pos = v->estimate();
        cout << pos( 0 ) << "," << pos( 1 ) << "," << pos( 2 ) << endl;
    }

    // estimate the number of inliers
    int inliers = 0;
    for ( auto e : edges )
    {
        e->computeError();
        // chi2 is error^T * Omega * error; if this number is large, the edge disagrees with the others
        if ( e->chi2() > 1 )
        {
            cout << "error = " << e->chi2() << endl;
        }
        else
        {
            inliers++;
        }
    }

    cout << "inliers in total points: " << inliers << "/" << pts1.size() + pts2.size() << endl;
    optimizer.save( "ba.g2o" );
    return 0;
}


int findCorrespondingPoints( const cv::Mat& img1, const cv::Mat& img2, vector<cv::Point2f>& points1, vector<cv::Point2f>& points2 )
{
    cv::ORB orb; // OpenCV 2.x feature API
    vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desp1, desp2;
    orb( img1, cv::Mat(), kp1, desp1 );
    orb( img2, cv::Mat(), kp2, desp2 );
    cout << "Found " << kp1.size() << " and " << kp2.size() << " feature points." << endl;

    cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create( "BruteForce-Hamming" );

    double knn_match_ratio = 0.8;
    vector< vector<cv::DMatch> > matches_knn;
    matcher->knnMatch( desp1, desp2, matches_knn, 2 );
    vector<cv::DMatch> matches;
    for ( size_t i = 0; i < matches_knn.size(); i++ )
    {
        // Lowe's ratio test: keep a match only if it is clearly better than the second best
        if ( matches_knn[i][0].distance < knn_match_ratio * matches_knn[i][1].distance )
            matches.push_back( matches_knn[i][0] );
    }

    if ( matches.size() <= 20 ) // too few matches
        return false;

    for ( auto m : matches )
    {
        points1.push_back( kp1[m.queryIdx].pt );
        points2.push_back( kp2[m.trainIdx].pt );
    }

    return true;
}

Compile with CMake, then pass the two images as command-line arguments.
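The build script is not shown here; a minimal CMakeLists.txt might look like the following sketch. The module path, target name, and the exact list of g2o libraries are assumptions — adjust them to your installation (FindG2O.cmake ships in g2o's cmake_modules folder, as mentioned above):

```cmake
cmake_minimum_required( VERSION 2.8 )
project( ba_example )

set( CMAKE_BUILD_TYPE Release )
# assumed location of FindG2O.cmake etc.; copy them from g2o/cmake_modules
list( APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake_modules )

find_package( OpenCV REQUIRED )
find_package( G2O REQUIRED )
find_package( Cholmod REQUIRED )
include_directories( ${G2O_INCLUDE_DIRS} ${CHOLMOD_INCLUDE_DIR} "/usr/include/eigen3" )

add_executable( ba_example ba_example.cpp )
target_link_libraries( ba_example
    ${OpenCV_LIBS}
    g2o_core g2o_stuff g2o_types_sba g2o_types_slam3d g2o_solver_cholmod
    ${CHOLMOD_LIBRARIES} )
```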



One more point about monocular BA: scale ambiguity. Because of the λ in the projection formula, we can only recover a relative depth; we cannot know exactly how far the feature points are from us. If we double the coordinates of all the feature points and also double the translation, the projections come out exactly the same.

