A detailed guide to using the C++ matrix library Eigen
The C++ matrix library Eigen
Recently I needed to do some numerical computation in C++. Previously I had handled matrix operations through Matlab hybrid programming, which was quite troublesome, until I found the Eigen library, which turned out to be very easy to use. Eigen is a linear algebra library based on C++ templates; to use it you simply download it and put the headers under your project directory. Its interface is also clear, stable, and efficient. The only problem was that, having always worked in Matlab, I was not familiar with Eigen's API, so a side-by-side mapping between Eigen and Matlab would be the ideal reference. I finally found one, and it is reproduced below.
Eigen matrix definition
#include <Eigen/Dense>

Matrix<double, 3, 3> A;               // Fixed rows and cols. Same as Matrix3d.
Matrix<double, 3, Dynamic> B;         // Fixed rows, dynamic cols.
Matrix<double, Dynamic, Dynamic> C;   // Full dynamic. Same as MatrixXd.
Matrix<double, 3, 3, RowMajor> E;     // Row major; default is column-major.
Matrix3f P, Q, R;                     // 3x3 float matrix.
Vector3f x, y, z;                     // 3x1 float matrix.
RowVector3f a, b, c;                  // 1x3 float matrix.
VectorXd v;                           // Dynamic column vector of doubles
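To see these declarations in context, here is a minimal sketch of a complete program (variable names and sizes are illustrative; it only assumes the Eigen headers are on the include path):

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    Matrix<double, 3, Dynamic> B(3, 5);   // 3 fixed rows, 5 dynamic columns, uninitialized
    Matrix3f P = Matrix3f::Zero();        // fixed-size 3x3 float matrix, all zeros
    VectorXd v(4);                        // dynamic column vector with 4 entries
    v << 1, 2, 3, 4;                      // comma initializer
    std::cout << "v =\n" << v << std::endl;
    return 0;
}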
Basic usage of Eigen
// Basic usage
// Eigen           // Matlab           // comments
x.size()           // length(x)        // vector size
C.rows()           // size(C,1)        // number of rows
C.cols()           // size(C,2)        // number of columns
x(i)               // x(i+1)           // Matlab is 1-based
C(i, j)            // C(i+1,j+1)       //

A.resize(4, 4);    // Runtime error if assertions are on.
B.resize(4, 9);    // Runtime error if assertions are on.
A.resize(3, 3);    // Ok; size didn't change.
B.resize(3, 9);    // Ok; only dynamic cols changed.

A << 1, 2, 3,      // Initialize A. The elements can also be
     4, 5, 6,      // matrices, which are stacked along cols
     7, 8, 9;      // and then the rows are stacked.
B << A, A, A;      // B is three horizontally stacked A's.
A.fill(10);        // Fill A with all 10's.
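A minimal sketch of the resize, comma-initializer, and fill calls above (matrix sizes are illustrative):

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXd A(3, 3);
    A << 1, 2, 3,
         4, 5, 6,
         7, 8, 9;                      // comma initializer fills row by row
    MatrixXd B(3, 9);
    B << A, A, A;                      // three horizontally stacked copies of A
    B.resize(3, 6);                    // OK: B is fully dynamic; old values are discarded
    B.fill(10);                        // every coefficient becomes 10
    std::cout << A.rows() << "x" << A.cols() << ", A(0,0) = " << A(0, 0) << std::endl;
    return 0;
}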
Eigen special matrix generation
// Eigen                              // Matlab
MatrixXd::Identity(rows,cols)         // eye(rows,cols)
C.setIdentity(rows,cols)              // C = eye(rows,cols)
MatrixXd::Zero(rows,cols)             // zeros(rows,cols)
C.setZero(rows,cols)                  // C = zeros(rows,cols)
MatrixXd::Ones(rows,cols)             // ones(rows,cols)
C.setOnes(rows,cols)                  // C = ones(rows,cols)
MatrixXd::Random(rows,cols)           // rand(rows,cols)*2-1    // MatrixXd::Random returns uniform random numbers in (-1, 1).
C.setRandom(rows,cols)                // C = rand(rows,cols)*2-1
VectorXd::LinSpaced(size,low,high)    // linspace(low,high,size)'
v.setLinSpaced(size,low,high)         // v = linspace(low,high,size)'
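A minimal sketch using a few of these constructors (sizes and ranges are illustrative):

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXd I = MatrixXd::Identity(3, 3);          // eye(3)
    MatrixXd Z = MatrixXd::Zero(2, 4);              // zeros(2,4)
    MatrixXd R = MatrixXd::Random(2, 2);            // uniform random numbers in (-1, 1)
    VectorXd v = VectorXd::LinSpaced(5, 0.0, 1.0);  // linspace(0,1,5)'
    std::cout << v.transpose() << "\n\n" << R << std::endl;
    return 0;
}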
Eigen matrix slicing and blocks
// Matrix slicing and blocks. All expressions listed here are read/write.
// Templated size versions are faster. Note that Matlab is 1-based (a size N
// vector is x(1)...x(N)).
// Eigen                            // Matlab
x.head(n)                           // x(1:n)
x.head<n>()                         // x(1:n)
x.tail(n)                           // x(end - n + 1: end)
x.tail<n>()                         // x(end - n + 1: end)
x.segment(i, n)                     // x(i+1 : i+n)
x.segment<n>(i)                     // x(i+1 : i+n)
P.block(i, j, rows, cols)           // P(i+1 : i+rows, j+1 : j+cols)
P.block<rows, cols>(i, j)           // P(i+1 : i+rows, j+1 : j+cols)
P.row(i)                            // P(i+1, :)
P.col(j)                            // P(:, j+1)
P.leftCols<cols>()                  // P(:, 1:cols)
P.leftCols(cols)                    // P(:, 1:cols)
P.middleCols<cols>(j)               // P(:, j+1:j+cols)
P.middleCols(j, cols)               // P(:, j+1:j+cols)
P.rightCols<cols>()                 // P(:, end-cols+1:end)
P.rightCols(cols)                   // P(:, end-cols+1:end)
P.topRows<rows>()                   // P(1:rows, :)
P.topRows(rows)                     // P(1:rows, :)
P.middleRows<rows>(i)               // P(i+1:i+rows, :)
P.middleRows(i, rows)               // P(i+1:i+rows, :)
P.bottomRows<rows>()                // P(end-rows+1:end, :)
P.bottomRows(rows)                  // P(end-rows+1:end, :)
P.topLeftCorner(rows, cols)         // P(1:rows, 1:cols)
P.topRightCorner(rows, cols)        // P(1:rows, end-cols+1:end)
P.bottomLeftCorner(rows, cols)      // P(end-rows+1:end, 1:cols)
P.bottomRightCorner(rows, cols)     // P(end-rows+1:end, end-cols+1:end)
P.topLeftCorner<rows,cols>()        // P(1:rows, 1:cols)
P.topRightCorner<rows,cols>()       // P(1:rows, end-cols+1:end)
P.bottomLeftCorner<rows,cols>()     // P(end-rows+1:end, 1:cols)
P.bottomRightCorner<rows,cols>()    // P(end-rows+1:end, end-cols+1:end)
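The table covers both the runtime-sized and the templated (compile-time-sized) block accessors. A minimal sketch with an illustrative 4x4 matrix:

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXd P(4, 4);
    P << 1,  2,  3,  4,
         5,  6,  7,  8,
         9, 10, 11, 12,
        13, 14, 15, 16;
    std::cout << P.block(1, 1, 2, 2) << "\n\n";   // P(2:3, 2:3) in Matlab terms
    std::cout << P.block<2, 2>(1, 1) << "\n\n";   // same block, size known at compile time
    P.row(0) = P.row(3);                          // blocks are writable
    VectorXd x = VectorXd::LinSpaced(6, 1, 6);
    std::cout << x.head(3).transpose() << std::endl;  // x(1:3) in Matlab
    return 0;
}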
Eigen row and column swapping
// Of particular note is Eigen's swap function, which is highly optimized.
// Eigen                           // Matlab
R.row(i) = P.col(j);               // R(i, :) = P(:, j)'
R.col(j1).swap(R.col(j2));         // R(:, [j1 j2]) = R(:, [j2, j1])
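A minimal sketch of swapping two columns in place (values are illustrative):

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXd R(2, 3);
    R << 1, 2, 3,
         4, 5, 6;
    R.col(0).swap(R.col(2));       // exchange first and last columns in place
    std::cout << R << std::endl;   // prints 3 2 1 / 6 5 4
    return 0;
}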
Eigen matrix transpose
// Views, transpose, etc; all read-write except for .adjoint().
// Eigen                            // Matlab
R.adjoint()                         // R'
R.transpose()                       // R.' or conj(R')
R.diagonal()                        // diag(R)
x.asDiagonal()                      // diag(x)
R.transpose().colwise().reverse()   // rot90(R)
R.conjugate()                       // conj(R)
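A minimal sketch with a small complex matrix, which makes the difference between .transpose() (no conjugation) and .adjoint() (conjugate transpose) visible:

#include <complex>
#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXcd R(2, 2);
    R << std::complex<double>(1, 1), std::complex<double>(2, 0),
         std::complex<double>(0, 3), std::complex<double>(4, -1);
    std::cout << R.transpose() << "\n\n";             // R.' (no conjugation)
    std::cout << R.adjoint()   << "\n\n";             // R'  (conjugate transpose)
    std::cout << R.diagonal().transpose() << std::endl; // diag(R), printed as a row
    return 0;
}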
Eigen matrix products
// All the same as Matlab, but Matlab doesn't have *= style operators.
// Matrix-vector.   Matrix-matrix.   Matrix-scalar.
y  = M*x;           R  = P*Q;        R  = P*s;
a  = b*M;           R  = P - Q;      R  = s*P;
a *= M;             R  = P + Q;      R  = P/s;
                    R *= Q;          R  = s*P;
                    R += Q;          R *= s;
                    R -= Q;          R /= s;
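A minimal sketch of the product and compound-assignment operators (random matrices are used purely as placeholders):

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXd P = MatrixXd::Random(3, 3);
    MatrixXd Q = MatrixXd::Random(3, 3);
    VectorXd x = VectorXd::Ones(3);
    VectorXd y = P * x;                // matrix-vector product
    MatrixXd R = P * Q;                // matrix-matrix product
    R *= Q;                            // in-place right-multiplication, R = R * Q
    R = 2.0 * P - Q / 3.0;             // scalar multiply and divide
    std::cout << y.transpose() << std::endl;
    return 0;
}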
Eigen element-wise operations
// Vectorized operations on each element independently
// Eigen                        // Matlab
R = P.cwiseProduct(Q);          // R = P .* Q
R = P.array() * s.array();      // R = P .* s
R = P.cwiseQuotient(Q);         // R = P ./ Q
R = P.array() / Q.array();      // R = P ./ Q
R = P.array() + s.array();      // R = P + s
R = P.array() - s.array();      // R = P - s
R.array() += s;                 // R = R + s
R.array() -= s;                 // R = R - s
R.array() < Q.array();          // R < Q
R.array() <= Q.array();         // R <= Q
R.cwiseInverse();               // 1 ./ R
R.array().inverse();            // 1 ./ R
R.array().sin()                 // sin(R)
R.array().cos()                 // cos(R)
R.array().pow(s)                // R .^ s
R.array().square()              // R .^ 2
R.array().cube()                // R .^ 3
R.cwiseSqrt()                   // sqrt(R)
R.array().sqrt()                // sqrt(R)
R.array().exp()                 // exp(R)
R.array().log()                 // log(R)
R.cwiseMax(P)                   // max(R, P)
R.array().max(P.array())        // max(R, P)
R.cwiseMin(P)                   // min(R, P)
R.array().min(P.array())        // min(R, P)
R.cwiseAbs()                    // abs(R)
R.array().abs()                 // abs(R)
R.cwiseAbs2()                   // abs(R.^2)
R.array().abs2()                // abs(R.^2)
(R.array() < s).select(P,Q);    // (R < s ? P : Q)
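A minimal sketch of a few coefficient-wise operations via .array() and the cwise* shortcuts (values are illustrative):

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXd P(2, 2), Q(2, 2);
    P << 1, 4,
         9, 16;
    Q << 2, 2,
         3, 3;
    MatrixXd R1 = P.cwiseProduct(Q);            // P .* Q
    MatrixXd R2 = P.array() / Q.array();        // P ./ Q
    MatrixXd R3 = P.array().sqrt();             // sqrt(P), element-wise
    MatrixXd R4 = (P.array() > 5).select(P, Q); // (P > 5 ? P : Q), element-wise
    std::cout << R3 << "\n\n" << R4 << std::endl;
    return 0;
}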
Eigen matrix reductions
// Reductions.
int r, c;
// Eigen                  // Matlab
R.minCoeff()              // min(R(:))
R.maxCoeff()              // max(R(:))
s = R.minCoeff(&r, &c)    // [s, i] = min(R(:)); [r, c] = ind2sub(size(R), i);
s = R.maxCoeff(&r, &c)    // [s, i] = max(R(:)); [r, c] = ind2sub(size(R), i);
R.sum()                   // sum(R(:))
R.colwise().sum()         // sum(R)
R.rowwise().sum()         // sum(R, 2) or sum(R')'
R.prod()                  // prod(R(:))
R.colwise().prod()        // prod(R)
R.rowwise().prod()        // prod(R, 2) or prod(R')'
R.trace()                 // trace(R)
R.all()                   // all(R(:))
R.colwise().all()         // all(R)
R.rowwise().all()         // all(R, 2)
R.any()                   // any(R(:))
R.colwise().any()         // any(R)
R.rowwise().any()         // any(R, 2)
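A minimal sketch of whole-matrix and per-column/per-row reductions:

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXd R(2, 3);
    R << 1, 2, 3,
         4, 5, 6;
    int r, c;
    double m = R.maxCoeff(&r, &c);               // largest coefficient and its indices
    std::cout << "max " << m << " at (" << r << "," << c << ")\n";
    std::cout << "sum of all:   " << R.sum() << "\n";                          // sum(R(:))
    std::cout << "column sums:  " << R.colwise().sum() << "\n";                // sum(R)
    std::cout << "row sums:     " << R.rowwise().sum().transpose() << std::endl; // sum(R, 2)'
    return 0;
}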
Eigen dot products and norms
// Dot products, norms, etc.
// Eigen                  // Matlab
x.norm()                  // norm(x).    Note that norm(R) doesn't work in Eigen.
x.squaredNorm()           // dot(x, x)   Note the equivalence is not true for complex
x.dot(y)                  // dot(x, y)
x.cross(y)                // cross(x, y) Requires #include <Eigen/Geometry>
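A minimal sketch of norms, dot product, and cross product; note that .cross() requires the Eigen/Geometry header and fixed-size 3-vectors:

#include <iostream>
#include <Eigen/Dense>
#include <Eigen/Geometry>
using namespace Eigen;

int main() {
    Vector3d x(1, 0, 0), y(0, 1, 0);
    std::cout << x.norm() << "\n";                   // norm(x)
    std::cout << x.squaredNorm() << "\n";            // dot(x, x) for real vectors
    std::cout << x.dot(y) << "\n";                   // dot(x, y)
    std::cout << x.cross(y).transpose() << std::endl; // cross(x, y) -> (0, 0, 1)
    return 0;
}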
Eigen matrix type conversion
// Type conversion
// Eigen                  // Matlab
A.cast<double>();         // double(A)
A.cast<float>();          // single(A)
A.cast<int>();            // int32(A)
A.real();                 // real(A)
A.imag();                 // imag(A)
// if the original type equals destination type, no work is done
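A minimal sketch of .cast<T>() between scalar types; note that the cast to int truncates toward zero:

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXf Af = MatrixXf::Random(2, 2);            // single precision, entries in (-1, 1)
    MatrixXd Ad = Af.cast<double>();                 // double(A)
    MatrixXi Ai = (Af * 10.0f).cast<int>();          // int32(A), truncating toward zero
    std::cout << Ad << "\n\n" << Ai << std::endl;
    return 0;
}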
Solving linear equations Ax = b with Eigen
// Solve Ax = b. Result stored in x. Matlab: x = A \ b.
x = A.ldlt().solve(b);    // A sym. p.s.d.    #include <Eigen/Cholesky>
x = A.llt() .solve(b);    // A sym. p.d.      #include <Eigen/Cholesky>
x = A.lu()  .solve(b);    // Stable and fast. #include <Eigen/LU>
x = A.qr()  .solve(b);    // No pivoting.     #include <Eigen/QR>
x = A.svd() .solve(b);    // Stable, slowest. #include <Eigen/SVD>
// .ldlt() -> .matrixL() and .matrixD()
// .llt()  -> .matrixL()
// .lu()   -> .matrixL() and .matrixU()
// .qr()   -> .matrixQ() and .matrixR()
// .svd()  -> .matrixU(), .singularValues(), and .matrixV()
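One caveat: the table above follows the classic Eigen quick reference, and the .qr() and .svd() shortcuts date from Eigen 2; in Eigen 3 they are not members of MatrixBase, and QR/SVD solves go through decomposition classes instead (for example A.householderQr() or A.colPivHouseholderQr() for QR, and JacobiSVD/BDCSVD for SVD). The minimal sketch below sticks to ldlt() and colPivHouseholderQr(), which exist in Eigen 3; the matrix is made symmetric positive definite only so that the LDLT path is valid:

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXd A = MatrixXd::Random(3, 3);
    A = A * A.transpose() + 3.0 * MatrixXd::Identity(3, 3); // make A symmetric positive definite
    VectorXd b = VectorXd::Ones(3);
    VectorXd x1 = A.ldlt().solve(b);                // robust Cholesky, needs symmetric (semi)definite A
    VectorXd x2 = A.colPivHouseholderQr().solve(b); // QR with column pivoting, works for general A
    std::cout << (A * x1 - b).norm() << "\n";       // residual should be ~0
    std::cout << (A * x2 - b).norm() << std::endl;
    return 0;
}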
Eigen matrix eigenvalues
// Eigenvalue problems
// Eigen                          // Matlab
A.eigenvalues();                  // eig(A);
EigenSolver<MatrixXd> eig(A);     // [vec val] = eig(A)
eig.eigenvalues();                // diag(val)
eig.eigenvectors();               // vec
// For self-adjoint matrices use SelfAdjointEigenSolver<>
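A minimal sketch using SelfAdjointEigenSolver, since the example matrix is symmetric (real eigenvalues and orthogonal eigenvectors); for a general real matrix, EigenSolver returns complex-valued results:

#include <iostream>
#include <Eigen/Dense>
using namespace Eigen;

int main() {
    MatrixXd A(2, 2);
    A << 2, 1,
         1, 2;
    SelfAdjointEigenSolver<MatrixXd> eig(A);
    std::cout << "eigenvalues:\n"  << eig.eigenvalues()  << "\n";   // 1 and 3
    std::cout << "eigenvectors:\n" << eig.eigenvectors() << std::endl; // columns are eigenvectors
    return 0;
}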