OpenCV KD-Tree (An Introduction to FLANN Nearest-Neighbor Search)

Source: Internet
Author: User

Foreword

There are not many articles about KD-tree search in OpenCV. In fact, what OpenCV calls KD-tree search is just one of the index types offered by FLANN's fast approximate nearest-neighbor search: "KD-tree search" refers to the KD-tree index built during the indexing step.

So the essence of this article is the interface between OpenCV and the FLANN library. FLANN (Fast Library for Approximate Nearest Neighbors) is a library of algorithms for fast nearest-neighbor search on large datasets and for high-dimensional features.


The author uses OpenCV 2.4.9 (the interface may change after 3.0).
The header file to include:

#include "opencv2/flann/miniflann.hpp"


Using FLANN search involves two steps: the first is building the index, and the second is searching.

Index Establishment

flann::Index_::Index_(const Mat& features, const IndexParams& params)

Parameters:
features – Matrix containing the features (points) to index. The size of the matrix is num_features x feature_dimensionality, and the data type of the elements in the matrix must coincide with the type of the index.
params – Structure containing the index parameters. The type of index that is constructed depends on the type of this parameter.

Explanation:

In essence, there are two groups of arguments: the first is the data, passed as a Mat matrix; the second is a set of index-specific parameters, which depend on the type of index being built. And what index types are there?
There are six in total: linear index, KD-tree index, k-means index, composite index, LSH index, and autotuned index. Each index and its corresponding parameters are detailed below:


// Linear index

struct LinearIndexParams : public IndexParams
{
};

There are no parameters to set.

// KD-tree index

struct KDTreeIndexParams : public IndexParams
{
    KDTreeIndexParams(int trees = 4);
};

trees – The number of parallel kd-trees to use. Range: [1, 16].

Knowing which set of parameters corresponds to each index type, how should they actually be set? Take the KD-tree index as an example.
How to build a KD-tree index:

/** Build the KD-tree index **/
/* features */
cv::Mat source = cv::Mat(m_pOriPtXY).reshape(1); // m_pOriPtXY is a vector<Point2d>; since the index requires a Mat, the vector is used directly to construct a Mat object
source.convertTo(source, CV_32F);
/* params */
cv::flann::KDTreeIndexParams indexParams(2); // here the trees parameter is set to 2 (for kd-trees, only this parameter needs to be set)
cv::flann::Index kdtree(source, indexParams); // the KD-tree index is built


// K-means index

struct KMeansIndexParams : public IndexParams
{
    KMeansIndexParams(
        int branching = 32,
        int iterations = 11,
        flann_centers_init_t centers_init = CENTERS_RANDOM,
        float cb_index = 0.2);
};


branching – The branching factor to use for the hierarchical k-means tree.

iterations – The maximum number of iterations to use in the k-means clustering stage when building the k-means tree. A value of -1 means that the k-means clustering should be iterated until convergence.

centers_init – The algorithm to use for selecting the initial centers when performing a k-means step. The possible values are CENTERS_RANDOM (picks the initial cluster centers randomly), CENTERS_GONZALES (picks the initial centers using Gonzales' algorithm) and CENTERS_KMEANSPP (picks the initial centers using the algorithm suggested in arthur_kmeanspp_2007).

cb_index – This parameter (cluster boundary index) influences the way exploration is performed in the hierarchical k-means tree. When cb_index is zero, the next k-means domain to be explored is the one with the closest center. A value greater than zero also takes into account the size of the domain.


// Composite index: combines the random KD-tree and the hierarchical k-means tree to construct the index

struct CompositeIndexParams : public IndexParams
{
    CompositeIndexParams(
        int trees = 4,
        int branching = 32,
        int iterations = 11,
        flann_centers_init_t centers_init = CENTERS_RANDOM,
        float cb_index = 0.2);
};


// This structure creates an index using the multi-probe LSH method

struct LshIndexParams : public IndexParams
{
    LshIndexParams(
        unsigned int table_number,
        unsigned int key_size,
        unsigned int multi_probe_level);
};

table_number – The number of hash tables to use (between 10 and 30 usually).
key_size – The size of the hash key in bits (between 10 and 20 usually).
multi_probe_level – The number of bits to shift to check for neighboring buckets (0 is regular LSH, 2 is recommended).



// Autotuned index

struct AutotunedIndexParams : public IndexParams
{
    AutotunedIndexParams(
        float target_precision = 0.9,
        float build_weight = 0.01,
        float memory_weight = 0,
        float sample_fraction = 0.1);
};

target_precision – A number between 0 and 1 specifying the percentage of the approximate nearest-neighbor searches that return the exact nearest neighbor. Using a higher value for this parameter gives more accurate results, but the search takes longer. The optimum value usually depends on the application.
build_weight – Specifies the importance of the index build time relative to the nearest-neighbor search time. In some applications it is acceptable for the index build step to take a long time if the subsequent searches in the index can be performed very fast. In other applications it is required that the index be built as fast as possible, even if this leads to slightly longer search times.
memory_weight – Used to specify the tradeoff between time (index build time and search time) and the memory used by the index. A value less than 1 gives more importance to the time spent, and a value greater than 1 gives more importance to the memory usage.
sample_fraction – A number between 0 and 1 indicating what fraction of the dataset to use for the automatic parameter configuration. Running the algorithm on the full dataset gives the most accurate results, but for very large datasets it can take longer than desired. In such cases, using just a fraction of the data helps speed up the algorithm while still giving good approximations of the optimum parameters.

// SavedIndexParams: reads a previously saved index file

struct SavedIndexParams : public IndexParams
{
    SavedIndexParams(std::string filename);
};

filename – The filename in which the index was saved.


There are two ways to search:
flann::Index_::knnSearch // search for the k nearest neighbors
flann::Index_::radiusSearch // search within a given radius
The difference between the two lies in the results they return:
knnSearch returns the nearest neighbors (the number of points is set by the user; if n are requested, exactly n are returned);
radiusSearch returns all the points within the search radius (points meeting the criterion may not exist, in which case it returns empty results).

Here is the specific usage: 

1. knnSearch

void flann::Index_::knnSearch(const vector<ElementType>& query, vector<int>& indices, vector<DistanceType>& dists, int knn, const SearchParams& params) // parameters are all vector arrays
void flann::Index_::knnSearch(const Mat& queries, Mat& indices, Mat& dists, int knn, const SearchParams& params) // parameters are of Mat type

Parameters:
query – The query point.
indices – Vector that will contain the indices of the k nearest neighbors found. It must have at least knn size.
dists – Vector that will contain the distances to the k nearest neighbors found. It must have at least knn size.
knn – Number of nearest neighbors to search for.
params – Search parameters.

struct SearchParams
{
    SearchParams(int checks = 32);
};

checks – The number of times the tree(s) in the index should be recursively traversed. Higher values give higher search accuracy but take more time.

2. radiusSearch

int flann::Index_::radiusSearch(const vector<ElementType>& query, vector<int>& indices, vector<DistanceType>& dists, float radius, const SearchParams& params)
int flann::Index_::radiusSearch(const Mat& query, Mat& indices, Mat& dists, float radius, const SearchParams& params)
Parameters:
query – The query point.
indices – Vector that will contain the indices of the points found within the search radius, in decreasing order of the distance to the query point. If the number of neighbors in the search radius is bigger than the size of this vector, the ones that don't fit in the vector are ignored.
dists – Vector that will contain the distances to the points found within the search radius.
radius – The search radius.
params – Search parameters.

A more complete example

Take the KD-tree index as an example, using knnSearch:

/** Build the KD-tree index **/
cv::Mat source = cv::Mat(m_pOriPtXY).reshape(1);
source.convertTo(source, CV_32F);
cv::flann::KDTreeIndexParams indexParams(2);
cv::flann::Index kdtree(source, indexParams); // this index construction was shown in the example above, so it is not described in detail again

/** Prepare the parameters and containers needed by knnSearch **/
unsigned queryNum = 7; // sets the number of neighboring points to return
vector<float> vecQuery(2); // the container that holds the query point (a vector in this example)
vector<int> vecIndex(queryNum); // stores the indices of the returned points
vector<float> vecDist(queryNum); // stores the distances
cv::flann::SearchParams params(32); // sets the knnSearch search parameters

/** KD-tree KNN query **/
vecQuery[0] = (float)dX; // query point x coordinate
vecQuery[1] = (float)dY; // query point y coordinate
kdtree.knnSearch(vecQuery, vecIndex, vecDist, queryNum, params);

Note the logic of this last line: the knnSearch() function is called on the kdtree index object generated earlier, performing a KNN search for a single point.


Reference: OpenCV 2.4.9 documentation
