TLD (Tracking-Learning-Detection) learning and source code understanding (7)

Source: Internet
Author: User
Tags: tld

The following is my understanding of the code from working on my thesis, together with the analyses published by other experts. However, I have not been working in image processing and machine vision for long, and my programming skills are weak, so there are likely many errors in the analysis; I hope you will not hesitate to point them out. Also, because I do not understand the programming in many places, the comments are messy; please bear with me.

 

FerNNClassifier.h

/*
 * FerNNClassifier.h
 *
 *  Created on: Jun 14, 2011
 *      Author: alantrrs
 */
#include <opencv2/opencv.hpp>
#include <stdio.h>

class FerNNClassifier{
private:
  // The following parameters are initialized from parameters.yml when the program starts.
  float thr_fern;
  int structSize;
  int nstructs;
  float valid;
  float ncc_thesame;
  float thr_nn;
  int acum;
public:
  // Parameters
  float thr_nn_valid;

  void read(const cv::FileNode& file);
  void prepare(const std::vector<cv::Size>& scales);
  void getFeatures(const cv::Mat& image, const int& scale_idx, std::vector<int>& fern);
  void update(const std::vector<int>& fern, int C, int N);
  float measure_forest(std::vector<int> fern);
  void trainF(const std::vector<std::pair<std::vector<int>,int> >& ferns, int resample);
  void trainNN(const std::vector<cv::Mat>& nn_examples);
  void NNConf(const cv::Mat& example, std::vector<int>& isin, float& rsconf, float& csconf);
  void evaluateTh(const std::vector<std::pair<std::vector<int>,int> >& nXT, const std::vector<cv::Mat>& nExT);
  void show();
  // Ferns members
  int getNumStructs(){return nstructs;}
  float getFernTh(){return thr_fern;}
  float getNNTh(){return thr_nn;}
  struct Feature{  // feature struct
      uchar x1, y1, x2, y2;
      Feature() : x1(0), y1(0), x2(0), y2(0) {}
      Feature(int _x1, int _y1, int _x2, int _y2)
      : x1((uchar)_x1), y1((uchar)_y1), x2((uchar)_x2), y2((uchar)_y2) {}
      bool operator ()(const cv::Mat& patch) const{
          // Elements of a 2-D single-channel Mat are accessed with Mat::at(i,j), where i is the
          // row index and j is the column index. Compare the patch pixel at (y1,x1) against the
          // pixel at (y2,x2) and return 0 or 1.
          return patch.at<uchar>(y1,x1) > patch.at<uchar>(y2,x2);
      }
  };
  // "fern": a plant with roots, stems and leaves but no flowers; here, one randomized feature group.
  std::vector<std::vector<Feature> > features;  // Ferns features (one std::vector per scale)
  std::vector<std::vector<int> > nCounter;      // negative counter
  std::vector<std::vector<int> > pCounter;      // positive counter
  std::vector<std::vector<float> > posteriors;  // Ferns posteriors
  float thrN;  // negative threshold
  float thrP;  // positive threshold
  // NN members
  std::vector<cv::Mat> pEx;  // NN positive examples
  std::vector<cv::Mat> nEx;  // NN negative examples
};
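
To make the pixel-comparison idea concrete, here is a minimal standalone sketch (my own, not part of the original source) that evaluates one such comparison on a toy patch, mirroring what Feature::operator() does:

#include <opencv2/opencv.hpp>
#include <cstdio>

int main(){
    // A toy 4x4 grayscale patch.
    cv::Mat patch = (cv::Mat_<uchar>(4,4) <<
         10,  20,  30,  40,
         50,  60,  70,  80,
         90, 100, 110, 120,
        130, 140, 150, 160);
    // Compare pixel (y1=0,x1=3)=40 against (y2=2,x2=1)=100,
    // exactly as Feature::operator() does.
    int bit = patch.at<uchar>(0,3) > patch.at<uchar>(2,1); // 40 > 100 -> 0
    printf("comparison bit: %d\n", bit);
    return 0;
}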

 

 

FerNNClassifier.cpp

/*
 * FerNNClassifier.cpp
 *
 *  Created on: Jun 14, 2011
 *      Author: alantrrs
 */
#include <FerNNClassifier.h>
using namespace cv;
using namespace std;

// Classifier parameters: read from parameters.yml when the program starts.
void FerNNClassifier::read(const FileNode& file){
  valid = (float)file["valid"];
  ncc_thesame = (float)file["ncc_thesame"];
  nstructs = (int)file["num_trees"];      // number of trees (each tree is built from a feature
                                          // group; each group represents a different view of the patch)
  structSize = (int)file["num_features"]; // number of features per tree, i.e. the number of nodes
                                          // of each tree; every feature acts as a decision node
  thr_fern = (float)file["thr_fern"];
  thr_nn = (float)file["thr_nn"];
  thr_nn_valid = (float)file["thr_nn_valid"];
}

void FerNNClassifier::prepare(const vector<Size>& scales){
  acum = 0;
  // Initialize test locations for features.
  int totalFeatures = nstructs*structSize;
  // Two-dimensional vector: one entry per scanning-window scale, each holding totalFeatures features.
  features = vector<vector<Feature> >(scales.size(), vector<Feature>(totalFeatures));
  RNG& rng = theRNG();
  float x1f, x2f, y1f, y2f;
  int x1, x2, y1, y2;
  // The ensemble classifier consists of n base classifiers, each based on a set of pixel
  // comparisons. The comparisons are generated by discretizing the pixel space of a normalized
  // patch and enumerating all possible horizontal and vertical pixel comparisons; these are then
  // randomly partitioned among the n classifiers so that each gets a distinct feature set, and
  // together the feature groups cover the whole patch. The same random offsets are reused across
  // every scale of the scanning window.
  for (int i=0;i<totalFeatures;i++){
      x1f = (float)rng;
      y1f = (float)rng;
      x2f = (float)rng;
      y2f = (float)rng;
      for (int s=0;s<scales.size();s++){
          x1 = x1f * scales[s].width;
          y1 = y1f * scales[s].height;
          x2 = x2f * scales[s].width;
          y2 = y2f * scales[s].height;
          // Assign the randomly generated pixel coordinates to the i-th feature at scale s.
          features[s][i] = Feature(x1, y1, x2, y2);
      }
  }
  // Thresholds.
  thrN = 0.5*nstructs;
  // Initialize the posteriors. Each base classifier compares the input patch pixel by pixel;
  // each comparison yields 0 or 1, and the structSize (13) comparison bits are concatenated into
  // a 13-bit binary code x that indexes an array recording the posterior probability P(y|x),
  // where y is 0 or 1 (binary classification). The patch is judged to contain the target when
  // the average of the n base classifiers' posteriors exceeds 0.5.
  for (int i = 0; i<nstructs; i++) {
      // Each classifier maintains a posterior distribution with 2^d entries, where d is the
      // number of pixel comparisons; here d = structSize = 13, giving 2^13 = 8192 possible
      // codes, each with its own posterior. P(y|x) = #p/(#p+#n), where #p and #n are the counts
      // of positive and negative patches. pCounter and nCounter start at zero, so every
      // posterior is initialized to 0. At runtime, labeled training samples are classified by
      // the n classifiers, and whenever a sample is misclassified the corresponding #p or #n
      // is updated, which updates P(y|x) as well.
      posteriors.push_back(vector<float>(pow(2.0,structSize), 0));
      pCounter.push_back(vector<int>(pow(2.0,structSize), 0));
      nCounter.push_back(vector<int>(pow(2.0,structSize), 0));
  }
}
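
As a sanity check on the posterior bookkeeping just described, here is a small standalone sketch (toy counts I made up, not values from the original source) showing how one leaf of one tree's 2^13-entry table is updated with P(y|x) = #p/(#p+#n):

#include <vector>
#include <cmath>
#include <cstdio>

int main(){
    const int structSize = 13;                          // features (bits) per tree
    const int entries = (int)std::pow(2.0, structSize); // 2^13 = 8192 leaf codes
    std::vector<int>   pCounter(entries, 0), nCounter(entries, 0);
    std::vector<float> posterior(entries, 0.0f);

    int idx = 0x0ABC;    // a hypothetical 13-bit fern code
    pCounter[idx] += 3;  // three positive patches fell into this leaf
    nCounter[idx] += 1;  // one negative patch fell into this leaf
    posterior[idx] = (float)pCounter[idx] / (pCounter[idx] + nCounter[idx]);
    printf("P(y=1|x=%d) = %.2f\n", idx, posterior[idx]); // 3/(3+1) = 0.75
    return 0;
}
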
// For the input image patch, this function walks each tree down to a leaf, producing the
// feature of each feature group as a 13-bit binary code.
void FerNNClassifier::getFeatures(const cv::Mat& image, const int& scale_idx, vector<int>& fern){
  int leaf;  // leaf: the terminal node of a tree
  for (int t=0;t<nstructs;t++){          // nstructs: number of trees (10)
      leaf = 0;
      for (int f=0; f<structSize; f++){  // structSize: features per tree (13)
          // Feature::operator()(const cv::Mat& patch) compares the patch pixels at (y1,x1) and
          // (y2,x2) and returns 0 or 1; leaf accumulates the 13 comparison bits into the code.
          leaf = (leaf << 1) + features[scale_idx][t*nstructs+f](image);
      }
      fern[t] = leaf;
  }
}

float FerNNClassifier::measure_forest(vector<int> fern){
  float votes = 0;
  for (int i = 0; i < nstructs; i++) {
      // posteriors[i][idx] = (float)pCounter[i][idx]/(pCounter[i][idx]+nCounter[i][idx]);
      // accumulate the posterior corresponding to each tree's feature value as the vote.
      votes += posteriors[i][fern[i]];
  }
  return votes;
}

// Update the positive/negative sample counters and the posterior probabilities.
void FerNNClassifier::update(const vector<int>& fern, int C, int N){
  int idx;
  for (int i = 0; i < nstructs; i++) {
      idx = fern[i];
      (C==1) ? pCounter[i][idx] += N : nCounter[i][idx] += N;
      if (pCounter[i][idx]==0) {
          posteriors[i][idx] = 0;
      } else {
          posteriors[i][idx] = ((float)(pCounter[i][idx]))/(pCounter[i][idx] + nCounter[i][idx]);
      }
  }
}
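
The shift-and-add loop in getFeatures() is easy to verify in isolation. The following standalone sketch (with made-up comparison bits instead of real pixel tests) builds a 13-bit leaf code the same way:

#include <cstdio>

int main(){
    const int structSize = 13;
    // Hypothetical outcomes of the 13 pixel comparisons of one tree.
    int bits[structSize] = {1,0,1,1,0,0,1,0,1,1,0,1,0};
    int leaf = 0;
    for (int f = 0; f < structSize; f++)
        leaf = (leaf << 1) + bits[f];  // same shift-and-add as the source
    printf("leaf code = %d (max %d)\n", leaf, (1 << structSize) - 1); // max 8191
    return 0;
}
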
// Train the ensemble classifier (the set of n base classifiers). The commented-out fragments
// preserve the mapping to Kalal's original MATLAB/mex implementation:
//   conf = fern(2,X,Y,Margin,Bootstrap,Idx)
//     double *X     = mxGetPr(prhs[1]);  -> ferns[i].first
//     int numX      = mxGetN(prhs[1]);   -> ferns.size()
//     double *Y     = mxGetPr(prhs[2]);  -> ferns[i].second
//     double thrP   = *mxGetPr(prhs[3]) * nTREES;  -> threshold * nstructs
//     int bootstrap = (int)*mxGetPr(prhs[4]);      -> resample
void FerNNClassifier::trainF(const vector<std::pair<vector<int>,int> >& ferns, int resample){
  // thr_fern: 0.6; thrP is the positive threshold.
  thrP = thr_fern*nstructs;
  for (int i = 0; i < ferns.size(); i++){
      if (ferns[i].second == 1) {  // label 1 means positive sample
          // measure_forest returns the sum over all trees of the posterior for this sample's
          // feature values. If that sum is not above the positive threshold, a positive sample
          // is being classified as negative: a classification error, so the sample is added to
          // the positive counters and the posteriors are updated.
          if (measure_forest(ferns[i].first) <= thrP)
            update(ferns[i].first, 1, 1);  // update positive counts and posteriors
      } else {
          if (measure_forest(ferns[i].first) >= thrN)
            update(ferns[i].first, 0, 1);
      }
  }
}

// Train the nearest-neighbor classifier.
void FerNNClassifier::trainNN(const vector<cv::Mat>& nn_examples){
  float conf, dummy;
  // vector<T> v(n, i) builds n elements with value i: the label array y is initialized to 0.
  vector<int> y(nn_examples.size(), 0);
  // As mentioned above, the sample set passed to trainNN carries only one positive example,
  // at nn_examples[0].
  y[0] = 1;
  vector<int> isin;
  for (int i=0;i<nn_examples.size();i++){  // for each example
      // Compute the relative similarity conf between the input patch and the online model.
      NNConf(nn_examples[i], isin, conf, dummy);  // measure relative similarity
      // thr_nn: 0.65. If the label is positive but the relative similarity is at most 0.65,
      // the patch is judged not to contain the foreground target, i.e. a classification error:
      // add it to the positive sample library.
      if (y[i]==1 && conf<=thr_nn){
          if (isin[1]<0){  // if isnan(isin(2)) in the MATLAB version
              pEx = vector<Mat>(1, nn_examples[i]);  // tld.pex = x(:,i);
              continue;
          }
          // pEx.insert(pEx.begin()+isin[1], nn_examples[i]);
          // tld.pex = [tld.pex(:,1:isin(2)) x(:,i) tld.pex(:,isin(2)+1:end)]; % add to model
          pEx.push_back(nn_examples[i]);
      }
      if (y[i]==0 && conf>0.5)  // tld.nex = [tld.nex x(:,i)];
        nEx.push_back(nn_examples[i]);
  }
  acum++;
  printf("%d. Trained NN examples: %d positive %d negative\n", acum, (int)pEx.size(), (int)nEx.size());
}
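
The margin-style update rule of trainF() can be illustrated with a stubbed forest vote; the sketch below uses hypothetical numbers, not values from the original code path:

#include <cstdio>

int main(){
    const int   nstructs = 10;
    const float thr_fern = 0.6f;
    float thrP = thr_fern * nstructs;  // positive threshold: 6.0
    float thrN = 0.5f * nstructs;      // negative threshold: 5.0

    float votes = 4.5f;  // hypothetical measure_forest() output
    int   label = 1;     // positive training sample
    if (label == 1 && votes <= thrP)
        printf("positive sample scored %.1f <= %.1f: update(fern,1,1)\n", votes, thrP);
    if (label == 0 && votes >= thrN)
        printf("negative sample scored %.1f >= %.1f: update(fern,0,1)\n", votes, thrN);
    return 0;
}
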
/* Inputs:
 *  - nn patch
 * Outputs:
 *  - relative similarity (rsconf), conservative similarity (csconf),
 *    in pos. set | id pos set | in neg. set (isin)
 */
void FerNNClassifier::NNConf(const Mat& example, vector<int>& isin, float& rsconf, float& csconf){
  isin = vector<int>(3,-1);  // vector<T> v(n, i): three elements, all -1
  if (pEx.empty()){  // if positive examples in the model are undefined, everything is negative
      rsconf = 0;
      csconf = 0;
      return;
  }
  if (nEx.empty()){  // if negative examples in the model are undefined, everything is positive
      rsconf = 1;
      csconf = 1;
      return;
  }
  Mat ncc(1,1,CV_32F);
  float nccP, csmaxP, maxP = 0;
  bool anyP = false;
  // ceil() returns the smallest integer greater than or equal to its argument.
  int maxPidx, validatedPart = ceil(pEx.size()*valid);
  float nccN, maxN = 0;
  bool anyN = false;
  // Measure the distance (similarity) between the patch p and the online model M: match the
  // input patch against every patch in the online model and keep the best score, i.e. the
  // positive nearest-neighbor similarity.
  for (int i=0;i<pEx.size();i++){
      matchTemplate(pEx[i], example, ncc, CV_TM_CCORR_NORMED);  // measure NCC to positive examples
      nccP = (((float*)ncc.data)[0] + 1) * 0.5;  // map NCC from [-1,1] to [0,1]
      if (nccP > ncc_thesame)  // ncc_thesame: 0.95
        anyP = true;
      if (nccP > maxP){
          maxP = nccP;   // record the best similarity and the index of the matching patch
          maxPidx = i;
          if (i < validatedPart)
            csmaxP = maxP;
      }
  }
  // Compute the negative nearest-neighbor similarity.
  for (int i=0;i<nEx.size();i++){
      matchTemplate(nEx[i], example, ncc, CV_TM_CCORR_NORMED);  // measure NCC to negative examples
      nccN = (((float*)ncc.data)[0] + 1) * 0.5;
      if (nccN > ncc_thesame)
        anyN = true;
      if (nccN > maxN)
        maxN = nccN;
  }
  // Set isin: if the query patch is highly correlated with any positive patch in the model,
  // it is considered to be one of them.
  if (anyP) isin[0] = 1;
  isin[1] = maxPidx;  // index of the maximally correlated positive patch
  // If the query patch is highly correlated with any negative patch, likewise.
  if (anyN) isin[2] = 1;
  // Measure relative similarity:
  // relative similarity = dN/(dN+dP), where dP and dN are the distances (1 - similarity) to the
  // positive and negative nearest neighbors.
  float dN = 1 - maxN;
  float dP = 1 - maxP;
  rsconf = (float)dN/(dN+dP);
  // Measure conservative similarity, using only the first "validated" part of pEx.
  dP = 1 - csmaxP;
  csconf = (float)dN/(dN+dP);
}

void FerNNClassifier::evaluateTh(const vector<pair<vector<int>,int> >& nXT, const vector<cv::Mat>& nExT){
  float fconf;
  for (int i=0;i<nXT.size();i++){
      // measure_forest returns the summed posterior of all base classifiers; dividing by
      // nstructs (the number of trees) gives the average, which must exceed thr_fern for a
      // patch to be considered foreground. nXT holds negative samples held out for testing,
      // so any score above the current threshold would be a false positive: raise the ensemble
      // threshold to that score. This is the "training" of the threshold.
      fconf = (float)measure_forest(nXT[i].first)/nstructs;
      if (fconf > thr_fern)  // thr_fern: 0.6
        thr_fern = fconf;
  }
  vector<int> isin;
  float conf, dummy;
  for (int i=0;i<nExT.size();i++){
      NNConf(nExT[i], isin, conf, dummy);
      // Take the largest relative similarity seen on the negative test data as the new
      // threshold of the nearest-neighbor classifier.
      if (conf > thr_nn)
        thr_nn = conf;
  }
  if (thr_nn > thr_nn_valid)  // thr_nn_valid: 0.7
    thr_nn_valid = thr_nn;
}
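
The relative-similarity arithmetic at the end of NNConf() is worth working through once with concrete numbers; the following sketch uses assumed NCC scores:

#include <cstdio>

int main(){
    float maxP = 0.80f;   // best NCC match against positive examples (assumed)
    float maxN = 0.60f;   // best NCC match against negative examples (assumed)
    float dP = 1 - maxP;  // distance to the positive set: 0.20
    float dN = 1 - maxN;  // distance to the negative set: 0.40
    float rsconf = dN / (dN + dP);  // 0.40/0.60 = 0.67, leaning positive
    printf("relative similarity = %.2f\n", rsconf);
    return 0;
}
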
// Display all positive samples in the positive sample library (the online model) in a window.
void FerNNClassifier::show(){
  Mat examples((int)pEx.size()*pEx[0].rows, pEx[0].cols, CV_8U);
  double minval;
  Mat ex(pEx[0].rows, pEx[0].cols, pEx[0].type());
  for (int i=0;i<pEx.size();i++){
      // minMaxLoc finds the positions of the minimum and maximum values in a matrix.
      minMaxLoc(pEx[i], &minval);  // find the minimum of pEx[i]
      pEx[i].copyTo(ex);
      ex = ex - minval;  // shift the darkest pixel to 0 and the others accordingly
      // Mat::rowRange(const Range& r) creates a new matrix header for the given row span;
      // Range holds the start and end indices.
      Mat tmp = examples.rowRange(Range(i*pEx[i].rows, (i+1)*pEx[i].rows));
      ex.convertTo(tmp, CV_8U);
  }
  imshow("Examples", examples);
}
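
Finally, a hedged usage sketch of how this class is presumably driven (the wiring below is my assumption, mirroring how run_tld/TLD.cpp appear to use it, not code from this post): load parameters.yml, then prepare the classifier for a set of scanning-window scales.

#include <opencv2/opencv.hpp>
#include <vector>
#include "FerNNClassifier.h"

int main(){
    // Read thr_fern, num_trees, num_features, etc. from the YAML parameter file.
    cv::FileStorage fs("parameters.yml", cv::FileStorage::READ);
    FerNNClassifier classifier;
    classifier.read(fs.getFirstTopLevelNode());
    // One hypothetical scanning-window scale; the real tracker supplies many.
    std::vector<cv::Size> scales;
    scales.push_back(cv::Size(24, 24));
    classifier.prepare(scales);  // allocates features and the 2^13-entry posterior tables
    return 0;
}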

 

 
