[OpenCV Learning] Particle Filter Algorithm for Object Tracking


Author: gnuhpc
Source: http://www.cnblogs.com/gnuhpc/

1. Introduction

This project was written by a PhD student at Oregon State University (OSU), http://web.engr.oregonstate.edu/~hess/, who describes the project on his personal homepage as follows:
Object tracking is a tricky problem. A general, all-purpose object tracking algorithm must deal with difficulties like camera motion, erratic object motion, cluttered backgrounds, and other moving objects. Such hurdles render general image processing techniques an inadequate solution to the object tracking problem.
Particle filtering is a Monte Carlo sampling approach to Bayesian filtering. It has many uses, but it has become the state of the art in object tracking. Conceptually, a particle filtering algorithm maintains a probability distribution over the state of the system it is monitoring, in this case, the state -- location, scale, etc. -- of the object being tracked. In most cases, non-linearity and non-Gaussianity in the object's motion and likelihood models yield an intractable filtering distribution. Particle filtering overcomes this intractability by representing the distribution as a set of weighted samples, or particles. Each particle represents a possible instantiation of the state of the system. In other words, each particle describes one possible location of the object being tracked. The set of particles carries more weight at locations where the object being tracked is more likely to be. We can thus determine the most probable state of the object by finding the location in the particle filtering distribution with the highest weight.
The general translation is as follows:
Object tracking is a tricky problem. A general-purpose object tracking algorithm must deal with difficult situations such as camera motion, erratic object motion, cluttered backgrounds, and other moving objects. Particle filtering is a Bayesian filtering method that uses Monte Carlo sampling. The method has many uses, but it has become the best method for object tracking. Conceptually, a particle filter algorithm maintains a probability distribution over the state of the monitored system; in this project, the state refers to the position, size, and so on of the tracked object. In many cases, non-linearity and non-Gaussianity in the object's motion and likelihood models make the filtering distribution intractable. The particle filter overcomes this difficulty by representing the distribution as a set of weighted samples, called particles. Each particle represents one possible instance of the system state; in other words, each particle describes one possible location of the tracked object. The particle set carries more weight at locations where the tracked object is more likely to be, so we can determine the most likely state of the object by finding the location with the highest weight in the particle filter distribution.

 

2. Procedure Summary

1. Process the command line parameters ->
2. Set up the random number generator environment, create a random number generator, and initialize (seed) it ->
3. Initialize the video handle ->
4. Take one frame of the video for processing ->
1) RGB -> HSV
2) Save the current frame in frames
3) Determine whether it is the first frame:
If yes,
(1) Wait for the user to select the region(s) to be tracked
(2) Calculate the histograms of the selected regions
(3) Obtain the tracking particles
If not,
(1) Apply the transition model to each particle and compute each particle's weight
(2) Normalize the particle weights
(3) Re-sample the particles
4) Draw the region represented by the particles
5. Release the images (a sketch of this loop as a whole is given below)
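
Putting the steps above together, a minimal sketch of the main loop might look like the following. This is a reconstruction based on the functions shown in the later sections (bgr2hsv, compute_ref_histos, init_distribution, transition, likelihood, normalize_weights, resample), not the author's actual main(); in particular, get_regions() (the interactive region selector), the drawing/display code, and the globals set by arg_parse() are assumptions.

/* Sketch of the main tracking loop.  Function names come from the code
   excerpts below; get_regions() and the display code are assumptions. */
#include <time.h>
#include <cv.h>
#include <highgui.h>
#include <gsl/gsl_rng.h>
/* plus the project's own headers declaring particle, histogram, bgr2hsv(), ... */

int main( int argc, char** argv )
{
  gsl_rng* rng;
  CvCapture* video;
  IplImage* frame, * hsv_frame;
  histogram** ref_histos;
  particle* particles = NULL;
  CvRect* regions;
  int num_objects = 0, j, w, h;

  /* 1. command line; sets the globals num_particles and vid_file */
  arg_parse( argc, argv );

  /* 2. random number generator */
  gsl_rng_env_setup();
  rng = gsl_rng_alloc( gsl_rng_mt19937 );
  gsl_rng_set( rng, time(NULL) );

  /* 3. video handle */
  video = cvCaptureFromFile( vid_file );

  /* 4. per-frame processing */
  while( ( frame = cvQueryFrame( video ) ) )
    {
      hsv_frame = bgr2hsv( frame );                     /* 4.1 BGR -> HSV */
      w = frame->width;
      h = frame->height;

      if( !particles )                                  /* 4.3 first frame */
        {
          /* let the user select the regions to track (assumed helper) */
          num_objects = get_regions( frame, &regions );
          ref_histos = compute_ref_histos( hsv_frame, regions, num_objects );
          particles = init_distribution( regions, ref_histos,
                                         num_objects, num_particles );
        }
      else                                              /* subsequent frames */
        {
          for( j = 0; j < num_particles; j++ )
            {
              particles[j] = transition( particles[j], w, h, rng );
              particles[j].w = likelihood( hsv_frame,
                                 cvRound( particles[j].y ),
                                 cvRound( particles[j].x ),
                                 cvRound( particles[j].width * particles[j].s ),
                                 cvRound( particles[j].height * particles[j].s ),
                                 particles[j].histo );
            }
          normalize_weights( particles, num_particles );
          particles = resample( particles, num_particles );
        }

      /* 4.4 draw the region of the highest-weight particle, display the frame */

      cvReleaseImage( &hsv_frame );                     /* 5. release images */
    }
  return 0;
}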

 

3. Command Line Parameter Processing

void arg_parse( int argc, char** argv )
{
  int i = 0;

  /* extract program name from command line (remove path, if present) */
  pname = remove_path( argv[0] );

  /* parse commandline options */
  while( TRUE )
    {
      char* arg_check;
      int arg = getopt( argc, argv, OPTIONS );
      if( arg == -1 )
        break;

      switch( arg )
        {
          /* user asked for help */
        case 'h':
          usage( pname );
          exit(0);
          break;

        case 'a':
          show_all = TRUE;
          break;

          /* user wants to output tracking sequence */
        case 'o':
          export = TRUE;
          break;

          /* user wants to set number of particles */
        case 'p':
          if( ! optarg )
            fatal_error( "error parsing arguments at -%c\n"            \
                         "Try '%s -h' for help.", arg, pname );
          num_particles = strtol( optarg, &arg_check, 10 );
          if( arg_check == optarg  ||  *arg_check != '\0' )
            fatal_error( "-%c option requires an integer argument\n"   \
                         "Try '%s -h' for help.", arg, pname );
          break;

          /* catch invalid arguments */
        default:
          fatal_error( "-%c: invalid option\nTry '%s -h' for help.",
                       optopt, pname );
        }
    }

  /* make sure input and output files are specified */
  if( argc - optind < 1 )
    fatal_error( "no input image specified.\nTry '%s -h' for help.", pname );
  if( argc - optind > 2 )
    fatal_error( "too many arguments.\nTry '%s -h' for help.", pname );

  /* record video file name */
  vid_file = argv[optind];
}

The author uses the getopt system function to parse the command line: -h displays the help, -a displays the locations represented by all particles, -o exports the tracking sequence, and -p <number> sets the number of particles; the remaining argument specifies the video file to be processed.
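
For example, assuming the compiled binary is called track (the actual executable name depends on how the project is built) and the input video is soccer.avi, a typical invocation might be:

./track -p 200 -o soccer.avi

Here -p 200 asks for 200 particles and -o exports the tracking sequence.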

 

4. RGB -> HSV Transformation

IplImage* bgr2hsv( IplImage* bgr )
{
  IplImage* bgr32f, * hsv;

  bgr32f = cvCreateImage( cvGetSize(bgr), IPL_DEPTH_32F, 3 );
  hsv = cvCreateImage( cvGetSize(bgr), IPL_DEPTH_32F, 3 );
  cvConvertScale( bgr, bgr32f, 1.0 / 255.0, 0 );
  cvCvtColor( bgr32f, hsv, CV_BGR2HSV );
  cvReleaseImage( &bgr32f );
  return hsv;
}

The program first scales the pixel values into the range [0, 1] (cvConvertScale with a factor of 1/255 on a 32-bit float image), and then uses OpenCV's cvCvtColor function to convert the image from the BGR color space to the HSV color space.

 

5. Setting Up the Random Number Generator

gsl_rng_env_setup();                      // set up the environment of the random number generator

rng = gsl_rng_alloc( gsl_rng_mt19937 );   // create a random number generator

gsl_rng_set( rng, time(NULL) );           // initialize (seed) the random number generator

The author uses the GSL library to generate random numbers. GSL is the GNU Scientific Library; according to the random number section of its manual, generating random numbers takes three steps:

Set up the random number generator environment, create (allocate) a random number generator, and initialize (seed) it.
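
For reference, a minimal, self-contained sketch of those three steps (plus the Gaussian sampling that transition() uses later) might look like this; in the project itself, rng is kept around and only the three calls above are made at startup:

#include <stdio.h>
#include <time.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>

int main( void )
{
  gsl_rng* rng;

  gsl_rng_env_setup();                      /* 1. set up the RNG environment (reads GSL_RNG_* env vars) */
  rng = gsl_rng_alloc( gsl_rng_mt19937 );   /* 2. allocate a Mersenne Twister generator */
  gsl_rng_set( rng, time(NULL) );           /* 3. seed it with the current time */

  /* draw a zero-mean Gaussian sample, as transition() does with gsl_ran_gaussian() */
  printf( "%f\n", gsl_ran_gaussian( rng, 1.0 ) );

  gsl_rng_free( rng );
  return 0;
}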

 

6. Calculate the histogram of the selected area

/*
  Computes a reference histogram for each of the object regions defined by
  the user

  @param frame video frame in which to compute histograms; should have been
    converted to hsv using bgr2hsv in observation.h
  @param regions regions of \a frame over which histograms should be computed
  @param n number of regions in \a regions
  @param export if TRUE, object region images are exported

  @return Returns an \a n element array of normalized histograms corresponding
    to regions of \a frame specified in \a regions.
*/
histogram** compute_ref_histos( IplImage* frame, CvRect* regions, int n )
{
  histogram** histos = malloc( n * sizeof( histogram* ) );
  IplImage* tmp;
  int i;

  /* extract each region from frame and compute its histogram */
  for( i = 0; i < n; i++ )
    {
      cvSetImageROI( frame, regions[i] );          // set the region of interest
      tmp = cvCreateImage( cvGetSize( frame ), IPL_DEPTH_32F, 3 );
      cvCopy( frame, tmp, NULL );
      cvResetImageROI( frame );                    // free the ROI
      histos[i] = calc_histogram( &tmp, 1 );       // calculate the histogram
      normalize_histogram( histos[i] );            // normalize so all bins sum to 1.0
      cvReleaseImage( &tmp );
    }

  return histos;
}

The program first allocates an array of histogram pointers, one slot per user-selected region, to hold the histogram data of the selected regions. Then, for each region specified by the user, it sets that region as the ROI of the first (HSV) frame, copies the ROI out into a temporary image, computes its histogram with calc_histogram, and normalizes it with normalize_histogram so that all bins sum to 1.

The function for calculating the histogram is described as follows:

/*
  Calculates a cumulative histogram as defined above for a given array
  of images

  @param imgs an array of images over which to compute a cumulative histogram;
    each must have been converted to HSV colorspace using bgr2hsv()
  @param n the number of images in imgs

  @return Returns an un-normalized HSV histogram for \a imgs
*/
histogram* calc_histogram( IplImage** imgs, int n )
{
  IplImage* img;
  histogram* histo;
  IplImage* h, * s, * v;
  float* hist;
  int i, r, c, bin;

  histo = malloc( sizeof(histogram) );
  histo->n = NH*NS + NV;
  hist = histo->histo;
  memset( hist, 0, histo->n * sizeof(float) );

  for( i = 0; i < n; i++ )
    {
      /* extract individual HSV planes from image */
      img = imgs[i];
      h = cvCreateImage( cvGetSize(img), IPL_DEPTH_32F, 1 );
      s = cvCreateImage( cvGetSize(img), IPL_DEPTH_32F, 1 );
      v = cvCreateImage( cvGetSize(img), IPL_DEPTH_32F, 1 );
      cvCvtPixToPlane( img, h, s, v, NULL );

      /* increment appropriate histogram bin for each pixel */
      for( r = 0; r < img->height; r++ )
        for( c = 0; c < img->width; c++ )
          {
            bin = histo_bin( pixval32f( h, r, c ),
                             pixval32f( s, r, c ),
                             pixval32f( v, r, c ) );
            hist[bin] += 1;
          }
      cvReleaseImage( &h );
      cvReleaseImage( &s );
      cvReleaseImage( &v );
    }

  return histo;
}

This function splits the image into its H, S, and V planes, then traverses the pixels from top to bottom and left to right, using histo_bin to decide which bin each pixel falls into and incrementing that bin. The bin layout used by histo_bin is, roughly:

| 1 ... NH | NH+1 ... 2*NH | ... | (NS-1)*NH+1 ... NS*NH | NS*NH+1 ... NS*NH+NV |

The first NS*NH bins index hue and saturation jointly (NS bands of NH hue bins each); the remaining NV bins index value only, giving the NH*NS + NV bins in total that calc_histogram sets in histo->n.
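
The histo_bin source is not reproduced in this article. Based on the bin layout above and the bin count NH*NS + NV used in calc_histogram, it presumably looks something like the sketch below; the threshold names (S_THRESH, V_THRESH) and range constants (H_MAX, S_MAX, V_MAX) are assumptions, not the author's code:

/* Sketch of histo_bin(): maps an HSV pixel to a bin index in [0, NH*NS + NV).
   Threshold names and value ranges are assumptions, not the author's code. */
int histo_bin( float h, float s, float v )
{
  int hd, sd, vd;

  /* pixels with little saturation or brightness carry unreliable hue,
     so they go into one of the NV value-only bins at the end */
  vd = MIN( (int)(v * NV / V_MAX), NV - 1 );
  if( s < S_THRESH  ||  v < V_THRESH )
    return NH * NS + vd;

  /* otherwise bin jointly by hue and saturation: bin = sd * NH + hd */
  hd = MIN( (int)(h * NH / H_MAX), NH - 1 );
  sd = MIN( (int)(s * NS / S_MAX), NS - 1 );
  return sd * NH + hd;
}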

 

7. Initializing the Particle Set

/*
  Creates an initial distribution of particles at specified locations

  @param regions an array of regions describing player locations around
    which particles are to be sampled
  @param histos array of histograms describing regions in \a regions
  @param n the number of regions in \a regions
  @param p the total number of particles to be assigned

  @return Returns an array of \a p particles sampled from around regions in
    \a regions
*/
particle* init_distribution( CvRect* regions, histogram** histos, int n, int p )
{
  particle* particles;
  int np;
  float x, y;
  int i, j, width, height, k = 0;

  particles = malloc( p * sizeof( particle ) );
  np = p / n;

  /* create particles at the centers of each of n regions */
  for( i = 0; i < n; i++ )
    {
      width = regions[i].width;
      height = regions[i].height;
      x = regions[i].x + width / 2;
      y = regions[i].y + height / 2;
      for( j = 0; j < np; j++ )
        {
          particles[k].x0 = particles[k].xp = particles[k].x = x;
          particles[k].y0 = particles[k].yp = particles[k].y = y;
          particles[k].sp = particles[k].s = 1.0;
          particles[k].width = width;
          particles[k].height = height;
          particles[k].histo = histos[i];
          particles[k++].w = 0;
        }
    }

  /* make sure to create exactly p particles */
  i = 0;
  while( k < p )
    {
      width = regions[i].width;
      height = regions[i].height;
      x = regions[i].x + width / 2;
      y = regions[i].y + height / 2;
      particles[k].x0 = particles[k].xp = particles[k].x = x;
      particles[k].y0 = particles[k].yp = particles[k].y = y;
      particles[k].sp = particles[k].s = 1.0;
      particles[k].width = width;
      particles[k].height = height;
      particles[k].histo = histos[i];
      particles[k++].w = 0;
      i = ( i + 1 ) % n;
    }

  return particles;
}

The variable np in the program means that if there are n regions, each region is assigned p / n particles, so that the total comes to (roughly) p. The program therefore initializes p / n particles in each of the n regions: the current, previous, and original coordinates (x/xp/x0, y/yp/y0) are all set to the center of the selected region, the scale is 1.0, and the width and height are those of the selected region. A final loop then cycles through the regions to top up the set, ensuring that exactly p particles are created (for example, with p = 100 and n = 3, the first loops create 3 * 33 = 99 particles and the while loop adds one more).

8. Computing the Location Likelihood

/*
  Computes the likelihood of there being a player at a given location in
  an image

  @param img image that has been converted to HSV colorspace using bgr2hsv()
  @param r row location of center of window around which to compute likelihood
  @param c col location of center of window around which to compute likelihood
  @param w width of region over which to compute likelihood
  @param h height of region over which to compute likelihood
  @param ref_histo reference histogram for a player; must have been
    normalized with normalize_histogram()

  @return Returns the likelihood of there being a player at location
    (\a r, \a c) in \a img
*/
float likelihood( IplImage* img, int r, int c,
                  int w, int h, histogram* ref_histo )
{
  IplImage* tmp;
  histogram* histo;
  float d_sq;

  /* extract region around (r,c) and compute and normalize its histogram */
  cvSetImageROI( img, cvRect( c - w / 2, r - h / 2, w, h ) );
  tmp = cvCreateImage( cvGetSize(img), IPL_DEPTH_32F, 3 );
  cvCopy( img, tmp, NULL );
  cvResetImageROI( img );
  histo = calc_histogram( &tmp, 1 );
  cvReleaseImage( &tmp );
  normalize_histogram( histo );

  /* compute likelihood as e^{-\lambda D^2(h, h^*)} */
  d_sq = histo_dist_sq( histo, ref_histo );
  free( histo );
  return exp( -LAMBDA * d_sq );
}

The program first extracts the region represented by the particle, computes its histogram, and normalizes it. That histogram is then compared with the histogram of the region originally selected by the user via histo_dist_sq, and the function finally returns e^(-LAMBDA * d_sq), which becomes the particle's weight.

The implementation of the histo_dist_sq function is as follows:

/*
  Computes squared distance metric based on the Battacharyya similarity
  coefficient between histograms.

  @param h1 first histogram; should be normalized
  @param h2 second histogram; should be normalized

  @return Returns a squared distance based on the Battacharyya similarity
    coefficient between \a h1 and \a h2
*/
float histo_dist_sq( histogram* h1, histogram* h2 )
{
  float* hist1, * hist2;
  float sum = 0;
  int i, n;

  n = h1->n;
  hist1 = h1->histo;
  hist2 = h2->histo;

  /*
    According to the Battacharyya similarity coefficient,

      D = \sqrt{ 1 - \sum_1^n{ \sqrt{ h_1(i) * h_2(i) } } }
  */
  for( i = 0; i < n; i++ )
    sum += sqrt( hist1[i]*hist2[i] );

  return 1.0 - sum;
}

 

In statistics, the Bhattacharyya distance, according to the Wikipedia description, measures the similarity of two discrete probability distributions. It is usually used in classification to measure the separability of classes; in other words, this distance is used to evaluate similarity. Strictly defined:

For discrete probability distributions p and q over the same domain X, it is defined as

    D_B(p, q) = -\ln( BC(p, q) )

where

    BC(p, q) = \sum_{x \in X} \sqrt{ p(x) \, q(x) }

is the Bhattacharyya coefficient.

The formula in this program is slightly different: histo_dist_sq returns 1 - BC(h1, h2) instead of -ln(BC(h1, h2)); both are zero for identical normalized histograms and grow as the histograms become less similar.
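
As a quick, standalone illustration (not part of the project's code) of how the two variants compare, the following toy program computes both 1 - BC and -ln(BC) for a pair of example histograms; the histogram values are made up for the example:

#include <math.h>
#include <stdio.h>

/* Bhattacharyya coefficient of two normalized histograms of length n */
static double bhatta_coeff( const double* h1, const double* h2, int n )
{
  double bc = 0;
  int i;
  for( i = 0; i < n; i++ )
    bc += sqrt( h1[i] * h2[i] );
  return bc;
}

int main( void )
{
  double a[3] = { 0.5, 0.3, 0.2 };
  double b[3] = { 0.2, 0.3, 0.5 };
  double bc = bhatta_coeff( a, b, 3 );

  printf( "project's distance  1 - BC  = %f\n", 1.0 - bc );   /* what histo_dist_sq returns */
  printf( "Bhattacharyya dist  -ln(BC) = %f\n", -log( bc ) ); /* textbook definition */
  return 0;
}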

9. Particle Set Transition

/*
  Samples a transition model for a given particle

  @param p a particle to be transitioned
  @param w video frame width
  @param h video frame height
  @param rng a random number generator from which to sample

  @return Returns a new particle sampled based on \a p's transition model
*/
particle transition( particle p, int w, int h, gsl_rng* rng )
{
  float x, y, s;
  particle pn;

  /* sample new state using second-order autoregressive dynamics */
  x = A1 * ( p.x - p.x0 ) + A2 * ( p.xp - p.x0 ) +
      B0 * gsl_ran_gaussian( rng, TRANS_X_STD ) + p.x0;
  pn.x = MAX( 0.0, MIN( (float)w - 1.0, x ) );
  y = A1 * ( p.y - p.y0 ) + A2 * ( p.yp - p.y0 ) +
      B0 * gsl_ran_gaussian( rng, TRANS_Y_STD ) + p.y0;
  pn.y = MAX( 0.0, MIN( (float)h - 1.0, y ) );
  s = A1 * ( p.s - 1.0 ) + A2 * ( p.sp - 1.0 ) +
      B0 * gsl_ran_gaussian( rng, TRANS_S_STD ) + 1.0;
  pn.s = MAX( 0.1, s );
  pn.xp = p.x;
  pn.yp = p.y;
  pn.sp = p.s;
  pn.x0 = p.x0;
  pn.y0 = p.y0;
  pn.width = p.width;
  pn.height = p.height;
  pn.histo = p.histo;
  pn.w = 0;

  return pn;
}

The program uses second-order autoregressive dynamics as the transition model. The transformed quantities are the coordinate x, the coordinate y, and the scale s; x and y are clamped to lie within the frame width and height, and s is kept at no less than 0.1 (the update is restated as an equation below).
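
In equation form (this is simply a reading of the three sampling lines in transition(), with x_0 the particle's original position, x_{t-1} and x_{t-2} its current and previous positions, and \varepsilon_t the zero-mean Gaussian noise drawn by gsl_ran_gaussian):

    x_t = x_0 + A_1 ( x_{t-1} - x_0 ) + A_2 ( x_{t-2} - x_0 ) + B_0 \varepsilon_t,  \quad  \varepsilon_t \sim N(0, \sigma_x^2)

The same recursion is applied to y and to the scale s, with the constant 1.0 taking the place of x_0 for the scale, before the clamping described above.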

 

10. Particle Set Re-sampling

/*
  Re-samples a set of particles according to their weights to produce a
  new set of unweighted particles

  @param particles an old set of weighted particles whose weights have been
    normalized with normalize_weights()
  @param n the number of particles in \a particles

  @return Returns a new set of unweighted particles sampled from \a particles
*/
particle* resample( particle* particles, int n )
{
  particle* new_particles;
  int i, j, np, k = 0;

  qsort( particles, n, sizeof( particle ), &particle_cmp );
  new_particles = malloc( n * sizeof( particle ) );
  for( i = 0; i < n; i++ )
    {
      np = cvRound( particles[i].w * n );
      for( j = 0; j < np; j++ )
        {
          new_particles[k++] = particles[i];
          if( k == n )
            goto exit;
        }
    }
  while( k < n )
    new_particles[k++] = particles[0];

 exit:
  return new_particles;
}

The program first uses qsort from the C standard library (with particle_cmp) to sort the particle set by weight from largest to smallest. Each particle is then copied into the new set round(w * n) times, so heavily weighted particles appear more often; if fewer than n particles have been produced after this pass, the remainder is filled with copies of the highest-weighted particle. For example, with n = 4 and normalized weights 0.5, 0.3, 0.15, 0.05, the new set contains 2, 1, 1, and 0 copies of the respective particles.

 

11. Weight Normalization

/*
  Normalizes particle weights so they sum to 1

  @param particles an array of particles whose weights are to be normalized
  @param n the number of particles in \a particles
*/
void normalize_weights( particle* particles, int n )
{
  float sum = 0;
  int i;

  for( i = 0; i < n; i++ )
    sum += particles[i].w;
  for( i = 0; i < n; i++ )
    particles[i].w /= sum;
}

Author: gnuhpc

Source: http://www.cnblogs.com/gnuhpc/
