Foreground Detection Algorithm (II) -- Codebook and Average Background Method

Original: http://www.cnblogs.com/tornadomeet/archive/2012/04/08/2438158.html (Foreground detection algorithm 1: codebook and average background method)

Background subtraction is a very important research direction in foreground segmentation: the method is simple and its principle is intuitive. In intelligent video surveillance the camera is usually fixed, so the background is essentially static or changes only slowly, and in that setting background subtraction applies well, which has drawn many researchers to study it.

However, the foreground image produced by background subtraction has many weaknesses: illumination changes, occlusion, periodic background motion, and non-periodic background motion all cause problems, and the method usually assumes that pixels are independent of one another, which is a considerable risk in practical applications.

This post mainly discusses the simple background subtraction method and the codebook method.

First, the working principle of the simple background subtraction method.

While modeling the background of the video, for every pixel the absolute difference between each pair of consecutive frames is computed and accumulated, and the pixel values themselves are accumulated as well. At the end of the modeling period both accumulators are divided by the number of frames, giving a per-pixel mean value and a per-pixel mean frame-to-frame difference. The high and low thresholds are then set to the mean value plus or minus a constant (which has to be tuned manually) times the mean difference. During foreground detection, if the value of a pixel falls within the threshold range in every channel it is considered background and marked 0; otherwise it is considered foreground and marked 255.
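
Stated compactly, writing avg_c(x) for the per-pixel mean and d̄_c(x) for the per-pixel mean absolute frame-to-frame difference of channel c (this notation is introduced here for clarity; in the code below these are the images IavgF and IdiffF), the model is roughly:

    \mathrm{low}_c(x)  = \mathrm{avg}_c(x) - s_{\mathrm{low}}  \cdot \bar{d}_c(x)
    \mathrm{high}_c(x) = \mathrm{avg}_c(x) + s_{\mathrm{high}} \cdot \bar{d}_c(x)
    x \text{ is background} \iff \mathrm{low}_c(x) \le I_c(x) \le \mathrm{high}_c(x) \text{ for every channel } c

In the code below, s_high and s_low default to 7 and 6 (HIGH_SCALE_NUM and LOW_SCALE_NUM).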

The following is the source code provided by the author of the Learning OpenCV book; the code and comments for the simple background subtraction method are given below:

     avg_background.h file:

////////////////////////////////////////////////////////////////////////////////
// Accumulate average and ~std (really absolute difference) image and use this
// to detect background and foreground
//
// Typical way of using this is to:
//   AllocateImages();
//   // loop over N images to accumulate background differences
//   accumulateBackground();
//   // when done, turn this into our avg and std model with high and low bounds
//   createModelsfromStats();
//   // then use this function to return background in a mask (255 == foreground, 0 == background)
//   backgroundDiff(IplImage *I, IplImage *Imask, int num);
//   // then tune the high and low difference-from-average-image background acceptance thresholds
//   float scalehigh, scalelow;  // set these; defaults are 7 and 6. Note: scalelow is how many average differences below average
//   scaleHigh(scalehigh);
//   scaleLow(scalelow);
//   // that is, the high and low bounds for what should be background
//   // then continue detecting foreground in the mask image
//   backgroundDiff(IplImage *I, IplImage *Imask, int num);
//
// NOTES: num is the camera number, which varies from 0 ... NUM_CAMERAS-1.
//        Typically you have one camera, but this routine allows you to index many.
//
#ifndef AVGSEG_
#define AVGSEG_

#include "cv.h"      // defines all of the OpenCV classes, etc.
#include "highgui.h"
#include "cxcore.h"

// IMPORTANT DEFINES:
#define NUM_CAMERAS    1     // this function can handle an array of cameras
#define HIGH_SCALE_NUM 7.0   // how many average differences from the average image on the high side == background
#define LOW_SCALE_NUM  6.0   // how many average differences from the average image on the low side == background

void AllocateImages(IplImage *I);
void DeallocateImages();
void accumulateBackground(IplImage *I, int number = 0);
void scaleHigh(float scale = HIGH_SCALE_NUM, int num = 0);
void scaleLow(float scale = LOW_SCALE_NUM, int num = 0);
void createModelsfromStats();
void backgroundDiff(IplImage *I, IplImage *Imask, int num = 0);

#endif

avg_background.cpp File:

// avg_background.cpp : Defines the entry point for the console application.

#include "stdafx.h"
#include "avg_background.h"

// GLOBALS
IplImage *IavgF[NUM_CAMERAS], *IdiffF[NUM_CAMERAS], *IprevF[NUM_CAMERAS], *IhiF[NUM_CAMERAS], *IlowF[NUM_CAMERAS];
IplImage *Iscratch, *Iscratch2, *Igray1, *Igray2, *Igray3, *Imaskt;
IplImage *Ilow1[NUM_CAMERAS], *Ilow2[NUM_CAMERAS], *Ilow3[NUM_CAMERAS], *Ihi1[NUM_CAMERAS], *Ihi2[NUM_CAMERAS], *Ihi3[NUM_CAMERAS];
float Icount[NUM_CAMERAS];

void AllocateImages(IplImage *I)  // I is just a sample image for allocation purposes
{
    for (int i = 0; i < NUM_CAMERAS; i++) {
        IavgF[i]  = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 3);
        IdiffF[i] = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 3);
        IprevF[i] = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 3);
        IhiF[i]   = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 3);
        IlowF[i]  = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 3);
        Ilow1[i]  = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 1);
        Ilow2[i]  = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 1);
        Ilow3[i]  = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 1);
        Ihi1[i]   = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 1);
        Ihi2[i]   = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 1);
        Ihi3[i]   = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 1);
        cvZero(IavgF[i]);
        cvZero(IdiffF[i]);
        cvZero(IprevF[i]);
        cvZero(IhiF[i]);
        cvZero(IlowF[i]);
        Icount[i] = 0.00001;  // protect against divide by zero
    }
    Iscratch  = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 3);
    Iscratch2 = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 3);
    Igray1    = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 1);
    Igray2    = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 1);
    Igray3    = cvCreateImage(cvGetSize(I), IPL_DEPTH_32F, 1);
    Imaskt    = cvCreateImage(cvGetSize(I), IPL_DEPTH_8U, 1);
    cvZero(Iscratch);
    cvZero(Iscratch2);
}

void DeallocateImages()
{
    for (int i = 0; i < NUM_CAMERAS; i++) {
        cvReleaseImage(&IavgF[i]);
        cvReleaseImage(&IdiffF[i]);
        cvReleaseImage(&IprevF[i]);
        cvReleaseImage(&IhiF[i]);
        cvReleaseImage(&IlowF[i]);
        cvReleaseImage(&Ilow1[i]);
        cvReleaseImage(&Ilow2[i]);
        cvReleaseImage(&Ilow3[i]);
        cvReleaseImage(&Ihi1[i]);
        cvReleaseImage(&Ihi2[i]);
        cvReleaseImage(&Ihi3[i]);
    }
    cvReleaseImage(&Iscratch);
    cvReleaseImage(&Iscratch2);
    cvReleaseImage(&Igray1);
    cvReleaseImage(&Igray2);
    cvReleaseImage(&Igray3);
    cvReleaseImage(&Imaskt);
}

// Accumulate the background statistics for one more frame.
// We accumulate the images, the image differences and the count of images for the
// routine createModelsfromStats() to work on after we're done accumulating N frames.
// I       background image, 3 channel, 8u
// number  camera number
void accumulateBackground(IplImage *I, int number)
{
    static int first = 1;
    cvCvtScale(I, Iscratch, 1, 0);  // to float; cxcore defines cvCvtScale (and cvScale) as aliases of cvConvertScale
    if (!first) {
        cvAcc(Iscratch, IavgF[number]);                  // add the two images: IavgF[number] += Iscratch; IavgF[] accumulates the frames over time
        cvAbsDiff(Iscratch, IprevF[number], Iscratch2);  // Iscratch2 = abs(Iscratch - IprevF[number])
        cvAcc(Iscratch2, IdiffF[number]);                // IdiffF[] accumulates the frame-to-frame absolute differences
        Icount[number] += 1.0;                           // count of accumulated frames
    }
    first = 0;
    cvCopy(Iscratch, IprevF[number]);  // after this call, the current frame is saved as the previous frame
}

// Scale the average difference from the average image into the high acceptance threshold
void scaleHigh(float scale, int num)  // sets the high threshold of the background model
{
    cvConvertScale(IdiffF[num], Iscratch, scale);  // converts with rounding and saturation
    cvAdd(Iscratch, IavgF[num], IhiF[num]);        // high threshold = average image + scale * average difference
    cvCvtPixToPlane(IhiF[num], Ihi1[num], Ihi2[num], Ihi3[num], 0);  // cvCvtPixToPlane is cvSplit: splits a multi-channel image into single-channel images
}

// Scale the average difference from the average image into the low acceptance threshold
void scaleLow(float scale, int num)  // sets the low threshold of the background model
{
    cvConvertScale(IdiffF[num], Iscratch, scale);  // converts with rounding and saturation
    cvSub(IavgF[num], Iscratch, IlowF[num]);       // low threshold = average image - scale * average difference
    cvCvtPixToPlane(IlowF[num], Ilow1[num], Ilow2[num], Ilow3[num], 0);
}

// Once you've learned the background long enough, turn it into a background model
void createModelsfromStats()
{
    for (int i = 0; i < NUM_CAMERAS; i++) {
        cvConvertScale(IavgF[i], IavgF[i], (double)(1.0 / Icount[i]));   // mean of the accumulated frames
        cvConvertScale(IdiffF[i], IdiffF[i], (double)(1.0 / Icount[i])); // mean of the accumulated absolute differences
        cvAddS(IdiffF[i], cvScalar(1.0, 1.0, 1.0), IdiffF[i]);  // make sure diff is always something; cvAddS adds a scalar to every pixel
        scaleHigh(HIGH_SCALE_NUM, i);  // HIGH_SCALE_NUM is initially defined as 7; it is really a multiplier
        scaleLow(LOW_SCALE_NUM, i);    // LOW_SCALE_NUM is initially defined as 6
    }
}

// Create a binary 0/255 mask where 255 means foreground pixel
// I      input image, 3 channel, 8u
// Imask  mask image to be created, 1 channel, 8u
// num    camera number
void backgroundDiff(IplImage *I, IplImage *Imask, int num)  // mask should be grayscale
{
    cvCvtScale(I, Iscratch, 1, 0);  // to float
    // channel 1
    cvCvtPixToPlane(Iscratch, Igray1, Igray2, Igray3, 0);
    cvInRange(Igray1, Ilow1[num], Ihi1[num], Imask);   // where Igray1 lies between Ilow1[] and Ihi1[], the corresponding point of Imask is set to 255 (counts as background)
    // channel 2
    cvInRange(Igray2, Ilow2[num], Ihi2[num], Imaskt);  // i.e. a pixel deviating from the mean by more than 6x (low side) or 7x (high side) the mean absolute difference is considered foreground
    cvOr(Imask, Imaskt, Imask);
    // channel 3
    cvInRange(Igray3, Ilow3[num], Ihi3[num], Imaskt);  // the fixed scales 6 and 7 are not ideal; fortunately the project lets you tune them manually
    cvOr(Imask, Imaskt, Imask);
    // finally, invert the results
    cvSubRS(Imask, cvScalar(255), Imask);  // foreground is represented by 255, background by 0
}
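
For completeness, here is a minimal driver sketch showing the calling sequence described in the header comment (allocate, accumulate over N frames, build the model, then diff each new frame). The video filename "test.avi" and the learning period of 30 frames are hypothetical values chosen for illustration; they are not part of the original project:

    #include "avg_background.h"

    int main()
    {
        CvCapture *capture = cvCreateFileCapture("test.avi");  // hypothetical input video
        IplImage *frame = cvQueryFrame(capture);               // first frame, used only for allocation sizes
        IplImage *mask  = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);

        AllocateImages(frame);

        // Learn the background over the first 30 frames (arbitrary choice for this sketch)
        for (int i = 0; i < 30 && (frame = cvQueryFrame(capture)) != NULL; i++)
            accumulateBackground(frame);
        createModelsfromStats();

        // Segment the remaining frames
        cvNamedWindow("foreground", 1);
        while ((frame = cvQueryFrame(capture)) != NULL) {
            backgroundDiff(frame, mask);       // 255 = foreground, 0 = background
            cvShowImage("foreground", mask);
            if (cvWaitKey(30) == 27) break;    // Esc to quit
        }

        DeallocateImages();
        cvReleaseImage(&mask);
        cvReleaseCapture(&capture);
        return 0;
    }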

Second, the working principle of the codebook algorithm.

Because the simple background subtraction method cannot model a dynamic background, some researchers proposed the codebook algorithm.

The algorithm builds a codebook for each pixel in the image, and each codebook may contain several code elements. Each code element has its own maximum and minimum learning thresholds and its own maximum and minimum detection thresholds. During background modeling, every time a new frame arrives, each pixel is matched against its codebook: if the pixel value falls within the learning thresholds of some code element, the pixel is considered to deviate little from that element's history, and after certain pixel-value comparisons the element's learning and detection thresholds can also be updated. If the new pixel value matches none of the code elements in the codebook, the background at that point may be dynamic, so a new code element is created for it and its member variables are initialized. Therefore, during background learning each pixel can correspond to multiple code elements, which is how a complex dynamic background can be learned.
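
To make the structure concrete, here is a simplified sketch of the kind of per-pixel code element the method uses. The field names (learnHigh, learnLow, min, max) follow the style of the Learning OpenCV codebook example, but the layout and the match test below are a condensed illustration rather than the project's actual cv_yuv_codebook code:

    #define CB_CHANNELS 3

    typedef struct code_element {
        unsigned char learnHigh[CB_CHANNELS];  // high learning threshold per channel
        unsigned char learnLow[CB_CHANNELS];   // low learning threshold per channel
        unsigned char max[CB_CHANNELS];        // high detection bound per channel
        unsigned char min[CB_CHANNELS];        // low detection bound per channel
    } code_element;

    // Return 1 if pixel p falls within element e's learning thresholds in every channel.
    static int matches(const code_element *e, const unsigned char *p)
    {
        for (int c = 0; c < CB_CHANNELS; c++)
            if (p[c] < e->learnLow[c] || p[c] > e->learnHigh[c])
                return 0;
        return 1;
    }

During learning, a matching element has its bounds (and, more slowly, its learning thresholds) relaxed toward the new pixel value; when no element matches, a new element is appended to that pixel's codebook, which is how several background modes per pixel are represented.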

The code and comments about the codebook algorithm are as follows:

cv_yuv_codebook.h File:

//////////////////////////////////////////////////////////////////////////////
// Accumulate average and ~std (really absolute difference) image and use
