By analyzing the compute() function in OpenCV's hog.cpp, we can call it directly to generate the HOG descriptor of each sample, and then train an SVM to obtain the detection operator.
1. Prepare the samples.
2. For each image, call
hog.compute(img, descriptors, Size(8,8), Size(0,0));
to generate the HOG descriptor, then save it to a file:
for (int j = 0; j < 3780; j++)
    fprintf(f, "%f,", descriptors[j]);
3. Use SVM training and classification to obtain the weight coefficients; these are what getDefaultPeopleDetector() returns as the detection operator detector[].
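The loop above writes 3780 values because that is the HOG descriptor length for OpenCV's default people-detection parameters (64x128 window, 16x16 blocks, 8x8 block stride, 8x8 cells, 9 orientation bins). A quick arithmetic check:

```python
# HOG descriptor length for OpenCV's default people-detector parameters.
win_w, win_h = 64, 128        # detection window
block, stride, cell = 16, 8, 8
bins = 9

blocks_x = (win_w - block) // stride + 1    # block positions per row
blocks_y = (win_h - block) // stride + 1    # block positions per column
cells_per_block = (block // cell) ** 2      # cells inside one block

descriptor_len = blocks_x * blocks_y * cells_per_block * bins
print(descriptor_len)  # 3780
```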
Using LIBSVM to obtain the weights
To use LIBSVM directly, you need to construct the data in its format. Below is a brief description of using LIBSVM under MATLAB.
Download libsvm-mat-2.9-1 (the libsvm 3.12 release).
Method 1: switch to the directory where libsvm-mat-2.9-1 is located, open MATLAB and type:
mex -setup
Method 2: in the MATLAB menu, File --> Set Path, add the libsvm-mat-2.9-1 location.
----------------------
The following example uses the heart_scale dataset that ships with libsvm-mat-2.9-1.
----------- kernel_type: linear ----------------------------------
load heart_scale.mat
train_data = heart_scale_inst(1:150,:);
train_label = heart_scale_label(1:150,:);
test_data = heart_scale_inst(151:270,:);
test_label = heart_scale_label(151:270,:);
model_linear = svmtrain(train_label, train_data, '-t 0')
[predict_label_L, accuracy_L, dec_values_L] = svmpredict(test_label, test_data, model_linear);
----------- The model obtained after training -----------
model_linear =
    Parameters: [5x1 double]
      nr_class: 2
       totalSV: 58
           rho: -1.1848
         Label: [2x1 double]
         ProbA: []
         ProbB: []
           nSV: [2x1 double]
       sv_coef: [58x1 double]
           SVs: [58x13 double]
----------- How to get the weight coefficients from the model -----------
Refer to the LIBSVM FAQ:
http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html#f804
For a two-class problem with a linear kernel, the weight vector w and bias b of the decision function (y = w'x + b) can be obtained as follows:
w = model_linear.SVs' * model_linear.sv_coef;
b = -model_linear.rho;
---------
This w is the detector[] in OpenCV.
Note that the training samples may be over-fitted or under-fitted ~~~ a typical symptom is that every image gets a detection box right in the middle!
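The two MATLAB lines above amount to a single matrix product. A minimal sketch in Python (numpy assumed; the support vectors, coefficients, and rho below are toy values, not a trained model), noting that OpenCV's setSVMDetector() expects the weight vector with the bias term appended:

```python
import numpy as np

# Toy stand-ins for model_linear.SVs (n_sv x dim) and model_linear.sv_coef (n_sv x 1).
SVs = np.array([[1.0, 2.0, 0.5],
                [0.0, 1.0, 1.0]])
sv_coef = np.array([[0.7], [-0.3]])
rho = -1.1848

w = SVs.T @ sv_coef   # dim x 1, same as SVs' * sv_coef in MATLAB
b = -rho

# OpenCV's detector[] is w followed by the bias term.
detector = np.append(w.ravel(), b)
print(detector)  # [0.7, 1.1, 0.05, 1.1848]
```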
LIBSVM parameter description:
English:
libsvm_options:
-s svm_type : set type of SVM (default 0)
    0 -- C-SVC
    1 -- nu-SVC
    2 -- one-class SVM
    3 -- epsilon-SVR
    4 -- nu-SVR
-t kernel_type : set type of kernel function (default 2)
    0 -- linear: u'*v
    1 -- polynomial: (gamma*u'*v + coef0)^degree
    2 -- radial basis function: exp(-gamma*|u-v|^2)
    3 -- sigmoid: tanh(gamma*u'*v + coef0)
    4 -- precomputed kernel (kernel values in training_instance_matrix)
-d degree : set degree in kernel function (default 3)
-g gamma : set gamma in kernel function (default 1/k)
-r coef0 : set coef0 in kernel function (default 0)
-c cost : set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
-n nu : set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
-p epsilon : set the epsilon in loss function of epsilon-SVR (default 0.1)
-m cachesize : set cache memory size in MB (default 100)
-e epsilon : set tolerance of termination criterion (default 0.001)
-h shrinking : whether to use the shrinking heuristics, 0 or 1 (default 1)
-b probability_estimates : whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)
-wi weight : set the parameter C of class i to weight*C, for C-SVC (default 1)
-v n : n-fold cross validation mode
==========================================================
Chinese (translated):
Options: the available options and their meanings are as follows
-s SVM type: set the SVM type (default 0) (3 and 4 are for regression only)
    0 -- C-SVC
    1 -- nu-SVC
    2 -- one-class SVM
    3 -- epsilon-SVR
    4 -- nu-SVR
-t kernel function type: set the kernel function type (default 2)
    0 -- linear: u'*v
    1 -- polynomial: (gamma*u'*v + coef0)^degree
    2 -- RBF: exp(-gamma*|u-v|^2) (Gaussian kernel)
    3 -- sigmoid: tanh(gamma*u'*v + coef0)
-d degree: degree setting in the kernel function (for the polynomial kernel) (default 3)
-g gamma: gamma setting in the kernel function (for the polynomial/RBF/sigmoid kernels) (default 1/k, the reciprocal of the number of attributes; for the Gaussian kernel, gamma = 1/(2σ²))
-r coef0: coef0 setting in the kernel function (for the polynomial/sigmoid kernels) (default 0)
-c cost: set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
-n nu: set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
-p epsilon: set the value of epsilon in the loss function of epsilon-SVR (default 0.1)
-m cachesize: set the cache memory size in MB (default 100)
-e eps: set the tolerance of the termination criterion (default 0.001)
-h shrinking: whether to use the shrinking heuristics, 0 or 1 (default 1)
-wi weight: set the parameter C of class i to weight*C, for C-SVC (default 1)
-v n: n-fold cross validation mode; n is the number of folds and must be >= 2
Here, the k in the -g option refers to the number of attributes in the input data. The -v option randomly splits the data into n parts and computes the cross-validation accuracy and mean-squared error. These parameters can be combined freely with any supported SVM type and kernel function; a parameter that has no effect on the chosen kernel or SVM type is ignored, and a parameter set to an invalid value falls back to its default.
- Parameters: the training parameters
- nr_class: the number of classes in the dataset; = 2 for regression/one-class SVM
- totalSV: the total number of support vectors
- rho: -b of the decision function(s) w'x + b
- Label: the label of each class; empty for regression/one-class SVM
- ProbA: pairwise probability information; empty if -b 0 or for one-class SVM
- ProbB: pairwise probability information; empty if -b 0 or for one-class SVM
- nSV: the number of support vectors for each class; empty for regression/one-class SVM
- sv_coef: the coefficients of the support vectors in the decision functions
- SVs: the support vectors
If you do not use the option '-b 1', ProbA and ProbB are empty matrices. If the '-v' option is specified, cross validation is conducted and the returned model is just a scalar: cross-validation accuracy for classification and mean-squared error for regression.
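Since rho stores -b, it is easy to get the sign wrong when evaluating the decision function by hand. For a linear kernel, the kernel form sum_i alpha_i*<sv_i, x> - rho must equal w'x + b. A small consistency check (numpy, toy values rather than a trained model):

```python
import numpy as np

# Toy model: 2 support vectors in 3 dimensions.
SVs = np.array([[1.0, 2.0, 0.5],
                [0.0, 1.0, 1.0]])
sv_coef = np.array([[0.7], [-0.3]])
rho = -1.1848
x = np.array([1.0, 0.0, 2.0])   # a test point

# Kernel form: sum_i alpha_i * <sv_i, x> - rho
dec_kernel = float(sv_coef.ravel() @ (SVs @ x)) - rho

# Equivalent linear form: w'x + b, with w = SVs'*sv_coef and b = -rho
w = SVs.T @ sv_coef
dec_linear = float(w.ravel() @ x) + (-rho)

print(abs(dec_kernel - dec_linear) < 1e-12)  # True
```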
Parameter details: PPT
LIBSVM introduction paper (an introduction written by Chih-Jen Lin, with detailed descriptions of each SVM type and the mathematical derivations)
Another LIBSVM note: http://blog.csdn.net/boyhailong/article/details/7100968
There is also a useful article explaining slack variables and penalty factors: Relation Extraction - SVM classification with unbalanced sample data - slack variables and penalty factors
from:http://blog.csdn.net/yangtrees/article/details/7605279