Common parameters and usage mistakes of libsvm

1. Common parameters

Basic usage:

svm-train training_set_file model_file
svm-predict test_file model_file output_file

Automatic script: python easy.py train_data test_data — it selects the optimal parameters automatically and also normalizes the data automatically, scaling the training set and the test set with the same normalization parameters.

Key options:

-c: penalty parameter C
-g: kernel parameter gamma
-v n: n-fold cross-validation
-s svm_type: set type of SVM (default 0)
    0 -- C-SVC
    1 -- nu-SVC
    2 -- one-class SVM
    3 -- epsilon-SVR
    4 -- nu-SVR
-t kernel_type: set type of kernel function (default 2)
    0 -- linear: u'*v
    1 -- polynomial: (gamma*u'*v + coef0)^degree
    2 -- radial basis function: exp(-gamma*|u-v|^2)
    3 -- sigmoid: tanh(gamma*u'*v + coef0)
-d degree: set degree in kernel function (default 3)
-g gamma: set gamma in kernel function (default 1/k, where k is the number of attributes in the input data)
-r coef0: set coef0 in kernel function (default 0)
-c cost: set the parameter C of C-SVC, epsilon-SVR, and nu-SVR (default 1)
-n nu: set the parameter nu of nu-SVC, one-class SVM, and nu-SVR (default 0.5)
-p epsilon: set the epsilon in loss function of epsilon-SVR (default 0.1)
-m cachesize: set cache memory size in MB (default 100)
-e epsilon: set tolerance of termination criterion (default 0.001)
-h shrinking: whether to use the shrinking heuristics, 0 or 1 (default 1)
-b probability_estimates: whether to train a SVC or SVR model for probability estimates, 0 or 1 (default 0)
-wi weight: set the parameter C of class i to weight*C, for C-SVC (default 1)

2. Common libsvm usage mistakes

(1) Normalizing the training set and the test set to [0, 1] independently may lead to poor experimental results; the test set must be scaled with the parameters computed from the training set.
(2) If the number of features per sample is very large, there is no need to use the RBF kernel to map the samples to a higher-dimensional space.
    a) When the number of features is very large, a linear kernel already gives very good results, and only the parameter C needs to be selected.
    b) The RBF kernel performs at least as well as the linear kernel, but it requires searching the entire (C, gamma) parameter space.
(3) Number of samples << number of features: a linear kernel is recommended; it can achieve the same performance as RBF.
(4) Both the number of samples and the number of features are very large: liblinear is recommended; it needs much less time and memory with comparable accuracy.
(5) Number of samples >> number of features: to use a linear model, use liblinear with the -s 2 option.
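The four kernel formulas listed under -t above can be sketched in plain Python. This is only an illustrative sketch of the formulas, not libsvm's actual implementation; the function names are my own, and the defaults (degree=3, coef0=0, gamma=1/k) follow the documented defaults.

```python
import math

def dot(u, v):
    """Inner product u' * v."""
    return sum(ui * vi for ui, vi in zip(u, v))

def linear_kernel(u, v):
    # -t 0: u' * v
    return dot(u, v)

def polynomial_kernel(u, v, gamma, degree=3, coef0=0.0):
    # -t 1: (gamma * u' * v + coef0) ^ degree
    return (gamma * dot(u, v) + coef0) ** degree

def rbf_kernel(u, v, gamma):
    # -t 2: exp(-gamma * |u - v|^2)
    sq_dist = sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    return math.exp(-gamma * sq_dist)

def sigmoid_kernel(u, v, gamma, coef0=0.0):
    # -t 3: tanh(gamma * u' * v + coef0)
    return math.tanh(gamma * dot(u, v) + coef0)

u, v = [1.0, 0.0], [1.0, 0.0]
gamma = 1.0 / len(u)            # libsvm default: 1/num_features
print(linear_kernel(u, v))      # 1.0
print(rbf_kernel(u, v, gamma))  # identical points -> exp(0) = 1.0
```

Note how the RBF kernel depends only on the distance |u - v|, which is why it can separate data that a linear kernel cannot, at the cost of having to tune gamma as well as C.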
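Mistake (1) above — scaling the training and test sets independently — can be illustrated with a minimal min-max scaler in plain Python: the per-feature min and max are computed from the training set only and then reused for the test set, which is the same idea as saving and restoring a range file with svm-scale. The function names here are hypothetical, for illustration only.

```python
def fit_minmax(train):
    """Compute per-feature (min, max) from the TRAINING set only."""
    n_features = len(train[0])
    mins = [min(row[j] for row in train) for j in range(n_features)]
    maxs = [max(row[j] for row in train) for j in range(n_features)]
    return mins, maxs

def apply_minmax(data, mins, maxs, lower=0.0, upper=1.0):
    """Scale any data set with the parameters learned on the training set."""
    scaled = []
    for row in data:
        scaled.append([
            lower + (upper - lower) * (x - lo) / (hi - lo) if hi != lo else lower
            for x, lo, hi in zip(row, mins, maxs)
        ])
    return scaled

train = [[0.0, 10.0], [4.0, 30.0]]
test = [[2.0, 20.0], [6.0, 40.0]]   # 6.0 lies outside the training range

mins, maxs = fit_minmax(train)
print(apply_minmax(train, mins, maxs))  # [[0.0, 0.0], [1.0, 1.0]]
print(apply_minmax(test, mins, maxs))   # [[0.5, 0.5], [1.5, 1.5]]
```

Note that a test value outside the training range correctly maps outside [0, 1]; rescaling the test set to exactly [0, 1] on its own would silently change the geometry the model was trained on.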