the effect was very poor. This is because training data sets at the time were small and computing resources were limited: even training a relatively small network took a long time. Compared with other models, neural networks showed no significant advantage in recognition accuracy, so more researchers turned to SVMs, boosting, nearest-neighbor and other classifiers. These classifiers can be simulated by a neural network with one or two hidden layers.
In order to search for objects of different sizes, the classifier is designed so that its size can be changed, which is more efficient than resizing the image to be examined. Therefore, to detect a target object of unknown size in an image, the scanning procedure usually needs to scan the image several times with search windows of different proportions. Currently, four types of boosting are supported: Discrete AdaBoost, Real AdaBoost, Gentle AdaBoost, and LogitBoost.
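As a rough illustration (not OpenCV's actual implementation), the multi-scale scanning idea above can be sketched in a few lines: the search window grows by a fixed scale factor, and at each size it slides over the image. The function names, the 1.25 scale factor, and the image dimensions are illustrative assumptions.

```python
def sliding_window_scales(min_size, max_size, scale_factor=1.25):
    """Window sizes from min_size up to max_size, growing by
    scale_factor each step (the classifier is resized, not the image)."""
    scales = []
    size = min_size
    while size <= max_size:
        scales.append(size)
        size = int(size * scale_factor)
    return scales

def scan_positions(image_w, image_h, window, step=4):
    """Top-left corners of every placement of a square search window."""
    return [(x, y)
            for y in range(0, image_h - window + 1, step)
            for x in range(0, image_w - window + 1, step)]

# Example: scan a 128x96 image with windows from 24 px upward.
for w in sliding_window_scales(24, 96):
    positions = scan_positions(128, 96, w)
    print(w, len(positions))
```

Each pass over the image at one window size corresponds to one of the "several scans" the text describes.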
I. DisMax
1. tie: query and init param for the tiebreaker value
2. qf: query and init param for query fields
3. pf: query and init param for phrase boost fields
4. pf2: query and init param for bigram phrase boost fields
5. pf3: query and init param for trigram phrase boost fields
6. mm: query and init param for the min-should-match specification
7. ps: query and init param for the phrase slop value in the phrase boost query (pf fields)
8. ps2: default phrase slop for bigram phrases (pf2)
9. ps3: default phrase slop for trigram phrases (pf3)
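As a sketch of how these parameters appear in a request (the field names, boosts, and values below are hypothetical; the parameter names themselves are the standard DisMax/eDisMax ones):

```python
from urllib.parse import urlencode

# Hypothetical fields and values; only the parameter names are real.
params = {
    "defType": "edismax",
    "q": "gradient boosting",
    "qf": "title^2.0 body",    # query fields, with boosts
    "pf": "title body",        # phrase boost fields
    "pf2": "body",             # bigram phrase fields
    "pf3": "body",             # trigram phrase fields
    "mm": "2<75%",             # min-should-match specification
    "ps": 2,                   # phrase slop for pf
    "ps2": 1,                  # phrase slop for pf2
    "ps3": 1,                  # phrase slop for pf3
    "tie": 0.1,                # tiebreaker between field scores
}
query_string = urlencode(params)
print(query_string)
```

The resulting query string would be appended to a Solr select URL.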
In my blog, the following topics are recommended:
The Mathematics in Machine Learning series:
1) Regression and gradient descent
2) Linear regression and the bias-variance trade-off
3) Boosting and gradient boosting in model combination
4) Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA)
5) The powerful matrix Singular Value Decomposition (SVD) and its applications
① Origin: the boosting algorithm
The purpose of the boosting algorithm is to train, on the same full data set, a number of different weak classifiers of the same type (such as decision trees) by extracting different feature dimensions or parameters each time. A weak classifier is one whose error rate is below 0.5, i.e. only slightly better than random guessing. The weak classifiers are then combined and their evaluations synthesized into an overall result.
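A minimal pure-Python sketch of this idea, combining hypothetical decision stumps by accuracy-weighted voting (the data, the stump thresholds, and the AdaBoost-style weighting are all illustrative; a full boosting algorithm would also re-weight the samples between rounds):

```python
import math

# Toy 1-D data: label is +1 for x >= 5, else -1.
X = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [-1, -1, -1, -1, -1, 1, 1, 1, 1, 1]

# Three hypothetical decision stumps (weak classifiers); each is a
# simple threshold rule, and none is perfect on its own.
stumps = [lambda x: 1 if x >= 2 else -1,
          lambda x: 1 if x >= 6 else -1,
          lambda x: 1 if x >= 8 else -1]

def error_rate(h):
    """Fraction of samples the weak classifier gets wrong."""
    return sum(h(x) != t for x, t in zip(X, y)) / len(X)

# Weight each stump by its accuracy (the AdaBoost alpha formula);
# a lower error rate earns a larger voting weight.
alphas = []
for h in stumps:
    e = max(error_rate(h), 1e-9)
    alphas.append(0.5 * math.log((1 - e) / e))

def ensemble(x):
    """Weighted vote of all stumps: the combined strong classifier."""
    s = sum(a * h(x) for a, h in zip(alphas, stumps))
    return 1 if s >= 0 else -1

print([ensemble(x) for x in X])
```

Each stump alone errs on 10-30% of the points; the weighted combination leans on the most accurate stump while letting the others break ties.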
model.

6. Model Tuning

For the random forest and GBDT models, we need to select the optimal parameters from a large parameter space. The parameters can be divided into two main types: tree-specific parameters and boosting parameters. Tree-specific parameters are those that affect a single tree, while boosting parameters affect the overall ensemble algorithm. Adjusting these parameters appropriately can markedly improve model performance.
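As a hedged sketch of tuning both parameter types together, assuming scikit-learn's GradientBoostingClassifier (the post does not name a library, and the grid values below are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in data for the illustration.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Tree-specific parameters shape each individual tree;
# boosting parameters control the overall ensemble.
param_grid = {
    "max_depth": [2, 3],           # tree-specific
    "min_samples_leaf": [1, 5],    # tree-specific
    "n_estimators": [50, 100],     # boosting
    "learning_rate": [0.05, 0.1],  # boosting
}
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

In practice one usually tunes the tree-specific parameters first with a fixed, modest number of estimators, then refines the boosting parameters.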
difficult to understand. Therefore, after the decision tree is generated it can be drawn, and the classification process becomes much easier to follow. The core of a decision tree is how it splits: the tree's bifurcations are its basis, and the standard way to choose them is information entropy. The concept of entropy can be confusing; simply put, it measures the complexity (uncertainty) of information: the more mixed the information, the higher the entropy.
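The entropy measure described above can be sketched in a few lines (the label lists are illustrative; log base 2 gives entropy in bits):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits: the 'complexity
    of information' the text describes."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(parent, splits):
    """Entropy reduction from splitting `parent` into `splits`;
    a decision tree picks the split with the largest gain."""
    n = len(parent)
    return entropy(parent) - sum(len(s) / n * entropy(s) for s in splits)

# A pure node has entropy 0; a 50/50 node has entropy 1 bit.
print(entropy(["yes"] * 4))
print(entropy(["yes", "no"] * 2))
# A split that perfectly separates the classes gains the full 1 bit.
print(information_gain(["yes", "yes", "no", "no"],
                       [["yes", "yes"], ["no", "no"]]))
```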
currently implemented in the following classes:

ensemble.RandomForestClassifier([...])
    A random forest classifier.
ensemble.RandomForestRegressor([...])
    A random forest regressor.
ensemble.ExtraTreesClassifier([...])
    An extra-trees classifier.
ensemble.ExtraTreesRegressor([n_estimators, ...])
    An extra-trees regressor.
ensemble.GradientBoostingClassifier([loss, ...])
    Gradient Boosting for classification.
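A minimal usage sketch for two of the classes listed above, on synthetic data (the dataset and parameter values are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier,
                              RandomForestClassifier)

# Synthetic stand-in data for the illustration.
X, y = make_classification(n_samples=150, random_state=0)

# Both classes share the same estimator API: fit, predict, score.
for Model in (RandomForestClassifier, ExtraTreesClassifier):
    clf = Model(n_estimators=50, random_state=0).fit(X, y)
    print(Model.__name__, clf.score(X, y))
```

Extra-trees differ from random forests mainly in how split thresholds are chosen (drawn at random rather than optimized), which trades a little bias for lower variance.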
structures, Part 8: Balanced Search Trees (2-3 Trees) - yangecnu - Blog Park
Talking about Algorithms and Data Structures, Part 9: Balanced Search Trees (Red-Black Trees) - yangecnu - Blog Park
Talking about Algorithms and Data Structures, Part 10: Balanced Search Trees (B-Trees) - yangecnu - Blog Park
Talking about Algorithms and Data Structures, Part 11: Hash Tables - yangecnu - Blog Park
Talking about Algorithms and Data Structures, Part 12: Corr
result, and x is the feature.
Bayes' formula reveals the connection between the two models:

P(y | x) = P(x | y) P(y) / P(x)

Because we only care about which discrete value of y has the higher probability (for example, the probability of goat versus the probability of sheep), rather than the exact probability, the formula can be rewritten as:

argmax over y of P(x | y) P(y)

Here P(y | x) is called the posterior probability and P(y) the prior probability.
Therefore, the discriminative model calculates the conditional probability P(y | x) directly, while the generative model calculates the joint probability P(x, y) through P(x | y) and P(y).
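A small numeric illustration, with made-up priors and likelihoods for the goat/sheep example:

```python
# Hypothetical numbers: P(y) is the prior a generative model would
# estimate, and P(x | y) the class-conditional likelihood of the
# observed feature x under each class.
prior = {"goat": 0.3, "sheep": 0.7}
likelihood = {"goat": 0.8, "sheep": 0.2}   # P(x | y) for one observed x

# Bayes: P(y | x) = P(x | y) P(y) / P(x)
evidence = sum(likelihood[c] * prior[c] for c in prior)
posterior = {c: likelihood[c] * prior[c] / evidence for c in prior}
print(posterior)

# For classification only the argmax matters, so P(x) can be dropped:
best = max(prior, key=lambda c: likelihood[c] * prior[c])
print(best)
```

Note that the denominator P(x) is the same for every class, which is exactly why the rewritten argmax form can ignore it.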
Classification is an important research area in data mining, machine learning, and pattern recognition. By analyzing and comparing representative classification algorithms in current data mining, this article summarizes the characteristics of the various algorithms, providing a basis for users choosing an algorithm and for researchers improving one.
I. Classification Algorithm Overview
There are many ways to solve classification problems. A single classification method mainly includes decision trees such as C4.5.
5. Boosting: a combination of multiple discriminative sub-classifiers
6. Random forest: an ensemble of multiple decision trees
7. Face detection with the boosting algorithm / Haar classifier
8. Unsupervised generative algorithm: clustering with expectation maximization (EM)
9. K-nearest neighbor, the simplest classifier
10. Neural networks (multi-layer perceptrons): training the classifier is very slow, but once trained, classification is fast
Transferred from: http://blog.csdn.net/v_july_v/article/details/40718799
The Principle and Derivation of the AdaBoost Algorithm
1 AdaBoost Principle

1.1 What AdaBoost is

AdaBoost, short for "Adaptive Boosting", was proposed by Yoav Freund and Robert Schapire in 1995. Its adaptivity lies in this: the samples misclassified by the previous basic classifier are strengthened, i.e. their weights are increased, and the re-weighted sample set is used to train the next basic classifier.
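One round of this adaptive re-weighting can be sketched as follows (which samples are "wrong" in the round, and the uniform initial weights, are made up for the illustration):

```python
import math

# One round of AdaBoost re-weighting: samples the current weak
# classifier gets wrong have their weights increased, so the next
# classifier focuses on them.
weights = [0.1] * 10                 # uniform initial weights
correct = [True] * 8 + [False] * 2   # hypothetical round-1 results

eps = sum(w for w, ok in zip(weights, correct) if not ok)  # weighted error
alpha = 0.5 * math.log((1 - eps) / eps)                    # classifier weight

# Multiply by e^{-alpha} when correct, e^{+alpha} when wrong, renormalize.
weights = [w * math.exp(-alpha if ok else alpha)
           for w, ok in zip(weights, correct)]
total = sum(weights)
weights = [w / total for w in weights]
print(round(alpha, 3), [round(w, 3) for w in weights])
```

After renormalization the misclassified samples carry half of the total weight, which is exactly what forces the next weak classifier to attend to them.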
statistics-based machine learning approaches proved superior in many respects to the earlier systems based on hand-written rules. The artificial neural network of this period, although also called the multilayer perceptron (MLP), was in fact a shallow model with only one hidden layer of nodes. In the 1990s, a variety of shallow machine learning models were proposed, such as the support vector machine (SVM),
A couple of days ago, while working on an app project, I ran into a problem: the CSS clearly set the font size to 14px, and at first the page displayed fine, but once there was a lot of content the font suddenly became larger. Puzzled, I searched the major sites for a long time before learning that this is the browser's font boosting behavior. The solution first: you can disable it directly by writing the following in your CSS:

body, body * { max-height: 1000000px; }

This may
LightGBM is a gradient boosting framework released by Microsoft's DMTK. Because it is fast and efficient, it may become another big weapon in data mining competitions. Address: https://github.com/Microsoft/LightGBM. The project was hot as soon as it was open-sourced: within three days on GitHub it was starred 1000+ times and forked 200+ times, and nearly a thousand people followed the question "How to view Microsoft's open-source LightGBM?". The next step is to introduce