The meaning of classification
Classification in the traditional sense: biological taxonomy
Prediction: weather forecasting
Decision making: yes or no
Traditional models of classification
What is the difference between classification (discriminant analysis) and clustering? Classification learns from labeled samples; clustering groups unlabeled data.
Supervised learning, unsupervised learning, semi-supervised learning
Common classification models and algorithms
Linear discriminant analysis
Distance discriminant analysis
Bayesian classifiers
Decision trees
Support vector machines (SVM)
Neural networks
Typical text mining scenarios
Automatic categorization of web pages
Spam email detection
Automatic analysis of comments
Inferring user preferences from the content they access
Automatic categorization of Web pages
Automated news portal systems (Baidu News, Google News, etc.)
Search engines push different categories of results depending on the user's tag type
Distance Discriminant method
Principle: compute the distance from the point to be classified to each class, and assign it to the nearest class
Mahalanobis distance (Shiry book p445; why not Euclidean distance?), computed with the mahalanobis() function
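The nearest-class rule above can be sketched in Python with NumPy (the course material uses R's mahalanobis() function; the class data and function names here are illustrative assumptions, not from the original notes):

```python
import numpy as np

def mahalanobis_sq(x, mean, cov_inv):
    """Squared Mahalanobis distance from x to a class summarized by (mean, cov_inv)."""
    d = x - mean
    return float(d @ cov_inv @ d)

def distance_discriminant(x, class_samples):
    """Assign x to the class whose Mahalanobis distance is smallest.

    class_samples: dict mapping class label -> (n_samples, n_features) array.
    """
    best_label, best_dist = None, np.inf
    for label, samples in class_samples.items():
        mean = samples.mean(axis=0)
        # Covariance estimated per class; Mahalanobis distance rescales each
        # direction by the class's own spread, unlike plain Euclidean distance.
        cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
        dist = mahalanobis_sq(x, mean, cov_inv)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

The use of the inverse covariance is exactly why Mahalanobis distance is preferred over Euclidean distance here: it accounts for correlated features and differing variances within each class.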
Nearest neighbor Algorithm KNN
Main idea of the algorithm:
1. Find the K training samples nearest to the point to be classified
2. Look at the class labels of those K samples; a majority vote determines the class assigned to the point
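The two steps above can be sketched as follows (a minimal illustration using Euclidean distance; the function and argument names are my own, not from the notes):

```python
import math
from collections import Counter

def knn_classify(point, samples, labels, k=3):
    """Step 1: sort training samples by distance to `point` and keep the k nearest.
    Step 2: majority vote among the labels of those k neighbors."""
    nearest = sorted(
        (math.dist(point, s), lab) for s, lab in zip(samples, labels)
    )[:k]
    votes = Counter(lab for _, lab in nearest)
    return votes.most_common(1)[0][0]
```

K is usually chosen odd to avoid ties in binary problems.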
Bayesian classifier
Background: the principle of the naive Bayesian text classifier
Bayes is everywhere
Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are,
the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae.
The rset can be a toatl mses and you can sitll raed it wouthit porbelm.
Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.
Research shows the same holds for Chinese: scrambling the order of characters does not necessarily prevent reading; for example, only after finishing such a sentence do you notice that the characters in it are all out of order.
Related research by Prof. Daniel Kahneman
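The principle of the naive Bayesian text classifier mentioned above can be sketched as a multinomial model with Laplace (add-one) smoothing; this is a minimal illustrative implementation, and the class name, example documents, and labels are my own assumptions:

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Multinomial naive Bayes for text, with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)  # per-class word frequencies
        self.class_counts = Counter(labels)
        self.vocab = set()
        for doc, lab in zip(docs, labels):
            words = doc.split()
            self.word_counts[lab].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        """Pick the class maximizing log P(class) + sum_w log P(w | class)."""
        best_lab, best_logp = None, -math.inf
        n_docs = sum(self.class_counts.values())
        for lab in self.class_counts:
            logp = math.log(self.class_counts[lab] / n_docs)  # log prior
            total = sum(self.word_counts[lab].values())
            for w in doc.split():
                # Add-one smoothing avoids zero probability for unseen words.
                logp += math.log((self.word_counts[lab][w] + 1) /
                                 (total + len(self.vocab)))
            if logp > best_logp:
                best_lab, best_logp = lab, logp
        return best_lab
```

The "naive" conditional-independence assumption appears in the per-word sum: each word's probability is multiplied in independently given the class.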
Bayesian Belief Network
Bayesian belief network, abbreviated BBN
The naive Bayesian classifier requires the strong assumption that the features are conditionally independent of each other, which restricts the model's applicability.
A directed acyclic graph expresses the dependencies between variables: variables are represented by nodes, dependencies by edges.
Ancestor, parent, and descendant nodes. In a Bayesian network, a node is conditionally independent of all its non-descendant nodes given its parent nodes.
Each node carries a conditional probability table (CPT) giving the probability of the node conditioned on its parent nodes.
Modeling steps
Create the network structure (relies on the knowledge of domain experts)
Compute the CPTs (learned from data)
If the data is incomplete, iterative training is required (similar to neural networks, using gradient descent).
CPT Calculation
If node X has no parent node, its CPT contains only the prior probability P(X)
If node X has exactly one parent node Y, its CPT contains the conditional probability P(X|Y)
If node X has multiple parent nodes Y1, Y2, ..., Yk, its CPT contains the conditional probability P(X|Y1, Y2, ..., Yk)
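The three CPT cases above can be illustrated with a tiny two-node network Y → X, where a marginal probability is obtained by summing over the parent's values; the variable names (rain/grass) are hypothetical examples, not from the notes:

```python
# Tiny Bayesian network: Y -> X.
# Y has no parent, so its CPT is just the prior P(Y).
p_y = {"rain": 0.3, "dry": 0.7}

# X has one parent Y, so its CPT stores P(X | Y), one row per value of Y.
p_x_given_y = {
    "rain": {"wet_grass": 0.9, "dry_grass": 0.1},
    "dry":  {"wet_grass": 0.2, "dry_grass": 0.8},
}

def marginal_x(x):
    """P(X = x) = sum over y of P(Y = y) * P(X = x | Y = y)."""
    return sum(p_y[y] * p_x_given_y[y][x] for y in p_y)
```

With several parents Y1, ..., Yk the CPT simply has one row per combination of parent values, which is why its size grows exponentially in k.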
Machine Learning, Week 5: linear classifiers, the KNN algorithm, the naive Bayesian classifier, and text mining