Perceptron: This is the simplest machine learning algorithm, but there are a few points worth noting.
The first is the choice of the loss function; to minimize it, gradient descent is used in the iterative training process, which finally yields the optimal w and b.
The geometric interpretation is that the values of w and b are adjusted so that the separating hyperplane moves toward each misclassified point, reducing the distance between that point and the hyperplane, until the hyperplane
passes the misclassified point and it becomes correctly classified.
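The update rule above can be sketched as follows; this is a minimal illustration, and the toy dataset, learning rate eta, and epoch limit are assumptions for the example, not from the text:

```python
def perceptron_train(X, y, eta=1.0, epochs=100):
    """Learn w, b by stochastic gradient descent on the perceptron loss:
    whenever a point is misclassified (y * (w.x + b) <= 0), move the
    hyperplane toward it."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b) <= 0:
                # Gradient step: pull the hyperplane past the misclassified point
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
                b += eta * yi
                errors += 1
        if errors == 0:  # converged: every training point correctly classified
            break
    return w, b

# Linearly separable toy data, labels in {-1, +1}
X = [(3, 3), (4, 3), (1, 1)]
y = [1, 1, -1]
w, b = perceptron_train(X, y)
```

For linearly separable data the loop terminates with every point on the correct side of the hyperplane; for non-separable data it stops after the epoch limit.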
K Nearest Neighbor:
Given a training dataset, for a new input instance, find the k instances in the training set nearest to it; the input instance is assigned to the class
that the majority of those k instances belong to.
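The majority-vote rule can be sketched as a brute-force scan (the toy data and Euclidean metric are assumptions for the example):

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance, brute-force scan over the whole training set)."""
    dists = sorted(
        (math.dist(xi, x), yi) for xi, yi in zip(train_X, train_y)
    )
    votes = Counter(yi for _, yi in dists[:k])
    return votes.most_common(1)[0][0]

# Two well-separated clusters as toy data
train_X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
train_y = ["a", "a", "a", "b", "b", "b"]
pred = knn_predict(train_X, train_y, (0.5, 0.5))
```

The brute-force scan costs one distance computation per training point per query, which motivates the kd-tree discussed next.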
It is worth noting that the k-nearest-neighbor algorithm is commonly implemented with a kd-tree, a special storage structure used to reduce the number of distance computations. Its
construction and search procedures are worth learning.
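A minimal sketch of both procedures, construction and nearest-neighbor search; the dict-based node layout and sample points are assumptions for illustration:

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a kd-tree: cycle through the axes, split at the
    median point along the current axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, depth=0, best=None):
    """Find the training point nearest to target, pruning any subtree whose
    splitting plane is farther away than the current best distance."""
    if node is None:
        return best
    point = node["point"]
    if best is None or math.dist(point, target) < math.dist(best, target):
        best = point
    axis = depth % len(target)
    diff = target[axis] - point[axis]
    # Descend the side of the splitting plane that contains the target first
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, depth + 1, best)
    # Only search the far side if the plane is closer than the best so far
    if abs(diff) < math.dist(best, target):
        best = nearest(far, target, depth + 1, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
```

The pruning test is what saves distance computations: whole subtrees are skipped whenever the splitting plane cannot hide a closer point.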
Naive Bayes:
It learns the joint probability distribution from the training dataset; then, for a given input x, it uses Bayes' theorem to find the output y with the largest posterior probability.
Its most important feature is the conditional-independence assumption on the conditional probability distribution; that is, the features used for classification are assumed independent of one another given the class.
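Under that independence assumption the posterior factorizes into a prior times per-feature conditional probabilities, which can be sketched with simple counting (the toy dataset is an assumption, and smoothing is omitted for clarity):

```python
from collections import Counter, defaultdict

def nb_fit(X, y):
    """Estimate P(y) and P(x_j | y) by counting over the training set."""
    prior = Counter(y)                # class -> count
    cond = defaultdict(Counter)       # (class, feature index) -> value counts
    for xi, yi in zip(X, y):
        for j, v in enumerate(xi):
            cond[(yi, j)][v] += 1
    return prior, cond, len(y)

def nb_predict(prior, cond, n, x):
    """Return the class maximizing P(y) * prod_j P(x_j | y),
    which by Bayes' theorem maximizes the posterior P(y | x)."""
    best_c, best_p = None, -1.0
    for c, count in prior.items():
        p = count / n
        for j, v in enumerate(x):
            p *= cond[(c, j)][v] / count  # conditional independence
        if p > best_p:
            best_c, best_p = c, p
    return best_c

# Toy data: one numeric-coded feature and one categorical feature
X = [(1, "S"), (1, "M"), (2, "M"), (2, "L"), (2, "S")]
y = [-1, -1, 1, 1, 1]
prior, cond, n = nb_fit(X, y)
```

Because the features factorize, training reduces to independent per-feature counts; in practice Laplace smoothing is added so unseen feature values do not zero out the product.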
Summary of machine learning algorithms