Series article: "Machine Learning in Action" study notes
This chapter introduces the first machine learning algorithm in *Machine Learning in Action*: the k-nearest neighbors algorithm, which is both effective and easy to master. First, we explore the basic theory of the k-nearest neighbors algorithm and how to classify items using distance measurements; next, we use Python to import and parse data from a text file; then we discuss common errors that can arise when working with many data sources and how to avoid them; and finally, we use practical examples to show how the k-nearest neighbors algorithm can be used to improve matches on a dating site and to build a handwritten digit recognition system.
1. Overview of the K-Nearest Neighbors Algorithm
Simply put, the k-nearest neighbors algorithm classifies a sample by measuring the distances between its feature values and those of known samples.
K-Nearest Neighbor algorithm
Advantages: high accuracy, insensitive to outliers, no assumptions about the input data.
Disadvantages: high computational complexity, high space complexity.
Applicable data types: numeric and nominal.
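Since classification rests on measuring distances between feature vectors, a concrete distance function helps make the idea precise. The sketch below shows the Euclidean distance, the measure most commonly used with kNN (the function name and sample points are illustrative, not from the book's code):

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two equal-length feature vectors:
    the square root of the sum of squared per-feature differences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# A 3-4-5 right triangle makes the result easy to check by hand.
print(euclidean_distance((0.0, 0.0), (3.0, 4.0)))  # → 5.0
```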
The k-nearest neighbors algorithm (kNN) works as follows: we have a collection of sample data, known as the training set, with a label for each sample; that is, we know which category each sample in the set belongs to. When new, unlabeled data is entered, each feature of the new data is compared with the corresponding features of the samples in the training set, and the algorithm then extracts the category labels of the most similar samples (the nearest neighbors).
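The procedure above can be sketched as a short classifier: compute the distance from the new point to every training sample, sort by distance, and take a majority vote among the k closest labels. This is a minimal sketch using Euclidean distance; the function name and the tiny data set are illustrative, not the book's own code:

```python
import math
from collections import Counter

def knn_classify(new_point, training_set, labels, k=3):
    """Classify new_point by majority vote among the labels of its
    k nearest training samples (Euclidean distance)."""
    # Pair each training sample's distance with its label.
    distances = [
        (math.dist(new_point, sample), label)
        for sample, label in zip(training_set, labels)
    ]
    # Sort ascending by distance and keep the k closest labels.
    distances.sort(key=lambda pair: pair[0])
    k_nearest = [label for _, label in distances[:k]]
    # Return the most common label among the k neighbors.
    return Counter(k_nearest).most_common(1)[0][0]

# Tiny illustrative training set: two samples per class.
samples = [(1.0, 1.1), (1.0, 1.0), (0.0, 0.0), (0.0, 0.1)]
labels = ["A", "A", "B", "B"]
print(knn_classify((0.1, 0.1), samples, labels, k=3))  # → B
```

Note that `math.dist` requires Python 3.8+; on older versions, the Euclidean distance can be computed by hand as in the previous snippet.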