Simply put, the K-nearest neighbor algorithm classifies a sample by measuring the distances between its feature values and those of known samples.
Advantages and Disadvantages
- Advantages: high accuracy, insensitive to outliers, no assumptions about the input data.
- Disadvantages: high computational complexity and high space complexity.
- Applicable data range: numerical and nominal values.
Example:

| Movie Name | Fight Scenes | Kissing Scenes | Known Movie Type |
| --- | --- | --- | --- |
| California | 3 | 104 | Romance |
| Gongfu | 99 | 5 | Action |
Algorithm pseudo-code:
For each point in the dataset with unknown class labels, do the following:
- Calculate the distance between the current point and every point in the dataset with known class labels.
- Sort the distances in ascending order.
- Select the K points with the smallest distances to the current point.
- Count the frequency of each class among these K points.
- Return the class that occurs most frequently among the K points as the predicted class for the current point.
Algorithm implementation details:
- Compute the distances between the unknown sample and every sample in the known-class dataset with matrix operations.
- Obtain the index values that would sort the distances in ascending order (nearest first).
- Use these index values to look up the corresponding labels, count how often each label occurs among the first k, and store the counts in a dictionary {label: count, ...}.
- Sort the dictionary entries by count from high to low; the label in the first position is the most frequent one and is returned.
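Putting the pseudo-code and implementation details together, a minimal NumPy sketch might look like the following. The names `classify0`, `inX`, and `dataSet` are illustrative assumptions, not fixed by the text above:

```python
import operator
from numpy import array, tile

def classify0(inX, dataSet, labels, k):
    """Classify inX against dataSet (one sample per row) by majority
    vote among its k nearest neighbors (Euclidean distance)."""
    dataSetSize = dataSet.shape[0]
    # Distance via matrix operations: replicate inX and subtract row-wise
    diffMat = tile(inX, (dataSetSize, 1)) - dataSet
    distances = ((diffMat ** 2).sum(axis=1)) ** 0.5
    # Indices that would sort the distances in ascending order
    sortedDistIndices = distances.argsort()
    # Count the labels of the k nearest points in a dict {label: count}
    classCount = {}
    for i in range(k):
        voteLabel = labels[sortedDistIndices[i]]
        classCount[voteLabel] = classCount.get(voteLabel, 0) + 1
    # Sort by count, high to low; the first entry is the most frequent label
    sortedClassCount = sorted(classCount.items(),
                              key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]

# Example: with training points [[1.0,1.1],[1.0,1.0],[0,0],[0,0.1]]
# labeled ['A','A','B','B'], classify0([0, 0], ..., k=3) returns 'B'.
```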
Other knowledge:
One: Parse data from a text file (four columns per line; the last column is the label).
from numpy import zeros

def file2matrix(filename):
    # Count the lines to size the feature matrix
    fr = open(filename)
    numberOfLines = len(fr.readlines())
    returnMat = zeros((numberOfLines, 3))  # three feature columns
    classLabelVector = []
    fr = open(filename)  # reopen: readlines() consumed the file
    index = 0
    for line in fr.readlines():
        line = line.strip()
        listFromLine = line.split('\t')
        returnMat[index, :] = listFromLine[0:3]
        classLabelVector.append(int(listFromLine[-1]))
        index += 1
    return returnMat, classLabelVector
Two: Analyze the data: create scatter plots with matplotlib.
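A minimal sketch of that plotting step, assuming a feature matrix like the one returned by file2matrix; the toy data, column choices, and axis labels are illustrative assumptions:

```python
import matplotlib
matplotlib.use('Agg')  # render to a file, no display needed
import matplotlib.pyplot as plt
import numpy as np

# Toy feature matrix standing in for real parsed data
datingDataMat = np.array([[3.0, 104.0, 0.5],
                          [99.0, 5.0, 0.1]])

fig = plt.figure()
ax = fig.add_subplot(111)
# Plot the first feature column against the second
ax.scatter(datingDataMat[:, 0], datingDataMat[:, 1])
ax.set_xlabel('fight scenes')
ax.set_ylabel('kissing scenes')
fig.savefig('scatter.png')
```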
Three: Prepare the data: normalize the values.
newValue = (oldValue - min) / (max - min)
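This formula can be applied column-wise with NumPy. A minimal sketch; the name autoNorm follows common kNN examples and is an assumption:

```python
from numpy import array, tile

def autoNorm(dataSet):
    """Min-max normalize each column of dataSet into [0, 1]."""
    minVals = dataSet.min(0)   # per-column minimums
    maxVals = dataSet.max(0)   # per-column maximums
    ranges = maxVals - minVals
    m = dataSet.shape[0]
    # newValue = (oldValue - min) / (max - min), element-wise
    normDataSet = (dataSet - tile(minVals, (m, 1))) / tile(ranges, (m, 1))
    return normDataSet, ranges, minVals
```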
Summary: K-nearest neighbors requires large storage space and long computation time. Another drawback is that it provides no information about the underlying structure of the data, so we cannot tell what an average or typical sample of each class looks like.
MLIA Chapter 2: K-Nearest Neighbor Algorithm (kNN)