Pattern recognition classifier KNN --- a C++ implementation with training data


Proximity algorithm


The decision process of the KNN algorithm

KNN is the abbreviation of the k-nearest-neighbor algorithm, a classification algorithm used in electronic information processing.

The basic idea of the algorithm is: given a new document, find the k documents in the training set that are nearest to it (i.e., most similar to it), and assign the new document to the category that is most common among those k documents.
In the classic illustration, which class should the green circle be assigned to: the red triangles or the blue squares? If k = 3, two of the three nearest neighbors (2/3) are red triangles, so the green circle is assigned to the red-triangle class; if k = 5, three of the five nearest neighbors (3/5) are blue squares, so the green circle is assigned to the blue-square class.
The k-nearest-neighbor (KNN) classification algorithm is a theoretically mature method and one of the simplest machine-learning algorithms. Its idea is: if most of the k samples most similar to a given sample (that is, its nearest neighbors in feature space) belong to a certain category, then the sample also belongs to that category. In KNN, the selected neighbors are objects that have already been correctly classified; the method determines the category of a sample based only on the categories of its nearest one or several neighbors. Although KNN relies on the limit theorem in principle, the class decision involves only a small number of adjacent samples. Because KNN depends mainly on a limited number of surrounding samples rather than on discriminating class domains, it is better suited than other methods to sample sets whose class domains cross or overlap heavily.
The KNN algorithm can be used not only for classification but also for regression: by finding the k nearest neighbors of a sample and assigning the average of their property values to it, you obtain an estimate of the sample's property. A more useful approach is to give neighbors at different distances different weights on the result, with the weight inversely proportional to the distance, so that nearer neighbors contribute more; a sketch of this idea follows.
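The sketch below is an illustration only, not part of the original program: the Sample struct, the single one-dimensional feature, and the 1/d weighting scheme are assumptions made for this example.

------------------- Sketch: distance-weighted KNN regression -------------------
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Hypothetical sample type for this sketch: one feature x, known value y.
struct Sample { float x; float y; };

// Distance-weighted KNN regression: average the y values of the k nearest
// neighbors, weighting each by 1 / distance so nearer neighbors count more.
float knn_regress(std::vector<Sample> samples, float query, int k)
{
    // Sort the samples by their distance to the query point.
    std::sort(samples.begin(), samples.end(),
              [query](const Sample& a, const Sample& b)
              { return std::fabs(a.x - query) < std::fabs(b.x - query); });
    float weighted_sum = 0.0f, weight_total = 0.0f;
    for (int i = 0; i < k && i < (int)samples.size(); i++)
    {
        float d = std::fabs(samples[i].x - query);
        float w = 1.0f / (d + 1e-6f);   // epsilon guards against division by zero
        weighted_sum += w * samples[i].y;
        weight_total += w;
    }
    return weighted_sum / weight_total;
}

int main()
{
    std::vector<Sample> train = { {1, 2}, {2, 4}, {3, 6}, {10, 20} };
    // Nearest to 2.5 are x = 2 and x = 3 (equal weight) and x = 1 (less weight),
    // so the estimate (about 4.57) is pulled toward y = 4 and y = 6.
    std::cout << knn_regress(train, 2.5f, 3) << std::endl;
    return 0;
}
---------------------------------Sketch End---------------------------------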
The main shortcoming of this algorithm in classification is sample imbalance: when one class has a very large sample size and the other classes are small, the k neighbors of a new sample may be dominated by the large class simply because it contributes more points. Distance weighting can be used to improve this (neighbors at a small distance from the sample get a large weight); see the sketch after this paragraph. Another shortcoming is the computational cost: for each item to be classified, its distance to every known sample must be computed in order to find its k nearest neighbors. A common remedy is to edit the known sample points in advance, removing samples that contribute little to classification. The algorithm is therefore best suited to automatic classification of class domains with large sample sizes; class domains with small sample sizes are more prone to misclassification.
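A minimal sketch of that distance-weighted voting fix (the Neighbor record and the 1/d weights are assumptions of this example; the neighbors are assumed to be pre-sorted by distance):

------------------- Sketch: distance-weighted KNN voting -------------------
#include <iostream>
#include <map>
#include <vector>

// Hypothetical neighbor record for this sketch: class label plus its
// already-computed distance to the point being classified.
struct Neighbor { int type; float distance; };

// Distance-weighted voting: each of the k nearest neighbors votes with
// weight 1 / distance, so a few close neighbors from a small class can
// outvote many distant neighbors from a large class.
int weighted_knn_vote(const std::vector<Neighbor>& nearest, int k)
{
    std::map<int, float> votes;          // class label -> accumulated weight
    for (int i = 0; i < k && i < (int)nearest.size(); i++)
        votes[nearest[i].type] += 1.0f / (nearest[i].distance + 1e-6f);

    int best_type = -1;
    float best_weight = 0.0f;
    for (const auto& v : votes)          // pick the class with the most weight
        if (v.second > best_weight) { best_weight = v.second; best_type = v.first; }
    return best_type;
}

int main()
{
    // Two close class-1 neighbors outvote three distant class-2 neighbors;
    // a plain majority vote with k = 5 would have chosen class 2.
    std::vector<Neighbor> nearest = { {1, 0.5f}, {1, 0.6f},
                                      {2, 3.0f}, {2, 3.1f}, {2, 3.2f} };
    std::cout << weighted_knn_vote(nearest, 5) << std::endl;   // prints 1
    return 0;
}
---------------------------------Sketch End---------------------------------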
KNN: main application areas

• Text classification
• Cluster analysis
• Data mining
• Machine learning
• Predictive analysis
• Dimensionality reduction
• Pattern recognition
• Image processing
My KNN classification algorithm program:


-------------------Code written by Stephen Liu-----------------------
#include <iostream>
#include <cmath>
#include <cstdlib>
#define MAX 1000
using namespace std;

int m, i, j;          // m: number of known points; i, j: loop counters
int types[MAX];       // vote counter for each category label

class Str
{
public:
    float x;
    float y;
    float distance;
    int type;
};

Str dataset[MAX];     // input data of known categories ("dataset" avoids a clash with std::data)
Str point;            // unknown point whose category KNN must determine
Str temp;

void input_data()
{
    cout << "Please enter the number of known points: ";
    cin >> m;
    for (i = 1; i <= m; i++)
    {
        cout << "Please enter the coordinates x, y and category of point " << i << ": ";
        cin >> dataset[i].x >> dataset[i].y >> dataset[i].type;
    }
}

void distance()       // compute the distance from the unknown point to all known points
{
    for (i = 1; i <= m; i++)
        dataset[i].distance = sqrt((dataset[i].x - point.x) * (dataset[i].x - point.x)
                                 + (dataset[i].y - point.y) * (dataset[i].y - point.y));
}

void sort()           // bubble-sort the known points by distance, smallest first
{
    for (i = 1; i <= m; i++)
        for (j = m; j > i; j--)
        {
            if (dataset[j].distance < dataset[j - 1].distance)
            {
                temp = dataset[j];
                dataset[j] = dataset[j - 1];
                dataset[j - 1] = temp;
            }
        }
}

int knn()
{
    int the_type = 0, num = 0, k;
    cout << "Please enter the k value of KNN: ";
    cin >> k;
    for (i = 0; i < MAX; i++)        // reset the vote counter of every category
        types[i] = 0;                // (category labels are assumed to lie in [0, MAX))
    for (i = 1; i <= k; i++)         // tally the categories of the sorted top-k nearest points
        types[dataset[i].type]++;
    for (i = 0; i < MAX; i++)        // the unknown point belongs to the category with the most votes
    {
        if (types[i] > num)
        {
            num = types[i];
            the_type = i;
        }
    }
    return the_type;
}

int main()
{
    int the_class;
    input_data();
    cout << "Please enter the coordinates x, y of the unknown point (enter 0 0 to exit): ";
    cin >> point.x >> point.y;
    do
    {
        distance();
        sort();
        the_class = knn();   // ask for k and take the vote before printing the result
        cout << "Point (" << point.x << ", " << point.y << ") belongs to class " << the_class << endl;
        cout << "Please enter the coordinates x, y of the unknown point (enter 0 0 to exit): ";
        cin >> point.x >> point.y;
    }
    while (point.x != 0 && point.y != 0);
    cout << "======= KNN classification algorithm Stephen Liu e-mail:[email protected] 2010.8 =======" << endl;
    system("pause");
    return 0;
}
-------------------------Code End---------------------------------
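For reference, a hypothetical session with the program above might look like this (the six training points and the query point are invented for illustration):

------------------- Sample session (hypothetical input) -------------------
Please enter the number of known points: 6
Please enter the coordinates x, y and category of point 1: 1 1 1
Please enter the coordinates x, y and category of point 2: 1 2 1
Please enter the coordinates x, y and category of point 3: 2 1 1
Please enter the coordinates x, y and category of point 4: 5 5 2
Please enter the coordinates x, y and category of point 5: 5 6 2
Please enter the coordinates x, y and category of point 6: 6 5 2
Please enter the coordinates x, y of the unknown point (enter 0 0 to exit): 1.5 1.5
Please enter the k value of KNN: 3
Point (1.5, 1.5) belongs to class 1
Please enter the coordinates x, y of the unknown point (enter 0 0 to exit): 0 0
---------------------------------------------------------------------------

The query (1.5, 1.5) sits next to the three class-1 points, so with k = 3 all three votes go to class 1; entering 0 0 then ends the loop.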
My comments:
This is the simplest case of the KNN classification algorithm; the classification result may differ when k takes different values. When the sample set is very large, efficiency drops because the number of distance computations and comparisons grows.

Principle: http://emuch.net/html/201009/2366638.html

Code: http://download.csdn.net/download/wang123sf/1770715

