Clustering classification of the Iris data
We have a number of iris samples, each with four measurements: sepal length (cm), sepal width (cm), petal length (cm), and petal width (cm). We want a workable way to divide the irises into several classes according to the differences in these four measurements per flower, with each class as accurate as possible, so that botanists can analyze the flowers further.
This is a good entry-level problem for mathematical modeling.
It is unsupervised clustering. Using the K-means algorithm with the simplest Euclidean distance and the simplest mean-position update, the results are already very good.
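The loop just described (assign each sample to the nearest seed point by Euclidean distance, then move each seed point to the mean of its group, until the seeds stop moving) can be sketched in a few lines of Python. This is a minimal NumPy version of the same idea, not the MATLAB code below; for reproducibility it takes the first k samples as initial seeds, whereas the MATLAB code draws them uniformly at random in each feature's range:

```python
import numpy as np

def kmeans(dat, k, tol=0.1):
    """Minimal K-means: Euclidean assignment, mean update, and stop
    when the seed points barely move (same 0.1 threshold as below)."""
    m, n = dat.shape
    u = dat[:k].copy()                      # initial seed points (first k samples)
    while True:
        # squared Euclidean distance of every sample to every seed point
        d = ((dat[:, None, :] - u[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)           # nearest seed point per sample
        uu = u.copy()                       # seeds before the update
        for j in range(k):                  # move each seed to its group mean
            if (labels == j).any():
                u[j] = dat[labels == j].mean(axis=0)
        if np.linalg.norm(uu - u) < tol:    # seeds converged
            return labels

# Two well-separated blobs should be recovered as two clusters.
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                [5.0, 5.0], [5.1, 5.2], [4.9, 5.1]])
labels = kmeans(pts, 2)
```

On the Iris data the same loop is what separates the flowers into three groups.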
The visualization uses parallel coordinates, with the order of the axes adjusted by hand.
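Parallel coordinates draws each sample as a polyline across one vertical axis per feature. Since the four measurements live on different scales, it can also help to min-max scale each feature first so the axes share a common range; a small sketch of that preparation step in plain Python (not part of the original script):

```python
def minmax_scale(rows):
    """Scale each column to [0, 1] so all parallel axes share a range."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(row, lo, hi)]
            for row in rows]

# Three iris-like samples: sepal length, sepal width, petal length, petal width
rows = [[5.1, 3.5, 1.4, 0.2],
        [7.0, 3.2, 4.7, 1.4],
        [6.3, 3.3, 6.0, 2.5]]
scaled = minmax_scale(rows)   # each column now spans exactly [0, 1]
```

The MATLAB code below skips this step and plots the raw values, which also works here because the Iris features happen to have comparable magnitudes.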
This was my first time using MATLAB. It overturned my old procedural way of thinking, and it is much simpler to use than C++. I have a strong desire to keep learning it.
main.m:
clear all;
close all;
clc;
dat=[4.8 3.1 1.6 0.2;
5.4 3.4 1.5 0.4;
5.2 4.1 1.5 0.1;
5.5 4.2 1.4 0.2;
4.9 3.1 1.5 0.2;
5.0 3.2 1.2 0.2;
5.5 3.5 1.3 0.2;
4.9 3.6 1.4 0.1;
4.4 3.0 1.3 0.2;
5.1 3.4 1.5 0.2;
5.0 3.5 1.3 0.3;
4.5 2.3 1.3 0.3;
4.4 3.2 1.3 0.2;
5.0 3.5 1.6 0.6;
5.1 3.8 1.9 0.4;
4.8 3.0 1.4 0.3;
5.1 3.8 1.6 0.2;
4.6 3.2 1.4 0.2;
5.3 3.7 1.5 0.2;
5.0 3.3 1.4 0.2;
7.0 3.2 4.7 1.4;
6.4 3.2 4.5 1.5;
6.9 3.1 4.9 1.5;
5.5 2.3 4.0 1.3;
6.5 2.8 4.6 1.5;
5.7 2.8 4.5 1.3;
6.3 3.3 4.7 1.6;
4.9 2.4 3.3 1.0;
6.6 2.9 4.6 1.3;
5.2 2.7 3.9 1.4;
5.0 2.0 3.5 1.0;
5.9 3.0 4.2 1.5;
6.0 2.2 4.0 1.0;
6.1 2.9 4.7 1.4;
5.6 2.9 3.9 1.3;
6.7 3.1 4.4 1.4;
5.6 3.0 4.5 1.5;
5.8 2.7 4.1 1.0;
6.2 2.2 4.5 1.5;
5.6 2.5 3.9 1.1;
5.9 3.2 4.8 1.8;
6.1 2.8 4.0 1.3;
6.3 2.5 4.9 1.5;
6.1 2.8 4.7 1.2;
6.4 2.9 4.3 1.3;
6.6 3.0 4.4 1.4;
6.8 2.8 4.8 1.4;
6.7 3.0 5.0 1.7;
6.0 2.9 4.5 1.5;
5.7 2.6 3.5 1.0;
5.5 2.4 3.8 1.1;
5.5 2.4 3.7 1.0;
5.8 2.7 3.9 1.2;
6.0 2.7 5.1 1.6;
5.4 3.0 4.5 1.5;
6.0 3.4 4.5 1.6;
6.7 3.1 4.7 1.5;
6.3 2.3 4.4 1.3;
5.6 3.0 4.1 1.3;
5.5 2.5 5.0 1.3;
5.5 2.6 4.4 1.2;
6.1 3.0 4.6 1.4;
5.8 2.6 4.0 1.2;
5.0 2.3 3.3 1.0;
5.6 2.7 4.2 1.3;
5.7 3.0 4.2 1.2;
5.7 2.9 4.2 1.3;
6.2 2.9 4.3 1.3;
5.1 2.5 3.0 1.1;
5.7 2.8 4.1 1.3;
6.3 3.3 6.0 2.5;
5.8 2.7 5.1 1.9;
7.1 3.0 5.9 2.1;
6.3 2.9 5.6 1.8;
6.5 3.0 5.8 2.2;
7.6 3.0 6.6 2.1;
4.9 2.5 4.5 1.7;
7.3 2.9 6.3 1.8;
6.7 2.5 5.8 1.8;
7.2 3.6 6.1 2.5;
6.5 3.2 5.1 2.0;
6.4 2.7 5.3 1.9;
6.8 3.0 5.5 2.1;
5.7 2.5 5.0 2.0;
5.8 2.8 5.1 2.4;
6.4 3.2 5.3 2.3;
6.5 3.0 5.5 1.8;
7.7 3.8 6.7 2.2;
7.7 2.6 6.9 2.3;
6.0 2.2 5.0 1.5;
6.9 3.2 5.7 2.3;
5.6 2.8 4.9 2.0;
7.7 2.8 6.7 2.0;
6.3 2.7 4.9 1.8;
6.7 3.3 5.7 2.1;
7.2 3.2 6.0 1.8;
6.2 2.8 4.8 1.8;
6.1 3.0 4.9 1.8;
6.4 2.8 5.6 2.1;
7.2 3.0 5.8 1.6;
7.4 2.8 6.1 1.9;
7.9 3.8 6.4 2.0;
6.4 2.8 5.6 2.2;
6.3 2.8 5.1 1.5;
6.1 2.6 5.6 1.4;
7.7 3.0 6.1 2.3;
6.3 3.4 5.6 2.4;
6.4 3.1 5.5 1.8;
6.0 3.0 4.8 1.8;
6.9 3.1 5.4 2.1;
6.7 3.1 5.6 2.4;
6.9 3.1 5.1 2.3;
5.8 2.7 5.1 1.9;
6.8 3.2 5.9 2.3;
6.7 3.3 5.7 2.5;
6.7 3.0 5.2 2.3;
6.3 2.5 5.0 1.9;
6.5 3.0 5.2 2.0;
6.2 3.4 5.4 2.3;
5.9 3.0 5.1 1.8];
hold on;
[m n] = size(dat);
for i = 1:m
    line(1:n, dat(i,[2 1 3 4]));  % draw the raw data in parallel coordinates; three broad groups are visible
end
figure;
hold on;
k=3;
rs = kmean(dat, k);  % unsupervised clustering
ou1 = [];
ou2 = [];
ou3 = [];
for i = 1:m  % color each sample by the cluster it was assigned to
    if rs(i,1) == 1
        ou1 = [ou1; i];
        line(1:n, dat(i,[2 1 3 4]), 'Color', 'r');
    elseif rs(i,2) == 1
        ou2 = [ou2; i];
        line(1:n, dat(i,[2 1 3 4]), 'Color', 'g');
    else
        ou3 = [ou3; i];
        line(1:n, dat(i,[2 1 3 4]), 'Color', 'b');
    end
end
fid = fopen('classify.txt', 'wt');
[ss a] = size(ou1);
fprintf(fid, 'First class: ');
for i = 1:ss
    fprintf(fid, '%d ', ou1(i));
end
fprintf(fid, '\nSecond class: ');
[ss a] = size(ou2);
for i = 1:ss
    fprintf(fid, '%d ', ou2(i));
end
fprintf(fid, '\nThird class: ');
[ss a] = size(ou3);
for i = 1:ss
    fprintf(fid, '%d ', ou3(i));
end
fclose(fid);
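The output step above simply dumps each cluster's 1-based sample indices to classify.txt, one class per line. The same bookkeeping can be sketched in Python (the labels below are made up for illustration, not real clustering output):

```python
# Group 1-based sample indices by cluster label and format them the way
# main.m writes classify.txt. The label list here is illustrative only.
labels = [1, 1, 2, 3, 2, 1]
groups = {1: [], 2: [], 3: []}
for idx, lab in enumerate(labels, start=1):  # MATLAB-style 1-based indices
    groups[lab].append(idx)

names = ["First class", "Second class", "Third class"]
text = "\n".join(
    "%s: %s" % (name, " ".join(str(i) for i in groups[lab]))
    for lab, name in enumerate(names, start=1)
)
print(text)
```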
kmean.m:
function re = kmean(dat, k)
% K-means algorithm for unsupervised clustering
[m n] = size(dat);
u = zeros(k, n);
for i = 1:n
    mi = min(dat(:,i));
    ma = max(dat(:,i));
    for j = 1:k
        u(j,i) = mi + (ma - mi) * rand();  % put each seed point's coordinate between the column's min and max
    end
end
dist = zeros(k, 1);
while 1
    re = zeros(m, k);
    uu = u;  % seed points before this update
    for i = 1:m
        for j = 1:k
            dist(j) = sum((dat(i,:) - u(j,:)).^2);  % squared Euclidean distance from this sample to each seed point
        end
        [mm pp] = min(dist(:));  % find the nearest seed point for this sample
        re(i,pp) = 1;  % record which group the sample belongs to
    end
    for i = 1:k
        for j = 1:n
            ss = sum(re(:,i));
            if ss ~= 0
                u(i,j) = sum(re(:,i) .* dat(:,j)) / ss;  % move each seed point to the mean of its group
            end
        end
    end
    if norm(uu - u) < 0.1  % the seed points have converged; exit the loop
        break;
    end
end