Clustering refers to grouping the members of a data set that are similar in some respect; it is a technique for discovering the intrinsic structure of the data and is often called unsupervised learning. K-means is the best-known partitioning clustering algorithm, and its simplicity and efficiency make it the most widely used of all clustering algorithms. Given a collection of data points and a required number of clusters k (specified by the user), the K-means algorithm repeatedly partitions the data into k clusters according to a distance function.
- K-means algorithm description
K objects are randomly selected as the initial cluster centers. The distance between each object and each seed cluster center is then computed, and each object is assigned to the cluster center closest to it. A cluster center together with the objects assigned to it represents one cluster. Once all objects have been assigned, the center of each cluster is recalculated from the objects currently in that cluster. This process repeats until a termination condition is met, which can be any one of the following: 1) no (or a minimal number of) objects are reassigned to a different cluster; 2) no (or minimal) change in the cluster centers; 3) the sum of squared errors reaches a local minimum. Classifying carrot images with K-means follows this idea: the target region and the background region are separated into two classes, so k is set to 2. Since the split is made on gray values, an error tolerance of 1 is used as the termination condition.
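To make the iteration above concrete, here is a minimal, self-contained sketch of the K-means loop for one-dimensional gray values with k = 2; the function name kmeansGray, the initialization, and the parameters are illustrative only and are not part of the project code:

#include <vector>
#include <cmath>

// Minimal 1-D K-means sketch with k = 2: assigns every gray value to the nearer
// of two centers, recomputes the centers as cluster means, and stops when the
// centers move by less than eps (here eps = 1.0, matching the setting above).
void kmeansGray(const std::vector<double>& gray, double eps,
                double& c0, double& c1, std::vector<int>& labels)
{
    // Initialization: the text picks k objects at random; for brevity the first
    // and last samples are used here.
    c0 = gray.front();
    c1 = gray.back();
    labels.assign(gray.size(), 0);
    for (;;) {
        double sum0 = 0.0, sum1 = 0.0;
        int n0 = 0, n1 = 0;
        // 1) Assignment step: each object goes to the closest cluster center.
        for (std::size_t i = 0; i < gray.size(); ++i) {
            if (std::fabs(gray[i] - c0) <= std::fabs(gray[i] - c1)) {
                labels[i] = 0; sum0 += gray[i]; ++n0;
            } else {
                labels[i] = 1; sum1 += gray[i]; ++n1;
            }
        }
        // 2) Update step: recompute each center as the mean of its members.
        double newC0 = n0 ? sum0 / n0 : c0;
        double newC1 = n1 ? sum1 / n1 : c1;
        // 3) Termination: stop once the centers barely change.
        bool done = std::fabs(newC0 - c0) < eps && std::fabs(newC1 - c1) < eps;
        c0 = newC0;
        c1 = newC1;
        if (done)
            break;
    }
}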
The implementation code of the algorithm is as follows:

// The snippet below runs inside an MFC dialog handler; m_sourceImage, ShowImage
// and IDC_PIC2 are defined elsewhere in the project. It needs the OpenCV C API
// headers (cv.h, highgui.h) plus <vector> and <numeric> (std::vector, std::accumulate).
IplImage* img = cvCloneImage(m_sourceImage);
IplImage* grayImage = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
// IplImage* redImage = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);   // alternative: cluster on the red channel
// cvSplit(img, NULL, NULL, redImage, NULL);
cvCvtColor(img, grayImage, CV_RGB2GRAY);
int total = img->height * img->width;
int cluster_num = 2;                                   // k = 2: target and background
CvMat* row = cvCreateMat(img->height, img->width, CV_32FC3);
cvConvert(img, row);                                   // convert to floating point
CvMat* clusters = cvCreateMat(total, 1, CV_32SC1);     // one label per pixel
// cvReshape(arr, header, new_cn, new_rows): new_cn = 0 keeps the channel count;
// only the header (access order) changes, the underlying data stays in place.
cvReshape(row, row, 0, total);                         // one sample (pixel) per row
cvKMeans2(row, cluster_num, clusters,
          cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 10, 1.0));
cvReshape(clusters, clusters, 0, img->width);          // reshape the labels only for easier inspection
int i = 0, j = 0;
CvScalar s;
IplImage* resImg = cvCreateImage(cvSize(img->width, img->height), 8, 1);   // image used to display the result
s = cvGet2D(img, i, j);                                // only s.val[0] is used below
vector<int> v1, v2;                                    // gray values of the pixels in each cluster
for (i = 0; i < img->height; i++) {
    for (j = 0; j < img->width; j++) {
        double val = cvGetReal2D(grayImage, i, j);
        if (clusters->data.i[i * img->width + j] == 0) {
            v1.push_back(val);                         // cluster 0: drawn white
            s.val[0] = 255;
            cvSet2D(resImg, i, j, s);                  // note the loop order
        } else {
            v2.push_back(val);                         // cluster 1: drawn black
            s.val[0] = 0;
            cvSet2D(resImg, i, j, s);
        }
    }
}
double thresh1 = accumulate(v1.begin(), v1.end(), 0.0) / v1.size();   // mean gray value of the white cluster
double thresh2 = accumulate(v2.begin(), v2.end(), 0.0) / v2.size();   // mean gray value of the black cluster
if (thresh2 > thresh1) {        // the black region is brighter than the white region
    cvNot(resImg, resImg);      // invert so that the brighter (target) region is white
}
IplConvKernel* element = cvCreateStructuringElementEx(5, 5, 2, 2, CV_SHAPE_ELLIPSE);
cvSmooth(resImg, resImg, CV_MEDIAN);                   // remove salt-and-pepper noise
cvErode(resImg, resImg, element, 1);
cvDilate(resImg, resImg, element, 1);
cvReleaseStructuringElement(&element);
cvSaveImage("S2.jpg", m_sourceImage);
cvSaveImage("B2.jpg", resImg);
ShowImage(resImg, IDC_PIC2);
int key = cvWaitKey(0);
cvReleaseImage(&img);                                  // remember to release memory
cvReleaseImage(&resImg);
cvReleaseMat(&row);
cvReleaseMat(&clusters);
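For comparison, the same segmentation can be written against the newer OpenCV C++ interface with cv::kmeans. The sketch below is only an illustration of that alternative; the function name segmentByKMeans and the BGR input assumption are mine and not part of the original program:

#include <opencv2/opencv.hpp>

// Sketch: cluster the gray values of a BGR image into 2 groups and return a
// binary mask in which the brighter cluster (assumed to be the carrot) is white.
cv::Mat segmentByKMeans(const cv::Mat& srcBgr)
{
    cv::Mat gray;
    cv::cvtColor(srcBgr, gray, cv::COLOR_BGR2GRAY);

    // One float sample per pixel, as cv::kmeans expects CV_32F rows.
    cv::Mat samples;
    gray.reshape(1, static_cast<int>(gray.total())).convertTo(samples, CV_32F);

    cv::Mat labels, centers;
    cv::kmeans(samples, 2, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER, 10, 1.0),
               3, cv::KMEANS_PP_CENTERS, centers);

    // The cluster whose center has the larger gray value is taken as the target.
    int target = centers.at<float>(0) > centers.at<float>(1) ? 0 : 1;

    cv::Mat mask(gray.size(), CV_8UC1);
    for (int i = 0; i < gray.rows; ++i)
        for (int j = 0; j < gray.cols; ++j)
            mask.at<uchar>(i, j) =
                labels.at<int>(i * gray.cols + j) == target ? 255 : 0;

    // Cleanup analogous to the C-API code above: median filter, then an
    // elliptical erosion and dilation.
    cv::medianBlur(mask, mask, 3);
    cv::Mat element = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::erode(mask, mask, element);
    cv::dilate(mask, mask, element);
    return mask;
}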
In the code above, the processing result is corrected. K-means clustering merely splits the pixels into two classes; it cannot by itself tell which cluster is the target and which is the background. Based on experience, the cluster with the higher mean gray value is therefore taken as the target region,
and the cluster with the lower gray value is marked as the background region.
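The same correction can also be sketched with cvAvg and a temporary inverted mask instead of the accumulated vectors; this is only an alternative formulation and assumes the grayImage and resImg variables from the code above:

IplImage* invMask = cvCloneImage(resImg);
cvNot(invMask, invMask);                           // nonzero where the result is currently black
CvScalar whiteMean = cvAvg(grayImage, resImg);     // mean gray value of the white-labelled region
CvScalar blackMean = cvAvg(grayImage, invMask);    // mean gray value of the black-labelled region
if (blackMean.val[0] > whiteMean.val[0])
    cvNot(resImg, resImg);                         // brighter pixels become the white target region
cvReleaseImage(&invMask);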
The following is a comparison between the uncorrected result and the corrected result:
As shown, the left image is the result without correction, and the right image is the corrected result.
Because this method simply binarizes the image on the basis of gray values, it has difficulty segmenting images with complex backgrounds; the related algorithms will be improved and supplemented in later work.