Machine Learning Algorithm Practice--K-Means Algorithm and Image Segmentation


I. Theoretical Preparation

1.1 Image Segmentation

Image segmentation is an image processing method that decomposes an image into a collection of disjoint regions; in essence it can be regarded as a pixel clustering process. Commonly used image segmentation methods can be divided into:

    • Edge-based Technology
    • Region-based Technology

Image segmentation based on clustering algorithm belongs to region-based technology.

1.2 The K-Means Algorithm

The K-means algorithm is a clustering algorithm based on distance similarity: by comparing the similarity between samples, similar samples are grouped into the same category. The basic process of the K-means algorithm is:

    • Initialize the constants and randomly initialize k cluster centers
    • Repeat the following process until the cluster centers no longer change
      • Calculate the similarity between each sample and each cluster center, and assign the sample to the most similar category
      • Calculate the mean of all samples assigned to each category and take that mean as the new cluster center of that category (the update is written out after this list)
    • Output the final cluster centers and the category to which each sample belongs
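Written out, the mean-update step replaces each cluster center with the centroid of the samples currently assigned to it (a standard formulation of the update, stated here for clarity rather than quoted from the original post):

c_j = \frac{1}{|S_j|} \sum_{x_i \in S_j} x_i

where S_j denotes the set of samples currently assigned to the j-th cluster center c_j.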

In the K-means algorithm, the k cluster centers must be initialized randomly, and the algorithm is sensitive to this choice: if the initial cluster centers are chosen badly, the clustering result will be poor. Many improvements to the K-means algorithm have therefore been proposed, such as the k-means++ algorithm. In k-means++, the k initial cluster centers are chosen to be as far apart from each other as possible. The specific process is:

    • Randomly select one sample point in the dataset as the first initialized cluster center
    • Select the remaining cluster centers:
      • For each sample point, calculate the distance to every cluster center that has already been initialized, and keep the shortest of these distances
      • Choose a new cluster center at random, so that samples farther from the existing centers have a larger probability of being chosen (see the formula after this list); repeat until k cluster centers have been determined
    • Starting from the k initialized cluster centers, compute the final cluster centers using the K-means algorithm.
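Concretely, if D(x) denotes the distance from a sample x to the nearest center chosen so far, the standard k-means++ weighting draws each new center with probability

P(x) = \frac{D(x)^2}{\sum_{x'} D(x')^2}

The implementation below follows this weighting, since its distance function returns the squared Euclidean distance.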

For the specific process of the K-means algorithm, refer to the blog post "Easy-to-Learn Machine Learning Algorithms--K-Means"; the specific process of the k-means++ algorithm will be added later.

II. Preparation for the Practice

In this practice, Python is used as the development language, and the modules used include NumPy and Image. NumPy is the most widely used Python module for matrix computation.

The Image module is part of PIL (the Python Imaging Library). Some common operations on images with the Image module are:

    • Importing the module

import Image as image
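Note that import Image is the old PIL 1.x style used throughout this post; if you use the Pillow fork instead, the equivalent import (keeping the same alias so the rest of the snippets still work) is:

from PIL import Image as image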

    • Open picture

fp = open("003.JPG", "rb")
im = image.open(fp)

First the file is opened in binary mode, and then the open method of the Image module is used to import the picture.

For the following picture (Santorini):

    • Properties of the picture

im.format, im.size, im.mode

The results are: JPEG (1067) RGB

    • Channel Separation:

r,g,b = im.split()

This splits the image into three channels; r, g, and b are then three separate Image objects.

    • Gets the value of the pixel point

im.getpixel((4,4))

Because the image has three RGB channels, the value is a triple, here: (151, 169, 205)

    • Change the value of a single pixel point

im.putpixel(xy, color)
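For example, to set the pixel at (4, 4) (the same coordinate read above, chosen purely for illustration) to pure red:

im.putpixel((4, 4), (255, 0, 0))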

    • Image Type Conversions:

im=im.convert("L")

This converts the image from RGB to grayscale.
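After the conversion, the mode attribute reflects the new single-channel format; as a quick check:

im_gray = im.convert("L")
print im_gray.mode  # 'L', i.e. 8-bit grayscale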

    • Create a new image

Image.new(mode, size)

Image.new(mode, size, color)

For example: newimg = image.new("RGB", (640, 480), (0, 255, 0))

    • Save picture

im.save("save.gif","GIF")

III. Image Segmentation Using the k-means++ Algorithm

3.1 Clustering with k-means++

When the k-means++ algorithm is used for image segmentation, each pixel in the image is taken as a sample. For an RGB image, each sample has three features, the R, G, and B values (e.g. (151, 169, 205)); by normalization, the value of each channel is compressed into the [0, 1] interval. The data is imported and processed as shown in the following code:

#coding:UTF-8
import Image as image
import numpy as np
from KMeanspp import run_kmeanspp


def load_data(file_path):
    '''Load the data
    input:  file_path(string): the location of the image file
    output: data(mat): the data matrix, one row per pixel
    '''
    f = open(file_path, "rb")  # open the image file in binary mode
    data = []
    im = image.open(f)  # import the picture
    m, n = im.size  # get the size of the picture
    print m, n
    for i in xrange(m):
        for j in xrange(n):
            tmp = []
            x, y, z = im.getpixel((i, j))
            tmp.append(x / 256.0)  # normalize each channel to [0, 1]
            tmp.append(y / 256.0)
            tmp.append(z / 256.0)
            data.append(tmp)
    f.close()
    return np.mat(data)

The resulting matrix has one row per sample (pixel) and one column per channel (R, G, B). The samples are then clustered with the k-means++ algorithm; the main function is shown in the following code:

if __name__ == "__main__":
    k = 10  # number of cluster centers
    # 1. load the data
    print "---------- 1.load data ------------"
    data = load_data("001.jpg")
    # 2. cluster with k-means++
    print "---------- 2.run kmeans++ ------------"
    run_kmeanspp(data, k)

Here k is the number of clusters. The implementation of the k-means++ program (the KMeanspp module imported above) is shown below:

# coding:UTF-8
'''
Date: 20160923
@author: zhaozhiyong
'''
import numpy as np
from random import random
from KMeans import distance, kmeans, save_result

FLOAT_MAX = 1e100  # a large value used to initialize the minimum distance


def nearest(point, cluster_centers):
    '''Compute the minimum distance between point and the cluster centers initialized so far
    input:  point(mat): the current sample point
            cluster_centers(mat): the cluster centers initialized so far
    output: min_dist(float): the minimum distance between point and the current cluster centers
    '''
    min_dist = FLOAT_MAX
    m = np.shape(cluster_centers)[0]  # number of cluster centers initialized so far
    for i in xrange(m):
        # compute the distance between point and each cluster center
        d = distance(point, cluster_centers[i, ])
        # keep the shortest distance
        if min_dist > d:
            min_dist = d
    return min_dist


def get_centroids(points, k):
    '''Initialize the cluster centers with the k-means++ method
    input:  points(mat): the samples
            k(int): the number of cluster centers
    output: cluster_centers(mat): the initialized cluster centers
    '''
    m, n = np.shape(points)
    cluster_centers = np.mat(np.zeros((k, n)))
    # 1. randomly choose one sample point as the first cluster center
    index = np.random.randint(0, m)
    cluster_centers[0, ] = np.copy(points[index, ])
    # 2. initialize a list of distances
    d = [0.0 for _ in xrange(m)]
    for i in xrange(1, k):
        sum_all = 0
        for j in xrange(m):
            # 3. for each sample, find the distance to its nearest cluster center
            d[j] = nearest(points[j, ], cluster_centers[0:i, ])
            # 4. sum all the shortest distances
            sum_all += d[j]
        # 5. take a random value between 0 and sum_all
        sum_all *= random()
        # 6. choose a sample, with larger distances more likely, as the next cluster center
        for j, di in enumerate(d):
            sum_all -= di
            if sum_all > 0:
                continue
            cluster_centers[i] = np.copy(points[j, ])
            break
    return cluster_centers


def run_kmeanspp(data, k):
    # 1. initialize the cluster centers with the k-means++ method
    print "\t---------- 1.K-Means++ generate centers ------------"
    centroids = get_centroids(data, k)
    # 2. run the clustering
    print "\t---------- 2.kmeans ------------"
    subcenter = kmeans(data, k, centroids)
    # 3. save the category of each sample
    print "\t---------- 3.save subcenter ------------"
    save_result("sub_pp", subcenter)
    # 4. save the cluster centers
    print "\t---------- 4.save centroids ------------"
    save_result("center_pp", centroids)
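The selection loop at the end of get_centroids is a roulette-wheel draw: a sample is picked with probability proportional to its shortest distance d[j]. A minimal standalone sketch of the same trick, using made-up distances, is:

from random import random

d = [0.5, 0.1, 0.4]            # hypothetical shortest distances of three samples
threshold = random() * sum(d)  # uniform value in [0, sum(d))
for j, di in enumerate(d):
    threshold -= di
    if threshold <= 0:
        print j                # index j is chosen with probability d[j] / sum(d)
        break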

The code above mainly performs the initialization of the k cluster centers; the core program of the K-means algorithm (the KMeans module) is as follows:

# coding:UTF-8
'''
Date: 20160923
@author: zhaozhiyong
'''
import numpy as np


def distance(vecA, vecB):
    '''Compute the squared Euclidean distance between vecA and vecB
    input:  vecA(mat): coordinates of point A
            vecB(mat): coordinates of point B
    output: dist[0, 0](float): the squared distance between A and B
    '''
    dist = (vecA - vecB) * (vecA - vecB).T
    return dist[0, 0]


def randcent(data, k):
    '''Randomly initialize the cluster centers
    input:  data(mat): the training data
            k(int): the number of categories
    output: centroids(mat): the cluster centers
    '''
    n = np.shape(data)[1]  # number of features
    centroids = np.mat(np.zeros((k, n)))  # initialize k cluster centers
    for j in xrange(n):
        # initialize each dimension of the cluster centers
        minJ = np.min(data[:, j])
        rangeJ = np.max(data[:, j]) - minJ
        # random initialization between the minimum and the maximum
        centroids[:, j] = minJ * np.mat(np.ones((k, 1))) + np.random.rand(k, 1) * rangeJ
    return centroids


def kmeans(data, k, centroids):
    '''Compute the cluster centers with the K-means algorithm
    input:  data(mat): the training data
            k(int): the number of categories
            centroids(mat): the initialized cluster centers
    output: centroids(mat): the trained cluster centers
            subcenter(mat): the category and distance of each sample
    '''
    m, n = np.shape(data)  # m: number of samples, n: dimension of the features
    subcenter = np.mat(np.zeros((m, 2)))  # the category of each sample
    change = True  # whether the cluster centers need to be recomputed
    while change == True:
        change = False  # reset
        for i in xrange(m):
            minDist = np.inf  # minimum distance between the sample and the cluster centers, initialized to infinity
            minIndex = 0  # category
            for j in xrange(k):
                # compute the distance between sample i and each cluster center
                dist = distance(data[i, ], centroids[j, ])
                if dist < minDist:
                    minDist = dist
                    minIndex = j
            # check whether the category of sample i has changed
            if subcenter[i, 0] != minIndex:
                change = True
                subcenter[i, ] = np.mat([minIndex, minDist])
        # recompute the cluster centers
        for j in xrange(k):
            sum_all = np.mat(np.zeros((1, n)))
            r = 0  # number of samples in category j
            for i in xrange(m):
                if subcenter[i, 0] == j:  # sample i belongs to category j
                    sum_all += data[i, ]
                    r += 1
            for z in xrange(n):
                try:
                    centroids[j, z] = sum_all[0, z] / r
                except:
                    print "r is zero"
    return subcenter


def save_result(file_name, source):
    '''Save the result in source to the file file_name
    input:  file_name(string): the file name
            source(mat): the data to be saved
    output:
    '''
    m, n = np.shape(source)
    f = open(file_name, "w")
    for i in xrange(m):
        tmp = []
        for j in xrange(n):
            tmp.append(str(source[i, j]))
        f.write("\t".join(tmp) + "\n")
    f.close()
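As a quick sanity check, the distance function above returns the squared Euclidean distance between two row vectors; for two hypothetical normalized RGB samples:

import numpy as np

vecA = np.mat([[0.1, 0.2, 0.3]])
vecB = np.mat([[0.4, 0.2, 0.3]])
dist = (vecA - vecB) * (vecA - vecB).T  # same computation as distance(vecA, vecB)
print dist[0, 0]  # approximately 0.09, i.e. (0.1 - 0.4)^2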
3.2 Using the Clustering Result to Create a New Picture

In the above process, every pixel has been clustered. Finally, the RGB value of each pixel in the original image is replaced by the RGB value of the cluster center the pixel belongs to, which gives the final segmented picture. Since load_data appends pixels by looping over the image width m and then the height n, the flat sample index i corresponds to the pixel at (i / n, i % n). The code is as follows:

#coding:UTF-8
import Image as image

# read the cluster centers and rescale them back to [0, 255]
f_center = open("center_pp")
center = []
for line in f_center.readlines():
    lines = line.strip().split("\t")
    tmp = []
    for x in lines:
        tmp.append(int(float(x) * 256))
    center.append(tuple(tmp))
print center
f_center.close()

# open the original picture
fp = open("001.jpg", "rb")
im = image.open(fp)

# create a new picture of the same size
m, n = im.size
pic_new = image.new("RGB", (m, n))

# replace each pixel with the RGB value of its cluster center
f_sub = open("sub_pp")
i = 0
for line in f_sub.readlines():
    index = float((line.strip().split("\t"))[0])
    index_n = int(index)
    pic_new.putpixel(((i / n), (i % n)), center[index_n])
    i = i + 1
f_sub.close()

pic_new.save("result.jpg", "JPEG")

For the Santorini picture above, taking different values of k gives the following results:

    • Original

    • k=3

    • k=5

    • k=7

    • k=10


