Example code for implementing multi-class support vector machines with TensorFlow

Tags: svm
This article introduces example code that implements a multi-class support vector machine (SVM) with TensorFlow, shared here as a reference.

In detail, we will train a multi-class support vector machine classifier on the iris dataset to classify three kinds of flowers.

The SVM algorithm was originally designed for binary classification, but it can also handle multi-class problems with the help of a few strategies. The two main strategies are the one-versus-one method and the one-versus-all (one versus rest) method.

The one-versus-one approach trains a binary classifier between every pair of classes; the class that receives the most votes across all classifiers is the predicted category for an unknown sample. The drawback is that with k classes you must train k!/((k-2)!·2!) = k(k-1)/2 classifiers, so the computational cost grows quickly.
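As a quick sanity check on that count, here is a minimal standard-library sketch (not part of the original example):

# Number of pairwise (one-versus-one) binary classifiers for k classes:
# k! / ((k - 2)! * 2!) = k * (k - 1) / 2
for k in (3, 10, 100):
    print(k, k * (k - 1) // 2)  # 3 -> 3, 10 -> 45, 100 -> 4950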

The other way to implement a multi-class classifier is one-versus-all: create one binary classifier per class, each separating that class from all the others. The predicted category is then the class whose SVM produces the largest margin. This article implements this method.
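To make the one-versus-all decision rule concrete, here is a minimal NumPy sketch; the scores array is a hypothetical stand-in for the per-class SVM outputs:

import numpy as np

# Hypothetical margins from k = 3 one-versus-all classifiers on 3 samples:
# scores[c, j] = decision value of classifier c for sample j.
scores = np.array([[ 0.8, -1.2, -0.3],   # classifier for class 0
                   [-0.5,  1.1, -0.9],   # classifier for class 1
                   [-0.7, -0.4,  0.6]])  # classifier for class 2

# The predicted class is the classifier with the largest margin per sample.
predictions = np.argmax(scores, axis=0)
print(predictions)  # [0 1 2]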

We will load the iris dataset and train a nonlinear multi-class SVM model with a Gaussian kernel. The iris dataset contains three classes, Iris setosa, Iris virginica, and Iris versicolor (I. setosa, I. virginica, and I. versicolor), and we will create one Gaussian-kernel SVM for each of them to make predictions.

# Multi-class (Nonlinear) SVM Example
# -----------------------------------
# This script illustrates how to implement the Gaussian kernel
# with multiple classes on the iris dataset.
#
# Gaussian kernel:
# K(x1, x2) = exp(-gamma * abs(x1 - x2)^2)
#
# X: (Sepal Length, Petal Width)
# Y: (I. setosa, I. virginica, I. versicolor) (3 classes)
#
# Basic idea: introduce an extra dimension to do
# one-vs-all classification.
#
# The prediction for a point is the category with
# the largest margin (distance to the boundary).

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from sklearn import datasets
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Create graph
sess = tf.Session()

# Load the data and split off the target values for each class.
# Because we want to plot the results, we only use two features:
# sepal length and petal width.
# iris.data = [(Sepal Length, Sepal Width, Petal Length, Petal Width)]
iris = datasets.load_iris()
x_vals = np.array([[x[0], x[3]] for x in iris.data])
# Separate the x-values and y-values for ease of plotting.
y_vals1 = np.array([1 if y == 0 else -1 for y in iris.target])
y_vals2 = np.array([1 if y == 1 else -1 for y in iris.target])
y_vals3 = np.array([1 if y == 2 else -1 for y in iris.target])
y_vals = np.array([y_vals1, y_vals2, y_vals3])
class1_x = [x[0] for i, x in enumerate(x_vals) if iris.target[i] == 0]
class1_y = [x[1] for i, x in enumerate(x_vals) if iris.target[i] == 0]
class2_x = [x[0] for i, x in enumerate(x_vals) if iris.target[i] == 1]
class2_y = [x[1] for i, x in enumerate(x_vals) if iris.target[i] == 1]
class3_x = [x[0] for i, x in enumerate(x_vals) if iris.target[i] == 2]
class3_y = [x[1] for i, x in enumerate(x_vals) if iris.target[i] == 2]

# Declare batch size
batch_size = 50

# Initialize placeholders.
# The dataset dimensions change from a single-class target to three target
# classes. We use matrix broadcasting and reshaping to compute all three
# SVMs at once. Because all classes are computed in one pass, the y_target
# placeholder has shape [3, None] and the model variable b is initialized
# with shape [3, batch_size].
x_data = tf.placeholder(shape=[None, 2], dtype=tf.float32)
y_target = tf.placeholder(shape=[3, None], dtype=tf.float32)
prediction_grid = tf.placeholder(shape=[None, 2], dtype=tf.float32)

# Create variables for the SVM
b = tf.Variable(tf.random_normal(shape=[3, batch_size]))

# Gaussian (RBF) kernel; the kernel function only depends on x_data.
gamma = tf.constant(-10.0)
dist = tf.reduce_sum(tf.square(x_data), 1)
dist = tf.reshape(dist, [-1, 1])
# Full squared-distance expansion ||x1||^2 - 2*x1.x2 + ||x2||^2,
# matching the prediction kernel below.
sq_dists = tf.add(tf.subtract(dist, tf.multiply(2., tf.matmul(x_data, tf.transpose(x_data)))), tf.transpose(dist))
my_kernel = tf.exp(tf.multiply(gamma, tf.abs(sq_dists)))

# Declare a function to do reshape/batch multiplication.
# The biggest change here is the batch matrix multiplication: the final
# result is a three-dimensional matrix, so a broadcast matrix
# multiplication is required, and the data and target matrices need
# preprocessing (e.g. the x^T . x operation needs an extra dimension).
# This function expands the matrix dimension, then multiplies by the
# transposed matrix with tf.matmul().
def reshape_matmul(mat):
    v1 = tf.expand_dims(mat, 1)
    v2 = tf.reshape(v1, [3, batch_size, 1])
    return tf.matmul(v2, v1)

# Compute the SVM model: first the dual loss function.
first_term = tf.reduce_sum(b)
b_vec_cross = tf.matmul(tf.transpose(b), b)
y_target_cross = reshape_matmul(y_target)
second_term = tf.reduce_sum(tf.multiply(my_kernel, tf.multiply(b_vec_cross, y_target_cross)), [1, 2])
loss = tf.reduce_sum(tf.negative(tf.subtract(first_term, second_term)))

# Gaussian (RBF) prediction kernel.
# Note the reduce_sum() calls: we do not want to aggregate the three SVM
# predictions, so we tell TensorFlow which axis to sum over via the
# second argument.
rA = tf.reshape(tf.reduce_sum(tf.square(x_data), 1), [-1, 1])
rB = tf.reshape(tf.reduce_sum(tf.square(prediction_grid), 1), [-1, 1])
pred_sq_dist = tf.add(tf.subtract(rA, tf.multiply(2., tf.matmul(x_data, tf.transpose(prediction_grid)))), tf.transpose(rB))
pred_kernel = tf.exp(tf.multiply(gamma, tf.abs(pred_sq_dist)))

# With the prediction kernel in place, create the prediction function.
# Unlike the binary case, we no longer apply sign() to the model output:
# because this is one-vs-all, the prediction is the class whose classifier
# returns the largest value, implemented with TensorFlow's argmax().
prediction_output = tf.matmul(tf.multiply(y_target, b), pred_kernel)
# tf.arg_max is deprecated in favor of tf.argmax (see the warning in the output).
prediction = tf.arg_max(prediction_output - tf.expand_dims(tf.reduce_mean(prediction_output, 1), 1), 0)
accuracy = tf.reduce_mean(tf.cast(tf.equal(prediction, tf.argmax(y_target, 0)), tf.float32))

# Declare the optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step = my_opt.minimize(loss)

# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)

# Training loop
loss_vec = []
batch_accuracy = []
for i in range(100):
    rand_index = np.random.choice(len(x_vals), size=batch_size)
    rand_x = x_vals[rand_index]
    rand_y = y_vals[:, rand_index]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})

    temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
    loss_vec.append(temp_loss)

    acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x,
                                             y_target: rand_y,
                                             prediction_grid: rand_x})
    batch_accuracy.append(acc_temp)

    if (i + 1) % 25 == 0:
        print('Step #' + str(i + 1))
        print('Loss = ' + str(temp_loss))

# Create a grid of data points and run the prediction function on it
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                     np.arange(y_min, y_max, 0.02))
grid_points = np.c_[xx.ravel(), yy.ravel()]
grid_predictions = sess.run(prediction, feed_dict={x_data: rand_x,
                                                   y_target: rand_y,
                                                   prediction_grid: grid_points})
grid_predictions = grid_predictions.reshape(xx.shape)

# Plot points and grid
plt.contourf(xx, yy, grid_predictions, cmap=plt.cm.Paired, alpha=0.8)
plt.plot(class1_x, class1_y, 'ro', label='I. setosa')
plt.plot(class2_x, class2_y, 'kx', label='I. versicolor')
plt.plot(class3_x, class3_y, 'gv', label='I. virginica')
plt.title('Gaussian SVM Results on Iris Data')
plt.xlabel('Sepal Length')
plt.ylabel('Petal Width')
plt.legend(loc='lower right')
plt.ylim([-0.5, 3.0])
plt.xlim([3.5, 8.5])
plt.show()

# Plot batch accuracy
plt.plot(batch_accuracy, 'k-', label='Accuracy')
plt.title('Batch Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()

# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title('Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()

Output:

Instructions for updating:
Use `argmax` instead
Step #25
Loss = -313.391
Step #50
Loss = -650.891
Step #75
Loss = -988.39
Step #100
Loss = -1325.89


Multi-class (three-class) results of the nonlinear Gaussian SVM model on the iris dataset, with a gamma value of 10.

The key point is how the SVM optimization is changed to solve three models at once: by giving the model parameter b an extra dimension, all three SVMs are computed in a single pass. As you can see, TensorFlow's built-in functionality makes it easy to extend a binary algorithm to its multi-class variant.
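The extra-dimension trick is easier to see in isolation. Below is a minimal NumPy sketch (not part of the original article) of the same batched outer product that reshape_matmul() performs on y_target, with shapes matching the example (3 classes, batch of 50):

import numpy as np

num_classes, batch_size = 3, 50
# Dummy {-1, 1} targets standing in for the y_target placeholder.
y_target = np.random.choice([-1., 1.], size=(num_classes, batch_size))

# Add a dimension, then batch-multiply: for each class c this forms the
# outer product y_c @ y_c.T that appears inside the dual loss.
v1 = np.expand_dims(y_target, 1)             # shape (3, 1, 50)
v2 = v1.reshape(num_classes, batch_size, 1)  # shape (3, 50, 1)
y_target_cross = np.matmul(v2, v1)           # shape (3, 50, 50)
print(y_target_cross.shape)  # (3, 50, 50)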
