Python code: CS231n Softmax linear classifier vs. non-linear classifier comparison example (with matplotlib plots of the results)


# -*- coding: utf-8 -*-
# Examples of a linear and a non-linear classifier (Softmax) from CS231n.
# Note how the backpropagation step is computed.
import numpy as np
import matplotlib.pyplot as plt

N = 100  # number of points per class
D = 2    # dimensionality
K = 3    # number of classes
X = np.zeros((N * K, D))            # data matrix (each row = single example)
y = np.zeros(N * K, dtype='uint8')  # class labels
for j in range(K):
    ix = range(N * j, N * (j + 1))
    r = np.linspace(0.0, 1, N)  # radius
    t = np.linspace(j * 4, (j + 1) * 4, N) + np.random.randn(N) * 0.2  # theta
    X[ix] = np.c_[r * np.sin(t), r * np.cos(t)]
    y[ix] = j

# Let's visualize the data:
plt.xlim([-1, 1])
plt.ylim([-1, 1])
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.show()

# Initialize parameters randomly (linear classifier)
W = 0.01 * np.random.randn(D, K)
b = np.zeros((1, K))

# Some hyperparameters
step_size = 1e-0
reg = 1e-3  # regularization strength

# Gradient descent loop
num_examples = X.shape[0]
for i in range(200):

    # Evaluate class scores, [N x K]
    scores = np.dot(X, W) + b

    # Compute the class probabilities
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)  # [N x K]

    # Compute the loss: average cross-entropy loss and regularization
    correct_logprobs = -np.log(probs[range(num_examples), y])
    data_loss = np.sum(correct_logprobs) / num_examples
    reg_loss = 0.5 * reg * np.sum(W * W)
    loss = data_loss + reg_loss
    if i % 10 == 0:
        print("iteration %d: loss %f" % (i, loss))

    # Compute the gradient on scores
    dscores = probs
    dscores[range(num_examples), y] -= 1
    dscores /= num_examples

    # Backpropagate the gradient to the parameters (W, b)
    dW = np.dot(X.T, dscores)
    db = np.sum(dscores, axis=0, keepdims=True)

    dW += reg * W  # regularization gradient

    # Perform a parameter update
    W += -step_size * dW
    b += -step_size * db

# Evaluate training set accuracy
scores = np.dot(X, W) + b
predicted_class = np.argmax(scores, axis=1)
print('training accuracy: %.2f' % (np.mean(predicted_class == y)))

# Plot the resulting classifier
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
Z = np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
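The least obvious step above is the gradient on the scores: for softmax with cross-entropy loss, dL/dscores is simply probs with 1 subtracted at the correct class of each example, which is what the two lines starting at dscores = probs implement. A quick way to convince yourself of this is a numerical gradient check on a tiny problem. The sketch below is not part of the original post; the toy sizes, the helper name softmax_loss, and the random seed are illustrative assumptions.

import numpy as np

def softmax_loss(W, b, X, y, reg):
    # Same forward pass as the training loop above.
    scores = np.dot(X, W) + b
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
    data_loss = np.sum(-np.log(probs[range(X.shape[0]), y])) / X.shape[0]
    return data_loss + 0.5 * reg * np.sum(W * W)

# Tiny toy problem (sizes and seed are illustrative only).
rng = np.random.RandomState(0)
Xs, ys = rng.randn(5, 2), rng.randint(0, 3, 5)
W, b, reg, eps = 0.01 * rng.randn(2, 3), np.zeros((1, 3)), 1e-3, 1e-5

# Analytic gradient, computed exactly as in the training loop.
scores = np.dot(Xs, W) + b
probs = np.exp(scores) / np.sum(np.exp(scores), axis=1, keepdims=True)
dscores = probs
dscores[range(5), ys] -= 1
dscores /= 5
dW = np.dot(Xs.T, dscores) + reg * W

# Centered finite differences for each entry of W.
num_dW = np.zeros_like(W)
for idx in np.ndindex(*W.shape):
    Wp, Wm = W.copy(), W.copy()
    Wp[idx] += eps
    Wm[idx] -= eps
    num_dW[idx] = (softmax_loss(Wp, b, Xs, ys, reg) -
                   softmax_loss(Wm, b, Xs, ys, reg)) / (2 * eps)

print('max abs difference:', np.max(np.abs(dW - num_dW)))

With centered differences and float64, the reported maximum difference should be very small (typically well below 1e-7), confirming the analytic gradient.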
# Initialize parameters randomly
# (non-linear classifier with one hidden layer using ReLU)
h = 100  # size of hidden layer
W = 0.01 * np.random.randn(D, h)
b = np.zeros((1, h))
W2 = 0.01 * np.random.randn(h, K)
b2 = np.zeros((1, K))

# Some hyperparameters
step_size = 1e-0
reg = 1e-3  # regularization strength

# Gradient descent loop
num_examples = X.shape[0]
for i in range(10000):

    # Evaluate class scores, [N x K]
    hidden_layer = np.maximum(0, np.dot(X, W) + b)  # note, ReLU activation
    scores = np.dot(hidden_layer, W2) + b2

    # Compute the class probabilities
    exp_scores = np.exp(scores)
    probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)  # [N x K]

    # Compute the loss: average cross-entropy loss and regularization
    correct_logprobs = -np.log(probs[range(num_examples), y])
    data_loss = np.sum(correct_logprobs) / num_examples
    reg_loss = 0.5 * reg * np.sum(W * W) + 0.5 * reg * np.sum(W2 * W2)
    loss = data_loss + reg_loss
    if i % 1000 == 0:
        print("iteration %d: loss %f" % (i, loss))

    # Compute the gradient on scores
    dscores = probs
    dscores[range(num_examples), y] -= 1
    dscores /= num_examples

    # Backpropagate the gradient to the parameters:
    # first backprop into parameters W2 and b2
    dW2 = np.dot(hidden_layer.T, dscores)
    db2 = np.sum(dscores, axis=0, keepdims=True)
    # next backprop into the hidden layer
    dhidden = np.dot(dscores, W2.T)
    # backprop the ReLU non-linearity (gradient is zero where the unit was inactive)
    dhidden[hidden_layer <= 0] = 0
    # finally into W, b
    dW = np.dot(X.T, dhidden)
    db = np.sum(dhidden, axis=0, keepdims=True)

    # Add the regularization gradient contribution
    dW2 += reg * W2
    dW += reg * W

    # Perform a parameter update
    W += -step_size * dW
    b += -step_size * db
    W2 += -step_size * dW2
    b2 += -step_size * db2

# Evaluate training set accuracy
hidden_layer = np.maximum(0, np.dot(X, W) + b)
scores = np.dot(hidden_layer, W2) + b2
predicted_class = np.argmax(scores, axis=1)
print('training accuracy: %.2f' % (np.mean(predicted_class == y)))

# Plot the resulting classifier
h = 0.02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
Z = np.dot(np.maximum(0, np.dot(np.c_[xx.ravel(), yy.ravel()], W) + b), W2) + b2
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
fig = plt.figure()
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.Spectral)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.show()
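One stylistic note on the listing above: the two-layer forward pass is written out three times (training loop, accuracy check, and boundary plot). A minimal sketch of factoring it into a helper, assuming the trained W, b, W2, b2 from above are in scope; the function name predict is my choice, not from the original:

def predict(X_batch, W, b, W2, b2):
    # Forward pass of the two-layer ReLU network; returns predicted class labels.
    hidden = np.maximum(0, np.dot(X_batch, W) + b)  # ReLU hidden layer
    scores = np.dot(hidden, W2) + b2                # class scores [N x K]
    return np.argmax(scores, axis=1)

# Same accuracy check as above, now one line:
print('training accuracy: %.2f' % np.mean(predict(X, W, b, W2, b2) == y))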

Run results: the script shows a scatter plot of the three-class spiral dataset, then the decision regions learned by the linear Softmax classifier, and finally the decision regions learned by the two-layer network. [Plots not reproduced.]
