This post implements linear discriminant analysis (LDA) and reports the result on the watermelon data set.

The data set is as follows:
number,color,root,knock,texture,navel,touch,density,sugar content,good melon
1,turquoise,curled,turbid,clear,sunken,hard-slippery,0.697,0.460,yes
2,black,curled,dull,clear,sunken,hard-slippery,0.774,0.376,yes
3,black,curled,turbid,clear,sunken,hard-slippery,0.634,0.264,yes
4,turquoise,curled,dull,clear,sunken,hard-slippery,0.608,0.318,yes
5,plain,curled,turbid,clear,sunken,hard-slippery,0.556,0.215,yes
6,turquoise,slightly curled,turbid,clear,slightly concave,soft-sticky,0.403,0.237,yes
7,black,slightly curled,turbid,slightly mushy,slightly concave,soft-sticky,0.481,0.149,yes
8,black,slightly curled,turbid,clear,slightly concave,hard-slippery,0.437,0.211,yes
9,black,slightly curled,dull,slightly mushy,slightly concave,hard-slippery,0.666,0.091,no
10,turquoise,stiff,crisp,clear,flat,soft-sticky,0.243,0.267,no
11,plain,stiff,crisp,fuzzy,flat,hard-slippery,0.245,0.057,no
12,plain,curled,turbid,fuzzy,flat,soft-sticky,0.343,0.099,no
13,turquoise,slightly curled,turbid,slightly mushy,sunken,hard-slippery,0.639,0.161,no
14,plain,slightly curled,dull,slightly mushy,sunken,hard-slippery,0.657,0.198,no
15,black,slightly curled,turbid,clear,slightly concave,soft-sticky,0.360,0.370,no
16,plain,curled,turbid,fuzzy,flat,hard-slippery,0.593,0.042,no
17,turquoise,curled,dull,slightly mushy,slightly concave,hard-slippery,0.719,0.103,no
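The script below reads only the last three columns of each row (density, sugar content, and the good-melon label), so a minimal CSV with just those columns is enough to run it. This helper is not part of the original post; the local file name `watermelon.csv` and the English labels `yes`/`no` are assumptions (the original file lived at `C:\quant\watermelon.csv` and stored the Chinese label 是).

```python
# Write a minimal watermelon CSV so the LDA script below can load it.
# Columns: number, density, sugar content, good-melon label.
# NOTE: file name and 'yes'/'no' labels are assumptions, not from the post.
rows = [
    "number,density,sugar_content,good_melon",
    "1,0.697,0.460,yes",  "2,0.774,0.376,yes",  "3,0.634,0.264,yes",
    "4,0.608,0.318,yes",  "5,0.556,0.215,yes",  "6,0.403,0.237,yes",
    "7,0.481,0.149,yes",  "8,0.437,0.211,yes",  "9,0.666,0.091,no",
    "10,0.243,0.267,no",  "11,0.245,0.057,no",  "12,0.343,0.099,no",
    "13,0.639,0.161,no",  "14,0.657,0.198,no",  "15,0.360,0.370,no",
    "16,0.593,0.042,no",  "17,0.719,0.103,no",
]
with open('watermelon.csv', 'w') as f:
    f.write('\n'.join(rows) + '\n')
```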
The Python implementation calls the linear discriminant analysis module from scikit-learn:
```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Load the data set; columns -3 and -2 are density and sugar content,
# the last column is the good-melon label.
with open('C:\\quant\\watermelon.csv', 'r') as file1:
    data = [line.strip('\n').split(',') for line in file1]
X = np.array([[float(raw[-3]), float(raw[-2])] for raw in data[1:]])
# The original compared against the GBK bytes '\xca\xc7' for the label 是 ("yes").
y = np.array([1 if raw[-1] == 'yes' else 0 for raw in data[1:]])

# red/blue colormap for the two classes
cmap = colors.LinearSegmentedColormap(
    'red_blue_classes',
    {'red':   [(0, 1, 1), (1, 0.7, 0.7)],
     'green': [(0, 0.7, 0.7), (1, 0.7, 0.7)],
     'blue':  [(0, 0.7, 0.7), (1, 1, 1)]})
plt.cm.register_cmap(cmap=cmap)  # matplotlib.colormaps.register(cmap) on matplotlib >= 3.7


# plot function
def plot_data(lda, X, y, y_pred):
    plt.figure()
    plt.title('Linear Discriminant Analysis')
    plt.xlabel('density')        # column 0 of X
    plt.ylabel('sugar content')  # column 1 of X

    tp = (y == y_pred)  # boolean mask of correctly classified samples
    tp0, tp1 = tp[y == 0], tp[y == 1]
    X0, X1 = X[y == 0], X[y == 1]
    X0_tp, X0_fp = X0[tp0], X0[~tp0]
    X1_tp, X1_fp = X1[tp1], X1[~tp1]

    # class 0: correct samples as dots, misclassified ones as small dark dots
    plt.plot(X0_tp[:, 0], X0_tp[:, 1], 'o', color='red')
    plt.plot(X0_fp[:, 0], X0_fp[:, 1], '.', color='#990000')  # dark red
    # class 1: same convention
    plt.plot(X1_tp[:, 0], X1_tp[:, 1], 'o', color='blue')
    plt.plot(X1_fp[:, 0], X1_fp[:, 1], '.', color='#000099')  # dark blue

    # class regions: colour the plane by P(class 1) and draw the 0.5 contour
    nx, ny = 200, 100
    x_min, x_max = plt.xlim()
    y_min, y_max = plt.ylim()
    xx, yy = np.meshgrid(np.linspace(x_min, x_max, nx),
                         np.linspace(y_min, y_max, ny))
    Z = lda.predict_proba(np.c_[xx.ravel(), yy.ravel()])
    Z = Z[:, 1].reshape(xx.shape)
    plt.pcolormesh(xx, yy, Z, cmap='red_blue_classes',
                   norm=colors.Normalize(0., 1.))
    plt.contour(xx, yy, Z, [0.5], linewidths=2., colors='k')

    # class means
    plt.plot(lda.means_[0][0], lda.means_[0][1], 'o', color='black', markersize=10)
    plt.plot(lda.means_[1][0], lda.means_[1][1], 'o', color='black', markersize=10)


# linear discriminant analysis
lda = LinearDiscriminantAnalysis(solver='svd', store_covariance=True)
y_pred = lda.fit(X, y).predict(X)
plot_data(lda, X, y, y_pred)
plt.axis('tight')
plt.suptitle('Linear Discriminant Analysis of Watermelon')
plt.show()
```
The results are as follows:
In the plot, the red and blue regions correspond to the two classes of watermelon. The large dots are correctly classified samples, the small dark red and dark blue dots mark misclassified samples, and the black contour through the middle is the decision boundary.
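Exercise 3.5 asks for LDA itself, so besides calling scikit-learn it is worth noting that the Fisher direction has the closed form w = S_w⁻¹(μ₀ − μ₁), where S_w is the within-class scatter matrix (eq. 3.39 in the book). A minimal sketch on the same two numeric attributes, independent of scikit-learn:

```python
import numpy as np

# Watermelon data set: (density, sugar content); label 1 = good melon.
X = np.array([
    [0.697, 0.460], [0.774, 0.376], [0.634, 0.264], [0.608, 0.318],
    [0.556, 0.215], [0.403, 0.237], [0.481, 0.149], [0.437, 0.211],
    [0.666, 0.091], [0.243, 0.267], [0.245, 0.057], [0.343, 0.099],
    [0.639, 0.161], [0.657, 0.198], [0.360, 0.370], [0.593, 0.042],
    [0.719, 0.103]])
y = np.array([1] * 8 + [0] * 9)

mu1 = X[y == 1].mean(axis=0)  # mean of the positive class
mu0 = X[y == 0].mean(axis=0)  # mean of the negative class

# Within-class scatter matrix: S_w = sum over classes of (x - mu)(x - mu)^T
Sw = np.zeros((2, 2))
for c, mu in ((1, mu1), (0, mu0)):
    d = X[y == c] - mu
    Sw += d.T @ d

# Fisher direction w = S_w^{-1} (mu0 - mu1); samples are classified by
# comparing their projection x @ w with the projected class means.
w = np.linalg.solve(Sw, mu0 - mu1)
print(w)
```

Projecting every sample onto w gives the same one-dimensional separation that the scikit-learn model learns internally for two classes.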
"Machine learning" Zhou Zhihua exercise answer 3.5