Continuing from the previous post.
3. Classify a single sample
'''
function: classify the input sample by voting among its K nearest neighbors
input:  1. the input feature vector
        2. the feature matrix
        3. the label list
        4. the value of k
return: the result label
'''
def ClassifySampleByKNN(featureVectorIn, featureMatrix, labelList, kValue):
    # calculate the distance between the input feature vector and every row of the feature matrix
    disValArray = CalcEucDistance(featureVectorIn, featureMatrix)
    # sort and return the indices
    theIndexListOfSortedDist = disValArray.argsort()
    # consider the first k indices, vote for the label
    labelAndCount = {}
    for i in range(kValue):
        theLabelIndex = theIndexListOfSortedDist[i]
        theLabel = labelList[theLabelIndex]
        labelAndCount[theLabel] = labelAndCount.get(theLabel, 0) + 1
    sortedLabelAndCount = sorted(labelAndCount.items(), key=lambda x: x[1], reverse=True)
    return sortedLabelAndCount[0][0]
The basic idea: first compute the Euclidean distance between the input sample and every sample in the training set, sort by distance, take the k samples with the smallest distances, and let their labels vote. The label with the most votes is the label assigned to the input sample.
The most distinctive line is this one:
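ClassifySampleByKNN relies on CalcEucDistance, which was defined in the previous post. For readers jumping in here, a minimal sketch of what such a helper might look like, assuming the feature matrix is a 2-D numpy array with one training sample per row (the actual version from part one may differ):

import numpy as np

# hypothetical sketch: Euclidean distance from one vector to every row of a matrix
def CalcEucDistance(featureVectorIn, featureMatrix):
    # broadcast the input vector against each row, square, sum along columns, take the root
    diff = featureMatrix - featureVectorIn
    return np.sqrt((diff ** 2).sum(axis=1))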
# sort and return the indices
theIndexListOfSortedDist = disValArray.argsort()
disValArray is a one-dimensional numpy array that stores only the Euclidean distance values. argsort sorts those values and returns the indices of the original array in sorted order, which is very convenient. The other line worth noting is the call to sorted, which sorts the dictionary by value using a lambda expression in a functional style; the same thing can also be done with the operator module.
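For reference, the operator-based version mentioned above would look like this and gives the same result as the lambda:

import operator

# sort the (label, count) pairs by count, largest first
sortedLabelAndCount = sorted(labelAndCount.items(), key=operator.itemgetter(1), reverse=True)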
4. Classify the samples in the test file and compute the error rate
'''
function: classify the samples in the test file by the KNN algorithm
input:  1. the name of the training sample file
        2. the name of the testing sample file
        3. the K value for KNN
        4. the name of the log file
'''
def ClassifySampleFileByKNN(sampleFileNameForTrain, sampleFileNameForTest, kValue, logFileName):
    logFile = open(logFileName, 'w')
    # load the feature matrices and normalize them
    feaMatTrain, labelListTrain = LoadFeatureMatrixAndLabels(sampleFileNameForTrain)
    norFeaMatTrain = AutoNormalizeFeatureMatrix(feaMatTrain)
    feaMatTest, labelListTest = LoadFeatureMatrixAndLabels(sampleFileNameForTest)
    norFeaMatTest = AutoNormalizeFeatureMatrix(feaMatTest)
    # classify each test sample and write the result into the log
    errorNumber = 0.0
    testSampleNum = norFeaMatTest.shape[0]
    for i in range(testSampleNum):
        label = ClassifySampleByKNN(norFeaMatTest[i, :], norFeaMatTrain, labelListTrain, kValue)
        if label == labelListTest[i]:
            logFile.write("%d:right\n" % i)
        else:
            logFile.write("%d:wrong\n" % i)
            errorNumber += 1
    errorRate = errorNumber / testSampleNum
    logFile.write("the error rate: %f" % errorRate)
    logFile.close()
    return
That is quite a bit of code, but the logic is simple, so there is not much to say. By the way, I am not sure what the naming convention is in Python. I find that fully spelled-out variable names get too long and look ugly on my MacBook Pro's screen, so here I stick with the abbreviated C/C++ style of variable naming.
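LoadFeatureMatrixAndLabels and AutoNormalizeFeatureMatrix also come from the previous post. If the normalization is the usual min-max scaling of each feature column into [0, 1], a minimal sketch (an assumption, not necessarily the exact version from part one) would be:

import numpy as np

# hypothetical sketch: scale each feature column into [0, 1]
def AutoNormalizeFeatureMatrix(featureMatrix):
    minVals = featureMatrix.min(axis=0)
    maxVals = featureMatrix.max(axis=0)
    ranges = maxVals - minVals          # note: a constant column would give a zero range
    return (featureMatrix - minVals) / ranges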
5. The entry-point function
This is similar to the main function in C/C++. As soon as you run the kNN.py script, this block of code is executed first:
if __name__ == '__main__':
    print("You are running KNN.py")
    ClassifySampleFileByKNN('datingSetOne.txt', 'datingSetTwo.txt', 3, 'log.txt')
For kNN I chose k = 3.
To be continued.
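If you want to compare error rates for different values of k, a quick sweep over the entry point works; the log file names below are just illustrative:

# hypothetical sweep over a few k values; each run writes its own log file
for k in (1, 3, 5, 7):
    ClassifySampleFileByKNN('datingSetOne.txt', 'datingSetTwo.txt', k, 'log_k%d.txt' % k)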