Use of Logistic Regression
Logistic regression and missing-value handling on the horse colic mortality-prediction dataset:
Data from the UCI repository: 368 samples, 28 features
Test method:
Hold-out test on a separate test file, repeated 10 times and averaged
Implementation Details:
1. Pre-processing is required because the data contain missing values; this is discussed separately below.
2. The data carry three class labels. Here we simply merge the "died" and "euthanized" labels into a single class.
3. The code runs the test 10 times and reports the mean error rate.
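The label merge in step 2 amounts to a binary mapping. A minimal sketch, assuming the UCI outcome coding (1 = lived, 2 = died, 3 = euthanized); the function name `merge_outcome` is ours, not from the dataset:

```python
def merge_outcome(code):
    """Collapse the three outcome codes into a binary label:
    1 ("lived") -> 1.0; 2 ("died") and 3 ("euthanized") -> 0.0."""
    return 1.0 if int(code) == 1 else 0.0
```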
Processing of missing values:
Generally, there are several methods to handle missing values:
- Fill in missing values manually
- Fill in missing values with a global constant
- Ignore samples that contain missing values
- Fill in missing values with a central measure (mean or median) of the attribute
- Fill in missing values with the attribute's mean or median computed over samples of the same class
- Use the most likely value (which must be inferred by a machine-learning algorithm)
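Of these, the two strategies most relevant here can be sketched with NumPy (a toy matrix with missing entries marked as NaN; the names are illustrative, not from the original code):

```python
import numpy as np

# Toy data matrix; np.nan marks a missing entry
X = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, np.nan],
              [7.0, 8.0, 9.0]])

# Fill with a central measure: the per-column mean, ignoring NaNs
col_means = np.nanmean(X, axis=0)
X_mean = np.where(np.isnan(X), col_means, X)

# Fill with a global constant (0, the choice used below)
X_zero = np.where(np.isnan(X), 0.0, X)
```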
Different data call for different methods. Here, since we use logistic regression, we can fill missing entries with 0: in the stochastic weight update
weight = weight + alpha * error * dataMatrix[randIndex]
a feature whose value is 0 contributes nothing, so the corresponding weight is left unchanged. Moreover, sigmoid(0) = 0.5 sits exactly on the decision boundary, so a 0 entry biases the prediction toward neither class and does not distort the result.
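That the 0 fill is neutral can be checked directly: with a 0-valued feature the gradient term vanishes and its weight is untouched, while sigmoid(0) lands exactly on the 0.5 decision boundary (the numbers below are illustrative toy values):

```python
import numpy as np

def sigmoid(in_x):
    return 1.0 / (1 + np.exp(-in_x))

weight = np.array([0.5, -0.2, 0.8])
x = np.array([1.0, 0.0, 2.0])   # second feature missing, filled with 0

alpha = 0.01
error = 1.0 - sigmoid(np.dot(x, weight))   # label assumed to be 1
new_weight = weight + alpha * error * x

assert new_weight[1] == weight[1]   # 0-valued feature leaves its weight unchanged
assert sigmoid(0.0) == 0.5          # and contributes no bias to the prediction
```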
```python
# coding=utf-8
from numpy import *

def loadDataSet():
    dataMat = []
    labelMat = []
    fr = open('testSet.txt')
    for line in fr.readlines():
        lineArr = line.strip().split()
        dataMat.append([1.0, float(lineArr[0]), float(lineArr[1])])
        labelMat.append(int(lineArr[2]))
    return dataMat, labelMat

def sigmoid(inX):
    return 1.0 / (1 + exp(-inX))

def stocGradAscent1(dataMatrix, classLabels, numIter=150):
    """Stochastic gradient ascent with a decaying learning rate."""
    m, n = shape(dataMatrix)
    weight = ones(n)
    for j in range(numIter):
        dataIndex = list(range(m))
        for i in range(m):
            # alpha decays as training progresses but never reaches 0
            alpha = 4 / (1.0 + j + i) + 0.01
            # pick a sample at random (without replacement within a pass)
            # to reduce periodic oscillation in the weights
            randIndex = int(random.uniform(0, len(dataIndex)))
            sampleIdx = dataIndex[randIndex]
            h = sigmoid(sum(dataMatrix[sampleIdx] * weight))
            error = classLabels[sampleIdx] - h
            weight = weight + alpha * error * dataMatrix[sampleIdx]
            del dataIndex[randIndex]
    return weight

def classifyVector(inX, weights):
    prob = sigmoid(sum(inX * weights))
    return 1.0 if prob > 0.5 else 0.0

def colicTest():
    frTrain = open('horseColicTraining.txt')
    frTest = open('horseColicTest.txt')
    trainingSet = []
    trainingLabels = []
    for line in frTrain.readlines():
        currLine = line.strip().split('\t')
        lineArr = [float(currLine[i]) for i in range(21)]
        trainingSet.append(lineArr)
        trainingLabels.append(float(currLine[21]))
    trainWeights = stocGradAscent1(array(trainingSet), trainingLabels, 1000)
    errorCount = 0
    numTestVec = 0.0
    for line in frTest.readlines():
        numTestVec += 1.0
        currLine = line.strip().split('\t')
        lineArr = [float(currLine[i]) for i in range(21)]
        if int(classifyVector(array(lineArr), trainWeights)) != int(currLine[21]):
            errorCount += 1
    errorRate = float(errorCount) / numTestVec
    print("the error rate of this test is: %f" % errorRate)
    return errorRate

def multiTest():
    numTests = 10
    errorSum = 0.0
    for k in range(numTests):
        errorSum += colicTest()
    print("after %d iterations the average error rate is: %f"
          % (numTests, errorSum / float(numTests)))

def main():
    multiTest()

if __name__ == '__main__':
    main()
```