After learning simple logistic regression, we find that it cannot be applied directly to large-scale data: the batch gradient method touches the entire data matrix on every single update, so the computational cost blows up as the dataset grows. Next we will discuss how to optimize logistic regression. First, a simple optimization function:
from numpy import *

def sigmoid(inX):                    # same sigmoid as in simple logistic regression
    return 1.0 / (1.0 + exp(-inX))

def stocGradAscent0(dataMatrix, classLabels):
    m, n = shape(dataMatrix)
    alpha = 0.01                     # fixed step size
    weights = ones(n)                # initialize to all ones
    for i in range(m):               # one pass over the data, one sample at a time
        h = sigmoid(sum(dataMatrix[i] * weights))  # prediction for sample i (a scalar)
        error = classLabels[i] - h                 # scalar error, not an error vector
        weights = weights + alpha * error * dataMatrix[i]
    return weights
Of course, this function is still very simple, and at first glance nothing seems to have changed. But compare it carefully with the earlier gradient optimization: this stochastic gradient version updates the weights with one sample at a time instead of the whole matrix, so each update costs roughly 1/m of a full batch step (m being the number of samples). Of course, there is no free lunch: the price of this optimization is a loss of precision, because each update follows a noisy single-sample estimate of the gradient rather than the true one. When the data volume is large, though, this loss is forgivable.
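To make this concrete, here is a minimal sketch of calling stocGradAscent0 on a tiny made-up dataset (the four points and their labels are invented purely for illustration):

# Hypothetical toy data: a bias column of 1s plus two features.
# Points with large feature values are labeled 1, the rest 0.
dataMatrix = array([[1.0, 0.2, 0.1],
                    [1.0, 0.9, 0.8],
                    [1.0, 0.1, 0.4],
                    [1.0, 0.8, 0.9]])
classLabels = [0.0, 1.0, 0.0, 1.0]

weights = stocGradAscent0(dataMatrix, classLabels)
print(weights)  # 4 cheap vector updates, one per sample,
                # instead of a full-matrix computation per step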
Here is a method that balances precision and efficiency. It is a little more complex than the previous one; let's look at the code.
def stocGradAscent1(dataMatrix, classLabels, numIter=150):
    m, n = shape(dataMatrix)
    weights = ones(n)                            # initialize to all ones
    for j in range(numIter):
        dataIndex = list(range(m))               # samples not yet used in this pass
        for i in range(m):
            # alpha decreases with each iteration, but the 0.0001 constant
            # keeps it from ever reaching 0
            alpha = 4 / (1.0 + j + i) + 0.0001
            # pick one of the remaining samples at random
            randIndex = int(random.uniform(0, len(dataIndex)))
            sampleIndex = dataIndex[randIndex]
            h = sigmoid(sum(dataMatrix[sampleIndex] * weights))
            error = classLabels[sampleIndex] - h
            weights = weights + alpha * error * dataMatrix[sampleIndex]
            del dataIndex[randIndex]             # don't reuse this sample in this pass
    return weights
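To see what the comment about alpha means in practice, this quick check (just plugging a few j and i values into the formula above) shows how the step size shrinks but never hits zero:

# The step size shrinks as j (the pass) and i (the sample) grow,
# but the 0.0001 constant keeps it strictly positive.
for j in [0, 10, 100]:
    for i in [0, 50]:
        alpha = 4 / (1.0 + j + i) + 0.0001
        print("j=%3d  i=%2d  alpha=%.4f" % (j, i, alpha))
# j=  0  i= 0  alpha=4.0001
# j=100  i=50  alpha=0.0266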
With this, most of our logistic regression method is implemented. Of course, the test code is still missing, namely a function that classifies a single sample:
def classifyVector(inX, weights):
    prob = sigmoid(sum(inX * weights))   # estimated probability of class 1
    if prob > 0.5:
        return 1.0
    else:
        return 0.0
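As a quick end-to-end check, here is one possible test on synthetic data (the dataset below is randomly generated just for this sketch, not a real one): train with stocGradAscent1, then classify each sample with classifyVector and count the mistakes.

from numpy import *

# Synthetic data: 200 random 2-feature points, labeled 1 when
# the two features sum to more than 1.0, else 0.
random.seed(42)
features = random.rand(200, 2)
dataMatrix = hstack([ones((200, 1)), features])   # prepend a bias column
classLabels = (features[:, 0] + features[:, 1] > 1.0).astype(float)

weights = stocGradAscent1(dataMatrix, classLabels)

errorCount = 0
for i in range(200):
    if classifyVector(dataMatrix[i], weights) != classLabels[i]:
        errorCount += 1
print("error rate: %.2f" % (errorCount / 200.0))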
With that, all the code is in place, and we can use these pieces to do something. Well, I have only just come to understand logistic regression myself. As for more advanced optimization of the objective function, I won't bring it up here, because I haven't understood it either. Haha~