# Logistic regression (2)

Source: Internet
Author: User

After learning simple logistic regression, we find that the batch version does not scale to large data sets: every weight update has to sweep over the entire data matrix, so the cost of training grows with the number of samples. Next we will discuss how to optimize logistic regression. We start by writing a simple stochastic version:

```python
from numpy import shape, ones   # sigmoid() is assumed to be defined elsewhere

def stocGradAscent0(dataMatrix, classLabels):
    m, n = shape(dataMatrix)
    alpha = 0.01                    # fixed step size
    weights = ones(n)               # initialize to all ones
    for i in range(m):
        # update the weights with one sample at a time
        h = sigmoid(sum(dataMatrix[i] * weights))
        error = classLabels[i] - h
        weights = weights + alpha * error * dataMatrix[i]
    return weights
```

Of course, this function still looks very simple, and at first glance nothing has changed. But compared with the previous gradient ascent, this stochastic version updates the weights using a single sample at a time instead of the whole matrix, so each update costs only 1/m of a full batch pass. There is no free lunch, of course: the price is noisier, less precise updates. When the data volume is large, though, this loss is forgivable.
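To make the single-sample update concrete, here is a minimal self-contained sketch of the same routine. The `sigmoid` helper and the tiny two-column toy dataset (a bias column of ones plus one feature) are my own illustrative assumptions, written with explicit NumPy calls rather than the book-style wildcard import:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stocGradAscent0(dataMatrix, classLabels):
    m, n = dataMatrix.shape
    alpha = 0.01                    # fixed step size
    weights = np.ones(n)            # initialize to all ones
    for i in range(m):
        # one O(n) update per sample instead of an O(m*n) batch pass
        h = sigmoid(np.sum(dataMatrix[i] * weights))
        error = classLabels[i] - h
        weights = weights + alpha * error * dataMatrix[i]
    return weights

# toy data: column of 1s (bias) plus one feature; label is 1 when x > 0
X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, -2.0], [1.0, -3.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])

w = stocGradAscent0(X, y)
preds = (sigmoid(X @ w) > 0.5).astype(float)
```

Even after this single noisy pass, the weights separate the toy classes, because each update moves them a small step in roughly the right direction.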

Here is a method that balances precision and efficiency. It is a little more complex than the previous one. Let's look at the code.

```python
from numpy import shape, ones   # sigmoid() is assumed to be defined elsewhere
import random

def stocGradAscent1(dataMatrix, classLabels, numIter=150):
    m, n = shape(dataMatrix)
    weights = ones(n)                   # initialize to all ones
    for j in range(numIter):
        dataIndex = list(range(m))      # samples not yet used in this pass
        for i in range(m):
            # alpha decreases with iteration but never reaches 0,
            # thanks to the constant term
            alpha = 4 / (1.0 + j + i) + 0.0001
            # pick a remaining sample at random, then remove it from the pool
            randIndex = int(random.uniform(0, len(dataIndex)))
            sample = dataIndex[randIndex]
            h = sigmoid(sum(dataMatrix[sample] * weights))
            error = classLabels[sample] - h
            weights = weights + alpha * error * dataMatrix[sample]
            del dataIndex[randIndex]
    return weights
```

(Two small fixes over the commonly circulated version: `dataIndex` must be a `list` so that `del` works, and the randomly drawn position has to be mapped through `dataIndex` before indexing the data, otherwise the sampling-without-replacement is broken.)

With this, most of our logistic regression method is implemented. Of course, the classification (test) code is still missing:

```python
def classifyVector(inX, weights):
    # classify as 1 when the predicted probability exceeds 0.5
    prob = sigmoid(sum(inX * weights))
    if prob > 0.5:
        return 1.0
    else:
        return 0.0
```
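Putting the pieces together, here is a hedged end-to-end sketch: train with the improved stochastic routine and classify with `classifyVector`. The toy dataset, the random seed, and the reduced `numIter` are illustrative assumptions of mine, not part of the original:

```python
import numpy as np
import random

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def stocGradAscent1(dataMatrix, classLabels, numIter=150):
    m, n = dataMatrix.shape
    weights = np.ones(n)                # initialize to all ones
    for j in range(numIter):
        dataIndex = list(range(m))      # samples not yet used in this pass
        for i in range(m):
            alpha = 4 / (1.0 + j + i) + 0.0001   # decaying step size
            # sample without replacement within each pass
            randIndex = int(random.uniform(0, len(dataIndex)))
            sample = dataIndex[randIndex]
            h = sigmoid(np.sum(dataMatrix[sample] * weights))
            error = classLabels[sample] - h
            weights = weights + alpha * error * dataMatrix[sample]
            del dataIndex[randIndex]
    return weights

def classifyVector(inX, weights):
    prob = sigmoid(np.sum(inX * weights))
    return 1.0 if prob > 0.5 else 0.0

random.seed(0)                          # fixed seed for a reproducible run
X = np.array([[1.0, 1.5], [1.0, 2.5], [1.0, -1.5], [1.0, -2.5]])
y = np.array([1.0, 1.0, 0.0, 0.0])

w = stocGradAscent1(X, y, numIter=50)
acc = np.mean([classifyVector(x, w) == label for x, label in zip(X, y)])
```

On this linearly separable toy set the decaying step size lets the early passes move the weights quickly while the later passes settle them down, so the trained classifier recovers the labels.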

With that, all the code is in place, and we can put these pieces to use. This covers my basic understanding of logistic regression. As for the more advanced ways of optimizing the objective function, I won't go into them here because I haven't understood them myself either. Haha~
