Case study: Predicting mortality from colic in horses
When preparing data, missing values are a tricky issue. Because the data is often expensive to collect, throwing away incomplete records and gathering replacements is undesirable, so we need other ways to handle the problem.
There are two things to do in the preprocessing phase. First, every missing value must be replaced with a real number, because the NumPy arrays we use cannot contain missing entries. A common choice for logistic regression is 0: a feature equal to 0 contributes nothing to the weighted sum, so it leaves the corresponding weight update unchanged and introduces no bias toward either class. Second, if the class label of a record in the test dataset is missing, we simply discard that record: unlike a feature, a class label is hard to replace with any sensible value.
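As a small sketch of this preprocessing step (the array values below are made up for illustration), missing entries marked as NaN can be replaced with 0 using NumPy:

```python
import numpy as np

# Hypothetical records: 3 features per sample, NaN marks a missing value.
data = np.array([[2.0, np.nan, 1.0],
                 [np.nan, 38.5, 0.0],
                 [1.0, 37.2, 1.0]])

# Replace every missing feature with 0.0. In logistic regression a zero
# feature contributes nothing to the weighted sum, so the corresponding
# weight is simply not updated for that sample.
data[np.isnan(data)] = 0.0
```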
1. Test the logistic regression classifier. Multiply each feature vector in the test set by the regression coefficients obtained from the optimization method, sum the products, and feed the result into the sigmoid function. If the resulting sigmoid value is greater than 0.5, the predicted class label is 1; otherwise it is 0.
########################################
# Function: classify a feature vector
# Inputs: in_x (feature vector), weights (regression coefficients)
########################################
def classify_vector(in_x, weights):
    prob = sigmoid(sum(in_x * weights))
    if prob > 0.5:
        return 1.0
    else:
        return 0.0
2. Open the training set and the test set, and format the data. The last column of each record is the class label, with two categories: "still alive" and "failed to survive". The improved stochastic gradient ascent algorithm is used to compute the regression coefficient vector.
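The test code below calls `sigmoid` and `rand_grad_ascent1`, which were developed earlier in the chapter. For completeness, here is a minimal sketch of what they look like: improved stochastic gradient ascent picks one random sample per update and decays the step size as training progresses (the default of 150 iterations and the step-size schedule are assumptions, not requirements):

```python
import random
from numpy import array, exp

def sigmoid(in_x):
    # Logistic function mapping any real number into (0, 1).
    return 1.0 / (1.0 + exp(-in_x))

def rand_grad_ascent1(data_matrix, class_labels, num_iter=150):
    # Improved stochastic gradient ascent: update the weights using one
    # randomly chosen sample at a time, with a decaying step size alpha.
    m, n = data_matrix.shape
    weights = array([1.0] * n)
    for j in range(num_iter):
        data_index = list(range(m))
        for i in range(m):
            alpha = 4 / (1.0 + j + i) + 0.01   # step size shrinks over time
            rand_index = int(random.uniform(0, len(data_index)))
            sample = data_matrix[data_index[rand_index]]
            h = sigmoid(sum(sample * weights))
            error = class_labels[data_index[rand_index]] - h
            weights = weights + alpha * error * sample
            del data_index[rand_index]   # each sample used once per pass
    return weights
```

Because samples are visited in random order and the step size never reaches zero, the returned coefficients vary slightly from run to run, which is why the error rate is averaged over several runs below.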
########################################
# Function: test logistic regression on the horse colic data
# Input: none
# Output: error rate
########################################
def colic_test():
    fr_train = open('HorseColicTraining.txt')
    fr_test = open('HorseColicTest.txt')
    training_set = []
    training_labels = []
    for line in fr_train.readlines():
        curr_line = line.strip().split('\t')
        line_arr = []
        for i in range(21):
            line_arr.append(float(curr_line[i]))
        training_set.append(line_arr)
        training_labels.append(float(curr_line[21]))
    train_weights = rand_grad_ascent1(array(training_set), training_labels, 500)
    error_count = 0
    num_test_vec = 0.0
    for line in fr_test.readlines():
        num_test_vec += 1.0
        curr_line = line.strip().split('\t')
        line_arr = []
        for i in range(21):
            line_arr.append(float(curr_line[i]))
        if int(classify_vector(array(line_arr), train_weights)) != int(curr_line[21]):
            error_count += 1
    error_rate = float(error_count) / num_test_vec
    print('The error rate of this test is: %f' % error_rate)
    return error_rate
########################################
# Function: average the error rate over 10 runs
########################################
def multi_test():
    num_tests = 10
    error_sum = 0.0
    for k in range(num_tests):
        error_sum += colic_test()
    print('After %d iterations the average error rate is: %f' % (num_tests, error_sum / float(num_tests)))
Test code:
def main():
    multi_test()

if __name__ == '__main__':
    main()