R language-Logistic regression modeling

Source: Internet
Author: User

Case 1: Predicting the customer's credit rating using a logistic regression model

The data set takes defect as the dependent variable and the remaining variables as independent variables.

1. Loading packages and datasets

library(pROC)
library(DMwR)
model.df <- read.csv('E:\\udacity\\data Analysis high\\r\\r_study\\advanced Course code\\Data set\\First day\\4 credit rating\\customer defection Data.csv', sep = ',', header = TRUE)

2. View the data set

dim(model.df)
head(model.df)
str(model.df)
summary(model.df)

Conclusion: There are 10,000 rows and 56 variables. The data set has no null values, but some variables contain extreme maximum values.

3. Data cleaning

# Replace NA values with 0
z <- model.df[, sapply(model.df, is.numeric)]
z[is.na(z)] <- 0
summary(z)
# Remove the customer ID and defect columns
exl <- names(z) %in% c("cust_id", "defect")
z <- z[!exl]
head(z)
# Cap each variable's maximum at its 99% quantile and its minimum at its 1% quantile
qs <- sapply(z, function(x) quantile(x, c(0.01, 0.99)))
system.time(for (i in 1:ncol(z)) {
  for (j in 1:nrow(z)) {
    if (z[j, i] < qs[1, i]) z[j, i] <- qs[1, i]
    if (z[j, i] > qs[2, i]) z[j, i] <- qs[2, i]
  }
})
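The cell-by-cell double loop works, but it is slow on 10,000 rows by 50+ columns. As a sketch (using the same `z` and `qs` objects as above), the identical winsorization can be vectorized with `pmin`/`pmax`, clamping each whole column at once:

```r
# Vectorized winsorization: clamp every column to its 1%/99% quantiles
qs <- sapply(z, function(x) quantile(x, c(0.01, 0.99)))
z[] <- lapply(seq_along(z), function(i) pmin(pmax(z[[i]], qs[1, i]), qs[2, i]))
```

`pmax` raises values below the 1% quantile and `pmin` lowers values above the 99% quantile; assigning via `z[]` keeps `z` a data frame.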
# Rebuild the data set
model_ad.df <- data.frame(cust_id = model.df$cust_id, defect = model.df$defect, z)

Comparison of the data before and after the modification:

Conclusion: VISIT_CNT no longer contains extreme outlier values.

4. Modeling

set.seed(123)
# Divide the data set into training and test sets (typically 70% training, 30% test)
s <- sample(nrow(model_ad.df), floor(nrow(model_ad.df) * 0.7), replace = FALSE)
train_df <- model_ad.df[s, ]
test_df <- model_ad.df[-s, ]
# Get rid of cust_id
n <- names(train_df[-c(1, 34)])
# Generate the formula for logistic regression
f <- as.formula(paste('defect ~', paste(n[!n %in% 'defect'], collapse = ' + ')))
# Modeling
model_full <- glm(f, data = train_df[-c(1, 34)], family = binomial)
summary(model_full)
# Stepwise selection has three direction parameters: both, backward, forward
# backward removes one factor per step, forward adds one factor per step
# The smaller the AIC, the better the model
step <- step(model_full, direction = 'both')
summary(step)

5. Test model

# Use the test set to evaluate the model
pred <- predict(step, test_df, type = 'response')
fitted.r <- ifelse(pred > 0.5, 1, 0)
# Model accuracy (confusion table)
accuracy <- table(fitted.r, test_df$defect)
# Draw the ROC curve
roc <- roc(test_df$defect, pred)
plot(roc)

Conclusion: An AUC of 0.75 indicates that the model predicts reasonably well. In general, a model's accuracy should reach about 75%; otherwise it needs adjusting.
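The `accuracy` object built above is a confusion table rather than a single number. As a sketch (assuming the `fitted.r` and `test_df` objects from the step above), the overall accuracy can be read off the table's diagonal:

```r
# Overall accuracy = correctly classified / total observations
accuracy <- table(fitted.r, test_df$defect)
sum(diag(accuracy)) / sum(accuracy)
```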

Case 2: Study which types of users are bad users

1. Data set field description

 1. seriousdlqin2yrs: more than 90 days past due
 2. Revolvingutilizationofunsecuredlines: revolving utilization of unsecured lines, i.e. total non-installment debt on credit cards and personal loans (excluding car and home loans) divided by the credit limit
 3. age: age of the borrower
 4. Numberoftime30-59dayspastduenotworse: number of times 30-59 days past due
 5. Debtratio: debt ratio
 6. Monthlyincome: monthly income
 7. Numberofopencreditlinesandloans: number of open credit lines and loans
 8. Numberoftimes90dayslate: number of times 90 or more days past due
 9. Numberrealestateloansorlines: number of real estate loans or lines
10. Numberoftime60-89dayspastduenotworse: number of times 60-89 days past due
11. Numberofdependents: number of dependents, excluding the borrower

2. Importing datasets and Packages

library(pROC)
library(DMwR)
cs.df <- read.csv('E:\\udacity\\data analysis high\\r\\r_study\\the next day Data\\cs-data.csv', header = TRUE, sep = ',')
summary(cs.df)

Conclusion: The monthly income column contains many NA values.

Some variables, such as the debt ratio, the number of real estate loans, and the number of dependents, contain outliers that would have a bad effect on the model, so they are removed.
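As a sketch of how such outliers can be spotted before cleaning (column names are assumed from the field list above and may differ in capitalization), a large gap between the upper quantiles and the maximum signals an extreme tail:

```r
# Extreme tails show up as a large jump from the 99.9% quantile to the maximum
quantile(cs.df$debtratio, c(0.9, 0.99, 0.999), na.rm = TRUE)
max(cs.df$debtratio, na.rm = TRUE)
# A boxplot gives the same picture visually
boxplot(cs.df$monthlyincome)
```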

3. Data Cleansing

# Impute the missing monthly income with the kNN algorithm, using the
# weighted-average method ('weighAvg')
cs.df_imp <- knnImputation(cs.df, meth = 'weighAvg')
# Remove extreme values: more than 80 occurrences of 30-59 days past due
cs.df_imp <- cs.df_imp[-which(cs.df_imp$Numberoftime30.59dayspastduenotworse > 80), ]
# Remove extreme debt ratios greater than 100,000
cs.df_imp <- cs.df_imp[-which(cs.df_imp$debtratio > 100000), ]
# Remove extreme monthly incomes greater than 500,000
cs.df_imp <- cs.df_imp[-which(cs.df_imp$monthlyincome > 500000), ]

4. Modeling

set.seed(123)
# Divide the data set into training and test sets to prevent overfitting
s <- sample(nrow(cs.df_imp), floor(nrow(cs.df_imp) * 0.7), replace = FALSE)
cs.train <- cs.df_imp[s, ]
cs.test <- cs.df_imp[-s, ]
# Generate the full model using logistic regression
# family = binomial means using the binomial distribution
# maxit = 1000 allows up to 1000 fitting iterations
model_full <- glm(seriousdlqin2yrs ~ ., data = cs.train, family = binomial, maxit = 1000)
# Use stepwise regression to find the model with the smallest AIC
step <- step(model_full, direction = 'both')
summary(step)

Conclusion: A factor with a Pr value below 0.05 is significant; the smaller the value, the more important the factor.
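As a small sketch (assuming the fitted `step` model from above), the significant factors can be pulled straight out of the coefficient table rather than read off the printed summary:

```r
# Keep only the coefficients whose Pr(>|z|) is below 0.05
coefs <- summary(step)$coefficients
coefs[coefs[, "Pr(>|z|)"] < 0.05, ]
```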

5. View the Model

pred <- predict(step, cs.test, type = 'response')
fitted.r <- ifelse(pred > 0.5, 1, 0)
# Misclassification rate
mean(fitted.r != cs.test$seriousdlqin2yrs)
roc <- roc(cs.test$seriousdlqin2yrs, pred)
plot(roc)
roc

Conclusion: The prediction success rate is only 69%.

6. Modify the Model

6.1 Viewing datasets

table(cs.train$seriousdlqin2yrs)
prop.table(table(cs.train$seriousdlqin2yrs))

Conclusion: Only about 6% of users default, indicating that the data set is not balanced
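To see why this imbalance matters, note that a trivial model predicting "no default" for everyone would already look accurate, which is why the 69% figure above is misleading. A quick sketch using the training labels:

```r
# With roughly 6% defaults, predicting all zeros already yields ~94% "accuracy",
# while catching none of the defaulters
mean(cs.train$seriousdlqin2yrs == 0)
```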

6.2 Balance Results

cs.train$seriousdlqin2yrs <- as.factor(cs.train$seriousdlqin2yrs)
# Use SMOTE resampling to reduce the number of 0s and increase the number of 1s,
# rebalancing the training set
trainsplit <- SMOTE(seriousdlqin2yrs ~ ., cs.train, perc.over = 30, perc.under = 550)
cs.train$seriousdlqin2yrs <- as.numeric(cs.train$seriousdlqin2yrs)
prop.table(table(trainsplit$seriousdlqin2yrs))

Conclusion: The distribution of data sets achieves a basic balance

6.3 Re-modeling

model_full <- glm(seriousdlqin2yrs ~ ., data = trainsplit, family = binomial, maxit = 1000)
step <- step(model_full, direction = "both")
summary(step)

Conclusion: 8 variables with a significant effect on the result are found, which differ from the variables selected in the initial model.

6.4 Predictive Models

pred <- predict(step, cs.test, type = "response")
fitted.r <- ifelse(pred > 0.5, 1, 0)
# Misclassification rate
mean(fitted.r != cs.test$seriousdlqin2yrs)
roc <- roc(cs.test$seriousdlqin2yrs, pred)
plot(roc)
roc

Conclusion: The accuracy of model prediction has been increased from 69% to 81.6%.

