Use one or more independent variables to set up a formula that predicts the target variable.
When the target variable is continuous, this is called regression analysis.

(1) Simple linear regression

Model: y = kx + b. In R:

    sol.lm <- lm(y ~ x, data)
    abline(sol.lm)

Least squares chooses the parameters k and b that minimize the sum of squared model errors:

    k = cov(x, y) / cov(x, x)
    b = mean(y) - k * mean(x)

Confidence interval for the parameters b and k. For a model with p independent variables and n samples, the interval for coefficient ki is

    [ki - sd(ki) * t(a/2, n-p-1), ki + sd(ki) * t(a/2, n-p-1)]

where k0 denotes the intercept b, k1 denotes the slope k, and sd(ki) is the standard error of ki. In R, using the residual degrees of freedom:

    df <- sol.lm$df.residual
    left <- summary(sol.lm)$coefficients[, 1] - summary(sol.lm)$coefficients[, 2] * qt(1 - alpha/2, df)
    right <- summary(sol.lm)$coefficients[, 1] + summary(sol.lm)$coefficients[, 2] * qt(1 - alpha/2, df)

Measuring the strength of the relationship. The correlation coefficient of x and y is r = Sxy / (sqrt(Sxx) * sqrt(Syy)), with values in [-1, 1]; in R, cor(x, y). The coefficient of determination is r^2. The adjusted coefficient of determination (adjusted R^2) corrects a drawback of r^2 in multivariate regression: the more independent variables, the larger r^2 becomes.

Significance tests of the regression coefficients.
t test: summary(sol.lm)$coefficients[, 4] gives each coefficient's p.value, the probability that the coefficient equals 0; when p.value < 0.05 we can conclude k != 0.
F test: tests whether the model parameters are all 0 as a whole. summary(sol.lm)$fstatistic gives the F value and the degrees of freedom df1 and df2; the p.value can be computed with pf(f, df1, df2, lower.tail = FALSE) or, equivalently, 1 - pf(f, df1, df2).
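The closed-form least-squares estimates can be sanity-checked outside R. Below is a minimal Python sketch (standard library only, toy data invented for illustration) that computes k = cov(x, y)/cov(x, x) and b = mean(y) - k * mean(x):

```python
# Least-squares slope and intercept from the closed-form formulas:
#   k = cov(x, y) / cov(x, x),  b = mean(y) - k * mean(x)
def least_squares(x, y):
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Sums of centered cross-products; the common 1/(n-1) covariance
    # factor cancels in the ratio, so it is omitted here.
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    k = sxy / sxx
    b = my - k * mx
    return k, b

# Toy data lying exactly on y = 2x + 1
x = [1, 2, 3, 4]
y = [3, 5, 7, 9]
k, b = least_squares(x, y)
print(k, b)  # 2.0 1.0
```

Running `lm(y ~ x)` in R on the same four points would report the same slope 2 and intercept 1.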
Model error (residuals): residuals(sol.lm). For a correct regression model, the errors should be normally distributed. The residuals reflect a model's error behaviour and can be used to compare the performance of different models.
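One property worth remembering when inspecting residuals: for a least-squares fit with an intercept, the residuals always sum to (numerically) zero, so what matters diagnostically is their spread and shape, not their mean. A small Python sketch with made-up noisy data:

```python
# Fit y = k*x + b by least squares, then inspect the residuals.
def fit_and_residuals(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    k = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    b = my - k * mx
    residuals = [c - (k * a + b) for a, c in zip(x, y)]
    return k, b, residuals

# Made-up observations scattered around a roughly linear trend
x = [1, 2, 3, 4, 5]
y = [2.1, 3.9, 6.2, 7.8, 10.0]
k, b, res = fit_and_residuals(x, y)
# The residuals sum to ~0 by construction; a patternless, roughly
# normal spread around 0 is what a correct model should leave behind.
print(abs(sum(res)) < 1e-9)  # True
```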
Forecast
predict(sol.lm)
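For a simple linear model, prediction is just evaluating the fitted line at the requested x values. A hedged Python sketch (the coefficients k = 2, b = 1 are hypothetical, standing in for values fitted elsewhere):

```python
# What prediction amounts to for a simple linear model:
# evaluate y_hat = k*x + b at each requested x.
def predict_linear(k, b, xs):
    return [k * x + b for x in xs]

# Hypothetical fitted coefficients k = 2, b = 1
preds = predict_linear(2.0, 1.0, [0, 5, 10])
print(preds)  # [1.0, 11.0, 21.0]
```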
(2) Multivariate regression analysis

    sol.lm <- lm(formula = y ~ ., data = data.train)

Model adjustment: update(object, formula). The update function can add or remove independent variables on top of an existing lm result, or transform the target variable (e.g. logarithm or square root). Examples:

    lm.new <- update(sol.lm, . ~ . + I(x2^2))   # add a squared x2 term
    . ~ . - x2                                  # delete the x2 variable
    . ~ . - x2 + I(x2^2)                        # replace x2 with its square
    . ~ . + x1*x2                               # add the x1*x2 interaction
    sqrt(.) ~ .                                 # model the square root of y

Stepwise regression: the step() function, here eliminating variables backwards: lm.step <- step(sol.lm). The smaller the model's AIC value, the better.

Regression with categorical independent variables: if the categorical variable A takes value i, the model prediction is f(a1 = 0, ..., ai = 1, ..., ap = 0).

(3) Logistic regression

y = 1 / (1 + exp(-x)); the parameters are estimated by maximum likelihood.

Using the RODBC package to read an Excel file:

    root <- "c:/"
    file <- paste(root, "data.xls", sep = "")
    library(RODBC)
    excel_file <- odbcConnectExcel(file)
    data <- sqlFetch(excel_file, "data")
    close(excel_file)

Measure the model by its predictive accuracy. With the 2x2 table of forecast vs. actual counts num11, num10, num01, num00:

    accuracy = (num11 + num00) / total samples
             = (num11 + num00) / (num11 + num10 + num01 + num00)

t() returns the transpose. glm() is R's core function for logistic regression, with family = binomial("logit"). The step() function can again be used to refine the model; the str() function shows the structure of the data.

Model prediction:

    new <- predict(old, newdata = test.data)
    new <- 1 / (1 + exp(-new))
    new <- as.factor(ifelse(new >= 0.5, 1, 0))

Model performance:

    performance <- length(which(predict.data == data)) / nrow(data)

(4) Regression tree (CART)

The core function of the CART algorithm is rpart() in the rpart package; the resulting tree can be drawn with plot() or with draw.tree() from the maptree package. Read the leaf nodes: sol.rpart$frame$var == "<leaf>".
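The logistic prediction pipeline above (linear score, then sigmoid, then a 0.5 threshold, then accuracy) can be sketched in Python; the scores and labels below are invented for illustration:

```python
import math

def sigmoid(z):
    # Logistic function: maps a linear score to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def classify(scores, threshold=0.5):
    # Threshold the predicted probabilities at 0.5, as in the notes
    return [1 if sigmoid(s) >= threshold else 0 for s in scores]

def accuracy(pred, actual):
    # (num11 + num00) / total: the fraction of correct predictions
    return sum(p == a for p, a in zip(pred, actual)) / len(actual)

# Invented linear scores (what predict() would return on the link
# scale) and the corresponding true labels
scores = [-2.0, -1.0, 0.5, 3.0]
labels = [0, 0, 1, 1]
pred = classify(scores)
print(pred, accuracy(pred, labels))  # [0, 0, 1, 1] 1.0
```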
sol.rpart$where gives the leaf node index of each observation.

The goal is to make both the test-set error and the size of the regression tree as small as possible. cp is the complexity parameter, listed in sol.rpart$cptable; there xerror is the model error obtained by cross-validation and xstd is the standard deviation of that error, so consider the range xerror +/- xstd. Pruning means finding a reasonable cp value: as the number of splits grows, the complexity parameter decreases monotonically, but the prediction error first decreases and then increases. prune(sol.rpart, cp = 0.02) prunes away the branches whose cp is below 0.02. Use the plotcp() function to plot how the cross-validated error varies with cp.
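Picking a pruning cp from the cptable can be automated. A common heuristic, the 1-SE rule, matches the xerror +/- xstd idea above: take the smallest tree whose cross-validated error is within one xstd of the minimum. A Python sketch on a made-up cptable (values invented, shaped like rpart's sol.rpart$cptable):

```python
# Each row: (cp, nsplit, xerror, xstd) -- invented for illustration
cptable = [
    (0.50, 0, 1.00, 0.10),
    (0.10, 1, 0.60, 0.08),
    (0.02, 2, 0.55, 0.08),  # minimum cross-validated error
    (0.01, 3, 0.56, 0.08),  # more splits, error rising again
]

def pick_cp(table):
    # 1-SE rule: threshold = min(xerror) + its xstd, then take the
    # smallest tree (fewest splits) whose xerror is under the threshold.
    best = min(table, key=lambda row: row[2])
    threshold = best[2] + best[3]
    for cp, nsplit, xerror, xstd in sorted(table, key=lambda row: row[1]):
        if xerror <= threshold:
            return cp
    return best[0]

print(pick_cp(cptable))  # 0.1
```

Here the minimum xerror is 0.55 with xstd 0.08, so any tree with xerror <= 0.63 qualifies, and the one-split tree (cp = 0.10) is chosen over the larger trees.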
R language: regression analysis notes