Cross validation is a model evaluation method that is better than simply examining residuals. The problem with residual evaluations is that they give no indication of how well the learner will perform when asked to make new predictions for data it has not already seen. One way to overcome this problem is to not use the entire data set when training a learner: some of the data is removed before training begins, and when training is done, the removed data can be used to test the performance of the learned model on "new" data. This is the basic idea behind a whole class of model evaluation methods called cross validation.
A few common techniques used for cross validation
Train/Test Split approach
In this approach we randomly split the complete data into training and test sets, then perform the model training on the training set and use the test set for validation, ideally splitting the data 70:30 or 80:20. With this approach there is a possibility of high bias if we have limited data, because the held-out portion may contain information about the data that the model never sees during training. If our data is large and the test sample and train sample have the same distribution, then this approach is acceptable.
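As a minimal sketch, here is how such a split might look with scikit-learn's train_test_split; the X and y used here are dummy data purely for illustration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X = np.random.rand(100, 5)          # dummy feature matrix: 100 samples, 5 features
y = np.random.randint(0, 2, 100)    # dummy binary labels

# Hold out 30% of the data for testing (the 70:30 split mentioned above).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Train only on the training portion, then validate on the held-out portion.
model = LogisticRegression().fit(X_train, y_train)
print("Accuracy on the held-out test set:", model.score(X_test, y_test))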
K-Folds Cross Validation:
The K-Folds technique is popular and easy to understand, and it generally results in a less biased model compared to other methods, because it ensures that every observation from the original dataset has a chance of appearing in both the training set and the test set. This is one of the best approaches if we have limited input data.
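A minimal sketch of K-Folds with scikit-learn, again on dummy data: each of the 5 folds serves once as the test set while the other 4 are used for training, and the per-fold scores are averaged.

import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from sklearn.linear_model import LogisticRegression

X = np.random.rand(100, 5)          # dummy feature matrix
y = np.random.randint(0, 2, 100)    # dummy binary labels

# 5 folds: every observation appears in the test set exactly once.
kf = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(LogisticRegression(), X, y, cv=kf)
print("Per-fold accuracy:", scores)
print("Mean accuracy:", scores.mean())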
Mistakes often made when using Cross-Validation
Since many studies in the laboratory have used evolutionary algorithms (EAs) together with classifiers, the fitness function usually involves the classifier's recognition rate. However, cross-validation is often used incorrectly in this setting. As mentioned earlier, only training data can be used for model construction, so only the recognition rate on the training data can appear in the fitness function. The EA is the method used to tune the best parameters of the model during the training process, so the test data can be used only after the EA has finished evolving and the model parameters have been fixed. How, then, should an EA be combined with cross-validation? The essence of cross-validation is to estimate the generalization error of a classification method on a dataset; it is not a method for designing a classifier. Therefore, cross-validation cannot be used inside the fitness function of the EA: within the fitness function, all the available samples belong to the training set, so which samples would serve as the test set? If a cross-validation training or test recognition rate is used in a fitness function, such an experimental method can no longer be called cross-validation.
The correct way to combine an EA with k-fold cross-validation (k-CV) is to divide the dataset into k equal subsets, each time taking one subset as the test set and the remaining k-1 subsets as the training set, and applying only the training set when calculating the EA's fitness function (as for how the training set may be further used internally, there is no restriction). Therefore, the correct k-CV procedure runs a total of k EA evolutions, producing k classifiers. The k-CV test recognition rate is the average of the recognition rates of the k test sets on the k classifiers obtained from EA training. A schematic sketch follows below.
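Here is a schematic sketch of that protocol, under stated assumptions: evolve_classifier is a hypothetical stand-in for whatever EA tunes the model, and LogisticRegression is only a placeholder for the evolved classifier. The key point it illustrates is that one full EA run happens per fold, and each test fold is touched only after that fold's evolution is finished.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

def evolve_classifier(X_train, y_train):
    # Hypothetical EA: evolves/tunes a classifier using ONLY the training fold.
    # A real implementation would run the evolutionary search here; the fitness
    # function may use the training fold however it likes, but nothing else.
    return LogisticRegression().fit(X_train, y_train)  # placeholder for the EA

X = np.random.rand(100, 5)          # dummy feature matrix
y = np.random.randint(0, 2, 100)    # dummy binary labels

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_scores = []
for train_idx, test_idx in kf.split(X):
    # One complete EA evolution per fold, trained on the k-1 training subsets.
    model = evolve_classifier(X[train_idx], y[train_idx])
    # The test fold is used only now, after the model parameters are fixed.
    fold_scores.append(model.score(X[test_idx], y[test_idx]))

# The k-CV test recognition rate is the average over the k folds.
print("k-CV test recognition rate:", np.mean(fold_scores))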