Python Scikit-learn Machine Learning Toolkit Learning Note: cross_validation module

As its name implies, the function of the sklearn.cross_validation module is to perform cross validation.
Cross validation, roughly speaking, means the following: the raw data is divided into train data and test data. The train data is used to fit the model, and the test data is used to check its accuracy; the error measured on the test data is called the validation error. When applying an algorithm to raw data, we cannot simply split off train and test data once at random and take the resulting validation error as the standard measure of the algorithm's quality, because a single split is subject to chance. Instead, we have to split the data into train and test sets several times and compute the validation error for each split. This yields a set of validation errors, and from this set the quality of the algorithm can be measured much more accurately. Cross validation is a very good way to evaluate performance when the amount of data is limited. There are many ways to divide the raw data into train and test data, which gives rise to many different cross validation schemes.
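To make the repeated splitting concrete, here is a minimal sketch using the module's KFold iterator on a toy set of six samples. It assumes the old-style API, where KFold takes the number of samples, and the default unshuffled splitting:

>>> from sklearn import cross_validation
>>> kf = cross_validation.KFold(6, n_folds=3)   # 6 samples, 3 folds
>>> for train_index, test_index in kf:          # each pass is one split
...     print(train_index, test_index)
[2 3 4 5] [0 1]
[0 1 4 5] [2 3]
[0 1 2 3] [4 5]

Training and scoring on each of the three splits gives three validation errors instead of one.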
The main function of the cross validation module in sklearn is sklearn.cross_validation.cross_val_score. It is called as

scores = cross_validation.cross_val_score(clf, raw_data, raw_target, cv=5, score_func=None)

Parameter explanation: clf is the classifier, and it can be any classifier, for example a support vector machine classifier: clf = svm.SVC(kernel='linear', C=1). The cv parameter selects the cross validation method. If cv is an integer and the raw_target argument is provided, StratifiedKFold splitting is used; if raw_target is not provided, KFold splitting is used. The return value of cross_val_score is the accuracy obtained on the test data for each of the different partitions of the raw data. The accuracy metric can be specified through the score_func parameter; if it is not specified, the classifier's own default accuracy score is used. The other parameters are not very important. A concrete usage example of cross_val_score is shown below:

>>> clf = svm.SVC(kernel='linear', C=1)
>>> scores = cross_validation.cross_val_score(
...     clf, raw_data, raw_target, cv=5)
>>> scores
array([ 1.  ...,  0.96...,  0.9 ...,  0.96...,  1.  ...])
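Since the snippet above uses raw_data and raw_target as placeholders, here is a self-contained sketch on the bundled iris dataset. It assumes an older scikit-learn release: sklearn.cross_validation was deprecated in 0.18 in favor of sklearn.model_selection and later removed:

>>> from sklearn import cross_validation, datasets, svm
>>> iris = datasets.load_iris()              # 150 samples, 3 classes
>>> clf = svm.SVC(kernel='linear', C=1)
>>> # cv=5 with a target provided, so StratifiedKFold is used internally
>>> scores = cross_validation.cross_val_score(clf, iris.data, iris.target, cv=5)
>>> scores.shape                             # one accuracy value per fold
(5,)
>>> mean_score = scores.mean()               # average accuracy over the 5 folds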
In addition to the KFold and StratifiedKFold that have just been mentioned, there are many other ways to divide the raw data, but the other partitioning methods are invoked slightly differently from the first two (the idea is the same in all cases), as illustrated below with the ShuffleSplit method:

>>> n_samples = raw_data.shape[0]
>>> cv = cross_validation.ShuffleSplit(n_samples, n_iter=3,
...     test_size=0.3, random_state=0)
>>> cross_validation.cross_val_score(clf, raw_data, raw_target, cv=cv)
array([ 0.97...,  0.97...,  1.  ...])

The other partitioning methods are as follows:

cross_validation.Bootstrap
cross_validation.LeaveOneLabelOut
cross_validation.LeaveOneOut
cross_validation.LeavePLabelOut
cross_validation.LeavePOut
cross_validation.StratifiedShuffleSplit

They are invoked in the same way as ShuffleSplit, but each has its own parameters. For the specific meaning of these methods, see a machine learning textbook.
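As one more illustration of how these iterators work, here is a minimal sketch with LeaveOneOut on a toy set of four samples; it assumes the old-style constructor that takes the number of samples. Each split holds out exactly one sample for testing:

>>> from sklearn import cross_validation
>>> loo = cross_validation.LeaveOneOut(4)    # 4 samples -> 4 splits
>>> for train_index, test_index in loo:
...     print(train_index, test_index)
[1 2 3] [0]
[0 2 3] [1]
[0 1 3] [2]
[0 1 2] [3]

Like the ShuffleSplit object above, loo can be passed directly to cross_val_score through the cv parameter.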
One more useful function is train_test_split. Its function: train data and test data are randomly selected from the samples. It is invoked as:

X_train, X_test, y_train, y_test = cross_validation.train_test_split(
    train_data, train_target, test_size=0.4, random_state=0)

test_size is the proportion of the samples assigned to the test set; if it is an integer, it is the absolute number of test samples. random_state is the seed of the random number generator: different seeds give different random splits, while the same seed always gives the same split.
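A self-contained sketch on the iris dataset again, checking the resulting shapes (with test_size=0.4, 40% of the 150 samples go to the test set):

>>> from sklearn import cross_validation, datasets
>>> iris = datasets.load_iris()
>>> X_train, X_test, y_train, y_test = cross_validation.train_test_split(
...     iris.data, iris.target, test_size=0.4, random_state=0)
>>> X_train.shape, X_test.shape              # 90 train / 60 test samples
((90, 4), (60, 4))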