In this section we introduce two very useful diagnostic tools for improving the performance of a learning algorithm: the learning curve and the validation curve. The learning curve can be used to determine whether a model is overfitting or underfitting.
Using the learning curve to diagnose bias and variance problems
If a model is too complex relative to the training set, for example if it has too many parameters, it is likely to overfit. One way to avoid overfitting is to collect more training data, but that is often impractical. By plotting training-set and validation-set accuracy against the size of the training set, we can easily detect whether the model suffers from high variance or high bias, and whether collecting more training data would help.
In the upper-left plot, the model has high bias: both its training and validation accuracy are low, which suggests it is underfitting. Remedies for underfitting include increasing the number of model parameters, for example by constructing additional features, or by decreasing the regularization strength.
In the upper-right plot, the model has high variance, indicated by the large gap between training-set and validation-set accuracy. Remedies for overfitting include collecting more training data or reducing the complexity of the model, for example by increasing the regularization strength or by reducing the number of features through feature selection.
Both of these problems can be diagnosed with the learning curve.
Let's look at the learning curve first:
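The code for this step did not survive extraction, so here is a minimal sketch of how `learning_curve` is typically used. The breast-cancer dataset and the `StandardScaler` + `LogisticRegression` pipeline are assumptions for illustration; the parameters `train_sizes=np.linspace(0.1, 1.0, 10)` and `cv=10` match the settings described in the text.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Illustrative dataset and pipeline (assumed, not from the original text)
X, y = load_breast_cancer(return_X_y=True)
pipe_lr = make_pipeline(StandardScaler(),
                        LogisticRegression(penalty='l2', max_iter=10000))

# 10 evenly spaced relative training-set sizes from 10% to 100%,
# evaluated with stratified 10-fold cross-validation
train_sizes, train_scores, test_scores = learning_curve(
    estimator=pipe_lr, X=X, y=y,
    train_sizes=np.linspace(0.1, 1.0, 10),
    cv=10, n_jobs=1)

# Average accuracy across the folds for each training-set size;
# plotting these two curves (e.g. with matplotlib) gives the learning curve
train_mean = np.mean(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
```

Plotting `train_mean` and `test_mean` against `train_sizes` then reveals whether the gap between the two curves shrinks as more data is added.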
The train_sizes parameter of learning_curve controls the absolute or relative numbers of training samples used to generate the learning curve. Here we set train_sizes=np.linspace(0.1, 1.0, 10), which uses 10 evenly spaced relative sizes of the training set. By default, learning_curve uses stratified k-fold cross-validation to compute the cross-validation accuracy, and we set k through the cv parameter.
It can be seen that the model performs well on the validation set, but there is still a small gap between training-set and validation-set accuracy, so the model may be slightly overfitting.
Addressing over-fitting and under-fitting with the validation curve
The validation curve is another very useful tool for improving model performance, since it can reveal both over-fitting and under-fitting problems.
The validation curve is very similar to the learning curve; the difference is that it plots the model's accuracy as a function of a model parameter rather than of the training-set size:
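The accompanying code is missing here as well, so the following is a minimal sketch of `validation_curve` varying the logistic-regression parameter C. The breast-cancer dataset, the pipeline, and the specific `param_range` values are illustrative assumptions; the step name `logisticregression__C` follows scikit-learn's pipeline naming convention.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import validation_curve

# Illustrative dataset and pipeline (assumed, not from the original text)
X, y = load_breast_cancer(return_X_y=True)
pipe_lr = make_pipeline(StandardScaler(),
                        LogisticRegression(penalty='l2', max_iter=10000))

# Candidate values of C, the inverse regularization strength (assumed range)
param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]

# Accuracy for each value of C, estimated with 10-fold cross-validation
train_scores, test_scores = validation_curve(
    estimator=pipe_lr, X=X, y=y,
    param_name='logisticregression__C',
    param_range=param_range, cv=10)
```

Plotting the mean of `train_scores` and `test_scores` against `param_range` (usually on a log scale for C) gives the validation curve.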
We get the validation curve for parameter C.
Like learning_curve, the validation_curve function uses stratified k-fold cross-validation by default to estimate model performance. Inside validation_curve we specify the parameter to evaluate; here it is C, the inverse regularization parameter of logistic regression.
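Rather than reading the best C off the plot by eye, one can also average the cross-validation accuracy for each candidate value and take the argmax. This is a minimal sketch assuming scikit-learn's `validation_curve` on the breast-cancer dataset with an illustrative parameter range; none of these specifics come from the original text.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import validation_curve

X, y = load_breast_cancer(return_X_y=True)
pipe_lr = make_pipeline(StandardScaler(),
                        LogisticRegression(penalty='l2', max_iter=10000))

param_range = [0.001, 0.01, 0.1, 1.0, 10.0, 100.0]  # assumed candidates
train_scores, test_scores = validation_curve(
    estimator=pipe_lr, X=X, y=y,
    param_name='logisticregression__C',
    param_range=param_range, cv=10)

# Mean validation accuracy per value of C; the argmax picks the best candidate
test_mean = np.mean(test_scores, axis=1)
best_C = param_range[int(np.argmax(test_mean))]
```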
From the curve, we can see that the best value of C is around 0.1.
Python Machine Learning Chinese catalog (http://www.aibbt.com/a/20787.html)
When reprinting, please credit the source: Python Machine Learning (http://www.aibbt.com/a/pythonmachinelearning/)
Python machine learning: 6.3 Debugging algorithms using learning curve and validation curve