Keras series: early stopping. During training, there are times when you want training to stop at a particular point, and early stopping implements exactly that; a model stopped at the right moment often generalizes better. Its effect is similar to L2 regularization: it tends to select a network whose weight norm ||w|| is relatively small, which is why early stopping can sometimes be used in its place.
Early stopping
Advantage: running gradient descent only once lets you observe small, intermediate, and relatively large values of the weight norm ||w||, without having to try many values of the L2 regularization hyperparameter lambda.
Disadvantage: the two problems described below can no longer be handled independently, which makes things more complicated to reason about.
A typical machine learning workflow has two separate steps: first, define the cost function J and optimize it, for example with gradient descent; second, keep the model from overfitting, for example with regularization. These are independent concerns. With early stopping, however, a single mechanism controls both at once, which couples the two problems: training halts somewhere in the middle, before the cost function has fallen to a sufficiently low value.
from keras.callbacks import EarlyStopping

EarlyStopping(monitor='val_loss', patience=0, verbose=0, mode='auto')
monitor: the quantity to monitor, e.g. the validation loss 'val_loss'.
patience: once the monitored quantity stops improving (e.g. the loss has not decreased relative to earlier epochs), training continues for this many additional epochs before stopping.
verbose: verbosity mode for the messages printed when the callback fires.
mode: one of 'auto', 'min', or 'max'. In 'min' mode, training terminates when the monitored value stops decreasing; in 'max' mode, training terminates when it stops increasing; 'auto' infers the direction from the name of the monitored quantity.
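To make the patience behavior concrete, here is a minimal pure-Python sketch of the stopping rule. This is not Keras's actual implementation, just a simplified illustration of the logic, assuming monitor='val_loss' and mode='min'; the function name early_stopping_epochs is made up for this example:

```python
def early_stopping_epochs(losses, patience=0):
    """Return the number of epochs run before early stopping fires.

    Mimics EarlyStopping with monitor='val_loss' and mode='min':
    training stops once the monitored value has failed to improve
    for more than `patience` consecutive epochs.
    """
    best = float("inf")
    wait = 0  # epochs since the last improvement
    for epoch, loss in enumerate(losses, start=1):
        if loss < best:       # improvement: remember it, reset the counter
            best = loss
            wait = 0
        else:                 # no improvement this epoch
            wait += 1
            if wait > patience:
                return epoch  # stop training here
    return len(losses)        # early stopping never triggered

# Validation loss improves for three epochs, then plateaus:
history = [0.9, 0.7, 0.6, 0.65, 0.66, 0.64]
print(early_stopping_epochs(history, patience=0))  # stops at epoch 4
print(early_stopping_epochs(history, patience=2))  # stops at epoch 6
```

With patience=0 the loop aborts on the first epoch without improvement, while a larger patience tolerates a few flat or worse epochs before giving up, which is usually safer with noisy validation curves.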