Q: The loss value is negative during training. Cause: the input training data is not normalized. Workaround: pass the input values through the following normalization function:
# Data normalization (min-max scaling to [0, 1])
def data_in_one(inputdata):
    inputdata = (inputdata - inputdata.min()) / (inputdata.max() - inputdata.min())
    return inputdata
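As a quick check, the function maps the minimum of the input to 0 and the maximum to 1. It assumes an array-like input with .min()/.max() methods, such as a NumPy array; the values below are made up for illustration:

import numpy as np

raw = np.array([10.0, 25.0, 40.0, 70.0])
print(data_in_one(raw))  # [0.   0.25 0.5  1.  ]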
Q: How to interpret changes in loss and acc (and what to do when the loss does not change for several epochs). Cause analysis (adapted from http://blog.csdn.net/SMF0504/article/details/71698354):
- Train loss keeps decreasing and test loss keeps decreasing: the network is still learning.
- Train loss keeps decreasing but test loss levels off: the network is overfitting.
- Train loss levels off while test loss keeps decreasing: the dataset definitely has a problem.
- Train loss levels off and test loss levels off: learning has hit a bottleneck; reduce the learning rate or the batch size (see the sketch after this list).
- Train loss keeps increasing and test loss keeps increasing: the network architecture is poorly designed, the training hyperparameters are set improperly, or the dataset needs cleaning.
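For the bottleneck case, lowering the learning rate can be automated with Keras's ReduceLROnPlateau callback. A minimal sketch, assuming a compiled model and the x, y, xt, yt data used later in this post; the factor, patience, and min_lr values are illustrative, not from the original:

from keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever val_loss has not improved for 5 epochs
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5, min_lr=1e-6)
model.fit(x, y, validation_data=(xt, yt), callbacks=[reduce_lr])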
Q: How to visualize the Keras training process (changes in loss and acc). Define the visualization callback with the following code:
import keras
from keras.utils import np_utils
import matplotlib.pyplot as plt
%matplotlib inline

# Define a LossHistory class to record loss and acc
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = {'batch': [], 'epoch': []}
        self.accuracy = {'batch': [], 'epoch': []}
        self.val_loss = {'batch': [], 'epoch': []}
        self.val_acc = {'batch': [], 'epoch': []}

    def on_batch_end(self, batch, logs={}):
        self.losses['batch'].append(logs.get('loss'))
        self.accuracy['batch'].append(logs.get('acc'))
        self.val_loss['batch'].append(logs.get('val_loss'))
        self.val_acc['batch'].append(logs.get('val_acc'))

    def on_epoch_end(self, batch, logs={}):
        self.losses['epoch'].append(logs.get('loss'))
        self.accuracy['epoch'].append(logs.get('acc'))
        self.val_loss['epoch'].append(logs.get('val_loss'))
        self.val_acc['epoch'].append(logs.get('val_acc'))

    def loss_plot(self, loss_type):
        iters = range(len(self.losses[loss_type]))
        plt.figure()
        # acc
        plt.plot(iters, self.accuracy[loss_type], 'r', label='train acc')
        # loss
        plt.plot(iters, self.losses[loss_type], 'g', label='train loss')
        if loss_type == 'epoch':
            # val_acc
            plt.plot(iters, self.val_acc[loss_type], 'b', label='val acc')
            # val_loss
            plt.plot(iters, self.val_loss[loss_type], 'k', label='val loss')
        plt.grid(True)
        plt.xlabel(loss_type)
        plt.ylabel('acc-loss')
        plt.legend(loc="upper right")
        plt.show()
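Note: newer Keras releases (2.3 and later, including tf.keras) log accuracy under the keys 'accuracy'/'val_accuracy' rather than 'acc'/'val_acc'. If the accuracy curves come out empty, adjust the keys accordingly, e.g.:

self.accuracy['batch'].append(logs.get('accuracy'))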
Before calling model.fit, instantiate the callback:
history = LossHistory()
Then pass callbacks=[history] to model.fit, and call history.loss_plot after training finishes:
model.fit(x, y, batch_size=32, nb_epoch=20, validation_data=(xt, yt), callbacks=[history])
history.loss_plot('epoch')
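Putting the pieces together, a self-contained sketch, assuming the LossHistory class defined above has already been run; the two-layer model and the random data are placeholders invented for illustration, not part of the original post:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import np_utils

# Placeholder data: 20 features, 2 one-hot classes
x = np.random.rand(1000, 20)
y = np_utils.to_categorical(np.random.randint(2, size=1000), 2)
xt = np.random.rand(200, 20)
yt = np_utils.to_categorical(np.random.randint(2, size=200), 2)

# A tiny placeholder model
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=20))
model.add(Dense(2, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

history = LossHistory()
model.fit(x, y, batch_size=32, nb_epoch=20, validation_data=(xt, yt), callbacks=[history])
history.loss_plot('epoch')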
The resulting figure shows the train/val acc and loss curves over epochs.