This article introduces Keras in a question-and-answer format. The answers are deliberately brief rather than exhaustive, and are organized for quick reference.
Keras Introduction:
Keras is a minimalist, highly modular third-party neural network library. It is written in Python on top of Theano and makes full use of both the GPU and the CPU. It was developed to make neural network experiments faster, so it is well suited to early-stage network prototyping. It supports convolutional networks, recurrent networks, and combinations of the two, as well as arbitrary user-designed architectures, and it switches between GPU and CPU execution seamlessly.

No.0 How do Windows users install the Keras framework?
The easiest way is pip: open cmd, run pip install keras, and wait for the installation to finish; the required dependencies are installed automatically. If the pip command fails, it is recommended to follow the "Win7 installation of Theano" tutorial to install Python and Theano first, and then install Keras. For other installation methods, see here.

No.1 How to save a Keras model?
Using pickle or cPickle is not recommended.
(1) If you only need to save the model architecture, the code is as follows:

[python]
# save as JSON
json_string = model.to_json()
# save as YAML
yaml_string = model.to_yaml()

# model reconstruction from JSON:
from keras.models import model_from_json
model = model_from_json(json_string)

# model reconstruction from YAML:
from keras.models import model_from_yaml
model = model_from_yaml(yaml_string)

(2) If you also need to save the weights:
[python]
model.save_weights('my_model_weights.h5')
model.load_weights('my_model_weights.h5')
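The architecture-versus-weights split above can be sketched in plain Python. The TinyModel class below is a hypothetical stand-in, not part of Keras; it only illustrates why to_json() alone is not enough to restore a trained model:

```python
import json

class TinyModel:
    """A toy stand-in for a Keras model: architecture (layer sizes) plus weights."""
    def __init__(self, layer_sizes):
        self.layer_sizes = list(layer_sizes)
        # a freshly built model starts with blank weights
        self.weights = [0.0] * len(layer_sizes)

    def to_json(self):
        # serializes only the architecture, like model.to_json()
        return json.dumps({"layer_sizes": self.layer_sizes})

    @staticmethod
    def from_json(json_string):
        config = json.loads(json_string)
        return TinyModel(config["layer_sizes"])

model = TinyModel([4, 8, 2])
model.weights = [0.1, 0.2, 0.3]          # pretend these were learned

rebuilt = TinyModel.from_json(model.to_json())
print(rebuilt.layer_sizes)  # [4, 8, 2] -- the architecture survives
print(rebuilt.weights)      # [0.0, 0.0, 0.0] -- the weights do not; save them separately
```

This is why the two snippets belong together: the JSON/YAML string rebuilds the structure, and save_weights/load_weights restores the learned parameters.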
No.2 Why is the training loss greater than the test loss?
Keras has two modes: training and testing. Regularization mechanisms, such as dropout and L1/L2 weight penalties, are turned off at test time.
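A minimal sketch of why the two modes behave differently, using inverted dropout in plain Python (illustrative only, not Keras's implementation):

```python
import random

def dropout(values, rate, train):
    """Inverted dropout: active only in training mode, identity at test time."""
    if not train:
        return list(values)  # test mode: the layer is a no-op
    keep = 1.0 - rate
    # drop each unit with probability `rate`; scale survivors so the
    # expected activation matches the test-time value
    return [v / keep if random.random() < keep else 0.0 for v in values]

random.seed(0)
x = [1.0, 2.0, 3.0, 4.0]
print(dropout(x, rate=0.5, train=True))   # noisy: some zeros, survivors doubled
print(dropout(x, rate=0.5, train=False))  # exactly x: no extra loss at test time
```

The noise injected in training mode is one reason the reported training loss sits above the test loss for the same data.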
In addition, the training loss is the average of the losses over each training batch. Because the model changes throughout the epoch, the loss on the first batches is generally higher than on the last batches. The test loss, by contrast, is computed with the model as it stands at the end of the epoch, so the returned value is comparatively small.

No.3 How to visualize the output of an intermediate layer?
Build a Theano function that maps the model's input to the intermediate layer's output. Examples are as follows:
[python]
# with a Sequential model
get_3rd_layer_output = theano.function([model.layers[0].input],
                                       model.layers[3].get_output(train=False))
layer_output = get_3rd_layer_output(X)

# with a Graph model
get_conv_layer_output = theano.function([model.inputs[i].input for i in model.input_order],
                                        model.outputs['conv'].get_output(train=False),
                                        on_unused_input='ignore')
conv_output = get_conv_layer_output(input_data_dict)
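The idea behind the snippet above — compile a function from the model's input to a chosen layer's output — can be sketched framework-free. The toy layers below are hypothetical, not the Keras API:

```python
def relu(x):
    return [max(0.0, v) for v in x]

def scale(factor):
    return lambda x: [factor * v for v in x]

# a toy "model": an ordered list of layer functions
layers = [scale(2.0), relu, scale(0.5)]

def get_layer_output(layers, index, x):
    """Run the forward pass and return the output of layers[index]."""
    for layer in layers[:index + 1]:
        x = layer(x)
    return x

x = [-1.0, 3.0]
print(get_layer_output(layers, 0, x))  # [-2.0, 6.0] after the first layer
print(get_layer_output(layers, 2, x))  # [0.0, 3.0] full forward pass
```

theano.function does the same thing symbolically: it cuts the computation graph at the requested layer and compiles just that prefix.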
No.4 How to use Keras with datasets too large to fit in memory?
Use batch training: model.train_on_batch(X, y) and model.test_on_batch(X, y). Reference: the models documentation.
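The train_on_batch loop can be sketched as follows; load_batch is a hypothetical placeholder for your own disk reader (e.g. slicing an HDF5 file), not a Keras function:

```python
def batch_generator(n_samples, batch_size):
    """Yield (start, end) index ranges so only one batch is in memory at a time."""
    for start in range(0, n_samples, batch_size):
        yield start, min(start + batch_size, n_samples)

def load_batch(start, end):
    # stand-in for reading a slice of the dataset from disk
    return list(range(start, end))

batch_sizes = []
for start, end in batch_generator(n_samples=10, batch_size=4):
    X_batch = load_batch(start, end)
    # here you would call model.train_on_batch(X_batch, y_batch)
    batch_sizes.append(len(X_batch))

print(batch_sizes)  # [4, 4, 2] -- three batches cover all 10 samples
```

Only one batch is ever materialized at a time, which is what makes out-of-memory datasets trainable.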
You can also see batch training in action in our CIFAR10 example.

No.5 How to interrupt training when the validation loss stops decreasing?
Use the EarlyStopping callback function; the code is as follows:
[python]
from keras.callbacks import EarlyStopping
early_stopping = EarlyStopping(monitor='val_loss', patience=0)
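The behaviour of the callback can be sketched with a plain-Python patience counter (illustrative only, not the Keras source):

```python
def train_with_early_stopping(val_losses, patience):
    """Return the number of epochs actually run before stopping.

    Mimics EarlyStopping: stop once val_loss has failed to improve
    for more than `patience` consecutive epochs.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait > patience:
                return epoch  # training interrupted here
    return len(val_losses)

# losses improve, then plateau: with patience=1 training stops at epoch 5
print(train_with_early_stopping([1.0, 0.8, 0.7, 0.7, 0.7, 0.7], patience=1))  # 5
```

In real Keras code, the callback object is passed to model.fit via the callbacks argument and watches the monitored quantity after every epoch.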