Visualization of Keras deep learning training results

Source: Internet
Author: User
Tags: keras
'''This script goes along the blog post "Building powerful image classification
models using very little data" from blog.keras.io. It uses data that can be
downloaded at: https://www.kaggle.com/c/dogs-vs-cats/data

In our setup, we:
- created a data/ folder
- created train/ and validation/ subfolders inside data/
- created cats/ and dogs/ subfolders inside train/ and validation/
- put the cat pictures index 0-999 in data/train/cats
- put the cat pictures index 1000-1400 in data/validation/cats
- put the dog pictures index 12500-13499 in data/train/dogs
- put the dog pictures index 13500-13900 in data/validation/dogs

So we have 1000 training examples for each class and 400 validation examples
for each class. In summary, this is our directory structure:

data/
    train/
        dogs/
            dog001.jpg
            dog002.jpg
            ...
        cats/
            cat001.jpg
            cat002.jpg
            ...
    validation/
        dogs/
            dog001.jpg
            dog002.jpg
            ...
        cats/
            cat001.jpg
            cat002.jpg
            ...

Thanks to http://blog.csdn.net/aggresss/article/details/78588135 for the bug fix.
'''
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential, Model
from keras.layers import Dropout, Flatten, Dense
from keras.regularizers import l2

# path to the model weights files
weights_path = '../keras/examples/vgg16_weights.h5'
top_model_weights_path = 'bottleneck_fc_model.h5'
# dimensions of our images
img_width, img_height = 150, 150

data_root = 'm:/dataset/dog_cat/'
train_data_dir = data_root + 'data/train'
validation_data_dir = data_root + 'data/validation'
nb_train_samples = 2000
nb_validation_samples = 800
epochs = 50        # number of training epochs; adjust to taste
batch_size = 16    # mini-batch size; adjust to available memory

# build the VGG16 network
# input_shape fixes the image size used for training
base_model = applications.VGG16(weights='imagenet', include_top=False,
                                input_shape=(150, 150, 3))
print('Model loaded.')

# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=base_model.output_shape[1:]))
top_model.add(Dense(256, activation='relu', kernel_regularizer=l2(0.001)))
top_model.add(Dropout(0.8))
top_model.add(Dense(1, activation='sigmoid'))

# note that it is necessary to start with a fully-trained classifier,
# including the top classifier, in order to successfully do fine-tuning
top_model.load_weights(top_model_weights_path)

# add the model on top of the convolutional base
# model.add(top_model)  # bug: Sequential-style add does not work here
model = Model(inputs=base_model.input, outputs=top_model(base_model.output))

# set the first 15 layers (up to the last conv block) to non-trainable
# (their weights will not be updated); see the bug-fix link in the docstring
for layer in model.layers[:15]:
    layer.trainable = False

# compile the model with an SGD/momentum optimizer
# and a very slow learning rate
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.SGD(lr=1e-4, momentum=0.9),
              metrics=['accuracy'])

# prepare data augmentation configuration
train_datagen = ImageDataGenerator(rescale=1. / 255,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
    validation_data_dir,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    class_mode='binary')

model.summary()  # prints a summary representation of the model

# let's visualize layer names and layer indices to see how many layers
# we should freeze:
for i, layer in enumerate(base_model.layers):
    print(i, layer.name)

from keras.utils import plot_model
plot_model(model, to_file='model.png')

from keras.callbacks import History, ModelCheckpoint
import keras

history = History()
model_checkpoint = ModelCheckpoint('temp_model.hdf5', monitor='loss',
                                   save_best_only=True)
# log_dir sets where the logs are stored; write_images keeps the network
# weights as images for display in TensorBoard; histogram_freq controls how
# often (in epochs) the weight and activation histograms are computed
tb_cb = keras.callbacks.TensorBoard(log_dir='log', write_images=1,
                                    histogram_freq=0)
callbacks = [history, model_checkpoint, tb_cb]

# fine-tune the model
history = model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    callbacks=callbacks,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size,
    verbose=2)
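The call to top_model.load_weights(top_model_weights_path) assumes that bottleneck_fc_model.h5 already exists, i.e. that the small top classifier has been trained beforehand on the bottleneck features of the frozen VGG16 base, in the spirit of the blog post referenced in the docstring. A rough sketch of that preliminary step is shown below; it reuses the path and size variables defined above, and the optimizer, dropout rate and epoch count in it are illustrative assumptions, not values taken from this script. It also assumes that cats sort before dogs in the class directories, so with shuffle=False the first half of each split is labelled 0 and the second half 1.

# Sketch: pre-train the top classifier on VGG16 bottleneck features
# (illustrative only; run this before the fine-tuning script above,
#  it produces the bottleneck_fc_model.h5 that load_weights expects)
import numpy as np
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense

datagen = ImageDataGenerator(rescale=1. / 255)
vgg = applications.VGG16(weights='imagenet', include_top=False)

# run the training and validation images once through the frozen VGG16 base
generator = datagen.flow_from_directory(train_data_dir,
                                        target_size=(img_height, img_width),
                                        batch_size=batch_size,
                                        class_mode=None, shuffle=False)
bottleneck_train = vgg.predict_generator(generator,
                                         nb_train_samples // batch_size)
generator = datagen.flow_from_directory(validation_data_dir,
                                        target_size=(img_height, img_width),
                                        batch_size=batch_size,
                                        class_mode=None, shuffle=False)
bottleneck_val = vgg.predict_generator(generator,
                                       nb_validation_samples // batch_size)

# labels: assumes cats (0) come before dogs (1) in directory order
train_labels = np.array([0] * (nb_train_samples // 2) +
                        [1] * (nb_train_samples // 2))
val_labels = np.array([0] * (nb_validation_samples // 2) +
                      [1] * (nb_validation_samples // 2))

# same architecture as top_model above, trained from scratch on the features
clf = Sequential()
clf.add(Flatten(input_shape=bottleneck_train.shape[1:]))
clf.add(Dense(256, activation='relu'))
clf.add(Dropout(0.5))
clf.add(Dense(1, activation='sigmoid'))
clf.compile(optimizer='rmsprop', loss='binary_crossentropy',
            metrics=['accuracy'])
clf.fit(bottleneck_train, train_labels,
        epochs=50, batch_size=batch_size,
        validation_data=(bottleneck_val, val_labels))
clf.save_weights(top_model_weights_path)  # -> bottleneck_fc_model.h5

Once the fine-tuning itself is running, the TensorBoard callback above writes its logs to the log directory, so the curves can also be watched live with tensorboard --logdir=log.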
# save model and weights
model.save('fine_tune_model.h5')
model.save_weights('fine_tune_model_weight')
print(history.history)
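Because model.save('fine_tune_model.h5') stores the architecture and the weights together, the fine-tuned network can later be restored without rebuilding it layer by layer. The sketch below only illustrates that round trip: the test image path is a made-up placeholder, and whether a sigmoid output above 0.5 means "dog" depends on the class indices assigned by flow_from_directory (check train_generator.class_indices).

# Sketch: reload the fine-tuned model and classify a single image
# ('some_test_image.jpg' is a placeholder path, not from the original post)
import numpy as np
from keras.models import load_model
from keras.preprocessing import image

restored = load_model('fine_tune_model.h5')
img = image.load_img('some_test_image.jpg', target_size=(150, 150))
x = image.img_to_array(img) / 255.0    # same rescaling as the generators
x = np.expand_dims(x, axis=0)          # shape (1, 150, 150, 3)
prob = restored.predict(x)[0][0]       # sigmoid output in [0, 1]
print('probability of the positive class:', prob)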

# visualization section
from matplotlib import pyplot as plt

# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# Summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
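The two separate figures above can also be drawn side by side in a single figure, which is handy when comparing runs. This variant is not part of the original script, just a small matplotlib rearrangement of the same history data.

# Sketch: accuracy and loss side by side in one figure
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history.history['acc'], label='train')
ax1.plot(history.history['val_acc'], label='test')
ax1.set_title('model accuracy')
ax1.set_xlabel('epoch')
ax1.set_ylabel('accuracy')
ax1.legend(loc='upper left')
ax2.plot(history.history['loss'], label='train')
ax2.plot(history.history['val_loss'], label='test')
ax2.set_title('model loss')
ax2.set_xlabel('epoch')
ax2.set_ylabel('loss')
ax2.legend(loc='upper left')
plt.tight_layout()
plt.show()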

import numpy as np

accy = history.history['acc']
np_accy = np.array(accy)
np.savetxt('save_acc.txt', np_accy)
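Saving only the accuracy column loses the other three curves. Since history.history is just a dict of equal-length lists, the whole record can be dumped in one go, for example as CSV; the file name below is an arbitrary choice, not something produced by the original script.

# Sketch: persist the full training history so the curves can be
# re-plotted later without retraining ('training_history.csv' is arbitrary)
import csv

keys = sorted(history.history.keys())   # e.g. acc, loss, val_acc, val_loss
with open('training_history.csv', 'w') as f:
    writer = csv.writer(f)
    writer.writerow(['epoch'] + keys)
    for epoch, row in enumerate(zip(*[history.history[k] for k in keys])):
        writer.writerow([epoch] + list(row))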
