Keras transfer learning: change the VGG16 output layer and retrain with ImageNet weights

Source: Internet
Author: User
Tags: shuffle, keras

Transfer learning runs an off-the-shelf network on your own data: keep the weights of every layer except the output layer, change the output layer to match the number of classes in your dataset, and then train the network starting from the existing weights.
This article uses Keras 2.1.5 and VGG16 as an example. Import the necessary libraries:

from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
from keras.models import Model
from keras import initializers
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.applications.vgg16 import VGG16
Set the input image augmentation method
Here only rescale is set for the train/test data: the image matrix as a whole is divided by a constant.
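As a quick illustration (plain Python, no Keras required; `rescale_pixels` is a hypothetical helper, not a Keras function), `rescale=1./255` simply maps raw 8-bit pixel values into the range [0, 1]:

```python
# Illustration only: what rescale=1./255 does to raw 8-bit pixel values.
def rescale_pixels(pixels, factor=1.0 / 255):
    """Multiply every pixel value by the rescale factor."""
    return [p * factor for p in pixels]

row = [0, 128, 255]          # raw uint8 intensities
print(rescale_pixels(row))   # values now lie in [0.0, 1.0]
```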
# Prepare data augmentation configuration
train_datagen = ImageDataGenerator(
    rescale=1./255,
#     shear_range=0.2,
#     zoom_range=0.2,
#     horizontal_flip=True
    )

test_datagen = ImageDataGenerator(rescale=1./255)
Set the image size
Because all of VGG16's network structure is kept (except the output layer), the input must match what VGGNet expects.
# The input is the same as the original network
input_shape = (224, 224, 3)
Set the image path
Set the image path: the directory parameter is the category root directory, and each subdirectory under directory holds the images of one category. The subdirectory folder name is the class name. The classes that are read are sorted alphabetically by category directory name.
train_generator = train_datagen.flow_from_directory(
    directory = './data/train/',
    target_size = input_shape[:-1],
    color_mode = 'rgb',
    classes = None,
    class_mode = 'categorical',
    batch_size = 10,
    shuffle = True)

test_generator = test_datagen.flow_from_directory(
    directory = './data/test/',
    target_size = input_shape[:-1],
    batch_size = 10,
    class_mode = 'categorical')
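`flow_from_directory` assigns class indices by sorting the subdirectory names alphabetically. The hypothetical sketch below reproduces that mapping with only the standard library, so you can predict which index each class receives (the directory names here are made up for the demonstration):

```python
import os
import tempfile

def class_indices_for(directory):
    """Mimic how flow_from_directory maps subdirectory names to
    class indices: sort subdirectory names alphabetically, then enumerate."""
    classes = sorted(
        d for d in os.listdir(directory)
        if os.path.isdir(os.path.join(directory, d))
    )
    return {name: idx for idx, name in enumerate(classes)}

# Hypothetical layout: <root>/<class_name>/<images...>
root = tempfile.mkdtemp()
for name in ["dog", "cat", "bird"]:
    os.makedirs(os.path.join(root, name))

print(class_indices_for(root))  # {'bird': 0, 'cat': 1, 'dog': 2}
```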
Load the VGG16 network
input_shape: image input dimensions
include_top = True: retain the fully connected layers
classes = 10: number of categories (the number of our own classes)
weights = None: do not load any weights
# Build the VGG16 network, load no weights, and change the number of classes in the output layer.
# include_top = True, load the whole network
# Set the new output classes
# weights = None, load no weights
base_model = VGG16(input_shape = input_shape,
                   include_top = True,
                   classes = 10,
                   weights = None
                   )
print('Model loaded.')
Change the name of the last layer
base_model.layers[-1].name = 'pred'
View the initialization method of the last layer
base_model.layers[-1].kernel_initializer.get_config()

This gives:

{'distribution': 'uniform', 'mode': 'fan_avg', 'scale': 1.0, 'seed': None}
Change the weight initialization method of the last layer
base_model.layers[-1].kernel_initializer = initializers.glorot_normal()
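Both glorot_uniform (the default shown in the config above) and glorot_normal scale their draws by fan_in + fan_out. A plain-Python sketch of the standard Glorot formulas (the helper names are made up; this is not Keras source code):

```python
import math

def glorot_uniform_limit(fan_in, fan_out):
    """Glorot/Xavier uniform: weights are drawn from U(-limit, limit)."""
    return math.sqrt(6.0 / (fan_in + fan_out))

def glorot_normal_stddev(fan_in, fan_out):
    """Glorot/Xavier normal: weights are drawn from N(0, stddev^2)."""
    return math.sqrt(2.0 / (fan_in + fan_out))

# The last VGG16 layer here maps 4096 features to 10 classes.
print(glorot_uniform_limit(4096, 10))
print(glorot_normal_stddev(4096, 10))
```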
Load the weights VGG16 obtained from ImageNet training; be sure to use by_name = True
Weights are loaded according to layer name (a layer with no matching name will not have weights loaded), which is why we had to change the name of the last layer: it is the only way this step loads the weights of every layer except the last one.
base_model.load_weights('./vgg16_weights_tf_dim_ordering_tf_kernels.h5', by_name = True)
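Conceptually, by_name=True matches saved weights to layers by name and silently skips any layer whose name has no match. A simplified sketch of that matching rule using plain dicts (this is not the real HDF5 loading code, and the layer names below are illustrative):

```python
def load_weights_by_name(model_layers, saved_weights):
    """Copy weights into layers whose names appear in the saved file;
    layers with no matching name keep their (re)initialized weights."""
    loaded, skipped = [], []
    for name in model_layers:
        if name in saved_weights:
            model_layers[name] = saved_weights[name]
            loaded.append(name)
        else:
            skipped.append(name)
    return loaded, skipped

# Our model renamed its output layer to 'pred', so the saved
# 'predictions' weights (1000 ImageNet classes) are never loaded into it.
model = {"block1_conv1": None, "fc2": None, "pred": None}
saved = {"block1_conv1": "w1", "fc2": "w2", "predictions": "w3"}
print(load_weights_by_name(model, saved))  # (['block1_conv1', 'fc2'], ['pred'])
```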
Compile the network
# Compile the model with an SGD/momentum optimizer
# and a very slow learning rate.
sgd = optimizers.SGD(lr=0.01, decay=1e-4, momentum=0.9, nesterov=True)

base_model.compile(loss = 'categorical_crossentropy',
              optimizer = sgd,
              metrics=['accuracy'])
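For these arguments, Keras' SGD applies, per parameter, a time-decayed learning rate plus momentum with an optional Nesterov lookahead. A plain-Python sketch of that update for a single scalar parameter (the function name is hypothetical, and this approximates rather than reproduces the Keras implementation):

```python
def sgd_step(p, v, grad, iteration, lr=0.01, decay=1e-4,
             momentum=0.9, nesterov=True):
    """One SGD update for a scalar parameter: time-based lr decay,
    classical momentum, and an optional Nesterov lookahead step."""
    lr_t = lr / (1.0 + decay * iteration)   # decayed learning rate
    v = momentum * v - lr_t * grad          # velocity update
    if nesterov:
        p = p + momentum * v - lr_t * grad  # lookahead step
    else:
        p = p + v
    return p, v

# Minimize f(p) = p^2 (gradient 2p) from p = 1.0.
p, v = 1.0, 0.0
for i in range(3):
    p, v = sgd_step(p, v, grad=2.0 * p, iteration=i)
print(p)  # the parameter moves toward the minimum at 0
```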
Start training
Weights are saved automatically during training, and training stops early when required.
# Fine-tune the model

check = ModelCheckpoint('./weights.{epoch:02d}.h5',
                monitor='val_loss',
                verbose=0,
                save_best_only=False,
                save_weights_only=False,
                mode='auto',
                period=1)

stop = EarlyStopping(monitor='val_loss',
              min_delta=0,
              patience=0,
              verbose=0,
              mode='auto')
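With patience=0, EarlyStopping halts as soon as the monitored value fails to improve by more than min_delta. A minimal sketch of that stopping rule for a loss-like metric (the function is hypothetical and only illustrates the rule, not the Keras callback internals):

```python
def early_stop_epoch(val_losses, min_delta=0.0, patience=0):
    """Return the 0-based epoch after which training would stop, or None.
    Training stops once val_loss has failed to improve by more than
    min_delta for more than `patience` consecutive epochs."""
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best - min_delta:
            best = loss   # improvement: reset the counter
            wait = 0
        else:
            wait += 1
            if wait > patience:
                return epoch
    return None

print(early_stop_epoch([0.9, 0.7, 0.71, 0.72]))  # stops at epoch 2
```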

base_model.fit_generator(
    generator = train_generator,
    epochs = 5,
    verbose = 1,
    validation_data = test_generator,
    shuffle = True,
    callbacks = [check, stop]
    )
Save the network
base_model.save_weights('fine_tuned_net.h5')
