This program demonstrates fine-tuning a pre-trained model on a new dataset. A convolutional network is first trained on the first five MNIST digits [0..4]. Then, to classify the last five digits [5..9], the convolutional (feature) layers are frozen and only the fully connected (classification) layers are fine-tuned.

I. Variable initialization
import datetime

import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils

now = datetime.datetime.now

batch_size = 128
nb_classes = 5
nb_epoch = 5

# input image dimensions
img_rows, img_cols = 28, 28
# number of convolution filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = 2
# convolution kernel size
kernel_size = (3, 3)

input_shape = (img_rows, img_cols, 1)
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# digits 0..4
x_train_lt5 = x_train[y_train < 5]
y_train_lt5 = y_train[y_train < 5]
x_test_lt5 = x_test[y_test < 5]
y_test_lt5 = y_test[y_test < 5]

# digits 5..9, with labels shifted down to the range 0..4
x_train_gte5 = x_train[y_train >= 5]
y_train_gte5 = y_train[y_train >= 5] - 5
x_test_gte5 = x_test[y_test >= 5]
y_test_gte5 = y_test[y_test >= 5] - 5
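The splitting above relies on NumPy boolean indexing. A minimal self-contained sketch, with toy arrays standing in for the real MNIST data, shows how the mask selects matching rows and how subtracting 5 re-bases the labels:

```python
import numpy as np

# Toy stand-ins for the MNIST labels and images (hypothetical data, for illustration).
y = np.array([0, 7, 3, 9, 5, 2])
x = np.arange(6 * 4).reshape(6, 4)  # six "images" of four pixels each

mask = y >= 5         # boolean mask: True where the digit is 5..9
x_gte5 = x[mask]      # keeps only the rows of x where the mask is True
y_gte5 = y[mask] - 5  # shifts labels 5..9 down to 0..4

print(y_gte5)  # -> [2 4 0]
```

The same mask must be used for both the images and the labels so that the rows stay paired.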
II. The model's training function
def train_model(model, train, test, nb_classes):
    # train[0] holds the images, train[1] the labels
    x_train = train[0].reshape((train[0].shape[0],) + input_shape)  # 1D + 3D = 4D
    x_test = test[0].reshape((test[0].shape[0],) + input_shape)
    x_train = x_train.astype('float32')
    x_test = x_test.astype('float32')
    x_train /= 255
    x_test /= 255
    print('x_train shape:', x_train.shape)
    print(x_train.shape[0], 'train samples')
    print(x_test.shape[0], 'test samples')

    y_train = np_utils.to_categorical(train[1], nb_classes)
    y_test = np_utils.to_categorical(test[1], nb_classes)

    model.compile(loss='categorical_crossentropy',
                  optimizer='adadelta',
                  metrics=['accuracy'])

    t = now()
    model.fit(x_train, y_train,
              batch_size=batch_size, nb_epoch=nb_epoch,
              verbose=1,
              validation_data=(x_test, y_test))
    print('Training time: %s' % (now() - t))

    score = model.evaluate(x_test, y_test, verbose=0)
    print('Test score:', score[0])
    print('Test accuracy:', score[1])
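Inside train_model, np_utils.to_categorical converts integer labels into one-hot vectors. The equivalent operation can be sketched in plain NumPy using np.eye; this is an illustrative sketch, not Keras's actual implementation:

```python
import numpy as np

def to_categorical_sketch(labels, nb_classes):
    # Row i of the identity matrix is the one-hot vector for class i,
    # so indexing np.eye with the labels yields one one-hot row per label.
    return np.eye(nb_classes, dtype='float32')[labels]

y = np.array([0, 2, 4])
print(to_categorical_sketch(y, 5).astype(int))
# -> [[1 0 0 0 0]
#     [0 0 1 0 0]
#     [0 0 0 0 1]]
```

One-hot targets are what the categorical_crossentropy loss used above expects.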
III. Build the model: convolutional layers (feature layers) plus fully connected layers (classification layers)
feature_layers = [
    Convolution2D(nb_filters, kernel_size,
                  padding='valid',
                  input_shape=input_shape),
    Activation('relu'),
    Convolution2D(nb_filters, kernel_size),
    Activation('relu'),
    MaxPooling2D(pool_size=(pool_size, pool_size)),
    Dropout(0.25),
    Flatten(),
]

classification_layers = [
    Dense(128),
    Activation('relu'),
    Dropout(0.5),
    Dense(nb_classes),
    Activation('softmax')
]

model = Sequential(feature_layers + classification_layers)
IV. Pre-train the model
train_model(model,
            (x_train_lt5, y_train_lt5),
            (x_test_lt5, y_test_lt5), nb_classes)
V. Freeze the feature layers of the pre-trained model
for l in feature_layers:
    l.trainable = False
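What trainable = False buys us can be illustrated without Keras: a gradient step that honors the flag updates only the unfrozen layers, so the pre-trained feature weights survive fine-tuning. This is a minimal sketch with a hypothetical Layer class, not Keras's internals:

```python
import numpy as np

class Layer:
    """Minimal stand-in for a Keras layer: one weight matrix plus a trainable flag."""
    def __init__(self, n_in, n_out):
        self.w = np.random.randn(n_in, n_out)
        self.trainable = True

feat = [Layer(4, 8), Layer(8, 8)]  # pretend these are the pre-trained feature layers
clf = [Layer(8, 5)]                # and this is the classification head

for l in feat:
    l.trainable = False            # freeze the feature extractor

def sgd_step(layers, grads, lr=0.1):
    # Frozen layers are skipped, so their pre-trained weights are preserved.
    for layer, g in zip(layers, grads):
        if layer.trainable:
            layer.w -= lr * g

layers = feat + clf
before = [l.w.copy() for l in layers]
sgd_step(layers, [np.ones_like(l.w) for l in layers])

changed = [not np.allclose(b, l.w) for b, l in zip(before, layers)]
print(changed)  # -> [False, False, True]
```

Only the classification head moves; this is also why fine-tuning the last five digits is fast, since gradients for the frozen layers need not be applied.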
VI. Fine-tune the classification layers
train_model(model,
            (x_train_gte5, y_train_gte5),
            (x_test_gte5, y_test_gte5), nb_classes)
Source Address:
https://github.com/Zheng-Wenkai/Keras_Demo