What is transfer learning
In deep learning, transfer learning means taking a model trained on problem A and adapting it to a new problem B with only minor adjustments. In practice, problem A is usually the one with a large, well-curated data set on which the model was originally trained, while problem B has much less data. How much of the model to adjust depends on the situation: you can keep the weights of the first few convolutional layers so that the low-level features they extract are preserved, or you can keep the whole model and replace only its fully connected (FC) layer for the new task, as sketched below.
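As an illustration only (this is not the code used later in this article), the following minimal sketch shows the second strategy using tf.keras: reuse a pre-trained Inception-v3 base, freeze its convolutional weights, and attach a new classification head. It assumes a TensorFlow version that ships tf.keras and a network connection to fetch the ImageNet weights; the 5-class head is an assumption matching the flower task used below.

    import tensorflow as tf

    # Pre-trained convolutional base; include_top=False drops the original FC layer.
    base = tf.keras.applications.InceptionV3(include_top=False, pooling='avg',
                                             weights='imagenet')
    base.trainable = False  # keep the pre-trained low-level feature extractors fixed

    # New fully connected head for the new task (5 flower classes assumed).
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(5, activation='softmax'),
    ])
    model.compile(optimizer='sgd',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    # model.fit(...) would then train only the new head.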
The role of transfer learning
So why can a model be transferred between different tasks at all? As mentioned above, the model being transferred is usually trained on a very large number of samples. For example, the Inception-v3 model provided by Google is trained on the ImageNet data set, which contains 1.2 million labeled images. In practical applications it is difficult to collect that many samples, and collecting them consumes a great deal of manual effort (in fact, when deep learning is used to solve real problems, most of the time is often spent not on training but on labeling the data). So, generally speaking, the amount of data available for problem B is small.
Since the model solved problem A well when trained on that large sample, there is good reason to believe that its trained weights can also perform feature extraction well (at least in the first few layers). And since such a model already exists, we might as well take it and reuse it.
Therefore, transfer learning has the following advantages:
Shorter training time, faster convergence, and more accurate weight parameters.
In general, however, if task B has a sufficient amount of data, the transferred model will be somewhat less effective than a model trained from scratch; even so, its lower-layer weights can still be reused as initial values and retrained.
TensorFlow implementation of Inception-v3 transfer learning
The following example uses the Inception-v3 model provided by Google to classify flower photos. It keeps all of the convolutional layers of Inception-v3 and modifies only the final fully connected layer to fit the new classification task.
import glob
import os.path
import random
import numpy as np
import tensorflow as tf
from tensorflow.python.platform import gfile

# Model and sample path settings.
# Number of nodes in the Inception-v3 bottleneck layer.
BOTTLENECK_TENSOR_SIZE = 2048
# Name of the bottleneck tensor.
BOTTLENECK_TENSOR_NAME = 'pool_3/_reshape:0'
# Name of the image input tensor.
JPEG_DATA_TENSOR_NAME = 'DecodeJpeg/contents:0'
# Path of the Inception-v3 model.
MODEL_DIR = './datasets/inception_dec_2015'
# Inception-v3 model file.
MODEL_FILE = 'tensorflow_inception_graph.pb'
# Path where the cached feature vectors are saved.
CACHE_DIR = './datasets/bottleneck'
# Input data path.
INPUT_DATA = './datasets/flower_photos'
# Percentage of validation data.
VALIDATION_PERCENTAGE = 10
# Percentage of test data.
TEST_PERCENTAGE = 10

# Neural network parameters.
LEARNING_RATE = 0.01
STEPS = 4000
BATCH = 100


# Lists all images in the samples and splits them into training, validation and test sets.
def create_image_lists(testing_percentage, validation_percentage):
    result = {}
    sub_dirs = [x[0] for x in os.walk(INPUT_DATA)]
    is_root_dir = True
    for sub_dir in sub_dirs:
        if is_root_dir:
            is_root_dir = False
            continue

        extensions = ['jpg', 'jpeg', 'JPG', 'JPEG']
        file_list = []
        dir_name = os.path.basename(sub_dir)
        for extension in extensions:
            file_glob = os.path.join(INPUT_DATA, dir_name, '*.' + extension)
            file_list.extend(glob.glob(file_glob))
        if not file_list:
            continue

        label_name = dir_name.lower()

        # Initialize the sets for the current label.
        training_images = []
        testing_images = []
        validation_images = []
        for file_name in file_list:
            base_name = os.path.basename(file_name)
            # Randomly split the data.
            chance = np.random.randint(100)
            if chance < validation_percentage:
                validation_images.append(base_name)
            elif chance < (testing_percentage + validation_percentage):
                testing_images.append(base_name)
            else:
                training_images.append(base_name)

        result[label_name] = {
            'dir': dir_name,
            'training': training_images,
            'testing': testing_images,
            'validation': validation_images,
        }
    return result


# Gets the address of an image given its class name, data set and image index.
def get_image_path(image_lists, image_dir, label_name, index, category):
    label_lists = image_lists[label_name]
    category_list = label_lists[category]
    mod_index = index % len(category_list)
    base_name = category_list[mod_index]
    sub_dir = label_lists['dir']
    full_path = os.path.join(image_dir, sub_dir, base_name)
    return full_path


# Gets the file address of the feature vector produced by the Inception-v3 model.
def get_bottleneck_path(image_lists, label_name, index, category):
    return get_image_path(image_lists, CACHE_DIR, label_name, index, category) + '.txt'


# Uses the loaded, pre-trained Inception-v3 model to process one image and obtain its feature vector.
def run_bottleneck_on_image(sess, image_data, image_data_tensor, bottleneck_tensor):
    bottleneck_values = sess.run(bottleneck_tensor, {image_data_tensor: image_data})
    bottleneck_values = np.squeeze(bottleneck_values)
    return bottleneck_values


# First tries to find a feature vector that has already been computed and saved;
# if none is found, the feature vector is computed and then saved to a file.
def get_or_create_bottleneck(sess, image_lists, label_name, index, category,
                             jpeg_data_tensor, bottleneck_tensor):
    label_lists = image_lists[label_name]
    sub_dir = label_lists['dir']
    sub_dir_path = os.path.join(CACHE_DIR, sub_dir)
    if not os.path.exists(sub_dir_path):
        os.makedirs(sub_dir_path)
    bottleneck_path = get_bottleneck_path(image_lists, label_name, index, category)
    if not os.path.exists(bottleneck_path):
        image_path = get_image_path(image_lists, INPUT_DATA, label_name, index, category)
        image_data = gfile.FastGFile(image_path, 'rb').read()
        bottleneck_values = run_bottleneck_on_image(sess, image_data, jpeg_data_tensor, bottleneck_tensor)
        bottleneck_string = ','.join(str(x) for x in bottleneck_values)
        with open(bottleneck_path, 'w') as bottleneck_file:
            bottleneck_file.write(bottleneck_string)
    else:
        with open(bottleneck_path, 'r') as bottleneck_file:
            bottleneck_string = bottleneck_file.read()
        bottleneck_values = [float(x) for x in bottleneck_string.split(',')]
    return bottleneck_values


# Randomly gets a batch of images as training data.
def get_random_cached_bottlenecks(sess, n_classes, image_lists, how_many, category,
                                  jpeg_data_tensor, bottleneck_tensor):
    bottlenecks = []
    ground_truths = []
    for _ in range(how_many):
        label_index = random.randrange(n_classes)
        label_name = list(image_lists.keys())[label_index]
        image_index = random.randrange(65536)
        bottleneck = get_or_create_bottleneck(
            sess, image_lists, label_name, image_index, category, jpeg_data_tensor, bottleneck_tensor)
        ground_truth = np.zeros(n_classes, dtype=np.float32)
        ground_truth[label_index] = 1.0
        bottlenecks.append(bottleneck)
        ground_truths.append(ground_truth)
    return bottlenecks, ground_truths


# Gets all the test data, on which the accuracy will be computed.
def get_test_bottlenecks(sess, image_lists, n_classes, jpeg_data_tensor, bottleneck_tensor):
    bottlenecks = []
    ground_truths = []
    label_name_list = list(image_lists.keys())
    for label_index, label_name in enumerate(label_name_list):
        category = 'testing'
        for index, unused_base_name in enumerate(image_lists[label_name][category]):
            bottleneck = get_or_create_bottleneck(
                sess, image_lists, label_name, index, category, jpeg_data_tensor, bottleneck_tensor)
            ground_truth = np.zeros(n_classes, dtype=np.float32)
            ground_truth[label_index] = 1.0
            bottlenecks.append(bottleneck)
            ground_truths.append(ground_truth)
    return bottlenecks, ground_truths


def main():
    image_lists = create_image_lists(TEST_PERCENTAGE, VALIDATION_PERCENTAGE)
    n_classes = len(image_lists.keys())

    # Read the pre-trained Inception-v3 model.
    with gfile.FastGFile(os.path.join(MODEL_DIR, MODEL_FILE), 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    bottleneck_tensor, jpeg_data_tensor = tf.import_graph_def(
        graph_def, return_elements=[BOTTLENECK_TENSOR_NAME, JPEG_DATA_TENSOR_NAME])

    # Define the inputs of the new neural network.
    bottleneck_input = tf.placeholder(
        tf.float32, [None, BOTTLENECK_TENSOR_SIZE], name='BottleneckInputPlaceholder')
    ground_truth_input = tf.placeholder(tf.float32, [None, n_classes], name='GroundTruthInput')

    # Define a fully connected layer.
    with tf.name_scope('final_training_ops'):
        weights = tf.Variable(tf.truncated_normal([BOTTLENECK_TENSOR_SIZE, n_classes], stddev=0.001))
        biases = tf.Variable(tf.zeros([n_classes]))
        logits = tf.matmul(bottleneck_input, weights) + biases
        final_tensor = tf.nn.softmax(logits)

    # Define the cross-entropy loss function.
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=ground_truth_input)
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    train_step = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(cross_entropy_mean)

    # Compute the accuracy.
    with tf.name_scope('evaluation'):
        correct_prediction = tf.equal(tf.argmax(final_tensor, 1), tf.argmax(ground_truth_input, 1))
        evaluation_step = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    with tf.Session() as sess:
        init = tf.global_variables_initializer()
        sess.run(init)

        # Training process.
        for i in range(STEPS):
            train_bottlenecks, train_ground_truth = get_random_cached_bottlenecks(
                sess, n_classes, image_lists, BATCH, 'training', jpeg_data_tensor, bottleneck_tensor)
            sess.run(train_step, feed_dict={
                bottleneck_input: train_bottlenecks, ground_truth_input: train_ground_truth})

            if i % 100 == 0 or i + 1 == STEPS:
                validation_bottlenecks, validation_ground_truth = get_random_cached_bottlenecks(
                    sess, n_classes, image_lists, BATCH, 'validation', jpeg_data_tensor, bottleneck_tensor)
                validation_accuracy = sess.run(evaluation_step, feed_dict={
                    bottleneck_input: validation_bottlenecks,
                    ground_truth_input: validation_ground_truth})
                print('Step %d: Validation accuracy on random sampled %d examples = %.1f%%' %
                      (i, BATCH, validation_accuracy * 100))

        # Test the accuracy on the final test data.
        test_bottlenecks, test_ground_truth = get_test_bottlenecks(
            sess, image_lists, n_classes, jpeg_data_tensor, bottleneck_tensor)
        test_accuracy = sess.run(evaluation_step, feed_dict={
            bottleneck_input: test_bottlenecks, ground_truth_input: test_ground_truth})
        print('Final test accuracy = %.1f%%' % (test_accuracy * 100))


if __name__ == '__main__':
    main()
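The script above only reports accuracy and does not expose the trained layer for later use. If you wanted to classify a single new image after training, a helper along the following lines could be called inside main()'s session once training completes. This is a hypothetical addition, not part of the original code; it simply reuses the functions and tensors defined above.

    def classify_image(sess, image_path, jpeg_data_tensor, bottleneck_tensor,
                       bottleneck_input, final_tensor):
        # Compute the 2048-dimensional bottleneck vector for the new image.
        image_data = gfile.FastGFile(image_path, 'rb').read()
        bottleneck = run_bottleneck_on_image(sess, image_data, jpeg_data_tensor, bottleneck_tensor)
        # Feed it through the newly trained fully connected layer.
        probabilities = sess.run(final_tensor, feed_dict={bottleneck_input: [bottleneck]})
        return probabilities  # softmax probabilities over the flower classes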
Output result:
...
Step 1000: Validation accuracy on random sampled examples = 92%
...
Step 2700: Validation accuracy on random sampled examples = 94%
...
Step 3999: Validation accuracy on random sampled examples = 94%
Final test accuracy = 92.7%
From these results we can see that the model converges and reaches good accuracy within a very short time. Finally, click here to download the entire project; because of the upload size limit, the model and data set are not included and need to be downloaded separately, following the instructions provided in the project folder.
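As noted, the Inception-v3 graph file and the flower_photos data set have to be downloaded separately. The sketch below is one possible way to fetch and unpack them into the paths the script expects; it is not part of the original project, and the URLs are the commonly used TensorFlow example download locations, so verify them before relying on this.

    import os
    import tarfile
    import zipfile
    import urllib.request

    def fetch(url, filename):
        # Download a file only if it is not already present.
        if not os.path.exists(filename):
            urllib.request.urlretrieve(url, filename)

    os.makedirs('./datasets', exist_ok=True)

    # Flower photos (extracting creates ./datasets/flower_photos).
    fetch('http://download.tensorflow.org/example_images/flower_photos.tgz',
          'flower_photos.tgz')
    with tarfile.open('flower_photos.tgz') as tar:
        tar.extractall('./datasets')

    # Pre-trained Inception-v3 graph (contains tensorflow_inception_graph.pb).
    fetch('https://storage.googleapis.com/download.tensorflow.org/models/inception_dec_2015.zip',
          'inception_dec_2015.zip')
    with zipfile.ZipFile('inception_dec_2015.zip') as zf:
        zf.extractall('./datasets/inception_dec_2015')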