TensorFlow Official Tutorial: Re-training the Final Layer of a Model for New Classification Categories
This article mainly covers the following content: following the TensorFlow official tutorial, we re-train the final layer of the Inception model so it can handle new classification categories, using the flowers dataset.
First, before you start training, you need to prepare the dataset. Download it as follows:
cd ~
curl -O http://download.tensorflow.org/example_images/flower_photos.tgz
tar xzf flower_photos.tgz
As you can see, the flower_photos folder contains 5 categories.
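The retraining script derives the class labels from the subfolder names (for flower_photos these are daisy, dandelion, roses, sunflowers, and tulips). A minimal sketch of that idea, demonstrated on a throwaway directory so it runs anywhere; `list_classes` is a hypothetical helper, not part of the official script:

```python
import os
import tempfile

def list_classes(image_dir):
    """Return the sorted class labels, one per subfolder of image_dir."""
    return sorted(
        d for d in os.listdir(image_dir)
        if os.path.isdir(os.path.join(image_dir, d))
    )

# Demonstrate with a temporary directory mimicking flower_photos.
demo_dir = tempfile.mkdtemp()
for name in ["daisy", "dandelion", "roses", "sunflowers", "tulips"]:
    os.makedirs(os.path.join(demo_dir, name))

print(list_classes(demo_dir))
```

With the real dataset you would pass `~/flower_photos` instead of the temporary directory.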
Re-train using the retraining tool provided by TensorFlow, building and running it from the root of the TensorFlow source tree:
sudo bazel build tensorflow/examples/image_retraining:retrain
bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/flower_photos
Alternatively, we can run the corresponding Python script directly:
python retrain.py --image_dir ~/flower_photos
The script loads the pre-trained Inception V3 model, removes its last layer, and trains a new final layer on the flowers dataset. This is the idea of transfer learning.
Note: in its first phase, the script analyzes all the images and computes a bottleneck value for each one. "Bottleneck" is an informal term for the layer just before the final output layer that actually performs the classification; the output of this penultimate layer is a summary rich enough to distinguish the categories we want to classify. Since every image is reused many times during training, and computing each bottleneck is expensive, caching these values on disk avoids repeated computation and speeds up the process.
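The caching idea can be sketched as follows. This is not the official script's code: `get_bottleneck` and `fake_inference` are hypothetical names, and the 2048-element vector stands in for Inception V3's penultimate-layer output.

```python
import os
import tempfile
import numpy as np

def get_bottleneck(image_path, cache_dir, compute_fn):
    """Return the bottleneck vector for an image, computing it at most once.

    compute_fn is a stand-in for running the image through the network
    up to the penultimate layer.
    """
    cache_path = os.path.join(cache_dir, os.path.basename(image_path) + ".npy")
    if os.path.exists(cache_path):
        return np.load(cache_path)       # cache hit: no recomputation
    bottleneck = compute_fn(image_path)  # expensive forward pass
    np.save(cache_path, bottleneck)
    return bottleneck

# Count how often the expensive computation actually runs.
calls = []
def fake_inference(path):
    calls.append(path)
    return np.ones(2048, dtype=np.float32)  # Inception V3 bottleneck width

cache = tempfile.mkdtemp()
for _ in range(3):  # the same image is "reused" three times during training
    vec = get_bottleneck("rose_001.jpg", cache, fake_inference)

print(len(calls))  # the forward pass ran only once
```

The three lookups trigger only one forward pass; the other two are served from disk, which is exactly why the first phase of the script is slow and later epochs are fast.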
# retrain.py code outline:
# First, read the network parameters and the pre-trained model, and split the
# images into training, test, and validation sets according to the configured
# proportions.
# Next, add a new layer (the new final layer): the output of the previous
# model's penultimate layer is the input to this layer, and weights and biases
# are created according to the class labels. A cross-entropy loss is defined
# on the final output, along with a formula for computing the accuracy.
# After initializing the network, compute the bottleneck values (the
# penultimate layer's outputs) and save them to disk, avoiding repeated
# computation and speeding up convergence.
# Finally, train the network as usual, e.g. running the training loop inside
# a tf.Session() until the model converges.
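What the new final layer computes can be illustrated with a small NumPy sketch (independent of TensorFlow so it runs stand-alone). It assumes 2048-dimensional bottleneck vectors, matching Inception V3's penultimate layer, and the 5 flower classes; the random batch and the function name `final_layer` are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

num_classes = 5         # the five flower categories
bottleneck_size = 2048  # width of Inception V3's penultimate layer

# The newly added final layer: a weight matrix and biases, one output per class.
W = rng.normal(0.0, 0.001, size=(bottleneck_size, num_classes))
b = np.zeros(num_classes)

def final_layer(bottlenecks, labels):
    """Forward pass of the retrained layer: logits, softmax probabilities,
    cross-entropy loss, and accuracy (labels are integer class indices)."""
    logits = bottlenecks @ W + b
    shifted = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    loss = -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))
    accuracy = np.mean(probs.argmax(axis=1) == labels)
    return loss, accuracy

batch = rng.normal(size=(8, bottleneck_size))        # stand-in bottleneck vectors
labels = rng.integers(0, num_classes, size=8)
loss, acc = final_layer(batch, labels)
print(loss, acc)
```

With near-zero initial weights the predictions are close to uniform, so the initial loss sits near ln(5) ≈ 1.609; training would then adjust W and b to push it down.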
Visualizing the training results: start TensorBoard, then point your web browser at localhost:6006.
tensorboard --logdir /tmp/retrain_logs
Performing classification tests: I have not managed to get this step running yet; if you succeed, please leave a comment.