Getting Started with TensorFlow: Using tf.train.Saver() to Save a Model

This article introduces how to use tf.train.Saver() to save a model in TensorFlow, shared here as a reference for getting started. Let's take a look.

A few notes on saving models

saver = tf.train.Saver(max_to_keep=3)

When defining the saver, you usually set the maximum number of checkpoints to keep. In general, if the model itself is large, you have to consider disk space. If you need to fine-tune on top of an already well-trained model, save as many checkpoints as you can, because the later fine-tuning does not necessarily have to start from the best checkpoint; starting from the best one may cause it to overfit right away. But if you save too many, disk space becomes a problem. If you only want to keep the best model, one approach is to compute the accuracy or F1 score on the validation set every time you reach a certain number of steps, and save a new checkpoint only if the result is better than the previous best; otherwise there is no need to save.
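A minimal sketch of that "keep only the best" loop might look like this; train_op and eval_metric are hypothetical placeholders for your own training step and validation metric:

# Hypothetical sketch of the "keep only the best checkpoint" idea above.
# `train_op` and `eval_metric` are placeholders for ops you have already built.
def train_and_keep_best(sess, saver, train_op, eval_metric,
                        num_steps=100000, eval_every=1000,
                        ckpt_path='./ckpt/best-model.ckpt'):
    best = float('-inf')
    for step in range(1, num_steps + 1):
        sess.run(train_op)
        if step % eval_every == 0:
            metric = sess.run(eval_metric)   # e.g. accuracy or F1 on the dev set
            if metric > best:                # save only when the model improves
                best = metric
                saver.save(sess, ckpt_path, global_step=step)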

If you want to fuse models saved at different epochs, 3 to 5 checkpoints are enough. Suppose the fused model is M and the best single model is M_best; the fusion can make M perform better than M_best. But if M is then fused again with models of other architectures, the result is usually not as good as fusing with M_best, because M is essentially the result of an averaging operation, which weakens the individual "character" of the model.
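As a hedged sketch, one simple form of fusion is to average the weights of a few checkpoints of the same model; the checkpoint paths below are illustrative:

import numpy as np
import tensorflow as tf

# Illustrative checkpoint paths for a few epochs of the same model.
ckpt_paths = ['./ckpt/test-model.ckpt-1',
              './ckpt/test-model.ckpt-2',
              './ckpt/test-model.ckpt-3']

readers = [tf.train.NewCheckpointReader(p) for p in ckpt_paths]
var_names = readers[0].get_variable_to_shape_map().keys()

# Average every variable across the checkpoints.
averaged = {name: np.mean([r.get_tensor(name) for r in readers], axis=0)
            for name in var_names}

# Load the averaged values into the variables of the current graph
# (this assumes the graph defining these variables has already been built).
with tf.Session() as sess:
    for var in tf.global_variables():
        if var.op.name in averaged:
            var.load(averaged[var.op.name], sess)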

But there is a newer way of fusing: adjust the learning rate to obtain several local optima. That is, when the loss has gone down, save a checkpoint, then switch to a large learning rate to escape and look for the next local optimum, and finally fuse those checkpoints. I have not tried it; a single model should certainly improve, but whether the problem above (no gain when fusing further with other models) also appears here, I do not know.
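A rough sketch of such a schedule, assuming a hypothetical build_train_op helper that returns a training op driven by a learning-rate placeholder, could look like this:

import tensorflow as tf

# Rough sketch: decay the learning rate within each cycle, save a checkpoint
# at the end of the cycle (a local optimum), then jump the rate back up.
# `build_train_op` and all the numbers here are illustrative assumptions.
lr = tf.placeholder(tf.float32, shape=[], name='learning_rate')
train_op = build_train_op(lr)                  # assumed to exist in your code
saver = tf.train.Saver(max_to_keep=10)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for cycle in range(5):                     # 5 cycles -> 5 snapshot checkpoints
        for step in range(10000):
            # decay from 0.1 down to 0.001 within the cycle
            current_lr = 0.1 * (0.01 ** (step / 10000.0))
            sess.run(train_op, feed_dict={lr: current_lr})
        saver.save(sess, './ckpt/snapshot.ckpt', global_step=cycle)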

How to use tf.train.Saver() to save a model

I kept getting errors here, mainly because of annoying encoding problems, so make sure the file path absolutely does not contain any Chinese characters.

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

# Create some variables.
v1 = tf.Variable([1.0, 2.3], name="v1")
v2 = tf.Variable(55.5, name="v2")

# Add an op to initialize the variables.
init_op = tf.global_variables_initializer()

# Add ops to save and restore all the variables.
saver = tf.train.Saver()
ckpt_path = './ckpt/test-model.ckpt'

# Later, launch the model, initialize the variables, do some work,
# and save the variables to disk.
sess.run(init_op)
save_path = saver.save(sess, ckpt_path, global_step=1)
print("Model saved in file: %s" % save_path)

Model saved in file: ./ckpt/test-model.ckpt-1

Note that after the model has been saved, you should restart the kernel before running the import code below. Otherwise, defining "v1" a second time would give the new variable a different name (such as "v1_1"), which no longer matches the name stored in the checkpoint.
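If you prefer not to restart the kernel, one alternative is to reset the default graph before redefining the variables:

import tensorflow as tf

# Clear the default graph so that defining v1 and v2 again does not create
# duplicates named "v1_1", "v2_1". Any session created against the old graph
# can no longer be used afterwards.
tf.reset_default_graph()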

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

# Create some variables.
v1 = tf.Variable([11.0, 16.3], name="v1")
v2 = tf.Variable(33.5, name="v2")

# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, use the saver to restore variables from disk,
# and do some work with the model.
# Restore variables from disk.
ckpt_path = './ckpt/test-model.ckpt'
saver.restore(sess, ckpt_path + '-' + str(1))
print("Model restored.")
print(sess.run(v1))
print(sess.run(v2))

INFO:tensorflow:Restoring parameters from ./ckpt/test-model.ckpt-1
Model restored.
[ 1.          2.29999995]
55.5

Before importing a model, you must first redefine its variables.

However, you do not need to redefine all of the variables, only the ones you actually need.

In other words, every variable you define must exist in the checkpoint, but you do not have to redefine every variable that the checkpoint contains.

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)

# Create some variables.
v1 = tf.Variable([11.0, 16.3], name="v1")

# Add ops to save and restore all the variables.
saver = tf.train.Saver()

# Later, launch the model, use the saver to restore variables from disk,
# and do some work with the model.
# Restore variables from disk.
ckpt_path = './ckpt/test-model.ckpt'
saver.restore(sess, ckpt_path + '-' + str(1))
print("Model restored.")
print(sess.run(v1))

INFO:tensorflow:Restoring parameters from ./ckpt/test-model.ckpt-1
Model restored.
[ 1.          2.29999995]

tf.train.Saver([variables_to_be_saved]) can be given a list of the variables to save; if no list is passed, it saves all of the current variables. In general, tf.train.Saver can be combined neatly with tf.variable_scope(); for reference, see "Transfer learning: adding new variables to an already saved model and fine-tuning".
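For example, here is a short sketch (with illustrative scope and variable names) of saving only a subset of variables by passing a list to tf.train.Saver:

import tensorflow as tf

# Build two scopes; only the "encoder" variables will be saved.
with tf.variable_scope('encoder'):
    w = tf.get_variable('w', shape=[3, 3], initializer=tf.zeros_initializer())
with tf.variable_scope('classifier'):
    b = tf.get_variable('b', shape=[3], initializer=tf.zeros_initializer())

# Collect the subset via the scope name and pass it to the Saver.
encoder_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='encoder')
saver = tf.train.Saver(encoder_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, './ckpt/encoder-only.ckpt')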
