In the previous article, "TensorFlow load pre-training model and save model", we learned how to use a pre-trained model. Note, however, that with the approach in that article you need at least four files to use the pre-trained model:
checkpoint
mymodel.meta
mymodel.data-00000-of-00001
mymodel.index
This is inconvenient. Is there a way to export a single PB file and use it directly? The answer is yes. The article "TensorFlow load pre-training model and save model" mentioned that the meta file stores the graph structure, while the weights and other parameters are stored in the data file. That is, the graph and the parameter data are stored separately. Put bluntly, there is no weight data in the meta file. It is worth noting, however, that the meta file does hold constants. So all we need to do is convert the parameters in the data file into constants in the graph.

1 Export the model as a single file

1.1 Have the code and train from scratch
TensorFlow provides a utility function, tf.graph_util.convert_variables_to_constants(), to convert variables to constants. The official documentation says:
If you have a trained graph containing Variable ops, it can be convenient to convert them all to Const ops holding the same values. This makes it possible to describe the network fully with a single GraphDef file, and allows the removal of a lot of ops related to loading and saving the variables.
Let's start with a simple example:
import tensorflow as tf

w1 = tf.Variable(20.0, name="w1")
w2 = tf.Variable(30.0, name="w2")
b1 = tf.Variable(2.0, name="bias")
w3 = tf.add(w1, w2)
# Remember to set the name; we need "out" later
out = tf.multiply(w3, b1, name="out")

# Convert the variables to constants and write the network to a file
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Specify the names of the output tensors here
    graph = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["out"])
    tf.train.write_graph(graph, '.', './checkpoint_dir/graph.pb', as_text=False)
Running it, you will see the following log:
Converted 3 variables to const ops.
As you can see, tf.graph_util.convert_variables_to_constants() converts the variables to constants and stores the result in graph.pb. Now let's see how to use this model:
import tensorflow as tf

with tf.Session() as sess:
    with open('./checkpoint_dir/graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        output = tf.import_graph_def(graph_def, return_elements=['out:0'])
        print(sess.run(output))
The result is as follows:
[100.0]
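As a sanity check, the frozen graph simply computes out = (w1 + w2) * bias with the values defined above, which we can reproduce in plain Python (no TensorFlow needed):

```python
# Plain-Python sanity check of the frozen graph's arithmetic:
# out = (w1 + w2) * bias, with the values defined in the example above
w1, w2, bias = 20.0, 30.0, 2.0
out = (w1 + w2) * bias
print(out)  # 100.0
```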
Back to tf.graph_util.convert_variables_to_constants(): you need to pass in the Session object and the graph, which is understandable. The third parameter, ["out"], specifies the names of the model's output tensors.

1.2 Have the code and a model, but don't want to retrain
When you have the model's source code, you can call tf.graph_util.convert_variables_to_constants() at export time to fold the variables into constants in the graph file. But much of the time we only get someone else's checkpoint files, that is, the meta, index, and data files. In this case, the variables in the data file need to be converted to constants and saved together with the graph. The idea is simple: first load the checkpoint, then save it again in the new format.
Suppose the code that trains and saves the model is as follows:
import tensorflow as tf

w1 = tf.Variable(20.0, name="w1")
w2 = tf.Variable(30.0, name="w2")
b1 = tf.Variable(2.0, name="bias")
w3 = tf.add(w1, w2)
# Remember to set the name; we need "out" later
out = tf.multiply(w3, b1, name="out")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver = tf.train.Saver()
    saver.save(sess, './checkpoint_dir/mymodel', global_step=1000)
At this point, the model files are as follows:
checkpoint
mymodel-1000.data-00000-of-00001
mymodel-1000.index
mymodel-1000.meta
Suppose we only have these four model files, but we can see the training source code. Then we can export them to a single PB file as follows:
import tensorflow as tf

with tf.Session() as sess:
    # Initialize the variables
    sess.run(tf.global_variables_initializer())
    # Get the latest checkpoint (in effect, this parses the checkpoint file)
    latest_ckpt = tf.train.latest_checkpoint("./checkpoint_dir")
    # Load the graph
    restore_saver = tf.train.import_meta_graph('./checkpoint_dir/mymodel-1000.meta')
    # Restore the graph; the weights and other parameters are loaded into their places in the graph
    restore_saver.restore(sess, latest_ckpt)
    # Convert the variables in the graph to constants
    output_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["out"])
    # Save the new graph to "./pretrained/graph.pb"
    tf.train.write_graph(output_graph_def, 'pretrained', "graph.pb", as_text=False)
After execution, you will see the following log:
Converted 3 variables to const ops.
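To see for yourself that freezing really removes the variable ops (which is what the "Converted 3 variables" log means), here is a self-contained sketch that builds a tiny graph, freezes it, and inspects the resulting node ops. It is an illustrative assumption, not part of the original example, and it imports tf.compat.v1 so it also runs under TensorFlow 2; on plain TF 1.x you can simply `import tensorflow as tf`.

```python
# Sketch: verify that convert_variables_to_constants() leaves no variable ops.
# Assumption: tf.compat.v1 is available (TF 1.14+ or any TF 2.x).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    w1 = tf.Variable(20.0, name="w1")
    w2 = tf.Variable(30.0, name="w2")
    out = tf.multiply(tf.add(w1, w2), 2.0, name="out")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, g.as_graph_def(), ["out"])

# After freezing, the graph should contain Const and compute ops only
ops = sorted({n.op for n in frozen.node})
print(ops)
has_variables = any(op in ("VariableV2", "VarHandleOp") for op in ops)
print(has_variables)  # expect False
```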
The next step is to use it, in the same way as before:
import tensorflow as tf

with tf.Session() as sess:
    with open('./pretrained/graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        output = tf.import_graph_def(graph_def, return_elements=['out:0'])
        print(sess.run(output))
It prints the following:
[100.0]
2 Model interface settings
Notice that so far we have only set up an output interface. Obviously, in real use we cannot have only an output; we also need an input. Next, let's see how to set up both input and output. Again we consider the same two cases: you have the code and train from scratch, or you have the code and a model but don't want to retrain.

2.1 Have the code and train from scratch
Compared with the code in 1.1, only the definition of b1 changes:
import tensorflow as tf

w1 = tf.Variable(20.0, name="w1")
w2 = tf.Variable(30.0, name="w2")
# Change b1 to a placeholder so the user supplies it instead of hard-coding it
# b1 = tf.Variable(2.0, name="bias")
b1 = tf.placeholder(tf.float32, name='bias')
w3 = tf.add(w1, w2)
# Remember to set the name; we need "out" later
out = tf.multiply(w3, b1, name="out")

# Convert the variables to constants and write the network to a file
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Specify the names of the output tensors here
    graph = tf.graph_util.convert_variables_to_constants(sess, sess.graph_def, ["out"])
    tf.train.write_graph(graph, '.', './checkpoint_dir/graph.pb', as_text=False)
The log is as follows:
Converted 2 variables to const ops.
Next, let's see how to use it:
import tensorflow as tf

with tf.Session() as sess:
    with open('./checkpoint_dir/graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        output = tf.import_graph_def(graph_def, input_map={'bias:0': 4.}, return_elements=['out:0'])
        print(sess.run(output))
It prints the following:
[200.0]
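As a rough pure-Python analogy (not a TensorFlow API), feeding input_map={'bias:0': 4.} is like calling the frozen computation with your own bias value, while w1 and w2 stay baked in as constants:

```python
# Pure-Python analogy of importing the frozen graph with an input_map:
# the caller supplies bias, while w1 and w2 were frozen into the graph.
def frozen_out(bias, w1=20.0, w2=30.0):
    return (w1 + w2) * bias

print(frozen_out(4.0))  # 200.0, matching the result above
print(frozen_out(2.0))  # 100.0, matching section 1.1
```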
That is, to set up an input you first define it as a placeholder, and then, when importing the graph with tf.import_graph_def(), you feed it through the input_map={} parameter. The output is referenced directly by tensor name through return_elements=[].

2.2 Have the code and a model, but don't want to retrain
When you have the code and a model but don't want to retrain, you cannot modify the code that exported the model. But you can use the graph.get_tensor_by_name() function to get intermediate results from the graph and then add new logic on top of them. In fact, this case was already covered in the previous article, which you can refer to. Compared with the "have the code and train from scratch" case it is more limited: most of the time you can only get some of the model's intermediate results. Still, this satisfies the majority of use cases.
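A minimal end-to-end sketch of this idea, assuming the frozen graph from section 1.1 (here rebuilt in memory rather than read from disk, so the example is self-contained): import the frozen GraphDef, grab an intermediate tensor by name, and attach new logic to it. It imports tf.compat.v1 so it also runs under TensorFlow 2; on plain TF 1.x you can simply `import tensorflow as tf`.

```python
# Sketch: extend a frozen graph from an intermediate tensor.
# Assumption: tf.compat.v1 is available (TF 1.14+ or any TF 2.x).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Build and freeze a graph like the one in section 1.1
g = tf.Graph()
with g.as_default():
    w1 = tf.Variable(20.0, name="w1")
    w2 = tf.Variable(30.0, name="w2")
    w3 = tf.add(w1, w2, name="w3")          # intermediate result
    out = tf.multiply(w3, 2.0, name="out")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, g.as_graph_def(), ["out"])

# Import the frozen graph and add new logic on top of an intermediate tensor
with tf.Graph().as_default() as g2:
    tf.import_graph_def(frozen, name="")     # name="" keeps the original tensor names
    w3_tensor = g2.get_tensor_by_name("w3:0")  # w1 + w2 = 50.0
    new_out = tf.multiply(w3_tensor, 10.0, name="new_out")
    with tf.Session() as sess:
        result = sess.run(new_out)
print(result)  # 500.0
```

Note that name="" in tf.import_graph_def() avoids the default "import/" prefix, so the original tensor names like "w3:0" still work with get_tensor_by_name().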