The TensorFlow saved_model Module

Transferred from: https://blog.csdn.net/thriving_fcl/article/details/75213361
The saved_model module is mainly used for TensorFlow Serving. TF Serving is a system for deploying trained models to a production environment. Its main advantage is that the server side and the API can stay unchanged while new algorithms are deployed or experiments are run, all while maintaining high performance.

What is the benefit of keeping the server side and the API unchanged? There are many; I will illustrate from just one angle of my own experience. Suppose we need to deploy a text classification model. The input and output are fixed: the input is text, and the output is a probability for each class, or a class label. To get better results we may want to try many different models, such as CNN, RNN, RCNN, and so on. These models are trained and saved, and at the inference stage they need to be reloaded. We would like a single copy of the inference code that works for all of them, so that switching to a new model requires no changes to the inference code. How can this be done?

First, a summary of the two usual ways to save/load a TensorFlow model:
1. Use only a Saver to save/load variables. This approach clearly does not work on its own: since only the variables are saved, the graph (the model) must be redefined at inference time, so the inference code has to be modified for every different model. Even for the same model, a change in parameters must be reflected in the code, or at least kept in sync through a configuration file, which is cumbersome.
2. Use tf.train.import_meta_graph to import the graph information and create a Saver from it, then restore the variables with that Saver (a sketch of this approach follows below). Compared with the first approach there is no need to redefine the model, but to find the input and output tensors in the graph you must call graph.get_tensor_by_name(), which requires knowing the names given to those tensors when the model was defined. If the code that creates each model is written by the same person, this is relatively manageable: the inputs and outputs can be forced to use consistent names. With different developers it is harder to enforce tensor naming at model-creation time, so you end up maintaining a configuration file that records the tensor names to fetch, and reading those names from it.
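
As a rough illustration of the second approach, here is a minimal sketch; the checkpoint prefix model.ckpt and the tensor names input_x:0 and predict_y:0 are hypothetical and would have to match whatever was chosen at training time.

import tensorflow as tf

# Sketch of method 2: rebuild the graph from the .meta file,
# then restore the variables from the checkpoint.
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')
    # These names must be known in advance (or read from a config file).
    x = sess.graph.get_tensor_by_name('input_x:0')
    y = sess.graph.get_tensor_by_name('predict_y:0')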

The analysis above shows that unified inference code can be achieved with the original methods, but TensorFlow officially provides a better way, and it solves more than just this one problem. So it is worth learning to use the saved_model module.

Saving and Loading Models with saved_model
First, the APIs that will be used:

Class tf.saved_model.builder.SavedModelBuilder

Initialization method:

__init__(export_dir)

Import graph and variable information:

add_meta_graph_and_variables(
    sess,
    tags,
    signature_def_map=None,
    assets_collection=None,
    legacy_init_op=None,
    clear_devices=False,
    main_op=None
)

Load the saved model:

tf.saved_model.loader.load(
    sess,
    tags,
    export_dir,
    **saver_kwargs
)

(1) The simplest scenario: just save/load the model
Save
To save a well-trained model, use the following three lines of code.

builder = tf.saved_model.builder.SavedModelBuilder(saved_model_dir)
builder.add_meta_graph_and_variables(sess, ['tag_string'])
builder.save()

First, the SavedModelBuilder object is constructed. The initialization method only needs the directory name used to save the model, which does not need to be created in advance.

The add_meta_graph_and_variables method imports the graph information and the variables. It assumes the variables have already been initialized. For each SavedModelBuilder, this method must be executed once, to import the first meta graph.

The first parameter is the current session, which contains the graph structure and all the variables.

The second parameter is the tag for the meta graph currently being saved. The tag name can be customized; when the model is loaded later, the corresponding MetaGraphDef is looked up by tag name, and if it cannot be found an error such as RuntimeError: MetaGraphDef associated with tags 'foo' could not be found in SavedModel is raised. Tags can also use the system-defined constants, such as tf.saved_model.tag_constants.SERVING and tf.saved_model.tag_constants.TRAINING.

The save method serializes the model under the specified directory.

After saving, the saved_model_dir directory will contain a saved_model.pb file and a variables folder. As the names imply, variables holds all the variables, and saved_model.pb holds information such as the structure of the model.
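
Putting these pieces together, here is a minimal, hypothetical end-to-end save sketch (the graph, variable, and directory names are made up for illustration):

import tensorflow as tf

# A trivial graph, just so there is something to save.
x = tf.placeholder(tf.float32, [None, 1], name='input_x')
w = tf.Variable(2.0, name='w')
y = tf.multiply(x, w, name='predict_y')

saved_model_dir = './saved_model_demo'  # hypothetical; must not already exist
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # variables must be initialized
    builder = tf.saved_model.builder.SavedModelBuilder(saved_model_dir)
    builder.add_meta_graph_and_variables(sess, ['tag_string'])
    builder.save()
# saved_model_dir now contains saved_model.pb and a variables/ folder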

Load
The model can be loaded with the tf.saved_model.loader.load method. For example:

meta_graph_def = tf.saved_model.loader.load(sess, ['tag_string'], saved_model_dir)

The first parameter is the current session. The second is the tag of the meta graph defined at save time; the matching meta graph is found by this tag. The third parameter is the directory where the model was saved.

After loading, the required tensors can be fetched from the graph held by sess and used for inference. For example:

x = sess.graph.get_tensor_by_name('input_x:0')
y = sess.graph.get_tensor_by_name('predict_y:0')

# _x is the actual sample to run inference on
_x = ...
sess.run(y, feed_dict={x: _x})

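Assembled into a runnable sketch (assuming the model was saved with the hypothetical toy graph from the save sketch above):

import tensorflow as tf

# Load into a fresh graph and run one hypothetical sample through it.
saved_model_dir = './saved_model_demo'
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ['tag_string'], saved_model_dir)
    x = sess.graph.get_tensor_by_name('input_x:0')
    y = sess.graph.get_tensor_by_name('predict_y:0')
    print(sess.run(y, feed_dict={x: [[1.0]]}))
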
This is the same as the second method described earlier: you still have to know the tensor names. So how can the model be used without knowing the tensor names? That is where the third argument of add_meta_graph_and_variables, signature_def_map, comes in.

(2) Using SignatureDef
My understanding of SignatureDef is that it defines a protocol: it encapsulates the information we need, and we retrieve that information through the protocol, which decouples the creation of the model from its use. The structure of SignatureDef and detailed documentation are at: https://github.com/tensorflow/serving/blob/master/tensorflow_serving/g3doc/signature_defs.md

Related APIs. Build a signature:

tf.saved_model.signature_def_utils.build_signature_def(
    inputs=None,
    outputs=None,
    method_name=None
)

Build tensor info:

tf.saved_model.utils.build_tensor_info(tensor)

A SignatureDef encapsulates the input and output tensor information and gives each tensor a custom alias. So at the model-building stage tensors can be given any names; as long as unified aliases are assigned in the SignatureDef when the trained model is saved, users do not depend on the underlying tensor names.

TensorFlow's examples for this part use a lot of signature_constants; the main purpose of these constants is to provide convenient, unified naming. For understanding the role of SignatureDef itself we can ignore them, and simply write whatever names we need when naming is required.
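
For reference, here is a sketch of what the save call can look like with the system-defined constants instead of hand-picked names (the toy graph and directory are hypothetical):

import tensorflow as tf

# Toy graph for illustration only.
x = tf.placeholder(tf.float32, [None, 1], name='input_x')
y = tf.identity(x, name='predict_y')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    builder = tf.saved_model.builder.SavedModelBuilder('./serving_demo')
    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs={tf.saved_model.signature_constants.PREDICT_INPUTS:
                tf.saved_model.utils.build_tensor_info(x)},
        outputs={tf.saved_model.signature_constants.PREDICT_OUTPUTS:
                 tf.saved_model.utils.build_tensor_info(y)},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                signature})
    builder.save()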

Save
Assume the input alias agreed on when defining the model is "input_x" and the output alias is "output". The code using SignatureDef is as follows:

builder = tf.saved_model.builder.SavedModelBuilder(saved_model_dir)

# x is the input tensor, keep_prob is the dropout probability
inputs = {'input_x': tf.saved_model.utils.build_tensor_info(x),
          'keep_prob': tf.saved_model.utils.build_tensor_info(keep_prob)}

# y is the final desired output tensor
outputs = {'output': tf.saved_model.utils.build_tensor_info(y)}

signature = tf.saved_model.signature_def_utils.build_signature_def(inputs, outputs, 'test_sig_name')

builder.add_meta_graph_and_variables(sess, ['test_saved_model'], {'test_signature': signature})
builder.save()

The inputs above include keep_prob to show that there can be more than one input. The build_tensor_info method serializes a tensor's relevant information into a TensorInfo protocol buffer.

inputs and outputs are both dicts: the keys are the agreed input and output aliases, and the values are the TensorInfo wrappers of the concrete tensors.

Then build the SignatureDef with the build_signature_def method; the third parameter, method_name, can be given any value for now.

The created SignatureDef is used in the third parameter of add_meta_graph_and_variables, signature_def_map, but the SignatureDef object is not passed in directly. signature_def_map actually receives a dict whose keys are the signature names we choose and whose values are SignatureDef objects.

Load
The code for loading and using the model (omitting the construction of sess) is as follows:

signature_key = 'test_signature'
input_key = 'input_x'
output_key = 'output'

meta_graph_def = tf.saved_model.loader.load(sess, ['test_saved_model'], saved_model_dir)

# take the SignatureDef objects out of meta_graph_def
signature = meta_graph_def.signature_def

# look up the concrete input/output tensor names from the signature
x_tensor_name = signature[signature_key].inputs[input_key].name
y_tensor_name = signature[signature_key].outputs[output_key].name

# get the tensors and run inference; _x is the actual input data
x = sess.graph.get_tensor_by_name(x_tensor_name)
y = sess.graph.get_tensor_by_name(y_tensor_name)
sess.run(y, feed_dict={x: _x})

From the two pieces of code above, we can see that we only need to agree on the input and output aliases. When the model is saved, those aliases are used to create the signature; the concrete names of the input and output tensors are completely hidden, which decouples creating the model from using it.
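
To make the fragments above concrete, here is a self-contained sketch combining both halves; the toy dropout graph is hypothetical, and the aliases match the ones agreed above:

import tensorflow as tf

saved_model_dir = './sig_demo'  # hypothetical directory

# --- save side (toy graph standing in for a real model) ---
x = tf.placeholder(tf.float32, [None, 1], name='x')
keep_prob = tf.placeholder(tf.float32, name='kp')
y = tf.nn.dropout(x, keep_prob, name='y')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    builder = tf.saved_model.builder.SavedModelBuilder(saved_model_dir)
    inputs = {'input_x': tf.saved_model.utils.build_tensor_info(x),
              'keep_prob': tf.saved_model.utils.build_tensor_info(keep_prob)}
    outputs = {'output': tf.saved_model.utils.build_tensor_info(y)}
    signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs, outputs, 'test_sig_name')
    builder.add_meta_graph_and_variables(
        sess, ['test_saved_model'], {'test_signature': signature})
    builder.save()

# --- load side: only the agreed aliases are needed ---
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess, ['test_saved_model'], saved_model_dir)
    sig = meta_graph_def.signature_def['test_signature']
    x_t = sess.graph.get_tensor_by_name(sig.inputs['input_x'].name)
    kp_t = sess.graph.get_tensor_by_name(sig.inputs['keep_prob'].name)
    y_t = sess.graph.get_tensor_by_name(sig.outputs['output'].name)
    print(sess.run(y_t, feed_dict={x_t: [[1.0]], kp_t: 1.0}))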
