TensorFlow Study Notes (29): The Bloodshed Caused by merge_all

We often use dropout when training deep neural networks, but dropout must be removed at test time. To handle this, we usually build two models that share variables. (You can also add a train_flag to a single model; here we only discuss the problems the first method may run into.) Meanwhile, to visualize our data with TensorBoard, we often use summaries, and in the end we manage them with the convenient merge_all function.

Error example

When these two situations meet, the bug shows up. Take a look at the code:

import tensorflow as tf
import numpy as np

class Model(object):
    def __init__(self):
        self.graph()
        self.merged_summary = tf.summary.merge_all()  # the place where the bloodshed is caused

    def graph(self):
        self.x = tf.placeholder(dtype=tf.float32, shape=[None, 1])
        self.label = tf.placeholder(dtype=tf.float32, shape=[None, 1])
        w = tf.get_variable("w", shape=[1, 1])
        self.predict = tf.matmul(self.x, w)
        self.loss = tf.reduce_mean(
            tf.reduce_sum(tf.square(self.label - self.predict), axis=1))
        self.train_op = tf.train.GradientDescentOptimizer(0.01).minimize(self.loss)
        tf.summary.scalar("loss", self.loss)

def run_epoch(session, model):
    x = np.random.rand(1000).reshape(-1, 1)
    label = x * 3
    feed_dict = {model.x.name: x, model.label: label}
    su = session.run([model.merged_summary], feed_dict)

def main():
    with tf.Graph().as_default():
        with tf.name_scope("train"):
            with tf.variable_scope("var1", dtype=tf.float32):
                model1 = Model()
        with tf.name_scope("test"):
            with tf.variable_scope("var1", reuse=True, dtype=tf.float32):
                model2 = Model()
        with tf.Session() as sess:
            tf.global_variables_initializer().run()
            run_epoch(sess, model1)
            run_epoch(sess, model2)

if __name__ == "__main__":
    main()

Run it and this is what happens: run_epoch(sess, model1) executes without error, but as soon as run_epoch(sess, model2) executes, it raises an error (the full error message is reproduced at the end of this article).

Cause of the error

Look at the code fragment:

class Model(object):
    def __init__(self):
        self.graph()
        self.merged_summary = tf.summary.merge_all()  # the place where the bloodshed is caused
...
with tf.name_scope("train"):
    with tf.variable_scope("var1", dtype=tf.float32):
        model1 = Model()  # the merge_all here manages only its own summary
with tf.name_scope("test"):
    with tf.variable_scope("var1", reuse=True, dtype=tf.float32):
        model2 = Model()  # the merge_all here manages its own summary AND the summary of the model above
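The over-collection behavior can be sketched in plain Python. The `SUMMARIES` list, `scalar_summary`, and `merge_all` below are simplified stand-ins that mimic TF1's graph-wide summary collection, not the real API:

```python
# Minimal sketch of TF1's graph-wide summary collection semantics.
# SUMMARIES, scalar_summary, and merge_all are hypothetical stand-ins.
SUMMARIES = []  # one global collection per graph

def scalar_summary(name):
    SUMMARIES.append(name)      # tf.summary.scalar adds its op to the collection

def merge_all():
    return list(SUMMARIES)      # tf.summary.merge_all takes EVERYTHING collected so far

# Build model1: its graph() registers one summary.
scalar_summary("train/var1/loss")
merged1 = merge_all()           # only its own summary

# Build model2: the global collection still holds model1's summary.
scalar_summary("test/var1/loss")
merged2 = merge_all()           # picks up BOTH models' summaries

print(merged1)  # ['train/var1/loss']
print(merged2)  # ['train/var1/loss', 'test/var1/loss']
```

This is exactly why model2's merged summary silently drags in model1's ops: the collection is per-graph, not per-model.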

Because model2's merged_summary also contains model1's summary op, and computing that op needs model1's placeholders to be fed (which run_epoch does not do), the run raises an error.

Solution

We just need to replace merge_all with an explicitly scoped merge to solve this problem. Look at the code:

class Model(object):
    def __init__(self, scope):
        self.graph()
        self.merged_summary = tf.summary.merge(
            tf.get_collection(tf.GraphKeys.SUMMARIES, scope)
        )
...
with tf.Graph().as_default():
    with tf.name_scope("train") as train_scope:
        with tf.variable_scope("var1", dtype=tf.float32):
            model1 = Model(train_scope)
    with tf.name_scope("test") as test_scope:
        with tf.variable_scope("var1", reuse=True, dtype=tf.float32):
            model2 = Model(test_scope)
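The fix works because tf.get_collection(key, scope) filters the collection by matching each item's name against the scope. A hedged pure-Python sketch of that filtering (the `get_collection` below is a hypothetical stand-in, mimicking the TF1 semantics):

```python
import re

# Sketch of scope-filtered collection lookup, mimicking
# tf.get_collection(tf.GraphKeys.SUMMARIES, scope) semantics.
SUMMARIES = ["train/var1/loss", "test/var1/loss"]  # hypothetical summary op names

def get_collection(scope=None):
    if scope is None:
        return list(SUMMARIES)                           # no scope: whole graph
    return [s for s in SUMMARIES if re.match(scope, s)]  # keep only matching names

print(get_collection())          # both models' summaries
print(get_collection("train/"))  # only model1's summary
print(get_collection("test/"))   # only model2's summary
```

Passing the name_scope string (e.g. "train/") into Model therefore restricts each merged summary to the ops created under that model's own scope.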

More generally, whenever there are multiple models in one graph and a similar error appears, check whether the op you are running implicitly depends on ops from another model: tf.get_collection without a scope argument collects from the entire graph.

The error message

tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'train/var1/Placeholder' with dtype float
    [[Node: train/var1/Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]]]
