We often use dropout when training deep neural networks, but dropout has to be removed at test time. To handle this, we usually build two models, one for training and one for testing, and let them share variables. Alternatively, you can feed a train_flag placeholder that switches dropout on and off; this article only discusses a problem the first method can run into. To visualize our data with TensorBoard, we usually add summaries, and in the end we call the convenient tf.summary.merge_all() to manage them.
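As an aside, here is a minimal sketch of the train_flag alternative mentioned above. It is my own illustration, not code from this article; the placeholder name train_flag, the layer sizes, and the dropout rate are all assumptions.

import tensorflow as tf

# One graph for both training and testing: dropout is switched by a boolean
# placeholder instead of building two separate models.
x = tf.placeholder(tf.float32, shape=[None, 1])
train_flag = tf.placeholder_with_default(False, shape=[], name="train_flag")  # assumed name

hidden = tf.layers.dense(x, 16, activation=tf.nn.relu)
# tf.layers.dropout only applies dropout when training evaluates to True
hidden = tf.layers.dropout(hidden, rate=0.5, training=train_flag)
predict = tf.layers.dense(hidden, 1)

# training step: sess.run(..., feed_dict={x: batch_x, train_flag: True})
# test step:     sess.run(predict, feed_dict={x: batch_x})  # dropout off by default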
Error example
When these two situations meet, the bug shows up. Look at the code:
import tensorflow as tf
import numpy as np


class Model(object):
    def __init__(self):
        self.graph()
        self.merged_summary = tf.summary.merge_all()  # the place where the bloodshed is caused

    def graph(self):
        self.x = tf.placeholder(dtype=tf.float32, shape=[None, 1])
        self.label = tf.placeholder(dtype=tf.float32, shape=[None, 1])
        w = tf.get_variable("W", shape=[1, 1])
        self.predict = tf.matmul(self.x, w)
        self.loss = tf.reduce_mean(
            tf.reduce_sum(tf.square(self.label - self.predict), axis=1))
        self.train_op = tf.train.GradientDescentOptimizer(0.01).minimize(self.loss)
        tf.summary.scalar("loss", self.loss)


def run_epoch(session, model):
    x = np.random.rand(1000).reshape(-1, 1)
    label = x * 3
    feed_dic = {model.x.name: x, model.label: label}
    su = session.run([model.merged_summary], feed_dic)


def main():
    with tf.Graph().as_default():
        with tf.name_scope("train"):
            with tf.variable_scope("var1", dtype=tf.float32):
                model1 = Model()
        with tf.name_scope("test"):
            with tf.variable_scope("var1", reuse=True, dtype=tf.float32):
                model2 = Model()
        with tf.Session() as sess:
            tf.global_variables_initializer().run()
            run_epoch(sess, model1)
            run_epoch(sess, model2)


if __name__ == "__main__":
    main()
The behavior is as follows: run_epoch(sess, model1) executes without error, but as soon as run_epoch(sess, model2) is executed, an error is raised (see the error message at the end of this article).
Cause of the error
Look at the code fragment:
class Model(object):
    def __init__(self):
        self.graph()
        self.merged_summary = tf.summary.merge_all()  # the place where the bloodshed is caused
...
with tf.name_scope("train"):
    with tf.variable_scope("var1", dtype=tf.float32):
        model1 = Model()  # the merge_all here only manages its own summary
with tf.name_scope("test"):
    with tf.variable_scope("var1", reuse=True, dtype=tf.float32):
        model2 = Model()  # the merge_all here manages its own summary and the summary of the model above
tf.summary.merge_all() collects every summary in the graph's SUMMARIES collection, so model2's merged summary also contains model1's loss summary. Evaluating a summary requires feeding the placeholders it depends on, and model1's placeholders are not fed when we run model2, so an error is raised.
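To see this concretely, here is a small diagnostic sketch of my own (not from the original post), run after both models have been built in main(); the exact tensor names are assumptions and may differ slightly:

# inside main(), after model1 and model2 have been constructed
for s in tf.get_collection(tf.GraphKeys.SUMMARIES):
    print(s.name)
# expected to print something like:
#   train/var1/loss:0
#   test/var1/loss:0
# model1.merged_summary was created when only the first summary existed,
# while model2.merged_summary was created after both, so it also depends
# on model1's (unfed) placeholders.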
Solution
We just need to replace merge_all with a merge over the summaries of the current scope to solve this problem. Look at the code:
class Model(object):
    def __init__(self, scope):
        self.graph()
        self.merged_summary = tf.summary.merge(
            tf.get_collection(tf.GraphKeys.SUMMARIES, scope)
        )
...
with tf.Graph().as_default():
    with tf.name_scope("train") as train_scope:
        with tf.variable_scope("var1", dtype=tf.float32):
            model1 = Model(train_scope)
    with tf.name_scope("test") as test_scope:
        with tf.variable_scope("var1", reuse=True, dtype=tf.float32):
            model2 = Model(test_scope)
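For completeness, a minimal sketch of how the scoped merged summaries might then be written out for TensorBoard; the log directories, the step argument, and the separate FileWriter per model are my own assumptions, not part of the original post:

# inside the Session block of main()
train_writer = tf.summary.FileWriter("./logs/train", sess.graph)  # assumed paths
test_writer = tf.summary.FileWriter("./logs/test")

def run_epoch(session, model, writer, step):
    x = np.random.rand(1000).reshape(-1, 1)
    label = x * 3
    # each model's merged summary now only contains its own summaries
    su = session.run(model.merged_summary, {model.x: x, model.label: label})
    writer.add_summary(su, step)  # su is a serialized Summary protobuf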
For the details of tf.get_collection, refer to the TensorFlow documentation. More generally, when there are multiple models in one graph and a similar error appears, check whether the operation you are running also depends on tensors belonging to the other models.
The error message
tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'train/var1/Placeholder' with dtype float
	 [[Node: train/var1/Placeholder = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]]]