TensorBoard
TensorBoard's official tutorial is here:
https://www.tensorflow.org/versions/r0.7/how_tos/summaries_and_tensorboard/index.html
In short, TensorBoard is a visualization tool for viewing the TensorFlow computation graph and the various values and images produced while a program runs. It works in two steps:
1. Add "Summary operations" to the desired node in the TensorFlow program, and "Summary operations" collects the node's data and marks the previous step, timestamp, and other identifiers to write to the event file.
The event file itself is a log of serialized Event protocol buffers, written into the log directory with a name of the form events.out.tfevents.&lt;timestamp&gt;.&lt;hostname&gt;.
2. TensorBoard reads the event file and visualizes the recorded TensorFlow process.

Demo

The demo below uses the MNIST example provided on the official website; on my machine the path is:

~/libsource/tensorflow/tensorflow/examples/tutorials/mnist

where ~/libsource/tensorflow/ should be changed to your own TensorFlow path. That directory contains mnist_with_summaries.py, which is the MNIST demo with the summary operations added. Start it with:

python mnist_with_summaries.py
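To make steps 1 and 2 concrete before the full demo source below, here is a minimal, self-contained sketch of my own (not from the official tutorial), written against the same old TensorFlow 0.x summary API used throughout this post:

import tensorflow as tf

# A node whose value changes over time, so there is something to plot.
x = tf.Variable(0.0, name='x')
increment = tf.assign_add(x, 1.0)

# Step 1: attach a summary op to the node.
tf.scalar_summary('x', x)
merged = tf.merge_all_summaries()

# The SummaryWriter appends serialized summaries to an event file.
writer = tf.train.SummaryWriter('/tmp/demo_logs')

with tf.Session() as sess:
  sess.run(tf.initialize_all_variables())
  for step in range(100):
    summary, _ = sess.run([merged, increment])
    writer.add_summary(summary, step)  # tagged with the step number
  writer.close()

# Step 2: tensorboard --logdir=/tmp/demo_logs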
The source code of mnist_with_summaries.py is as follows:
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""A simple MNIST classifier which displays summaries in TensorBoard.

This is an unimpressive MNIST model, but it is a good example of using
tf.name_scope to make a graph legible in the TensorBoard graph explorer, and of
naming summary tags so that they are grouped meaningfully in TensorBoard.

It demonstrates the functionality of every TensorBoard dashboard.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_boolean('fake_data', False, 'If true, uses fake data '
                     'for unit testing.')
flags.DEFINE_integer('max_steps', 1000, 'Number of steps to run trainer.')
flags.DEFINE_float('learning_rate', 0.001, 'Initial learning rate.')
flags.DEFINE_float('dropout', 0.9, 'Keep probability for training dropout.')
flags.DEFINE_string('data_dir', '/tmp/data', 'Directory for storing data')
flags.DEFINE_string('summaries_dir', '/tmp/mnist_logs', 'Summaries directory')


def train():
  # Import data
  mnist = input_data.read_data_sets(FLAGS.data_dir,
                                    one_hot=True,
                                    fake_data=FLAGS.fake_data)

  sess = tf.InteractiveSession()

  # Create a multilayer model.

  # Input placeholders
  with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 784], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')

  with tf.name_scope('input_reshape'):
    image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
    tf.image_summary('input', image_shaped_input, 10)

  # We can't initialize these variables to 0 - the network will get stuck.
  def weight_variable(shape):
    """Create a weight variable with appropriate initialization."""
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

  def bias_variable(shape):
    """Create a bias variable with appropriate initialization."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

  def variable_summaries(var, name):
    """Attach a lot of summaries to a Tensor."""
    with tf.name_scope('summaries'):
      mean = tf.reduce_mean(var)
      tf.scalar_summary('mean/' + name, mean)
      with tf.name_scope('stddev'):
        stddev = tf.sqrt(tf.reduce_sum(tf.square(var - mean)))
      tf.scalar_summary('sttdev/' + name, stddev)
      tf.scalar_summary('max/' + name, tf.reduce_max(var))
      tf.scalar_summary('min/' + name, tf.reduce_min(var))
      tf.histogram_summary(name, var)

  def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
    """Reusable code for making a simple neural net layer.

    It does a matrix multiply, bias add, and then uses relu to nonlinearize.
    It also sets up name scoping so that the resultant graph is easy to read,
    and adds a number of summary ops.
    """
    # Adding a name scope ensures logical grouping of the layers in the graph.
    with tf.name_scope(layer_name):
      # This Variable will hold the state of the weights for the layer
      with tf.name_scope('weights'):
        weights = weight_variable([input_dim, output_dim])
        variable_summaries(weights, layer_name + '/weights')
      with tf.name_scope('biases'):
        biases = bias_variable([output_dim])
        variable_summaries(biases, layer_name + '/biases')
      with tf.name_scope('Wx_plus_b'):
        preactivate = tf.matmul(input_tensor, weights) + biases
        tf.histogram_summary(layer_name + '/pre_activations', preactivate)
      activations = act(preactivate, 'activation')
      tf.histogram_summary(layer_name + '/activations', activations)
      return activations

  hidden1 = nn_layer(x, 784, 500, 'layer1')

  with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    tf.scalar_summary('dropout_keep_probability', keep_prob)
    dropped = tf.nn.dropout(hidden1, keep_prob)

  y = nn_layer(dropped, 500, 10, 'layer2', act=tf.nn.softmax)

  with tf.name_scope('cross_entropy'):
    diff = y_ * tf.log(y)
    with tf.name_scope('total'):
      cross_entropy = -tf.reduce_mean(diff)
    tf.scalar_summary('cross entropy', cross_entropy)

  with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer(FLAGS.learning_rate).minimize(
        cross_entropy)

  with tf.name_scope('accuracy'):
    with tf.name_scope('correct_prediction'):
      correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    with tf.name_scope('accuracy'):
      accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    tf.scalar_summary('accuracy', accuracy)

  # Merge all the summaries and write them out to /tmp/mnist_logs (by default)
  merged = tf.merge_all_summaries()
  train_writer = tf.train.SummaryWriter(FLAGS.summaries_dir + '/train',
                                        sess.graph)
  test_writer = tf.train.SummaryWriter(FLAGS.summaries_dir + '/test')
  tf.initialize_all_variables().run()

  # Train the model, and also write summaries.
  # Every 10th step, measure test-set accuracy, and write test summaries
  # All other steps, run train_step on training data, & add training summaries

  def feed_dict(train):
    """Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
    if train or FLAGS.fake_data:
      xs, ys = mnist.train.next_batch(100, fake_data=FLAGS.fake_data)
      k = FLAGS.dropout
    else:
      xs, ys = mnist.test.images, mnist.test.labels
      k = 1.0
    return {x: xs, y_: ys, keep_prob: k}

  for i in range(FLAGS.max_steps):
    if i % 10 == 0:  # Record summaries and test-set accuracy
      summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
      test_writer.add_summary(summary, i)
      print('Accuracy at step %s: %s' % (i, acc))
    else:  # Record train set summaries, and train
      if i % 100 == 99:  # Record execution stats
        run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
        run_metadata = tf.RunMetadata()
        summary, _ = sess.run([merged, train_step],
                              feed_dict=feed_dict(True),
                              options=run_options,
                              run_metadata=run_metadata)
        train_writer.add_run_metadata(run_metadata, 'step%d' % i)
        train_writer.add_summary(summary, i)
        print('Adding run metadata for', i)
      else:  # Record a summary
        summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
        train_writer.add_summary(summary, i)


def main(_):
  if tf.gfile.Exists(FLAGS.summaries_dir):
    tf.gfile.DeleteRecursively(FLAGS.summaries_dir)
  tf.gfile.MakeDirs(FLAGS.summaries_dir)
  train()


if __name__ == '__main__':
  tf.app.run()
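Because the script declares its hyperparameters through tf.app.flags, every flags.DEFINE_* call above also becomes a command-line flag, parsed when tf.app.run() starts main. For example (the override values here are just an illustration):

python mnist_with_summaries.py --max_steps=2000 --learning_rate=0.01 --summaries_dir=/tmp/mnist_logs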
In this file, the line

flags.DEFINE_string('summaries_dir', '/tmp/mnist_logs', 'Summaries directory')

specifies the output path of the event files, /tmp/mnist_logs in this example. Point the TensorBoard service at that directory:

tensorboard --logdir=/tmp/mnist_logs/
Then browse to http://0.0.0.0:6006, and in this visual interface you can view the TensorFlow graph and the various intermediate values and images.
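TensorBoard is not the only way to look at the logged data. As an aside of my own (assuming your TensorFlow version exports tf.train.summary_iterator, which lives alongside SummaryWriter), the event files can also be read back programmatically; /tmp/mnist_logs/train is the directory the demo writes to:

import glob

import tensorflow as tf

# Read every event file the demo wrote for the training run.
for path in glob.glob('/tmp/mnist_logs/train/events.out.tfevents.*'):
  for event in tf.train.summary_iterator(path):
    for value in event.summary.value:
      # Scalar summaries (accuracy, cross entropy, ...) carry their
      # number in simple_value; other summary types leave it at 0.
      print(event.step, value.tag, value.simple_value)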
TensorBoard itself is just a debugging tool. It looks cool, but making full use of it requires a thorough understanding of TensorFlow, so the following posts will turn to studying TensorFlow itself.
Fixing a TensorBoard bug (IOError [Errno 2])
When TensorFlow has been installed via pip, the following warnings may appear when TensorBoard starts:
WARNING:tensorflow:IOError [Errno 2] No such file or directory: '/usr/local/lib/python2.7/dist-packages/tensorflow/tensorboard/TAG' on path /usr/local/lib/python2.7/dist-packages/tensorflow/tensorboard/TAG
WARNING:tensorflow:Unable to read TensorBoard tag
Starting TensorBoard on port 6006
Solution:
Download the TensorFlow source code from GitHub and copy the TAG file from the source tree's tensorflow/tensorboard directory into the tensorboard directory of the pip-installed package. On my machine:

sudo cp ~/libsource/tensorflow/tensorflow/tensorboard/TAG /usr/local/lib/python2.7/dist-packages/tensorflow/tensorboard/
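To check the fix (paths as above; adjust them to your own installation), confirm the file is in place and start TensorBoard again; the two warnings should no longer appear:

ls /usr/local/lib/python2.7/dist-packages/tensorflow/tensorboard/
tensorboard --logdir=/tmp/mnist_logs/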