TensorBoard Visualization of a Simple Convolutional Neural Network


TensorBoard is the official visualization tool that ships with TensorFlow. It summarizes and displays the data recorded during model training. This article is based on TensorFlow 1.2; this version's TensorBoard interface is shown in the figure below:

[Figure: the TensorBoard interface in TensorFlow 1.2]


TensorBoard supports eight kinds of visualization, corresponding to the eight tabs in the figure above (a minimal sketch of the corresponding API calls follows the list):

- SCALARS: scalar curves, such as changes in accuracy, loss, weights, and biases. tf.summary.scalar() in the code defines the scalars shown under this tab.
- IMAGES: image data. For image-classification problems you can display the input images or pictures produced during training. tf.summary.image() defines the data presented in this tab (drawn directly as images); by default three images are shown.
- AUDIO: sound data. I have not tried a speech-analysis example, but it presumably works the same way as the image summaries.
- GRAPHS: TensorFlow's data-flow graph. For the graph to come out complete and readable, it needs to be laid out explicitly in the program with tf.name_scope() or `with tf.name_scope(...) as scope:`.
- DISTRIBUTIONS: data-distribution plots. They can show the distribution of data before and after activation and assist with design analysis.
- HISTOGRAMS: histograms of data, defined in code with tf.summary.histogram().
- EMBEDDINGS: for text analysis, the projected distribution of word vectors (e.g. word2vec).
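Here is a minimal, self-contained sketch of the three most common summary calls; the loss, images, and weights tensors are toy stand-ins invented for this sketch, not part of the article's model:

import tensorflow as tf

# Toy tensors standing in for real model values (assumptions for this sketch).
loss = tf.constant(0.5)                                # a scalar, e.g. a loss value
images = tf.zeros([8, 28, 28, 1])                      # a batch of 8 grayscale images (NHWC)
weights = tf.Variable(tf.truncated_normal([100], stddev=0.1))

tf.summary.scalar('loss', loss)                        # shows up under SCALARS
tf.summary.image('inputs', images, max_outputs=3)      # shows up under IMAGES (3 by default)
tf.summary.histogram('weights', weights)               # shows up under HISTOGRAMS and DISTRIBUTIONS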

TensorBoard runs a local server that listens on port 6006. When the browser sends a request, it analyzes the data recorded during training and plots the data curves and images. TensorBoard's code implementation:

First you need to declare the directory where the data will be saved:

log_dir = './simple_cnn_log'  # log dir to store all data and the graph structure

To show readable node names in TensorBoard, use with tf.name_scope() to qualify the namespaces when designing the network:

with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 784], name='x')
    y_ = tf.placeholder(tf.float32, [None, 10], name='y_')
    keep_prob = tf.placeholder(tf.float32, name='kp')

All nodes inside with tf.name_scope('input') are automatically named in the format "input/xxx".
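You can verify the prefixing directly; this small check is not from the article, but the printed names follow from how name scopes work:

import tensorflow as tf

with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 784], name='x')

print(x.op.name)  # input/x
print(x.name)     # input/x:0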
Define a variable_summaries function to record simple statistics of the data, such as the mean, standard deviation, and extreme values. Note that the parameter var passed to this function should be a tf.Variable, such as the weights or biases.

def variable_summaries(name, var):
    with tf.name_scope(name + '_summaries'):
        mean = tf.reduce_mean(var)
        tf.summary.scalar(name + '_mean', mean)
        with tf.name_scope(name + '_stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar(name + '_stddev', stddev)
        tf.summary.scalar(name + '_max', tf.reduce_max(var))
        tf.summary.scalar(name + '_min', tf.reduce_min(var))
        tf.summary.histogram(name + '_histogram', var)
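For example, continuing from the definition above, you can attach these statistics to a freshly created weight variable (the shape here matches the first convolution kernel in the complete listing below):

w_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
variable_summaries('w1', w_conv1)  # records mean/stddev/max/min scalars and a histogram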

All the summaries defined with tf.summary need to be merged into a single op so the program can evaluate, output, and save them:

merged = tf.summary.merge_all()
# merged = tf.summary.merge([input_summary, acc_summary])
train_writer = tf.summary.FileWriter(log_dir + '/train', sess.graph)
test_writer = tf.summary.FileWriter(log_dir + '/test')

This step is important. It is usually implemented with tf.summary.merge_all(), but that function carries a hidden risk, especially with cross-validation: the graphs required for testing and training may differ slightly (for example, dropout is needed for training but not for testing), and when the graphs overlap you can get a placeholder error complaining that a variable defined by a placeholder has not been fed a value. This happens when multiple graphs get confused at run time: merge_all() also collects the summary ops defined earlier, which must be computed too, so the placeholders they depend on must be fed as well.
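A stripped-down sketch of the failure mode; the two toy placeholders are invented for this sketch and stand in for the overlapping graphs:

import tensorflow as tf

a = tf.placeholder(tf.float32, name='a')
b = tf.placeholder(tf.float32, name='b')
tf.summary.scalar('a_summary', a)
tf.summary.scalar('b_summary', b)

merged = tf.summary.merge_all()  # collects BOTH summaries
with tf.Session() as sess:
    # Fails with "You must feed a value for placeholder tensor 'b'",
    # because merged also depends on b.
    sess.run(merged, feed_dict={a: 1.0})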
There are two solutions:
1) When there is not much data to record and visualize, define the summaries individually and aggregate them explicitly with tf.summary.merge() (in the pre-1.0 API these ops were called tf.scalar_summary and tf.merge_summary):

accuracy_summary = tf.summary.scalar("accuracy", accuracy)
loss_summary = tf.summary.scalar("loss", cross_entropy)

merged = tf.summary.merge([accuracy_summary, loss_summary])

2) Define the default graph explicitly. You can build the model inside a `with tf.Graph().as_default():` block, or declare that each run should start from a fresh default graph by calling tf.reset_default_graph() beforehand. The latter is particularly convenient for cross-validation.
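A sketch of the second option applied to cross-validation; build_fold_model and the fold loop are hypothetical scaffolding, not code from the article:

import tensorflow as tf

def build_fold_model():
    # Build this fold's (toy) model and summaries in the current default graph.
    x = tf.placeholder(tf.float32, [None, 784], name='x')
    loss = tf.reduce_mean(tf.square(x))  # stand-in loss for the sketch
    tf.summary.scalar('loss', loss)
    return tf.summary.merge_all()        # sees only this fold's summaries

for fold in range(5):
    tf.reset_default_graph()             # drop the previous fold's graph and summaries
    merged = build_fold_model()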

Next, perform the actual training, testing, and logging. Use tf.train.Saver() to create a saver for the model. Inside the training loop, run a test and record the accuracy every 10 steps; every 100 steps, use tf.RunOptions() to define the run options and tf.RunMetadata() to record and save the model's meta-information.
The training results are saved to the specified path log_dir. When the session runs, the merged op handle is passed into sess.run, the result comes back in summary, and the run data is saved with train_writer.add_summary(summary, i).

saver = tf.train.Saver()
for i in range(max_steps):
    if i % 10 == 0:
        summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
        test_writer.add_summary(summary, i)
        print('Accuracy at step %s: %s' % (i, acc))
    else:
        if i % 100 == 99:
            run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
            run_metadata = tf.RunMetadata()
            summary, _ = sess.run([merged, train_step],
                                  feed_dict=feed_dict(True),
                                  options=run_options,
                                  run_metadata=run_metadata)
            train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
            train_writer.add_summary(summary, i)
            saver.save(sess, log_dir + '/model.ckpt', i)
            print('Adding run metadata for', i)
        else:
            summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
            train_writer.add_summary(summary, i)
Start TensorBoard

After completing the code, open a new terminal and enter:

tensorboard --logdir=./simple_cnn_log

This prints a link address; copy it and open it in the Chrome browser to see the full TensorBoard view you just set up.

Complete Implementation

Finally, here is the complete TensorBoard implementation of the simple convolutional neural network (two constants whose values were garbled in the source, max_steps and the training batch size, are filled with assumed values and flagged in comments):

# ========================================
#  Simple CNN with TensorBoard
# ========================================
import tensorflow as tf

# define conv kernel
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    weight = tf.Variable(initial_value=initial)
    return weight

# define conv bias
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    bias = tf.Variable(initial_value=initial)
    return bias

# define a simple conv operation
def conv_op(in_tensor, kernel, strides=[1, 1, 1, 1], padding='SAME'):
    conv_out = tf.nn.conv2d(in_tensor, kernel, strides=strides, padding=padding)
    return conv_out

# define max pooling operation
def max_pool_2x2(in_tensor, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME'):
    max_pool = tf.nn.max_pool(in_tensor, ksize, strides, padding)
    return max_pool

def simple_cnn_tensorboard(mnist):
    """Simple CNN with TensorBoard visualization."""
    tf.reset_default_graph()
    log_dir = './simple_cnn_log'  # log dir to store all data and graph structure
    sess = tf.InteractiveSession()

    # CNN structure
    max_steps = 1000       # value garbled in the source; 1000 assumed
    learning_rate = 0.001
    dropout = 0.9
    w1 = [5, 5, 1, 32]
    b1 = [32]
    w2 = [5, 5, 32, 64]
    b2 = [64]
    wfc1 = [7 * 7 * 64, 1024]
    bfc1 = [1024]
    wfc2 = [1024, 10]
    bfc2 = [10]

    def variable_summaries(name, var):
        with tf.name_scope(name + '_summaries'):
            mean = tf.reduce_mean(var)
            tf.summary.scalar(name + '_mean', mean)
            with tf.name_scope(name + '_stddev'):
                stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
            tf.summary.scalar(name + '_stddev', stddev)
            tf.summary.scalar(name + '_max', tf.reduce_max(var))
            tf.summary.scalar(name + '_min', tf.reduce_min(var))
            tf.summary.histogram(name + '_histogram', var)

    with tf.name_scope('input'):
        x = tf.placeholder(tf.float32, [None, 784], name='x')
        y_ = tf.placeholder(tf.float32, [None, 10], name='y_')
        keep_prob = tf.placeholder(tf.float32, name='kp')

    with tf.name_scope('image_reshape'):
        x_image = tf.reshape(x, [-1, 28, 28, 1])  # 28*28 pic of 1 channel
        tf.summary.image('input', x_image)

    # 1st layer
    with tf.name_scope('conv_layr1'):
        w_conv1 = weight_variable(w1)
        variable_summaries('w1', w_conv1)
        b_conv1 = bias_variable(b1)
        variable_summaries('b1', b_conv1)
        with tf.name_scope('wx_plus_b'):
            pre_act = conv_op(x_image, w_conv1) + b_conv1
            tf.summary.histogram('pre_act', pre_act)
        h_conv1 = tf.nn.relu(pre_act, name='activation')
        h_pool1 = max_pool_2x2(h_conv1)

    # 2nd layer
    with tf.name_scope('conv_layr2'):
        w_conv2 = weight_variable(w2)
        variable_summaries('w2', w_conv2)
        b_conv2 = bias_variable(b2)
        variable_summaries('b2', b_conv2)
        h_conv2 = tf.nn.relu(conv_op(h_pool1, w_conv2) + b_conv2)
        h_pool2 = max_pool_2x2(h_conv2)

    # FC1
    with tf.name_scope('fc1'):
        h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
        w_fc1 = weight_variable(wfc1)
        variable_summaries('w_fc1', w_fc1)
        b_fc1 = bias_variable(bfc1)
        variable_summaries('b_fc1', b_fc1)
        h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1, name='_act')
        # dropout
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob=keep_prob)

    # FC2
    with tf.name_scope('fc2'):
        w_fc2 = weight_variable(wfc2)
        variable_summaries('w_fc2', w_fc2)
        b_fc2 = bias_variable(bfc2)
        variable_summaries('b_fc2', b_fc2)
        y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, w_fc2) + b_fc2, name='fc2_softmax')
        # tf.summary.scalar('softmax', y_conv)

    # loss function
    with tf.name_scope('cross_entropy'):
        cross_entropy = tf.reduce_mean(
            -tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]),
            name='cross_entropy')
        # tf.summary.scalar('cross_entropy', cross_entropy)

    with tf.name_scope('train'):
        train_step = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy)

    # estimate accuracy
    with tf.name_scope('accuracy'):
        correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        # acc_summary = tf.summary.scalar('accuracy', accuracy)
        tf.summary.scalar('accuracy', accuracy)

    # summary all
    merged = tf.summary.merge_all()
    # merged = tf.summary.merge([input_summary, acc_summary])
    train_writer = tf.summary.FileWriter(log_dir + '/train', sess.graph)
    test_writer = tf.summary.FileWriter(log_dir + '/test')
    tf.global_variables_initializer().run()

    def feed_dict(train):
        if train:
            xs, ys = mnist.train.next_batch(100)  # batch size garbled in the source; 100 assumed
            k = dropout
        else:
            xs, ys = mnist.test.images, mnist.test.labels
            k = 1.0
        return {x: xs, y_: ys, keep_prob: k}

    saver = tf.train.Saver()
    for i in range(max_steps):
        if i % 10 == 0:
            summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
            test_writer.add_summary(summary, i)
            print('Accuracy at step %s: %s' % (i, acc))
        else:
            if i % 100 == 99:
                run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
                run_metadata = tf.RunMetadata()
                summary, _ = sess.run([merged, train_step],
                                      feed_dict=feed_dict(True),
                                      options=run_options,
                                      run_metadata=run_metadata)
                train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
                train_writer.add_summary(summary, i)
                saver.save(sess, log_dir + '/model.ckpt', i)
                print('Adding run metadata for', i)
            else:
                summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
                train_writer.add_summary(summary, i)

    train_writer.close()
    test_writer.close()
    return
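To run it, load MNIST with the standard TF 1.x input helper and call the function (this small driver is not part of the original listing):

from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)  # one-hot labels to match y_
simple_cnn_tensorboard(mnist)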

After running, you get the TensorBoard visualizations shown below. First is the SCALARS tab; you can see that the scalar data has been grouped according to the with-statement name scopes in the code.
[Figure: the SCALARS tab]


Clicking accuracy shows how the accuracy changes:
[Figure: the accuracy curve]


The IMAGES tab shows the input image data, divided into test and train groups.
The DISTRIBUTIONS tab shows the distribution of the data, weights, and biases before activation:
[Figure: the DISTRIBUTIONS tab]


The HISTOGRAMS tab:
[Figure: the HISTOGRAMS tab]


Finally, the TensorFlow graph is given:
[Figure: the TensorFlow graph]

References:
1. "TensorFlow in Practice"
2. "TensorFlow: Technical Analysis and Practice"
3. https://stackoverflow.com/questions/35413618/tensorflow-placeholder-error-when-using-tf-merge-all-summaries
4. https://segmentfault.com/a/1190000007846181
