15. The Use of TensorBoard (TensorFlow)

I. Introduction to TensorBoard and Its Usage Workflow

1. TensorBoard Introduction

TensorBoard and the TensorFlow program run in different processes: TensorBoard automatically reads the latest TensorFlow log files and renders the current running state of the TensorFlow program.

2. TensorBoard Usage Workflow

  • Add record nodes: tf.summary.scalar/image/histogram()
  • Merge the record nodes: merged = tf.summary.merge_all()
  • Run the merged node: summary = sess.run(merged) to obtain the merged summary result
  • Instantiate a log writer: summary_writer = tf.summary.FileWriter(logdir, graph=sess.graph); passing in graph writes the current graph to the log
  • Call the writer's summary_writer.add_summary(summary, global_step=i) method to write all merged summaries to file
  • Call the writer's summary_writer.close() method to flush to disk; otherwise it flushes every 120 seconds
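The six steps fit together as in the following minimal end-to-end sketch (the toy variable x and the logs directory are illustrative, not from the original program):

import tensorflow as tf

x = tf.Variable(0.0, name='x')
increment = tf.assign_add(x, 1.0)            # a toy op so the scalar changes over steps
tf.summary.scalar('x', x)                    # step 1: add a record node
merged = tf.summary.merge_all()              # step 2: merge all record nodes

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # step 4: instantiate the writer; passing graph also writes the current graph
    writer = tf.summary.FileWriter('logs', graph=sess.graph)
    for i in range(100):
        _, summary = sess.run([increment, merged])   # step 3: run the merged node
        writer.add_summary(summary, global_step=i)   # step 5: write summaries to file
    writer.close()                                   # step 6: flush to disk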

II. TensorFlow Visualization Categories

1. Visualizing the computation graph: add_graph()

... create a graph ...
# Launch the graph in a session.
sess = tf.Session()
# Create a summary writer, add the 'graph' to the event file.
writer = tf.summary.FileWriter(logdir, sess.graph)
writer.close()  # writes to disk when closed; otherwise it flushes every 120 s
2. Visualizing Monitored Metrics: add_summary()

I. SCALAR

tf.summary.scalar(name, tensor, collections=None, family=None)

Visualizes how scalar values evolve over training iterations, e.g. validation accuracy (val acc), loss (train/test loss), learning rate, and per-layer weight/bias statistics (mean, std, max/min).

Input parameters:

  • name: the name of this op node; the chart TensorBoard draws is also labeled with this name
  • tensor: the variable to monitor; a real numeric tensor containing a single value

Output: a scalar Tensor of type string, containing a Summary protobuf.
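As a small, hypothetical illustration of the optional family parameter (train_loss/test_loss are placeholder tensors invented for the example), related curves can be grouped under one TensorBoard tab:

import tensorflow as tf

train_loss = tf.placeholder(tf.float32, shape=(), name='train_loss')
test_loss = tf.placeholder(tf.float32, shape=(), name='test_loss')
# family prefixes the tag, so both curves appear grouped under a 'loss' tab
tf.summary.scalar('train', train_loss, family='loss')
tf.summary.scalar('test', test_loss, family='loss')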

II. IMAGE

tf.summary.image(name, tensor, max_outputs=3, collections=None, family=None)

Visualizes training/test images or feature maps from the current training round.

Input parameters:

  • name: the name of this op node; TensorBoard also uses it to label the displayed images
  • tensor: a 4-D uint8 or float32 tensor of shape [batch_size, height, width, channels], where channels is 1, 3, or 4
  • max_outputs: the maximum number of batch elements to generate images for

Output: a scalar Tensor of type string, containing a Summary protobuf.
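A hedged sketch of the shape requirement, feeding random arrays in place of real images (the names and sizes are illustrative only):

import tensorflow as tf
import numpy as np

# [batch_size, height, width, channels]; channels=1 means grayscale
images = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='images')
tf.summary.image('input_images', images, max_outputs=4)  # log at most 4 images per step

merged = tf.summary.merge_all()
with tf.Session() as sess:
    writer = tf.summary.FileWriter('logs')
    batch = np.random.rand(8, 28, 28, 1).astype(np.float32)
    writer.add_summary(sess.run(merged, feed_dict={images: batch}), global_step=0)
    writer.close()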

III. HISTOGRAM

tf.summary.histogram(name, values, collections=None, family=None)

Visualizes the value distribution of a tensor.

Input parameters:

  • name: the name of this op node; TensorBoard also uses it to label the histogram
  • values: a real numeric tensor of any shape; its values are used to build the histogram

Output: a scalar Tensor of type string, containing a Summary protobuf.
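A small sketch (all names invented for illustration) that logs how a weight tensor's distribution drifts across steps; TensorBoard stacks one histogram slice per step:

import tensorflow as tf

w = tf.Variable(tf.truncated_normal([100], stddev=1.0), name='w')
drift = tf.assign_add(w, tf.fill([100], 0.1))  # shift the distribution a little each step
tf.summary.histogram('w_hist', w)
merged = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter('logs')
    for step in range(50):
        _, summary = sess.run([drift, merged])
        writer.add_summary(summary, global_step=step)
    writer.close()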

IV. A Comprehensive Summary of a Single Variable

def variable_summaries(var):
    """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
    with tf.name_scope('summaries'):
        mean = tf.reduce_mean(var)
        stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))

        # Summarize the variable's mean, standard deviation, max/min, and histogram
        tf.summary.scalar('mean', mean)
        tf.summary.scalar('stddev', stddev)
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.scalar('min', tf.reduce_min(var))
        tf.summary.histogram('histogram', var)
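Typical usage is one call per tensor you want to monitor; for example (a hypothetical weight variable):

W = tf.Variable(tf.truncated_normal([784, 500], stddev=0.1), name='weights')
variable_summaries(W)  # records mean/stddev/max/min scalars plus a histogram for W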

# Visualize the first 10 feature maps in activations
# (assuming 'activations' and 'pre_activate' are tensors produced by a layer)
tf.summary.image('feature_maps', activations, max_outputs=10)

# Visualize the histograms before and after the activation function
tf.summary.histogram('pre_act', pre_activate)
tf.summary.histogram('act', activations)
V. merge_all

tf.summary.merge_all(key=tf.GraphKeys.SUMMARIES)

Merges all summaries collected in the default graph. A program typically defines many log-writing operations, and invoking them one by one is cumbersome, so TensorFlow provides this function to collect all of them at once, e.g. merged = tf.summary.merge_all(). The operation is not executed immediately; you must run it explicitly (summary = sess.run(merged)) to obtain the merged result, then call the log writer's add_summary(summary, global_step=i) method to write all merged summaries to file.

3. Multiple Events: add_event()

If subdirectories of the logdir directory contain data from other runs (multiple events), TensorBoard displays the data of all runs (mainly scalars). This makes it possible to compare the model's results under different hyperparameters and tune them for the best outcome. (The figure referenced here showed the loss curve over 200 iterations above and over 400 iterations below; see the final program.)
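One way such a comparison could be produced (a sketch with assumed subdirectory names, not the original program): write each run's events into its own subdirectory of logs and point TensorBoard at the parent directory.

import tensorflow as tf

for run_name, steps in [('run_200', 200), ('run_400', 400)]:
    tf.reset_default_graph()
    x = tf.Variable(10.0, name='x')
    loss = tf.square(x, name='loss')        # toy loss that shrinks as x approaches 0
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
    tf.summary.scalar('loss', loss)
    merged = tf.summary.merge_all()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        writer = tf.summary.FileWriter('logs/' + run_name, graph=sess.graph)
        for i in range(steps):
            _, summary = sess.run([train_op, merged])
            writer.add_summary(summary, global_step=i)
        writer.close()
# 'tensorboard --logdir=logs' now overlays the loss curves of both runs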

III. Beautifying the Computation Graph with Namespaces

Namespaces give the visualized graph a clearer hierarchy, so the overall structure of the neural network is not drowned in detail. All nodes under the same namespace are collapsed into one node, and only nodes in the top-level namespace are displayed on the TensorBoard graph. Namespaces can be created with either tf.name_scope() or tf.variable_scope(), as shown in the final program.
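For instance (a hypothetical two-layer snippet), every node created inside a scope collapses into a single expandable node named after that scope:

import tensorflow as tf

with tf.name_scope('hidden_layer'):
    w1 = tf.Variable(tf.truncated_normal([784, 500]), name='weights')
    b1 = tf.Variable(tf.zeros([500]), name='biases')

with tf.name_scope('output_layer'):
    w2 = tf.Variable(tf.truncated_normal([500, 10]), name='weights')
    b2 = tf.Variable(tf.zeros([10]), name='biases')

# On the GRAPHS tab only 'hidden_layer' and 'output_layer' appear at the top level;
# double-clicking a node expands it to reveal the weights/biases inside.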

IV. Writing All Logs to File: tf.summary.FileWriter()

tf.summary.FileWriter(logdir, graph=None, flush_secs=120, max_queue=10) is responsible for writing the event logs (graph, scalar/image/histogram, event) to the specified file.

Initialization parameters:

  • logdir: the directory the events are written to
  • graph: if sess.graph is passed at initialization, it is equivalent to calling the add_graph() method, and is used to visualize the computation graph
  • flush_secs: how often, in seconds, to flush the added summaries and events to disk
  • max_queue: the maximum number of summaries or events pending to be written to disk before one of the 'add' calls blocks
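A brief sketch of tuning these constructor parameters (the values are arbitrary examples, not recommendations):

import tensorflow as tf

# Flush every 10 s; allow up to 100 pending items before add_* calls block
writer = tf.summary.FileWriter('logs', graph=tf.get_default_graph(),
                               flush_secs=10, max_queue=100)
writer.flush()  # force pending events to disk without closing the file
writer.close()  # flush and close the event file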

Other common methods:

  • add_event(event): adds an Event to the event file
  • add_graph(graph, global_step=None): adds a Graph to the event file; most users pass a graph in the constructor instead
  • add_summary(summary, global_step=None): adds a Summary protocol buffer to the event file; be sure to pass in global_step
  • close(): flushes the event file to disk and closes the file
  • flush(): flushes the event file to disk
  • add_meta_graph(meta_graph_def, global_step=None)
  • add_run_metadata(run_metadata, tag, global_step=None)

V. Starting TensorBoard to Display All Log Charts

1. Starting from cmd under Windows

  • Run your program to generate the event files in the specified directory (logs).
  • In the folder containing logs, hold down the Shift key and right-click to open cmd there.
  • In cmd, enter the following command to start TensorBoard: tensorboard --logdir=logs
    Note: the logs directory does not need to be quoted; when logs contains multiple events, a scalar comparison is generated, but graph shows only the latest result.
  • Copy the generated URL (http://localhost:6006 # yours may differ) into a browser to open it.


2. Starting from bash under Ubuntu

  • Run your program (python my_program.py) to generate the event file in the specified directory (logs).
  • In bash, enter the following command to start TensorBoard: tensorboard --logdir=logs --port=8888
    Note: the logs directory does not need to be quoted, and the port number must be opened beforehand (router/firewall).
  • Copy the generated URL (http://ubuntu16:8888 # replace ubuntu16 with the server's external IP address) into a local browser to open it.
VI. Implementing Linear Regression with TF (Visualized with TensorBoard)

The loss comparison across multiple events and the network structure (graph) have already been shown above and are not repeated here. (The figures shown at this point depicted the network's training process and the final fitting effect.)

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import os

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

# Prepare training data; its distribution roughly follows y = 1.2x + 0.0
n_train_samples = 200
x_train = np.linspace(-5, 5, n_train_samples)
y_train = 1.2 * x_train + np.random.uniform(-1.0, 1.0, n_train_samples)  # add a little random noise

# Prepare test data to evaluate how good the model is
n_test_samples = 200  # (exact value lost in the source)
x_test = np.linspace(-5, 5, n_test_samples)
y_test = 1.2 * x_test

# Settings for the parameter-learning algorithm
learning_rate = 0.01
batch_size = 20  # 200 training samples split into 10 batches
summary_dir = 'logs'

print('~~~~~~~~~~ Start designing the computation graph ~~~~~~~~')

# Use placeholders to feed training/validation data into the network;
# shape=None means the shape is determined by the shape of the input tensor
with tf.name_scope('Input'):
    X = tf.placeholder(dtype=tf.float32, shape=None, name='X')
    Y = tf.placeholder(dtype=tf.float32, shape=None, name='Y')

# Decision function (parameter initialization)
with tf.name_scope('Inference'):
    W = tf.Variable(initial_value=tf.truncated_normal(shape=[1]), name='weight')
    b = tf.Variable(initial_value=tf.truncated_normal(shape=[1]), name='bias')
    Y_pred = tf.multiply(X, W) + b

# Loss function (MSE)
with tf.name_scope('Loss'):
    loss = tf.reduce_mean(tf.square(Y_pred - Y), name='loss')
    tf.summary.scalar('loss', loss)

# Parameter-learning algorithm (mini-batch SGD)
with tf.name_scope('Optimization'):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

# Initialize all variables
init = tf.global_variables_initializer()

# Merge all summary record nodes
merge = tf.summary.merge_all()

# Open a session and train
with tf.Session() as sess:
    sess.run(init)
    summary_writer = tf.summary.FileWriter(logdir=summary_dir, graph=sess.graph)
    for i in range(201):
        j = np.random.randint(0, 10)  # 200 training samples in total, 10 batches [0, 9]
        x_batch = x_train[batch_size*j: batch_size*(j+1)]
        y_batch = y_train[batch_size*j: batch_size*(j+1)]
        _, summary, train_loss, w_pred, b_pred = sess.run(
            [optimizer, merge, loss, W, b],
            feed_dict={X: x_batch, Y: y_batch})
        test_loss = sess.run(loss, feed_dict={X: x_test, Y: y_test})
        # Write all logs to file
        summary_writer.add_summary(summary, global_step=i)
        print('step:{}, losses:{}, test_loss:{}, w_pred:{}, b_pred:{}'.format(
            i, train_loss, test_loss, w_pred[0], b_pred[0]))
        if i == 200:  # (condition garbled in the source; plot on the final step)
            # Plot the results
            plt.plot(x_train, y_train, 'bo', label='train data')
            plt.plot(x_test, y_test, 'gx', label='test data')
            plt.plot(x_train, x_train * w_pred + b_pred, 'r', label='predicted data')
            plt.legend()
            plt.show()

    summary_writer.close()

