Learning TensorFlow: Deconvolution


In a deep network architecture, the layers generally fall into a few kinds: convolutional layers, fully connected layers, ReLU layers, pooling layers, and deconvolution layers. In pixel-level prediction and end-to-end learning problems, fully convolutional networks have shown their strength, and one layer is essential to them: the deconvolution layer, which upsamples convolutional feature maps back to the spatial dimensions of the input image. So how is this achieved in TensorFlow? That is what this post is about.

1. Introduction to the deconvolution function

tf.nn.conv2d_transpose(value, filter, output_shape, strides, padding='SAME', name=None)
This is TensorFlow's deconvolution function. value is the feature map from the previous layer; filter is the convolution kernel, with shape [kernel_size, kernel_size, output_channels, input_channels] (note that the output channels come before the input channels, the reverse of tf.nn.conv2d); output_shape gives the size of the output as [batch_size, height, width, channels]; and padding selects the boundary padding scheme ('SAME' or 'VALID').
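For concreteness, here is a minimal, self-contained sketch of one call, using the same pre-1.0 API as the rest of this post. The shapes are chosen to match the deconv1 layer of the benchmark in section 2: a 13x10 single-channel map upsampled by a factor of 10.

import tensorflow as tf

# Feature map from the previous layer: [batch, height, width, in_channels].
value = tf.random_normal([1, 13, 10, 1])
# Kernel layout is [kernel_size, kernel_size, output_channels, input_channels].
kernel = tf.Variable(tf.truncated_normal([11, 11, 1, 1], stddev=1e-1))
# output_shape fixes the upsampled size: [batch, height, width, channels].
deconv = tf.nn.conv2d_transpose(value, kernel,
                                output_shape=[1, 130, 100, 1],
                                strides=[1, 10, 10, 1],
                                padding='SAME')

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    print(sess.run(deconv).shape)  # (1, 130, 100, 1)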

Specifically, output_shape and strides are coupled: given the input size and the desired output size we can determine the strides (positive integers), and conversely, given the input size and the strides we can determine the output size.
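To make the coupling explicit: with padding='SAME', conv2d_transpose accepts an output_shape whose forward convolution at the given strides would reproduce the input shape, that is, ceil(output / stride) == input along each spatial axis; the usual choice, output = input * stride, always satisfies this. The helper below only illustrates that constraint (the name is made up; it is not a TensorFlow API):

def same_shapes_compatible(input_size, output_size, stride):
    # conv2d_transpose(padding='SAME') accepts an output size if the forward
    # conv2d at this stride would map it back to input_size, i.e.
    # ceil(output_size / stride) == input_size.
    return (output_size + stride - 1) // stride == input_size

print(same_shapes_compatible(13, 130, 10))   # True:  ceil(130 / 10) = 13
print(same_shapes_compatible(13, 140, 10))   # False: ceil(140 / 10) = 14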

2. AlexNet with deconvolution layers added

The script below is TensorFlow's alexnet_benchmark.py with a single-channel conv6 and two conv2d_transpose layers (deconv1, deconv2) appended after pool5.

# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Timing benchmark for AlexNet inference.

To run, use:
  bazel run -c opt --config=cuda \
      third_party/tensorflow/models/image/alexnet:alexnet_benchmark

Across 100 steps on batch size = 128.

Forward pass:
Run on Tesla K40c: 145 +/- 1.5 ms / batch
Run on Titan X:     70 +/- 0.1 ms / batch

Forward-backward pass:
Run on Tesla K40c: 480 +/- 48 ms / batch
Run on Titan X:    244 +/- 30 ms / batch
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from datetime import datetime
import math
import time

from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf

FLAGS = tf.app.flags.FLAGS

tf.app.flags.DEFINE_integer('batch_size', 1,
                            """Batch size.""")
tf.app.flags.DEFINE_integer('num_batches', 100,
                            """Number of batches to run.""")
tf.app.flags.DEFINE_integer('image_width', 345,
                            """Image width.""")
tf.app.flags.DEFINE_integer('image_height', 460,
                            """Image height.""")


def print_activations(t):
  print(t.op.name, ' ', t.get_shape().as_list())


def inference(images):
  """Build the AlexNet model.

  Args:
    images: Images Tensor

  Returns:
    pool5: the last Tensor in the convolutional component of AlexNet.
    parameters: a list of Tensors corresponding to the weights and biases
        of the AlexNet model.
  """
  parameters = []
  # conv1
  with tf.name_scope('conv1') as scope:
    kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 64],
                                             dtype=tf.float32, stddev=1e-1),
                         name='weights')
    conv = tf.nn.conv2d(images, kernel, [1, 4, 4, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(bias, name=scope)
    print_activations(conv1)
    parameters += [kernel, biases]

  # lrn1
  # TODO(shlens, jiayq): Add a GPU version of local response normalization.

  # pool1
  pool1 = tf.nn.max_pool(conv1,
                         ksize=[1, 3, 3, 1],
                         strides=[1, 2, 2, 1],
                         padding='VALID',
                         name='pool1')
  print_activations(pool1)

  # conv2
  with tf.name_scope('conv2') as scope:
    kernel = tf.Variable(tf.truncated_normal([5, 5, 64, 192],
                                             dtype=tf.float32, stddev=1e-1),
                         name='weights')
    conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[192], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv2 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
  print_activations(conv2)

  # pool2
  pool2 = tf.nn.max_pool(conv2,
                         ksize=[1, 3, 3, 1],
                         strides=[1, 2, 2, 1],
                         padding='VALID',
                         name='pool2')
  print_activations(pool2)

  # conv3
  with tf.name_scope('conv3') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384],
                                             dtype=tf.float32, stddev=1e-1),
                         name='weights')
    conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv3 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv3)

  # conv4
  with tf.name_scope('conv4') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 384, 256],
                                             dtype=tf.float32, stddev=1e-1),
                         name='weights')
    conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv4 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv4)

  # conv5
  with tf.name_scope('conv5') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256],
                                             dtype=tf.float32, stddev=1e-1),
                         name='weights')
    conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv5 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv5)

  # pool5
  pool5 = tf.nn.max_pool(conv5,
                         ksize=[1, 3, 3, 1],
                         strides=[1, 2, 2, 1],
                         padding='VALID',
                         name='pool5')
  print_activations(pool5)

  # conv6: reduce pool5 to a single-channel prediction map.
  with tf.name_scope('conv6') as scope:
    kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 1],
                                             dtype=tf.float32, stddev=1e-1),
                         name='weights')
    conv = tf.nn.conv2d(pool5, kernel, [1, 1, 1, 1], padding='SAME')
    biases = tf.Variable(tf.constant(0.0, shape=[1], dtype=tf.float32),
                         trainable=True, name='biases')
    bias = tf.nn.bias_add(conv, biases)
    conv6 = tf.nn.relu(bias, name=scope)
    parameters += [kernel, biases]
    print_activations(conv6)

  # deconv1: upsample the 13x10 map by a factor of 10 to 130x100.
  with tf.name_scope('deconv1') as scope:
    wt = tf.Variable(tf.truncated_normal([11, 11, 1, 1]))
    deconv1 = tf.nn.conv2d_transpose(conv6, wt,
                                     [FLAGS.batch_size, 130, 100, 1],
                                     [1, 10, 10, 1], 'SAME')
    print_activations(deconv1)

  # deconv2: upsample again by a factor of 2 to 260x200.
  with tf.name_scope('deconv2') as scope:
    wt = tf.Variable(tf.truncated_normal([11, 11, 1, 1]))
    deconv2 = tf.nn.conv2d_transpose(deconv1, wt,
                                     [FLAGS.batch_size, 260, 200, 1],
                                     [1, 2, 2, 1], 'SAME')
    print_activations(deconv2)

  return deconv2, parameters


def time_tensorflow_run(session, target, info_string):
  """Run the computation to obtain the target tensor and print timing stats.

  Args:
    session: the TensorFlow session to run the computation under.
    target: the target Tensor that is passed to the session's run() function.
    info_string: a string summarizing this run, to be printed with the stats.

  Returns:
    None
  """
  num_steps_burn_in = 10
  total_duration = 0.0
  total_duration_squared = 0.0
  for i in xrange(FLAGS.num_batches + num_steps_burn_in):
    start_time = time.time()
    _ = session.run(target)
    duration = time.time() - start_time
    if i > num_steps_burn_in:
      if not i % 10:
        print('%s: step %d, duration = %.3f' %
              (datetime.now(), i - num_steps_burn_in, duration))
      total_duration += duration
      total_duration_squared += duration * duration
  mn = total_duration / FLAGS.num_batches
  vr = total_duration_squared / FLAGS.num_batches - mn * mn
  sd = math.sqrt(vr)
  print('%s: %s across %d steps, %.3f +/- %.3f sec / batch' %
        (datetime.now(), info_string, FLAGS.num_batches, mn, sd))


def run_benchmark():
  """Run the benchmark on AlexNet."""
  with tf.Graph().as_default():
    # Generate some dummy images.
    # Note that our padding definition is slightly different than cuda-convnet.
    # In order to force the model to start with the same activation sizes,
    # we add 3 to the image_size and employ VALID padding above.
    images = tf.Variable(tf.random_normal([FLAGS.batch_size, 460, 345, 3],
                                          dtype=tf.float32,
                                          stddev=1e-1))

    # Build a Graph that computes the logits predictions from the
    # inference model.
    pool5, parameters = inference(images)

    # Build an initialization operation.
    init = tf.initialize_all_variables()

    # Start running operations on the Graph.
    config = tf.ConfigProto()
    config.gpu_options.allocator_type = 'BFC'
    sess = tf.Session(config=config)
    sess.run(init)

    # Run the forward benchmark.
    time_tensorflow_run(sess, pool5, "Forward")

    # Add a simple objective so we can calculate the backward pass.
    objective = tf.nn.l2_loss(pool5)
    # Compute the gradient with respect to all the parameters.
    grad = tf.gradients(objective, parameters)
    # Run the backward benchmark.
    time_tensorflow_run(sess, grad, "Forward-backward")


def main(_):
  run_benchmark()


if __name__ == '__main__':
  tf.app.run()
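Assuming the script is saved as alexnet_benchmark.py (the file name is mine; the unmodified original lives in the TensorFlow models repository), it can be run directly, with the flags overridable on the command line:

python alexnet_benchmark.py --batch_size=1 --num_batches=100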

3. Operating results
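With batch_size = 1 and a 460 x 345 input, the shapes that print_activations reports can be worked out from the layer arithmetic (derived here, not copied from a run); note that each deconvolution output is exactly its input size multiplied by its stride:

conv1    [1, 115, 87, 64]
pool1    [1, 57, 43, 64]
conv2    [1, 57, 43, 192]
pool2    [1, 28, 21, 192]
conv3    [1, 28, 21, 384]
conv4    [1, 28, 21, 256]
conv5    [1, 28, 21, 256]
pool5    [1, 13, 10, 256]
conv6    [1, 13, 10, 1]
deconv1  [1, 130, 100, 1]
deconv2  [1, 260, 200, 1]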



Reference URLs:

https://www.tensorflow.org/versions/r0.9/api_docs/python/nn.html#convolution

http://cvlab.postech.ac.kr/research/deconvnet/
