TensorFlow convolution implementation principle + handwritten Python code to implement convolution


Generating a new single-channel feature map from a single-channel image is easy to understand, but how a convolution over multiple input channels produces multiple output channels is a bit more abstract. This article describes convolution in a plain, accessible way, with figures to help the principle sink in quickly. Finally, we hand-write Python code that implements the convolution process, so that TensorFlow's convolution is no longer a black box to us.

Attention:

This article experiments and explains only with batch_size=1, padding='SAME', and stride=[1,1,1,1]; with other parameter settings the principle is the same.
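
As a quick sketch of what those settings mean in a tf.nn.conv2d call (TF 1.x style, as used throughout this article; the placeholder shapes are illustrative):

import tensorflow as tf

# illustrative shapes: batch_size=1, 5*5 input, 2 input / 1 output channels
input = tf.placeholder(tf.float32, [1, 5, 5, 2])     # [n,h,w,in_c]
weights = tf.placeholder(tf.float32, [3, 3, 2, 1])   # [k,k,in_c,out_c]

# with strides=[1,1,1,1] and padding='SAME', the output keeps the input's spatial size
conv = tf.nn.conv2d(input, weights, strides=[1, 1, 1, 1], padding='SAME')
print(conv.get_shape())                              # (1, 5, 5, 1)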

1 TensorFlow convolution implementation principle

First, let's look at the convolution implementation principle: for an input map with in_c channels, if the output after convolution is to have out_c channels, then a total of in_c * out_c convolution kernels take part in the computation. Refer to the following figure:

As shown above, for an input of [h:5, w:5, c:4], each output channel requires 4 convolution kernels. The output in the figure has 3 channels, so 3*4 = 12 kernels are required in total. The value at each point of a single output channel is obtained by convolving its group of 4 kernels with the 4 input channels and summing the results.
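
A minimal numpy sketch of that per-point computation (the shapes mirror the figure; the array contents are random placeholders, not the article's data):

import numpy as np

in_c, out_c, k = 4, 3, 3                      # shapes from the figure
patch   = np.random.rand(k, k, in_c)          # the k*k*in_c region around one output point
kernels = np.random.rand(k, k, in_c, out_c)   # 3*4 = 12 two-dimensional kernels in total

# the value of output channel o at that point: elementwise-multiply the patch
# by its group of in_c kernels, then sum everything
o = 0
value = np.sum(patch * kernels[:, :, :, o])
print(value)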

Next, we take as an example an input with 2 channels of width 5, a 3*3 convolution kernel, and an output with 1 channel of width 5.

The 2-channel, 5*5 input is defined as follows:

# input, shape=[c,h,w]
input_data=[
             [[1,0,1,2,1],
              [0,2,1,0,1],
              [1,1,0,2,0],
              [2,2,1,1,0],
              [2,0,1,2,0]],

             [[2,0,2,1,1],
              [0,1,0,0,2],
              [1,0,0,2,1],
              [1,1,2,1,0],
              [1,0,1,1,1]],
           ]

For an output map of 1 channel, by the calculation rule above we need 2*1 = 2 convolution kernels, defined as follows:

# kernels, shape=[in_c,k,k]=[2,3,3]
weights_data=[
               [[ 1, 0, 1],
                [-1, 1, 0],
                [ 0,-1, 0]],

               [[-1, 0, 1],
                [ 0, 0, 1],
                [ 1, 1, 1]]
             ]

In the calculation that follows, the data defined above is used in the way described in the following figure.

Since TensorFlow defines tensors with shape [n,h,w,c], here we set n, the batch size, to 1. Another issue is that the input we just defined is [c,h,w], so we need to convert it from [c,h,w] to [h,w,c]. The conversion code is as follows; the comments explain it in detail, so it is not discussed further here.

def get_shape(tensor):
    [s1, s2, s3] = tensor.get_shape()
    s1 = int(s1)
    s2 = int(s2)
    s3 = int(s3)
    return s1, s2, s3

def CHW2HWC(chw_tensor):
    [c, h, w] = get_shape(chw_tensor)
    cols = []
    for i in range(c):
        # reshape each channel's 2-D array into a [h*w,1] column
        line = tf.reshape(chw_tensor[i], [h*w, 1])
        cols.append(line)

    # concatenate horizontally: all the columns are lined up side by side
    input = tf.concat(cols, 1)  # [h*w,c]
    # [h*w,c] --> [h,w,c]
    input = tf.reshape(input, [h, w, c])
    return input
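
For example, applying it to the input defined above (a hypothetical quick check; TF 1.x static shapes make a session unnecessary here):

x = tf.constant(input_data, tf.float32)   # [2, 5, 5], i.e. [c,h,w]
x = CHW2HWC(x)
print(x.get_shape())                      # (5, 5, 2), i.e. [h,w,c]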

Similarly, the kernel format TensorFlow uses is [k,k,in_c,out_c], while we defined our kernels as [in_c,k,k], so we need to convert [in_c,k,k] to [k,k,in_c]. To keep the workload down we specified an output of 1 channel, i.e. out_c=1, so here we can simply call CHW2HWC on weights_data as well, and then expand it along the 3rd dimension.
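
Shown in isolation, this is what the corresponding lines of main() in the complete code below do to the kernels:

w = tf.constant(weights_data, tf.float32)   # [in_c,k,k] = [2, 3, 3]
w = CHW2HWC(w)                              # [k,k,in_c] = [3, 3, 2]
w = tf.expand_dims(w, 3)                    # [k,k,in_c,out_c] = [3, 3, 2, 1]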

Next, here is the complete code:

import tensorflow as tf
import numpy as np

# input, shape=[c,h,w]
input_data=[
             [[1,0,1,2,1],
              [0,2,1,0,1],
              [1,1,0,2,0],
              [2,2,1,1,0],
              [2,0,1,2,0]],

             [[2,0,2,1,1],
              [0,1,0,0,2],
              [1,0,0,2,1],
              [1,1,2,1,0],
              [1,0,1,1,1]],
           ]

# kernels, shape=[in_c,k,k]=[2,3,3]
weights_data=[
               [[ 1, 0, 1],
                [-1, 1, 0],
                [ 0,-1, 0]],

               [[-1, 0, 1],
                [ 0, 0, 1],
                [ 1, 1, 1]]
             ]

def get_shape(tensor):
    [s1, s2, s3] = tensor.get_shape()
    s1 = int(s1)
    s2 = int(s2)
    s3 = int(s3)
    return s1, s2, s3

def CHW2HWC(chw_tensor):
    [c, h, w] = get_shape(chw_tensor)
    cols = []
    for i in range(c):
        # reshape each channel's 2-D array into a [h*w,1] column
        line = tf.reshape(chw_tensor[i], [h*w, 1])
        cols.append(line)

    # concatenate horizontally: all the columns are lined up side by side
    input = tf.concat(cols, 1)  # [h*w,c]
    # [h*w,c] --> [h,w,c]
    input = tf.reshape(input, [h, w, c])
    return input

def HWC2CHW(hwc_tensor):
    [h, w, c] = get_shape(hwc_tensor)
    cs = []
    for i in range(c):
        # [h,w] --> [1,h,w]
        channel = tf.expand_dims(hwc_tensor[:, :, i], 0)
        cs.append(channel)

    # [1,h,w] ... [1,h,w] --> [c,h,w]
    input = tf.concat(cs, 0)  # [c,h,w]
    return input

def tf_conv2d(input, weights):
    conv = tf.nn.conv2d(input, weights, strides=[1, 1, 1, 1], padding='SAME')
    return conv

def main():
    const_input = tf.constant(input_data, tf.float32)
    const_weights = tf.constant(weights_data, tf.float32)

    input = tf.Variable(const_input, name="input")
    # [2,5,5] --> [5,5,2]
    input = CHW2HWC(input)
    # [5,5,2] --> [1,5,5,2]
    input = tf.expand_dims(input, 0)

    weights = tf.Variable(const_weights, name="weights")
    # [2,3,3] --> [3,3,2]
    weights = CHW2HWC(weights)
    # [3,3,2] --> [3,3,2,1]
    weights = tf.expand_dims(weights, 3)

    # [b,h,w,c]
    conv = tf_conv2d(input, weights)
    rs = HWC2CHW(conv[0])

    init = tf.global_variables_initializer()
    sess = tf.Session()
    sess.run(init)
    conv_val = sess.run(rs)
    print(conv_val[0])

if __name__ == '__main__':
    main()

A few things in the code above deserve mention. Because the output has 1 channel, the kernel data can be converted by calling CHW2HWC directly; if the output channel count were not 1, this shortcut would not work. After converting the input from CHW to HWC, remember to expand it along dimension 0, because the convolution expects its input as [n,h,w,c]. And to make the result easy to inspect, remember to convert the output shape from HWC back to CHW.
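
For reference, if the kernels were instead defined with an explicit output dimension as [out_c,in_c,k,k], a general conversion (a numpy sketch, not from the original code) would be a single transpose:

import numpy as np

w = np.zeros([3, 2, 3, 3], np.float32)   # hypothetical [out_c,in_c,k,k]
w = np.transpose(w, [2, 3, 1, 0])        # [k,k,in_c,out_c], as tf.nn.conv2d expects
print(w.shape)                           # (3, 3, 2, 3)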

Executing the above code produces the following result:

[[2.  0.  2.  4.  0.]
 [1.  4.  4.  3.  5.]
 [4.  3.  5.  9. -1.]
 [3.  4.  6.  2.  1.]
 [5.  3.  5.  1. -2.]]

How are these values computed? To make the details clear, I made a GIF especially for this; if you still do not understand the convolution calculation process after watching it, you are welcome to come and hit me ....
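
In case the animation does not come through, here is the top-left output value worked out by hand, using the data defined above (a minimal numpy sketch; the zero rows and columns come from the 'SAME' padding):

import numpy as np

# zero-padded 3*3 regions around position (0,0) of each input channel
roi_c0 = np.array([[0, 0, 0],
                   [0, 1, 0],
                   [0, 0, 2]], np.float32)   # from channel 0 of input_data
roi_c1 = np.array([[0, 0, 0],
                   [0, 2, 0],
                   [0, 0, 1]], np.float32)   # from channel 1 of input_data

k0 = np.array([[1, 0, 1], [-1, 1, 0], [0, -1, 0]], np.float32)
k1 = np.array([[-1, 0, 1], [0, 0, 1], [1, 1, 1]], np.float32)

# each channel is multiplied by its own kernel and summed, then the two
# channel results are added: channel 0 contributes 1, channel 1 contributes 1
print(np.sum(roi_c0 * k0) + np.sum(roi_c1 * k1))   # 2.0, the top-left output value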

2 Handwritten Python code to implement the convolution

When implementing the convolution yourself, there is no need to convert the data defined as [c,h,w] to [h,w,c].

import numpy as np

input_data=[
             [[1,0,1,2,1],
              [0,2,1,0,1],
              [1,1,0,2,0],
              [2,2,1,1,0],
              [2,0,1,2,0]],

             [[2,0,2,1,1],
              [0,1,0,0,2],
              [1,0,0,2,1],
              [1,1,2,1,0],
              [1,0,1,1,1]]
           ]

weights_data=[
               [[ 1, 0, 1],
                [-1, 1, 0],
                [ 0,-1, 0]],

               [[-1, 0, 1],
                [ 0, 0, 1],
                [ 1, 1, 1]]
             ]

# fm: [h,w]
# kernel: [k,k]
# return rs: [h,w]
def compute_conv(fm, kernel):
    [h, w] = fm.shape
    [k, _] = kernel.shape
    r = int(k/2)
    # define the map after padding the border with zeros
    # (the padding width of 1 matches the 3*3 kernel used here)
    padding_fm = np.zeros([h+2, w+2], np.float32)
    # holds the computed result
    rs = np.zeros([h, w], np.float32)
    # copy the input into the designated region, i.e. everywhere except the border
    padding_fm[1:h+1, 1:w+1] = fm
    # visit the region centered on each point
    for i in range(1, h+1):
        for j in range(1, w+1):
            # take the k*k region centered on the current point
            roi = padding_fm[i-r:i+r+1, j-r:j+r+1]
            # convolution at the current point: elementwise-multiply the k*k
            # region by the kernel and sum
            rs[i-1][j-1] = np.sum(roi * kernel)
    return rs

def my_conv2d(input, weights):
    [c, h, w] = input.shape
    [_, k, _] = weights.shape
    outputs = np.zeros([h, w], np.float32)
    # convolve each feature map with its kernel and accumulate
    for i in range(c):
        # feature map ==> [h,w]
        f_map = input[i]
        # kernel ==> [k,k]
        w = weights[i]
        rs = compute_conv(f_map, w)
        outputs = outputs + rs
    return outputs

def main():
    # shape=[c,h,w]
    input = np.asarray(input_data, np.float32)
    # shape=[in_c,k,k]
    weights = np.asarray(weights_data, np.float32)
    rs = my_conv2d(input, weights)
    print(rs)

if __name__ == '__main__':
    main()

The code needs little extra explanation; just read the comments. Running it gives the following result:

[[2.  0.  2.  4.  0.]
 [1.  4.  4.  3.  5.]
 [4.  3.  5.  9. -1.]
 [3.  4.  6.  2.  1.]
 [5.  3.  5.  1. -2.]]

Comparing the two, the result is identical to the convolution computed with TensorFlow.
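
To check the agreement programmatically rather than by eye, a one-line comparison would do (a hypothetical snippet, assuming conv_val[0] from section 1 and rs from section 2 are both in scope as numpy arrays):

assert np.allclose(conv_val[0], rs), "TensorFlow and handwritten results differ"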

3 Summary

In this article, we studied the implementation principle of TensorFlow's convolution and implemented a convolution with 1 output channel in Python; the number of output channels does not actually affect how we learn the convolution principle. If there is a chance later, we plan to implement a more robust, complete convolution.
