DeepLearning.ai (Andrew Ng) - Convolutional Neural Networks - Week 1 Assignment 01 - Convolutional Neural Networks (Python)

Convolutional Neural Networks: Step by Step

Welcome to Course 4's first assignment! In this assignment, you'll implement convolutional (CONV) and pooling (POOL) layers in NumPy, including both forward propagation and (optionally) backward propagation.

Notation:

n_H, n_W and n_C denote the height, width and number of channels of a given layer, and n_H_prev, n_W_prev and n_C_prev those of the previous layer; these names appear throughout the code below.

We assume that you are already familiar with numpy and/or have completed the previous courses. Let's get started!

1 - Packages

Let's first import all the packages you'll need during this assignment. NumPy is the fundamental package for scientific computing with Python. Matplotlib is a library for plotting graphs in Python. np.random.seed(1) is used to keep all the random function calls consistent; it will help us grade your work.

import numpy as np
import h5py
import matplotlib.pyplot as plt

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)

2 - Outline of the Assignment

You'll be implementing the building blocks of a convolutional neural network! Each function you'll implement has detailed instructions that will walk you through the steps needed:

  • Convolution functions, including: zero padding, convolve window, convolution forward, convolution backward (optional)
  • Pooling functions, including: pooling forward, create mask, distribute value, pooling backward (optional)

This notebook will ask you to implement these functions from scratch in numpy. In the next notebook, you'll use the TensorFlow equivalents of these functions to build the following model:

Note that for every forward function there is a corresponding backward equivalent. Hence, at every step of your forward module you'll store some parameters in a cache. These parameters are used to compute gradients during backpropagation.
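To make the cache idea concrete, here is a minimal sketch (not part of the graded code; the function names and the scalar operation are illustrative) of how a forward/backward pair shares values through a cache:

def forward(x, w):
    z = w * x
    cache = (x, w)   # store the inputs; the backward pass needs them for gradients
    return z, cache

def backward(dz, cache):
    x, w = cache     # retrieve the values stored during the forward pass
    dx = dz * w      # gradient of z = w*x with respect to x
    dw = dz * x      # gradient of z = w*x with respect to w
    return dx, dw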

3 - Convolutional Neural Networks

Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in deep learning. A convolution layer transforms an input volume into an output volume of a different size, as shown below.

In this part, you'll build every step of the convolution layer. You'll implement two helper functions: one for zero padding and the other for computing the convolution function itself.

3.1 - Zero-Padding

Zero-padding adds zeros around the border of an image:

Figure 1: Zero-padding. Image (3 channels, RGB) with a padding of 2.

The main benefits of padding are the following:

It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer.

It helps us keep the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.

Exercise: Implement the following function, which pads all the images of a batch of examples X with zeros. Use np.pad. If you want to pad the array "a" of shape (5,5,5,5,5) with pad = 1 for the 2nd dimension, pad = 3 for the 4th dimension and pad = 0 for the rest, you would do:

a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values=(..., ...))
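As a quick, self-contained sanity check of that call (illustrative only, not part of the graded code), padding a (5,5,5,5,5) array this way grows only the 2nd and 4th dimensions:

import numpy as np

a = np.random.randn(5, 5, 5, 5, 5)
# pad = 1 on the 2nd dimension, pad = 3 on the 4th, 0 elsewhere
a_pad = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values=(0, 0))
print(a_pad.shape)  # (5, 7, 5, 11, 5)
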
# GRADED FUNCTION: zero_pad

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width
    of an image, as illustrated in Figure 1.

    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions

    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """

    ### START CODE HERE ### (≈ 1 line)
    X_pad = np.pad(X, ((0,0), (pad,pad), (pad,pad), (0,0)), 'constant', constant_values=(0,0))
    ### END CODE HERE ###

    return X_pad


np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print("x.shape =", x.shape)
print("x_pad.shape =", x_pad.shape)
print("x[1,1] =", x[1,1])
print("x_pad[1,1] =", x_pad[1,1])

fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])

x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1,1] = [[ 0.90085595 -0.68372786]
 [-0.12289023 -0.93576943]
 [-0.26788808  0.53035547]]
x_pad[1,1] = [[0. 0.]
 [0. 0.]
 [0. 0.]
 [0. 0.]
 [0. 0.]
 [0. 0.]
 [0. 0.]]
Out[3]:
<matplotlib.image.AxesImage at 0x7f8fbf27f160>

Expected Output:

x.shape: (4, 3, 3, 2)
x_pad.shape: (4, 7, 7, 2)
x[1,1]: [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]]
x_pad[1,1]: [[0. 0.] [0. 0.] [0. 0.] [0. 0.] [0. 0.] [0. 0.] [0. 0.]]

3.2 - Single Step of Convolution

In this section, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which:

  • takes an input volume
  • applies a filter at every position of the input
  • outputs another volume (usually of different size)

Figure 2: Convolution operation, with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide).

In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this exercise, you'll implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output.

Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation.
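For instance, here is a small, made-up numerical example of a single convolution step (the values are illustrative only):

import numpy as np

a_slice = np.array([[1., 0., 2.],
                    [3., 1., 0.],
                    [0., 2., 1.]])   # one 3x3 slice of the input (single channel)
filt = np.array([[1., 0., -1.],
                 [1., 0., -1.],
                 [1., 0., -1.]])     # a 3x3 filter
bias = 0.5

# element-wise product, summed, plus the bias -> one real-valued output
z = np.sum(a_slice * filt) + bias
print(z)  # (1 + 3 + 0) - (2 + 0 + 1) + 0.5 = 1.5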

Exercise: Implement conv_single_step(). Hint: np.sum().

# GRADED FUNCTION: conv_single_step

def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output
    activation of the previous layer.

    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- bias parameters contained in a window - matrix of shape (1, 1, 1)

    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice_prev and W. Do not add the bias yet.
    s = W * a_slice_prev
    # Sum over all entries of the volume s.
    Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)
    ### END CODE HERE ###

    return Z


np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)

Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)

Z = -6.99908945068

Expected Output:

Z: -6.99908945068
3.3 - Convolutional Neural Networks - Forward Pass

In the "Forward pass", you'll take many filters and convolve them on the input. Each ' convolution ' gives a 2D matrix output. You'll then stack outputs to get a 3D volume:

Exercise: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally, you also have access to the hyperparameters dictionary, which contains the stride and the padding.

Hint: To select a 2x2 slice at the upper left corner of a matrix a_prev (shape (5,5,3)), you would do:

a_slice_prev = a_prev[0:2, 0:2, :]
This will be useful when you define a_slice_prev below, using the start/end indexes you will define. To define a_slice you will need to first define its corners vert_start, vert_end, horiz_start and horiz_end. This figure may be helpful for you to find out how each of the corners can be defined using h, w, f and s in the code below.

Figure 3: Definition of a slice using vertical and horizontal start/end (with a 2x2 filter). This figure shows only a single channel.
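To see how the corners work numerically (an illustrative example with a 2x2 filter and stride 2, not part of the graded code), the output position (h, w) maps back to the input like this:

f, stride = 2, 2
h, w = 1, 1                  # an output position
vert_start = h * stride     # 2
vert_end = h * stride + f   # 4
horiz_start = w * stride    # 2
horiz_end = w * stride + f  # 4
# the slice is a_prev_pad[2:4, 2:4, :], a 2x2 window over all channels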

Reminder: The formulas relating the output shape of the convolution to the input shape are:

n_H = floor((n_H_prev - f + 2 × pad) / stride) + 1
n_W = floor((n_W_prev - f + 2 × pad) / stride) + 1
n_C = number of filters used in the convolution

For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
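As a quick arithmetic check of these formulas using the values from the test cell below (n_H_prev = 4, f = 2, pad = 2, stride = 2):

n_H_prev, f, pad, stride = 4, 2, 2, 2
n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
print(n_H)  # int((4 - 2 + 4) / 2) + 1 = 3 + 1 = 4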

# GRADED FUNCTION: conv_forward

def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function

    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"

    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """

    ### START CODE HERE ###
    # Retrieve dimensions from A_prev's shape (≈ 1 line)
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve dimensions from W's shape (≈ 1 line)
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve information from "hparameters" (≈ 2 lines)
    stride = hparameters["stride"]
    pad = hparameters["pad"]

    # Compute the dimensions of the CONV output volume using the formula given above.
    # Hint: use int() to floor. (≈ 2 lines)
    n_H = int((n_H_prev + 2*pad - f) / stride) + 1
    n_W = int((n_W_prev + 2*pad - f) / stride) + 1

    # Initialize the output volume Z with zeros. (≈ 1 line)
    Z = np.zeros((m, n_H, n_W, n_C))

    # Create A_prev_pad by padding A_prev
    A_prev_pad = zero_pad(A_prev, pad)

    for i in range(m):                          # loop over the batch of training examples
        a_prev_pad = A_prev_pad[i, :, :, :]     # select ith training example's padded activation
        for h in range(n_H):                    # loop over vertical axis of the output volume
            for w in range(n_W):                # loop over horizontal axis of the output volume
                for c in range(n_C):            # loop over channels (= #filters) of the output volume

                    # Find the corners of the current "slice" (≈ 4 lines)
                    vert_start = h * stride
                    vert_end = h * stride + f
                    horiz_start = w * stride
                    horiz_end = w * stride + f

                    # Use the corners to define the (3D) slice of a_prev_pad (see Hint above the cell). (≈ 1 line)
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]

                    # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈ 1 line)
                    Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:, :, :, c], b[:, :, :, c])

    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))

    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)

    return Z, cache

np.random.seed(1)
A_prev = np.random.randn(10, 4, 4, 3)
W = np.random.randn(2, 2, 3, 8)
b = np.random.randn(1, 1, 1, 8)
hparameters = {"pad": 2,
               "stride": 2}

Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
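The original cell is cut off at this point. A minimal check you could add yourself (not part of the original notebook) is that the output shape matches the formula above: m = 10, n_H = n_W = int((4 - 2 + 4)/2) + 1 = 4, and n_C = 8 filters:

print(Z.shape)  # expected: (10, 4, 4, 8)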