Building a Recurrent Neural Network in Python


Andrew Ng's Deep Learning course 5, programming assignment 1

Import modules

import numpy as np
from rnn_utils import *
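
rnn_utils is the helper module shipped with the assignment; it is assumed here to provide the softmax used by rnn_cell_forward below. If that helper is not at hand, a minimal stand-in (a numerically stable softmax over the row axis, matching the (n_y, m) one-column-per-example layout used in this assignment) could look like this:

def softmax(x):
    # Subtract the per-column max for numerical stability, then
    # normalize so each column (one example) sums to 1.
    e_x = np.exp(x - np.max(x, axis=0, keepdims=True))
    return e_x / np.sum(e_x, axis=0, keepdims=True)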

Forward propagation for a single RNN cell
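
At each time step the cell takes the current input and the previous hidden state and produces the next hidden state and a prediction. This is the "formula given above" that the docstring below refers to:

$$a^{\langle t \rangle} = \tanh\left(W_{ax}\, x^{\langle t \rangle} + W_{aa}\, a^{\langle t-1 \rangle} + b_a\right), \qquad \hat{y}^{\langle t \rangle} = \mathrm{softmax}\left(W_{ya}\, a^{\langle t \rangle} + b_y\right)$$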


# GRADED FUNCTION: rnn_cell_forward

def rnn_cell_forward(xt, a_prev, parameters):
    """
    Implements a single forward step of the RNN-cell as described in Figure (2).

    Arguments:
    xt -- your input data at timestep "t", numpy array of shape (n_x, m)
    a_prev -- hidden state at timestep "t-1", numpy array of shape (n_a, m)
    parameters -- python dictionary containing:
                    Wax -- weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                    Waa -- weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                    Wya -- weight matrix relating the hidden state to the output, numpy array of shape (n_y, n_a)
                    ba -- bias, numpy array of shape (n_a, 1)
                    by -- bias relating the hidden state to the output, numpy array of shape (n_y, 1)

    Returns:
    a_next -- next hidden state, of shape (n_a, m)
    yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
    cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
    """

    # Retrieve parameters from "parameters"
    Wax = parameters["Wax"]
    Waa = parameters["Waa"]
    Wya = parameters["Wya"]
    ba = parameters["ba"]
    by = parameters["by"]

    ### START CODE HERE ### (≈2 lines)
    # Compute the next activation state using the formula given above
    a_next = np.tanh(np.dot(Wax, xt) + np.dot(Waa, a_prev) + ba)
    # Compute the output of the current cell using the formula given above
    yt_pred = softmax(np.dot(Wya, a_next) + by)
    ### END CODE HERE ###

    # Store values you need for backward propagation in cache
    cache = (a_next, a_prev, xt, parameters)

    return a_next, yt_pred, cache

Test the RNN cell forward propagation

np.random.seed(1)
xt = np.random.randn(3, 10)
a_prev = np.random.randn(5, 10)
Waa = np.random.randn(5, 5)
Wax = np.random.randn(5, 3)
Wya = np.random.randn(2, 5)
ba = np.random.randn(5, 1)
by = np.random.randn(2, 1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}

a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] =", a_next[4])
print("a_next.shape =", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape =", yt_pred.shape)

Forward propagation over all time steps
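
The full forward pass simply applies rnn_cell_forward once per time step: starting from a⟨0⟩ = a0, the a_next returned at step t is fed back in as a_prev at step t+1, while the hidden states and predictions of every step are collected into the 3-D arrays a and y_pred of shapes (n_a, m, T_x) and (n_y, m, T_x).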


# GRADED FUNCTION: rnn_forward

def rnn_forward(x, a0, parameters):
    """
    Implement the forward propagation of the recurrent neural network described in Figure (3).

    Arguments:
    x -- input data for every time-step, of shape (n_x, m, T_x)
    a0 -- initial hidden state, of shape (n_a, m)
    parameters -- python dictionary containing:
                    Waa -- weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
                    Wax -- weight matrix multiplying the input, numpy array of shape (n_a, n_x)
                    Wya -- weight matrix relating the hidden state to the output, numpy array of shape (n_y, n_a)
                    ba -- bias, numpy array of shape (n_a, 1)
                    by -- bias relating the hidden state to the output, numpy array of shape (n_y, 1)

    Returns:
    a -- hidden states for every time-step, numpy array of shape (n_a, m, T_x)
    y_pred -- predictions for every time-step, numpy array of shape (n_y, m, T_x)
    caches -- tuple of values needed for the backward pass, contains (list of caches, x)
    """

    # Initialize "caches", which will contain the list of all caches
    caches = []

    # Retrieve dimensions from the shapes of x and parameters["Wya"]
    n_x, m, T_x = x.shape
    n_y, n_a = parameters["Wya"].shape

    ### START CODE HERE ###
    # Initialize "a" and "y_pred" with zeros (≈2 lines)
    a = np.zeros((n_a, m, T_x))
    y_pred = np.zeros((n_y, m, T_x))
    # Initialize a_next (≈1 line)
    a_next = a0

    # Loop over all time-steps
    for t in range(T_x):
        # Update the next hidden state, compute the prediction, get the cache (≈1 line)
        a_next, yt_pred, cache = rnn_cell_forward(x[:, :, t], a_next, parameters)
        # Save the value of the new "next" hidden state in a (≈1 line)
        a[:, :, t] = a_next
        # Save the value of the prediction in y_pred (≈1 line)
        y_pred[:, :, t] = yt_pred
        # Append "cache" to "caches" (≈1 line)
        caches.append(cache)
    ### END CODE HERE ###

    # Store values needed for backward propagation in cache
    caches = (caches, x)

    return a, y_pred, caches

Test the forward propagation results

np.random.seed(1)
x = np.random.randn(3, 10, 4)
a0 = np.random.randn(5, 10)
Waa = np.random.randn(5, 5)
Wax = np.random.randn(5, 3)
Wya = np.random.randn(2, 5)
ba = np.random.randn(5, 1)
by = np.random.randn(2, 1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}

a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] =", a[4][1])
print("a.shape =", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape =", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) =", len(caches))
To be continued.

