"Deep learning--andrew Ng" first lesson second week programming homework __ Programming


I have recently been taking the "Deep Learning" micro-specialization on NetEase Cloud Classroom, and I am writing down the programming assignments after each class.

Deep Learning - Andrew Ng

Compare this with the logistic regression I wrote in plain Python earlier.

deeplearning assignment: Logistic Regression with a Neural Network Mindset

Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a neural network mindset, and so will also hone your intuitions about deep learning.

Instructions:
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so. (A short sketch of why vectorized code is preferred follows below.)
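To illustrate why the instructions discourage explicit loops, here is a minimal sketch (not part of the assignment; the arrays are made up for the example) comparing a Python loop with the vectorized NumPy call used throughout the code below:

import numpy as np

w = np.random.rand(10000)
x = np.random.rand(10000)

# Explicit loop: interpreted Python, slow for large arrays
z_loop = 0.0
for i in range(w.shape[0]):
    z_loop += w[i] * x[i]

# Vectorized: a single call into optimized native code
z_vec = np.dot(w, x)

assert np.isclose(z_loop, z_vec)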

You will learn to:
- Build the general architecture of a learning algorithm, including:
  - Initializing parameters
  - Calculating the cost function and its gradient
  - Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
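Before the program, here is a brief recap (my own summary, not part of the original post) of the standard logistic regression formulas the graded functions implement, for m training examples stacked as columns of X:

\[ A = \sigma(w^T X + b), \qquad \sigma(z) = \frac{1}{1 + e^{-z}} \]
\[ J = -\frac{1}{m} \sum_{i=1}^{m} \left[ y^{(i)} \log a^{(i)} + (1 - y^{(i)}) \log(1 - a^{(i)}) \right] \]
\[ \frac{\partial J}{\partial w} = \frac{1}{m} X (A - Y)^T, \qquad \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} (a^{(i)} - y^{(i)}) \]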

Program:

import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset

# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()

# Data preprocessing
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]

# Flatten each image into a column vector, then standardize to [0, 1]
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], num_px * num_px * 3).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], num_px * num_px * 3).T
train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.


# GRADED FUNCTION: sigmoid
def sigmoid(z):
    """
    Compute the sigmoid of z.

    Arguments:
    z -- a scalar or numpy array of any size

    Returns:
    s -- sigmoid(z)
    """
    s = 1 / (1 + np.exp(-z))
    return s


# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
    """
    Create a vector of zeros of shape (dim, 1) for w and initialize b to 0.

    Argument:
    dim -- size of the w vector we want (or number of parameters in this case)

    Returns:
    w -- initialized vector of shape (dim, 1)
    b -- initialized scalar (corresponds to the bias)
    """
    w = np.zeros((dim, 1))
    b = 0

    assert (w.shape == (dim, 1))
    assert (isinstance(b, float) or isinstance(b, int))
    return w, b


# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradient for the propagation explained above.

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

    Returns:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b

    Tips: write your code step by step for the propagation. Use np.log() and np.dot().
    """
    m = X.shape[1]

    # Forward propagation (from X to cost)
    z = np.dot(w.T, X) + b
    A = sigmoid(z)  # compute activation
    cost = -1 / m * np.sum(np.dot(Y, np.log(A).T) + np.dot(1 - Y, np.log(1 - A).T))  # compute cost

    # Backward propagation (to find the gradients)
    dw = 1 / m * np.dot(X, (A - Y).T)
    db = 1 / m * np.sum(A - Y)

    assert (dw.shape == w.shape)
    assert (db.dtype == float)
    cost = np.squeeze(cost)
    assert (cost.shape == ())

    grads = {"dw": dw, "db": db}
    return grads, cost


# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost=False):
    """
    Optimize w and b by running a gradient descent algorithm.

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 iterations

    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of the costs computed during the optimization; this will be used to plot the learning curve

    Tips: you basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using the gradient descent rule for w and b.
    """
    costs = []
    for i in range(num_iterations):
        # Cost and gradient calculation
        grads, cost = propagate(w, b, X, Y)

        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]

        # Update rule
        w = w - learning_rate * dw
        b = b - learning_rate * db

        # Record the costs every 100 iterations
        if i % 100 == 0:
            costs.append(cost)

        # Print the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print("Cost after iteration %i: %f" % (i, cost))

    params = {"w": w, "b": b}
    grads = {"dw": dw, "db": db}
    return params, grads, costs


# GRADED FUNCTION: predict
def predict(w, b, X):
    """
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b).

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)

    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    """
    m = X.shape[1]
    Y_prediction = np.zeros((1, m))
    w = w.reshape(X.shape[0], 1)

    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    A = sigmoid(np.dot(w.T, X) + b)

    for i in range(A.shape[1]):
        # Convert probabilities A[0, i] to actual predictions p[0, i]
        if A[0, i] > 0.5:
            Y_prediction[0, i] = 1
        else:
            Y_prediction[0, i] = 0

    assert (Y_prediction.shape == (1, m))
    return Y_prediction


# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    """
    Build the logistic regression model by calling the functions implemented above.

    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- set to True to print the cost every 100 iterations

    Returns:
    d -- dictionary containing information about the model
    """
    # Initialize parameters with zeros
    w, b = initialize_with_zeros(X_train.shape[0])

    # Gradient descent
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)

    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]

    # Predict test/train set examples
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)

    # Print train/test errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train": Y_prediction_train,
         "w": w,
         "b": b,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}
    return d


# Test the model
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations=2000, learning_rate=0.005, print_cost=True)

# Plot the learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate = " + str(d["learning_rate"]))
plt.show()

# Compare several learning rates
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
    print("learning rate is: " + str(i))
    models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y,
                           num_iterations=1500, learning_rate=i, print_cost=False)
    print('\n' + "-------------------------------------------------------" + '\n')

for i in learning_rates:
    plt.plot(np.squeeze(models[str(i)]["costs"]), label=str(models[str(i)]["learning_rate"]))

plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
Results: prediction accuracy on the training set and the test set

Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 68%. That is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!

Also, you can see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures from the test set.
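The snippet being referred to did not survive the copy; a minimal sketch of what it typically looks like, assuming the variables defined in the program above (test_set_x, test_set_y, num_px, classes, and the dictionary d), would be:

index = 1  # change this index to look at other test pictures
plt.imshow(test_set_x[:, index].reshape((num_px, num_px, 3)))
plt.show()
print("y = " + str(test_set_y[0, index])
      + ", you predicted that it is a \"" + classes[int(d["Y_prediction_test"][0, index])].decode("utf-8") + "\" picture.")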

Gradient descent gradually optimizing the loss function

Interpretation:
You can see the cost decreasing. It shows that the parameters are being learned. However, you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. (A one-line sketch of rerunning with more iterations follows.)
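For example, you could simply call model() again with a larger num_iterations (a sketch; 5000 is an arbitrary value chosen for illustration):

# Train longer: training accuracy typically rises, while test accuracy may drop (overfitting)
d_more = model(train_set_x, train_set_y, test_set_x, test_set_y,
               num_iterations=5000, learning_rate=0.005, print_cost=True)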

Comparison of the results with different learning rates

Interpretation:
- Different learning rates give different costs and thus different prediction results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (although in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost does not necessarily mean a better model. You have to check whether the model is overfitting; that happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
  - Choose the learning rate that best minimizes the cost function.
  - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos; a small sketch of one such technique follows below.)
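One such technique is L2 regularization. It is not part of this assignment, but a minimal sketch of how propagate() could be extended with a weight penalty (the lambd parameter below is a hypothetical hyperparameter, not something from the course code) might look like this:

def propagate_l2(w, b, X, Y, lambd=0.01):
    """Variant of propagate() with an L2 penalty on the weights (sketch only)."""
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    # Cross-entropy cost plus the L2 penalty (lambd / (2 m)) * ||w||^2
    cost = -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) + lambd / (2 * m) * np.sum(np.square(w))
    # The weight gradient picks up an extra (lambd / m) * w term; db is unchanged
    dw = 1 / m * np.dot(X, (A - Y).T) + lambd / m * w
    db = 1 / m * np.sum(A - Y)
    return {"dw": dw, "db": db}, np.squeeze(cost)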
