Andrew Ng +neural-networks-deep-learning+ Week 2 assignment


Logistic Regression with a Neural Network Mindset (v4)

This assignment simply uses logistic regression to recognize cats. Logistic regression can be regarded as the simplest possible neural network; the main code follows.

1. Import the packages

import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
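The code below refers to train_set_x_orig, train_set_y, and so on; as in the course notebook, these come from load_dataset:

# Load the raw cat/non-cat dataset shipped with the assignment
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()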

2. Dimensions of the dataset

### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###

print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))

3. Data preprocessing

# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(-1, train_set_x_orig.shape[1]*train_set_x_orig.shape[2]*3).T
test_set_x_flatten = test_set_x_orig.reshape(-1, test_set_x_orig.shape[1]*test_set_x_orig.shape[2]*3).T
### END CODE HERE ###

print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5, 0]))
Note: you cannot write reshape(num_px*num_px*3, -1) here. reshape reads the original array row by row and writes the new array row by row, and in train_set_x_orig each entry along the first axis is one whole image. So each row of the reshaped array should also be one whole image, which is why it must be train_set_x_orig.reshape(-1, train_set_x_orig.shape[1]*train_set_x_orig.shape[2]*3) followed by a transpose. Putting the feature count first instead of the sample count scrambles pixels across images, and the result is a mess!
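A minimal sketch of the pitfall on a toy array (made-up data, not the cat set):

import numpy as np

x = np.arange(24).reshape(2, 2, 2, 3)   # shape (m, px, px, 3) with m = 2 "images"

good = x.reshape(-1, 2*2*3).T            # shape (12, 2): column i is image i
bad  = x.reshape(2*2*3, -1)              # shape (12, 2): pixels of both images interleaved

print(good[:, 0])   # [ 0  1  2 ... 11]  -- all values of image 0
print(bad[:, 0])    # [ 0  2  4 ... 22]  -- mixed values from both images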

4. Standardize the dataset

train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.
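The graded functions below also rely on two helpers written in an earlier part of the assignment and not included in this excerpt, sigmoid and initialize_with_zeros; a minimal sketch of what they need to do:

def sigmoid(z):
    # Sigmoid activation; works on scalars and numpy arrays
    return 1 / (1 + np.exp(-z))

def initialize_with_zeros(dim):
    # Zero-initialize the weight vector of shape (dim, 1) and the scalar bias
    w = np.zeros((dim, 1))
    b = 0.0
    return w, b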

5. Forward and backward propagation

def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradient for the propagation explained above

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b

    Tips:
    - Write your code step by step for the propagation. np.log(), np.dot()
    """

    m = X.shape[1]

    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    A = sigmoid(np.dot(w.T, X) + b)                                        # compute activation
    cost = -1/m * (np.dot(Y, np.log(A).T) + np.dot(1-Y, np.log(1-A).T))    # compute cost
    ### END CODE HERE ###

    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw = 1/m * np.dot(X, (A-Y).T)
    db = 1/m * np.sum(A-Y)
    ### END CODE HERE ###

    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    cost = np.squeeze(cost)
    assert(cost.shape == ())

    grads = {"dw": dw, "db": db}

    return grads, cost
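For reference, these are the formulas propagate implements, with one example per column of X and m examples in total:

A = \sigma(w^{T}X + b)
J(w,b) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log a^{(i)} + (1-y^{(i)})\log\left(1-a^{(i)}\right)\right]
\frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^{T}, \qquad \frac{\partial J}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}\left(a^{(i)}-y^{(i)}\right)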


6. Optimization with gradient descent

# GRADED FUNCTION: optimize

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    """
    This function optimizes w and b by running a gradient descent algorithm

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps

    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.

    Tips:
    You basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using the gradient descent rule for w and b.
    """

    costs = []

    for i in range(num_iterations):

        # Cost and gradient calculation (≈ 1-4 lines of code)
        ### START CODE HERE ###
        grads, cost = propagate(w, b, X, Y)
        ### END CODE HERE ###

        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]

        # Update rule (≈ 2 lines of code)
        ### START CODE HERE ###
        w = w - learning_rate * dw
        b = b - learning_rate * db
        ### END CODE HERE ###

        # Record the costs
        if i % 100 == 0:
            costs.append(cost)

        # Print the cost every 100 training iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" % (i, cost))

    params = {"w": w, "b": b}
    grads = {"dw": dw, "db": db}

    return params, grads, costs
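A quick smoke test of optimize on toy data (the numbers are made up for illustration, and the sigmoid sketch above is assumed):

w, b = np.zeros((2, 1)), 0.0
X = np.array([[1., 2., -1.], [3., 4., -3.2]])
Y = np.array([[1, 0, 1]])
params, grads, costs = optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=True)
print(params["w"], params["b"])   # weights and bias after 100 update steps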


7. Prediction

# GRADED FUNCTION: predict

def predict(w, b, X):
    '''
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)

    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''

    m = X.shape[1]
    Y_prediction = np.zeros((1, m))
    w = w.reshape(X.shape[0], 1)

    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    ### START CODE HERE ### (≈ 1 line of code)
    A = sigmoid(np.dot(w.T, X) + b)
    ### END CODE HERE ###

    ######### Vectorized thresholding, replacing the template loop below #########
    Y_prediction = (A > 0.5).astype(float)
    ##############################################################################

    # The template suggested a per-example loop instead:
    # for i in range(A.shape[1]):
    #     # Convert probabilities A[0,i] to actual predictions p[0,i]
    #     ### START CODE HERE ### (≈ 4 lines of code)
    #     Y_prediction[0, i] = 1 if A[0, i] > 0.5 else 0
    #     ### END CODE HERE ###

    assert(Y_prediction.shape == (1, m))

    return Y_prediction

Using vectorization instead of the loop solved the problem nicely; very happy!
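The one vectorized line replaces the template's ≈4-line loop; for example, on toy activations:

A = np.array([[0.9, 0.2, 0.7]])   # made-up probabilities
print((A > 0.5).astype(float))    # [[1. 0. 1.]]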

8. Merge all functions into a model

# GRADED FUNCTION: model

def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
    """
    Builds the logistic regression model by calling the function you've implemented previously

    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- set to True to print the cost every 100 iterations

    Returns:
    d -- dictionary containing information about the model.
    """

    ### START CODE HERE ###

    # Initialize parameters with zeros (≈ 1 line of code)
    w, b = initialize_with_zeros(X_train.shape[0])

    # Gradient descent (≈ 1 line of code); pass the print_cost flag through
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)

    # Retrieve parameters w and b from dictionary "parameters"
    w = parameters["w"]
    b = parameters["b"]

    # Predict test/train set examples (≈ 2 lines of code)
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)

    ### END CODE HERE ###

    # Print train/test errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test,
         "Y_prediction_train": Y_prediction_train,
         "w": w,
         "b": b,
         "learning_rate": learning_rate,
         "num_iterations": num_iterations}

    print(d["costs"])

    return d
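Finally the model is trained and evaluated; the call below assumes the course's usual hyperparameters (2000 iterations, learning rate 0.005), and the learning curve is plotted from the recorded costs:

d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)

# Plot the learning curve (one cost value was recorded every 100 iterations)
costs = np.squeeze(d["costs"])
plt.plot(costs)
plt.ylabel("cost")
plt.xlabel("iterations (per hundreds)")
plt.title("Learning rate = " + str(d["learning_rate"]))
plt.show()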

If the reshape in step 3 is written the other way around, the prediction accuracy drops to 34%, so be sure to pay attention to the details!
