Flexibly defining neural network structures with numpy in Python


This article describes how to flexibly define a neural network structure in Python using numpy, shared here for your reference. The details are as follows:

With numpy, you can define the neural network structure flexibly and take advantage of numpy's powerful matrix operations.

I. Usage

1) Define a three-layer neural network:

''' Example 1 '''
nn = NeuralNetworks([3, 4, 2])  # define the neural network
nn.fit(X, y)                    # fit
print(nn.predict(X))            # predict

Note:
Number of input layer nodes: 3
Number of hidden layer nodes: 4
Number of output layer nodes: 2
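
For intuition, each adjacent pair of sizes in the layer list determines the shape of one weight matrix, which is what makes the list-based definition flexible. A minimal sketch of that mapping (the names here are illustrative, not part of the class):

import numpy as np

n_layers = [3, 4, 2]  # input, hidden, output node counts
# one weight matrix (and one bias vector) per adjacent pair of layers
shapes = [(n_layers[i], n_layers[i + 1]) for i in range(len(n_layers) - 1)]
print(shapes)  # [(3, 4), (4, 2)]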

2) Define a five-layer neural network:

''' Example 2 '''
nn = NeuralNetworks([3, 5, 7, 4, 2])  # define the neural network
nn.fit(X, y)                          # fit
print(nn.predict(X))                  # predict

Note:
Number of input layer nodes: 3
Number of hidden layer 1 nodes: 5
Number of hidden layer 2 nodes: 7
Number of hidden layer 3 nodes: 4
Number of output layer nodes: 2
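
As a sanity check on the five-layer definition, the forward pass is just a chain of matrix products, so adjacent dimensions must line up pairwise. A minimal sketch with random weights (illustrative only, mirroring the sigmoid default used by the class below):

import numpy as np

n = [3, 5, 7, 4, 2]       # the five layer sizes from Example 2
a = np.random.rand(n[0])  # a dummy input sample
for i in range(len(n) - 1):
    w = np.random.normal(0, 0.1, (n[i], n[i + 1]))  # weights between layers i and i+1
    b = np.ones(n[i + 1])                           # biases of layer i+1
    a = 1.0 / (1.0 + np.exp(-(np.dot(a, w) + b)))   # sigmoid activation
print(a.shape)  # (2,) -- matches the output layer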

II. Implementation

The following implementation is my (@hhh5460) own. Key point: dtype=object.

import numpy as np

class NeuralNetworks(object):
    '''Neural network'''
    def __init__(self, n_layers=None, active_type=None,
                 n_iter=10000, error=0.05, alpha=0.5, lamda=0.4):
        '''Build the neural network framework'''
        # number of nodes in each layer (vector)
        self.n = np.array(n_layers)  # n_layers must be a list, e.g. [3, 4, 2] or n_layers=[3, 4, 2]
        self.size = self.n.size      # total number of layers

        # layers (vector of vectors)
        self.z = np.empty(self.size, dtype=object)  # placeholders (empty) -- note dtype=object!
        self.a = np.empty(self.size, dtype=object)
        self.data_a = np.empty(self.size, dtype=object)

        # biases (vector of vectors)
        self.b = np.empty(self.size, dtype=object)
        self.delta_b = np.empty(self.size, dtype=object)

        # weights (vector of matrices)
        self.w = np.empty(self.size, dtype=object)
        self.delta_w = np.empty(self.size, dtype=object)

        # fill in
        for i in range(self.size):
            self.a[i] = np.zeros(self.n[i])       # all zeros
            self.z[i] = np.zeros(self.n[i])       # all zeros
            self.data_a[i] = np.zeros(self.n[i])  # all zeros
            if i < self.size - 1:
                self.b[i] = np.ones(self.n[i + 1])         # all ones
                self.delta_b[i] = np.zeros(self.n[i + 1])  # all zeros
                mu, sigma = 0, 0.1  # mean, standard deviation
                self.w[i] = np.random.normal(mu, sigma, (self.n[i], self.n[i + 1]))  # random normal initialization
                self.delta_w[i] = np.zeros((self.n[i], self.n[i + 1]))               # all zeros
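
To see why dtype=object is the key point: a regular numpy array cannot hold rows of different lengths, but an object array stores an arbitrary Python object in each slot, which is exactly what lets every layer keep its own node count. A minimal standalone sketch:

import numpy as np

layers = np.empty(3, dtype=object)  # one slot per layer
layers[0] = np.zeros(3)  # input layer: 3 nodes
layers[1] = np.zeros(4)  # hidden layer: 4 nodes
layers[2] = np.zeros(2)  # output layer: 2 nodes
print([v.shape for v in layers])  # [(3,), (4,), (2,)]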

The complete code below follows the Stanford machine learning tutorial, typed up by myself:

import numpy as np

'''
Reference: http://ufldl.stanford.edu/wiki/index.php/%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C
'''

class NeuralNetworks(object):
    '''Neural network'''
    def __init__(self, n_layers=None, active_type=None,
                 n_iter=10000, error=0.05, alpha=0.5, lamda=0.4):
        '''Build the neural network framework'''
        self.n_iter = n_iter  # number of iterations
        self.error = error    # maximum allowed error
        self.alpha = alpha    # learning rate
        self.lamda = lamda    # decay factor (deliberately misspelled to avoid the keyword "lambda"!)

        if n_layers is None:
            raise Exception('The number of nodes in each layer must be set!')
        elif not isinstance(n_layers, list):
            raise Exception('n_layers must be a list, e.g. [3, 4, 2] or n_layers=[3, 4, 2]')

        # number of nodes in each layer (vector)
        self.n = np.array(n_layers)
        self.size = self.n.size  # total number of layers

        # layers (vector of vectors)
        self.a = np.empty(self.size, dtype=object)  # placeholders (empty) -- note dtype=object!
        self.z = np.empty(self.size, dtype=object)

        # biases (vector of vectors)
        self.b = np.empty(self.size, dtype=object)
        self.delta_b = np.empty(self.size, dtype=object)

        # weights (vector of matrices)
        self.w = np.empty(self.size, dtype=object)
        self.delta_w = np.empty(self.size, dtype=object)

        # residuals (vector of vectors)
        self.data_a = np.empty(self.size, dtype=object)

        # fill in
        for i in range(self.size):
            self.a[i] = np.zeros(self.n[i])       # all zeros
            self.z[i] = np.zeros(self.n[i])       # all zeros
            self.data_a[i] = np.zeros(self.n[i])  # all zeros
            if i < self.size - 1:
                self.b[i] = np.ones(self.n[i + 1])         # all ones
                self.delta_b[i] = np.zeros(self.n[i + 1])  # all zeros
                mu, sigma = 0, 0.1  # mean, standard deviation
                self.w[i] = np.random.normal(mu, sigma, (self.n[i], self.n[i + 1]))  # random normal initialization
                self.delta_w[i] = np.zeros((self.n[i], self.n[i + 1]))               # all zeros

        # activation functions
        self.active_functions = {
            'sigmoid': self.sigmoid,
            'tanh': self.tanh,
            'radb': self.radb,
            'line': self.line,
        }

        # derivatives of the activation functions
        self.derivative_functions = {
            'sigmoid': self.sigmoid_d,
            'tanh': self.tanh_d,
            'radb': self.radb_d,
            'line': self.line_d,
        }

        if active_type is None:
            self.active_type = ['sigmoid'] * (self.size - 1)  # default activation
        else:
            self.active_type = active_type

    def sigmoid(self, z):
        if np.max(z) > 600:
            z[z.argmax()] = 600  # guard against overflow in exp
        return 1.0 / (1.0 + np.exp(-z))

    def tanh(self, z):
        return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

    def radb(self, z):
        return np.exp(-z * z)

    def line(self, z):
        return z

    def sigmoid_d(self, z):
        return z * (1.0 - z)

    def tanh_d(self, z):
        return 1.0 - z * z

    def radb_d(self, z):
        return -2.0 * z * np.exp(-z * z)

    def line_d(self, z):
        return np.ones(z.size)  # for reference

    def forward(self, x):
        '''Forward propagation (online)'''
        # refresh every z and a with sample x
        self.a[0] = x
        for i in range(self.size - 1):
            self.z[i + 1] = np.dot(self.a[i], self.w[i]) + self.b[i]
            self.a[i + 1] = self.active_functions[self.active_type[i]](self.z[i + 1])  # apply the activation function

    def err(self, X, Y):
        '''Error'''
        last = self.size - 1
        err = 0.0
        for x, y in zip(X, Y):
            self.forward(x)
            err += 0.5 * np.sum((self.a[last] - y) ** 2)
        err /= X.shape[0]
        err += sum([np.sum(w) for w in self.w[:last] ** 2])
        return err

    def backward(self, y):
        '''Back propagation (online)'''
        last = self.size - 1
        # refresh every delta_w and delta_b with sample y
        # (note: sigmoid_d and tanh_d as written expect the activation value)
        self.data_a[last] = -(y - self.a[last]) * self.derivative_functions[self.active_type[last - 1]](self.z[last])
        for i in range(last - 1, -1, -1):
            if i > 0:
                # residual of hidden layer i, propagated back from layer i+1
                self.data_a[i] = np.dot(self.w[i], self.data_a[i + 1]) * self.derivative_functions[self.active_type[i - 1]](self.z[i])
            # compute the partial derivatives -- the outer product; thanks to numpy's power!
            p_w = np.outer(self.a[i], self.data_a[i + 1])
            p_b = self.data_a[i + 1]
            # accumulate delta_w and delta_b
            self.delta_w[i] = self.delta_w[i] + p_w
            self.delta_b[i] = self.delta_b[i] + p_b

    def update(self, n_samples):
        '''Update the weight parameters'''
        last = self.size - 1
        for i in range(last):
            self.w[i] -= self.alpha * ((1 / n_samples) * self.delta_w[i] + self.lamda * self.w[i])
            self.b[i] -= self.alpha * ((1 / n_samples) * self.delta_b[i])
            # clear the accumulated gradients for the next round
            self.delta_w[i] = np.zeros((self.n[i], self.n[i + 1]))
            self.delta_b[i] = np.zeros(self.n[i + 1])

    def fit(self, X, Y):
        '''Fit'''
        for i in range(self.n_iter):
            # use all samples
            for x, y in zip(X, Y):
                self.forward(x)   # forward, update a and z
                self.backward(y)  # backward, update delta_w and delta_b
            # then update w and b
            self.update(len(X))
            # compute the error
            err = self.err(X, Y)
            if err < self.error:
                break
            # show the error every 1000 iterations (otherwise it gets boring!)
            if i % 1000 == 0:
                print('iter: {}, error: {}'.format(i, err))

    def predict(self, X):
        '''Predict'''
        last = self.size - 1
        res = []
        for x in X:
            self.forward(x)
            res.append(self.a[last])
        return np.array(res)

if __name__ == '__main__':
    nn = NeuralNetworks([2, 3, 1], n_iter=5000, alpha=0.4, lamda=0.3, error=0.06)  # define the neural network
    X = np.array([[0., 0.],  # prepare the data
                  [0., 1.],
                  [1., 0.],
                  [1., 1.]])
    y = np.array([0, 1, 1, 0])
    nn.fit(X, y)           # fit
    print(nn.predict(X))   # predict
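
One detail the demo above does not exercise: the active_type argument lets you choose an activation per layer transition from the keys 'sigmoid', 'tanh', 'radb', and 'line'. A hedged usage sketch, assuming the class above is in scope:

# tanh on the hidden layer, identity ('line') on the output layer
nn = NeuralNetworks([2, 3, 1], active_type=['tanh', 'line'],
                    n_iter=5000, alpha=0.4, lamda=0.3, error=0.06)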
