Flexibly Defining a Neural Network Structure with NumPy in Python

This article describes how to define a neural network structure flexibly in Python on the basis of NumPy, combining the principles of neural network structure with a worked Python implementation, and covering the relevant techniques for doing mathematical operations with the NumPy extension.

The example in this article presents a method for flexibly defining a neural network structure in Python based on NumPy, shared here for your reference:

With NumPy you can define the neural network structure flexibly, and you can also put NumPy's powerful matrix operations to work!
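As a quick taste of the two matrix operations the implementation below leans on, the forward pass uses np.dot and the backward pass uses np.outer. A minimal sketch (illustrative only; the array values here are made up):

import numpy as np

a = np.array([1.0, 2.0, 3.0])         # activations of a 3-node layer
w = np.full((3, 4), 0.1)              # weights into a 4-node layer
print(np.dot(a, w))                   # pre-activations of the next layer, shape (4,)

d = np.array([0.5, -0.5, 0.25, 0.0])  # residuals of the 4-node layer
print(np.outer(a, d))                 # per-weight partial derivatives, shape (3, 4)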

I. Usage

1). Define a three-layer neural network:


"Example one" NN = Neuralnetworks ([3,4,2]) # Defines the neural network nn.fit (x, y) # Fit print (Nn.predict (X)) #预测

Description
Number of input layer nodes: 3
Number of hidden layer nodes: 4
Number of output layer nodes: 2

2). Define a five-layer neural network:


"Example two ' NN = Neuralnetworks ([3,5,7,4,2]) # Defines the neural network nn.fit (x, y) # Fit print (Nn.predict (X)) #预测

Description
Number of input layer nodes: 3
Number of hidden layer 1 nodes: 5
Number of hidden layer 2 nodes: 7
Number of hidden layer 3 nodes: 4
Number of output layer nodes: 2
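To make the sizing concrete: in this scheme each weight matrix w[i] connects layer i to layer i+1, so its shape is (n[i], n[i+1]). A tiny sketch (illustrative only, using the same normal initialization as the code below):

import numpy as np

n_layers = [3, 5, 7, 4, 2]
for i in range(len(n_layers) - 1):
    w = np.random.normal(0, 0.1, (n_layers[i], n_layers[i+1]))
    print('w[{}] shape: {}'.format(i, w.shape))
# prints: (3, 5), (5, 7), (7, 4), (4, 2)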

II. Implementation

The implementation below is my (@hhh5460) own original work. The key point: dtype=object.


import numpy as np

class NeuralNetworks(object):

    def __init__(self, n_layers=None, active_type=None, n_iter=10000, error=0.05, alpha=0.5, lamda=0.4):
        '''Build the neural network framework'''
        # node count of each layer (vector)
        self.n = np.array(n_layers)  # n_layers must be a list, e.g. [3,4,2] or n_layers=[3,4,2]
        self.size = self.n.size      # total number of layers

        # layers (vectors)
        self.z = np.empty(self.size, dtype=object)  # placeholders (empty) first; dtype=object! same below
        self.a = np.empty(self.size, dtype=object)
        self.data_a = np.empty(self.size, dtype=object)
        # biases (vectors)
        self.b = np.empty(self.size, dtype=object)
        self.delta_b = np.empty(self.size, dtype=object)
        # weights (matrices)
        self.w = np.empty(self.size, dtype=object)
        self.delta_w = np.empty(self.size, dtype=object)

        # fill them in
        for i in range(self.size):
            self.a[i] = np.zeros(self.n[i])       # all zeros
            self.z[i] = np.zeros(self.n[i])       # all zeros
            self.data_a[i] = np.zeros(self.n[i])  # all zeros
            if i < self.size - 1:
                self.b[i] = np.ones(self.n[i+1])         # all ones
                self.delta_b[i] = np.zeros(self.n[i+1])  # all zeros
                mu, sigma = 0, 0.1                       # mean, standard deviation
                self.w[i] = np.random.normal(mu, sigma, (self.n[i], self.n[i+1]))  # normally distributed random init
                self.delta_w[i] = np.zeros((self.n[i], self.n[i+1]))               # all zeros
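Why dtype=object matters: a regular ndarray cannot hold rows of different lengths, but an object array can hold one differently sized vector per layer, which is exactly what lets a single container cover layers of arbitrary width. A minimal demonstration (illustrative, not part of the original code):

import numpy as np

layers = np.empty(3, dtype=object)  # one slot per layer
layers[0] = np.zeros(3)             # input layer: 3 nodes
layers[1] = np.zeros(4)             # hidden layer: 4 nodes
layers[2] = np.zeros(2)             # output layer: 2 nodes
print([v.shape for v in layers])    # [(3,), (4,), (2,)]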

The complete code below follows what I learned from the Stanford machine learning tutorial, typed out entirely by myself:


import numpy as np

'''
Reference: http://ufldl.stanford.edu/wiki/index.php/%E7%A5%9E%E7%BB%8F%E7%BD%91%E7%BB%9C
'''
class NeuralNetworks(object):

    def __init__(self, n_layers=None, active_type=None, n_iter=10000, error=0.05, alpha=0.5, lamda=0.4):
        '''Build the neural network framework'''
        self.n_iter = n_iter  # number of iterations
        self.error = error    # maximum allowed error
        self.alpha = alpha    # learning rate
        self.lamda = lamda    # decay factor ('lamda' is deliberately misspelled: 'lambda' is a Python keyword!)

        if n_layers is None:
            raise Exception('The number of nodes in each layer must be set!')
        elif not isinstance(n_layers, list):
            raise Exception('n_layers must be a list, e.g. [3,4,2] or n_layers=[3,4,2]')

        # node count of each layer (vector)
        self.n = np.array(n_layers)
        self.size = self.n.size  # total number of layers

        # layers (vectors)
        self.a = np.empty(self.size, dtype=object)  # placeholders (empty) first; dtype=object! same below
        self.z = np.empty(self.size, dtype=object)

        # biases (vectors)
        self.b = np.empty(self.size, dtype=object)
        self.delta_b = np.empty(self.size, dtype=object)

        # weights (matrices)
        self.w = np.empty(self.size, dtype=object)
        self.delta_w = np.empty(self.size, dtype=object)

        # residuals (vectors)
        self.data_a = np.empty(self.size, dtype=object)

        # fill them in
        for i in range(self.size):
            self.a[i] = np.zeros(self.n[i])       # all zeros
            self.z[i] = np.zeros(self.n[i])       # all zeros
            self.data_a[i] = np.zeros(self.n[i])  # all zeros
            if i < self.size - 1:
                self.b[i] = np.ones(self.n[i+1])         # all ones
                self.delta_b[i] = np.zeros(self.n[i+1])  # all zeros
                mu, sigma = 0, 0.1                       # mean, standard deviation
                self.w[i] = np.random.normal(mu, sigma, (self.n[i], self.n[i+1]))  # normally distributed random init
                self.delta_w[i] = np.zeros((self.n[i], self.n[i+1]))               # all zeros

        # activation functions
        self.active_functions = {
            'sigmoid': self.sigmoid,
            'tanh': self.tanh,
            'radb': self.radb,
            'line': self.line,
        }

        # derivatives of the activation functions
        self.derivative_functions = {
            'sigmoid': self.sigmoid_d,
            'tanh': self.tanh_d,
            'radb': self.radb_d,
            'line': self.line_d,
        }

        if active_type is None:
            self.active_type = ['sigmoid'] * (self.size - 1)  # default activation type
        else:
            self.active_type = active_type

    def sigmoid(self, z):
        if np.max(z) > 600:
            z[z.argmax()] = 600  # clamp to avoid overflow in np.exp
        return 1.0 / (1.0 + np.exp(-z))

    def tanh(self, z):
        return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

    def radb(self, z):
        return np.exp(-z * z)

    def line(self, z):
        return z

    def sigmoid_d(self, z):
        return z * (1.0 - z)

    def tanh_d(self, z):
        return 1.0 - z * z

    def radb_d(self, z):
        return -2.0 * z * np.exp(-z * z)

    def line_d(self, z):
        return np.ones(z.size)  # all ones

    def forward(self, x):
        '''Forward propagation (online)'''
        # run sample x through, refreshing every z and a
        self.a[0] = x
        for i in range(self.size - 1):
            self.z[i+1] = np.dot(self.a[i], self.w[i]) + self.b[i]
            self.a[i+1] = self.active_functions[self.active_type[i]](self.z[i+1])  # apply the activation function

    def err(self, X, Y):
        '''Error'''
        last = self.size - 1
        err = 0.0
        for x, y in zip(X, Y):
            self.forward(x)
            err += 0.5 * np.sum((self.a[last] - y)**2)
        err /= X.shape[0]
        err += sum([np.sum(w) for w in self.w[:last]**2])
        return err

    def backward(self, y):
        '''Back propagation (online)'''
        last = self.size - 1
        # run sample y back, refreshing delta_w and delta_b
        self.data_a[last] = -(y - self.a[last]) * self.derivative_functions[self.active_type[last-1]](self.z[last])  # apply the activation's derivative
        for i in range(last-1, 1, -1):
            self.data_a[i] = np.dot(self.w[i], self.data_a[i+1]) * self.derivative_functions[self.active_type[i-1]](self.z[i])  # apply the activation's derivative
            # compute the partial derivatives
            p_w = np.outer(self.a[i], self.data_a[i+1])  # outer product! thanks to NumPy's power!
            p_b = self.data_a[i+1]
            # update delta_w, delta_b
            self.delta_w[i] = self.delta_w[i] + p_w
            self.delta_b[i] = self.delta_b[i] + p_b

    def update(self, n_samples):
        '''Update the weight parameters'''
        last = self.size - 1
        for i in range(last):
            self.w[i] -= self.alpha * ((1 / n_samples) * self.delta_w[i] + self.lamda * self.w[i])
            self.b[i] -= self.alpha * ((1 / n_samples) * self.delta_b[i])

    def fit(self, X, Y):
        '''Fit'''
        for i in range(self.n_iter):
            # use every sample in turn
            for x, y in zip(X, Y):
                self.forward(x)   # forward pass: update a, z
                self.backward(y)  # backward pass: update delta_w, delta_b
            # then update w, b
            self.update(len(X))
            # compute the error
            err = self.err(X, Y)
            if err < self.error:
                break
            # show the error every thousand iterations (otherwise it is too boring!)
            if i % 1000 == 0:
                print('iter: {}, error: {}'.format(i, err))

    def predict(self, X):
        '''Predict'''
        last = self.size - 1
        res = []
        for x in X:
            self.forward(x)
            res.append(self.a[last])
        return np.array(res)

if __name__ == '__main__':
    nn = NeuralNetworks([2,3,4,3,1], n_iter=5000, alpha=0.4, lamda=0.3, error=0.06)  # define the neural network
    X = np.array([[0., 0.],  # prepare the data
                  [0., 1.],
                  [1., 0.],
                  [1., 1.]])
    y = np.array([0, 1, 1, 0])
    nn.fit(X, y)          # fit
    print(nn.predict(X))  # predict
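One more usage note: the constructor also accepts an active_type list, one name per weight layer, drawn from the keys defined above ('sigmoid', 'tanh', 'radb', 'line'). A sketch of mixing activations (the parameter values here are only an illustration):

# one activation name per weight layer, i.e. size - 1 entries
nn = NeuralNetworks([3, 5, 7, 4, 2],
                    active_type=['tanh', 'tanh', 'tanh', 'sigmoid'],
                    n_iter=5000, alpha=0.4, lamda=0.3, error=0.06)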