Implementation and Application of an Artificial Neural Network (BP) Algorithm in Python

Source: Internet
Author: User
This article introduces a Python implementation of the back-propagation (BP) neural network algorithm together with a simple application; interested readers may find it a useful reference.

This post shares the complete Python code for the neural network algorithm and its application; the details are as follows.

First, implement a simple neural network algorithm in Python:

import numpy as np


# tanh function
def tanh(x):
    return np.tanh(x)


# derivative of the tanh function
def tan_deriv(x):
    return 1.0 - np.tanh(x) * np.tanh(x)


# sigmoid (logistic) function
def logistic(x):
    return 1 / (1 + np.exp(-x))


# derivative of the logistic function
def logistic_derivative(x):
    return logistic(x) * (1 - logistic(x))


class NeuralNetwork:
    def __init__(self, layers, activation='tanh'):
        """
        Neural network constructor
        :param layers: list with the number of neurons in each layer
        :param activation: activation function to use (default: tanh)
        :return: None
        """
        if activation == 'logistic':
            self.activation = logistic
            self.activation_deriv = logistic_derivative
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_deriv = tan_deriv

        # list of weight matrices
        self.weights = []
        # initialize the weights randomly in [-0.25, 0.25]
        for i in range(1, len(layers) - 1):
            self.weights.append((2 * np.random.random((layers[i - 1] + 1, layers[i] + 1)) - 1) * 0.25)
            self.weights.append((2 * np.random.random((layers[i] + 1, layers[i + 1])) - 1) * 0.25)

    def fit(self, X, y, learning_rate=0.2, epochs=10000):
        """
        Train the neural network
        :param X: data set (usually two-dimensional)
        :param y: class labels
        :param learning_rate: learning rate (default: 0.2)
        :param epochs: maximum number of training iterations (default: 10000)
        :return: None
        """
        # make sure the data set is two-dimensional
        X = np.atleast_2d(X)
        # append a bias column of ones to every sample
        temp = np.ones([X.shape[0], X.shape[1] + 1])
        temp[:, 0:-1] = X
        X = temp
        y = np.array(y)

        for k in range(epochs):
            # randomly pick one row of X
            i = np.random.randint(X.shape[0])
            # use this sample to update the network
            a = [X[i]]
            # forward pass
            for l in range(len(self.weights)):
                a.append(self.activation(np.dot(a[l], self.weights[l])))
            error = y[i] - a[-1]
            deltas = [error * self.activation_deriv(a[-1])]
            # backward pass
            for l in range(len(a) - 2, 0, -1):
                deltas.append(deltas[-1].dot(self.weights[l].T) * self.activation_deriv(a[l]))
            deltas.reverse()
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)

    def predict(self, x):
        x = np.array(x)
        temp = np.ones(x.shape[0] + 1)
        temp[0:-1] = x
        a = temp
        for l in range(0, len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        return a
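As a quick sanity check of the constructor (a small sketch of my own, not from the original article, assuming the class above is in the current file or importable): for a [2, 2, 1] network it creates two weight matrices, where the extra row and column hold the bias weights.

nn = NeuralNetwork([2, 2, 1], 'tanh')
for i, w in enumerate(nn.weights):
    print(i, w.shape)  # expect (3, 3) then (3, 1); the +1 comes from the bias unit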

Now use the neural network defined above (saved, for example, as NeuralNetwork.py inside a package folder named NN, so that the import below works) to implement a simple function:

A small case: the XOR function.

X      y
0 0    0
0 1    1
1 0    1
1 1    0

from NN.NeuralNetwork import NeuralNetwork
import numpy as np

nn = NeuralNetwork([2, 2, 1], 'tanh')
temp = [[0, 0], [0, 1], [1, 0], [1, 1]]
X = np.array(temp)
y = np.array([0, 1, 1, 0])
nn.fit(X, y)
for i in temp:
    print(i, nn.predict(i))

Looking at the results, the basic pattern appears: each output is very close to 0 or very close to 1, matching the expected labels.
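If a hard 0/1 label is preferred over the continuous output, a minimal sketch (the 0.5 threshold is my own choice, not from the original article; temp and nn come from the code above):

for i in temp:
    out = nn.predict(i)[0]           # continuous network output
    print(i, 1 if out > 0.5 else 0)  # hard label via a 0.5 threshold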

Second example: recognizing the digits in images.

Import data:

from sklearn.datasets import load_digits
import pylab as pl

digits = load_digits()
print(digits.data.shape)
pl.gray()
pl.matshow(digits.images[0])
pl.show()

The printed output shows the size of the data set: (1797, 64), i.e. 1797 samples with 64 features each.

[Figure: the first sample rendered by matshow, showing the digit 0.]
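It can help to see how each image maps to a feature vector before training; a quick check using the digits object loaded above:

print(digits.images[0].shape)  # (8, 8): each sample is an 8x8 grayscale image
print(digits.data[0].shape)    # (64,): the same image flattened into 64 features
print(digits.target[0])        # 0: the label of the first sample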

The following code recognizes the digits:

import numpy as np
from sklearn.datasets import load_digits
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import LabelBinarizer
from NN.NeuralNetwork import NeuralNetwork
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older versions

# load the data set
digits = load_digits()
X = digits.data
y = digits.target

# scale the data into [0, 1] to meet the requirements of the neural network algorithm
X -= X.min()
X /= X.max()

# layer sizes:
# input layer: 64, because each picture is 8*8 = 64 pixels
# hidden layer: assume 100
# output layer: 10, one unit per digit
nn = NeuralNetwork([64, 100, 10], 'logistic')

# separate the training set and the test set
X_train, X_test, y_train, y_test = train_test_split(X, y)

# convert the labels to the two-dimensional format required by the network
labels_train = LabelBinarizer().fit_transform(y_train)
labels_test = LabelBinarizer().fit_transform(y_test)

print("start fitting")
# train for 3,000 iterations
nn.fit(X_train, labels_train, epochs=3000)

predictions = []
for i in range(X_test.shape[0]):
    o = nn.predict(X_test[i])
    # np.argmax: the output unit with the largest value gives the predicted digit
    predictions.append(np.argmax(o))

# print the prediction report
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))
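The LabelBinarizer step deserves a brief illustration: it turns each digit into a 10-dimensional one-hot row, which is what the 10-unit output layer is trained against (a standalone sketch with made-up labels, not from the original article):

from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer().fit(list(range(10)))
print(lb.transform([0, 3, 9]))  # three one-hot rows of length 10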

Results:

In the confusion matrix printed above, the diagonal entries count the correctly predicted digits; since most samples fall on the diagonal, the accuracy is high.
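The diagonal claim can be verified directly; a minimal sketch (assuming y_test and predictions from the code above):

cm = confusion_matrix(y_test, predictions)
# correct predictions lie on the diagonal, so accuracy = trace / total
print(cm.trace() / cm.sum())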


The classification report shows the accuracy more intuitively:

In total there are about 450 test cases (train_test_split holds out 25% of the 1797 samples by default), with an overall accuracy of about 94%.
