Based on teacher Pengliang's video tutorial; please credit the source (original material by teacher Pengliang) when reposting.
Video tutorial: http://pan.baidu.com/s/1kVNe5EJ
1. The nonlinear transformation function (activation function)
Sigmoid functions (S-shaped curves) are used as the activation functions; the formulas for both are given below the list:
1.1 Hyperbolic tangent function (tanh)
1.2 Logistic function
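For reference, the two functions and their derivatives are:

tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)), with tanh'(x) = 1 - tanh(x)^2
logistic(x) = 1 / (1 + e^(-x)), with logistic'(x) = logistic(x) * (1 - logistic(x))

Both squash any real input into a bounded range (tanh into (-1, 1), logistic into (0, 1)), and each derivative can be computed directly from the function's own output, which the code below relies on.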
2. Implement a simple neural network algorithm
import numpy as np

def tanh(x):
    return np.tanh(x)

def tanh_deriv(a):
    # Derivative of tanh written in terms of the tanh output a = tanh(x),
    # because the training loop below stores activated values, not raw inputs.
    return 1.0 - a * a

def logistic(x):
    return 1 / (1 + np.exp(-x))

def logistic_derivative(a):
    # Derivative of the logistic function in terms of its output a = logistic(x).
    return a * (1 - a)
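Because the derivative helpers are written in terms of the function's output, a quick finite-difference check (a throwaway sketch; the test point 0.5 and step size 1e-6 are arbitrary choices, not part of the tutorial) confirms they agree with the true derivatives:

x = 0.5
h = 1e-6
print((np.tanh(x + h) - np.tanh(x - h)) / (2 * h), tanh_deriv(np.tanh(x)))              # both ~0.7864
print((logistic(x + h) - logistic(x - h)) / (2 * h), logistic_derivative(logistic(x)))  # both ~0.2350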
class NeuralNetwork:
    def __init__(self, layers, activation='tanh'):
        """
        :param layers: A list containing the number of units in each layer.
                       Should contain at least two values.
        :param activation: The activation function to be used.
                           Can be "logistic" or "tanh".
        """
        if activation == 'logistic':
            self.activation = logistic
            self.activation_deriv = logistic_derivative
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_deriv = tanh_deriv

        self.weights = []
        for i in range(1, len(layers) - 1):
            # Small random weights in (-0.25, 0.25), with an extra row for the bias unit
            self.weights.append((2 * np.random.random((layers[i - 1] + 1, layers[i] + 1)) - 1) * 0.25)
            self.weights.append((2 * np.random.random((layers[i] + 1, layers[i + 1])) - 1) * 0.25)

    def fit(self, X, y, learning_rate=0.2, epochs=10000):
        X = np.atleast_2d(X)
        temp = np.ones([X.shape[0], X.shape[1] + 1])
        temp[:, 0:-1] = X  # adding the bias unit to the input layer
        X = temp
        y = np.array(y)

        for k in range(epochs):
            i = np.random.randint(X.shape[0])  # pick one training sample at random
            a = [X[i]]

            # Going forward through the network: compute the node values (o_i)
            # for each layer using the activation function
            for l in range(len(self.weights)):
                a.append(self.activation(np.dot(a[l], self.weights[l])))

            error = y[i] - a[-1]  # compute the error at the top layer
            deltas = [error * self.activation_deriv(a[-1])]  # delta (updated error) for the output layer

            # Starting backpropagation: we need to begin at the second-to-last layer
            # and compute the updated error (i.e. deltas) for each node, going from
            # the top layer back toward the input layer
            for l in range(len(a) - 2, 0, -1):
                deltas.append(deltas[-1].dot(self.weights[l].T) * self.activation_deriv(a[l]))
            deltas.reverse()

            # Update the weights of every layer
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)

    def predict(self, x):
        x = np.array(x)
        temp = np.ones(x.shape[0] + 1)
        temp[0:-1] = x  # add the bias unit
        a = temp
        for l in range(0, len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        return a
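To see the class in action, here is a minimal usage sketch; the [2, 2, 1] layer shape and the XOR training set are illustrative choices, not mandated by the tutorial text. After the default 10,000 epochs the predictions should land close to the targets 0, 1, 1, 0:

nn = NeuralNetwork([2, 2, 1], 'tanh')
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
nn.fit(X, y)
for sample in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    print(sample, nn.predict(sample))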