Implementing a recurrent neural network in Python

Source: Internet
Author: User
This article mainly introduces a recurrent neural network implemented in Python, excerpted from a GitHub code snippet. It covers techniques related to recurrence and mathematical operations in Python, and may serve as a reference for readers who need it.

This article presents a recurrent neural network implemented in Python, shared for your reference. The network learns 8-bit binary addition: at each timestep it reads one bit from each addend, starting with the least significant bit, and passes information between timesteps through a recurrent hidden layer, which is what allows it to learn the carry.
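Before the full listing, here is a minimal sketch (not part of the original excerpt) of the single recurrence step the network repeats at every bit position. The weight names and shapes below match the full program; everything else is illustrative.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# same names and shapes as the full program (input_dim=2, hidden_dim=16)
np.random.seed(0)
synapse_0 = 2 * np.random.random((2, 16)) - 1   # input -> hidden weights
synapse_h = 2 * np.random.random((16, 16)) - 1  # hidden -> hidden (recurrent) weights

prev_hidden = np.zeros((1, 16))  # hidden state carried over from the previous bit
X = np.array([[1, 0]])           # current bits of the two addends

# the new hidden state mixes the current input with the previous hidden state;
# this single line is what makes the network recurrent
hidden = sigmoid(np.dot(X, synapse_0) + np.dot(prev_hidden, synapse_h))
print(hidden.shape)  # (1, 16)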


# Recurrent neural network: learning binary addition
import copy, numpy as np
np.random.seed(0)

# compute sigmoid nonlinearity
def sigmoid(x):
    output = 1/(1+np.exp(-x))
    return output

# convert output of sigmoid function to its derivative
def sigmoid_output_to_derivative(output):
    return output*(1-output)

# training dataset generation: a lookup table from integer to 8-bit encoding
int2binary = {}
binary_dim = 8
largest_number = pow(2,binary_dim)
binary = np.unpackbits(
    np.array([range(largest_number)],dtype=np.uint8).T,axis=1)
for i in range(largest_number):
    int2binary[i] = binary[i]

# input variables
alpha = 0.1
input_dim = 2
hidden_dim = 16
output_dim = 1

# initialize neural network weights in [-1, 1)
synapse_0 = 2*np.random.random((input_dim,hidden_dim)) - 1
synapse_1 = 2*np.random.random((hidden_dim,output_dim)) - 1
synapse_h = 2*np.random.random((hidden_dim,hidden_dim)) - 1

synapse_0_update = np.zeros_like(synapse_0)
synapse_1_update = np.zeros_like(synapse_1)
synapse_h_update = np.zeros_like(synapse_h)

# training logic
for j in range(10000):

    # generate a simple addition problem (a + b = c)
    a_int = np.random.randint(largest_number//2) # int version
    a = int2binary[a_int]                        # binary encoding
    b_int = np.random.randint(largest_number//2) # int version
    b = int2binary[b_int]                        # binary encoding

    # true answer
    c_int = a_int + b_int
    c = int2binary[c_int]

    # where we'll store our best guess (binary encoded)
    d = np.zeros_like(c)

    overallError = 0

    layer_2_deltas = list()
    layer_1_values = list()
    layer_1_values.append(np.zeros(hidden_dim))

    # moving along the positions in the binary encoding, least significant bit first
    for position in range(binary_dim):

        # generate input and output
        X = np.array([[a[binary_dim - position - 1], b[binary_dim - position - 1]]])
        y = np.array([[c[binary_dim - position - 1]]]).T

        # hidden layer (input + prev_hidden)
        layer_1 = sigmoid(np.dot(X,synapse_0) + np.dot(layer_1_values[-1],synapse_h))

        # output layer (new binary representation)
        layer_2 = sigmoid(np.dot(layer_1,synapse_1))

        # did we miss?... if so, by how much?
        layer_2_error = y - layer_2
        layer_2_deltas.append((layer_2_error)*sigmoid_output_to_derivative(layer_2))
        overallError += np.abs(layer_2_error[0])

        # decode estimate so we can print it out
        d[binary_dim - position - 1] = np.round(layer_2[0][0])

        # store hidden layer so we can use it in the next timestep
        layer_1_values.append(copy.deepcopy(layer_1))

    future_layer_1_delta = np.zeros(hidden_dim)

    # backpropagation through time: walk the bit positions in reverse
    for position in range(binary_dim):

        X = np.array([[a[position], b[position]]])
        layer_1 = layer_1_values[-position-1]
        prev_layer_1 = layer_1_values[-position-2]

        # error at output layer
        layer_2_delta = layer_2_deltas[-position-1]
        # error at hidden layer
        layer_1_delta = (future_layer_1_delta.dot(synapse_h.T) +
                         layer_2_delta.dot(synapse_1.T)) * sigmoid_output_to_derivative(layer_1)

        # let's update all our weights so we can try again
        synapse_1_update += np.atleast_2d(layer_1).T.dot(layer_2_delta)
        synapse_h_update += np.atleast_2d(prev_layer_1).T.dot(layer_1_delta)
        synapse_0_update += X.T.dot(layer_1_delta)

        future_layer_1_delta = layer_1_delta

    synapse_0 += synapse_0_update * alpha
    synapse_1 += synapse_1_update * alpha
    synapse_h += synapse_h_update * alpha

    synapse_0_update *= 0
    synapse_1_update *= 0
    synapse_h_update *= 0

    # print out progress
    if (j % 1000) == 0:
        print("Error:" + str(overallError))
        print("Pred:" + str(d))
        print("True:" + str(c))
        out = 0
        for index, x in enumerate(reversed(d)):
            out += x * pow(2, index)
        print(str(a_int) + " + " + str(b_int) + " = " + str(out))
        print("------------")
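The excerpt above only trains the network. As a hypothetical extension that is not part of the original code, the trained weights can be reused for a forward-only pass to check a new sum. The sketch below assumes it runs immediately after the training loop and reuses its variables (sigmoid, int2binary, synapse_0, synapse_1, synapse_h):

# hypothetical post-training check (assumes the training script above has run)
a_int, b_int = 37, 41
a, b = int2binary[a_int], int2binary[b_int]
d = np.zeros(binary_dim, dtype=np.uint8)   # predicted bits
hidden = np.zeros((1, hidden_dim))         # fresh hidden state
for position in range(binary_dim):
    # same forward pass as in training, least significant bit first
    X = np.array([[a[binary_dim - position - 1], b[binary_dim - position - 1]]])
    hidden = sigmoid(np.dot(X, synapse_0) + np.dot(hidden, synapse_h))
    layer_2 = sigmoid(np.dot(hidden, synapse_1))
    d[binary_dim - position - 1] = np.round(layer_2[0][0])
out = sum(int(bit) * pow(2, index) for index, bit in enumerate(reversed(d)))
print(str(a_int) + " + " + str(b_int) + " = " + str(out))  # 78, if training converged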

Run output (one progress block is printed every 1,000 training iterations):


Error:[3.45638663]
Pred:[0 0 0 0 0 0 0 1]
True:[0 1 0 0 0 1 0 1]
9 + 60 = 1
------------
Error:[3.63389116]
Pred:[1 1 1 1 1 1 1 1]
True:[0 0 1 1 1 1 1 1]
28 + 35 = 255
------------
Error:[3.91366595]
Pred:[0 1 0 0 1 0 0 0]
True:[1 0 1 0 0 0 0 0]
116 + 44 = 72
------------
Error:[3.72191702]
Pred:[1 1 0 1 1 1 1 1]
True:[0 1 0 0 1 1 0 1]
4 + 73 = 223
------------
Error:[3.5852713]
Pred:[0 0 0 0 1 0 0 0]
True:[0 1 0 1 0 0 1 0]
71 + 11 = 8
------------
Error:[2.53352328]
Pred:[1 0 1 0 0 0 1 0]
True:[1 1 0 0 0 0 1 0]
81 + 113 = 162
------------
Error:[0.57691441]
Pred:[0 1 0 1 0 0 0 1]
True:[0 1 0 1 0 0 0 1]
81 + 0 = 81
------------
Error:[1.42589952]
Pred:[1 0 0 0 0 0 0 1]
True:[1 0 0 0 0 0 0 1]
4 + 125 = 129
------------
Error:[0.47477457]
Pred:[0 0 1 1 1 0 0 0]
True:[0 0 1 1 1 0 0 0]
39 + 17 = 56
------------
Error:[0.21595037]
Pred:[0 0 0 0 1 1 1 0]
True:[0 0 0 0 1 1 1 0]
11 + 3 = 14
------------
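To read each block: Pred is the network's bit-by-bit guess, True is the correct 8-bit encoding of the sum, and the last line decodes Pred back to an integer (which is why the early, untrained guesses such as "9 + 60 = 1" are wrong, while the later ones match). As a quick check, the final block's prediction decodes like this:

pred = [0, 0, 0, 0, 1, 1, 1, 0]  # Pred row from the last block above
out = sum(bit * pow(2, index) for index, bit in enumerate(reversed(pred)))
print(out)  # 14, matching "11 + 3 = 14"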