Using the Java open-source project Joone for artificial intelligence programming (2) 2008-06-30
Text: compiled by Zhu Xianzhong
Training Neural Networks
For the purposes of this article, we will train Joone to recognize a very simple pattern. As that pattern, we will examine a binary Boolean operation: XOR. The truth table of the XOR operation is as follows:
X | Y | X XOR Y
--|---|--------
0 | 0 |    0
0 | 1 |    1
1 | 0 |    1
1 | 1 |    0
As you can see from the table above, the XOR operation produces true (1) only when X and Y have different values; in all other cases the result is false (0). By default, Joone retrieves its input from text files stored on your system. These text files are read by a special synapse called FileInputSynapse. To train the XOR operation, you must create an input file that contains the data shown above. This file is shown in Listing 1.
Listing 1: Contents of the input file for the XOR problem
0.0;0.0;0.0
0.0;1.0;1.0
1.0;0.0;1.0
1.0;1.0;0.0
Now let us analyze a simple program that teaches Joone to recognize the XOR operation and produce the correct result, beginning with the training process. Training consists of submitting the XOR problem to the neural network and observing the result. If the result is not what was expected, the training algorithm adjusts the weights stored in the synapses. The gap between the actual output of the neural network and the expected output is called the error. Training continues until the error falls below an acceptable level, usually expressed as a percentage, such as 10%. Let us now examine the code required to train the neural network.
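Although Joone computes and reports this error for you, it helps to see a usual definition. One standard measure (shown here for illustration; the exact formula Joone uses internally is not specified in this article) is the root-mean-square error over the N training patterns, where d_i is the desired output and y_i the actual output:

    E_{\mathrm{RMS}} = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} (d_i - y_i)^2 }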
Training begins by building the neural network: an input layer, a hidden layer, and an output layer must be created.
// First, create the three layers
input = new SigmoidLayer();
hidden = new SigmoidLayer();
output = new SigmoidLayer();
Each layer is created using Joone's SigmoidLayer class. A SigmoidLayer produces its output using the sigmoid (logistic) function, which is based on the natural exponential. Joone also contains other layer types that you may choose to use instead.
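For reference, the sigmoid function that gives SigmoidLayer its name is the standard

    f(x) = \frac{1}{1 + e^{-x}}

which maps any input smoothly into the range (0, 1); this is one reason the training data in Listing 1 uses values between 0.0 and 1.0.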
Next, each layer is given a name; these names will help you identify the layer later during debugging.
input.setLayerName("input");
hidden.setLayerName("hidden");
output.setLayerName("output");
Each layer must then be sized. We specify the number of "rows" in each layer; the row count corresponds to the number of neurons in that layer.
input.setRows(2);
hidden.setRows(3);
output.setRows(1);
From the code above, you can see that the input layer has two neurons, the hidden layer has three neurons, and the output layer contains one neuron. Having two input neurons and one output neuron is significant for this neural network, because the XOR operator takes two parameters and produces one result.
To make use of these neuron layers, we must also create synapses. In this example we need several synapses, which are created with the following code.
// Input -> hidden connection
FullSynapse synapse_ih = new FullSynapse();
// Hidden -> output connection
FullSynapse synapse_ho = new FullSynapse();
Just as with the neuron layers, the synapses may be named to facilitate program debugging. The following code names the newly created synapses.
synapse_ih.setName("ih");
synapse_ho.setName("ho");
Finally, we must connect the synapses to the appropriate neuron layers. The following code implements this.
// Connect the input layer to the hidden layer
input.addOutputSynapse(synapse_ih);
hidden.addInputSynapse(synapse_ih);
// Connect the hidden layer to the output layer
hidden.addOutputSynapse(synapse_ho);
output.addInputSynapse(synapse_ho);
Now that the neural network has been created, we must create a Monitor object to manage it. The following code creates the monitor object.
// Create a monitor object and set the learning parameters
monitor = new Monitor();
monitor.setLearningRate(0.8);
monitor.setMomentum(0.3);
The learning rate and momentum are parameters that control how training proceeds. Joone uses the backpropagation learning algorithm; for more information about the learning rate and momentum, consult a reference on backpropagation.
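For readers who want the underlying arithmetic, the textbook backpropagation weight update with momentum is shown below. This is the standard rule, given for illustration rather than as Joone's internal code; \eta is the learning rate (0.8 above) and \alpha is the momentum (0.3 above):

    \Delta w_{ij}(t) = -\eta \frac{\partial E}{\partial w_{ij}} + \alpha \, \Delta w_{ij}(t-1)

The momentum term carries a fraction of the previous weight change forward, which damps oscillation and usually speeds convergence.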
This monitor object should be assigned to each neuron layer. The following code implements this.
input.setMonitor(monitor);
hidden.setMonitor(monitor);
output.setMonitor(monitor);
Like many Java objects, the Joone Monitor allows listeners to be added to it. As training progresses, Joone notifies the listeners about the training process. In this simple example, we use:
monitor.addNeuralNetListener(this);
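Passing this to addNeuralNetListener implies that the surrounding class implements Joone's NeuralNetListener interface. A minimal sketch of such a class follows; the callback names (including Joone's own spelling of cicleTerminated) reflect the Joone 1.x interface to the best of this compilation's knowledge, and the method bodies are purely illustrative:

import org.joone.engine.Monitor;
import org.joone.engine.NeuralNetEvent;
import org.joone.engine.NeuralNetListener;

// Minimal sketch of a training listener; bodies are illustrative.
public class XorListener implements NeuralNetListener {

    public void netStarted(NeuralNetEvent e) {
        System.out.println("Training started");
    }

    public void cicleTerminated(NeuralNetEvent e) {
        // Fired at the end of each training cycle. The event source
        // is the Monitor, which exposes the current global error.
        Monitor mon = (Monitor) e.getSource();
        System.out.println("Error: " + mon.getGlobalError());
    }

    public void errorChanged(NeuralNetEvent e) {
        // Fired when the global error value changes.
    }

    public void netStopped(NeuralNetEvent e) {
        System.out.println("Training finished");
    }

    public void netStoppedError(NeuralNetEvent e, String error) {
        System.out.println("Training halted: " + error);
    }
}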
We must now create the input synapse. As mentioned above, we will use a FileInputSynapse to read a disk file. A disk file is not the only input class that Joone can accept; Joone is flexible about input sources, and to enable Joone to receive other input types, you need only create a new synapse class that accepts the input. In this example we will simply use FileInputSynapse, which is first instantiated.
inputStream = new FileInputSynapse();
Next, we must tell the FileInputSynapse which columns to use. The file shown in Listing 1 uses the first two columns for input data. The following code designates the first two columns as input to the neural network.
// The first two columns contain the input values
inputStream.setFirstCol(1);
inputStream.setLastCol(2);
Then we must provide the name of the input file, which comes directly from the user interface; an edit control is provided to collect the input file name. The following code sets the file name on the FileInputSynapse.
// This is the file that contains the input data
inputStream.setFileName(inputFile.getText());
As mentioned above, a synapse is simply a data conduit between neuron layers, and the FileInputSynapse is the conduit through which data enters the neural network. To accomplish this, we must add the FileInputSynapse to the input layer of the neural network. This is done with the following line.
input.addInputSynapse(inputStream);
Now that the neural network has been built, we must create a trainer and connect it to the monitor. The trainer is used to train the neural network, while the monitor runs the neural network through a preset number of training iterations. For each training iteration, data is presented to the neural network and the result is observed. The neural network's weights (stored in the synapse connections between layers) are adjusted according to the error. As training progresses, the error level drops. The following code creates the trainer and attaches the monitor to it.
trainer = new TeachingSynapse();
trainer.setMonitor(monitor);
You will recall that the input file shown in Listing 1 contains three columns. So far we have used only the first and second columns, which specify the inputs to the neural network. The third column contains the expected output value for the inputs given in the first two columns. The trainer must have access to this column in order to determine the error, that is, the gap between the actual output of the neural network and the expected output. The following code creates another FileInputSynapse, prepared to read the same input file as before.
// Set up the FileInputSynapse that provides the expected response values
samples = new FileInputSynapse();
samples.setFileName(inputFile.getText());
This time, the FileInputSynapse must point at the third column. The following code does this and gives the trainer access to this FileInputSynapse.
// The output values are in the third column of the file
samples.setFirstCol(3);
samples.setLastCol(3);
trainer.setDesired(samples);
Finally, the trainer is linked to the output layer of the neural network, which enables the trainer to receive the output of the neural network.
// Connect the trainer to the last layer of the network
output.addOutputSynapse(trainer);
Each layer, as well as the trainer, runs as a background thread. We now start all of them.
input.start();
hidden.start();
output.start();
trainer.start();
Finally, we set a few training parameters. We specify that the input file contains four rows (patterns), that we want to train for 20,000 cycles, and that the network should be learning. If the learning parameter were set to false, the neural network would simply process the input without learning from it. We will discuss input processing in the next section.
monitor.setPatterns(4);
monitor.setTotCicles(20000);
monitor.setLearning(true);
Everything is now ready for the training process. Calling the monitor's Go method starts training in the background.
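In code, this is a single call (the capitalized method name Go follows the Joone Monitor API as described above):

monitor.Go();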
The neural network will now train for 20,000 cycles. When training is complete, the error level should be reasonably low; generally, an error level below 10% is acceptable.
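To tie the fragments together, here is a minimal consolidated sketch of the whole training program. It uses only the calls shown in this article; the package names in the imports follow the Joone distribution, while the class name XorTrainer and the data file path xor.txt are hypothetical, and the listener registration step is omitted for brevity.

import org.joone.engine.FullSynapse;
import org.joone.engine.Monitor;
import org.joone.engine.SigmoidLayer;
import org.joone.engine.learning.TeachingSynapse;
import org.joone.io.FileInputSynapse;

// Consolidated sketch of the XOR training program from this article.
public class XorTrainer {

    public static void main(String[] args) {
        // Create the three layers: 2 input, 3 hidden, 1 output neurons
        SigmoidLayer input = new SigmoidLayer();
        SigmoidLayer hidden = new SigmoidLayer();
        SigmoidLayer output = new SigmoidLayer();
        input.setLayerName("input");
        hidden.setLayerName("hidden");
        output.setLayerName("output");
        input.setRows(2);
        hidden.setRows(3);
        output.setRows(1);

        // Wire input -> hidden -> output with full synapses
        FullSynapse synapse_ih = new FullSynapse();
        FullSynapse synapse_ho = new FullSynapse();
        synapse_ih.setName("ih");
        synapse_ho.setName("ho");
        input.addOutputSynapse(synapse_ih);
        hidden.addInputSynapse(synapse_ih);
        hidden.addOutputSynapse(synapse_ho);
        output.addInputSynapse(synapse_ho);

        // The monitor holds the learning parameters for every layer
        Monitor monitor = new Monitor();
        monitor.setLearningRate(0.8);
        monitor.setMomentum(0.3);
        input.setMonitor(monitor);
        hidden.setMonitor(monitor);
        output.setMonitor(monitor);

        // Columns 1-2 of the data file feed the input layer
        FileInputSynapse inputStream = new FileInputSynapse();
        inputStream.setFirstCol(1);
        inputStream.setLastCol(2);
        inputStream.setFileName("xor.txt"); // hypothetical path
        input.addInputSynapse(inputStream);

        // Column 3 of the same file supplies the expected outputs
        TeachingSynapse trainer = new TeachingSynapse();
        trainer.setMonitor(monitor);
        FileInputSynapse samples = new FileInputSynapse();
        samples.setFileName("xor.txt"); // hypothetical path
        samples.setFirstCol(3);
        samples.setLastCol(3);
        trainer.setDesired(samples);
        output.addOutputSynapse(trainer);

        // Start the background threads and train for 20,000 cycles
        input.start();
        hidden.start();
        output.start();
        trainer.start();
        monitor.setPatterns(4);
        monitor.setTotCicles(20000);
        monitor.setLearning(true);
        monitor.Go();
    }
}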