Implementing AI Programming with the Java Open Source Project Joone
Source: http://www.robotsky.com, 2011-08-25
Many programmers are drawn to AI programming, yet many of them quickly retreat when faced with the complexity of the algorithms involved. In this article, we discuss a Java open source project that can greatly reduce this complexity.
The Java Object-Oriented Neural Engine (Joone) is an open source project that provides Java programmers with a highly adaptable neural network. The Joone source code is released under the LGPL. In short, this means that the source code is free to use and you do not have to pay royalties to use Joone. Joone can be downloaded from http://joone.sourceforge.net/.
Joone allows you to easily create neural networks from a Java program. Joone supports many features, such as multithreading and distributed processing. This means that Joone can leverage the advantages of multiprocessor computers and multiple computers for distributed processing.
Neural network
Joone implements artificial neural networks in Java. An artificial neural network attempts to emulate the function of a biological neural network, the kind of network that forms the brain of almost all advanced life on Earth today. Neural networks are composed of neurons.
As you can see from Figure 1, a neuron consists of a cell body and several long connecting fibers. Neurons connect to one another through these fibers. Whether the network is biological or artificial, signals travel from one neuron to another across these connections, called synapses.
Using Joone
In this article, you will see a simple example of how to use Joone. Neural network topics cover a wide range of areas and are covered in many different applications. In this article, we will show you how to use Joone to solve a very simple pattern recognition problem. Pattern recognition is one of the most common applications in neural networks.
In pattern recognition, a pattern is presented to the neural network to determine whether the network can recognize it. The pattern should tolerate some degree of distortion and still be recognized. This is much like the human ability to recognize something such as a traffic sign: a human can identify a traffic sign on a rainy day, on a sunny day, or at night. Even though these images may look quite different, the human brain can still tell that they are the same image.
When programming Joone, you typically work with two types of objects. Layer objects describe a layer of one or more neurons that share similar characteristics. Neural networks usually have one or two layers of neurons, and these layers are linked by synapses. The synapses carry the pattern to be recognized from one layer to the next.
Synapses do not simply pass the pattern unchanged from one layer to the next. They also scale the individual elements of the pattern, so that some elements are transmitted to the next layer more strongly than others. These scaling factors are called weights, and they form the neural network's storage system. By adjusting the weights stored in the synapses, you change the behavior of the neural network.
Synapses also play another role in Joone: they can be thought of as data conduits. Just as synapses carry patterns from one layer to another, specialized versions of synapses are used to pass patterns into and out of the neural network. The following sections show how a simple, single-layer neural network is constructed and taught to recognize a pattern.
Training Neural Networks
For the purposes of this article, we will teach Joone to recognize a very simple pattern. For this pattern we will examine a binary Boolean operation: XOR. The truth table for the XOR operation is listed below:
X | Y | X XOR Y
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
As you can see from the table above, the XOR operation is true (1) only when X and Y have different values. In all other cases, the XOR operation evaluates to false (0). By default, Joone gets its input from a text file stored on your system. These text files are read using a special synapse called FileInputSynapse. To train for the XOR operation, you must create an input file that contains the data shown above. This file appears in Listing 1.
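The truth table above can be reproduced in plain Java with the built-in ^ operator. This snippet is independent of Joone (the class name XorTable is an arbitrary choice) and simply illustrates the target function the network must learn.

```java
public class XorTable {
    public static void main(String[] args) {
        int[][] inputs = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        for (int[] row : inputs) {
            // Java's ^ operator is bitwise XOR; on 0/1 values it
            // matches the truth table exactly
            int result = row[0] ^ row[1];
            System.out.println(row[0] + " XOR " + row[1] + " = " + result);
        }
    }
}
```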
Listing 1: The contents of the input file to resolve the XOR issue
0.0;0.0;0.0
0.0;1.0;1.0
1.0;0.0;1.0
1.0;1.0;0.0
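The file in Listing 1 is plain text with semicolon-separated values, one training pattern per line. A minimal sketch of generating it follows; the file name xor.txt and the class name are arbitrary choices, not part of Joone.

```java
import java.io.FileWriter;
import java.io.IOException;

public class MakeXorFile {
    public static void main(String[] args) throws IOException {
        // Each line holds: input X; input Y; expected value of X XOR Y
        String[] rows = {
            "0.0;0.0;0.0",
            "0.0;1.0;1.0",
            "1.0;0.0;1.0",
            "1.0;1.0;0.0"
        };
        // try-with-resources closes the file even if a write fails
        try (FileWriter out = new FileWriter("xor.txt")) {
            for (String row : rows) {
                out.write(row + System.lineSeparator());
            }
        }
    }
}
```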
We now examine a simple program that teaches Joone to recognize the XOR operation and produce the correct result. The training process involves submitting the XOR problem to the neural network and observing the result. If the result is not what was expected, the training algorithm adjusts the weights stored in the synapses. The gap between the actual output of the neural network and the expected output is called the error. Training continues until the error falls below an acceptable level, usually expressed as a percentage such as 10%. We now examine the code used to train such a neural network.
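The error just described is often aggregated over all training patterns as a root-mean-square error. The sketch below shows the standard RMSE formula, which is not necessarily the exact measure Joone reports internally; the class name and the sample output values are illustrative assumptions.

```java
public class RmseExample {
    // Root-mean-square error between actual and expected outputs
    static double rmse(double[] actual, double[] expected) {
        double sum = 0.0;
        for (int i = 0; i < actual.length; i++) {
            double diff = actual[i] - expected[i];
            sum += diff * diff;
        }
        return Math.sqrt(sum / actual.length);
    }

    public static void main(String[] args) {
        // Hypothetical network outputs for the four XOR patterns,
        // compared against the expected truth-table values
        double[] actual   = { 0.05, 0.93, 0.91, 0.08 };
        double[] expected = { 0.0,  1.0,  1.0,  0.0  };
        System.out.println("RMSE = " + rmse(actual, expected));
    }
}
```

With outputs this close to the targets, the RMSE comes out below the 10% level the article treats as acceptable.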
The training process begins by constructing the neural network; the input, hidden, and output layers must all be created.
// First, create the three layers
input  = new SigmoidLayer();
hidden = new SigmoidLayer();
output = new SigmoidLayer();
Each layer is created using the Joone class SigmoidLayer. A SigmoidLayer generates its output using the sigmoid (logistic) function, which is based on the natural exponential. Joone also provides other layer types that you might choose instead of the sigmoid layer.
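For reference, the sigmoid activation is the logistic function sigma(x) = 1 / (1 + e^-x), which squashes any real input into the range (0, 1). A minimal sketch, independent of Joone's SigmoidLayer implementation:

```java
public class Sigmoid {
    // Logistic sigmoid: maps any real input into the open interval (0, 1)
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    public static void main(String[] args) {
        System.out.println(sigmoid(0.0));   // exactly 0.5
        System.out.println(sigmoid(10.0));  // close to 1
        System.out.println(sigmoid(-10.0)); // close to 0
    }
}
```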
Next, each layer is assigned a name. These names will help to identify the layer later during debugging.
input.setLayerName("input");
hidden.setLayerName("hidden");
output.setLayerName("output");
Each layer must now be sized. We specify the number of "rows" in each layer; the row count corresponds to the number of neurons in that layer.
input.setRows(2);
hidden.setRows(3);
output.setRows(1);
From the above code, the input layer has two neurons, the hidden layer has three, and the output layer has one. It is important that the network has two input neurons and one output neuron, because the XOR operator takes two parameters and produces one result.
To connect these layers, we must also create synapses. In this case, we use two synapses, implemented with the following code.
// Input -> hidden connection
FullSynapse synapse_IH = new FullSynapse();
// Hidden -> output connection
FullSynapse synapse_HO = new FullSynapse();
As with the layers, the synapses may also be named to aid in debugging. The following code names the new synapses.
synapse_IH.setName("IH");
synapse_HO.setName("HO");
Finally, we must connect the synapses to the appropriate layers. The following code does this.
// Join the input layer to the hidden layer
input.addOutputSynapse(synapse_IH);
hidden.addInputSynapse(synapse_IH);
// Join the hidden layer to the output layer
hidden.addOutputSynapse(synapse_HO);
output.addInputSynapse(synapse_HO);
Now that the neural network has been created, we must create a Monitor object that controls it. The following code creates the monitor.
// Create a monitor object and set the learning parameters
monitor = new Monitor();
monitor.setLearningRate(0.8);
monitor.setMomentum(0.3);
The learning rate and momentum parameters specify how training proceeds. Joone uses the backpropagation learning algorithm; to learn more about the learning rate and momentum, refer to descriptions of backpropagation.
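As a rough sketch of how these two parameters are typically used, the generic backpropagation update rule (the textbook rule, not Joone's internal code) changes each weight by the learning rate times the error gradient, plus momentum times the previous change. All names and values below are illustrative assumptions.

```java
public class WeightUpdate {
    public static void main(String[] args) {
        double learningRate = 0.8;  // how strongly each correction is applied
        double momentum = 0.3;      // how much of the previous change carries over

        double weight = 0.5;        // current weight value
        double previousDelta = 0.0; // weight change from the previous iteration
        double gradient = -0.2;     // example error gradient for this weight

        // Generic backpropagation update with momentum:
        // delta = -learningRate * gradient + momentum * previousDelta
        double delta = -learningRate * gradient + momentum * previousDelta;
        weight += delta;

        System.out.println("new weight = " + weight);
    }
}
```

A higher learning rate makes larger corrections per step; momentum smooths successive corrections so training oscillates less.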
This monitor object must be assigned to each of the layers. The following code does this.
input.setMonitor(monitor);
hidden.setMonitor(monitor);
output.setMonitor(monitor);
Like many Java objects, the Joone monitor allows listeners to be added to it. As training progresses, Joone notifies the listeners about the state of the training process. In this simple example, we use:
monitor.addNeuralNetListener(this);
We must now set up the input synapse. As mentioned earlier, we will use a FileInputSynapse to read a disk file. Disk files are not the only input type Joone can accept; Joone is highly flexible about input sources, and to let it receive other input types you simply create a new synapse class that accepts the input. In this example, we will simply use FileInputSynapse, which is first instantiated.
inputStream = new FileInputSynapse();
Next, you must tell the FileInputSynapse which columns to use. The file shown in Listing 1 uses the first two columns as input data. The following code selects the first two columns as input to the neural network.
// The first two columns contain the input values
inputStream.setFirstCol(1);
inputStream.setLastCol(2);
We must then provide the name of the input file, which comes directly from the user interface: an edit control collects the name of the input file. The following code sets the file name on the FileInputSynapse.
// This is the file that contains the input data
inputStream.setFileName(inputFile.getText());
As mentioned earlier, a synapse is simply a data conduit between layers; the FileInputSynapse is the conduit through which data enters the neural network. To accomplish this, we add the FileInputSynapse to the input layer of the neural network. This is done by the following line.
input.addInputSynapse(inputStream);
Now that the neural network is built, we must create a trainer and configure the monitor. The trainer trains the neural network, while the monitor runs the network for a preset number of training iterations. For each iteration, data is fed to the network and the results are observed. The network's weights (stored in the synapses that connect the layers) are then adjusted based on the error. As training progresses, the error level drops. The following code creates the trainer and attaches it to the monitor.
trainer = new TeachingSynapse();
trainer.setMonitor(monitor);
You will recall that the input file in Listing 1 contains three columns. So far we have used only the first two, which specify the input to the neural network. The third column contains the output value expected for the inputs in the first two columns. The trainer must have access to this column so that the error, the gap between the network's actual output and the expected output, can be determined. The following code creates another FileInputSynapse and prepares it to read the same input file as before.
// Set the file that contains the expected response values
samples = new FileInputSynapse();
samples.setFileName(inputFile.getText());
This time, we want the FileInputSynapse to point at the third column. The following code does this and then tells the trainer to use this FileInputSynapse.
// The expected output values are in the third column of the file
samples.setFirstCol(3);
samples.setLastCol(3);
trainer.setDesired(samples);
Finally, the trainer is linked to the neural network output layer, which allows the trainer to receive the output of the neural network.
// Connect the trainer to the last layer of the network
output.addOutputSynapse(trainer);
We now start a background thread for each of the layers, as well as for the trainer.
input.start();
hidden.start();
output.start();
trainer.start();
Finally, we set some parameters for the training run. We specify that the input file contains four patterns, that we want to train for 20,000 cycles, and that the network is in learning mode. If you set the learning parameter to false, the neural network simply processes the input without learning. We'll discuss input processing in the next section.
monitor.setPatterns(4);
monitor.setTotCicles(20000);
monitor.setLearning(true);
We are now ready to begin training. Invoking the monitor's Go method starts the training process in the background:

monitor.Go();
The neural network will now train for 20,000 cycles. When training completes, the error level should be reasonably low; an error level below 10% is generally acceptable.
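To give a concrete sense of what a trained sigmoid network ends up representing, here is a Joone-free sketch of a forward pass through hand-picked weights that happen to solve XOR. The weights are chosen for illustration, not produced by Joone's training, and only two hidden units are used here (the article's network has three): one hidden unit acts roughly like OR, the other like AND, and the output fires when OR is true but AND is not.

```java
public class XorForwardPass {
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // Forward pass through a tiny 2-2-1 network with hand-picked weights:
    // h1 ~ OR(x, y), h2 ~ AND(x, y), output ~ h1 AND NOT h2, i.e. XOR
    static double xorNet(double x, double y) {
        double h1 = sigmoid(20 * x + 20 * y - 10);
        double h2 = sigmoid(20 * x + 20 * y - 30);
        return sigmoid(20 * h1 - 20 * h2 - 10);
    }

    public static void main(String[] args) {
        System.out.println(xorNet(0, 0)); // near 0
        System.out.println(xorNet(0, 1)); // near 1
        System.out.println(xorNet(1, 0)); // near 1
        System.out.println(xorNet(1, 1)); // near 0
    }
}
```

Training with backpropagation would find some other set of weights with the same behavior; the storage of the learned function in the connection weights is the point being illustrated.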