AI technology in game programming
(Serialization II)

3 Digital neural networks (the digital version)
We have seen that the biological brain is composed of many nerve cells. Similarly, the artificial neural network (ANN) that simulates the brain is built from many artificial nerve cells (Artificial Neurons). An artificial neuron is like a simplified version of a real nerve cell, but simulated electronically. The number of artificial neurons used in a network varies greatly: some networks use fewer than 10, while others may need thousands. It depends entirely on what the network is actually used for.
Interesting fact: a fellow named Hugo de Garis once created and trained a network containing 1,000,000,000 artificial neurons in an ambitious project. He built this network very cleverly on a cellular-automata architecture, the CAM-Brain Machine ("CAM" is short for cellular automata machine). He once boasted that this artificial brain machine would have the intelligence of a cat. Many neural network researchers regard him as a "star", but unfortunately the company that hired him went bankrupt before his dream was fulfilled. He is now in Utah, working on the Utah Brain Project. Time will tell whether his ideas eventually become practical and meaningful.
By now you might be wondering what an artificial neuron actually looks like. The truth is, it does not look like anything; it is just an abstraction. Take a look at Figure 2, which shows one way of representing an artificial neuron.
Hugo de Garis is now a professor at Utah State University. He and his CAM machines can be viewed, with real photos, on his page on the school's website: see http://www.cs.usu.edu/~degaris
Figure 2 An Artificial Neural Cell
In the figure, each letter w in the grey circles on the left represents a floating-point number called a weight. Each input entering the artificial neuron is associated with a weight w, and these weights determine the overall activity of the neural network. For now, imagine that all the weights are set to random decimal numbers between -1 and 1. Because a weight can be positive or negative, it can exert different effects on the input associated with it: a positive weight has an excitatory effect, and a negative weight has an inhibitory effect. When the input signals enter the neuron, each value is multiplied by its corresponding weight before entering the large circle in the figure. The "core" of the large circle is a function called the activation function, which adds up all the weighted inputs to form a single activation value. The activation value is also a floating-point number and can be positive or negative. The neuron's output is then generated from this activation value: if the activation value exceeds a threshold (in this example we assume the threshold is 1.0), the neuron outputs a signal with value 1; if the activation value is smaller than the threshold 1.0, it outputs 0. This is the simplest kind of activation function for an artificial neuron. Here the output generated from the activation value is a step function. Take a look at Figure 3 and you can guess why it has this name.
Figure 3 step excitation function
As the graph shows, the output stays at 0 until the activation value reaches the threshold, then jumps straight up to 1, like a step; that is where the step function gets its name. Even though the activation value is the sum of many inputs, the output itself can only take these two values.
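The step activation described above can be sketched in a few lines of code. This is a minimal illustration, not code from the book; the function name and the default threshold of 1.0 are my choices, following the example in the text.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A sketch of the step-function neuron described above: sum each input
// multiplied by its weight, then output 1 if the sum exceeds the
// threshold, and 0 otherwise.
double StepNeuron(const std::vector<double>& inputs,
                  const std::vector<double>& weights,
                  double threshold = 1.0)
{
    double activation = 0.0;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        activation += inputs[i] * weights[i];

    // The step: jump from 0 to 1 once the threshold is exceeded.
    return activation > threshold ? 1.0 : 0.0;
}
```

For example, two inputs of 1 with weights 0.6 each give an activation of 1.2, which exceeds the threshold, so the neuron outputs 1.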
Don't worry if none of this has sunk in yet. TIPS: don't struggle to "feel" it; just move forward with me. After going through several more parts of this chapter, you will eventually begin to see what it all means. For now, just relax and keep reading.
3.1 Now for some math
In the discussion that follows I will keep the mathematics to an absolute minimum, but learning a little mathematical notation is still very useful. I will feed you the mathematics bit by bit, introducing each new concept only when we reach the relevant section. I hope this approach will let your mind absorb the concepts more comfortably, and you will see how the mathematics is applied at every stage in the development of a neural network. Now let's look at how to express in mathematics everything I have told you so far.
An artificial neuron (from now on I will simply say "neuron") can have any number n of inputs, where n represents the total. The n inputs can be written as:

x1, x2, x3, x4, x5, ..., xn

The n weights can likewise be written as:

w1, w2, w3, w4, w5, ..., wn

Remember that the activation value is the sum of the products of all the inputs and their corresponding weights, so it can be written as:

a = w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + ... + wnxn
As mentioned in Chapter 5, "Creating a Better Genetic Algorithm", a summation written this way can be abbreviated with the Greek letter Σ:

a = Σ (i = 1 to n) wi xi

Note: the inputs of a neuron and its weights can each be considered an n-dimensional vector. You will often see them referred to this way in the technical literature.
Next, how do we implement this in a program? Assuming the input array and the weight array have been initialized as x[n] and w[n], the summation code is as follows:
double activation = 0;
for (int i = 0; i < n; ++i)
{
    activation += x[i] * w[i];
}
Figure 4 shows this equation as a diagram. Do not forget that if the activation value exceeds the threshold, the neuron outputs 1; if the activation value is smaller than the threshold, the neuron outputs 0. This corresponds to the excitation and inhibition of a biological neuron. Assume a neuron has five inputs, and that its weights w have each been initialized to a random value between -1 and 1 (-1 < w < 1). Table 2 shows the summation that produces the activation value.
Figure 4 The neuron activation function
If we assume the neuron's threshold is 1, then because the activation value 1.1 > the threshold 1, the neuron outputs 1.
Before proceeding, make sure you understand exactly how the activation value is calculated.
Table 2 Calculating a neuron's activation value

Input | Weight | Input x weight | Running sum
  1   |  0.5   |      0.5       |    0.5
  0   | -0.2   |      0.0       |    0.5
  1   | -0.3   |     -0.3       |    0.2
  1   |  0.9   |      0.9       |    1.1
  0   |  0.1   |      0.0       |    1.1
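The summation in Table 2 can be checked directly with the loop shown earlier. This snippet simply plugs the table's five inputs and weights into that loop; the function name is mine, for illustration.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Sum the products of inputs and weights, exactly as in the earlier
// loop. With the values from Table 2 the running sum goes
// 0.5, 0.5, 0.2, 1.1, 1.1, ending at 1.1.
double Activation(const std::vector<double>& x, const std::vector<double>& w)
{
    double activation = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        activation += x[i] * w[i];
    return activation;
}
```

With inputs {1, 0, 1, 1, 0} and weights {0.5, -0.2, -0.3, 0.9, 0.1} the result is 1.1, which exceeds the threshold of 1, so this neuron fires.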
3.2 I know what a neuron is now, but what is it for?
Biological neurons in the brain connect to other neurons. To create an artificial neural network, artificial neurons must likewise be connected together. There are many different ways of connecting them, but the easiest to understand and most widely used is to link the neurons together layer by layer, as shown in Figure 5. This type of network is called a feedforward network. The name comes from the fact that the output of each layer of neurons is fed forward to the next layer of the network (the layer above it in the figure), until the output of the whole network is produced.
Figure 5 A Feed-forward Network
The figure shows a network with three layers (though the input layer does not consist of neurons, so there are really only two layers of neurons). Each input in the input layer is fed to every neuron in the hidden layer. The output of each hidden-layer neuron is then connected to every neuron in the next layer, the output layer. Only one hidden layer is drawn; in general a feedforward network can have any number of hidden layers, but one is usually enough for the majority of problems you will deal with. In fact, some problems do not require any hidden units at all: you can simply connect the inputs directly to the output neurons. Also, the number of neurons I chose for Figure 5 is completely arbitrary. Each layer can have any number of neurons, depending entirely on the complexity of the problem to be solved. However, the more neurons there are, the slower the network runs. For this reason, and for others I will explain in Chapter 9, the network should always be kept as small as possible.
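The layer-by-layer feeding described above can be sketched in code. This is a minimal illustration of the idea, not the book's implementation: each layer is just a list of weight vectors (one per neuron), the layer sizes are arbitrary, and the step activation with threshold 1.0 from earlier is reused.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// One layer = one weight vector per neuron.
using Layer = std::vector<std::vector<double>>;

// Feed an input vector through a single layer of step-function neurons.
std::vector<double> FeedLayer(const std::vector<double>& inputs,
                              const Layer& layer, double threshold = 1.0)
{
    std::vector<double> outputs;
    for (const auto& weights : layer) {
        double activation = 0.0;
        for (std::size_t i = 0; i < inputs.size(); ++i)
            activation += inputs[i] * weights[i];
        outputs.push_back(activation > threshold ? 1.0 : 0.0);
    }
    return outputs;
}

// A feedforward pass: the hidden layer's outputs become the
// output layer's inputs, exactly as in Figure 5.
std::vector<double> FeedForward(const std::vector<double>& inputs,
                                const Layer& hidden, const Layer& output)
{
    return FeedLayer(FeedLayer(inputs, hidden), output);
}
```

For example, with two inputs of 1, a hidden layer of two neurons (weights {0.6, 0.6} and {0.2, 0.2}), and one output neuron (weights {1.5, 0.3}), the first hidden neuron fires (1.2 > 1), the second does not (0.4), and the output neuron then fires (1.5 > 1).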
At this point, I can imagine you may be confused by all this information. In my opinion, the best thing I can do now is introduce a practical, real-world application of neural networks, in the hope of getting your own brain's neurons excited! Sounds good? OK, here we go...
You may have heard or read that neural networks are often used for pattern recognition. That is because they are good at mapping an input state (the pattern to be recognized) to an output state (the pattern they have been trained to recognize).
Let's see how it is done, using character recognition as an example. Imagine a panel made up of an 8x8 grid of cells. A small lamp sits in each cell, and each lamp can be switched on (the cell lights up) or off (the cell goes dark) independently, so the panel can be used to display the ten numeric digits. Figure 6 shows the digit "4".
Figure 6 matrix points used for character display
To solve this problem, we design a neural network that takes the state of the panel as input and outputs a 1 or a 0: an output of 1 means the ANN believes the digit "4" is displayed, and an output of 0 means it believes "4" is not displayed. The network therefore needs 64 inputs (one for each grid cell), a hidden layer of some number of neurons, and an output layer with a single neuron, to which all the hidden-layer outputs are fed. I really hope you can picture this in your mind, because drawing all those circles and connections for you would be no fun at all <smile>.
Once the network has been created, it must be trained to recognize the digit "4". One way to do this is as follows. First, initialize all the network's weights to random values. Then give it a series of inputs, in this example representing different panel configurations. For each configuration we check what the network outputs and adjust the weights accordingly. If the input pattern we feed the network is not a "4", we know the network should output 0, so for each non-"4" pattern the weights are adjusted so that the output tends toward 0. When the "4" pattern is presented, the weights are adjusted so that the output tends toward 1.
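The text only says the weights are "adjusted" toward the desired output without giving a rule; the exact method used in the book (backpropagation) is discussed later. As a stand-in illustration of what "adjusting toward the target" can look like, here is the classic perceptron update rule, which nudges each weight in proportion to the error. The rule, the function name, and the learning rate are my additions, not the book's method.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Perceptron-style update (an illustration, not the book's algorithm):
// move each weight a small step in the direction that reduces the
// difference between the target output and the actual output.
void AdjustWeights(std::vector<double>& weights,
                   const std::vector<double>& inputs,
                   double target, double output,
                   double learningRate = 0.1)
{
    double error = target - output;  // e.g. target 1 for "4", 0 otherwise
    for (std::size_t i = 0; i < weights.size(); ++i)
        weights[i] += learningRate * error * inputs[i];
}
```

For example, if the network output 0 when the target was 1, a weight of 0.5 on an active input of 1 would be nudged up to 0.6; repeated over the whole training set, the outputs drift toward their targets.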
If you think about this network, you can see that it is easy to increase the number of outputs to 10, so that after training the network can recognize all the digits from 0 to 9. But why stop there? We can increase the outputs further so that the network can recognize all the characters of the alphabet. This is essentially how handwriting recognition works. For each character, the network must undergo a great deal of training so that it learns the various versions of that character. In the end, the network can not only recognize the handwriting it was trained on, but also shows a remarkable ability to generalize. That is, if a piece of handwriting differs slightly from every sample in the training set, the network still has a good chance of recognizing it. This ability to generalize has made neural networks an invaluable tool for countless applications, from face recognition and medical diagnosis to horse-race prediction, not to mention the navigation of bots (robots used as game characters) in computer games, or of real hardware robots.
This type of training is called supervised learning, and the data used for training is called the training set. The weights can be adjusted in many different ways; the most common method for this kind of problem is backpropagation (backprop or BP for short). I will discuss backpropagation later in the book, when you will train a neural network to recognize mouse gestures. In the remainder of this chapter, I will focus on another kind of training, one that requires no supervisor at all: unsupervised learning.
With that, I have covered the basics. Now let's move on to something interesting and introduce your first code project.