Artificial Intelligence Techniques in Game Programming
(serialization continued)
3 The Digital Version of the Neural Network
We saw above that a creature's brain is made up of many nerve cells; likewise, an artificial neural network that simulates the brain is made up of many small structural modules called artificial nerve cells (artificial neurons). An artificial neuron is like a simplified version of a real nerve cell, but one simulated by electronic means. The number of artificial neurons needed in a network varies greatly: some neural networks need only 10 or fewer, while others may require thousands. It all depends on what the network is actually intended to do.
An interesting aside: a fellow named Hugo de Garis once created and trained a network of 1,000,000,000 artificial nerve cells in an ambitious project. The network was cleverly built using a cellular automaton structure, tailored for a client on a machine called the CAM-Brain Machine (CAM is the abbreviation for cellular automata machine). De Garis boasted that this artificial network machine would have the intelligence of a cat. Many neural network researchers thought his head was off in the stars, but unfortunately the company that hired him went bankrupt before his dream could come true. He now leads the Utah Brain Project at Utah State University. Time will tell whether his ideas eventually become something that actually makes sense. [1]
[1] Hugo de Garis is now a professor at Utah State University; he and his CAM-Brain Machine, with real photographs, can be seen on a page of the school's website, see http://www.cs.usu.edu/~degaris
Fig. 2 An artificial nerve cell
I suspect you now want to know what an artificial nerve cell actually looks like. The truth is, it doesn't really look like anything; it is just an abstraction. Take a look at Figure 2, which shows one form of artificial nerve cell.
In the figure, the letters W marked in the several grey circles on the left represent floating-point numbers, called weights. Each input entering the artificial nerve cell is associated with a weight w, and it is these weights that determine the overall activity of the neural network. For now, you can assume that all these weights are set to random values between -1 and 1. Because a weight can be negative, it can affect its associated input in different ways: a positive weight has an excitatory effect, while a negative weight has an inhibitory effect.

When the input signals enter the nerve cell, each value is multiplied by its corresponding weight before being passed to the large circle in the diagram. The "nucleus" of the large circle is a function, called the activation function, which adds all these weighted inputs together to form a single activation value. The activation value is also a floating-point number, and it too can be positive or negative. The output of the neuron is then determined from this activation value: if the activation value exceeds a certain threshold (let us assume the threshold is 1.0), the neuron outputs a 1; if the activation value is less than the threshold 1.0, it outputs a 0. This is one of the simplest kinds of activation function for an artificial nerve cell. Here, the output produced from the activation value is a step function [2]. One look at Figure 3 and you can guess where the name comes from.
Fig. 3 The step activation function
[2] From the figure, the step function appears to be a function of one variable, whereas the activation function, which sums several inputs, should be a function of many variables; the two need to be distinguished.
If none of this means much to you yet, don't worry. The trick is: don't try to force it, just drift along with me for the time being. After working through several sections of this chapter, you will eventually begin to understand what it all means. For now, just relax and read on.
3.1 Now for Some Math
In the discussion that follows, I will try to keep the mathematics to an absolute minimum, but learning a little mathematical notation will be useful later. I will feed you the math a little at a time, introducing each new concept only when we reach the section that needs it. I hope this approach will let your mind absorb each concept more comfortably, and let you see how the mathematics is applied at every stage of developing your neural network. Now let's look at how to express everything I have told you so far in mathematical form.
An artificial nerve cell (from now on, I will simply say "nerve cell" or "neuron") can have any number n of inputs, where n represents the total number of inputs. The n inputs can be represented by the following expression:
x1, x2, x3, x4, x5, ..., xn
Likewise, the n weights can be expressed as:
w1, w2, w3, w4, w5, ..., wn
Keep in mind that the activation value is the sum of all the inputs multiplied by their corresponding weights, so it can now be written as:
a = w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + ... + wnxn
A summation written this way can, as I mentioned in Chapter 5, "Building a Better Genetic Algorithm", be abbreviated using the Greek capital sigma Σ:
a = Σ wi xi   (summing over i = 1, 2, ..., n)
The set of inputs to the network, and the set of weights of each neuron, can each be regarded as an n-dimensional vector. You will often see them referred to this way in the technical literature.
Now let's look at how this is implemented in a program. Assuming the input array and the weight array are declared and initialized as x[n] and w[n], the summation code is as follows:
double activation = 0;

for (int i = 0; i < n; ++i)
{
    activation += x[i] * w[i];
}
Figure 4 represents this equation graphically. Don't forget: if the activation value exceeds the threshold, the nerve cell outputs a 1; if it is less than the threshold, the output is 0. This corresponds to the excitation and inhibition of a biological nerve cell. Assume a neuron has 5 inputs, and that each weight w is initialized to a random value between -1 and 1 (-1 < w < 1). Table 2 illustrates the summation process for the activation value.
Fig. 4 The activation function of a nerve cell
If we assume an activation threshold of 1, then since the activation value 1.1 > 1, this nerve cell will output a 1.
Before reading further, make sure you understand exactly how the activation value is calculated.
Table 2 Calculation of a nerve cell's activation value
Input | Weight | Input x Weight | Running Sum
------|--------|----------------|------------
  1   |  0.5   |   0.5          |   0.5
  0   | -0.2   |   0            |   0.5
  1   | -0.3   |  -0.3          |   0.2
  1   |  0.9   |   0.9          |   1.1
  0   |  0.1   |   0            |   1.1
3.2 OK, I know what a nerve cell is, but what do I do with it?
In the brain, biological nerve cells are interconnected with other nerve cells. To create an artificial neural network, artificial nerve cells are interconnected in the same way. They can be connected in many different ways; the easiest to understand, and the most widely used, is to connect the nerve cells layer by layer, as shown in Figure 5. This type of neural network is called a feedforward network. The name comes from the fact that the output of each layer of nerve cells is fed forward to the next layer (in the figure, the layer above it), until the output of the whole network is produced.
Figure 5 A feedforward network
As the figure shows, the network has three layers in total (though since the input layer contains no nerve cells, there are really only two layers of neurons). Each input in the input layer is fed to every nerve cell in the hidden layer, and the output of every nerve cell in the hidden layer is in turn connected to every nerve cell in the next layer (the output layer). Only one hidden layer is drawn here; in general a feedforward network can have any number of hidden layers, but one layer is usually enough for most of the problems you will face. In fact, some problems need no hidden layer at all: you simply connect the inputs directly to the output nerve cells. Also, the number of nerve cells I chose for Figure 5 is entirely arbitrary. Each layer can have any number of nerve cells, depending entirely on the complexity of the problem to be solved. The more neurons there are, however, the slower the network runs, and for this reason, and several others (which I will explain in Chapter 9), the size of a network should always be kept as small as possible.
I can imagine that you may be a little dazed by all this information. The best thing I can do at this point, I think, is to introduce a practical, real-world example of a neural network, which with any luck will excite some of your own brain's neurons. OK, here it comes...
You may have heard or read that neural networks are often used for pattern recognition. This is because they are good at mapping an input state (the pattern they are trying to recognize) onto an output state (the pattern they have been trained to recognize).
Now let's see how this is done, using character recognition as an example. Imagine a panel made up of an 8x8 grid of cells. Each cell holds a small lamp that can be independently switched on (cell lit) or off (cell dark), so that the panel can be used to display the ten digit symbols. Figure 6 shows the digit "4".
Figure 6 The dot matrix used to display characters
To solve this problem, we must design a neural network that receives the state of the panel as its input and then outputs a 1 or a 0: an output of 1 means the network confirms that the digit "4" is being displayed, while an output of 0 means that "4" is not being displayed. So the neural network needs 64 inputs (one for each cell of the panel), a hidden layer made up of a number of nerve cells, and an output layer containing a single nerve cell, to which all the hidden-layer outputs are fed. I hope you can draw this picture in your head, because drawing all those little circles and connecting lines for you would not be a pleasant task. <smile>
Once the neural network has been created, it must be trained to recognize the digit "4". This can be done by first initializing all the network's weights to random values, and then presenting it with a series of inputs, in this case inputs representing the different configurations of the panel. For each input configuration, we check what the network outputs and adjust the weights accordingly. If the input pattern we feed the network is not a "4", then we know the network should output a 0, so for every non-"4" character the weights are adjusted so that the output tends toward 0. When a pattern representing "4" is presented, the weights are adjusted to make the output tend toward 1.
If you think about this network, you will see that it is easy to increase the number of outputs to 10; the network could then be trained to recognize all the digits from 0 to 9. But why stop there? We could increase the number of outputs further, so that the network could recognize every character of the alphabet. This, in essence, is how handwriting recognition works. For each character, the network undergoes a great deal of training so that it learns the different versions of that character. In the end, the network can not only recognize the handwriting it was trained on, but also shows a remarkable ability to generalize. That is, if a piece of handwriting differs slightly from every example in the training set, the network still has a good chance of recognizing it. It is this ability to generalize that makes the neural network an invaluable tool, usable in countless applications, from face recognition and medical diagnosis to race prediction, not to mention the navigation of bots in computer games (robots that act as game characters) or of hardware robots (real robots).
This type of training is called supervised learning, and the data used for training is called the training set. There are many different ways to adjust the weights; the most common for this kind of problem is backpropagation (backprop, or BP for short). I will discuss backpropagation later in the book, when you will get to train a neural network to recognize mouse movements. In the remainder of this chapter, however, I will focus on a form of training that needs no supervisor, known as unsupervised learning.
Now that I have introduced some of the basics, let's look at something more interesting, and at your first code project.