Original book: "AI Techniques for Game Programming"
Excerpt from: http://blog.csdn.net/starxu85/article/details/3143533
Original: http://blog.csdn.net/zzwu/article/category/243067
"Getting Started with Neural Networks" (first in a serial): introducing neural networks in plain language
(Chinese edition of "Neural Networks in Plain English")
Because we do not understand the brain very well, we are constantly tempted to use the latest technology as a model for explaining it. In my childhood we were always assured that the brain was a telephone switchboard. ("What else could it be?") I was amused to see that Sherrington, the great British neuroscientist, thought the brain worked like a telegraph system. Freud often compared the brain to hydraulic and electromagnetic systems. Leibniz compared it to a mill, and I am told that some of the ancient Greeks thought the brain functioned like a catapult. At present, obviously, the metaphor is the digital computer. - John R. Searle [Note 1]
Introduction to Neural Networks
For a long time, artificial neural networks were a complete mystery to me. Of course, I had read about them in the literature, and I could describe their structure and working mechanism, but I never got that "aha!" feeling, the moment when a very difficult concept sitting in your mind suddenly snaps into understanding. My head felt as if it were being pounded by a hammer, or like that poor guy in the movie Animal House, bent over in pain and yelling "Thank you, sir! May I have another?" I simply could not translate the mathematical concepts into practical applications. Sometimes I even wanted to round up all the authors of the neural network books I had read, tie them to a tree, and yell at them: "Stop giving me mathematics, just give me something
practical!" But needless to say, that was never going to happen, and I had to fill the void myself... So I did the only thing I could do under the circumstances: I got to work. <grin> A few weeks later, on a beautiful day, while I was on a seaside vacation in Scotland gazing out over a misty bay, my mind was suddenly struck as if by lightning, and I realized how artificial neural networks actually work. I had my "aha!" feeling at last. But all I had with me was a tent, a sleeping bag, and half a box of nachos; no computer on which I could quickly write some code to confirm my intuition. Arghhhhh! That was the moment I decided I should buy a laptop. Anyway, a few days later I came home and immediately set my fingers flying over the keyboard. A few hours later my first artificial neural network program had compiled and run, and it worked very well. Naturally the code was a bit messy and needed tidying up, but it worked, and, more importantly, I knew why it worked. I can tell you I was a very proud man that day. That "aha!" feeling is what I hope this book passes on to you. You may have tasted a little of it when we finished the genetic algorithms, but if you want that feeling at its most wonderful, you will have to wait until you have studied the whole neural network part.
A Biological Neural Network - The Brain
Your brain is a gray, cream-like mass. It does not work like the CPU in a computer, with a single processing unit. If you took a fresh corpse preserved in formalin, carefully sawed open its skull, and removed the top, you would see the familiar wrinkled brain tissue. The outer layer of the brain is all wrinkles, like a large walnut [Figure 0, left]; this layer of tissue is called the cortex. If you carefully lifted the whole brain out of the skull, took a surgeon's scalpel, and sliced through the brain, you would see two layers [Figure 0, right]: a gray outer layer (this is the origin of the term "gray matter", though a fresh brain not fixed in formalin is actually pink) and a white inner layer. The gray layer is only a few millimeters thick and is tightly packed with billions of tiny cells called neurons (nerve cells). The white layer, beneath the cortical gray matter, occupies most of the brain's volume and is made up of the countless connections between the neurons. The cortex is wrinkled like a walnut so that a large surface area can be crammed into a small space; this accommodates many more nerve cells than a smooth cortex would. The human brain contains about 10G (that is, 10 billion) of these tiny processing units; the brain of an ant contains about 250,000. Table 1 below shows the number of neurons in humans and several animals.
Figure 0: Brain shape and slice. Left: the brain hemispheres, wrinkled like a walnut. Right: a slice showing the gray cortex and the white matter beneath it.
Table 1: Number of neurons in humans and several animals

Animal       | Number of nerve cells (order of magnitude)
-------------|-------------------------------------------
Snail        | 10,000 (= 10^4)
Bee          | 100,000 (= 10^5)
Hummingbird  | 10,000,000 (= 10^7)
Mouse        | 100,000,000 (= 10^8)
Human        | 10,000,000,000 (= 10^10)
Elephant     | 100,000,000,000 (= 10^11)
Figure 1: Structure of a nerve cell
Within the first nine months of human life, these cells are created at the astonishing rate of about 25,000 per minute. Neurons are quite unlike any other type of cell in the body: each nerve cell has a wire-like axon, sometimes stretching several centimeters in length, which transmits signals to other nerve cells. The structure of a nerve cell is shown in Figure 1. It consists of a cell body (soma), a number of dendrites, and a long axon. The cell body is a star-shaped sphere containing a nucleus. The dendrites grow out of the cell body in all directions, can themselves branch, and are used to receive signals. The axon also has many branches. The axon's branch endings (terminals) make contact with the dendrites of other nerve cells, forming what are called synapses (not shown in the figure), and a nerve cell sends signals to other nerve cells through its axon and these synapses. Each nerve cell connects through its dendrites to roughly 10,000 other nerve cells. This means the connections between all the neurons in your head may number around 1,000,000,000,000,000; that is more connections than 100 trillion modern telephone exchanges have. So it's no wonder we sometimes get headaches.
Interesting fact: it has been estimated that if you took the axons and dendrites of all the neurons in one person's brain and laid them end to end in a straight line, they would reach from the Earth to the Moon and back again. If you connected the axons and dendrites of every human brain on Earth, the line would stretch to the nearest galaxy.
Neurons exchange signals through an electro-chemical process. Input signals come from other nerve cells: the axons of those neurons (specifically, their terminals) form synapses with this neuron's dendrites, and signals enter the cell through these synapses on the dendrites. How signals actually travel in the brain is a rather complicated process, but for our purposes the important thing is to regard it as working like a modern computer, operating on a series of 0s and 1s. That is, a neuron in the brain has only two states: firing (excited) or not firing (inhibited). The strength of the emitted signal is constant; only its frequency changes. By some method we do not yet fully understand, a neuron adds up all the signals arriving on its dendrites; if the sum exceeds a certain threshold, the nerve cell fires and sends an electrical signal down its axon to the other neurons. If the sum of the signals does not reach the threshold, the nerve cell does not fire. This explanation is somewhat simplified, but it is good enough for our purposes. It is the sheer number of connections that gives the brain its incredible power. Although each nerve cell works at a frequency of only about 100 Hz, each one operates in parallel as an independent processing unit, which gives the human brain the following very striking features:
- It can learn without supervision. One of the incredible facts about our brains is that they learn on their own, without the supervision or guidance of a teacher. If a nerve cell is stimulated at a high frequency over a period of time, the strength of the connections between it and the neurons feeding it input changes in a way that makes the cell easier to excite the next time it is stimulated. This mechanism was described more than 50 years ago by Donald Hebb in his book The Organization of Behavior. He wrote:
"When a axon of nerve cell a repeatedly or lastingly stimulates another nerve cell B, then one or both of the two neurons will have a growth process or metabolic change, making one of the excitation B cells, its efficacy will increase" |
Conversely, if a nerve cell is not stimulated for a period of time, the effectiveness of its connections slowly decays. This phenomenon is known as plasticity.

- It is damage tolerant. Even when a large part of the brain is damaged, it can still carry out complex work. In one famous experiment, rats were trained to run a maze; scientists then cut away parts of their brains, more and more each time, and found that even rats with a large portion of the brain removed could still find their way through the maze. This demonstrates that, in the brain, knowledge is not stored in any one local place. Other experiments have shown that if a small part of the brain is damaged, nerve cells can regrow the damaged connections.

- It processes information extremely efficiently. Compared with data transfer in a digital computer's CPU, the electro-chemical transmission of signals between neurons is very slow, but because the neurons work in parallel, the brain can process enormous amounts of data at once. For example, when the visual cortex processes an image signal arriving from the retina, the job is done in about 100 ms. Given that the average operating frequency of your nerve cells is only about 100 Hz, 100 ms allows time for only about 10 computational steps. Think about how much data comes in through your eyes, and you can see what an incredible feat this is.

- It is good at generalization. Unlike a digital computer, one thing the brain is extremely good at is pattern recognition, and it can generalize from familiar information. For example, we can read text written by someone else even if we have never seen that person's handwriting before.

- It is conscious. Consciousness is a topic widely and hotly debated by neuroscientists and artificial intelligence researchers. A great deal has been written on the subject, but there is still no substantive agreement on what consciousness really is. We cannot even agree whether only human beings are conscious, or whether consciousness extends to humanity's close relatives in the animal kingdom. Is a gorilla conscious? Is your cat conscious? Was the fish you ate for dinner last week conscious?

An artificial neural network (ANN), then, attempts to simulate this massive parallelism within the size constraints of today's digital computers, and to make the result display many features similar to those of a biological brain. Let's take a look at how they perform.
AI Techniques for Game Programming
"Getting Started with Neural Networks" (Serial II)

3. The Digital Version

We saw above that the biological brain is made up of many nerve cells; in the same way, the artificial neural network (ANN) that simulates the brain is built from many small modules called artificial neurons (also called artificial nerve cells). An artificial neuron is like a simplified version of a real nerve cell, but simulated electronically. The number of artificial neurons used in a network varies widely: some neural networks need only 10 of them, while others may need thousands. It all depends on what the network is actually intended to do.
Interesting fact: a fellow named Hugo de Garis once ran an ambitious project to create and train a network of 1,000,000,000 artificial nerve cells. The network was ingeniously constructed using a cellular automata structure and ran on a machine custom-built for the project, the CAM-Brain Machine (CAM is short for Cellular Automata Machine). He boasted that this artificial network would have the intelligence of a cat. Many neural network researchers thought he was reaching for the stars, and unfortunately the company that hired him went bankrupt before his dream could come true. He is now at Utah State University, where he leads the Utah Brain Project. Time will tell whether his ideas eventually turn into something of real significance.
You may now be wondering what sort of thing an artificial neuron actually is. In fact, it doesn't really look like anything; it is just an abstraction. Take a look at Figure 2, which shows one form of artificial neuron.
[Translator's note: Hugo de Garis now teaches at Utah State University. Information about him and his CAM machine, including real photographs, can be found on a page of the school's website: http://www.cs.usu.edu/~degaris]
Figure 2: An artificial neuron

In the figure, the letter w in each of the gray circles on the left stands for a floating-point number called a weight. Every input entering the artificial neuron is associated with a weight w, and it is these weights that determine the overall activity of the neural network. For now, imagine that all the weights have been set to random values between -1 and 1. Because a weight can be negative, it can affect its input in different ways: a positive weight has an excitatory effect, a negative weight an inhibitory one. When the input signals enter the neuron, each value is multiplied by its corresponding weight before being fed into the large circle in the figure. The "nucleus" of the large circle is a function called the activation function, which adds all of these weight-adjusted inputs together to form a single activation value. The activation value is also a floating-point number and can be positive or negative. The neuron's output is then produced from this activation value: if the activation exceeds a certain threshold (as an example, let us assume a threshold of 1.0), the neuron outputs a 1; if the activation is less than the threshold of 1.0, it outputs a 0. This is one of the simplest kinds of activation function for an artificial neuron. Here, the function producing the output from the activation value is a step function; one look at Figure 3 and you can guess where the name comes from.

Figure 3: The step activation function

Note from the graph that the step function takes a single argument, whereas the activation stage sums many inputs into a single value; the two stages should be kept distinct.
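To pin this down, here is a tiny sketch of the step activation in code (my own illustrative C++, not the book's listing; the threshold of 1.0 is just the example value used above):

// Step activation: output 1 if the summed, weight-adjusted input reaches
// the threshold, otherwise output 0. (Illustrative sketch, not the book's code.)
int StepFunction(double activation, double threshold = 1.0)
{
    return activation >= threshold ? 1 : 0;
}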
If none of this has quite clicked yet, don't worry. The trick is this: don't try to force the feeling; just relax and follow along with me for now. After working through a few more parts of this chapter, things will finally start to make sense. Until then, just relax and read on.

3.1 Now for Some Math
In the discussion that follows I will try to keep the mathematics to an absolute minimum, but it will be useful to learn a little mathematical notation as we go. I will feed you the math bit by bit, introducing each new concept only when we reach the section that needs it. That way I hope your mind can absorb the ideas more comfortably, and you will see how the mathematics is applied at each stage in the development of a neural network. First, let's see how to express everything I have told you so far in mathematical form. An artificial neuron (from now on I will abbreviate "artificial nerve cell" to just "neuron") can have any number n of inputs, where n represents the total. The n inputs can be written as:

x1, x2, x3, x4, x5, ..., xn

The same goes for the n weights, which can be written as:

w1, w2, w3, w4, w5, ..., wn

Remember, the activation value is the sum of all the inputs multiplied by their corresponding weights, so it can now be written as:

a = w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + ... + wnxn

This can be simplified using the Greek capital sigma, which I mentioned in Chapter 5, "Building a Better Genetic Algorithm":

a = Σ wi xi   (the sum taken over i = 1 to n)
Note: the inputs to a neural network and the weights of a neuron can each be regarded as an n-dimensional vector, and you will often see them referred to that way in the technical literature.

Now let's examine how this looks in a program. Assuming the input array and the weight array have been initialized as x[n] and w[n], the summation code is as follows:

double activation = 0;

for (int i = 0; i < n; ++i)
{
    activation += x[i] * w[i];
}

This equation is shown graphically in Figure 4. Do not forget: if the activation value exceeds the threshold, the neuron outputs a 1; if the activation is less than the threshold, the neuron outputs a 0. This is the equivalent of a biological neuron firing or being inhibited. Let us assume a neuron with 5 inputs whose weights w have each been initialized to a random value between -1 and 1 (-1 < w < 1). Table 2 shows the step-by-step summation of the activation value.
Figure 4: The activation function of a neuron

If we assume the required threshold is 1.0, then since the activation value of 1.1 exceeds the threshold of 1.0, this neuron will output a 1.
Before reading further, make sure you understand exactly how the activation value is calculated.
Table 2: Calculating a neuron's activation value

Input | Weight | Input * Weight | Running Total
------|--------|----------------|--------------
1     | 0.5    | 0.5            | 0.5
0     | -0.2   | 0              | 0.5
1     | -0.3   | -0.3           | 0.2
1     | 0.9    | 0.9            | 1.1
0     | 0.1    | 0              | 1.1
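As a quick check of Table 2, this little sketch (illustrative code of mine, using the five input/weight pairs from the table) reproduces the running totals and applies the 1.0 threshold:

#include <cstdio>

int main()
{
    // The five input/weight pairs from Table 2.
    double x[] = { 1, 0, 1, 1, 0 };
    double w[] = { 0.5, -0.2, -0.3, 0.9, 0.1 };
    const double threshold = 1.0;

    double activation = 0;

    for (int i = 0; i < 5; ++i)
    {
        activation += x[i] * w[i];
        std::printf("after input %d: running total = %.1f\n", i + 1, activation);
    }

    // The final activation is 1.1, which exceeds the threshold of 1.0,
    // so the neuron fires and outputs 1.
    std::printf("output = %d\n", activation >= threshold ? 1 : 0);

    return 0;
}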
3.2 All Right, I Know What a Neuron Is, But What Do I Do With It?
In the brain, biological neurons connect to other neurons; to create an artificial neural network, the artificial neurons are connected together in just the same way. There are many different ways of connecting them. The easiest to understand, and the most widely used, is shown in Figure 5, in which the neurons are connected layer by layer. This type of neural network is called a feedforward network, a name that comes from the fact that the output of each layer of neurons is fed forward to the next layer (the layer above it in the drawing) until the output of the whole network is produced.
Figure 5: A feedforward network

The figure shows a network with three layers. Note that the input layer contains no neurons, so there are really only two layers of neurons. Each input in the input layer is fed to every neuron in the hidden layer, and the output of every neuron in the hidden layer is in turn fed to every neuron in the next layer, the output layer. Only one hidden layer is drawn here; a feedforward network can in general have any number of hidden layers, but for most of the problems you will deal with, one layer is usually enough. Indeed, some problems need no hidden layer at all: you simply connect the inputs directly to the output neurons. Also, the number of neurons I chose for Figure 5 is completely arbitrary. Each layer can in fact have any number of neurons, depending entirely on the complexity of the problem to be solved. But the more neurons, the slower the network; for this reason, and for several others (which I will explain in Chapter 9), a network should always be kept as small as possible.
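To make the layer-by-layer flow concrete, here is a minimal sketch of a feedforward pass (my own illustrative C++, built from the step-function neurons described earlier; it is not the book's code, and real networks typically use a smoother activation function):

#include <cstddef>
#include <vector>

// One layer = one weight vector per neuron; every neuron sees every input,
// exactly as in Figure 5. (Illustrative sketch, not the book's code.)
typedef std::vector< std::vector<double> > Layer;

std::vector<double> FeedLayer(const Layer &layer,
                              const std::vector<double> &inputs,
                              double threshold = 1.0)
{
    std::vector<double> outputs;

    for (std::size_t n = 0; n < layer.size(); ++n)
    {
        double activation = 0;

        // Sum each input multiplied by its weight, as in the earlier code.
        for (std::size_t i = 0; i < inputs.size(); ++i)
            activation += inputs[i] * layer[n][i];

        // Step activation decides whether this neuron fires.
        outputs.push_back(activation >= threshold ? 1.0 : 0.0);
    }

    return outputs;
}

// The output of each layer becomes the input of the next, until the
// output layer produces the network's final output.
std::vector<double> FeedForward(const std::vector<Layer> &network,
                                std::vector<double> inputs)
{
    for (std::size_t l = 0; l < network.size(); ++l)
        inputs = FeedLayer(network[l], inputs);

    return inputs;
}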
I can imagine that all this information may have left you somewhat dazed. The best thing I can do in that case, I think, is introduce you to a practical example of a neural network from the real world, which will hopefully get the neurons of your own brain excited. Right then, here we go...
You may have heard or read that neural networks are commonly used for pattern recognition. This is because they are good at mapping an input state (the pattern being presented) to an output state (the pattern the network was trained to recognize).
Let's see how it's done, taking character recognition as our example. Imagine a panel made up of an 8x8 grid of squares. Each square contains a small lamp that can be switched on (the square lights up) or off (the square goes black) independently, so the panel can be used to display the ten digit symbols. Figure 6 shows the digit "4".

Figure 6: The grid of dots used to display a character
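One natural way to feed such a panel into a network (an assumption on my part for illustration; the text has not yet fixed an encoding at this point) is to flatten the 8x8 grid into 64 inputs, one per lamp, with 1 for lit and 0 for dark:

#include <vector>

// Flatten the 8x8 lamp panel into 64 network inputs: 1.0 where a lamp is
// lit, 0.0 where it is dark. (Illustrative encoding, assumed here.)
std::vector<double> PanelToInputs(const bool panel[8][8])
{
    std::vector<double> inputs;
    inputs.reserve(64);

    for (int row = 0; row < 8; ++row)
        for (int col = 0; col < 8; ++col)
            inputs.push_back(panel[row][col] ? 1.0 : 0.0);

    return inputs;
}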