Using Python to build a Hopfield network

Hot things cool off, rooms get messy, and messages get distorted. The short-term remedies for these situations are, respectively, reheating, cleaning, and the Hopfield network. This article introduces the last of the three, an algorithm that can eliminate noise when given only certain parameters. net.py, a very simple Python implementation, will show you how its basic parts fit together, and why a Hopfield network can sometimes recover an original pattern from a distorted one. Despite the limitations of this implementation, you can still gain many useful and instructive insights about Hopfield networks.
What are you looking for?

I assume you are reading this article because you face some computational problem. It has been suggested that certain neural network algorithms might provide a solution, and specifically that you try a Hopfield network. I further assume that you need a general idea of how it works so that you can decide whether the suggestion is feasible and worth studying in depth. The following very scaled-down application of the Hopfield network may guide you toward solving your problem.

First, you have a set of basic patterns encoded with -1 and +1. If needed, they could instead be encoded with 0 and +1. These patterns might be, say, normalized binary images of postage stamps (see References). The next element is a set of patterns that deviate from this base. What you want is code that takes an abnormal pattern as input and outputs the desired basic pattern. So you are looking for an algorithm that can take the encoding of a particular distorted stamp and return the basic stamp pattern it ought to be. You are not certain the search will always succeed; for your purposes, there is an acceptable rate of misrecognized stamps that will not significantly harm your project.

If this sounds like your problem, the following may be the beginning of your solution design. By the conclusion, you should be able to answer the basic questions: What is it? How does it work? What are its limitations? What can it do for me? Do I want to spend more time studying it?

Patterns and distortion

Let's first look at five arbitrary patterns that will be distorted and subsequently retrieved. They can be represented visually as 10 by 10 matrices of black and white squares. Figure 1 shows the first pattern, p1.
Figure 1. Visual Representation of p1

Click any of p2 through p5 in net.py to display the other patterns. For encoding, these five patterns are initially described as Python lists. For example, the description of the first pattern is shown in Listing 1.
Listing 1. Pattern p1

    p1 = [
        [ 1,  1,  1,  1,  1,  1,  1,  1,  1,  1],
        [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],
        [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],
        [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],
        [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],
        [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],
        [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],
        [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],
        [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],
        [-1, -1, -1, -1, -1, -1, -1, -1,  1,  1],
    ]

The black and white squares correspond to -1 and +1, respectively. The list is then converted into an array (for more information, see References). Each element in the pattern, -1 or +1, corresponds to a Node object in a node array.

A Node object has three main attributes (a minimal sketch of such a class follows this list):

  1. A Node object has a value, which is an element of the pattern (-1 or +1).
  2. A node also has an address, that is, its position in the node array.
  3. Each node has a color, which is used for display.
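
The class below is a minimal sketch of such a Node, written to match the description above. It is not the class defined in net.py; the attribute and method names are illustrative assumptions only.

    class Node:
        # A node in the pattern array: a value, an address, and a display color.
        def __init__(self, value, address):
            self.value = value        # -1 or +1, taken from the pattern
            self.address = address    # (row, column) position in the node array
            self.color = "black" if value == -1 else "white"

        def flip(self):
            # Invert the value and keep the display color in sync with it.
            self.value = -self.value
            self.color = "black" if self.value == -1 else "white"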

As mentioned above, one function of a Hopfield network is to eliminate noise. To exercise this function, we need a way to introduce noise into a pattern. Clicking Add Noise does exactly that. Figure 2 was generated by adding noise to p1.
Figure 2. Distorted p1

To introduce noise into a pattern, net.py visits each address in the node array. It then takes a random number in [0, 1), that is, the range from 0 to 1 that includes 0 but not 1. If the number is smaller than a fixed threshold, the network flips the node's value and color; otherwise it leaves the node unchanged. By default, this threshold is set to 0.20, so that any given node has a 20% chance of changing its value and color. You can adjust the slider to change this probability: at 0% no noise is introduced, and at 100% the node array is simply inverted. Every value in between introduces a correspondingly intermediate amount of noise. Because the Hopfield network is an algorithm for eliminating noise, it can take the distorted pattern in Figure 2 as input and return the original pattern in Figure 1.
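
As a rough sketch of this procedure, assuming a list of nodes like the Node sketch above with its flip() method (this is illustrative code, not the routine in net.py):

    import random

    def add_noise(nodes, probability=0.20):
        # Flip each node's value with the given probability.
        # random.random() returns a number in [0, 1): 0 included, 1 excluded.
        for node in nodes:
            if random.random() < probability:
                node.flip()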

Although Hopfield networks are sometimes obscured by inappropriate presentations, the relevant algorithms are quite simple to implement. In what follows, I walk you through a complete implementation of the algorithms and briefly explain why they can eliminate noise.

Weights

As David Mertz and I described in a previous developerWorks article, An introduction to neural nets, the human brain consists of billions of neurons, each connected on average to thousands of others. Neurons receive and send varying quantities of energy. An important feature of neurons is that they do not react immediately after receiving energy; instead, they accumulate the energy they receive, and they send their own energy to other neurons only when the accumulated amount reaches a certain critical threshold.

When the brain learns, it can be thought of as adjusting the number and strength of these connections. This is no doubt an extreme simplification of the biological facts, but in this case the simplification is practical for implementing neural networks, especially when they are used as models. The transition from biology to algorithm is made by translating connections into weights. (A perceptron uses weights in a different and perhaps more intuitive way; before going further, you may want to revisit An introduction to neural nets.)

A Weight object encapsulates a value that represents the weight between one node and another. A Weight object also has an address and a color: the address is its position in the weight array, and the color is used for display. Figure 3 shows one possible representation of the weight array. net.py (see the links in References) keeps track of the lowest and highest weights, and displays a key to the color values in the weight display.
Figure 3. A visual representation of the weight array

Each row in the weight array is the list of weights between a given node and all other nodes. Hopfield networks come in two forms: in one form a node has a weight to itself, and in the other it does not. Experience with net.py shows that when nodes are not self-weighted, the node array will not always reconstruct to itself. Select the No Self Weight option and try reconstructing p3 or p5. There are one hundred nodes, so there are ten thousand weights, most of them redundant. With self-weighting, which is the default, there are 5,050 non-redundant weights (100 x 101 / 2); otherwise there are only 4,950 (100 x 99 / 2).
Figure 4. Origin of a weight

Listing 2. Weight generation algorithm

PAT = { x: x is a RxC pattern }
WA = { x: x is a (R*C)x(R*C) weight array }
For all (i, j) and (a, b) in the range of R and C:
    SUM = 0
    For p in PAT:
        SUM += p(i, j) * p(a, b)
    WA( (R*i)+j, (C*a)+b ) = SUM

The biologically inspired concept that underlies these weights was developed by Donald Hebb in 1949. He hypothesized that if a pair of nodes send their energy to each other at the same time, the weight between them should be greater than if only one sent its energy. He wrote: "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased" (see References). In terms of the Hopfield network, when a pair of nodes have the same value, in other words -1 or +1, the weight between them is larger. The sum of the products of the values of all possible node pairs determines the content of the weight array. When two values are the same, their product is positive and the sum increases; when they differ, the sum decreases.

Where, in more detail, does a weight come from? First, the Hopfield network must have access to a library or set of basic patterns, here p1 through p5. The calculation of a weight starts from a pair of coordinates within the bounds of the basic pattern matrix. The network then visits the corresponding nodes in each pattern, and at each step it adds the product of the node values to a running sum (see Figure 4). After the network has visited every pattern, it sets the value of a Weight object to that sum. Given a pair of nodes located at (i, j) and (a, b), it sets the value of the Weight object located at (i * 10 + j, a * 10 + b) in the weight array.
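
Here is a minimal sketch of that calculation in plain Python, taking patterns given as 10 by 10 lists of -1/+1 values like Listing 1. It follows the indexing of Listing 2 and is illustrative only, not the implementation in net.py:

    def build_weight_array(patterns, rows=10, cols=10):
        # Weight between the nodes at (i, j) and (a, b): the sum, over all
        # base patterns, of the product of the two node values (Listing 2).
        n = rows * cols
        weights = [[0] * n for _ in range(n)]
        for i in range(rows):
            for j in range(cols):
                for a in range(rows):
                    for b in range(cols):
                        total = sum(p[i][j] * p[a][b] for p in patterns)
                        # rows == cols == 10 here, matching (R*i)+j, (C*a)+b.
                        weights[rows * i + j][cols * a + b] = total
        return weights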

That is how the weights are constructed. But how do they figure in the larger algorithm? How are they used in pattern reconstruction?

Reconstruction

Given a weight array and a distorted or noisy pattern, the Hopfield network can sometimes output the original pattern. There is no guarantee, but the percentage of times the network gets it right is astonishing. Reconstruction can be done synchronously or asynchronously.

In asynchronous mode, the network traverses the distorted pattern, and at each node N it asks whether N's value should be set to -1 or +1.

To make this determination, the network traverses the row of the weight array that contains the weights between N and the other nodes. Do not forget that nodes may or may not be self-weighted.

At each step of this second traversal, it calculates the product of (1) the weight between N and the other node and (2) the value of that other node. As you would expect, the network keeps a running sum of these products.

Now the network can make its decision. At least in the current implementation, if the sum is less than 0, the network sets the node's value to -1; if the sum is greater than or equal to 0, it sets the node's value to +1.
Figure 5. Reconstruction: no nodes left

Listing 3. Reconstruction

For every node, N, in pattern P:
    SUM = 0
    For every node, A, in P:
        W = weight between N and A
        V = value of A
        SUM += W * V
    If SUM < 0:
        Set N's value to -1
    Else:
        Set N's value to +1

The default update mode is asynchronous, because the network sets the value of a node as soon as it has decided it. The update would be synchronous if the network set node values only after making all of its decisions; in that case it would store its decisions and then update the array's nodes after the last decision was made. In net.py (see References), reconstruction is performed asynchronously by default, but note the option for synchronous reconstruction.
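
The sketch below shows both update modes in plain Python, operating on a flat list of -1/+1 values and the weight array from the earlier sketch rather than on Node objects; it follows Listing 3 but is not the net.py implementation.

    def reconstruct(values, weights, synchronous=False):
        # One full pass of the Listing 3 rule over a flat list of -1/+1 values.
        # Asynchronous mode (default): each node is updated as soon as it is decided.
        # Synchronous mode: decisions are stored and applied together at the end.
        decisions = []
        for n in range(len(values)):
            total = sum(weights[n][a] * values[a] for a in range(len(values)))
            new_value = -1 if total < 0 else 1
            if synchronous:
                decisions.append(new_value)
            else:
                values[n] = new_value
        if synchronous:
            values[:] = decisions
        return values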

As you experiment with net.py, the network's behavior when reconstruction succeeds is striking. One such behavior is that even when the weight array has been severely degraded, the network can still reconstruct patterns. The simple Degrade Weights routine traverses the weight array and randomly sets weights to 0. The weight display gives a view of the extent of the damage. Correct reconstructions under these conditions show that the fault tolerance of a Hopfield network is far greater than that of the brain. How does all of this work? A mathematical account would not be short; the next section gives a structural sketch instead.
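
Before moving on, the degradation just described might be sketched like this, again assuming the flat weight array used in the sketches above. The zeroing probability is an assumed parameter, and this is not the actual Degrade Weights code.

    import random

    def degrade_weights(weights, probability=0.20):
        # Randomly zero out weights to simulate damage to the weight array.
        for row in weights:
            for k in range(len(row)):
                if random.random() < probability:
                    row[k] = 0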

What is going on?

The algorithmic details of the Hopfield network explain why it can sometimes eliminate noise. As usual in the analysis of algorithms, the most troublesome part is the mathematics, which in the present case is hard to picture and imagine. Fortunately, there are some closely related phenomena that illustrate how the network works quite clearly.

When a ball falls into a bowl with a simply curved surface, it rolls to the lowest point. The bowl's curvature is like a rule that takes the ball's entry point as input and returns the lowest point, the bottom of the bowl. A more complicated curvature is like a function that takes an entry point and returns one of several local lowest points.

Energy is a basic ingredient of these simple phenomena. In both the simple and the complicated case, the incoming ball has a certain amount of energy. That energy decreases over time until it reaches a stable state in which it can decrease no further. In the complicated case, there may be a still lower energy level, but the ball cannot reach it.

Similarly, a pattern, distorted or not, can be thought of as having a specific, measurable amount of energy. So the patterns p1 through p5 each have an energy level.
Pattern energy levels

Calculating a pattern's energy level is not complicated. For every possible pair of nodes, the product of their values and the weight between them is computed; the pattern's energy level is the sum of these products divided by negative 2. net.py displays the energy level of any given pattern or node array. As you reconstruct a pattern, I think and hope you will see its energy level decline.
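
In code, that calculation might look like the following sketch, operating on the same flat representation used in the earlier sketches (not the net.py code):

    def energy(values, weights):
        # Energy level: negative one half of the sum, over all node pairs,
        # of (value of node i) * (value of node j) * (weight between them).
        total = 0.0
        for i in range(len(values)):
            for j in range(len(values)):
                total += weights[i][j] * values[i] * values[j]
        return -total / 2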

During reconstruction, the network decides whether to flip a node based on the sum of the products of the other nodes' values and the weights between them. When the sum is less than 0, the node is set to -1; otherwise it is set to +1. When the product of a value and a weight is positive, it pushes the sum above 0, and so pushes the network toward setting the node's value to +1; when the product is negative, the sum is pushed toward or below 0, and the network is pushed toward setting the node to -1. Changes in the weights change these measurements and change the directions in which the network is pushed as it decides. A pattern may be distorted so badly that the network is not pushed toward the correct decisions, but when the distorted pattern is still a reasonable representation of a basic one, the network is pushed toward the correct decisions most of the time.

If you reconstruct any of the five basic patterns, you will find that each reconstructs to itself. It has to be this way, because each pattern already occupies a local lowest energy point, and no reconstruction process can lower its energy level further. If you successfully reconstruct a distorted pattern, you will see that its energy level has been lowered to the level of one of the basic patterns. When reconstruction fails, the energy level of the distorted pattern has been lowered to a spurious local low point. In either case, the energy level can be lowered no further; in other words, it has reached a stable state. Describing the network in terms of energy is interesting and important: on that basis it can be established mathematically that repeated application of the reconstruction algorithm will eventually arrive at a stable pattern (see References).

Conclusion

You should be aware of the limitations of the Hopfield network. An obvious limitation, which is often mentioned, is that its patterns must be encoded as an array consisting of either -1 and +1 or 0 and +1. As you already know, reconstruction can stabilize at a spurious local low point. A more significant limitation is that when the number of patterns exceeds about 14% of the number of nodes in the node array, the probability that the network stabilizes at some spurious local low point increases. That is to say, each additional basic pattern requires roughly seven more nodes. Despite these limitations, the pattern reconstruction discussed here may serve as an intuitive guide to solving your particular computing problem. You now have a rough idea of how the algorithm mentioned at the start works. If it meets your needs, you also understand the superstructure of your own implementation: an algorithm for calculating the weight array, a procedure for reconstructing distorted patterns, and an algorithm for calculating a pattern's energy level.
