Neural Network Architectures: An Overview

Source: Internet
Author: User

New neural network architectures appear all the time: DCIGN, BiLSTM, DCGAN, and so on.

1. Feedforward Neural Network (FF or FFNN)

Very straightforward: information flows in one direction, from input to output. Neural networks usually have several layers, including an input layer, hidden layers, and an output layer. There are no connections within a layer, and adjacent layers are fully connected (every neuron in one layer is connected to every neuron in the next). The simplest practical network has two input units and one output unit, and can be used to model logic gates. FFNNs are usually trained with backpropagation, with the network given paired datasets of inputs and desired outputs. This is called supervised learning, in contrast to unsupervised learning, where we supply only the inputs and leave the network itself responsible for the outputs. The error that is backpropagated is usually some measure of the difference between the actual output and the desired output (such as MSE or a linear difference). Given enough hidden layers, it is in theory always possible to model the relationship between input and output. In practice their use on their own is quite limited, but combining a feedforward network with other networks produces powerful new networks.
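As a minimal sketch of the logic-gate example above: a single sigmoid unit with two inputs, trained by gradient descent on paired inputs and desired outputs, learns the AND gate. The learning rate, iteration count, and cross-entropy-style gradient are illustrative choices, not from the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # the four inputs
y = np.array([[0.0], [0.0], [0.0], [1.0]])                   # desired AND outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W = rng.normal(scale=0.1, size=(2, 1))  # weights for the two input units
b = 0.0                                 # bias of the output unit

lr = 1.0
for _ in range(2000):
    out = sigmoid(X @ W + b)            # forward pass: input -> output
    grad = out - y                      # cross-entropy gradient for a sigmoid unit
    W -= lr * X.T @ grad                # backpropagate the error into the weights
    b -= lr * grad.sum()

out = sigmoid(X @ W + b)
print(np.round(out).ravel())            # learned AND: [0. 0. 0. 1.]
```

A gate like XOR is not linearly separable, so it would additionally need a hidden layer between input and output.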

2. Radial basis function (RBF) network

An RBF network is an FFNN that uses radial basis functions as its activation functions. It really is that simple. That does not mean RBF networks are useless; it is just that FFNNs with other activation functions generally do not get names of their own. Getting a name of your own is mostly a matter of timing.
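A sketch of the idea: a radial basis activation responds to the distance from a center rather than to a weighted sum, and the output weights of such a network are often fit by least squares. The Gaussian basis, centers, width, and the sine target here are all illustrative assumptions.

```python
import numpy as np

def rbf(x, centers, width=1.0):
    # Gaussian radial basis: activation depends on distance to each center.
    # x: (n, d), centers: (k, d) -> activations of shape (n, k)
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * width ** 2))

# Fit the output-layer weights by least squares on a toy 1-D target.
X = np.linspace(0, 2 * np.pi, 20)[:, None]
y = np.sin(X).ravel()
centers = np.linspace(0, 2 * np.pi, 8)[:, None]   # evenly spaced centers

Phi = rbf(X, centers)                 # hidden-layer activations
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ w
print(np.max(np.abs(pred - y)))       # small approximation error on the samples
```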

3. Hopfield Network (HN)

Every neuron is connected to every other neuron, and each node plays every role: each node is an input before training, hidden during training, and an output afterwards. An HN is trained by setting the value of each neuron to the desired pattern and then computing the weights; the weights do not change after that. Once trained, the network always settles into one of the previously trained patterns, because only in those states can the network be stable. Note that an HN does not always converge to the desired state. Part of what stabilizes the network is that the total "energy" or "temperature" gradually shrinks during training. Each neuron has an activation threshold that scales with this temperature; when the sum of a neuron's inputs exceeds it, the neuron takes one of two states (usually -1 or 1, sometimes 0 or 1). Units can be updated synchronously or one at a time, the latter being more common. When updating one at a time, a fair random sequence is generated and each unit is updated in that order. So once every unit has been updated and none of them change any more, you can tell that the network is stable (has converged). These networks are also called associative memories, because they converge to the stored state most similar to the input: just as a human who sees half a table can picture the other half, if you give an HN half a table plus noise, it converges to a table.
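The train-then-recall procedure above can be sketched as follows: weights are computed once with the Hebbian rule (an outer-product rule; one common choice, assumed here), then units are updated one at a time in a random order until the state stops changing. The two stored +1/-1 patterns are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1,  1, 1,  1, -1, -1, -1, -1],
])
n = patterns.shape[1]

# Hebbian weights: sum of outer products, with no self-connections.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, sweeps=20):
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):            # asynchronous, fair random order
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = patterns[0].copy()
noisy[:2] *= -1                                  # corrupt "half the table"
print(recall(noisy))                             # converges back to patterns[0]
```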

4. Markov chain (MC or discrete-time Markov chain, DTMC)

It is the forerunner of BM and HN. You can understand a DTMC like this: from my current node, what is the probability of reaching each neighboring node? Markov chains are memoryless, that is, each state you end up in depends entirely on the previous state. Although a DTMC is not a true neural network, it has similar properties and forms the theoretical basis of BM and HN.
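The "probability of reaching the neighboring node" idea can be written down directly: a row-stochastic transition matrix, where memorylessness means the distribution after k steps is just repeated multiplication by that matrix. The two-state chain and its probabilities are illustrative.

```python
import numpy as np

# Row i gives the probabilities of moving from state i to each state.
P = np.array([
    [0.9, 0.1],   # from state 0: stay with 0.9, move to state 1 with 0.1
    [0.5, 0.5],   # from state 1: return with 0.5, stay with 0.5
])

p = np.array([1.0, 0.0])   # start in state 0 with certainty
for _ in range(100):
    p = p @ P              # next distribution depends only on the current one
print(p)                   # approaches the chain's stationary distribution
```

For this chain the stationary distribution works out to (5/6, 1/6), which the iteration approaches quickly.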

5. Boltzmann Machine (BM)

Very similar to an HN, except that some neurons are marked as input neurons and the rest remain "hidden." The input neurons become output neurons once the network has been updated as a whole. The weights start out random and are learned through backpropagation or, more recently, through contrastive divergence (which uses a Markov chain to determine the gradient between two informational states). Compared with an HN, the neurons of a BM mostly have binary activation patterns, but some are stochastic. Training and running a BM is very similar to an HN: set the input neurons to fixed values and then let the network evolve freely. The network moves back and forth between the input and hidden neurons, and reaches equilibrium when the temperature is right.
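A sketch of the "clamp the inputs and let the network settle" step: each un-clamped binary unit turns on stochastically, with probability given by a sigmoid of its input divided by a temperature. The network size, random symmetric weights, temperature, and choice of clamped units are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 6
W = rng.normal(scale=0.5, size=(n, n))
W = (W + W.T) / 2                      # symmetric weights, as in an HN
np.fill_diagonal(W, 0)                 # no self-connections

def gibbs_sweep(state, clamped, T=1.0):
    # Update every un-clamped unit given the rest of the network.
    for i in range(n):
        if i in clamped:
            continue
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ state) / T))
        state[i] = 1.0 if rng.random() < p_on else 0.0   # stochastic binary unit
    return state

state = rng.integers(0, 2, size=n).astype(float)
clamped = {0, 1}                       # "input" neurons held at fixed values
state[0], state[1] = 1.0, 0.0
for _ in range(50):                    # let the network settle toward equilibrium
    state = gibbs_sweep(state, clamped)
print(state)
```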

6. Autoencoder (AE)

Similar to an FFNN; in fact it is just a different way of using an FFNN, not a fundamentally different kind of network. An AE is shaped like an hourglass: the input and output layers are larger than the hidden layers, and the network is symmetric about the middle. The smallest layer is always in the middle, which is where the information is most compressed. From the input to the middle is called the encoder, from the middle to the output is called the decoder, and the middle layer (no surprise) is called the code. AEs can be trained with backpropagation. Because the two sides are symmetric, the encoding weights and the decoding weights can be tied (made equal).
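A minimal sketch of the hourglass shape with tied weights: a linear 4 → 2 → 4 autoencoder whose decoder is the transpose of the encoder, trained by gradient descent on squared reconstruction error. The layer sizes, toy data, and learning rate are illustrative assumptions; real AEs usually add nonlinearities.

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(100, 4))
X[:, 2] = X[:, 0] + X[:, 1]           # build in redundancy the code can exploit
X[:, 3] = X[:, 0] - X[:, 1]

W = rng.normal(scale=0.1, size=(4, 2))   # encoder; the decoder reuses W.T (tied)

def loss(W):
    code = X @ W                      # encode: 4 features -> 2-dim code
    recon = code @ W.T                # decode with the tied (transposed) weights
    return ((recon - X) ** 2).mean()

lr = 0.01
before = loss(W)
for _ in range(500):
    err = (X @ W) @ W.T - X           # reconstruction error
    # Gradient of the mean squared error w.r.t. the tied weight matrix.
    grad = 2.0 * (X.T @ err @ W + err.T @ X @ W) / X.size
    W -= lr * grad

print(before, "->", loss(W))          # reconstruction error decreases
```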
