Feedback Neural Network: The Hopfield Network


I. Preface

After a period of study we have covered the simplest and most basic feedforward neural networks: the perceptron, the BP algorithm and its improvements, Adaline, and so on. This article turns to a feedback neural network, the Hopfield network. Feedforward networks gain complex nonlinear mapping ability by introducing hidden layers and nonlinear transfer (activation) functions; the output of a feedforward network is determined only by the current input and the weight matrix, independent of the network's previous output states. J. J. Hopfield introduced the concept of an energy function into feedback neural networks, providing a reliable basis for judging their operational stability. In 1985, Hopfield and Tank realized the Hopfield network with analog electronic circuits and successfully solved the TSP, the most representative combinatorial optimization problem, thereby opening a new path for neural networks in intelligent information processing.

In feedforward networks, whether discrete or continuous, the time lag between input and output is not considered; only the mapping between the two is expressed. In the Hopfield network, the delay between input and output must be taken into account, so the dynamic mathematical model of the network needs to be described by differential or difference equations.

Neural networks have three kinds of learning method: supervised learning, unsupervised learning, and fixed-weight (rote) learning. The weights of a Hopfield network are not obtained by repeated learning but are computed according to certain rules; what changes is the state of the network, and once the network state becomes stable, its output is the solution of the problem.

Hopfield networks come in continuous and discrete forms, denoted CHNN and DHNN respectively. This article mainly covers the DHNN.

II. DHNN

1. Network structure and working modes

The characteristic of the DHNN is that the output xi of every neuron is fed back through the connection weight wij as input to every other neuron xj, so that each neuron's output is controlled by the outputs of all the neurons, and the output of each neuron is thus constrained. Each neuron also has a threshold Tj to counteract input noise. A DHNN can be written concisely as N = (W, T).

(1) Network state

The states of all neurons constitute the feedback network state x = (x1, x2, x3, ..., xn). The input of the feedback network is the initial value of this state, X(0) = (x1(0), x2(0), x3(0), ..., xn(0)). Excited by the external input, the feedback network enters a dynamic evolution process from the initial state, during which the state of each neuron changes continually. The rule of change is xj = f(net_j), where f is the transfer function, usually the sign function, and the net input of neuron j is net_j = Σ_i (wji * xi) − Tj. For a DHNN, wii = 0 and wji = wij: the diagonal elements of the weight matrix are 0 and the matrix is symmetric. This means the output of neuron i is not fed back to neuron i itself, but only to the inputs of all the neurons other than neuron i.
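The update rule above can be sketched in a few lines of Python. This is a minimal illustration, not the book's code; the names `sign` and `net_input` are mine, and I adopt the common convention sign(0) = +1:

```python
def sign(v):
    # Sign transfer function; by convention here, sign(0) = +1.
    return 1 if v >= 0 else -1

def net_input(W, x, T, j):
    # net_j = sum_i(w_ji * x_i) - T_j, with w_jj = 0 assumed in W.
    return sum(W[j][i] * x[i] for i in range(len(x))) - T[j]

# Tiny 3-neuron example: symmetric weight matrix with zero diagonal.
W = [[0, 1, -1],
     [1, 0, 1],
     [-1, 1, 0]]
T = [0, 0, 0]
x = [1, -1, 1]

# State change rule for neuron j: x_j = f(net_j).
j = 0
x_new = sign(net_input(W, x, T, j))  # net_0 = -2, so x_new = -1
```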

When the feedback network reaches stability, the state of each neuron no longer changes, i.e. x(t) = x(t+1) = ... = x(∞).

(2) Asynchronous working mode

In this serial mode, the network adjusts the state of only one neuron at a time while the others stay unchanged. The adjustment order has a certain effect; it may be chosen randomly or fixed in advance. The result of each adjustment then feeds into the net input of the next neuron to be updated.
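A minimal sketch of asynchronous operation, under my own assumed names (`async_evolve`) and with a fixed index order for the updates (a random order is equally valid per the text); each neuron's new state is immediately visible to the neurons updated after it:

```python
def sign(v):
    return 1 if v >= 0 else -1

def async_evolve(W, T, x0, max_sweeps=100):
    """Serial mode: update one neuron at a time, in index order,
    until a full sweep produces no change (a stable state)."""
    x = list(x0)
    n = len(x)
    for _ in range(max_sweeps):
        changed = False
        for j in range(n):  # fixed order; a random order is also allowed
            net_j = sum(W[j][i] * x[i] for i in range(n)) - T[j]
            new_state = sign(net_j)
            if new_state != x[j]:
                x[j] = new_state  # takes effect for later updates in this sweep
                changed = True
        if not changed:  # x(t) == x(t+1): the network is stable
            break
    return x

# Symmetric, zero-diagonal weights that store the pattern [1, -1, 1].
W = [[0, -1, 1],
     [-1, 0, -1],
     [1, -1, 0]]
T = [0, 0, 0]
print(async_evolve(W, T, [1, 1, 1]))  # prints [1, -1, 1]
```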

(3) Synchronous working mode

In this parallel mode, all neurons perform their state adjustment computations simultaneously.
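The synchronous mode differs from the serial one only in that every neuron computes from the same snapshot of the state. A short sketch (names are mine, thresholds zero for simplicity):

```python
def sign(v):
    return 1 if v >= 0 else -1

def sync_step(W, T, x):
    """Parallel mode: every neuron computes its new state from the
    same snapshot of x, and all states are replaced at once."""
    n = len(x)
    return [sign(sum(W[j][i] * x[i] for i in range(n)) - T[j])
            for j in range(n)]

# Same symmetric, zero-diagonal example matrix as a demonstration.
W = [[0, -1, 1],
     [-1, 0, -1],
     [1, -1, 0]]
T = [0, 0, 0]
x = sync_step(W, T, [1, 1, 1])  # one parallel step: [1, -1, 1]
```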

2. Network stability and attractors

(1) Stability

A feedback network is a kind of network that can store a number of preset stable points. As a nonlinear dynamical system, it has rich dynamic characteristics, such as stable states, limit cycles, and chaotic states.

Stability means that after a finite number of recursions the state no longer changes;

A limit cycle is a self-sustained oscillation of bounded amplitude;

A chaotic state means the trajectory of the network state varies within a bounded region, neither repeating nor stopping: the states are infinitely many, yet the trajectory does not diverge to infinity.

For a DHNN, a chaotic state cannot occur, because the number of network states is finite.

A Hopfield network can realize associative memory: a stable state of the network expresses a memory pattern, and the process of converging from an initial condition toward a steady state is the network's search for that memory pattern. The initial state can be regarded as partial information of a memory pattern, and the network's evolution can be regarded as the process of recalling the full information from that partial information.

It can also solve optimization problems: the objective function of the problem is set as the network's energy function, and when the energy function reaches its minimum, the output state of the network is the optimal solution. The initial state of the network is regarded as the initial solution of the problem, and the convergence of the network from the initial state to a steady state is the optimization computation, completed automatically in the process of network evolution.

(2) Attractors and the energy function

A stable state X of the network is an attractor of the network, used to store memory information. The evolution of the network recovers full information from partial information, i.e. the associative recall process. Attractors have the following properties:

If X = f(WX − T), then X is an attractor of the network;

For a DHNN adjusted asynchronously with a symmetric weight matrix W, the network eventually converges to an attractor from any initial state;

For a DHNN adjusted synchronously with a non-negative definite symmetric weight matrix W, the network eventually converges to an attractor from any initial state;

If X is an attractor of the network, the threshold T = 0, and at sgn(0) we take xj(t+1) = xj(t), then −X must also be an attractor of the network;

A linear combination of attractors is also an attractor;

The set of all initial states that stabilize the network at the same attractor is called that attractor's attraction domain;

For the asynchronous mode, if there exists some adjustment order by which the network can evolve from state X to Xa, we say X weakly attracts to Xa; if the network evolves from X to Xa under any adjustment order, X strongly attracts to Xa. These correspond to weak and strong attraction domains respectively.
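The energy function is not written out in the text; for a DHNN it is commonly given as E(x) = −½ Σi Σj wij xi xj + Σj Tj xj, which, for symmetric W with zero diagonal, never increases under asynchronous updates. A small sketch (the quadratic form is my assumption of the standard definition, not quoted from the source):

```python
def energy(W, T, x):
    # E(x) = -1/2 * sum_ij(w_ij * x_i * x_j) + sum_j(T_j * x_j)
    n = len(x)
    quad = sum(W[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return -0.5 * quad + sum(T[j] * x[j] for j in range(n))

# Symmetric, zero-diagonal weights whose attractor is [1, -1, 1].
W = [[0, -1, 1],
     [-1, 0, -1],
     [1, -1, 0]]
T = [0, 0, 0]

e_noisy = energy(W, T, [1, 1, 1])    # a noisy initial state
e_stable = energy(W, T, [1, -1, 1])  # the attractor
# e_stable < e_noisy: evolution moves downhill in energy
```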

For a feedback network to have associative ability, each attractor must have a certain attraction domain; only then, for an initial sample carrying some noise or defects, can the network evolve dynamically and stabilize at an attractor state, thereby realizing correct association. The purpose of feedback network design is to make the network settle at the desired stable points, and to give them the largest possible attraction domains so as to strengthen the associative function.

3. Weight design of the network

The distribution of the attractors is determined by the network's weights, including the thresholds, so the core of designing attractors is how to design a suitable set of weights. For the designed weights to meet the requirements, the weight matrix should satisfy the following:

(1) To guarantee convergence of the network in asynchronous mode, W should be a symmetric matrix;

(2) To guarantee convergence of the network in synchronous mode, W should be a non-negative definite symmetric matrix;

(3) The given samples must be attractors of the network and must have a certain attraction domain.

Depending on the number of attractors the application requires, the following different methods can be used:

(1) Simultaneous equations method

This method can be used when there are few attractors.

(2) Outer-product sum method

This method can be used when there are many attractors. It applies the outer-product sum form of the Hebb rule.
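A minimal sketch of the outer-product sum method, assuming the standard Hebb form W = Σp xp xpᵀ with the diagonal forced to zero (function names are mine), followed by an asynchronous recall that corrects one flipped bit:

```python
def outer_product_weights(patterns):
    """Outer-product sum (Hebb rule): W = sum_p x_p x_p^T, diagonal zeroed.
    The result is symmetric with w_ii = 0, as the DHNN requires."""
    n = len(patterns[0])
    W = [[0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def sign(v):
    return 1 if v >= 0 else -1

def recall(W, x0, sweeps=10):
    # Asynchronous recall with zero thresholds.
    x = list(x0)
    n = len(x)
    for _ in range(sweeps):
        for j in range(n):
            x[j] = sign(sum(W[j][i] * x[i] for i in range(n)))
    return x

stored = [1, -1, 1, -1]
W = outer_product_weights([stored])
print(recall(W, [1, 1, 1, -1]))  # prints [1, -1, 1, -1]: the bit is corrected
```

With several patterns the same function simply accumulates their outer products; recall quality then depends on how many patterns are stored relative to the network size.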


