Artificial neural network notes: particle swarm optimization (Particle Swarm Optimization, PSO)


Background material on particle swarm optimization is easy to find by searching.

What follows is mainly my personal understanding of particle swarm optimization, and of how it can be used to adjust the weights in a BP neural network.

Original source: http://baike.baidu.com/view/1531379.htm

Some of the content below is quoted from that source.

=============== begin quotation ===============
A particle updates its velocity and position according to the following formulas:
v[] = w * v[] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[])
present[] = present[] + v[]
v[] is the particle's velocity, w is the inertia weight, and present[] is the particle's current position. pbest[] and gbest[] are as defined earlier; rand() is a random number in (0, 1); c1 and c2 are learning factors, usually c1 = c2 = 2.
=============== end quotation ===============
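The two quoted formulas translate almost line for line into code. Here is a minimal Python sketch of one update step; the function name pso_step and the inertia weight w = 0.7 are my own choices, since the quote fixes only c1 = c2 = 2:

import random

def pso_step(present, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One PSO update: new velocity first, then new position (element-wise)."""
    for i in range(len(present)):
        r1, r2 = random.random(), random.random()  # uniform in [0, 1)
        v[i] = (w * v[i]
                + c1 * r1 * (pbest[i] - present[i])
                + c2 * r2 * (gbest[i] - present[i]))
        present[i] += v[i]
    return present, v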

X is generally used to denote the position of a particle, and V its velocity.

Both X and V are vectors, so it is natural to associate them with the weight vector of a BP neural network.

While looking at how PSO and BP neural networks are combined, I was long puzzled about whether X should replace the weights, or V, or the product of X and V. After many discussions, the basic consensus is that X replaces the weights.

Because a single particle represents one complete set of weights, if only one particle were used, gbest[] and pbest[] would not mean much; there would be at most one of each. So most people set the number of particles to 10-30. A BP neural network combined with PSO therefore carries 10-30 sets of weights, whereas an ordinary BP network has only one set. The idea is that the network no longer relies on a single set of weights to search for the optimum but searches with multiple sets at once, which gives a much better chance of jumping out of a local minimum. A very good idea. The computation, however, is already much heavier than ordinary BP, since there are far more variables than before. Fortunately, these variables are all vectors under our control, and only a few parameters need to be tuned. A sketch of the whole scheme follows below.
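To make the combination concrete, here is a minimal, self-contained Python sketch of PSO searching over weight sets. Everything in it is my own illustration, not code from the source: a single linear neuron stands in for a full BP forward pass, and pso_train, the parameter values, and the toy data are all hypothetical.

import random

def mse(weights, samples):
    """Fitness: mean squared error of a single linear neuron y = w.x + b.
    (A stand-in for a real BP network's forward pass.)"""
    err = 0.0
    for x, target in samples:
        y = sum(w * xi for w, xi in zip(weights[:-1], x)) + weights[-1]
        err += (y - target) ** 2
    return err / len(samples)

def pso_train(samples, dim, n_particles=20, iters=200, w=0.7, c1=2.0, c2=2.0):
    """Each particle's position X[i] is one complete set of network weights."""
    X = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                      # best position seen by each particle
    pbest_err = [mse(x, samples) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_err[i])
    gbest, gbest_err = pbest[g][:], pbest_err[g]   # best position seen by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            err = mse(X[i], samples)
            if err < pbest_err[i]:
                pbest[i], pbest_err[i] = X[i][:], err
                if err < gbest_err:
                    gbest, gbest_err = X[i][:], err
    return gbest, gbest_err

# Usage: fit y = 2*x1 - 3*x2 + 1 (two inputs plus a bias, so dim = 3).
data = [((x1, x2), 2 * x1 - 3 * x2 + 1)
        for x1 in range(-3, 4) for x2 in range(-3, 4)]
weights, err = pso_train(data, dim=3)
print(weights, err)

With n_particles = 20 the swarm maintains 20 candidate weight sets at once, matching the 10-30 range mentioned above; that redundancy is exactly what buys the extra chance of escaping local minima, at the cost of 20 fitness evaluations per iteration.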

I started studying particle swarm optimization (Particle Swarm Optimization, PSO) mainly because the BP neural networks I had built before fall into local minima easily once the dimension exceeds about 100, so I had to consider how to optimize them.

Originally I wanted to use simulated annealing for the weight adjustment. But after reading about particle swarm optimization (PSO), I thought it was better. Where exactly is it better? Oh, I can't say. It's just an intuitive feeling.

PS:

After all, I didn't study math, and I didn't study computer science either; the ideas and theory behind neural networks are often beyond my understanding. Neural networks have been developing for so many years, and the accumulated results of so many predecessors' efforts are not something a junior like me can grasp right away. Suddenly I feel a lot of pressure. But since I have a strong interest in the subject, an invisible force drives me to keep learning. No particular reason; it's just what I like, and I just want to know. ^_^
