C++ Implementation of a BP Artificial Neural Network


A BP (back propagation) network is a multi-layer feed-forward network trained with the error back-propagation algorithm, proposed in 1986 by a team of scientists led by Rumelhart and McClelland; it is one of the most widely used neural network models. A BP network can learn and store a large number of input-output mapping relationships without requiring a mathematical equation describing the mapping in advance. Its learning rule is the steepest-descent method (gradient descent): the weights and thresholds of the network are adjusted repeatedly through backpropagation so as to minimize the network's sum of squared errors. The topology of a BP network consists of an input layer, a hidden layer, and an output layer.
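Stated compactly in standard textbook notation (not quoted from this article's source): with desired outputs t_{pk} and actual outputs y_{pk} over samples p and output nodes k, and learning rate \eta, the quantity minimized and the gradient-descent update are

    E = \frac{1}{2}\sum_{p}\sum_{k}\left(t_{pk}-y_{pk}\right)^{2}, \qquad w \leftarrow w - \eta\,\frac{\partial E}{\partial w}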

Here are some articles I have collected about neural networks:

Introduction to neural networks: learning with the backpropagation algorithm

http://www.ibm.com/developerworks/cn/linux/other/l-neural/index.html

Artificial Intelligence Java tank robot series: neural networks, part 1

http://www.ibm.com/developerworks/cn/java/j-lo-robocode3/index.html

Artificial Intelligence Java tank robot series: neural networks, part 2

http://www.ibm.com/developerworks/cn/java/j-lo-robocode4/

Constructing a neural network in Python: the network can reconstruct distorted patterns and remove noise

http://www.ibm.com/developerworks/cn/linux/l-neurnet/

Basic data for a MATLAB BP neural network

http://www.cnblogs.com/galaxyprince/archive/2010/12/20/1911157.html

http://www.codeproject.com/KB/recipes/aforge_neuro.aspx

The author has provided several application examples.

The following C++ code implements the BP network. Eight three-bit binary samples are paired with their expected outputs and used to train the network; once trained, the network produces the single decimal digit corresponding to any three-bit binary input.

// Convert a three-bit binary number to a one-digit decimal number
#include <iostream>
#include <cstdlib>   // rand(), RAND_MAX
#include <cmath>
using namespace std;

#define innode 3        // number of input nodes
#define hidenode 10     // number of hidden nodes
#define outnode 1       // number of output nodes
#define trainsample 8   // number of BP training samples

class bpnet
{
public:
    void train(double p[trainsample][innode], double t[trainsample][outnode]); // BP training
    double p[trainsample][innode];   // input samples
    double t[trainsample][outnode];  // expected outputs of the samples
    double *recognize(double *p);    // BP recognition
    void writetrain();  // write the trained weights to disk
    void readtrain();   // read stored weights so the network need not be retrained
                        // every time; save the best weights once and reuse them
    bpnet();
    virtual ~bpnet();

public:
    void init();
    double w[innode][hidenode];   // input-to-hidden node weights
    double w1[hidenode][outnode]; // hidden-to-output node weights
    double b1[hidenode];          // hidden node threshold values
    double b2[outnode];           // output node threshold values
    double rate_w;   // weight learning rate (input layer - hidden layer)
    double rate_w1;  // weight learning rate (hidden layer - output layer)
    double rate_b1;  // hidden layer threshold learning rate
    double rate_b2;  // output layer threshold learning rate
    double e;        // error calculation
    double error;    // maximum allowable error
    double result[outnode]; // BP output
};

bpnet::bpnet()
{
    error = 1.0;
    e = 0.0;
    rate_w = 0.9;   // weight learning rate (input layer - hidden layer)
    rate_w1 = 0.9;  // weight learning rate (hidden layer - output layer)
    rate_b1 = 0.9;  // hidden layer threshold learning rate
    rate_b2 = 0.9;  // output layer threshold learning rate
}

bpnet::~bpnet() {}

void winit(double w[], int n) // weight initialization
{
    for (int i = 0; i < n; i++)
        w[i] = (2.0 * (double)rand() / RAND_MAX) - 1; // uniform in [-1, 1]
}

void bpnet::init()
{
    winit((double*)w, innode * hidenode);
    winit((double*)w1, hidenode * outnode);
    winit(b1, hidenode);
    winit(b2, outnode);
}

void bpnet::train(double p[trainsample][innode], double t[trainsample][outnode])
{
    double pp[hidenode]; // correction error of the hidden nodes
    double qq[outnode];  // deviation between desired and actual output
    double yd[outnode];  // desired output values
    double x[innode];    // input vector
    double x1[hidenode]; // hidden node state values
    double x2[outnode];  // output node state values
    double o1[hidenode]; // hidden layer activation values
    double o2[outnode];  // output layer activation values

    for (int isamp = 0; isamp < trainsample; isamp++) // loop over the training samples
    {
        for (int i = 0; i < innode; i++)
            x[i] = p[isamp][i];   // current input sample
        for (int i = 0; i < outnode; i++)
            yd[i] = t[isamp][i];  // expected output for this sample

        // construct the input and output for each sample
        // ... (the source listing is truncated here, partway through train())
    }
}
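The listing above breaks off inside train(), so the forward pass, the backpropagation of the error, and the weight updates are missing. As a point of reference, here is a minimal, self-contained sketch of the same task written independently: a 3-10-1 network with sigmoid activations trained by plain gradient descent. The layer sizes match the listing's defines, but the learning rate, epoch count, and the scaling of targets into the sigmoid's (0,1) range (decimal value divided by 7) are illustrative assumptions, not taken from the original code.

// Independent minimal sketch (not the original author's code): a 3-10-1
// sigmoid network trained by gradient descent on the eight samples 000..111.
#include <cstdio>
#include <cstdlib>  // rand(), RAND_MAX
#include <cmath>    // exp()

const int IN = 3, HID = 10, OUT = 1, SAMPLES = 8;

double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }
double rnd() { return 2.0 * rand() / RAND_MAX - 1.0; } // uniform in [-1, 1]

int main()
{
    double w[IN][HID], w1[HID][OUT], b1[HID], b2[OUT];
    for (int i = 0; i < IN; i++)
        for (int j = 0; j < HID; j++) w[i][j] = rnd();
    for (int j = 0; j < HID; j++) {
        b1[j] = rnd();
        for (int k = 0; k < OUT; k++) w1[j][k] = rnd();
    }
    for (int k = 0; k < OUT; k++) b2[k] = rnd();

    // The eight training pairs: bits of s as input, s/7 as the scaled target.
    double p[SAMPLES][IN], t[SAMPLES][OUT];
    for (int s = 0; s < SAMPLES; s++) {
        for (int i = 0; i < IN; i++) p[s][i] = (s >> (IN - 1 - i)) & 1;
        t[s][0] = s / 7.0;
    }

    const double rate = 0.5;               // illustrative learning rate
    for (int epoch = 0; epoch < 20000; epoch++) {
        for (int s = 0; s < SAMPLES; s++) {
            double h[HID], o[OUT];
            for (int j = 0; j < HID; j++) {          // forward: input -> hidden
                double sum = b1[j];
                for (int i = 0; i < IN; i++) sum += p[s][i] * w[i][j];
                h[j] = sigmoid(sum);
            }
            for (int k = 0; k < OUT; k++) {          // forward: hidden -> output
                double sum = b2[k];
                for (int j = 0; j < HID; j++) sum += h[j] * w1[j][k];
                o[k] = sigmoid(sum);
            }
            double dq[OUT], dh[HID];
            for (int k = 0; k < OUT; k++)            // output delta
                dq[k] = (t[s][k] - o[k]) * o[k] * (1.0 - o[k]);
            for (int j = 0; j < HID; j++) {          // hidden delta, backpropagated
                double err = 0.0;
                for (int k = 0; k < OUT; k++) err += dq[k] * w1[j][k];
                dh[j] = err * h[j] * (1.0 - h[j]);
            }
            for (int j = 0; j < HID; j++)            // update hidden -> output
                for (int k = 0; k < OUT; k++) w1[j][k] += rate * dq[k] * h[j];
            for (int k = 0; k < OUT; k++) b2[k] += rate * dq[k];
            for (int i = 0; i < IN; i++)             // update input -> hidden
                for (int j = 0; j < HID; j++) w[i][j] += rate * dh[j] * p[s][i];
            for (int j = 0; j < HID; j++) b1[j] += rate * dh[j];
        }
    }

    for (int s = 0; s < SAMPLES; s++) {              // report, rescaled to 0..7
        double h[HID];
        for (int j = 0; j < HID; j++) {
            double sum = b1[j];
            for (int i = 0; i < IN; i++) sum += p[s][i] * w[i][j];
            h[j] = sigmoid(sum);
        }
        double sum = b2[0];
        for (int j = 0; j < HID; j++) sum += h[j] * w1[j][0];
        printf("%d%d%d -> %.2f\n",
               (int)p[s][0], (int)p[s][1], (int)p[s][2], sigmoid(sum) * 7.0);
    }
    return 0;
}

After training, the printed predictions should land close to 0 through 7. Note the target scaling: a (0,1) sigmoid can never emit 7 directly, so the target is stored as s/7 and the prediction rescaled on output; the endpoint samples (000 and 111) sit on the sigmoid's asymptotes and therefore converge more slowly than the interior ones.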
