C++ implementation of a BP artificial neural network

Source: Internet
Author: User

The BP (back-propagation) network, proposed in 1986 by a team of scientists led by Rumelhart and McClelland, is a multilayer feedforward network trained with the error back-propagation algorithm, and it is currently one of the most widely used neural network models. A BP network can learn and store a large number of input-output pattern mappings without requiring the mathematical equations describing those mappings to be known beforehand. Its learning rule is steepest descent (the gradient method): the network weights and thresholds are continuously adjusted through back-propagation so that the sum of squared errors of the network is minimized. The topology of a BP network model includes an input layer, a hidden layer, and an output layer.
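As a concrete illustration of that learning rule, here is a hypothetical one-weight example (not code from the article): gradient descent on the squared error of a single linear neuron repeatedly nudges the weight against the error gradient, shrinking the error at every step.

```cpp
// Hypothetical one-weight illustration (not from the article's code) of the
// steepest-descent rule that BP applies to every weight: w <- w - rate * dE/dw.
// Here the "network" is a single linear neuron y = w * x with squared
// error E = 0.5 * (t - y)^2, so dE/dw = -(t - y) * x.
#include <cassert>

double descend_once(double &w, double x, double t, double rate)
{
    double y = w * x;            // neuron output
    double grad = -(t - y) * x;  // error gradient dE/dw
    w -= rate * grad;            // one steepest-descent step
    double err = t - w * x;      // residual error after the step
    return 0.5 * err * err;      // new squared error
}
```

Repeated calls shrink the squared error toward zero; train() below does the same thing simultaneously for every weight and threshold in the network.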

Here are some of the articles I've collected about neural networks:

An introduction to neural networks: pattern learning with the back-propagation algorithm
http://www.ibm.com/developerworks/cn/linux/other/l-neural/index.html

Artificial Intelligence Java Tank Robot Series: Neural Networks, Part 1
http://www.ibm.com/developerworks/cn/java/j-lo-robocode3/index.html
Artificial Intelligence Java Tank Robot Series: Neural Networks, Part 2
http://www.ibm.com/developerworks/cn/java/j-lo-robocode4/

Building a neural network with Python: a Hopfield network that can reconstruct distorted patterns and eliminate noise
http://www.ibm.com/developerworks/cn/linux/l-neurnet/

Basic data for a MATLAB BP neural network
http://www.cnblogs.com/galaxyprince/archive/2010/12/20/1911157.html

http://www.codeproject.com/KB/recipes/aforge_neuro.aspx
The author provides several application examples.

The following C++ code implements a BP network. It is trained on 8 samples, each pairing a 3-bit binary number with its desired output; after training, the network maps an input 3-bit binary number to the corresponding decimal digit.

// Convert a three-bit binary number to a decimal number
#include <iostream>
#include <cmath>
#include <cstdlib>   // rand(), RAND_MAX
using namespace std;

#define innode 3       // number of input nodes
#define hidenode 10    // number of hidden nodes
#define outnode 1      // number of output nodes
#define trainsample 8  // number of BP training samples

class BpNet
{
public:
    void train(double p[trainsample][innode], double t[trainsample][outnode]); // BP training
    double p[trainsample][innode];   // input samples
    double t[trainsample][outnode];  // expected outputs of the samples
    double *recognize(double *p);    // BP recognition
    void writetrain();               // save the trained weights
    void readtrain();                // load saved weights, so the network need not be retrained every time
    BpNet();
    virtual ~BpNet();

public:
    void init();
    double w[innode][hidenode];      // input-to-hidden weights
    double w1[hidenode][outnode];    // hidden-to-output weights
    double b1[hidenode];             // hidden-node thresholds
    double b2[outnode];              // output-node thresholds

    double rate_w;   // weight learning rate (input layer - hidden layer)
    double rate_w1;  // weight learning rate (hidden layer - output layer)
    double rate_b1;  // hidden-layer threshold learning rate
    double rate_b2;  // output-layer threshold learning rate

    double e;               // accumulated error
    double error;           // maximum allowed error
    double result[outnode]; // BP output
};

BpNet::BpNet()
{
    error = 1.0;
    e = 0.0;
    rate_w  = 0.9;  // weight learning rate (input layer - hidden layer)
    rate_w1 = 0.9;  // weight learning rate (hidden layer - output layer)
    rate_b1 = 0.9;  // hidden-layer threshold learning rate
    rate_b2 = 0.9;  // output-layer threshold learning rate
}

BpNet::~BpNet()
{
}

void winit(double w[], int n)  // weight initialization
{
    for (int i = 0; i < n; i++)
        w[i] = ((double)rand() / RAND_MAX) - 1;
}

void BpNet::init()
{
    winit((double*)w, innode * hidenode);
    winit((double*)w1, hidenode * outnode);
    winit(b1, hidenode);
    winit(b2, outnode);
}

void BpNet::train(double p[trainsample][innode], double t[trainsample][outnode])
{
    double pp[hidenode];   // correction errors of the hidden nodes
    double qq[outnode];    // deviation between desired and actual output
    double yd[outnode];    // desired output
    double x[innode];      // input vector
    double x1[hidenode];   // hidden-node state values
    double x2[outnode];    // output-node state values
    double o1[hidenode];   // hidden-layer activation values
    double o2[hidenode];   // output-layer activation values

    for (int isamp = 0; isamp < trainsample; isamp++)  // loop over the training samples
    {
        for (int i = 0; i < innode; i++)
            x[i] = p[isamp][i];    // input of the sample
        for (int i = 0; i < outnode; i++)
            yd[i] = t[isamp][i];   // expected output of the sample

        // construct the input and output for each sample
        for (int j = 0; j < hidenode; j++)
        // ... (the rest of the listing is truncated in the source)
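The listing above breaks off inside train(). Below is a self-contained sketch of what the remainder of such a BP implementation typically computes: the forward pass, the back-propagated correction errors, and the steepest-descent updates. The sigmoid activation and the helper name train_one are my assumptions, not something shown in the article's truncated code.

```cpp
// Hedged sketch of one BP training pass over a single sample, assuming a
// sigmoid activation; train_one and its signature are hypothetical names.
#include <cmath>
#include <cstdlib>
#include <cassert>

const int innode = 3, hidenode = 10, outnode = 1;

double sigmoid(double v) { return 1.0 / (1.0 + exp(-v)); }

// One steepest-descent pass over a single sample; returns its squared error.
double train_one(const double x[innode], const double yd[outnode],
                 double w[innode][hidenode], double w1[hidenode][outnode],
                 double b1[hidenode], double b2[outnode], double rate)
{
    double o1[hidenode], o2[outnode];   // layer activation values
    double pp[hidenode], qq[outnode];   // hidden / output correction errors

    // forward pass: input layer -> hidden layer
    for (int j = 0; j < hidenode; j++) {
        double net = b1[j];
        for (int i = 0; i < innode; i++) net += w[i][j] * x[i];
        o1[j] = sigmoid(net);
    }
    // forward pass: hidden layer -> output layer
    for (int k = 0; k < outnode; k++) {
        double net = b2[k];
        for (int j = 0; j < hidenode; j++) net += w1[j][k] * o1[j];
        o2[k] = sigmoid(net);
    }

    // backward pass: output deltas (qq) and squared error
    double err = 0.0;
    for (int k = 0; k < outnode; k++) {
        qq[k] = (yd[k] - o2[k]) * o2[k] * (1.0 - o2[k]);
        err += (yd[k] - o2[k]) * (yd[k] - o2[k]);
    }
    // backward pass: hidden deltas (pp) via the chain rule
    for (int j = 0; j < hidenode; j++) {
        double s = 0.0;
        for (int k = 0; k < outnode; k++) s += qq[k] * w1[j][k];
        pp[j] = s * o1[j] * (1.0 - o1[j]);
    }

    // steepest-descent updates of weights and thresholds
    for (int j = 0; j < hidenode; j++)
        for (int k = 0; k < outnode; k++) w1[j][k] += rate * qq[k] * o1[j];
    for (int k = 0; k < outnode; k++) b2[k] += rate * qq[k];
    for (int i = 0; i < innode; i++)
        for (int j = 0; j < hidenode; j++) w[i][j] += rate * pp[j] * x[i];
    for (int j = 0; j < hidenode; j++) b1[j] += rate * pp[j];

    return err;
}
```

Calling such a routine once per sample inside the isamp loop, and repeating until the accumulated error e falls below the allowed maximum error, reproduces the training scheme the article describes.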


