**This article was written by @Star Shen Pavilion Ice. Please credit the author and source when reproducing.**

**Article link: http://blog.csdn.net/xingchenbingbuyu/article/details/53674544**

**Weibo: http://weibo.com/xingchenbing**

Enough preamble; let's get straight to it.

Since we are implementing this in C++, it is natural to design a neural network class to represent the network, which I call the Net class. Because this class name is so common that it could easily conflict with code written by others, all of my code lives in the namespace Liu; as you might guess, Liu is my surname. In a previous blog post on back-propagation resources I listed a few good references, so readers who are not yet familiar with the theory can consult those first. This article assumes the reader has a basic understanding of neural network theory.

Before actually starting to code, we still need to cover some neural network fundamentals; in essence, this is the thinking behind the class design and the program.

In short, a neural network consists of a few major elements: neuron nodes, layers, weights, and biases. Its two computational processes are forward propagation and back propagation. The forward propagation of each layer consists of a linear operation, the weighted sum (a convolution in convolutional networks), followed by a nonlinear activation function. Back propagation mainly updates the weights using the BP algorithm.
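To make the forward step concrete, here is a minimal sketch of one layer's forward computation, using plain std::vector instead of cv::Mat. The function `forwardLayer` and the choice of sigmoid as activation are illustrative assumptions for this sketch, not part of the Net class:

```cpp
#include <cmath>
#include <vector>

// One layer's forward step: output = sigmoid(W * input + b).
// W has one row per output neuron; input and output are column vectors.
std::vector<double> forwardLayer(const std::vector<std::vector<double>>& W,
                                 const std::vector<double>& b,
                                 const std::vector<double>& input)
{
    std::vector<double> output(W.size());
    for (std::size_t i = 0; i < W.size(); ++i)
    {
        double sum = b[i];                        // start from the bias
        for (std::size_t j = 0; j < input.size(); ++j)
            sum += W[i][j] * input[j];            // weighted sum (linear part)
        output[i] = 1.0 / (1.0 + std::exp(-sum)); // sigmoid (nonlinear part)
    }
    return output;
}
```

The cv::Mat version in the Net class expresses the same computation as a single matrix-vector product.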

Although there are many more details, for a first article the above is enough. Almost all of the computation in a neural network can be expressed as matrix computation, which is one reason I use OpenCV's Mat class; the other reason is that OpenCV is simply the library I know best. Many better libraries and frameworks use multiple classes to represent the different parts of a neural network: for example, a Blob class for data, Layer classes for the various layers, and Optimizer classes for the optimization algorithms. I do nothing so elaborate here; my abilities are limited, so a single Net class represents the whole network.

Let the code speak. The Net class is declared in Net.h, roughly as follows:

```cpp
#ifndef NET_H
#define NET_H

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
//#include <iomanip>
#include "Function.h"

namespace Liu
{
	class Net
	{
	public:
		std::vector<int> layer_neuron_num;
		std::vector<cv::Mat> layer;
		std::vector<cv::Mat> weights;
		std::vector<cv::Mat> bias;

	public:
		Net() {};
		~Net() {};

		// Initialize net: generate weights matrices, layer matrices and bias matrices
		// (bias defaults to all zeros)
		void initNet(std::vector<int> layer_neuron_num_);

		// Initialize the weights matrices
		void initWeights(int type = 0, double a = 0., double b = 0.1);

		// Initialize the bias matrices
		void initBias(cv::Scalar& bias);

		// Forward propagation
		void forward();

		// Back propagation
		void backward();

	protected:
		// Initialize one weight matrix: if type == 0, Gaussian; else uniform
		void initWeight(cv::Mat &dst, int type, double a, double b);

		// Activation function
		cv::Mat activationFunction(cv::Mat &x, std::string func_type);

		// Compute delta error
		void deltaError();

		// Update weights
		void updateWeights();
	};
}
#endif // NET_H
```

This is not the complete class, just a simplified version for this article; simplified, it reads more clearly.

The Net class currently has only four member variables:

- the number of neurons in each layer (`layer_neuron_num`)
- the layers (`layer`)
- the weight matrices (`weights`)
- the bias terms (`bias`)

The weights are represented by matrices, which needless to say is for ease of computation; each layer and each bias term is also represented by a Mat, specifically a single-column matrix.

Besides the default constructor and destructor, the Net class has:

- `initNet()`: initializes the neural network
- `initWeights()`: initializes the weight matrices, calling the `initWeight()` function
- `initBias()`: initializes the bias terms
- `forward()`: performs the forward pass, including the linear operation and nonlinear activation, and also computes the error
- `backward()`: performs back propagation, calling the `updateWeights()` function to update the weights

These functions are already the core of the neural network program. The rest will be implemented gradually, adding whatever is needed as the need arises.

Let's start with the initNet() function. It takes only one parameter, the number of neurons in each layer, and initializes the neural network. Initializing the network here means generating each layer matrix, each weight matrix, and each bias matrix. It sounds complicated, but it is actually simple.

The implementation is in Net.cpp:

```cpp
// Initialize net
void Net::initNet(std::vector<int> layer_neuron_num_)
{
	layer_neuron_num = layer_neuron_num_;

	// Generate every layer
	layer.resize(layer_neuron_num.size());
	for (int i = 0; i < layer.size(); i++)
	{
		layer[i].create(layer_neuron_num[i], 1, CV_32FC1);
	}
	std::cout << "Generate layers, successfully!" << std::endl;

	// Generate every weights matrix and bias
	weights.resize(layer.size() - 1);
	bias.resize(layer.size() - 1);
	for (int i = 0; i < (layer.size() - 1); ++i)
	{
		weights[i].create(layer[i + 1].rows, layer[i].rows, CV_32FC1);
		//bias[i].create(layer[i + 1].rows, 1, CV_32FC1);
		bias[i] = cv::Mat::zeros(layer[i + 1].rows, 1, CV_32FC1);
	}
	std::cout << "Generate weights matrices and bias, successfully!" << std::endl;
	std::cout << "Initialise Net, done!" << std::endl;
}
```

Generating the various matrices is not difficult; the only thing to pay attention to is determining the number of rows and columns of each weight matrix. It is also worth mentioning that the bias is set to all zeros by default.
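The rule for those dimensions can be sketched independently of OpenCV. `weightShapes` below is a hypothetical helper for illustration, not part of the Net class:

```cpp
#include <utility>
#include <vector>

// Compute the shape of each weight matrix for a given layer specification.
// weights[i] maps layer i (an n_i x 1 column) to layer i+1 (n_{i+1} x 1),
// so it must have n_{i+1} rows and n_i columns.
std::vector<std::pair<int, int>> weightShapes(const std::vector<int>& layer_neuron_num)
{
    std::vector<std::pair<int, int>> shapes;
    for (std::size_t i = 0; i + 1 < layer_neuron_num.size(); ++i)
        shapes.push_back({ layer_neuron_num[i + 1], layer_neuron_num[i] });
    return shapes;
}
```

For the {784, 100, 10} network used later, this gives a 100 x 784 matrix and a 10 x 100 matrix.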

The weight-initialization function initWeights() calls the initWeight() function; the only real difference between the two is initializing one matrix versus all of them.

```cpp
// Initialize one weight matrix: if type == 0, Gaussian; else uniform
void Net::initWeight(cv::Mat &dst, int type, double a, double b)
{
	if (type == 0)
	{
		randn(dst, a, b);
	}
	else
	{
		randu(dst, a, b);
	}
}

// Initialize the weights matrices
void Net::initWeights(int type, double a, double b)
{
	// Initialize every weights matrix
	for (int i = 0; i < weights.size(); ++i)
	{
		initWeight(weights[i], type, a, b);
	}
}
```
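For readers unfamiliar with cv::randn and cv::randu, here is a plain-C++ sketch of what they do in this context: fill a buffer from a normal distribution N(a, b²) when type is 0, otherwise uniformly from [a, b). `makeWeights` and its fixed seed are illustrative assumptions, not part of the Net class:

```cpp
#include <random>
#include <vector>

// Fill a flat weight buffer either from N(a, b^2) (type == 0, like cv::randn
// with mean a and stddev b) or uniformly from [a, b) (like cv::randu).
std::vector<double> makeWeights(std::size_t n, int type, double a, double b,
                                unsigned seed = 42)
{
    std::mt19937 gen(seed);
    std::vector<double> w(n);
    if (type == 0)
    {
        std::normal_distribution<double> dist(a, b);       // mean a, stddev b
        for (double& x : w) x = dist(gen);
    }
    else
    {
        std::uniform_real_distribution<double> dist(a, b); // range [a, b)
        for (double& x : w) x = dist(gen);
    }
    return w;
}
```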

Bias initialization assigns the same value to every bias term. Here a cv::Scalar object is assigned to each matrix, which fills all of its elements with that value.

```cpp
// Initialize the bias matrices
void Net::initBias(cv::Scalar& bias_)
{
	for (int i = 0; i < bias.size(); i++)
	{
		bias[i] = bias_;
	}
}
```
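The semantics of that assignment can be mimicked with plain vectors. `fillBias` is an illustrative stand-in, not part of the Net class:

```cpp
#include <vector>

// Plain-vector analog of `bias[i] = bias_`: assigning a cv::Scalar to a
// cv::Mat overwrites every element of the matrix with that single value.
void fillBias(std::vector<std::vector<double>>& bias, double value)
{
    for (std::vector<double>& b : bias)
        b.assign(b.size(), value); // set every element to `value`
}
```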

At this point, everything in the neural network that needs initialization has been initialized.

We can now use the following code to initialize a neural network. It doesn't do anything useful yet, but at least it lets us check whether the code so far has bugs:

```cpp
#include "../include/Net.h"
//#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
using namespace Liu;

int main(int argc, char *argv[])
{
	// Set the neuron number of every layer
	vector<int> layer_neuron_num = { 784, 100, 10 };

	// Initialize the net and weights
	Net net;
	net.initNet(layer_neuron_num);
	net.initWeights(0, 0., 0.01);
	Scalar bias_value(0.05);
	net.initBias(bias_value);

	getchar();
	return 0;
}
```

I have tested this myself and it runs without problems.

That's it for this article; forward propagation and back propagation come in the next one. All of the code is already hosted on GitHub, and anyone interested is welcome to download and look through it. Comments are welcome.