A Self-Summary of a Simple Character Recognition Algorithm Based on a BP Neural Network (C Language Edition)


This article is a summary of my own reading of the source code. Please credit the source when reposting.

You are welcome to get in touch. QQ: 1037701636 Email: [email protected]

A bit of chatter before the main text:

I don't consider myself someone who is particularly good at learning algorithms. For the past month, work has required me to get to grips with the BP neural network. Until now I had always felt that neural networks, ant colony algorithms, robust control, and the rest of the algorithm community's topics were lofty things: nice to hear about, but never touched or understood. This encounter with the BP neural network has given me an initial grasp of one of them. Going from understanding an algorithm's basic principles and formulas to code a computer can run is what people call the perfect marriage of mathematics and computing; perhaps that process is what ACM is all about?

1. Basic concepts and principles of BP neural networks

Looking through a lot of material about neural networks online, my understanding is that the BP neural network is the simplest and most universal, and the work I need to complete is mainly a simple character-recognition task.

The concept of the BP neural network:

The BP (back propagation) network was proposed in 1986 by a team of scientists led by Rumelhart and McClelland. It is a multi-layer feedforward network trained by the error back-propagation algorithm, and it is one of the most widely used neural network models today.

A BP network can learn and store a large number of input-output pattern mappings without the mathematical equations describing those mappings having to be specified in advance.

The basic model diagram of the BP neural network is as follows:

[Figure: the basic three-layer BP network model.] The three layers of a BP network: the input layer, the hidden layer, and the output layer.

All we have to do is build a BP neural network of our own according to our needs.

2. The principle of the BP network and the derivation of the related formulas

2.1 The basic idea of the BP network:

The learning process of a BP neural network consists of two phases: the forward transmission of information and the backward propagation of error.

(1) Forward transmission: the input sample is passed in at the input layer, processed through the hidden layer, and propagated to the output layer. If the actual output of the output layer does not match the expected output, the error of the output layer is computed and the process turns to backward propagation.
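Concretely, in the forward pass each node forms a weighted sum of the previous layer's outputs and squashes it with the activation function. In standard notation (reconstructed by me, since the original formula images are lost), for a hidden node $j$:

$net_j = \sum_i w_{ij}\,x_i - \theta_j, \qquad y_j = f(net_j)$

where $x_i$ are the input-layer values, $w_{ij}$ the input-to-hidden weights, $\theta_j$ the node threshold, and $f$ the activation function; the hidden-to-output step has exactly the same form.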

(2) Backward propagation of error: the output error is propagated back, in some form, through the hidden layer towards the input layer, and the error is distributed to all the units in each layer. The error signal obtained for each unit is the basis for correcting that unit's weights. This alternation of forward signal transmission and backward error propagation, with the weights continuously adjusted, is the learning/training process of the network.

Training ends when the prescribed error or a certain number of training iterations is reached. The training process of a BP network can be understood as driving the error between the ideal target t_k and the actual output o_k for a sample input towards 0:

[Figure: the error function and the gradient-descent weight corrections. Reconstructed in standard notation, the error for one sample is]

$E = \frac{1}{2}\sum_k (t_k - o_k)^2$

By the error gradient descent method we obtain the correction of the output-layer weights $\Delta w_{ki}$, the correction of the output-layer thresholds $\Delta a_k$, the correction of the hidden-layer weights $\Delta w_{ij}$, and the correction of the hidden-layer thresholds, each of the form $\Delta w = -\eta\,\partial E/\partial w$.

These formulas show that the network's output error is a function of the weights of every layer ($w_{jk}$ and $v_{ij}$), so adjusting the weights changes the error E. Obviously, the principle for adjusting the weights is to keep the error decreasing, so the weight adjustment should be proportional to the negative gradient of the error. The intuitive interpretation of the BP algorithm is therefore: the core of the BP algorithm is to adjust the weights continuously so that the error decreases continuously.

Obviously, whether we move along the positive or the negative gradient, in the discrete case we must keep adjusting the weights until the error reaches its minimum. The adjustment rate eta ($\eta$, also called the weight step value) determines the training speed of the whole neural network. For the derivation of the BP formulas, refer to http://en.wikipedia.org/wiki/Backpropagation. The core of the BP algorithm is the repeated forward evaluation of a stimulus (activation) function, and this stimulus function is usually the S-shaped logistic sigmoid (logsig):
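As a reference, here is a minimal C sketch of the logistic sigmoid and its derivative (the names sigmoid and sigmoid_deriv are mine, not from the original source); the derivative form y*(1-y) is what makes the delta computations in the derivation cheap:

#include <math.h>

/* logistic sigmoid: f(x) = 1 / (1 + e^(-x)) */
static double sigmoid(double x)
{
    return 1.0 / (1.0 + exp(-x));
}

/* derivative expressed through the output y = f(x): f'(x) = y * (1 - y) */
static double sigmoid_deriv(double y)
{
    return y * (1.0 - y);
}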

[Figures: the sigmoid activation formula and the flow chart of the BP training steps.]

Note: the formulas above are taken from network resources; the steps of this flow chart are a great help for understanding the core idea of the whole BP neural network algorithm.

3. Implementation of the BP neural network algorithm in C

3.1 Initial preparation: define the structs for the three layers. The sample information, the input-layer information, the hidden-layer information, and the output-layer information are finally gathered into one BP core-algorithm struct, bp_alg_core_params:

typedef struct {
    unsigned int img_sample_num;   // number of image character samples to be trained
    unsigned int img_width;
    unsigned int img_height;
    unsigned char *img_buffer;
} img_smaple_params;
typedef img_smaple_params* hd_sample_params;

typedef struct {
    unsigned int in_num;           // number of input layer nodes
    double *in_buf;                // input layer output data cache
    double **weight;               // weights: each input node connects to every hidden node
    unsigned int weight_size;
    double **pri_deltas;           // previous weight changes, recorded for the momentum term
    double *deltas;                // weight corrections currently fed back from the hidden layer
} bp_input_layer_params;
typedef bp_input_layer_params* hd_input_layer_params;

typedef struct {
    unsigned int hid_num;          // number of hidden layer nodes
    double *hid_buf;               // hidden layer output data cache
    double **weight;               // weights: each hidden node connects to every output node
    unsigned int weight_size;
    double **pri_deltas;           // previous weight changes, recorded for the momentum term
    double *deltas;                // weight corrections currently fed back from the output layer
} bp_hidden_layer_params;
typedef bp_hidden_layer_params* hd_hidden_layer_params;

typedef struct {
    unsigned int out_num;          // number of output layer nodes
    double *out_buf;               // output layer output data cache
    double *out_target;            // expected (target) output
} bp_out_layer_params;
typedef bp_out_layer_params* hd_out_layer_params;

typedef struct {
    unsigned int size;                    // size of this structure
    unsigned int train_ite_num;           // number of training iterations
    unsigned int sample_num;              // number of samples to be trained
    double momentum;                      // momentum for the weight/threshold adjustment
    double eta;                           // training step value (learning rate)
    double err2_thresh;                   // minimum mean square error threshold
    hd_sample_params p_sample;            // sample set parameters
    hd_input_layer_params p_inlayer;      // input layer parameters
    hd_hidden_layer_params p_hidlayer;    // hidden layer parameters
    hd_out_layer_params p_outlayer;       // output layer parameters
} bp_alg_core_params;
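To make the structure concrete, here is a hedged setup sketch; the specific values (10,000 epochs, ten samples, momentum 0.9, eta 0.3) are illustrative assumptions of mine, not the original author's configuration:

/* illustrative configuration; the numeric values are assumptions */
static void setup_core_params(bp_alg_core_params *core)
{
    core->train_ite_num = 10000;     /* upper bound on training epochs (assumed) */
    core->sample_num    = 10;        /* e.g. ten character samples (assumed) */
    core->momentum      = 0.9;       /* typical momentum factor */
    core->eta           = 0.3;       /* typical learning rate / step value */
    core->err2_thresh   = 0.001;     /* acceptable error, as suggested in section 3.3 */
    /* p_sample, p_inlayer, p_hidlayer and p_outlayer still have to be
       allocated and wired up, e.g. with alloc_2d_double_buf() below
       for the two-dimensional weight arrays */
}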


3.2 Initialization of the parameters, mainly the allocation of the compute buffers.

Let m, n, and p denote the number of input, hidden, and output nodes respectively.

Then the input buffer is allocated with size m+1; similarly, the hidden-layer and output buffers are allocated with sizes n+1 and p+1 (index 0 is reserved, and the buffers are filled from index 1, as the memcpy calls in bp_train below show). The elements are of the default double-precision type double.
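For instance, inside the initialization routine the one-dimensional buffers could be allocated like this (a sketch; calloc from <stdlib.h> zero-initializes the memory, and m, n, p are the node counts above):

/* 1-D compute buffers, one extra slot so nodes can be indexed from 1 */
p_inlayer->in_buf      = (double *)calloc(m + 1, sizeof(double));
p_hidlayer->hid_buf    = (double *)calloc(n + 1, sizeof(double));
p_outlayer->out_buf    = (double *)calloc(p + 1, sizeof(double));
p_outlayer->out_target = (double *)calloc(p + 1, sizeof(double));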

Weight buffer size: in a BP network every node must correspond one-to-one with each node of the next layer, so the weights take the form of a two-dimensional array. We allocate the input-layer weight space with size (m+1) x (n+1), and the hidden-layer weight space with size (n+1) x (p+1).

Of course, the weight-correction amounts are variables fed back from each node's output value; there is in fact one feedback value per node. From the formulas we can see that a one-dimensional array is enough to represent the feedback correction of each node (it does not depend on the input node's data): these are the deltas arrays in the layer structs above.

Finally, the feedback correction from the output layer to the hidden layer and the feedback correction from the hidden layer to the input layer are each held as a one-dimensional array, and the two-dimensional correction is completed during the weight-value calculation by combining it with each node's input data.

The function that completes the dynamic allocation of a two-dimensional array is shown below:

double** alloc_2d_double_buf(unsigned int m, unsigned int n)
{
    unsigned int i;
    double **buf = NULL;
    double *head;

    /* allocate the row-pointer array plus the 2-D data cache in one block */
    buf = (double **)malloc(sizeof(double *) * m + m * n * sizeof(double));
    if (buf == NULL)
    {
        ERR("malloc error!");
        exit(1);
    }
    head = (double *)(buf + m);

    memset((void *)head, 0x00, sizeof(double) * m * n);   // clear the 2-D buffer
    for (i = 0; i < m; i++)
    {
        buf[i] = head + i * n;
        DEG("alloc_2d_double_buf,  addr = 0x%x", buf[i]);
    }

    return buf;
}
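With this helper, the weight and previous-delta arrays from section 3.1 can be allocated directly (a usage sketch in the m/n/p naming above):

/* input layer: (m+1) rows, one per input node, each holding (n+1) hidden-layer weights */
p_inlayer->weight     = alloc_2d_double_buf(m + 1, n + 1);
p_inlayer->pri_deltas = alloc_2d_double_buf(m + 1, n + 1);

/* hidden layer: (n+1) rows, each holding (p+1) output-layer weights */
p_hidlayer->weight     = alloc_2d_double_buf(n + 1, p + 1);
p_hidlayer->pri_deltas = alloc_2d_double_buf(n + 1, p + 1);

/* note: a single free(buf) later releases both the row pointers and the
   data area, because alloc_2d_double_buf makes only one malloc call */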


3.3 BP neural network training process and continuous weight correction

The training pass goes, in order, through forward propagation, computation of the weight-correction values, weight adjustment, and accumulation of each sample's share of the error.

After all the sample nodes have been processed, the mean square error is computed; once the error satisfies a certain threshold, the BP neural network training can essentially end (the acceptable error is generally defined at about 0.001):
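bp_train below calls a helper calculate_err2 whose body is not listed in this post; a minimal sketch of what it presumably computes, the sum of squared target/output differences for one sample over the 1-based buffers, is:

/* assumed implementation: sum over k of (t_k - o_k)^2 for one sample */
double calculate_err2(double *out_buf, double *out_target, unsigned int out_num)
{
    unsigned int k;
    double err = 0.0;

    for (k = 1; k <= out_num; k++) {    /* buffers are used from index 1 */
        double diff = out_target[k] - out_buf[k];
        err += diff * diff;
    }
    return err;
}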

int bp_train(bp_alg_core_params *core_params)
{
    unsigned int i, j;
    unsigned int train_num, sample_num;
    double err2;    /* mean square error */

    DEG("enter bp_train function");

    if (core_params == NULL) {
        ERR("null pointer entry");
        return -1;
    }

    train_num  = core_params->train_ite_num;    /* training iterations */
    sample_num = core_params->sample_num;       /* number of samples */
    hd_sample_params p_sample         = core_params->p_sample;     /* sample set parameters */
    hd_input_layer_params p_inlayer   = core_params->p_inlayer;    /* input layer parameters */
    hd_hidden_layer_params p_hidlayer = core_params->p_hidlayer;   /* hidden layer parameters */
    hd_out_layer_params p_outlayer    = core_params->p_outlayer;   /* output layer parameters */

    DEG("the max train_num = %d", train_num);

    /* iterative training over the training samples */
    for (i = 0; i < train_num; i++) {
        err2 = 0.0;
        DEG("current train_num = %d", i);

        for (j = 0; j < sample_num; j++) {
            DEG("current sample id = %d", j);
            /* sample[] and out_target[] are the training data tables defined elsewhere */
            memcpy((unsigned char *)(p_inlayer->in_buf + 1), (unsigned char *)sample[j],
                   p_inlayer->in_num * sizeof(double));
            memcpy((unsigned char *)(p_outlayer->out_target + 1), (unsigned char *)out_target[j % 10],
                   p_outlayer->out_num * sizeof(double));

            /* forward output of the input layer to the hidden layer */
            bp_layerforward(p_inlayer->in_buf, p_hidlayer->hid_buf,
                            p_inlayer->in_num, p_hidlayer->hid_num, p_inlayer->weight);

            /* forward output of the hidden layer to the output layer */
            bp_layerforward(p_hidlayer->hid_buf, p_outlayer->out_buf,
                            p_hidlayer->hid_num, p_outlayer->out_num, p_hidlayer->weight);

            /* output layer feeds the error back to the hidden layer, i.e. the weight correction values */
            bp_outlayer_deltas(p_outlayer->out_buf, p_outlayer->out_target,
                               p_outlayer->out_num, p_hidlayer->deltas);

            /* hidden layer feeds the error back to the input layer; the correction
               depends on the adjustment values of the layer above */
            bp_hidlayer_deltas(p_hidlayer->hid_buf, p_hidlayer->hid_num,
                               p_outlayer->out_num, p_hidlayer->weight,
                               p_hidlayer->deltas, p_inlayer->deltas);

            /* adjust the hidden-layer-to-output-layer weights */
            adjust_layer_weight(p_hidlayer->hid_buf, p_hidlayer->weight,
                                p_hidlayer->pri_deltas, p_hidlayer->deltas,
                                p_hidlayer->hid_num, p_outlayer->out_num,
                                core_params->eta, core_params->momentum);

            /* adjust the input-layer-to-hidden-layer weights */
            adjust_layer_weight(p_inlayer->in_buf, p_inlayer->weight,
                                p_inlayer->pri_deltas, p_inlayer->deltas,
                                p_inlayer->in_num, p_hidlayer->hid_num,
                                core_params->eta, core_params->momentum);

            /* accumulate the total error over all samples */
            err2 += calculate_err2(p_outlayer->out_buf, p_outlayer->out_target,
                                   p_outlayer->out_num);
        }

        /* mean square error after one pass over all samples */
        err2 = err2 / (double)(p_outlayer->out_num * sample_num);
        INFO("err2 = %08f\n", err2);

        if (err2 < core_params->err2_thresh) {
            INFO("BP train success, costs valid iter nums: %d\n", i);
            return 1;
        }
    }

    INFO("BP train %d num failed! Need to modify core params\n", i);
    return 0;
}
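The forward-pass helper bp_layerforward is likewise only referenced above, not listed. Under the 1-based buffer layout described in section 3.2, a plausible sketch matching its call signature would be (an assumption of mine, with index 0 treated as the bias input, which is one common convention):

#include <math.h>

/* assumed implementation: out[j] = sigmoid(sum over i of weight[i][j] * in[i]) */
void bp_layerforward(double *in_buf, double *out_buf,
                     unsigned int in_num, unsigned int out_num, double **weight)
{
    unsigned int i, j;

    in_buf[0] = 1.0;                           /* index 0 acts as the bias input */
    for (j = 1; j <= out_num; j++) {
        double sum = 0.0;
        for (i = 0; i <= in_num; i++)          /* i = 0 folds the threshold into the weights */
            sum += weight[i][j] * in_buf[i];
        out_buf[j] = 1.0 / (1.0 + exp(-sum));  /* logistic sigmoid from section 2 */
    }
}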

4. Summary

After you have understood the core idea of the BP neural network algorithm, implementing it in code usually becomes far more effective. Simply grabbing code and analyzing it directly is a practice I do not recommend: without an understanding of the core idea of the algorithm, you cannot modify or optimize parts of it, and blind changes often lead to failed experiments.

The algorithm above satisfies the basic training process of a BP neural network. As for recognition, prediction, and other applications, the BP neural network can be trained and made to learn on any suitable sample source and target source, which gives it a certain universality. Later, the floating-point processing above can be converted to fixed-point DSP processing and applied to embedded devices.


Note:

Since so many people have asked me about the code, I have uploaded the simple character recognition algorithm based on a BP neural network (C language version). I have not studied it further since; thank you.

