Deep Learning Neural Network, Pure C Language Basic Edition

Source: Internet
Author: User
Tags: dnn


Today, deep learning has become a red-hot field, and the performance of deep neural networks (DNNs) in computer vision is remarkable. In engineering practice, convolutional networks are preferred over fully connected ones precisely because the latter's computational workload is so large. That workload, however, is not a fundamental obstacle: the network's structure lends itself to parallel computation. Since each unit can be evaluated independently, all units in a layer can be computed at the same time. We look forward to the development of hardware neural networks.
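The parallelism claim is easy to make concrete in code: every neuron in a layer depends only on the previous layer's outputs, so the loop over a layer's units can run in parallel. Below is a minimal sketch using OpenMP; it is not part of the program presented later, and the function name, weight layout, and parameters are illustrative assumptions.

#include <math.h>

/* Hypothetical layer evaluation: each unit is independent, so the loop
   over units can run in parallel. Compile with -fopenmp to enable. */
void layer_forward(const double *w,   /* n_units x n_inputs weights, row-major */
                   const double *b,   /* n_units biases */
                   const double *x,   /* n_inputs previous-layer outputs */
                   double *y,         /* n_units outputs */
                   int n_units, int n_inputs)
{
    int u;
    #pragma omp parallel for
    for (u = 0; u < n_units; u++) {
        double sum = b[u];
        int i;
        for (i = 0; i < n_inputs; i++)
            sum += w[u * n_inputs + i] * x[i];
        y[u] = 1.0 / (1.0 + exp(-sum));   /* sigmoid, as in the program below */
    }
}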


Below is a hand-written implementation of a neural network with an arbitrary number of hidden layers, easily ported to embedded devices. (Although written in a C style, the struct member functions require a C++ compiler.) The program is only a basic, matrix-style, fully connected deep network. The learning algorithm is stochastic gradient descent, and the activation function is the sigmoid. It performs well on small sample sets.
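For reference, here is the update rule the code implements, derived under the standard assumption of a squared-error loss (the article does not state the loss explicitly). The sigmoid and its derivative are

\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\,(1 - \sigma(x))

and for a neuron with output y = \sigma\left(\sum_i w_i x_i + w_b\right), target t, and loss E = \frac{1}{2}(y - t)^2,

\frac{\partial E}{\partial w_i} = (y - t)\, y\, (1 - y)\, x_i, \qquad w_i \leftarrow w_i - v\,(y - t)\, y\, (1 - y)\, x_i.

This is exactly the factor error * out * (1 - out) * in[i] that appears in UpdateWeight below, where error holds (y - t) for the output neuron and the back-propagated sum for hidden neurons.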

/*
 * Deep Learning Neural Network V1.0
 * made by xyt, 2015/7/23
 * Builds a multi-layer, matrix-style, fully connected neural network
 * with multiple inputs and a single output.
 * Learning strategy: stochastic gradient descent
 * Activation function: sigmoid
 * Call srand((unsigned)time(NULL)) before use so the random initial
 * weights differ from run to run.
 */
#ifndef _DNN_H
#define _DNN_H

#include <stdio.h>
#include <math.h>
#include <stdlib.h>
#include <time.h>

#define DNN_VEC  8   /* number of training sample groups */
#define DNN_INUM 5   /* input dimension */

double dnn_sig(double in) {   /* sigmoid activation function */
    return 1.0 / (1.0 + exp(-1.0 * in));
}

struct dnn_cell {             /* neuron structure */
    double w[DNN_INUM];       /* input weights */
    double wb;                /* bias weight */
    double in[DNN_INUM];      /* last input vector */
    double out;               /* last output */
    double error;             /* back-propagated error term */
    double v;                 /* learning rate */

    void SetCell_Default() {  /* default initialization: very small weights */
        int i;
        for (i = 0; i < DNN_INUM; i++) w[i] = 0.000001;
        wb = 0.000001;
        v = 0.001;
    }
    void SetCell_InitWeight(double Initial) {  /* uniform weight initialization */
        int i;
        for (i = 0; i < DNN_INUM; i++) w[i] = Initial;
        wb = Initial;
        v = 0.001;
    }
    void SetCell_InitAll(double Initial, double InV) {  /* uniform weights plus learning rate */
        int i;
        for (i = 0; i < DNN_INUM; i++) w[i] = Initial;
        wb = Initial;
        v = InV;
    }
    void SetCell_Precise(double *InW, double InWb, double InV) {  /* exact per-weight initialization */
        int i;
        for (i = 0; i < DNN_INUM; i++) w[i] = InW[i];
        wb = InWb;
        v = InV;
    }
    void SetIn(double *SIn) {  /* set the neuron input */
        int i;
        for (i = 0; i < DNN_INUM; i++) in[i] = SIn[i];
    }
    double GetOut() {          /* compute and store the neuron output */
        int i;
        double sum = 0;
        for (i = 0; i < DNN_INUM; i++) sum += w[i] * in[i];
        sum += wb;
        out = dnn_sig(sum);
        return out;
    }
    void UpdateWeight() {      /* gradient-descent weight update */
        int i;
        for (i = 0; i < DNN_INUM; i++)
            w[i] -= v * error * out * (1 - out) * in[i];
        wb -= v * error * out * (1 - out);   /* bias updated the same way */
    }
    void SetError(double InErr) { error = InErr; }  /* set the propagated error */
    void SetSpeed(double InV)   { v = InV; }        /* set the learning rate */
};

/*
 * Forward propagation: returns the network output.
 * incell: the neuron array; deep: the number of layers.
 * Layout: neurons 0 .. DNN_INUM-1 form the first layer, each subsequent
 * group of DNN_INUM neurons forms the next layer, and the final output
 * neuron is a layer by itself. With deep = 4 and DNN_INUM = 5 (five
 * inputs), the array must hold (4 - 1) * 5 + 1 = 16 neurons.
 * in: an array of DNN_INUM input values.
 */
double DNN_Cal(dnn_cell *incell, int deep, double *in) {
    double out = 0;
    int i, j, k, count = 0;
    double tmp[DNN_INUM];
    for (i = 0; i < DNN_INUM; i++) tmp[i] = in[i];
    for (j = 0; j < deep - 1; j++) {
        for (i = j * DNN_INUM; i < (j * DNN_INUM + DNN_INUM); i++) {
            incell[i].SetIn(tmp);
            incell[i].GetOut();
            count++;
        }
        k = 0;
        for (i = j * DNN_INUM; i < (j * DNN_INUM + DNN_INUM); i++) {
            tmp[k] = incell[i].out;
            k++;
        }
    }
    incell[count].SetIn(tmp);          /* the lone output neuron */
    out = incell[count].GetOut();
    return out;
}

/*
 * Trains on the input matrix and updates the network in place.
 * Each data group holds DNN_INUM values; except for the single node in
 * the final output layer, every layer has DNN_INUM nodes, and the
 * network must have at least two layers.
 * cell: the neuron array; deep: the number of layers; InMat: the
 * DNN_VEC training input groups; expect: the expected outputs;
 * n: the number of training iterations.
 * Returns the mean absolute error after training.
 */
double DNN_Train(dnn_cell *cell, int deep, double InMat[DNN_VEC][DNN_INUM], double *expect, int n) {
    double out, devi, sum;
    double de[DNN_VEC];
    int co = n, kp = -1;
    int i, j, k, tt, l;
    for (i = 0; i < DNN_VEC; i++) de[i] = 9.9;
    while (co--) {
        /* pick a random sample (stochastic gradient descent); the modulo
           form avoids the out-of-range case of rand()*DNN_VEC/RAND_MAX */
        kp = rand() % DNN_VEC;
        out = DNN_Cal(cell, deep, InMat[kp]);
        devi = out - expect[kp];
        de[kp] = devi;
        /* debug output: absolute errors of samples 0, 3, and 7, plus the chosen index */
        printf("%lf %lf %lf %d\n", fabs(de[0]), fabs(de[3]), fabs(de[7]), kp);
        tt = (deep - 1) * DNN_INUM;    /* index of the output neuron */
        cell[tt].error = devi;
        l = 0;                         /* propagate the error to the last hidden layer */
        for (i = (deep - 2) * DNN_INUM; i < tt; i++) {
            cell[i].error = cell[tt].error * cell[tt].out * (1 - cell[tt].out) * cell[tt].w[l];
            l++;
        }
        for (j = deep - 2; j > 0; j--) {   /* propagate through the remaining hidden layers */
            l = 0;
            for (i = (j - 1) * DNN_INUM; i < j * DNN_INUM; i++) {
                sum = 0;
                for (k = j * DNN_INUM; k < (j + 1) * DNN_INUM; k++)
                    sum += cell[k].error * cell[k].out * (1 - cell[k].out) * cell[k].w[l];
                cell[i].error = sum;
                l++;
            }
        }
        for (i = 0; i <= (deep - 1) * DNN_INUM; i++)   /* update every neuron's weights */
            cell[i].UpdateWeight();
        /* variable learning rate -- change this scheme as you see fit */
        for (i = 0; i <= (deep - 1) * DNN_INUM; i++)
            cell[i].SetSpeed(fabs(devi));
    }
    sum = 0;
    for (i = 0; i < DNN_VEC; i++) sum += fabs(de[i]);
    return sum / DNN_VEC;
}
#endif
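To make the flat cell-array layout concrete: hidden layer j (0-based) occupies cells j*DNN_INUM through j*DNN_INUM + DNN_INUM - 1, and the lone output neuron sits at index (deep-1)*DNN_INUM. The helper functions below are hypothetical, not part of the header; they merely name that arithmetic.

/* Hypothetical index helpers for the flat cell array (assumes dnn.h's DNN_INUM). */
int dnn_hidden_index(int layer, int unit) {   /* 0-based layer and unit-in-layer */
    return layer * DNN_INUM + unit;
}
int dnn_output_index(int deep) {              /* the single output neuron */
    return (deep - 1) * DNN_INUM;
}
/* Example: with deep = 4 and DNN_INUM = 5 the array holds
   (4 - 1) * 5 + 1 = 16 cells, and dnn_output_index(4) == 15. */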
The call example is as follows:

#include <iostream>
#include "dnn.h"
using namespace std;

int main() {
    srand((unsigned)time(NULL));
    double expect[8] = {0.23, 0.23, 0.23, 0.23, 0.83, 0.83, 0.83, 0.83};
    double in[8][5] = {
        {1,   2,   3,   4,   5},
        {1.1, 2.1, 3,   3.9, 5},
        {0.8, 2.2, 3,   4.2, 5},
        {0.9, 2.1, 3,   4,   5},
        {5,   4,   3,   2,   1},
        {4.9, 4.1, 2.9, 2,   1},
        {5,   4,   3.1, 2,   1},
        {5,   4,   2.9, 2.1, 1}
    };
    dnn_cell a[16];
    int i;
    for (i = 0; i < 16; i++)               /* random initial weights in [-1, 1] */
        a[i].SetCell_InitAll(rand() * 2.0 / RAND_MAX - 1, 0.001);
    DNN_Train(a, 4, in, expect, 100000);   /* 4 layers, 100000 iterations */
    double pp[5];
    while (1) {                            /* interactive evaluation loop */
        for (i = 0; i < 5; i++) cin >> pp[i];
        cout << DNN_Cal(a, 4, pp) << endl;
    }
}
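Because the header already exposes SetCell_Precise, a trained network can be persisted and restored by round-tripping each cell's weights. The sketch below does this with a plain-text file; the file format and function names are my own assumptions, not part of the original code.

#include <stdio.h>
#include "dnn.h"

/* Save every cell's weights and bias to a plain-text file. */
void dnn_save(dnn_cell *cells, int count, const char *path) {
    FILE *f = fopen(path, "w");
    if (!f) return;
    for (int i = 0; i < count; i++) {
        for (int j = 0; j < DNN_INUM; j++) fprintf(f, "%.17g ", cells[i].w[j]);
        fprintf(f, "%.17g\n", cells[i].wb);
    }
    fclose(f);
}

/* Restore the weights through the header's own SetCell_Precise. */
void dnn_load(dnn_cell *cells, int count, const char *path, double rate) {
    FILE *f = fopen(path, "r");
    if (!f) return;
    for (int i = 0; i < count; i++) {
        double w[DNN_INUM], wb = 0;
        int ok = 1;
        for (int j = 0; j < DNN_INUM; j++)
            if (fscanf(f, "%lf", &w[j]) != 1) ok = 0;
        if (fscanf(f, "%lf", &wb) != 1) ok = 0;
        if (ok) cells[i].SetCell_Precise(w, wb, rate);
    }
    fclose(f);
}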

Note that the expected values must lie between 0 and 1, since the sigmoid confines the network's output to that range.
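If your raw targets fall outside that range, a simple affine rescaling into a comfortable subinterval such as [0.1, 0.9] works; the helpers below are a sketch under that convention, not part of the original article.

/* Map a raw target in [lo, hi] into [0.1, 0.9] for training, and map a
   network output back to the raw scale afterwards. */
double target_to_unit(double t, double lo, double hi) {
    return 0.1 + 0.8 * (t - lo) / (hi - lo);
}
double unit_to_target(double y, double lo, double hi) {
    return lo + (y - 0.1) * (hi - lo) / 0.8;
}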

Copyright Disclaimer: This article is an original article by the blogger and cannot be reproduced without the permission of the blogger.
