Source code and running result
CUDA version: https://github.com/zhxfl/cuCNN-I
The C-language reference version is from: http://eric-yuan.me/
On the famous MNIST handwritten-digit recognition dataset, this code reaches 99.7% accuracy; a few minutes of CNN training already reaches 99.60%.
Parameter configuration
The network is configured through config.txt. Text enclosed between # characters is treated as a comment and is filtered out automatically. The format is as follows:
#Comment#
#NON_LINEARITY CAN = NL_SIGMOID , NL_TANH , NL_RELU#
IS_GRADIENT_CHECKING = false;
BATCH_SIZE = 200;
NON_LINEARITY = NL_RELU;

[LAYER = CONV;
KERNEL_SIZE = 5;
KERNEL_AMOUNT = 10;
WEIGHT_DECAY = 1e-6;
POOLING_DIM = 2;]

[LAYER = CONV;
KERNEL_SIZE = 5;
KERNEL_AMOUNT = 20;
WEIGHT_DECAY = 1e-6;
POOLING_DIM = 2;]

[LAYER = FC;
NUM_HIDDEN_NEURONS = 256;
WEIGHT_DECAY = 1e-6;
DROPOUT_RATE = 0.5;]

[LAYER = FC;
NUM_HIDDEN_NEURONS = 256;
WEIGHT_DECAY = 1e-6;
DROPOUT_RATE = 0.5;]

[LAYER = SOFTMAX;
NUM_CLASSES = 10;
WEIGHT_DECAY = 1e-6;]
1) Currently, the code supports multiple convolution layers and multiple fully connected layers.
2) Each convolution layer is followed by a pooling layer by default; currently, only max pooling is supported.
3) The kernel size of a convolution layer must be an odd number.
4) The fully connected layers support DropConnect. (The configuration key is still written as DROPOUT_RATE; this will be corrected later.)
5) If you are not familiar with the WEIGHT_DECAY parameter, you can ignore it for now and just keep the value shown above.
Compiling the code
1) The code currently depends on CUDA 6.0 and OpenCV. If you do not want to install OpenCV, you can remove all OpenCV-related code from util.cu and util.h; OpenCV is used only there, and only because I needed to display images for debugging during development.
2) The code can be imported directly into Nsight and then compiled and run. You can also compile and run it in VS2010.
Code features
1) We augment the data: before each round of training, the images are randomly rotated, scaled, distorted, and cropped. In practice this is very effective and raises the final accuracy.
2) The whole code is accelerated with CUDA. We use the cublas.lib and curand.lib libraries, one for matrix computation and the other for random-number generation. All required GPU memory is allocated once up front, so after the program starts running there is no data exchange between the CPU and GPU; this proved very effective. The program is roughly dozens of times faster than the original C language version (for relatively large networks the speed-up can reach about one hundred times). Each epoch takes about 1600 ms and processes 60000 images, i.e. about 0.0266 ms per image.
3) In fact, if you train multiple networks and vote on their outputs, the accuracy can reach 99.82%, which would beat the best publicly published result so far (99.79%).