tiny_cnn reading (1)
Starting today, I will post my notes on reading the tiny_cnn source code to this blog. I hope we can exchange ideas here.
How should the code be read? Focus on the overall flow and ignore the details for now.
First, open the project downloaded from GitHub.
By searching CSDN and the web, you can find out what each directory in the project contains.
I use ${root} to represent the path to tiny-cnn-master; this variable will appear in the annotations below.
Start Visual Studio, open this project, and find main.cpp.
The sample1_convnet() function is called, as shown in Figure 1.
Figure 1: the main() function
Since we did not pass any command-line arguments when running, line 45 is the one executed. So which function is called when there are no arguments?
Line 3 shows that the argument passed to sample1_convnet() is data_dir_path = "../data".
This variable is not used right away, so we need not dig deeper for now; it is enough to know that data_dir_path is the path to the data.
Now step into the void sample1_convnet(const string& data_dir_path) function.
The first statement declares the network object, so we need to understand its two template parameters:
mse:
@see: ${root}/tiny_cnn/optimizers/optimizer.h
gradient_descent_levenberg_marquardt:
@see: ${root}/tiny_cnn/optimizers/optimizer.h
Strictly speaking, we could skip these two classes for now, but the code is concise, so it is worth a quick look.
mse (see Figure 2-1):
tiny_cnn supports two kinds of loss functions:
(1) mean squared error (mse)
(2) cross entropy
Reading the code:
a) The implementation is a direct translation of two mathematical formulas:
f(y, t) = (y - t)^2 / 2
df(y, t) = y - t
In theory, y and t can be any real number.
b) The program uses its own hand-written mse loss function, so if we want to modify or improve things, this is the place to start; for example, swapping in a different loss function or refining this one.
Figure 2-1: mse function implementation
gradient_descent_levenberg_marquardt:
network:
@see ${root}/tiny_cnn/network.h