Reprint from https://www.cnblogs.com/denny402/p/5076285.html
Caffe provides three kinds of interfaces: a C++ interface (the command line), a Python interface, and a MATLAB interface.
This article covers the command line first; the other two interfaces will be described later. Caffe's C++ main program (caffe.cpp) is placed in the tools folder under the root directory, together with other functional files such as convert_imageset.cpp, train_net.cpp, test_net.cpp, and so on. After compilation, these files become executables placed in the ./build/tools/ folder.
Therefore, when executing the Caffe program we need to add the ./build/tools/ prefix, for example:
# sudo ./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt

The command-line format of the Caffe program is as follows:
caffe <command> <args>

There are four kinds of <command>, with the following functions:
train ---- train or finetune a model
test ----- test a model
device_query ---- display GPU information
time ----- display program execution time

The <args> parameters include:
-solver -gpu -snapshot -weights -iterations -model -sighup_effect -sigint_effect

Note the leading - sign. Their functions are as follows.

-solver: required parameter. A protocol buffer type file, that is, the model's solver configuration file. For example:
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt

-gpu: optional parameter. This parameter specifies which GPU to run on, selected by GPU ID; if set to '-gpu all', all GPUs are used. For example, to run on the GPU with ID 2:
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 2

-snapshot: optional parameter. This parameter resumes training from a snapshot. Snapshotting can be set up in the solver configuration file so that the solverstate is saved (see the solver sketch after these parameter descriptions). For example:
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt -snapshot examples/mnist/lenet_iter_5000.solverstate

-weights: optional parameter. Used to fine-tune a model from previously trained weights; it requires a caffemodel and cannot be used together with -snapshot. For example:
# ./build/tools/caffe train -solver examples/finetuning_on_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel

-iterations: optional parameter, the number of iterations, which defaults to 50.
If the number of iterations is not set in the configuration file, it defaults to 50 iterations.

-model: optional parameter. The model defined in a protocol buffer file; it can also be specified in the solver configuration file.

-sighup_effect: optional parameter. Sets the action to perform when the program receives a hang-up event; it can be set to snapshot, stop or none, and defaults to snapshot.

-sigint_effect: optional parameter. Sets the action to perform when the program receives a keyboard interrupt (Ctrl+C); it can be set to snapshot, stop or none, and defaults to stop.
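As a reference for the -snapshot parameter above, here is a minimal sketch of the snapshot-related settings in a solver file such as examples/mnist/lenet_solver.prototxt (other solver fields are omitted and the values are only illustrative):

net: "examples/mnist/lenet_train_test.prototxt"   # train/test net definition
max_iter: 10000
snapshot: 5000                                    # write a snapshot every 5000 iterations
snapshot_prefix: "examples/mnist/lenet"           # files are named <prefix>_iter_<N>.*
solver_mode: GPU

With settings like these, training periodically writes a pair of files such as lenet_iter_5000.caffemodel and lenet_iter_5000.solverstate; the .solverstate file is what -snapshot points to when resuming, while the .caffemodel file is what -weights takes for fine-tuning. Likewise, using the signal flags just described, a command such as
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt -sigint_effect snapshot
would save a snapshot instead of simply stopping when Ctrl+C is pressed (this particular combination is illustrative, not from the original post).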
The above are examples of some of the train parameters; now let's look at the other three <command>s.

The test command is used in the testing phase to output the final results. In the model configuration file we can set the output to be accuracy or loss (a sketch of such output layers follows the example below). Suppose we want to evaluate a trained model on the validation set; we can write:
# ./build/tools/caffe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 100

This is a fairly long example that uses not only the test command but also the -model, -weights, -gpu and -iterations parameters. It means: take the trained weights (-weights), load them into the test model (-model), and run the test 100 times (-iterations) on the GPU numbered 0 (-gpu).
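For reference, the accuracy and loss outputs mentioned above are just ordinary layers in the model prototxt. A minimal sketch of what they typically look like in the standard MNIST example (layer and blob names follow lenet_train_test.prototxt and are only illustrative here):

layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }   # only computed in the test phase
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}

When caffe test runs, it reports these two outputs for each batch and then their averages over all the -iterations batches.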
The time command is used to display the program's execution time on the screen.
For example:
# ./build/tools/caffe time -model examples/mnist/lenet_train_test.prototxt -iterations 10

This example displays on the screen the time the LeNet model takes for 10 iterations, including the forward and backward time of each iteration as well as the average forward and backward time spent in each layer.
# ./build/tools/caffe time -model examples/mnist/lenet_train_test.prototxt -gpu 0

This example displays on the screen the time the LeNet model takes for 50 iterations (the default) on the GPU.

# ./build/tools/caffe time -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 10

This uses the given weights to time 10 iterations of the LeNet model on the first GPU.
The device_query command is used to diagnose GPU information:
# ./build/tools/caffe device_query -gpu 0

Finally, let's look at two examples of running on multiple GPUs:
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 0,1
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt -gpu all

These two examples use two or more GPUs to run in parallel, so training is much faster.
But if you have only one GPU, or no GPU at all, don't add the -gpu parameter; adding it will actually be slower. Finally, Linux itself has a time command, so the two can be used together. The final command we run for the MNIST example (with a single GPU) is:
$ sudo time ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt