Caffe provides three interfaces: a C++ interface (the command line), a Python interface, and a MATLAB interface. This article covers the command line first; the other two interfaces follow in later articles.
Caffe's C++ main program (caffe.cpp) is placed in the tools folder under the root directory, along with some other utility files such as convert_imageset.cpp, train_net.cpp, and test_net.cpp. After compilation, these files become executables placed in the ./build/tools/ folder. Therefore, we need to add the ./build/tools/ prefix to run the caffe program.
For example:
# sudo ./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt
The command-line execution format for the caffe program is as follows:
caffe <command> <args>
There are four types of <command>:
- train
- test
- device_query
- time
The corresponding functions are:
- train: train or finetune a model
- test: test (score) a model
- device_query: display GPU information
- time: benchmark and display program execution times
Among the <args> parameters are:
- -solver
- -gpu
- -snapshot
- -weights
- -iterations
- -model
- -sighup_effect
- -sigint_effect
Note the leading - symbol. The corresponding functions are:
-solver: required parameter. A protocol buffer (.prototxt) file, which is the solver configuration for the model. For example:
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt
-gpu: optional parameter. Specifies which GPU to run on, selected by GPU ID; if set to '-gpu all', all GPUs are used. For example, to run on GPU 2:
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 2
-snapshot: optional parameter. Resumes training from a snapshot. You can set snapshot options in the solver configuration file so that the solver state (.solverstate) is saved periodically. For example:
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt -snapshot examples/mnist/lenet_iter_5000.solverstate
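For reference, the snapshot settings in the bundled lenet_solver.prototxt look like this; the prefix determines where the .solverstate and .caffemodel files are written:

```protobuf
# Save a snapshot every 5000 iterations.
snapshot: 5000
# Files are written as examples/mnist/lenet_iter_NNNN.solverstate / .caffemodel
snapshot_prefix: "examples/mnist/lenet"
```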
-weights: optional parameter. Fine-tunes a model from pre-trained weights; it requires a .caffemodel file and cannot be used together with -snapshot. For example:
# ./build/tools/caffe train -solver examples/finetuning_on_flickr_style/solver.prototxt -weights models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel
-iterations: optional parameter. The number of iterations to run; if not specified, it defaults to 50.
-model: optional parameter. The model definition, a protocol buffer (.prototxt) file. It can also be specified in the solver configuration file.
-sighup_effect: optional parameter. The action to take when a hang-up signal (SIGHUP) is received; can be set to snapshot, stop, or none, defaulting to snapshot.
-sigint_effect: optional parameter. The action to take on a keyboard interrupt (Ctrl+C, SIGINT); can be set to snapshot, stop, or none, defaulting to stop.
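To see what sending such a signal looks like in practice, here is a minimal sketch. It uses a background sleep process as a stand-in for a running caffe train (so it works on a machine without Caffe); the commented-out pgrep line shows how you would target a real training run:

```shell
#!/bin/sh
# Stand-in for a long-running training process; for a real run you would
# target the caffe process instead, e.g.:
#   kill -HUP "$(pgrep -f 'caffe train')"
sleep 30 &
pid=$!

# With the default -sighup_effect (snapshot), caffe writes a snapshot and
# keeps training on SIGHUP; with the default -sigint_effect (stop),
# Ctrl+C snapshots and then exits.
kill -HUP "$pid"

# The bare sleep process has no SIGHUP handler, so it is terminated;
# the shell reports its wait status as 128 + signal number.
wait "$pid" 2>/dev/null
status=$?
echo "stand-in terminated with wait status $status"
```

Caffe itself installs handlers for these signals, so unlike the stand-in process it keeps running (or stops cleanly) according to the -sighup_effect and -sigint_effect settings.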
Those were examples of the train command's parameters; now let's take a look at the other three <command>s:
The test command is used in the testing phase to output final results; to get accuracy or loss we add the corresponding outputs to the model configuration file. Say we want to evaluate a trained model on the validation set; we can write:
# ./build/tools/caffe test -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 100
This example is longer: besides the test command, it uses the four parameters -model, -weights, -gpu, and -iterations. It means: load the trained weights (-weights) into the test model (-model) and run 100 test iterations (-iterations) on GPU 0 (-gpu).
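The accuracy output mentioned above comes from a layer in the model definition file; in the bundled lenet_train_test.prototxt it is an Accuracy layer included only in the TEST phase:

```protobuf
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"      # predictions from the last fully connected layer
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
```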
The time command benchmarks the program and prints the timings on screen. For example:
# ./build/tools/caffe time -model examples/mnist/lenet_train_test.prototxt -iterations 10
This example displays the time the LeNet model takes for 10 iterations, including the forward and backward time of each iteration, as well as the average forward and backward time spent in each layer.
# ./build/tools/caffe time -model examples/mnist/lenet_train_test.prototxt -gpu 0
This example displays the time the LeNet model takes for the default 50 iterations on the GPU.
# ./build/tools/caffe time -model examples/mnist/lenet_train_test.prototxt -weights examples/mnist/lenet_iter_10000.caffemodel -gpu 0 -iterations 10
This displays the time taken for 10 iterations of the LeNet model with the given weights on the first GPU (GPU 0).
The device_query command diagnoses and displays GPU information:
# ./build/tools/caffe device_query -gpu 0
Finally, let's look at two multi-GPU examples:
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt -gpu 0,1
# ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt -gpu all
These two examples show that running in parallel on two or more GPUs can be much faster. However, if you have only one GPU or none, do not add the -gpu parameter; adding it will only slow things down.
Finally, Linux has its own time command, so the two can be used in combination. The final command we run for the mnist example (with one GPU) is:
$ sudo time ./build/tools/caffe train -solver examples/mnist/lenet_solver.prototxt
Caffe (10) command-line parsing