Caffe: continuing training from existing models

Source: Internet
Author: User

Original URL:

http://blog.csdn.net/u014114990/article/details/47781233

First, resuming training from a snapshot

Caffe supports continuing training from an existing trained model. Here is the example it ships with:

caffe-master0818\examples\imagenet\resume_training.sh:

    #!/usr/bin/env sh

    ./build/tools/caffe train \
        --solver=models/bvlc_reference_caffenet/solver.prototxt \
        --snapshot=models/bvlc_reference_caffenet/caffenet_train_10000.solverstate.h5
Second, Caffe also supports training with the learning rate lowered in multiple stages.

For example, caffe-master0818\examples\cifar10\train_full.sh:

    #!/usr/bin/env sh

    TOOLS=./build/tools

    $TOOLS/caffe train \
        --solver=examples/cifar10/cifar10_full_solver.prototxt

    # reduce learning rate by factor of 10; the lowered rate is set in the lr1 solver file
    $TOOLS/caffe train \
        --solver=examples/cifar10/cifar10_full_solver_lr1.prototxt \
        --snapshot=examples/cifar10/cifar10_full_iter_60000.solverstate.h5

    # reduce learning rate by factor of 10 again; the rate is set in the lr2 solver file
    $TOOLS/caffe train \
        --solver=examples/cifar10/cifar10_full_solver_lr2.prototxt \
        --snapshot=examples/cifar10/cifar10_full_iter_65000.solverstate.h5
The quick model is trained the same way (train_quick.sh):

    #!/usr/bin/env sh

    TOOLS=./build/tools

    $TOOLS/caffe train \
        --solver=examples/cifar10/cifar10_quick_solver.prototxt

    # reduce learning rate by factor of 10 after 8 epochs
    $TOOLS/caffe train \
        --solver=examples/cifar10/cifar10_quick_solver_lr1.prototxt \
        --snapshot=examples/cifar10/cifar10_quick_iter_4000.solverstate.h5

This way you don't have to stop training and lower the learning rate by hand each time.

For large models it is important to lower the learning rate several times. Experiments show that when the loss stops decreasing at the current learning rate, lowering the rate again can reduce the loss function further.


If the learning rate is too large, training may jump past the minimum; if it is too small, training may fail to jump out of a local optimum. So the initial learning rate should be relatively large, to avoid getting stuck in a local optimum early on.
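The overshoot/slow-convergence trade-off can be illustrated with a minimal sketch (not from the original post): plain gradient descent on the quadratic f(x) = x², where a step size above 1.0 makes each update overshoot and diverge, while a moderate step size converges.

```python
def gradient_descent(lr, x0=5.0, steps=50):
    """Run `steps` iterations of gradient descent on f(x) = x**2."""
    x = x0
    for _ in range(steps):
        x -= lr * 2 * x  # gradient of x**2 is 2x
    return x

# A moderate rate converges toward the minimum at x = 0;
# a rate above 1.0 overshoots on every step, so |x| blows up.
print(gradient_descent(0.1))
print(gradient_descent(1.1))
```

Note that a simple convex function has no local optima; this sketch only shows the step-size trade-off, not the local-optimum escape the text mentions.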


----------------------------------------------------------------------------

Third, configuring the learning rate schedule in the solver file

Of course, the dropping learning rate can also be specified directly in the solver configuration file.

See the official Caffe example: http://caffe.berkeleyvision.org/tutorial/solver.html

To use a learning rate policy like this, you can put the following lines somewhere in your solver prototxt file:

base_lr: 0.01     # begin training at a learning rate of 0.01 = 1e-2

lr_policy: "step" # learning rate policy: drop the learning rate in "steps"
                  # by a factor of gamma every stepsize iterations

gamma: 0.1        # drop the learning rate by a factor of 10
                  # (i.e., multiply it by a factor of gamma = 0.1)

stepsize: 100000  # drop the learning rate every 100K iterations

max_iter: 350000  # train for 350K iterations total

momentum: 0.9

Under the above settings, we'll always use momentum μ = 0.9. We'll begin training at a base_lr of α = 0.01 = 10⁻² for the first 100,000 iterations, then multiply the learning rate by gamma (γ) and train at α′ = αγ = (0.01)(0.1) = 0.001 = 10⁻³ for iterations 100K–200K, then at α″ = 10⁻⁴ for iterations 200K–300K, and finally train until iteration 350K (since max_iter: 350000) at α‴ = 10⁻⁵.
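The schedule above boils down to one formula: under the "step" policy the current rate is base_lr * gamma^floor(iter / stepsize). A small Python sketch of that arithmetic (not part of any Caffe API, just the formula applied to the solver values above):

```python
def step_lr(base_lr, gamma, stepsize, iteration):
    """Caffe 'step' policy: lr = base_lr * gamma ** floor(iteration / stepsize)."""
    return base_lr * gamma ** (iteration // stepsize)

# With base_lr=0.01, gamma=0.1, stepsize=100000 as in the solver file above:
# iterations 0..99999       -> 0.01
# iterations 100000..199999 -> 0.001
# iterations 200000..299999 -> 0.0001
# iterations 300000..349999 -> 0.00001
```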
