Darknet Network Configuration Parameters


[net]
batch=64: the parameters are updated once per batch of samples.
subdivisions=8: if memory is not large enough, the batch is split into subdivisions sub-batches; the size of each sub-batch is batch/subdivisions. In the Darknet code, batch/subdivisions is what is actually named batch.
height=416: height of the input image.
width=416: width of the input image.
channels=3: number of channels of the input image.
momentum=0.9: momentum.
decay=0.0005: weight decay, a regularization term to prevent overfitting.
angle=0: generate more training samples by rotating by this angle.
saturation=1.5: generate more training samples by adjusting saturation.
exposure=1.5: generate more training samples by adjusting exposure.
hue=.1: generate more training samples by adjusting hue.
learning_rate=0.0001: initial learning rate.
max_batches=45000: stop training after max_batches batches.
policy=steps: learning-rate adjustment policy; the options are constant, step, exp, poly, steps, sig, random.
steps=100,25000,35000: adjust the learning rate at these batch numbers.
scales=10,.1,.1: factors by which the learning rate changes at each of the steps above, applied cumulatively.

[convolutional]
batch_normalize=1: whether to apply batch normalization.
filters=32: number of output feature maps.
size=3: size of the convolution kernel.
stride=1: stride.
pad=1: if pad is 0, the padding is set by the padding parameter; if pad is 1, the padding is size/2.
activation=leaky: activation function; the options are logistic, loggy, relu, elu, relie, plse, hardtan, lhtan, linear, ramp, leaky, tanh, stair.

[maxpool]
size=2: size of the pooling window.
stride=2: pooling stride.

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

...
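As a quick illustration of how batch/subdivisions and the steps policy interact, here is a small Python sketch (the function names are mine, not Darknet's):

```python
def sub_batch_size(batch, subdivisions):
    # Darknet loads batch/subdivisions images per forward pass,
    # accumulating gradients until the full batch has been processed.
    return batch // subdivisions

def steps_policy_lr(batch_num, base_lr, steps, scales):
    # The steps policy multiplies the learning rate by each scale
    # whose step has already been reached (cumulative multiplication).
    lr = base_lr
    for step, scale in zip(steps, scales):
        if batch_num >= step:
            lr *= scale
    return lr

print(sub_batch_size(64, 8))  # 8 images per sub-batch
# With learning_rate=0.0001, steps=100,25000,35000 and scales=10,.1,.1:
print(steps_policy_lr(50,    0.0001, [100, 25000, 35000], [10, .1, .1]))  # 0.0001 (warm-up)
print(steps_policy_lr(200,   0.0001, [100, 25000, 35000], [10, .1, .1]))  # ~0.001
print(steps_policy_lr(30000, 0.0001, [100, 25000, 35000], [10, .1, .1]))  # ~0.0001
```

The low initial rate combined with scales=10 at step 100 acts as a short warm-up before the main training rate.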
#######

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[route]
The route layer brings finer-grained features in from earlier in the network.
layers=-9

[reorg]
The reorg layer makes these features match the feature-map size at the later layer. The end feature map is 13x13, while the feature map from earlier is 26x26x512. The reorg layer maps the 26x26x512 feature map onto a 13x13x2048 feature map so it can be concatenated with the feature maps at 13x13 resolution.
stride=2

[route]
layers=-1,-3

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=125: the number of filters of the last convolution layer before the region layer is specific; the formula is filters=num*(classes+5). The 5 stands for the 5 values predicted per box: tx, ty, tw, th, to from the paper.
activation=linear

[region]
anchors=1.08,1.19, 3.42,4.41, 6.63,11.38, 9.42,5.11, 16.62,10.52: anchor (prior) boxes; they can be chosen by hand or learned by k-means from the training samples.
bias_match=1
classes=20: the number of object classes the network needs to recognize.
coords=4: 4 coordinates per box (tx, ty, tw, th).
num=5: each grid cell predicts this many boxes, consistent with the number of anchors.
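The filters=num*(classes+5) formula and the reorg reshape can both be checked with a short sketch (the helper names are mine):

```python
def region_filters(num, classes):
    # Each of the `num` anchors predicts 4 coordinates (tx, ty, tw, th),
    # 1 objectness score (to), and `classes` class scores.
    return num * (classes + 5)

def reorg_shape(h, w, c, stride):
    # The reorg layer trades spatial resolution for depth:
    # an HxWxC map becomes (H/stride) x (W/stride) x (C*stride*stride).
    return (h // stride, w // stride, c * stride * stride)

print(region_filters(5, 20))        # 125, matching filters=125 for VOC's 20 classes
print(reorg_shape(26, 26, 512, 2))  # (13, 13, 2048), ready to concatenate at 13x13
```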
When you want to use more anchors, increase num; if, after increasing num, the obj values printed during training drop to nearly 0, try increasing object_scale.
softmax=1: use softmax as the activation over the class probabilities.
jitter=.2: suppress overfitting by adding random jitter (crops and shifts) to the training images.
rescore=1: tentatively understood as a switch; when non-zero, l.delta (the difference between the predicted and true values) is adjusted by re-scoring.
object_scale=5: the weight of the bbox confidence loss in the total loss when an object is present in the grid cell.
noobject_scale=1: the weight of the bbox confidence loss in the total loss when no object is present in the grid cell.
class_scale=1: the weight of the classification loss in the total loss.
coord_scale=1: the weight of the bbox coordinate loss in the total loss.
absolute=1
thresh=.6
random=0: when random is 1, multi-scale training is enabled, randomly using different image sizes during training.
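How the *_scale parameters weight the components of the total loss can be sketched roughly as follows (a toy illustration of the weighting only, not Darknet's actual loss code):

```python
def total_loss(coord_loss, obj_conf_loss, noobj_conf_loss, class_loss,
               coord_scale=1, object_scale=5, noobject_scale=1, class_scale=1):
    # Each per-component error sum is multiplied by its *_scale
    # before being added into the total.
    return (coord_scale * coord_loss
            + object_scale * obj_conf_loss
            + noobject_scale * noobj_conf_loss
            + class_scale * class_loss)

# With the defaults above, a confidence error in a cell that does contain
# an object counts five times as much as each of the other components.
print(total_loss(1.0, 1.0, 1.0, 1.0))  # 8.0
```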

