YOLOv2 Tutorial: Training on Your Own Data


1. Many files in YOLOv2 differ from v1, and the code has gone through many revisions. In the existing online v2 tutorials, src/yolo_kernels.cu no longer exists, so ignore the steps that modify it.

2. Tutorial: http://blog.csdn.net/hysteric314/article/details/54097845 (remember to run make after making the changes)

3. A tutorial on changing the test threshold and visualizing the intermediate training parameters (the visualization code cannot be used directly, because the format of the intermediate parameters has changed):

http://blog.csdn.net/hrsstudy/article/details/65644517?utm_source=itdadao&utm_medium=referral

4. To make YOLOv2 train on your own dataset the way it trains on the VOC dataset, three aspects of the code need to be modified (I do image text detection on the SVT dataset, with a single class, "text"):
First: modify the number of categories. The code defaults to the 20 VOC categories; I change it to 1 class.
Second: prepare the TXT files. The VOC training set comes with several TXT files that record file names or path addresses; if you use your own data, you may need to generate these files yourself.
Third: modify the path information in the code, changing the VOC training set paths to the paths of your own training data.

5. Changing 20 categories to 1 category

1) cfg/voc.data file:

Change classes to 1.

Change names to data/svt.names.

svt.names lives in the data folder of the darknet directory; create a file named svt.names and add the contents (the name and path can of course be whatever you like). The file has one row per class, and each row is the name of a category. In my case the file contains only one line: "text". This file is used when testing your trained model: when the system draws a bounding box on the image, the label written next to the bounding box (the name of the object in that box) comes from this file.
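For reference, a minimal cfg/voc.data for this one-class setup might look like the following sketch (the train, valid and backup paths are placeholders; point them at your own files as described in section 7):

classes = 1
train   = /your/path/train.txt
valid   = /your/path/test.txt
names   = data/svt.names
backup  = backup/

And data/svt.names then contains the single line:

text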

2) cfg/yolo-voc.cfg file:

Change classes in the [region] layer to 1.

In the first [convolutional] layer above the [region] layer, change the filters value to (classes + coords + 1) * num; in my case (1 + 4 + 1) * 5 = 30, so I set filters to 30.
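After the change, the relevant part of cfg/yolo-voc.cfg looks roughly like this (only the lines affected by the change are shown; leave the other options in these layers untouched):

[convolutional]
size=1
stride=1
pad=1
filters=30
activation=linear

[region]
classes=1
coords=4
num=5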

3) src/yolo.c file:

Around line 14, change the declaration to char *voc_names[] = {"text"}; the original lists the names of the 20 VOC categories, and I changed it to the single class name.

Around line 328, change the last parameter of the draw_detections call from 20 to 1. This function draws the boxes detected by the system onto the image and writes the annotated image back into its first parameter im for saving and display.

Around line 361, in the call to the demo function, change the third-from-last parameter from 20 to 1.

4) scripts/voc_label.py file:

I do not use the author's script here, because back in v1 I could not get that file to run, so I use another one instead:

# -*- coding: utf-8 -*-
import os

l = ["text"]

for word in l:
    # Render each class name as a PNG image with ImageMagick's convert tool.
    # The pointsize value here (64) is an assumption; adjust it as needed.
    os.system('convert -fill black -background white -bordercolor white -border 4 '
              '-font /usr/share/fonts/truetype/arphic/ukai.ttc -pointsize 64 '
              'label:"%s" "%s.png"' % (word, word))

6. Data format conversion (preparing the TXT files)

Four kinds of TXT files in total need to be prepared: annotation files that record the box coordinates and list files that record the image paths, for both the training set and the test set.

1) One annotation file per image, but the storage path is different from v1.

v1: the label file names need to correspond to the image (.png) file names, and the folder holding the images corresponds to the annotation folder name. As for where to put the annotations, the path is by default derived from the path in train.txt by changing JPEGImages to labels, leaving the rest of the path unchanged.

v2: the label file name is the same as the image file name, with the test set and training set kept in two separate folders (I do not know whether this also applies with many classes). The list files in 2) give the full path of each image, and each image's annotation .txt file sits in the same place as the image.

Since in the training set an image file and its corresponding label file share the same name apart from the extension, src/yolo.c contains the following statements:

find_replace(path, "Dout", "labels", labelpath);
find_replace(labelpath, "JPEGImages", "labels", labelpath);
find_replace(labelpath, ".jpg", ".txt", labelpath);
find_replace(labelpath, ".JPEG", ".txt", labelpath);

These calls find the image extension .jpg in the path and automatically replace it with .txt, so you only need to copy the .txt label files into the same directory as the images; the system then reads the corresponding label file from the replaced path.
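Each label .txt file contains one line per object in the form "class x_center y_center width height", with all coordinates normalized by the image width and height. As a minimal sketch (this is not the original scripts/voc_label.py; the function name and file names here are placeholders), a box given as pixel corners can be written out like this:

# Minimal sketch: write one darknet-format label line for a box given in pixels.
# Format per line: <class> <x_center> <y_center> <width> <height>, normalized to 0..1.
def write_label(txt_path, class_id, xmin, ymin, xmax, ymax, img_w, img_h):
    x = (xmin + xmax) / 2.0 / img_w   # normalized box center x
    y = (ymin + ymax) / 2.0 / img_h   # normalized box center y
    w = (xmax - xmin) / float(img_w)  # normalized box width
    h = (ymax - ymin) / float(img_h)  # normalized box height
    with open(txt_path, "a") as f:
        f.write("%d %.6f %.6f %.6f %.6f\n" % (class_id, x, y, w, h))

# Example: a single "text" box (class 0) in a 640x480 image.
write_label("img_0001.txt", 0, 100, 150, 300, 250, 640, 480)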

2) One list file containing the paths of all training images, and another list file for the test set; a sketch for generating them follows. The next section describes how to modify the path information in the code to point to these two files.
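A minimal sketch for generating the two list files, assuming the training and test images sit in two separate folders (the folder and file names below are placeholders):

import glob
import os

def write_list(image_dir, list_path):
    # Write the full path of every .jpg image in image_dir, one path per line.
    paths = sorted(glob.glob(os.path.join(os.path.abspath(image_dir), "*.jpg")))
    with open(list_path, "w") as f:
        f.write("\n".join(paths) + "\n")

write_list("svt/train_images", "svt/train.txt")
write_list("svt/test_images", "svt/test.txt")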

7. Modify the path

1) cfg/voc.data file:

train = /home/pjreddie/data/voc/train.txt    // change to the path of the training list

valid = /home/pjreddie/data/voc/2007_test.txt    // change to the path of the test list; the test command cannot return evaluation metrics, so evaluation uses valid

backup = backup/    // this is the path YOLO uses for backups: during training YOLO continually backs up the intermediate weights files here. The darknet directory comes with a backup folder and this path points to it; I suggest changing it to your own path.

2) src/yolo.c file:

In the train_yolo function:

char *train_images = "/data/voc/train.txt";    // change to the path of the training list

char *backup_directory = "/home/pjreddie/backup/";    // change to the backup path above

In the validate_yolo function:

char *base = "results/comp4_det_test_";    // you can change this to your own path; comp4_det_test_[class name].txt stores the results of the valid command

list *plist = get_paths("/home/pjreddie/data/voc/2007_test.txt");    // change to the path of the test list

In the validate_yolo_recall function:

char *base = "results/comp4_det_test_";    // you can change this to your own path

list *plist = get_paths("data/voc.2007.test");    // change to the path of the test list

3) src/detector.c file:

char *train_images = option_find_str(options, "train", "data/train.list");    // change to the path of the training list

char *backup_directory = option_find_str(options, "backup", "/backup/");    // change to your own backup path

In the validate_detector_flip function:

char *valid_images = option_find_str(options, "valid", "data/train.list");    // change to the test list path; this function corresponds to the valid2 test command

In the validate_detector function:

char *valid_images = option_find_str(options, "valid", "data/train.list");    // change to the test list path; this function corresponds to the valid test command

In the validate_detector_recall function:

list *plist = get_paths("data/voc.2007.test");    // change to the test list path

8. Adjusting the evaluation metrics

src/detector.c file:

1) Modify the evaluation thresholds

The model can be evaluated with the valid or recall command.

To change the valid2 threshold (default .005), in the validate_detector_flip function:

float thresh = .005;    // change to the desired threshold, e.g. .1

To change the valid threshold (default .005), in the validate_detector function:

float thresh = .005;    // change to the desired threshold, e.g. .1

To change the recall threshold (default .001), in the validate_detector_recall function:

float thresh = .001;    // change to the desired threshold, e.g. .25

2) Add a precision metric to the output and label each metric with its name

In the validate_detector_recall function, replace the original output line:

fprintf(stderr, "%5d %5d %5d\tRPs/Img: %.2f\tIOU: %.2f%%\tRecall:%.2f%%\n", i, correct, total, (float)proposals/(i+1), avg_iou*100/total, 100.*correct/total);

with:

fprintf(stderr, "ID:%5d correct:%5d total:%5d\tRPs/Img: %.2f\tIOU: %.2f%%\tRecall:%.2f%%\t", i, correct, total, (float)proposals/(i+1), avg_iou*100/total, 100.*correct/total);

fprintf(stderr, "proposals:%5d\tprecision:%.2f%%\n", proposals, 100.*correct/(float)proposals);

These evaluation metrics are cumulative over all images processed so far, not values for a single image.
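For example, if after processing 100 images correct = 80, total = 100 and proposals = 120, the printed values would be Recall = 80/100 = 80% and precision = 80/120 ≈ 66.7%.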

9. After making all the modifications above, you can start training. Most importantly, do not forget to run make first:

make clean

make -j16

Otherwise, don't blame me if the boxes come out wrong, haha. Of course, forgetting to run make has no effect on weights that have already been trained.

Training command:

./darknet detector train cfg/voc.data cfg/yolo-voc.cfg darknet19_448.conv.23 | tee ./svt_train_log.txt

I usually append | tee [path + filename] to the command (add -a to append to an existing file), or use redirection such as ls -l > [path + filename] (>> [path + filename] to append), to export the intermediate output. On one hand this backs it up for easy review; on the other hand you can draw charts from the intermediate parameters for analysis. The 2nd tutorial linked above shares code for visualizing these intermediate parameters, but it cannot be used directly because the intermediate parameter format has changed.
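As a minimal sketch of such a chart (this is not the tutorial's visualization code): assuming the log contains darknet's per-batch summary lines of the form "N: loss, avg_loss avg, lr rate, ... seconds, ... images", the average loss can be extracted and plotted like this:

import re
import matplotlib.pyplot as plt

iters, avg_losses = [], []
with open("svt_train_log.txt") as f:
    for line in f:
        # Assumed summary-line format: "123: 1.234567, 1.456789 avg, 0.000100 rate, ..."
        m = re.match(r"\s*(\d+):\s*([\d.]+),\s*([\d.]+) avg", line)
        if m:
            iters.append(int(m.group(1)))
            avg_losses.append(float(m.group(3)))

plt.plot(iters, avg_losses)
plt.xlabel("iteration")
plt.ylabel("average loss")
plt.savefig("svt_train_loss.png")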

10. Testing and returning evaluation metrics

1) ./darknet detector test cfg/voc.data cfg/yolo-voc.cfg ./svt/backup/yolo-voc_final.weights

/* does not return evaluation metrics; you enter an image path and it only shows the image with the detected category and confidence */

2) ./darknet detector valid cfg/voc.data cfg/yolo-voc.cfg backup/yolo-voc_final.weights

/* saves the test results in ./results/comp4_det_test_[class name].txt; the terminal only prints the time taken */

3) ./darknet detector recall cfg/voc.data cfg/yolo-voc.cfg backup/yolo-voc_final.weights

/* the output columns are, in order: ID (image index starting from 0), correct (cumulative number of correctly detected boxes), total (cumulative number of ground-truth boxes), RPs/Img (cumulative proposals divided by the number of images processed so far), IOU, Recall (correct/total), proposals (cumulative number of predicted boxes), precision (correct/proposals) */
