Unsupervised Feature Learning by Augmenting Single Images

I. Main Ideas

When deep learning is applied to vision tasks such as object detection, data augmentation is commonly used to increase the amount of training data without additional labeling cost; it reduces overfitting and improves algorithm performance. This article examines whether data augmentation can serve as the main component of an unsupervised feature-learning architecture. The approach: first, image patches are randomly sampled as seed patches; then each seed patch is transformed to produce a series of patches that form a surrogate ("virtual") class; a CNN is trained on these surrogate classes; finally, the trained model is evaluated on different datasets.

II. Main Framework

1. Data Acquisition (Augmentation)

(1) Randomly sample between 50 and 32,000 seed patches of size 32×32 from whole images;

(2) For each seed patch, apply different translation, scale, color, and contrast transformations to generate between 1 and 100 sub-patches, which together form one surrogate class;
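A minimal pure-Python sketch of this surrogate-class generation, assuming a toy grayscale patch stored as a nested list (the paper's actual pipeline also varies color; `augment` and its transform ranges here are hypothetical simplifications):

```python
import random

def augment(patch, n_variants=100):
    """Generate one surrogate class: n_variants transformed copies of a seed patch.
    patch: 2-D list of grayscale values in [0, 1] (toy stand-in for a 32x32 RGB patch).
    Only translation and contrast are modeled; scale and color changes are omitted."""
    h, w = len(patch), len(patch[0])
    variants = []
    for _ in range(n_variants):
        dx = random.randint(-2, 2)           # horizontal translation
        dy = random.randint(-2, 2)           # vertical translation
        contrast = random.uniform(0.7, 1.3)  # multiplicative contrast change
        out = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                sy, sx = y + dy, x + dx      # shifted source pixel
                if 0 <= sy < h and 0 <= sx < w:
                    out[y][x] = min(1.0, max(0.0, patch[sy][sx] * contrast))
        variants.append(out)
    return variants
```

All variants of one seed share a class label during CNN training, which is what makes the scheme unsupervised: the labels are free.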



2. Training

(1) The training framework follows Krizhevsky's "quick" model, adjusted to support dropout: two convolutional layers (5×5, 64 kernels each), one fully connected layer (128 neurons), and one softmax layer.
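The layer sizes implied by this architecture can be checked with a short shape calculation (the padding and 2×2 max-pooling settings below are assumptions, since the source does not state them):

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a square convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial output size of a square max-pooling layer."""
    return (size - kernel) // stride + 1

s = 32                                # 32x32 input patch
s = pool_out(conv_out(s, 5, pad=2))   # conv1: 5x5, 64 kernels, then 2x2 pool
s = pool_out(conv_out(s, 5, pad=2))   # conv2: 5x5, 64 kernels, then 2x2 pool
flat = s * s * 64                     # flattened input to the 128-unit FC layer
print(s, flat)                        # -> 8 4096
```

Under these assumptions the fully connected layer sees an 8×8×64 = 4096-dimensional input.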

(2) Pre-training: 100 surrogate classes are randomly selected and trained first; training is stopped as soon as the training error starts to decrease. The drop in error is taken as a sign that a direction toward a good local optimum has been found.
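The stopping rule described here, sketched minimally (`train_step` is a hypothetical callable that runs one training epoch and returns the current training error):

```python
def pretrain(train_step, max_steps=10000):
    """Run training steps until the training error first starts to decrease,
    then stop -- the drop signals that a direction toward a good local
    optimum has been found."""
    prev = None
    for step in range(max_steps):
        err = train_step()
        if prev is not None and err < prev:
            return step, err  # error began to fall: stop pre-training here
        prev = err
    return max_steps, prev
```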

3. Test

The trained model (with the softmax layer removed) is used to extract image features, which are pooled into a three-level spatial pyramid. A linear SVM classifier is then trained on these features, with the SVM parameters selected through cross-validation.
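A minimal sketch of the three-level spatial pyramid pooling described above, assuming a single 2-D feature map stored as a nested list (the grid sizes 1, 2, 4 and the use of max-pooling are assumptions about the pooling details, which the source does not spell out):

```python
def pyramid_features(fmap):
    """Three-level spatial pyramid over one feature map (2-D list):
    max-pool over 1x1, 2x2, and 4x4 grids and concatenate the results
    into a single feature vector of length 1 + 4 + 16 = 21."""
    h, w = len(fmap), len(fmap[0])
    feats = []
    for grid in (1, 2, 4):
        ch, cw = h // grid, w // grid  # cell height and width at this level
        for gy in range(grid):
            for gx in range(grid):
                cell = [fmap[y][x]
                        for y in range(gy * ch, (gy + 1) * ch)
                        for x in range(gx * cw, (gx + 1) * cw)]
                feats.append(max(cell))
    return feats
```

In practice this pooling is applied per channel and the vectors are concatenated across channels before being fed to the linear SVM.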

III. Experiments

1. Experimental Results

2. Result Analysis

(1) Impact of the number of surrogate classes used in unsupervised training


It can be seen that accuracy increases with the number of surrogate classes, but drops slightly beyond 8,000 classes. A possible cause: as the number of classes grows, the overlap between classes increases, which makes the classification task increasingly difficult.

(2) Impact of the number of samples per class during unsupervised training


The plot shows that when each class has only a few samples, increasing the number of classes makes little difference to accuracy. A possible cause: with too few samples the model overfits, so test results are poor and unstable. Once the number of samples per class reaches a certain level, the differences become apparent.

IV. Summary

This paper focuses on unsupervised feature learning with CNNs. Since there are currently few unsupervised training methods for CNNs, the approach is worth studying.

