Some Improvements on Deep Convolutional Neural Network Based Image Classification

The main ideas of the paper are:

(1) increase the number of training samples; (2) increase the number of predictions made at test time; (3) ensemble multiple CNN models.

I. Increase the number of training samples

Common methods for increasing the number of samples are cropping, flipping, and adding randomly generated lighting noise.

1. The traditional crop method resizes the image to 256*256 and then crops it, but this loses a portion of the useful information near the image borders.

Therefore, the method used in this paper is to first scale the smallest side of the image up to 256, forming a 256*n or n*256 image, and then crop from that (see the sketch after point 2 below).

2. In addition to randomly adding lighting noise, additional color processing can be applied: randomly changing the image's contrast, brightness, and chroma, each by a factor between 0.5 and 1.5. Both augmentations are sketched below.
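A minimal sketch of these two augmentations using Pillow; the function names and the 224 crop size are illustrative assumptions rather than details taken from the paper:

```python
import random
from PIL import Image, ImageEnhance

def scale_shortest_side(img, target=256):
    """Scale the smallest side to `target`, yielding 256*n or n*256."""
    w, h = img.size
    if w < h:
        new_size = (target, round(h * target / w))
    else:
        new_size = (round(w * target / h), target)
    return img.resize(new_size, Image.BILINEAR)

def random_crop(img, size=224):
    """Take a random size*size crop; every image region stays reachable."""
    w, h = img.size
    left = random.randint(0, w - size)
    top = random.randint(0, h - size)
    return img.crop((left, top, left + size, top + size))

def random_color_jitter(img):
    """Randomly scale contrast, brightness, and chroma by 0.5-1.5."""
    for enhancer in (ImageEnhance.Contrast,
                     ImageEnhance.Brightness,
                     ImageEnhance.Color):  # Color ~= chroma/saturation
        img = enhancer(img).enhance(random.uniform(0.5, 1.5))
    return img

# One augmented training sample (flips would be added the same way):
# img = Image.open("example.jpg")
# sample = random_color_jitter(random_crop(scale_shortest_side(img)))
```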

II. Add additional test-time predictions

In addition to the standard five crops and two flips, at test time the paper:

1. uses images at three scales: 256, 228, and 284;

2. uses three views of the image (it also tried doubling the total number of layers, but that did not achieve good results);

3. reduces the number of predictions, as explained in the next paragraph.

Using 5 translations (crops), 2 flips, 3 scales, and 3 views produces 5 × 2 × 3 × 3 = 90 predictions per image. Running all 90 would greatly slow down testing, so they are not all used. The simple alternative is a greedy approach: start from the single best-performing prediction, then gradually add the remaining predictions one at a time until the recognition rate no longer increases. (The puzzle: at test time there is no way to know when the recognition rate stops increasing, so in principle one could still end up using all the predictions.)
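A minimal sketch of that greedy selection, assuming the softmax outputs of all 90 crop/flip/scale/view combinations have been precomputed on a labeled held-out set (the array names and shapes are my assumptions):

```python
import numpy as np

def greedy_select(val_probs, val_labels):
    """Greedily pick a subset of the 90 test-time predictions.

    val_probs: (n_views, n_images, n_classes) softmax outputs of each
               crop/flip/scale/view combination on a held-out set.
    val_labels: (n_images,) ground-truth class ids.
    Returns the indices of the selected views, in the order chosen.
    """
    n_views = val_probs.shape[0]
    selected = []
    best_acc = 0.0
    current_sum = np.zeros_like(val_probs[0])  # running sum of probs

    while True:
        best_view, best_view_acc = None, best_acc
        for v in range(n_views):
            if v in selected:
                continue
            avg = (current_sum + val_probs[v]) / (len(selected) + 1)
            acc = np.mean(avg.argmax(axis=1) == val_labels)
            if acc > best_view_acc:
                best_view, best_view_acc = v, acc
        if best_view is None:  # no remaining view improves accuracy
            break
        selected.append(best_view)
        current_sum += val_probs[best_view]
        best_acc = best_view_acc
    return selected
```

Presumably the subset is chosen once on held-out data and then reused unchanged at test time, which sidesteps the puzzle noted above.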


III. High-resolution model

Train a high-resolution model using larger images, then combine the base model with the high-resolution model. A few points about training the high-resolution model:

1. Testing uses 9 crops, 2 flips, 3 scales, and 3 views, with the greedy prediction method described above.

2. Because the training data is so rich, dropout is no longer as important. The paper applies dropout in phases, which works better than either never using dropout or always using it; one possible schedule is sketched below. The base and high-resolution models are then blended.
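A minimal sketch of one way to phase dropout during training, here using PyTorch; the schedule (off for the first two-thirds of training, on afterwards) is an assumption for illustration, not the paper's exact recipe:

```python
import torch.nn as nn

def set_dropout_p(model, p):
    """Set the drop probability of every nn.Dropout layer in `model`."""
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.p = p

model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(128, 10),
)

n_epochs = 90
for epoch in range(n_epochs):
    # Phase 1: rely on the rich augmented data alone; phase 2: switch
    # dropout on for the final third (the boundary is an assumption).
    set_dropout_p(model, 0.0 if epoch < 2 * n_epochs // 3 else 0.5)
    # ... run one epoch of training here ...
```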

IV. Recognition rate on ILSVRC2013

V. Summary

Personally, I think this paper does not contain any outstanding innovation; it mainly pushes sample augmentation further than others and achieves better results by fusing the outputs of 10 models.

Points for reference:

1. the methods for increasing the number of training samples;

2. the phased use of dropout, which is used here for the first time and achieves good results.
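For completeness, the 10-model fusion mentioned above can be as simple as averaging each model's softmax output; a minimal sketch under that assumption (the exact combination rule is not spelled out in this summary):

```python
import numpy as np

def fuse_models(model_probs):
    """Average per-model softmax outputs and pick the argmax class.

    model_probs: list of (n_images, n_classes) arrays, one per model,
                 e.g. 10 arrays for the 10-model ensemble.
    """
    avg = np.mean(np.stack(model_probs), axis=0)
    return avg.argmax(axis=1)
```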
