TinyMind Multi-Label Image Classification: Our Competition Journey

Competition portal: https://www.tinymind.cn/competitions/42

We are team Silly Dog and Fairy.

Final leaderboard:

Thanks to the author of the first baseline in this competition (blog post 81946511).

Our code is built on that baseline, which saved us the trouble of writing the data-loading and scoring code ourselves.

First, we swapped the baseline's model for ResNet50 and DenseNet201 trained from scratch. We then switched to transfer learning, following blog post 78889838, and later replaced its InceptionV3 with InceptionResNetV2:

    from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
    from keras.layers import GlobalAveragePooling2D, Dense
    from keras.models import Model

    # ImageNet-pretrained backbone without the original classification head
    base_model = InceptionResNetV2(weights='imagenet', include_top=False)
    x = base_model.output
    x = GlobalAveragePooling2D()(x)
    x = Dense(1024, activation='relu')(x)
    # sigmoid (not softmax) output so that each of the 6941 labels is predicted independently
    predictions = Dense(6941, activation='sigmoid')(x)
    model = Model(inputs=base_model.input, outputs=predictions)
    model.summary()

We also added data augmentation with the imgaug library:

    from imgaug import augmenters as iaa

    seq = iaa.Sequential([
        iaa.CropAndPad(percent=(-0.1, 0.1)),
        iaa.Sometimes(0.5, iaa.GaussianBlur(sigma=(0, 0.5))),
        iaa.ContrastNormalization((0.75, 1.5)),
        iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.05 * 255)),
    ], random_order=True)

    imglist = []
    imglist.append(X_train)
    images_aug = seq.augment_images(X_train)
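The augmenter has to be applied to each batch before it reaches the network. Our generator code is not shown here, but as a rough sketch (the augmented_batches name, X_train/y_train arrays, and the batching logic below are illustrative, not our exact pipeline), it can look like this:

    import numpy as np

    def augmented_batches(X, y, batch_size, seq):
        # Yield endless mini-batches with the imgaug pipeline applied to the images.
        # X: uint8 images of shape (n, H, W, 3); y: binary label matrix of shape (n, 6941).
        n = X.shape[0]
        while True:
            order = np.random.permutation(n)
            for start in range(0, n, batch_size):
                idx = order[start:start + batch_size]
                images = seq.augment_images(X[idx])  # uint8 images in, augmented images out
                # preprocess_input is the one imported from keras.applications above
                yield preprocess_input(images.astype('float32')), y[idx]

A generator like this can then be passed to fit_generator in place of train_generator.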

With this model in place, we then set about tuning batch_size, steps_per_epoch, and the number of epochs for the two training stages; the best settings we found were:

    batch_size = ...

    # Stage 1: transfer learning (train the new head on frozen features)
    setup_to_transfer_learning(model, base_model)
    history_t1 = model.fit_generator(train_generator,
                                     steps_per_epoch=274,
                                     validation_data=val_generator,
                                     epochs=10,
                                     callbacks=[reduce],
                                     verbose=1)

    # Stage 2: fine-tuning (also update part of the backbone)
    setup_to_fine_tune(model, base_model)
    history_ft = model.fit_generator(train_generator,
                                     steps_per_epoch=274,
                                     epochs=8,
                                     validation_data=val_generator,
                                     validation_steps=10,
                                     callbacks=[reduce],
                                     verbose=1)
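The setup_to_transfer_learning and setup_to_fine_tune helpers come from the transfer-learning blog post referenced above and are not reproduced here. A minimal sketch of the usual pattern (the optimizers, learning rates, and the freeze_until index below are assumptions, not our exact values):

    from keras.optimizers import Adam, SGD

    def setup_to_transfer_learning(model, base_model):
        # Stage 1: freeze the whole backbone and train only the new classifier head.
        for layer in base_model.layers:
            layer.trainable = False
        model.compile(optimizer=Adam(lr=1e-3),
                      loss='binary_crossentropy')  # matches the sigmoid multi-label output

    def setup_to_fine_tune(model, base_model, freeze_until=700):
        # Stage 2: unfreeze the top blocks of the backbone and train with a smaller learning rate.
        for layer in base_model.layers[:freeze_until]:
            layer.trainable = False
        for layer in base_model.layers[freeze_until:]:
            layer.trainable = True
        model.compile(optimizer=SGD(lr=1e-4, momentum=0.9),
                      loss='binary_crossentropy')

binary_crossentropy is the natural loss here because each of the 6941 labels is predicted independently by its own sigmoid unit.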

At this point, we got a score of 44.3 in the preliminaries.

Then came the two key pieces of work that took us to 45.89 points!

First, change the 0.5 threshold in the arr2tag function to 0.3. The reason: the data set is small and many labels have only a few training images, so their predicted probabilities stay low; lowering the threshold lets more of the correct labels be predicted.

Second, model fusion. We merged the results of the InceptionV3 and InceptionResNetV2 models: first save both models, then take the union of the label sets predicted by each of them, which further favors recall in the same way as lowering the threshold.

Some of the code is as follows:

    import numpy as np
    from keras.models import load_model

    def arr2tag(arr1, arr2):
        # Turn two probability matrices into one tag list per image,
        # keeping every label that either model scores above 0.3.
        tags = []
        for i in range(arr1.shape[0]):
            index1 = np.where(arr1[i] > 0.3)
            index2 = np.where(arr2[i] > 0.3)
            index1 = index1[0].tolist()
            index2 = index2[0].tolist()
            index = list(set(index1).union(set(index2)))
            tag = [hash_tag[j] for j in index]
            tags.append(tag)
        return tags

    model1 = load_model('model1.h5')
    model2 = load_model('model2.h5')
    y_tags = arr2tag(y_pred1, y_pred2)
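For completeness, y_pred1 and y_pred2 above are just the sigmoid outputs of the two saved models on the preprocessed test images. A rough sketch, where X_test is a hypothetical array of test images already resized to the network input size:

    # Both Inception-style models use the same preprocessing (scaling to [-1, 1])
    X_test_pre = preprocess_input(X_test.astype('float32'))
    y_pred1 = model1.predict(X_test_pre, batch_size=32)  # InceptionV3 probabilities, shape (n_images, 6941)
    y_pred2 = model2.predict(X_test_pre, batch_size=32)  # InceptionResNetV2 probabilities, same shape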

Both of these key ideas can be pushed further for more points: the two 0.3 thresholds can be tuned more precisely, and the fusion can incorporate additional models.
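As a concrete example of tuning the thresholds more precisely, one can sweep each threshold on a held-out validation split and keep whichever value scores best. The sketch below uses a mean per-sample F1 as the stand-in metric (the competition's exact scoring may differ), and val_pred1, val_pred2, and val_true are hypothetical validation arrays:

    import numpy as np

    def mean_f1(pred, true, thr):
        # Mean per-sample F1 for one model's probabilities at a given threshold.
        scores = []
        for p, t in zip(pred, true):
            chosen = set(np.where(p > thr)[0])
            actual = set(np.where(t == 1)[0])
            if not chosen and not actual:
                scores.append(1.0)
                continue
            tp = len(chosen & actual)
            precision = tp / len(chosen) if chosen else 0.0
            recall = tp / len(actual) if actual else 0.0
            f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
            scores.append(f1)
        return np.mean(scores)

    # Sweep each model's threshold independently on the validation split.
    for thr in np.arange(0.20, 0.50, 0.05):
        print(thr, mean_f1(val_pred1, val_true, thr), mean_f1(val_pred2, val_true, thr))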

A few closing thoughts: we are a little disappointed not to have taken first place. For quite a while we could not come up with better methods, so we put our effort into tuning; we tried many learning rates and epoch counts, and the learning rate even needed a reduction callback. The key work was done on the night before the competition ended and on the final morning. Our team is made up of Silly Dog and Fairy. Fairy had seen the model-fusion approach in a WeChat public account article and decided on that last night to try it; Silly Dog cleverly changed the 0.5 in arr2tag to 0.4 when training and saving the models, and when we submitted in the morning the score had gone up by a full point, which made us very happy. The competition was well worth it ~

Silly Dog has just made the code public for everyone here: https://github.com/feifanrensheng/tinymind-
