Paper notes: Attention for fine-grained categorization


Attention for fine-grained categorization

Google ICLR 2015

  

This paper extends the RNN-based attention model of Ba et al. to less constrained (unconstrained) visual scenes. The main differences from that earlier work are the use of a more effective visual network and the pre-training of that visual network outside of the attention RNN.
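To make the "pre-trained outside the attention RNN" point concrete, here is a minimal PyTorch sketch of the two-stage idea. Everything in it is a hypothetical stand-in: the tiny CNN takes the place of the GoogLeNet-based core, the data is random, and the layer sizes are made up; only the overall pattern (train the visual network as an ordinary classifier first, then freeze or fine-tune it inside the attention model) reflects the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for the GoogLeNet-style visual core; in the paper
# this network is pre-trained on a large classification task before being
# plugged into the attention RNN.
visual_core = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
pretrain_head = nn.Linear(64, 1000)  # classifier head used only for pre-training

# Stage 1: pre-train the visual core as an ordinary image classifier.
pretrain_model = nn.Sequential(visual_core, pretrain_head)
optimizer = torch.optim.SGD(pretrain_model.parameters(), lr=0.01)
images = torch.randn(8, 3, 224, 224)           # placeholder batch
labels = torch.randint(0, 1000, (8,))
loss = F.cross_entropy(pretrain_model(images), labels)
loss.backward()
optimizer.step()

# Stage 2: reuse the pre-trained core inside the attention RNN. Here it is
# frozen; only the recurrent / attention parameters would be trained.
for p in visual_core.parameters():
    p.requires_grad_(False)
```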

Earlier work on learned visual attention models has addressed some computer vision problems and shows that adding an attention mechanism can effectively improve performance. However, that work was largely restricted to simplified or toy settings, whereas the model in this paper handles more challenging factors such as occlusion and more complex scenes. The dataset below gives an example:

[Figure: example images from the dataset, not reproduced in these notes]

The model framework is derived mainly from "Multiple Object Recognition with Visual Attention" (Ba et al.) and is largely the same, with several differences:

1. The model chooses actions for N glimpses and classifies only after the final glimpse, as opposed to the sequential prediction task in Ba et al. The number of glimpses in each experiment is fixed.

2. Because the images in the dataset vary in size, the size of the "foveal" glimpse patches is set relative to the length of the shortest edge of the input image.

3. The LSTM units are replaced with a "vanilla" RNN. At glimpse $n$, the recurrent states $r_n^{(1)}$ and $r_n^{(2)}$ each consist of 4,096 units, and for $i = 1, 2$, $r_n^{(i)}$ is fully connected to $r_{n+1}^{(i)}$.

4. Rather than combining the outputs of the glimpse visual core $G_{image}(x_n | W_{image})$ and the location network $G_{loc}(l_n | W_{loc})$ by element-wise multiplication, this paper concatenates the two outputs and passes them through a fully connected layer, i.e. a learned linear combination.

Finally, the biggest difference is that the visual glimpse network $G_{image}(x_n | W_{image})$ is replaced with a more powerful and effective visual core based on the GoogLeNet model. A minimal sketch combining these pieces is given below.
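The following PyTorch sketch combines the differences listed above: a fixed number of glimpses with a single classification after the last one, "foveal" patches sized from the shortest image edge, two 4,096-unit "vanilla" RNN layers, and concatenation of the visual-core and location features followed by a fully connected layer. Apart from the 4,096-unit states, all layer sizes, the small convolutional visual core (standing in for the GoogLeNet-based one), and the names are assumptions made for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GlimpseRNN(nn.Module):
    """Sketch of the glimpse network described above (batch size 1 assumed)."""

    def __init__(self, n_scales=3, feat_dim=1024, state_dim=4096, n_classes=200):
        super().__init__()
        self.n_scales = n_scales
        # Stand-in for the GoogLeNet-based visual core G_image(x_n | W_image);
        # it consumes the n_scales foveal patches stacked along the channel axis.
        self.visual_core = nn.Sequential(
            nn.Conv2d(3 * n_scales, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Location network G_loc(l_n | W_loc).
        self.loc_net = nn.Linear(2, feat_dim)
        # Difference 4: concatenate the two outputs and pass them through a
        # fully connected layer instead of multiplying them element-wise.
        self.fuse = nn.Linear(2 * feat_dim, state_dim)
        # Difference 3: two "vanilla" RNN layers of 4,096 units each;
        # r_n^{(i)} is fully connected to r_{n+1}^{(i)}.
        self.rnn1 = nn.RNNCell(state_dim, state_dim)
        self.rnn2 = nn.RNNCell(state_dim, state_dim)
        self.emit_loc = nn.Linear(state_dim, 2)   # predicts the next glimpse location
        self.classifier = nn.Linear(state_dim, n_classes)

    def extract_glimpse(self, image, loc):
        """Difference 2: foveal patches whose base size is tied to the shortest
        edge of the input image (here, hypothetically, shortest_edge / 4,
        doubled at each coarser scale), so variable-sized images are handled."""
        _, _, h, w = image.shape
        base = min(h, w) // 4
        # Map loc from [-1, 1] to pixel coordinates (single image assumed).
        cy = int((loc[0, 0].item() + 1) / 2 * h)
        cx = int((loc[0, 1].item() + 1) / 2 * w)
        patches = []
        for s in range(self.n_scales):
            size = base * (2 ** s)
            y0 = max(0, min(h - size, cy - size // 2))
            x0 = max(0, min(w - size, cx - size // 2))
            patch = image[:, :, y0:y0 + size, x0:x0 + size]
            # Resize every scale to the finest resolution before stacking.
            patches.append(F.interpolate(patch, size=(base, base),
                                         mode="bilinear", align_corners=False))
        return torch.cat(patches, dim=1)

    def forward(self, image, n_glimpses=4):
        b = image.size(0)
        loc = torch.zeros(b, 2)                   # start at the image centre
        r1 = torch.zeros(b, self.rnn1.hidden_size)
        r2 = torch.zeros(b, self.rnn2.hidden_size)
        for _ in range(n_glimpses):               # difference 1: fixed N glimpses
            g_img = self.visual_core(self.extract_glimpse(image, loc))
            g_loc = self.loc_net(loc)
            g = torch.relu(self.fuse(torch.cat([g_img, g_loc], dim=1)))
            r1 = self.rnn1(g, r1)
            r2 = self.rnn2(r1, r2)
            loc = torch.tanh(self.emit_loc(r2))   # where to look next
        return self.classifier(r2)                # classify only after the last glimpse


# Usage: one variable-sized image, four glimpses, logits over n_classes categories.
model = GlimpseRNN()
logits = model(torch.randn(1, 3, 300, 480), n_glimpses=4)
```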

Because the model builds on this earlier framework, the paper itself describes it only briefly. I will go back and read the cited paper and use it together with this one to understand the model.

  

  

A little space for some personal impressions:

I'll go read that article and then come back! Wait for me!!!

  
