"Paper Notes" adversarial multi-task Learning for Text classification


I. Summary
This paper was published at ACL 2017. The authors observe that in most existing neural multi-task learning models for text classification, the shared features (shared features) may still contain task-specific features (task-specific features) or noise from other tasks. They propose an adversarial multi-task learning model that reduces this interference between the shared feature space and the task-specific feature spaces, and they demonstrate its effectiveness on 16 text classification tasks. The experiments also show that the shared features learned by the model transfer well to new tasks.

II. Model Method
2.1 Adversarial Shared-private Model

As noted in the summary, neural multi-task learning methods can suffer from contamination between the shared and private features, which the proposed model is designed to prevent. In the figure above, (b) illustrates the method of this paper with two tasks: the blue squares and triangles denote the private feature spaces of the two tasks, which capture task-specific features, while the overlapping red circles denote the shared feature space, which captures features common to the different tasks.
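To make the shared-private split concrete, here is a minimal PyTorch sketch (not the authors' code; class and parameter names are illustrative): each task has its own private LSTM, all tasks share one LSTM, and the two resulting feature vectors are concatenated for the task-specific classifier.

```python
import torch
import torch.nn as nn

class SharedPrivateEncoder(nn.Module):
    def __init__(self, num_tasks, emb_dim, hidden_dim):
        super().__init__()
        # one shared LSTM whose features should be task-invariant
        self.shared_lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        # one private LSTM per task for task-specific features
        self.private_lstms = nn.ModuleList(
            [nn.LSTM(emb_dim, hidden_dim, batch_first=True) for _ in range(num_tasks)]
        )

    def forward(self, emb, task_id):
        # emb: (batch, seq_len, emb_dim) word embeddings of one task's batch
        _, (h_shared, _) = self.shared_lstm(emb)
        _, (h_private, _) = self.private_lstms[task_id](emb)
        s = h_shared[-1]   # shared sentence features  (batch, hidden_dim)
        p = h_private[-1]  # private sentence features (batch, hidden_dim)
        return s, p, torch.cat([s, p], dim=-1)
```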
This paper uses adversarial training to ensure that the shared space contains only information shared across tasks, and orthogonality constraints to eliminate redundant information between the shared and private spaces.
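Both constraints can be sketched as losses. In the hedged sketch below, `diff_loss` is a Frobenius-norm orthogonality penalty of the form $\|S^\top P\|_F^2$ between batched shared and private features, and a task discriminator is trained adversarially to predict the source task from the shared features (the gradient-reversal plumbing is omitted; all names here are assumptions, not the paper's code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def diff_loss(shared, private):
    # Orthogonality penalty ||S^T P||_F^2 over a batch: pushes the shared
    # and private feature matrices toward orthogonality so the two spaces
    # carry non-redundant information.
    corr = shared.t() @ private          # (hidden_dim, hidden_dim)
    return (corr ** 2).sum()

class TaskDiscriminator(nn.Module):
    # Predicts which task a shared feature vector came from. Trained
    # adversarially (e.g. through a gradient reversal layer, omitted here)
    # so the shared encoder learns features the discriminator cannot
    # attribute to any single task.
    def __init__(self, hidden_dim, num_tasks):
        super().__init__()
        self.fc = nn.Linear(hidden_dim, num_tasks)

    def forward(self, shared):
        return self.fc(shared)           # logits over task identities

def adversarial_loss(task_logits, task_id):
    # Cross-entropy of the discriminator's task prediction: the
    # discriminator minimizes it, while the shared encoder is trained to
    # maximize it, confusing the discriminator.
    targets = torch.full((task_logits.size(0),), task_id, dtype=torch.long)
    return F.cross_entropy(task_logits, targets)
```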

2.2 Recurrent Models for Text Classification
This paper uses a long short-term memory (LSTM) network for text classification (for background on LSTM and its variants, see the earlier post "[Deep Learning] LSTM (Long Short-Term Memory) and Variants"). For a given sentence $x = \{x_1, x_2, \dots, x_T\}$, a lookup layer first maps each word to its vector representation; the sequence then passes through the LSTM, and the output of the last time step, $h_T$, serves as the representation of the whole sentence. Finally, a softmax layer produces the probability of each class:
$$y' = \mathrm{softmax}(W h_T + b)$$
$$L(y', y) = -\sum_{i=1}^{N} \sum_{j=1}^{C} y_i^j \log {y'}_i^j$$
where $y'$ is the predicted class distribution output by the softmax layer, $y_i^j$ is the ground-truth label of sample $i$ for class $j$, $N$ is the number of samples, and $C$ is the number of classes.
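A minimal PyTorch sketch of this single-task pipeline (illustrative names, not the paper's code): lookup layer, LSTM, last hidden state $h_T$, and a linear layer whose softmax is folded into the cross-entropy loss.

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # lookup layer
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)    # computes W h_T + b

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)   # h_n[-1] is the last time step h_T
        return self.fc(h_n[-1])        # class logits

# nn.CrossEntropyLoss applies log-softmax internally, so combined with the
# logits above it computes the loss L(y', y) from the formula.
model = LSTMClassifier(vocab_size=10000, emb_dim=100, hidden_dim=128, num_classes=2)
criterion = nn.CrossEntropyLoss()
```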

2.3 Multi-task Learning for Text Classification
The goal of multi-task learning is to improve classification accuracy by training multiple tasks in parallel and exploiting the correlations among them. Suppose task $k$ has $N_k$ samples; its dataset $D_k$ is then defined as
$$D_k = \{(x_i^k, y_i^k)\}_{i=1}^{N_k},$$
where $x_i^k$ and $y_i^k$ denote the $i$-th sentence of task $k$ and its label.
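A hedged sketch of how per-task losses over the $D_k$ might be combined in one training step, reusing the encoder sketched in Section 2.1 (the weights `alphas` and all function names are assumptions; the full objective in the paper additionally includes the adversarial and orthogonality terms).

```python
import torch
import torch.nn.functional as F

def multitask_step(encoder, classifiers, batches, alphas):
    # batches: one (embeddings, labels) pair per task k, sampled from D_k
    # alphas:  per-task loss weights (an assumption; tasks may be
    #          weighted differently in practice)
    total = torch.zeros(())
    for k, (emb, labels) in enumerate(batches):
        _, _, feats = encoder(emb, task_id=k)  # shared + private features
        logits = classifiers[k](feats)         # task-specific softmax layer
        total = total + alphas[k] * F.cross_entropy(logits, labels)
    # The full adversarial shared-private objective would also add
    # lambda * L_adv + gamma * L_diff from Section 2.1.
    return total
```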
