Paper Notes: Curriculum Learning of Multiple Tasks

Tags: svm

Curriculum Learning of Multiple Tasks

CVPR 2015

In multi-task learning settings such as attribute recognition, the early approach was to train a separate classifier per attribute; with deep learning, the common approach became joint learning over shared convolutional features, letting the network discover the latent connections between the tasks (or attributes) before a final classification or logistic-regression layer. This paper criticizes that setup: the relationships between tasks are not all equal — some are weak, some essentially irrelevant. As shown in the paper's figure, two pipelines are contrasted. One (the green arrows) models all tasks as related and learns them jointly by sharing information. The other, proposed here, learns one task first, then uses it as the starting point for learning the next task, and so on along the discovered relations until all tasks are learned.

The paper focuses on parameter transfer, which rests on the idea that models for related tasks have similar parameter representations. A key assumption is that similarity between models can be measured by the Euclidean distance between their weight vectors. The method can thus be viewed as decomposing the multi-task problem into a sequence of domain adaptation problems.

There is, of course, the usual motivational fluff. The paper is inspired by how humans are educated: a student in school is a multi-task learner who must master many courses — not all at once, but sequentially, in some meaningful order, so that knowledge accumulates gradually and what was learned earlier makes later courses easier to learn.

The order in which tasks are learned strongly affects final performance. The paper uses PAC-Bayesian theory to derive a generalization bound that depends on the data representation and on the learning algorithm. Based on this bound, it presents a theoretically justified algorithm that automatically selects a good order in which to learn the tasks. The experiments show that learning in the automatically chosen order outperforms both training each task independently and training all tasks jointly.

Impatient yet? Let's get into the details.

  

Suppose we have n tasks T_1, T_2, ..., T_n that share the same input and output space. Each task T_i comes with a corresponding training set S_i containing m_i sampled points. We also assume that, to solve each task, the learner uses a linear predictor f(x) = sign(⟨w, x⟩), where w is a weight vector, and that classification performance is measured by the 0/1 loss. The learner's goal is to find n weight vectors w_1, w_2, ..., w_n that minimize the average expected error over the tasks T_1, T_2, ..., T_n, i.e. (1/n) Σ_{i=1}^{n} er_i(w_i), where er_i denotes the expected 0/1 error on task T_i.
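To make the setup concrete, here is a minimal numpy sketch of the problem statement above, on toy data. The least-squares fit is just a stand-in for whatever per-task learner is used; all names and sizes here are my own choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_one_error(w, X, y):
    """Average 0/1 loss of the linear predictor f(x) = sign(<w, x>)."""
    return np.mean(np.sign(X @ w) != y)

# n = 3 toy tasks sharing the same input space (d = 5) and labels in {-1, +1}.
n_tasks, d, m = 3, 5, 100
tasks = []
for _ in range(n_tasks):
    w_true = rng.normal(size=d)      # ground-truth weight vector for this task
    X = rng.normal(size=(m, d))      # m_i training points for task T_i
    y = np.sign(X @ w_true)
    tasks.append((X, y))

# The learner's goal: find w_1 ... w_n with small average error across tasks.
weights = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in tasks]
avg_error = np.mean([zero_one_error(w, X, y)
                     for w, (X, y) in zip(weights, tasks)])
print(f"average training 0/1 error: {avg_error:.3f}")
```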

  1. Learning in a fixed order.

This section uses Adaptive SVM to learn each task, transferring from the previously learned task to the next: given the weight vector of a previous task and the training data of the current task, Adaptive SVM performs the following optimization:

  Here w̃ (a w with a tilde on top) denotes the weight vector of the previous task; the objective regularizes the new weight vector toward w̃ (penalizing ||w − w̃||²) rather than toward zero, plus the usual hinge-loss term on the current task's training data.
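The Adaptive-SVM idea — stay close to the previous task's weights instead of shrinking toward zero — can be sketched with plain subgradient descent. This is a simplified stand-in for the paper's actual solver; the function and parameter names are mine.

```python
import numpy as np

def adaptive_svm(X, y, w_prev, C=1.0, lr=0.01, epochs=300):
    """Subgradient-descent sketch of an Adaptive-SVM-style objective:
        minimize 0.5 * ||w - w_prev||^2 + C * sum_i hinge(y_i * <w, x_i>),
    so the new weight vector stays close to the previous task's solution."""
    w = w_prev.copy()
    for _ in range(epochs):
        margins = y * (X @ w)
        active = margins < 1  # points violating the margin
        hinge_grad = -(y[active, None] * X[active]).sum(axis=0)
        grad = (w - w_prev) + C * hinge_grad
        w -= lr * grad
    return w
```

For the first task in the sequence, `w_prev` would be the zero vector (reducing this to an ordinary SVM-style objective); for each later task, it is the weight vector learned for the previous task.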

2. Learning a data-dependent order.

  Here we treat the average expected error of the result as a function of the order π in which the tasks are learned. We assume the learning algorithm used to solve each individual task is the same deterministic algorithm for all tasks: given the solution of the previously solved task and the training data S, it returns the corresponding weight vector w. The following theorem provides an upper bound on the average expected error:
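In this spirit, a greedy curriculum can be sketched as follows: at each step, tentatively solve every remaining task starting from the current weights, and commit to the one with the smallest bound-style score (here, training error plus a ||w − w_prev||²/m transfer penalty). This is my own simplified reading, with a least-squares stand-in for the per-task solver, not the paper's exact algorithm.

```python
import numpy as np

def lstsq_solver(X, y, w_prev):
    """Least-squares stand-in for the per-task (adaptive) solver.
    A real implementation would regularize toward w_prev, as in Adaptive SVM."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def zero_one_error(w, X, y):
    return np.mean(np.sign(X @ w) != y)

def greedy_task_order(tasks, dim, train_fn=lstsq_solver):
    """Pick a data-dependent curriculum greedily: always learn next the task
    whose (training error + transfer penalty) score is smallest."""
    remaining = list(range(len(tasks)))
    w_prev = np.zeros(dim)
    order, weights = [], []
    while remaining:
        scored = []
        for t in remaining:
            X, y = tasks[t]
            w = train_fn(X, y, w_prev)
            score = zero_one_error(w, X, y) + np.sum((w - w_prev) ** 2) / len(y)
            scored.append((score, t, w))
        _, t, w = min(scored, key=lambda s: s[0])
        order.append(t)
        weights.append(w)
        w_prev = w
        remaining.remove(t)
    return order, weights

# Three toy tasks over the same 4-dimensional input space.
rng = np.random.default_rng(2)
tasks = []
for _ in range(3):
    X = rng.normal(size=(40, 4))
    tasks.append((X, np.sign(X @ rng.normal(size=4))))
order, weights = greedy_task_order(tasks, dim=4)
```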

  

