

    A Summary of Multi-task Learning

    Author: Yubo Feng.

    Intro

    The paper[0] introduces multi-task learning (MTL) through data hunger, the most common problem of deep learning[1].

    Basic assumption: tasks are related.

    MTL mimics human learning, since people transfer knowledge from one task to another.

    MTL is similar to transfer learning, multi-label learning, and multi-output regression. In my opinion, Dual Learning[2] is a subset of Multi-Task Learning.

    Multi-task Learning

    Definition 1 (Multi-task learning). Given \(m\) learning tasks \(\{T_i\}_{i=1}^m\) where the tasks are related but not identical, multi-task learning aims to improve the model for each \(T_i\) by using the knowledge contained in the \(m\) tasks.

    Two foundations for MTL:

    1. tasks are related;
    2. the tasks' learning settings, which lead to the different MTL settings surveyed below.

    1. Multi-task Supervised Learning (MTSL)

    Model Description:

    • Task \(T_i\) for \(i = 1, \dots, m\)
    • Dataset \(D_i = \{(x_j^i, y_j^i)\}_{j=1}^{n_i}\)
    • \(x_j^i\) is a \(d\)-dimensional feature vector
    • \(y_j^i\) is the label for \(x_j^i\)
    • \(f_i(x_j^i)\) is a good approximation of \(y_j^i\) and is used for prediction in the \(i\)-th task
    • the goal is to learn \(\{f_i(x)\}_{i=1}^m\) for the \(m\) tasks

    Task relatedness in three aspects:

    1. feature --> feature-based MTSL
    2. parameter --> parameter-based MTSL
    3. instance --> instance-based MTSL

    1.1 Feature-based MTSL

    From this aspect, tasks are assumed to share a feature representation, which can be a subset or a transformation of the original features.

    It can learn a common feature representation for different tasks.

    It is more suitable for applications whose original feature representation is not very informative or discriminative.

    But it can easily be affected by outlier tasks.

    It is complementary to parameter-based MTSL.

    1.1.1 Feature Transformation Approach

    There are two types of transformation, linear and nonlinear.

    Feedforward neural network

    A feedforward network performs a nonlinear transformation in general, and a linear transformation when the activation units are linear.

    Linear Fitting

    There are two specific methods named multi-task feature learning (MTFL) and multi-task sparse coding (MTSC).

    Generally speaking, they transform a data instance as \(\hat{x}_j^i = U^T x_j^i\) and then learn \(f_i(x_j^i) = (a^i)^T \hat{x}_j^i + b_i\). In other words, there are two steps of linear transformation on the features; a minimal sketch appears after the list of differences below.

    Differences between MTFL and MTSC:

    • MTFL
      • \(U\) is orthogonal
      • \(A\) is row-sparse via \(l_{2,1}\) regularization, where \(A = (a^1, \dots, a^m)\).
    • MTSC
      • \(U\) is overcomplete
      • \(A\) is sparse via \(l_1\) regularization
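
    As a rough illustration of the formulation above, here is a minimal NumPy sketch (synthetic dimensions and random matrices, not the authors' implementation) of the two-step linear model together with the two regularizers that distinguish MTFL and MTSC:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d, k, m = 10, 5, 3                      # original dim, transformed dim, number of tasks

    U = rng.standard_normal((d, k))         # shared linear transformation
    A = rng.standard_normal((k, m))         # A = (a^1, ..., a^m), one column per task
    b = rng.standard_normal(m)              # per-task biases

    def predict(x, i):
        """Two-step linear model: x_hat = U^T x, then f_i(x) = (a^i)^T x_hat + b_i."""
        x_hat = U.T @ x
        return A[:, i] @ x_hat + b[i]

    def mtfl_penalty(A):
        """MTFL-style l_{2,1} regularizer: sum of the l2 norms of A's rows (row sparsity)."""
        return np.sum(np.linalg.norm(A, axis=1))

    def mtsc_penalty(A):
        """MTSC-style l_1 regularizer: element-wise sparsity of the codes."""
        return np.sum(np.abs(A))

    x = rng.standard_normal(d)
    print(predict(x, i=0), mtfl_penalty(A), mtsc_penalty(A))
    ```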

    1.1.2 Feature Selection Approach

    In this approach, the aim is to learn \(f_i(x) = (w^i)^T x + b_i\), where the vectors \(w^i\) form the columns of the parameter matrix \(W\). There are two distinct ways to operate on \(W\): the first is regularization, and the second is sparse probabilistic priors.

    Regularization

    Minimizing \(\|W\|_{p,q}\), i.e. \(l_{p,q}\)-norm regularization, is the most widely used technique for constraining \(W\).

    The effect of \(l_{p,q}\) regularization is to make \(W\) row-sparse, so that unimportant features can be filtered out.
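
    For concreteness, here is a small sketch of the \(l_{p,q}\) norm under the common convention of taking the \(l_q\) norm of the vector of row-wise \(l_p\) norms (conventions differ slightly across papers):

    ```python
    import numpy as np

    def lpq_norm(W, p=2, q=1):
        """l_{p,q} norm: the l_q norm of the vector of l_p norms of W's rows.

        With p=2, q=1 this is the l_{2,1} norm, whose minimization pushes whole
        rows of W to zero, i.e. it discards a feature for all tasks at once.
        """
        row_norms = np.linalg.norm(W, ord=p, axis=1)
        return np.linalg.norm(row_norms, ord=q)

    W = np.array([[0.0, 0.0, 0.0],          # a feature dropped for every task
                  [1.0, 2.0, 3.0],
                  [0.5, 0.0, 1.0]])
    print(lpq_norm(W))                      # the l_{2,1} norm of W
    ```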

    Sparse Probabilistic Priors

    \(w_{ji} \sim GN(0, \rho_j, p)\), where \(GN(\cdot, \cdot, \cdot)\) denotes the generalized normal distribution.
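
    For reference, under one common parameterization (the survey may use a slightly different one) the density of the generalized normal distribution with location \(0\), scale \(\rho_j\), and shape \(p\) is

    \[
    GN(w; 0, \rho_j, p) = \frac{p}{2\rho_j\,\Gamma(1/p)} \exp\left(-\left(\frac{|w|}{\rho_j}\right)^{p}\right),
    \]

    and the case \(p = 1\) recovers the Laplace distribution, a standard sparsity-inducing prior.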

    1.1.3 Deep Learning Approach

    • tens of or hundreds of hidden layers
    • treat the output of one hidden layer as the shared feature representation

    Recently, BERT[3], the most impactful recent progress in NLP, also leverages the output of hidden layers as a shared feature representation to handle 11 NLP tasks.
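
    As a minimal sketch of this hard-sharing idea (PyTorch is used only for illustration, and all dimensions are made up), a shared hidden stack produces the common representation and each task adds its own output head:

    ```python
    import torch
    import torch.nn as nn

    class SharedBottomMTL(nn.Module):
        """Hard parameter sharing: shared hidden layers plus one head per task."""

        def __init__(self, in_dim=64, hidden_dim=128, task_out_dims=(2, 5, 1)):
            super().__init__()
            # Hidden layers whose output serves as the shared feature representation.
            self.shared = nn.Sequential(
                nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            )
            # One small task-specific head per task.
            self.heads = nn.ModuleList(
                [nn.Linear(hidden_dim, out_dim) for out_dim in task_out_dims]
            )

        def forward(self, x):
            shared_repr = self.shared(x)            # shared feature representation
            return [head(shared_repr) for head in self.heads]

    model = SharedBottomMTL()
    x = torch.randn(8, 64)                          # a batch of 8 examples
    outputs = model(x)                              # one output tensor per task
    print([tuple(o.shape) for o in outputs])
    ```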

    1.2 Parameter-based MTSL

    It uses model parameters to relate the learning of different tasks.

    It can learn more accurate model parameters.

    It is more robust to outlier tasks than the feature-based model.

    1.2.1 Low-rank Approach

    Since tasks are assumed to be related, the parameter matrix \(W\) is likely to be low-rank.

    Similar tasks usually have similar model parameters, which makes \(W\) likely to be low-rank: the lower the rank of \(W\), the more linearly dependent the tasks' parameter vectors are. In other words, similar tasks have similar columns in the parameter matrix \(W\).
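
    One common way to encourage a low-rank \(W\) (a convex surrogate, not necessarily the exact formulation of every method in the survey) is to penalize its nuclear (trace) norm, the sum of its singular values; a tiny sketch:

    ```python
    import numpy as np

    def nuclear_norm(W):
        """Sum of singular values of W, a convex surrogate for rank(W)."""
        return np.linalg.norm(W, ord="nuc")

    # Two similar tasks whose parameter vectors are almost linearly dependent.
    w1 = np.array([1.0, 2.0, 3.0])
    w2 = 0.9 * w1 + 0.01                    # nearly a scaled copy of w1
    W = np.stack([w1, w2], axis=1)          # d x m parameter matrix, one column per task

    print(np.linalg.matrix_rank(W), nuclear_norm(W))
    ```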

    1.2.2 Task-Clustering Approach

    It aims to divide tasks into several clusters and all the tasks in a cluster are assumed to share identical or similar model parameters.

    There are several specific ways to implement this approach.

    TC algorithm

    The TC algorithm has the following steps (a sketch with stand-in models follows the list):

    1. separately learn under the single-task setting
    2. cluster the tasks based on the model parameters
    3. pool the training data of all the tasks in a task cluster
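
    A rough sketch of these three steps, using scikit-learn's LinearRegression and KMeans as stand-ins (the TC algorithm itself is not tied to these particular models):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    # Toy per-task datasets (X_i, y_i) for six tasks.
    tasks = [(rng.standard_normal((50, 4)), rng.standard_normal(50)) for _ in range(6)]

    # Step 1: learn each task separately under the single-task setting.
    single_models = [LinearRegression().fit(X, y) for X, y in tasks]
    params = np.array([np.append(mdl.coef_, mdl.intercept_) for mdl in single_models])

    # Step 2: cluster the tasks based on their learned model parameters.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(params)

    # Step 3: pool the training data of all tasks in each cluster and retrain.
    cluster_models = {}
    for c in set(labels):
        X_pool = np.vstack([X for (X, y), l in zip(tasks, labels) if l == c])
        y_pool = np.hstack([y for (X, y), l in zip(tasks, labels) if l == c])
        cluster_models[c] = LinearRegression().fit(X_pool, y_pool)

    print(labels, list(cluster_models))
    ```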

    Bayesian Neural Network

    BNN has a similar structure to the multi-layer neural network.

    BNN is based on the Gaussian mixture model in terms of model parameters.

    The Dirichlet process is widely used in Bayesian learning to do data clustering.

    Regularization

    It tries to decompose the model parameters \(W\) and then regularizes the decomposed components.

    1.2.3 Task-relation Learning Approach

    It directly learns pairwise task relations from the data. Task relations reflect task relatedness, so this approach needs quantitative measures of them, such as task similarities and task covariances.

    However, there are some notable difficulties: in real-world applications the task relations are hard to verify, and prior information about them is difficult to obtain.
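
    One well-known instantiation of this approach (multi-task relationship learning; mentioned here as an illustrative example rather than the only formulation) couples the tasks through a task covariance matrix \(\Omega\) via the regularizer \(\mathrm{tr}(W \Omega^{-1} W^T)\), which is small when the columns of \(W\) conform to the covariance structure encoded by \(\Omega\):

    ```python
    import numpy as np

    def task_relation_penalty(W, Omega):
        """Regularizer tr(W Omega^{-1} W^T); Omega is the m x m task covariance matrix."""
        return np.trace(W @ np.linalg.inv(Omega) @ W.T)

    d, m = 4, 3
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, m))         # one parameter column per task
    Omega = np.eye(m)                       # identity covariance: tasks treated as unrelated

    print(task_relation_penalty(W, Omega))
    ```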

    1.2.4 Dirty Approach

    In this approach, the parameter matrix \(W\) is assumed to decompose into two component matrices, each of which is regularized by a different type of sparsity.

    For example, it decomposes the parameter matrix as \(W = U + V\), and the objective function can be defined as minimizing the training loss together with the regularizers \(g(U) + h(V)\).

    \(U\) mainly identifies the task relatedness among tasks, while \(V\) is capable of capturing noise or outliers.
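
    A minimal sketch of this decomposition with one common choice of penalties (a row-wise group-sparse norm on \(U\) for the shared structure and an element-wise \(l_1\) norm on \(V\) for task-specific noise; the exact norms vary across dirty-model formulations):

    ```python
    import numpy as np

    def dirty_penalty(U, V, lam_u=1.0, lam_v=1.0):
        """g(U) + h(V): group sparsity on U, element-wise sparsity on V."""
        g = np.sum(np.linalg.norm(U, axis=1))   # l_{2,1} on U: shared feature structure
        h = np.sum(np.abs(V))                   # l_1 on V: isolated noisy / outlier entries
        return lam_u * g + lam_v * h

    d, m = 5, 3
    rng = np.random.default_rng(0)
    U = rng.standard_normal((d, m))
    V = np.zeros((d, m))
    V[2, 1] = 3.0                               # a single outlier entry for task 1
    W = U + V                                   # the dirty decomposition W = U + V

    print(dirty_penalty(U, V))
    ```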

    1.2.5 Multi-level Approach

    This approach is a generalization of the dirty approach: it decomposes the parameter matrix \(W\) into more than two component matrices, namely \(h\) component matrices \(\{W_i\}_{i=1}^{h}\) (typically with \(W = \sum_{i=1}^{h} W_i\)).

    It is capable of modeling more complex task structures than the dirty approach.

    1.3 Instance-based MTSL

    There are few works in this category.

    It seems parallel to the other two categories.

    2. Multi-task Unsupervised Learning (MTUL)

    MTUL mainly focuses on multi-task clustering, but not many studies on multi-task clustering exist, so this is an opportunity to explore it.

    My research domain is word representation learning, and word2vec is one of my baselines. Hence I am always thinking about how to enhance it by leveraging more knowledge.

    3. Multi-task Semi-supervised Learning (MTSSL)

    The core idea of semi-supervised learning is that unlabeled data are utilized to help improve the performance of supervised learning. MTSSL follows the same idea: unlabeled data are used to improve performance across multiple related tasks.

    There are two types of MTSSL: classification and regression.

    4. Multi-task Active Learning (MTAL)

    Active learning vs. MTAL:

    • Identical: both work with only a small amount of labeled data.
    • Difference: MTAL selects unlabeled instances that are informative for all the tasks instead of only one task.

    5. Multi-task Reinforcement Learning

    Motivation: when environments are similar, different reinforcement learning tasks can use similar policies to make decisions.

    6. Multi-task Online Learning

    Multi-task online learning models can handle a setting that traditional MTL models cannot: training data arrive in a sequential way.

    7. Multi-task Multi-view Learning

    Each data point can be described by different feature representations; each feature representation is called a view.

    Each multi-view data point is usually associated with a label.

    Application in Nature Language Processing

    Major NLP tasks include part-of-speech tagging, chunking, named entity recognition, semantic role labeling, language modeling, and identifying semantically related words.

    Conclusions

    Almost all deep MTL models simply share hidden layers across different tasks; this is very useful when all the tasks are very similar.

    Future work could focus on designing more flexible architectures that can tolerate dissimilar tasks.

    Bibliography

    [0] Zhang Y, Yang Q. An overview of multi-task learning [J]. National Science Review, 2018, 5(1): 30-43.

    [1] Li H. Deep learning for natural language processing: advantages and challenges [J]. National Science Review, 2018, 5(1): 24-26.

    [2] Xia Y. Theoretical and Experimental Research on Dual Learning [D]. University of Science and Technology of China, 2018.

    [3] Devlin J, Chang M-W, Lee K, Toutanova K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. https://arxiv.org/abs/1810.04805
