• [2017 ACL / 2018 ACL Long] Dialogue Systems - reading notes


    (For paper numbers and abstracts, see the companion posts "[2017 ACL] Dialogue Systems" and "[2018 ACL Long] Dialogue Systems". The number at the end of each bracketed paper title is its Google Scholar citation count as of 2019-01-21.)

    1. Domain Adaptation:

       challenges:

      (a) data shifts (synthetic -> live user data; stale -> current) cause a distribution mismatch between training and evaluation. -> 2017.1

      (b) a global model has to be re-estimated from scratch each time a new domain, with potentially new intents and slots, is added. -> 2017.4

     papers:

     2017.1 adversarial training 

         [Adversarial Adaptation of Synthetic or Stale Data. Young-Bum Kim. 14]
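The adversarial setup of 2017.1 can be sketched in a few lines: the feature extractor minimizes the task loss while maximizing the domain classifier's loss (a gradient-reversal layer flips the sign during backprop), pushing synthetic/stale and live/current features toward the same distribution. A minimal numpy sketch; the function names and the lambda weight are illustrative, not from the paper:

```python
import numpy as np

def softmax_xent(logits, label):
    """Cross-entropy for one example from raw logits and an integer label."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def dann_objective(task_logits, task_label, domain_logits, domain_label, lam=0.1):
    """Feature-extractor objective with the domain loss sign-flipped, as a
    gradient-reversal layer would do: fit the task, fool the domain critic."""
    return (softmax_xent(task_logits, task_label)
            - lam * softmax_xent(domain_logits, domain_label))
```

In a real model the sign flip acts on gradients during backprop; here it is shown directly on the loss for clarity.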

     2017.4  model(k + 1) = weighted_combination[model(1), ..., model(k)]

        [Domain Attention with an Ensemble of Experts. Young-Bum Kim. 17]
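The ensemble idea of 2017.4 amounts to attention over K frozen domain experts, so nothing is retrained from scratch when a domain is added. A toy numpy sketch; the shapes and names are assumptions, not the paper's API:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def combine_experts(feat, expert_preds, domain_embeds):
    """Weight the predictions of K existing domain experts by the current
    utterance's affinity to each source domain.
    feat:          (D,)   encoding of the current utterance
    expert_preds:  (K, L) label distributions from the K experts
    domain_embeds: (K, D) one learned embedding per source domain"""
    attn = softmax(domain_embeds @ feat)   # (K,) attention weight per expert
    return attn @ expert_preds             # (L,) combined label distribution
```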

    2. NLG: 

     challenges:

      (a) integrating a language model with affect information. -> 2017.2

      (b) referring-expression misunderstandings. -> 2017.5

      (c) neural encoder-decoder models in open-domain: generate dull and generic responses. -> 2017.8

      (d) multi-turn: lose relationships among utterances or important contextual information. -> 2017.11

      (e) automatically evaluating response quality in unstructured domains: existing metrics are biased and correlate very poorly with human judgements. -> 2017.12

      (f) deep latent variable models used in open-domain: highly randomized, leading to uncontrollable generated responses. -> 2017.14


      (g) generation that does not employ knowledge tends to produce short, general, and meaningless responses. -> 2018.L1

      (h) the encoder-decoder dialog model cannot output interpretable actions as traditional systems do, which hinders humans from understanding its generation process. -> 2018.L6

      (i) translating natural language questions -> structured queries: further improvement is hard. -> 2018.L8

     papers:

     2017.2  language model + affect info

        [Affect-LM: A Neural Language Model for Customizable Affective Text Generation. 27]

     2017.5  referring-expression misunderstanding correction - algorithm: contrastive focus

        [Generating Contrastive Referring Expressions. 0]

     2017.8  open-domain - Framework - conditional variational autoencoders

        pre: word-level decoder

        cur: discourse-level encoder

        [Learning Discourse-level Diversity for Neural Dialog Models using Conditional Variational Autoencoders. CMU. 69]    
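The discourse-level CVAE of 2017.8 trains a recognition network q(z | context, response) against a context-conditioned prior p(z | context); the latent z captures response-level diversity. The KL regularizer below is the standard closed form for diagonal Gaussians used in such models, not code from the paper:

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( q || p ) for two diagonal Gaussians, summed over dimensions --
    the term keeping the recognition network close to the prior network."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )
```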

     2017.11   multi-turn response selection - sequential matching network (SMN)

        pre: concatenates utterances in context

          matches a response with a highly abstract context vector

          => lose relationships among utterances or important contextual information 

        current:  matches a response with each utterance on multiple levels of granularity

            distills important matching information -> vector -> conv + pooling

            accumulate vector -> RNN (models relationships among utterances)

            final matching score (computed from the RNN hidden states)

        [Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-based Chatbots. Beihang. Nankai. Microsoft. 48]
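The SMN pipeline above can be miniaturized: per-utterance word-word similarity pooled into a matching vector (max/mean pooling stands in for the paper's convolution + pooling), then a small RNN accumulating those vectors in order. The fixed weights below are toy stand-ins for learned parameters:

```python
import numpy as np

def match_utterance(utt, resp):
    """Word-word similarity matrix for one (utterance, response) pair,
    distilled to a small matching vector by max/mean pooling."""
    sim = utt @ resp.T                        # (len_u, len_r)
    return np.array([sim.max(), sim.mean()])

def smn_score(context, resp):
    """Match the response against *each* utterance, then accumulate the
    matching vectors in order with a tiny tanh RNN, so cross-utterance
    relationships survive (toy fixed weights, illustrative only)."""
    Wh, Wx, v = 0.5 * np.eye(4), 0.1 * np.ones((4, 2)), np.ones(4)
    h = np.zeros(4)
    for utt in context:                       # utterance order matters
        h = np.tanh(Wh @ h + Wx @ match_utterance(utt, resp))
    return 1.0 / (1.0 + np.exp(-(v @ h)))     # matching score in (0, 1)
```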

     2017.12  auto eval Metric - ADEM 

        [Towards an Automatic Turing Test: Learning to Evaluate Dialogue Responses. 47]

     2017.14  Framework - generation conditioned on specific attributes (manually set + automatically detected) - both speakers' dialog states modeled -> personal features

        [A Conditional Variational Framework for Dialog Generation. 20]  

     2017.16  Open-domain - Engine - generation (info retrieval + Seq2Seq) - AliMe chat

        [AliMe Chat: A Sequence to Sequence and Rerank based Chatbot Engine. 27]


      2018.L1  knowledge-guided generation - neural knowledge diffusion (NKD) model - handles both facts + chit-chat

         match the relevant facts for the input utterance + diffuse them to similar entities

         [Knowledge Diffusion for Neural Dialogue Generation. 3]

     2018.L6  encoder-decoder model - interpretability - unsupervised discrete sentence representation learning

         DI-VAE + DI-VST - discover interpretable semantics via either auto-encoding or context prediction

         [Unsupervised Discrete Sentence Representation Learning for Interpretable Neural Dialog Generation. 8]
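Training a *discrete* sentence code with backprop, as in the DI-VAE line of work, typically relies on the Gumbel-Softmax relaxation: add Gumbel noise to the logits, divide by a temperature, and softmax, so low temperatures give near-one-hot samples. A numpy sketch of a single relaxed sample (illustrative, not the paper's code):

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """One relaxed sample from a categorical latent; at small tau the
    result is close to one-hot while staying differentiable in logits."""
    rng = np.random.default_rng(0) if rng is None else rng
    g = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    z = (logits + g) / tau
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()
```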

     2018.L8  Framework - DialSQL + human intelligence

          identify potential errors in the generated SQL -> ask the user for validation -> use the feedback to revise the query

         [DialSQL: Dialogue Based Structured Query Generation. 4]

    3. Task + Non-task hybrid 

     2017.3  whether to have a chat - dataset

        [Chat Detection in Intelligent Assistant: Combining Task-oriented and Non-task-oriented Spoken Dialogue Systems. McGill University. Montreal. 7] 

    4. E2E

     challenges: 

       (a) data-intensive -> 2017.6

       (b) task-oriented: interacting with a KB -> previously: issuing a symbolic query to the KB to retrieve entries based on their attributes. -> 2017.13

        disadvantages:

          (1) such symbolic operations break the differentiability of the system

          (2) they prevent end-to-end training of neural dialogue agents


        (c) only consider user semantic inputs and under-utilize other user information. -> 2018.L4

        (d) incorporating knowledge bases. -> 2018.L7

     papers:

     2017.6  Framework - HCNs: RNN + knowledge (software / system action templates) - reduces training data - optimized with supervised learning + RL - bAbI dialog dataset - 2 commercial dialog systems

        [Hybrid Code Networks: practical and efficient end-to-end dialog control with supervised and reinforcement learning. Microsoft Research. 87]

     2017.13  KB-InfoBot - E2E - task-oriented - multi-turn - interacts with a KB - presents an agent

        replacing symbolic queries ->  induced "soft" posterior distribution over the KB

        integrate the soft retrieval process + RL

        [Towards End-to-End Reinforcement Learning of Dialogue Agents for Information Access. CMU. Microsoft. National Taiwan University. 82]
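The "soft" lookup replacing the symbolic query can be sketched directly: score each KB row by the agent's belief over every constrained slot and normalize, which is differentiable end to end. The toy schema below is invented for illustration:

```python
import numpy as np

def soft_kb_posterior(kb, slot_beliefs):
    """Induced posterior over KB rows in the spirit of KB-InfoBot: multiply
    the belief probability of each row's slot values instead of issuing a
    hard symbolic SELECT.
    kb:           list of {slot: value} rows
    slot_beliefs: {slot: {value: probability}}"""
    scores = np.array([
        np.prod([slot_beliefs[s].get(row.get(s), 0.0) for s in slot_beliefs])
        for row in kb
    ])
    return scores / scores.sum()
```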


      2018.L4  multimodal info (supervised + RL) - user-adaptive - reduces dialog length + improves success rate

         [Sentiment Adaptive End-to-End Dialog Systems. 2]

     2018.L7  Mem2Seq - first neural generative model that combines [multi-hop attention over memories + the pointer-network idea]

         [Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems. HKUST. 8]
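One decode step of the copy-or-generate mechanism can be sketched as follows (single memory hop shown; the sentinel entry, names, and shapes are illustrative, not the paper's exact formulation):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def mem2seq_step(query, memory, mem_tokens, vocab_logits, sentinel_idx):
    """One decode step: the pointer attends over memory entries (dialog
    history + KB triples); if it selects the sentinel entry, fall back to
    generating from the vocabulary, otherwise copy the pointed-to token."""
    ptr = softmax(memory @ query)        # pointer distribution over memory
    top = int(ptr.argmax())
    if top == sentinel_idx:
        return "generate", softmax(vocab_logits)
    return "copy", mem_tokens[top]
```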

    5. NLU

     challenges:

      (a) no systematic comparison to analyze how to use context effectively. -> 2017.15

     papers:

     2017.7  identify discussion points + discourse relations

        [Joint Modeling of Content and Discourse Relations in Dialogues. 7] 

     2017.15  context utilization evaluation - empirical study comparing models - variant: weights context vectors by context-query relevance

        [How to Make Contexts More Useful? An Empirical Study to Context-Aware Neural Conversation Models. 18]
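The relevance-weighting variant of 2017.15 can be sketched as attention over context utterance vectors, so irrelevant history contributes less. The softmax-over-cosine choice below is an assumption for illustration, not the paper's exact scoring function:

```python
import numpy as np

def weighted_context(context_vecs, query_vec):
    """Weight each context utterance vector by its (softmaxed cosine)
    relevance to the current query before summing."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    rel = np.array([cos(c, query_vec) for c in context_vecs])
    w = np.exp(rel) / np.exp(rel).sum()
    return w @ np.stack(context_vecs), w
```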

    6. Dialogue state tracking

      challenges:

      (a) have difficulty scaling to larger, more complex dialogue domains. -> 2017.10

        (1) Spoken Language Understanding models that require large amounts of annotated training data

        (2) hand-crafted lexicons for capturing some of the linguistic variation in users' language.

      (b) handling unknown slot values -> previously: models assume predefined candidate lists and so cannot output unknown values; especially acute in E2E systems, where an SLU module is absent. -> 2018.L10

     papers:

     2017.10  Framework - Neural Belief Tracking (NBT) - representation learning (composes pre-trained word vectors into utterance and context representations)

         [Neural Belief Tracker: Data-Driven Dialogue State Tracking. 63]


      2018.L9  Global-Locally Self-Attentive Dialogue State Tracker (GLAD)

         global modules: shares parameters between estimators for different types (called slots) of dialogue states

         local modules: learn slot-specific features

         [Global-Locally Self-Attentive Encoder for Dialogue State Tracking. 0]
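GLAD's global-local split reduces to mixing a transform shared by all slots with a per-slot transform via a learned gate. The dense layers below are toy stand-ins for the paper's self-attentive recurrent encoders:

```python
import numpy as np

def glad_encode(x, W_global, W_local, beta):
    """Mix a *global* transform (weights shared across all slots) with a
    *local* per-slot transform via a per-slot gate beta in [0, 1]."""
    return beta * np.tanh(W_local @ x) + (1 - beta) * np.tanh(W_global @ x)
```

The global module lets rare slots borrow statistics from frequent ones; the local module still captures slot-specific features.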

     2018.L10  E2E + pointer network (PtrNet)

         [An End-to-end Approach for Handling Unknown Slot Values in Dialogue State Tracking. 2]
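Pointing instead of classifying is what lets 2018.L10 emit values outside any candidate list: the tracker points at positions of the user utterance and copies the token there. A toy sketch (the encoder states and slot query are made up for illustration):

```python
import numpy as np

def point_to_value(enc_states, tokens, query):
    """Pointer-style slot filling: score every utterance position against a
    slot query and copy the best-scoring token verbatim, so values never
    seen in training can still be extracted."""
    att = enc_states @ query
    att = att - att.max()
    p = np.exp(att) / np.exp(att).sum()   # pointer distribution over positions
    return tokens[int(p.argmax())], p
```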

    7. Framework

     challenges:

        (a) pipeline: introduces architectural complexity and fragility. -> 2018.L2

     papers:

     2018.L2  Seq2Seq + optimization (supervised / RL) - Task

        design text spans named belief spans -> track dialogue beliefs -> allow task-oriented systems to be modeled as Seq2Seq

        Two-Stage CopyNet instantiation -> fewer parameters, shorter training time + beats the pipeline on large datasets + handles OOV

        [Sequicity: Simplifying Task-oriented Dialogue Systems with Single Sequence-to-Sequence Architectures. National University of Singapore. Fudan. JD. 9]

    8. RL

     challenges:

        (a) Training a task-completion dialogue agent via reinforcement learning (RL) is costly: requires many interactions with real users. 

        (b) using a user simulator instead: lacks the language complexity of real users + introduces biases

     papers:

     2018.L3  RL - policy learning - Deep Dyna-Q

         first deep RL framework that integrates planning for task-completion dialogue policy learning 

         world model update with real user experience + agent opt using real and simulated experience

         [Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning. 3] 
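The planning loop DDQ builds on is classic Dyna-Q: each real step both updates the policy and trains a world model, and the world model then generates extra simulated updates, stretching every costly real interaction. The toy tabular version below (a tiny chain MDP with invented hyperparameters) shows only the loop structure; DDQ replaces the tables with neural networks and the chain with real users:

```python
import random

def dyna_q(n_states=5, episodes=30, planning_steps=10, seed=0):
    """Tabular Dyna-Q on a toy chain (reward 1 at the last state): after
    every *real* step, run `planning_steps` simulated updates drawn from
    the learned world model."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(n_states) for a in (0, 1)}  # 0=left, 1=right
    model = {}                                                  # (s, a) -> (r, s')
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            a = rng.choice((0, 1)) if rng.random() < 0.3 else \
                max((0, 1), key=lambda i: Q[(s, i)])            # eps-greedy
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
            model[(s, a)] = (r, s2)                             # learn world model
            for _ in range(planning_steps):                     # planning updates
                ps, pa = rng.choice(list(model))
                pr, ps2 = model[(ps, pa)]
                Q[(ps, pa)] += 0.5 * (pr + 0.9 * max(Q[(ps2, 0)], Q[(ps2, 1)]) - Q[(ps, pa)])
            s = s2
    return Q
```

In DDQ the world model is additionally refreshed with real user experience, so planning stays grounded as the policy improves.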

    9. Chit-chat

     challenges:

      (a)  lack specificity

      (b) do not display a consistent personality. -> 2018.L5

     papers:

       2018.L5  condition on profile info [i. own persona (given) + ii. partner's persona] - trained to engage the partner with personal topics -> the dialog can then be used to predict the profile

           [Personalizing Dialogue Agents: I have a dog, do you have pets too? 31]

    10. Others:

     challenges: 

      (a) open-ended dialogue state. -> 2017.9

     papers:

     2017.9 Symmetric Collaborative Dialogue - two agents collaborate to achieve a common goal

        [Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings. 21] 

  • Original post: https://www.cnblogs.com/shiyublog/p/10301043.html