Adversarial Learning for Neural Dialogue Generation

  • Updated: 2017-04-19
  • Authors: Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, Dan Jurafsky

【Abstract】

In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning (RL) problem in which we jointly train two systems: a generative model that produces response sequences, and a discriminator (analogous to the human evaluator in the Turing test) that distinguishes human-generated dialogues from machine-generated ones. The outputs of the discriminator are then used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues.
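The training loop the abstract describes is a policy-gradient (REINFORCE) setup: sample a response from the generator, score it with the discriminator, and feed that score back as the reward. Below is a minimal PyTorch sketch of this idea, not the authors' implementation; the toy vocabulary, model sizes, random stand-in data, and the omission of conditioning on the input message are all simplifying assumptions.

```python
# Minimal sketch (not the authors' code) of adversarial REINFORCE for
# dialogue generation: the generator samples a response, the discriminator
# scores it as human vs. machine, and that score is the REINFORCE reward.
# All sizes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, MAX_LEN = 100, 32, 64, 10  # toy sizes (assumptions)

class Generator(nn.Module):
    """GRU language model that samples a response token by token."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def sample(self, batch_size):
        tok = torch.zeros(batch_size, 1, dtype=torch.long)  # <bos> = 0
        h, toks, logps = None, [], []
        for _ in range(MAX_LEN):
            x, h = self.rnn(self.emb(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(x[:, -1]))
            tok = dist.sample().unsqueeze(1)
            toks.append(tok)
            logps.append(dist.log_prob(tok.squeeze(1)))
        return torch.cat(toks, 1), torch.stack(logps, 1)  # (B, T), (B, T)

class Discriminator(nn.Module):
    """GRU encoder that predicts P(response is human-generated)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.cls = nn.Linear(HID, 1)

    def forward(self, seq):
        _, h = self.rnn(self.emb(seq))
        return torch.sigmoid(self.cls(h[-1])).squeeze(-1)  # (B,)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

human = torch.randint(1, VOCAB, (8, MAX_LEN))  # stand-in for real dialogue data

# --- Discriminator step: human responses are positives, samples negatives.
fake, _ = G.sample(8)
d_loss = F.binary_cross_entropy(D(human), torch.ones(8)) + \
         F.binary_cross_entropy(D(fake.detach()), torch.zeros(8))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# --- Generator step (REINFORCE): reward = discriminator's "human" score.
fake, logps = G.sample(8)
reward = D(fake).detach()                      # (B,), no gradient through D
baseline = reward.mean()                       # simple variance-reduction baseline
g_loss = -((reward - baseline).unsqueeze(1) * logps).mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The paper itself goes further, for example assigning rewards to partially generated sequences via Monte Carlo rollouts and interleaving supervised (teacher-forced) updates, which this sketch omits for brevity.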

In addition to adversarial training, we describe a model for adversarial evaluation that uses success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls. Experimental results on several metrics, including adversarial evaluation, demonstrate that the adversarially trained system generates higher-quality responses than previous baselines.
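As a companion to the training sketch above, a hypothetical adversarial-evaluation helper could report the fraction of machine-generated responses that a trained evaluator classifies as human (the paper's adversarial success measure). The function name and the reuse of D and fake from the previous sketch are assumptions for illustration.

```python
# Hypothetical adversarial-evaluation helper, reusing the imports, the
# Discriminator D, and the sampled batch `fake` from the sketch above.
def adversarial_success(evaluator, machine_responses):
    """Fraction of machine responses the evaluator labels as human."""
    with torch.no_grad():
        scores = evaluator(machine_responses)    # P(human) for each response
    return (scores > 0.5).float().mean().item()  # higher = harder to distinguish

print("AdverSuc:", adversarial_success(D, fake))
```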

【Published】 2017-02-22

【Source】 https://arxiv.org/abs/1701.06547

Tags: seq2seq
