A Task-Oriented Dialog Model with Task-Progressive and Policy-Aware Pre-training


Abstract

Pre-trained conversation models (PCMs) have made promising progress in recent years. However, existing PCMs for task-oriented dialog (TOD) are insufficient for capturing the sequential nature of TOD-related tasks and for learning dialog policy information. To alleviate these problems, this paper proposes a task-progressive PCM with two policy-aware pre-training tasks. The model is pre-trained in three stages in which TOD-related tasks are progressively employed according to the task logic of the TOD system. A global policy consistency task is designed to capture the sequential relation of multi-turn dialog policy, and an act-based contrastive learning task is designed to capture similarities among samples with the same dialog policy. Our model achieves better results on both the MultiWOZ and In-Car end-to-end dialog modeling benchmarks with only 18% of the parameters and 25% of the pre-training data of the previous state-of-the-art PCM, GALAXY. We make our code and data publicly available (https://github.com/lucenzhong/TPLD).
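The act-based contrastive learning task described above can be sketched as a supervised contrastive objective in which turns sharing the same dialog act are treated as positives. The sketch below is an illustration under that assumption, not the paper's actual implementation; the function name, embedding shapes, and temperature value are all hypothetical.

```python
import math

def act_contrastive_loss(embeddings, act_labels, temperature=0.1):
    """Hypothetical sketch of an act-based contrastive loss: turn embeddings
    with the same dialog-act label are pulled together, all others pushed
    apart (a standard supervised contrastive formulation)."""
    def norm(v):
        m = math.sqrt(sum(x * x for x in v))
        return [x / m for x in v]

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    z = [norm(v) for v in embeddings]  # L2-normalize so dot product = cosine sim
    n = len(z)
    losses = []
    for i in range(n):
        # positives: other samples annotated with the same dialog act
        pos = [j for j in range(n) if j != i and act_labels[j] == act_labels[i]]
        if not pos:
            continue
        # denominator sums similarities to every other sample in the batch
        denom = sum(math.exp(dot(z[i], z[j]) / temperature)
                    for j in range(n) if j != i)
        loss_i = -sum(dot(z[i], z[j]) / temperature - math.log(denom)
                      for j in pos) / len(pos)
        losses.append(loss_i)
    return sum(losses) / len(losses)
```

With this loss, a batch whose same-act embeddings are already clustered scores lower than one where the act labels cut across clusters, which is the training signal the task relies on.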

Citation (APA)

Zhong, L., Lu, H., Yuan, C., Wang, X., Sun, J., Zeng, K., & Wan, G. (2023). A Task-Oriented Dialog Model with Task-Progressive and Policy-Aware Pre-training. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 14302 LNAI, pp. 3–15). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-44693-1_1
