Annotation Inconsistency and Entity Bias in MultiWOZ

Abstract

MultiWOZ (Budzianowski et al., 2018) is one of the most popular multi-domain task-oriented dialog datasets, containing 10K+ annotated dialogs covering eight domains. It has been widely accepted as a benchmark for various dialog tasks, e.g., dialog state tracking (DST), natural language generation (NLG), and end-to-end (E2E) dialog modeling. In this work, we identify an overlooked issue with dialog state annotation inconsistencies in the dataset, where a slot type is tagged inconsistently across similar dialogs, leading to confusion for DST modeling. We propose an automated correction for this issue, which is present in 70% of the dialogs. Additionally, we notice that there is significant entity bias in the dataset (e.g., "cambridge" appears in 50% of the destination cities in the train domain). The entity bias can potentially lead to named entity memorization in generative models, which may go unnoticed as the test set suffers from a similar entity bias as well. We release a new test set with all entities replaced with unseen entities. Finally, we benchmark joint goal accuracy (JGA) of the state-of-the-art DST baselines on these modified versions of the data. Our experiments show that the annotation inconsistency corrections lead to 7-10% improvement in JGA. On the other hand, we observe a 29% drop in JGA when models are evaluated on the new test set with unseen entities.
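To ground the two quantitative observations above, the following is a minimal sketch (not the authors' released tooling) of how one might (a) measure entity bias for a single slot by counting value frequencies in MultiWOZ dialog-state annotations, and (b) compute joint goal accuracy (JGA). The data layout assumed here (a data.json file keyed by dialog ID, with per-turn metadata states holding domain → semi → slot values) follows the public MultiWOZ 2.x format; the file path and field names are illustrative and should be checked against the specific release in use.

```python
import json
from collections import Counter

def slot_value_distribution(dialogs, domain="train", slot="destination"):
    """Count how often each value fills a given slot across all annotated
    dialog states (the per-turn `metadata` field in MultiWOZ 2.x)."""
    counts = Counter()
    for dialog in dialogs.values():
        for turn in dialog.get("log", []):
            state = turn.get("metadata", {})  # empty for user turns
            value = state.get(domain, {}).get("semi", {}).get(slot, "")
            if value and value != "not mentioned":
                counts[value.lower()] += 1
    return counts

def joint_goal_accuracy(pred_states, gold_states):
    """JGA: fraction of turns whose entire predicted dialog state
    (all slot-value pairs) exactly matches the gold annotation."""
    assert len(pred_states) == len(gold_states)
    exact = sum(p == g for p, g in zip(pred_states, gold_states))
    return exact / max(len(gold_states), 1)

if __name__ == "__main__":
    with open("data.json") as f:  # MultiWOZ annotation file; path is illustrative
        dialogs = json.load(f)
    dist = slot_value_distribution(dialogs, domain="train", slot="destination")
    total = sum(dist.values())
    for value, n in dist.most_common(5):
        # A heavily skewed head (e.g., "cambridge" near 50%) signals entity bias.
        print(f"{value}: {n} ({n / total:.1%})")
```

Counting values like this makes the bias visible directly in the training annotations; the same JGA routine can then be run on both the original test set and the entity-replaced test set to expose memorization of seen entities.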

Citation (APA)

Qian, K., Beirami, A., Lin, Z., De, A., Geramifard, A., Yu, Z., & Sankar, C. (2021). Annotation Inconsistency and Entity Bias in MultiWOZ. In SIGDIAL 2021 - 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, Proceedings of the Conference (pp. 326–337). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.sigdial-1.35
