An Analysis of Dataset Overlap on Winograd-Style Tasks

Citations: 13
Mendeley readers: 62

Abstract

The Winograd Schema Challenge (WSC) and variants inspired by it have become important benchmarks for common-sense reasoning (CSR). Model performance on the WSC has quickly progressed from chance-level to near-human using neural language models trained on massive corpora. In this paper, we analyze the effects of varying degrees of overlap between these training corpora and the test instances in WSC-style tasks. We find that a large number of test instances overlap considerably with the corpora on which state-of-the-art models are (pre)trained, and that a significant drop in classification accuracy occurs when we evaluate models on instances with minimal overlap. Based on these results, we develop the KNOWREF-60K dataset, which consists of over 60k pronoun disambiguation problems scraped from web data. KNOWREF-60K is the largest corpus to date for WSC-style common-sense reasoning and exhibits a significantly lower proportion of overlaps with current pretraining corpora.
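The abstract describes measuring overlap between pretraining corpora and WSC-style test instances. The paper's exact metric is not given here; the following is a minimal hypothetical sketch of one common way such overlap is quantified, as the fraction of a test instance's word n-grams that also occur in the corpus (function and parameter names are illustrative, not from the paper):

```python
def ngrams(tokens, n):
    """Return the set of all contiguous n-grams of a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_score(instance, corpus_text, n=3):
    """Fraction of the instance's n-grams that also appear in the corpus."""
    inst = ngrams(instance.lower().split(), n)
    corp = ngrams(corpus_text.lower().split(), n)
    if not inst:
        return 0.0
    return len(inst & corp) / len(inst)

# Toy example: a WSC-style sentence versus a tiny "pretraining corpus".
corpus = "the trophy does not fit in the brown suitcase because it is too big"
test = "the trophy does not fit into the suitcase because it is too large"
print(round(overlap_score(test, corpus), 2))
```

Instances scoring near zero under a measure like this would correspond to the "minimal overlap" subset on which the paper reports a significant accuracy drop.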

Citation (APA)

Emami, A., Trischler, A., Suleman, K., & Cheung, J. C. K. (2020). An Analysis of Dataset Overlap on Winograd-Style Tasks. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 5855–5865). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.515

Readers' Seniority

- PhD / Postgrad / Masters / Doc: 14 (64%)
- Researcher: 5 (23%)
- Lecturer / Post doc: 3 (14%)

Readers' Discipline

- Computer Science: 20 (77%)
- Linguistics: 4 (15%)
- Neuroscience: 1 (4%)
- Social Sciences: 1 (4%)
