Temporal-aware Language Representation Learning From Crowdsourced Labels


Abstract

Learning effective language representations from crowdsourced labels is crucial for many real-world machine learning tasks. A challenging aspect of this problem is that the quality of crowdsourced labels suffers from high intra- and inter-observer variability. Since high-capacity deep neural networks can easily memorize all disagreements among crowdsourced labels, directly applying existing supervised language representation learning algorithms may yield suboptimal solutions. In this paper, we propose TACMA, a temporal-aware language representation learning heuristic for crowdsourced labels with multiple annotators. The proposed approach (1) explicitly models the intra-observer variability with an attention mechanism; and (2) computes and aggregates per-sample confidence scores from multiple workers to address inter-observer disagreements. The heuristic is extremely easy to implement, in around 5 lines of code. We evaluate it on four synthetic and four real-world data sets. The results show that our approach outperforms a wide range of state-of-the-art baselines in terms of prediction accuracy and AUC. To encourage reproducible research, we make our code publicly available at https://github.com/CrowdsourcingMining/TACMA.
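The abstract's second ingredient, aggregating per-sample confidence scores across workers to handle inter-observer disagreement, can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); it assumes binary labels and uses majority-vote agreement as a hypothetical confidence measure, which then down-weights the training loss on contentious samples:

```python
import numpy as np

def aggregate_confidence(labels):
    """Per-sample confidence = fraction of annotators agreeing with the majority.

    labels: (n_samples, n_annotators) array of binary crowdsourced labels.
    Returns the majority-vote label and a confidence score per sample.
    """
    n_annotators = labels.shape[1]
    pos = labels.sum(axis=1)                      # votes for class 1
    majority = (pos * 2 >= n_annotators).astype(int)
    agree = np.where(majority == 1, pos, n_annotators - pos)
    return majority, agree / n_annotators

def confidence_weighted_loss(per_sample_loss, confidence):
    """Down-weight losses on samples where annotators disagree."""
    return float(np.mean(per_sample_loss * confidence))

# Three samples labeled by three workers: unanimous, 1-vs-2, 1-vs-2.
labels = np.array([[1, 1, 1],
                   [1, 0, 0],
                   [0, 0, 1]])
y, conf = aggregate_confidence(labels)
# y -> [1, 0, 0]; conf -> [1.0, 0.667, 0.667]
```

A sample with unanimous annotations contributes its full loss, while a 2-of-3 split contributes only two thirds of it, which is one simple way to keep a high-capacity network from memorizing annotator disagreements.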



Citation (APA)

Hao, Y., Zhai, X., Ding, W., & Liu, Z. (2021). Temporal-aware Language Representation Learning From Crowdsourced Labels. In RepL4NLP 2021 - 6th Workshop on Representation Learning for NLP, Proceedings of the Workshop (pp. 47–56). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.repl4nlp-1.6

