Forecast Aggregation via Peer Prediction


Abstract

Crowdsourcing enables the solicitation of forecasts on a variety of prediction tasks from distributed groups of people. How to aggregate the solicited forecasts, which may vary in quality, into an accurate final prediction remains a challenging yet critical question. Studies have found that weighting expert forecasts more heavily in aggregation can improve the accuracy of the aggregated prediction. However, this approach usually requires access to the historical performance data of the forecasters, which are often not available. In this paper, we study the problem of aggregating forecasts without having historical performance data. We propose using peer prediction methods, a family of mechanisms initially designed to truthfully elicit private information in the absence of ground truth verification, to assess the expertise of forecasters, and then using this assessment to improve forecast aggregation. We evaluate our peer-prediction-aided aggregators on a diverse collection of 14 human forecast datasets. Compared with a variety of existing aggregators, our aggregators achieve a significant and consistent improvement in aggregation accuracy as measured by the Brier score and the log score. Our results reveal the effectiveness of identifying experts to improve aggregation even without historical data.
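The core idea the abstract describes, weighting individual forecasts by an estimate of each forecaster's expertise and scoring the result with the Brier score, can be illustrated with a minimal sketch. This is not the paper's actual peer prediction mechanism; the expertise weights here are hypothetical stand-ins for what such a mechanism would estimate without ground truth.

```python
import numpy as np

def brier_score(forecast, outcome):
    """Squared error between a probability forecast and the 0/1 outcome.

    Lower is better; a perfect forecast scores 0.
    """
    return float((forecast - outcome) ** 2)

def weighted_aggregate(forecasts, weights):
    """Weighted average of individual probability forecasts for a binary event."""
    forecasts = np.asarray(forecasts, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.dot(weights, forecasts) / weights.sum())

# Three forecasters report P(event) for an event that in fact occurs (outcome = 1).
forecasts = [0.9, 0.6, 0.2]
outcome = 1

# Unweighted mean vs. a weighting that favors the first forecaster,
# who is (hypothetically) judged more expert. In the paper's setting,
# these weights would come from a peer prediction mechanism rather
# than from historical performance data.
uniform = weighted_aggregate(forecasts, [1, 1, 1])  # (0.9 + 0.6 + 0.2) / 3
expert = weighted_aggregate(forecasts, [3, 1, 1])   # (2.7 + 0.6 + 0.2) / 5

print(brier_score(uniform, outcome))  # ≈ 0.188
print(brier_score(expert, outcome))   # 0.09 — expert weighting scores better here
```

The example shows only why expert weighting can help when the identified expert is in fact better calibrated; the paper's contribution is estimating those weights via peer prediction when no historical data exist.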


Citation (APA)

Wang, J., Liu, Y., & Chen, Y. (2021). Forecast Aggregation via Peer Prediction. In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (Vol. 9, pp. 131–142). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/hcomp.v9i1.18946
