A generalizable method for validating the utility of process analytics with usability assessments

Abstract

Crowdsourcing systems rely on assessments of individual performance over time to assign tasking that improves aggregate performance. We call these combinations of performance assessment and task allocation process analytics. As crowdsourcing advances to include greater levels of task complexity, validating process analytics, which requires replicable behaviors across crowds, becomes more challenging and urgent. Here, we present a work-in-progress design for validating process analytics using integrated usability assessments, which we view as a sufficient proxy for crowdsourced problem-solving. Using the process of developing a crowdsourcing system itself as a use case, we begin by distributing usability assessments to two independent, equally sized, and otherwise comparable subgroups of a crowd. The first subgroup (control) uses a conventional method of usability assessment; the second (treatment), a distributed method. Differences in subgroup performance determine the degree to which the process analytics for the distributed method vary about those for the conventional method.
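
To make the control/treatment comparison concrete, here is a minimal sketch in Python of how one might compare subgroup performance on a usability assessment. This is not the authors' implementation: the per-participant scores, subgroup sizes, and the choice of a Welch two-sample t-test are all illustrative assumptions.

    # Minimal sketch (illustrative only): compare usability-assessment scores
    # between a control subgroup (conventional method) and a treatment
    # subgroup (distributed method). The scores, sizes, and statistical test
    # below are assumptions for illustration, not details from the paper.
    from statistics import mean, stdev
    from scipy import stats

    # Hypothetical per-participant usability scores (e.g., on a 0-100 scale).
    control = [68.0, 72.5, 70.0, 75.0, 66.5, 71.0, 69.5, 73.0]    # conventional
    treatment = [70.5, 74.0, 69.0, 76.5, 71.5, 73.5, 68.5, 75.0]  # distributed

    # Summarize each subgroup's performance.
    for name, scores in (("control", control), ("treatment", treatment)):
        print(f"{name}: mean={mean(scores):.1f}, sd={stdev(scores):.1f}, n={len(scores)}")

    # Welch two-sample t-test: quantify how much subgroup performance differs.
    t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

Under this sketch's assumptions, the size of the observed difference indicates how far the distributed method's process analytics vary from the conventional baseline.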

Cite (APA)

Mullins, R., Weiss, C., Fegley, B. D., & Ford, B. (2018). A generalizable method for validating the utility of process analytics with usability assessments. In Communications in Computer and Information Science (Vol. 850, pp. 263–267). Springer Verlag. https://doi.org/10.1007/978-3-319-92270-6_37
