Crowdsourcing systems rely on assessments of individual performance over time to assign tasks that improve aggregate performance. We call these combinations of performance assessment and task allocation process analytics. As crowdsourcing advances to include greater levels of task complexity, validating process analytics, which requires replicable behaviors across crowds, becomes more challenging and urgent. Here, we present a work-in-progress design for validating process analytics using integrated usability assessments, which we view as a sufficient proxy for crowdsourced problem-solving. Using the process of developing a crowdsourcing system itself as a use case, we begin by distributing usability assessments to two independent, equally sized, and otherwise comparable subgroups of a crowd. The first subgroup (control) uses a conventional method of usability assessment; the second (treatment), a distributed method. Differences in subgroup performance determine the degree to which the process analytics for the distributed method vary from those of the conventional method.
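The control-vs-treatment comparison described above could be sketched as follows. This is a minimal illustrative example, not the authors' actual analytics: the scores, the subgroup data, and the relative-difference metric are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of comparing two equally sized subgroups on a
# usability-assessment performance score. The data and the metric are
# illustrative assumptions, not the method from the paper.
from statistics import mean

def relative_difference(control_scores, treatment_scores):
    """Relative deviation of the treatment subgroup's mean performance
    (distributed method) from the control subgroup's mean
    (conventional method)."""
    c, t = mean(control_scores), mean(treatment_scores)
    return (t - c) / c

# Example: task-completion scores in [0, 1] for the two subgroups.
control = [0.82, 0.75, 0.90, 0.78]    # conventional usability assessment
treatment = [0.80, 0.77, 0.88, 0.79]  # distributed usability assessment

print(round(relative_difference(control, treatment), 4))
```

A small relative difference would suggest the distributed method's process analytics track the conventional baseline closely; in practice one would also apply a significance test rather than compare means alone.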
CITATION STYLE
Mullins, R., Weiss, C., Fegley, B. D., & Ford, B. (2018). A generalizable method for validating the utility of process analytics with usability assessments. In Communications in Computer and Information Science (Vol. 850, pp. 263–267). Springer Verlag. https://doi.org/10.1007/978-3-319-92270-6_37