A crowdsourcing quality control model for tasks distributed in parallel

Shaojian Zhu, Shaun Kane, Jinjuan Feng, Andrew Sears

Research output: Contribution to conference › Paper › peer-review

14 Scopus citations

Abstract

Quality control for crowdsourcing systems has been identified as a significant challenge [2]. We propose a data-driven model for quality control in crowdsourcing systems, with the goal of assessing the quality of each individual contribution to parallel distributed tasks (i.e., tasks on which multiple people may work at the same time). The model is initialized with a data training process that provides rough estimates for several quality-related performance measures (e.g., time spent on a task). These initial estimates are combined with observations of the results produced by workers to estimate the quality of each individual contribution. We conducted a study evaluating the model in the context of improving speech recognition-based text correction using MTurk services. Results indicate that the model accurately predicts quality for more than 92% of the non-negative (useful) contributions and 96% of the negative (useless) ones.
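The abstract only outlines the model at a high level. As a rough illustration, the sketch below shows one way a baseline for a quality-related measure (time spent on a task) learned from training data could be combined with an observed worker result to flag a contribution as likely useful or useless. The function names, the 2-sigma cutoff, and the sample numbers are assumptions made for illustration, not the authors' published model.

```python
import statistics

def estimate_baseline(training_times):
    """Rough estimate of expected time-on-task from a training phase (assumed measure)."""
    return statistics.mean(training_times), statistics.stdev(training_times)

def score_contribution(task_time, result_changed, baseline, k=2.0):
    """Combine the baseline estimate with the observed worker output.

    A contribution is flagged as likely useless (negative) if the worker spent
    implausibly little time or left the result unchanged; otherwise it is
    treated as likely useful (non-negative). The cutoff k is an assumption.
    """
    mean, stdev = baseline
    too_fast = task_time < mean - k * stdev
    return (not too_fast) and result_changed

# Example usage with made-up training times (in seconds) and two new contributions.
baseline = estimate_baseline([42, 55, 38, 61, 47, 50, 44])
print(score_contribution(task_time=5, result_changed=False, baseline=baseline))   # False (useless)
print(score_contribution(task_time=48, result_changed=True, baseline=baseline))   # True (useful)
```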

Original language: English (US)
Pages: 2501-2506
Number of pages: 6
DOIs
State: Published - 2012
Event: 30th ACM Conference on Human Factors in Computing Systems, CHI 2012 - Austin, TX, United States
Duration: May 5, 2012 - May 10, 2012

Other

Other: 30th ACM Conference on Human Factors in Computing Systems, CHI 2012
Country/Territory: United States
City: Austin, TX
Period: 5/5/12 - 5/10/12

All Science Journal Classification (ASJC) codes

  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design
  • Software
