
The Challenge of Variable Effort Crowdsourcing and How Visible Gold Can Help

journal contribution
posted on 2024-11-02, 18:17 authored by Danula Hettiachchi, Mike Schaekermann, Tristan McKinney, Matthew Lease
We consider a class of variable effort human annotation tasks in which the number of labels required per item can greatly vary (e.g., finding all faces in an image, named entities in a text, bird calls in an audio recording, etc.). In such tasks, some items require far more effort than others to annotate. Furthermore, the per-item annotation effort is not known until after each item is annotated since determining the number of labels required is an implicit part of the annotation task itself. On an image bounding-box task with crowdsourced annotators, we show that annotator accuracy and recall consistently drop as effort increases. We hypothesize reasons for this drop and investigate a set of approaches to counteract it. Firstly, we benchmark on this task a set of general best-practice methods for quality crowdsourcing. Notably, only one of these methods actually improves quality: the use of visible gold questions that provide periodic feedback to workers on their accuracy as they work. Given these promising results, we then investigate and evaluate variants of the visible gold approach, yielding further improvement. Final results show a 7% improvement in bounding-box accuracy over the baseline. We discuss the generality of the visible gold approach and promising directions for future research.
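For readers who want a concrete picture of the visible gold mechanism described in the abstract, the sketch below shows one way gold items with known answers could be interleaved into a worker's annotation queue for a bounding-box task, with accuracy feedback computed after each gold item. This is a minimal illustration, not the authors' implementation: the injection interval, the IoU threshold, and the helper names (GOLD_INTERVAL, iou, build_queue, gold_feedback) are assumptions made for the example.

import random

GOLD_INTERVAL = 5          # assumed: show one gold item after every 5 regular tasks
ACCURACY_THRESHOLD = 0.7   # assumed: IoU needed to count a drawn box as correct


def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def build_queue(regular_items, gold_items):
    """Interleave a randomly chosen gold item after every GOLD_INTERVAL regular items."""
    queue = []
    gold_pool = list(gold_items)
    for i, item in enumerate(regular_items, start=1):
        queue.append(("regular", item))
        if i % GOLD_INTERVAL == 0 and gold_pool:
            queue.append(("gold", gold_pool.pop(random.randrange(len(gold_pool)))))
    return queue


def gold_feedback(worker_boxes, gold_boxes):
    """Compare a worker's boxes to the gold boxes and return a feedback message."""
    matched = sum(
        any(iou(w, g) >= ACCURACY_THRESHOLD for w in worker_boxes) for g in gold_boxes
    )
    recall = matched / len(gold_boxes) if gold_boxes else 1.0
    return f"You found {matched} of {len(gold_boxes)} objects (recall {recall:.0%})."

In this sketch the feedback string would be shown to the worker immediately after a gold item, which is the periodic, visible feedback loop the abstract credits with the quality improvement.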

History

Journal

Proceedings of the ACM on Human-Computer Interaction

Volume

5

Article number

332

Issue

CSCW2

Start page

1

End page

25

Total pages

25

Publisher

Association for Computing Machinery

Place published

United States

Language

English

Former Identifier

2006111743

Esploro creation date

2022-01-21
