Examining the limits of crowdsourcing for relevance assessment
Paul Clough, Mark Sanderson, Jiayu Tang, Tim Gollins, A. Warner
Evaluation is instrumental in developing and managing effective information retrieval systems and in ensuring high levels of user satisfaction. A number of publications have shown that using crowdsourcing to obtain relevance assessments is viable. Less well understood are the limits of crowdsourcing for this assessment task, particularly for domain-specific search. We present results comparing relevance assessments gathered through crowdsourcing with those gathered from a domain expert, used to evaluate different search engines over a large government archive. While crowdsourced judgments rank the tested search engines in the same order as expert judgments, crowd workers appear unable to distinguish between different levels of highly accurate search results in the way that expert assessors can. We examine the nature of this limitation in the crowdsourced judgments for this experiment and discuss the viability of crowdsourcing for evaluating search in specialist settings.
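The abstract notes that the crowdsourced and expert judgments rank the tested search engines in the same order. Agreement of this kind is commonly quantified with a rank correlation such as Kendall's tau over per-system effectiveness scores. The sketch below is purely illustrative: the engine names, the choice of metric, and all score values are hypothetical placeholders, since the abstract does not report the paper's actual systems or numbers.

```python
# Illustrative sketch only (not from the paper): quantify agreement between the
# system ordering induced by crowdsourced judgments and the ordering induced by
# expert judgments, using Kendall's tau rank correlation.
from itertools import combinations

# Hypothetical effectiveness scores (e.g., mean average precision) for four
# search engines, computed against crowdsourced vs. expert relevance judgments.
crowd_scores = {"engine_a": 0.62, "engine_b": 0.58, "engine_c": 0.55, "engine_d": 0.41}
expert_scores = {"engine_a": 0.71, "engine_b": 0.64, "engine_c": 0.52, "engine_d": 0.39}

def kendall_tau(a: dict[str, float], b: dict[str, float]) -> float:
    """Kendall's tau (tau-a, no tie handling) over systems scored in both dicts."""
    systems = sorted(a.keys() & b.keys())
    concordant = discordant = 0
    for s1, s2 in combinations(systems, 2):
        # A pair is concordant if both judgment sets order the two systems the same way.
        if (a[s1] > a[s2]) == (b[s1] > b[s2]):
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

print(f"Kendall's tau between system orderings: {kendall_tau(crowd_scores, expert_scores):.2f}")
```

With the placeholder scores above, both judgment sets induce the same ordering, so the correlation is 1.0; a tau near 1 is consistent with the abstract's claim of identical rankings, even though the absolute scores under the two judgment sets differ.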