Using collection shards to study retrieval performance effect sizes

journal contribution
posted on 2024-11-01, 08:52, authored by Nicola Ferro, Yubin Kim, Mark Sanderson
While much research has studied how to compare the performance of IR systems more accurately, less attention has been devoted to understanding the different factors that contribute to that performance and how they interact. This is the case for shards, i.e., partitions of a document collection into sub-parts, which are used for many purposes, ranging from efficiency to selective search to making test collection evaluation more accurate. In all these cases, empirical knowledge supports the importance of shards, but we lack models that let us measure the impact of shards on system performance and how they interact with topics and systems. Using the general linear mixed model framework, we present a model that encompasses the experimental factors of system, topic, and shard, together with their interaction effects. This detailed model allows us to estimate differences between the effects of the various factors more accurately. We study shards created by a range of methods used in prior work, explain observations noted in that work in a principled setting, and offer new insights. Notably, we discover that the topic*shard interaction effect is a large effect almost universally across all datasets, an observation that, to our knowledge, has not been measured before.
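The modelling idea in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: it assumes a long-format table (a hypothetical scores.csv with columns system, topic, shard, score, one effectiveness score per system/topic/shard triple) and uses an ordinary-least-squares fit with an ANOVA decomposition as a simple stand-in for the paper's general linear mixed model, estimating the size of the system, topic, and shard main effects and the system:shard and topic:shard interactions.

```python
# A minimal sketch, assuming long-format per-(system, topic, shard) scores;
# column names and the OLS/ANOVA decomposition are illustrative assumptions,
# not the paper's exact GLMM fit.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("scores.csv")  # hypothetical file: system, topic, shard, score

# Main effects plus the two interactions discussed in the abstract.
model = smf.ols(
    "score ~ C(system) + C(topic) + C(shard)"
    " + C(system):C(shard) + C(topic):C(shard)",
    data=df,
).fit()

table = anova_lm(model, typ=2)  # Type II sums of squares per factor

# One common effect-size measure (partial eta squared):
# SS_factor / (SS_factor + SS_residual).
ss_resid = table.loc["Residual", "sum_sq"]
table["eta_sq_partial"] = table["sum_sq"] / (table["sum_sq"] + ss_resid)
print(table)
```

Under this kind of decomposition, a large eta-squared value for the topic:shard interaction row would correspond to the paper's headline observation that the topic*shard interaction is a large effect across datasets.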

Funding

Continuous and summarised search over evolving heterogeneous data

Australian Research Council

History

Journal: ACM Transactions on Information Systems

Volume: 37

Issue: 3

Article number: 30

Pages: 1–40 (40 pages)

Publisher: Association for Computing Machinery

Place published: United States

Language: English

Copyright: © 2019 Association for Computing Machinery

Former identifier: 2006093207

Esploro creation date: 2020-06-22

Fedora creation date: 2019-08-22
