RMIT University

Quit while ahead: evaluating truncated rankings

conference contribution
posted on 2024-10-31, 20:05 authored by Fei Liu, Alistair Moffat, Timothy Baldwin, Xiuzhen Zhang
Many types of search tasks are answered through the computation of a ranked list of suggested answers. We re-examine the usual assumption that answer lists should be as long as possible, and suggest that when the number of matching items is potentially small - perhaps even zero - it may be more helpful to "quit while ahead", that is, to truncate the answer ranking earlier rather than later. To capture this effect, metrics are required which are attuned to the length of the ranking, and can handle cases in which there are no relevant documents. In this work we explore a generalized approach for representing truncated result sets, and propose modifications to a number of popular evaluation metrics.
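The abstract's call for length-attuned metrics can be illustrated with rank-biased precision (RBP), one of the popular metrics in this space. The sketch below is an assumption for illustration, not the paper's actual modification: it shows that standard RBP scores a ranking identically whether or not it is truncated after the last relevant item, so on its own it cannot distinguish a system that "quits while ahead" from one that pads the list.

```python
# Illustrative sketch only: standard rank-biased precision (RBP)
# over a possibly truncated ranking. The observation demonstrated
# here (insensitivity to trailing non-relevant items) motivates the
# kind of modification the paper proposes; the paper's own metric
# definitions differ.

def rbp(relevances, p=0.8):
    """RBP score of a list of 0/1 relevance judgements:
    (1 - p) * sum_i r_i * p**(i-1), with i starting at rank 1."""
    return (1 - p) * sum(r * p**i for i, r in enumerate(relevances))

# A long ranking with a single relevant item at rank 2 ...
long_run = [0, 1, 0, 0, 0, 0, 0, 0]
# ... scores the same as the ranking truncated right after it,
# because RBP only accrues credit at relevant positions.
short_run = [0, 1]
assert abs(rbp(long_run) - rbp(short_run)) < 1e-12
```

A length-attuned variant would instead reward the truncated run when no further relevant documents exist, and remain well defined when the relevant set is empty.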


Related Materials

  1. DOI - Is published in 10.1145/2911451.2914737
  2. ISBN - Is published in 9781450340694

Start page

953

End page

956

Total pages

4

Outlet

Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2016)

Name of conference

SIGIR 2016

Publisher

ACM

Place published

New York, USA

Start date

2016-07-17

End date

2016-07-21

Language

English

Copyright

© 2016 authors

Former Identifier

2006067094

Esploro creation date

2020-06-22

Fedora creation date

2016-10-17
