RMIT University

Creating a test collection to evaluate diversity in image retrieval

conference contribution
posted on 2024-10-31, 10:16 authored by Thomas Arni, Jiayu Tang, Mark SandersonMark Sanderson, Paul Clough
This paper describes the adaptation of an existing test collection for image retrieval to enable diversity in the result set to be measured. Previous research has shown that a more diverse set of results often satisfies the needs of a wider range of users better than standard document rankings. To enable diversity to be quantified, it is necessary to classify images relevant to a given theme into one or more sub-topics or clusters. We describe the challenges in building (as far as we are aware) the first test collection for evaluating diversity in image retrieval. This includes selecting appropriate topics, creating sub-topics, and quantifying the overall effectiveness of a retrieval system. A total of 39 topics were augmented for cluster-based relevance, and we also provide an initial analysis of assessor agreement for grouping relevant images into sub-topics or clusters.
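To illustrate how cluster-based relevance judgments of this kind support diversity measurement, the sketch below computes sub-topic recall: the fraction of a topic's sub-topics covered by the top-k retrieved images. This is a common diversity metric in the literature, shown here only as an illustration; the function, its arguments, and the sample data are hypothetical and are not taken from the paper itself.

```python
def subtopic_recall(ranked_ids, subtopic_map, k):
    """Fraction of a topic's sub-topics covered by the top-k results.

    ranked_ids: image ids in ranked order, as returned by a retrieval system
    subtopic_map: dict mapping relevant image id -> set of sub-topic labels
                  (non-relevant images are simply absent from the dict)
    """
    # All sub-topics judged for this topic across the relevant images.
    all_subtopics = set().union(*subtopic_map.values())
    covered = set()
    for doc_id in ranked_ids[:k]:
        covered |= subtopic_map.get(doc_id, set())
    return len(covered) / len(all_subtopics)

# Hypothetical judgments: four relevant images spread over four sub-topics.
clusters = {
    "img1": {"beach"}, "img2": {"beach"},
    "img3": {"mountain"}, "img4": {"city", "night"},
}
ranking = ["img1", "img2", "img3", "img9"]  # img9 is not relevant
print(subtopic_recall(ranking, clusters, k=4))  # covers 2 of 4 sub-topics -> 0.5
```

A ranking that front-loads images from distinct clusters scores higher at small k than one that repeats a single cluster, which is exactly the behaviour a diversity-aware test collection is built to reward.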

History

Related Materials

  1. ISBN - Is published in 9781605581644 (urn:isbn:9781605581644)

Start page

15

End page

21

Total pages

7

Outlet

Proceedings of the ACM SIGIR workshop on Beyond Binary Relevance: Preferences, Diversity, and Set-Level Judgments

Name of conference

ACM SIGIR workshop on Beyond Binary Relevance: Preferences, Diversity, and Set-Level Judgments

Publisher

ACM

Place published

New York, United States

Start date

2008-07-20

End date

2008-07-24

Language

English

Former Identifier

2006021698

Esploro creation date

2020-06-22

Fedora creation date

2011-11-04
