
Relevance judgments between TREC and non-TREC assessors

conference contribution
posted on 2024-10-31, 10:00 authored by Azzah Al-Maskari, Mark Sanderson, Paul Clough
This paper investigates the agreement of relevance assessments between official TREC judgments and those generated from an interactive IR experiment. Results show that 63% of the documents judged relevant by our users matched the official TREC judgments. Several factors contributed to differences in agreement: the number of retrieved relevant documents; the number of relevant documents judged; system effectiveness per topic; and the ranking of relevant documents.
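
As a rough illustration (not the paper's actual code or data), the 63% figure corresponds to an overlap measure: the fraction of documents a user judged relevant that are also marked relevant in the official TREC qrels. A minimal Python sketch, assuming judgments are held as dictionaries mapping document IDs to relevance grades (all names here are hypothetical):

    def relevant_set(judgments):
        # Doc IDs judged relevant (grade > 0).
        return {doc_id for doc_id, grade in judgments.items() if grade > 0}

    def agreement_with_qrels(user_judgments, trec_qrels):
        # Fraction of user-relevant documents also relevant in the qrels.
        user_relevant = relevant_set(user_judgments)
        trec_relevant = relevant_set(trec_qrels)
        if not user_relevant:
            return 0.0
        return len(user_relevant & trec_relevant) / len(user_relevant)

    # Toy example: 2 of 3 user-relevant documents match the official judgments.
    user = {"doc1": 1, "doc2": 1, "doc3": 1, "doc4": 0}
    qrels = {"doc1": 1, "doc2": 0, "doc3": 1, "doc5": 1}
    print(f"{agreement_with_qrels(user, qrels):.0%}")  # -> 67%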

Related Materials

  1. DOI - Is published in 10.1145/1390334.1390450
  2. ISBN - Is published in 9781605581644

Start page

683

End page

684

Total pages

2

Outlet

Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR 2008)

Editors

Sung-Hyon Myaeng, Douglas W. Oard, Fabrizio Sebastiani, Tat-Seng Chua and Mun-Kew Leong

Name of conference

31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (ACM SIGIR 2008)

Publisher

ACM

Place published

New York, USA

Start date

2008-07-20

End date

2008-07-24

Language

English

Copyright

Copyright © 2008 by the Association for Computing Machinery, Inc. (ACM)

Former Identifier

2006021946

