In this chapter we discuss the evaluation of Information Retrieval (IR) systems and, in particular, ImageCLEF, a large-scale evaluation campaign that has produced several publicly accessible resources for evaluating visual information retrieval systems and is the focus of this book. The chapter sets the scene for the book by describing the purpose of system- and user-centred evaluation, the role of test collections, the role of evaluation campaigns such as TREC and CLEF, and our motivations for starting ImageCLEF, followed by a summary of the tracks run over the seven years (data, tasks and participants). The chapter also provides insight into lessons learned and experience gained over the years spent organising ImageCLEF, and a summary of the main highlights.
Related Materials
1. Is published in: ISBN 9783642151804 (urn:isbn:9783642151804)
Start page
3
End page
18
Total pages
16
Outlet
ImageCLEF - Experimental Evaluation in Image Retrieval
Editors
Henning Müller, Thomas Deselaers, Paul Clough and Barbara Caputo