Ranking retrieval systems without relevance judgments

I. Soboroff, C. Nicholas, P. Cahan

Research output: Contribution to journal › Conference article › peer-review


The most prevalent experimental methodology for comparing the effectiveness of information retrieval systems requires a test collection, composed of a set of documents, a set of query topics, and a set of relevance judgments indicating which documents are relevant to which topics. It is well known that relevance judgments are not infallible, but recent retrospective investigation into results from the Text REtrieval Conference (TREC) has shown that differences in human judgments of relevance do not affect the relative measured performance of retrieval systems. Based on this result, we propose and describe the initial results of a new evaluation methodology which replaces human relevance judgments with a randomly selected mapping of documents to topics, which we refer to as pseudo-relevance judgments. Rankings of systems under our methodology correlate positively with official TREC rankings, although the performance of the top systems is not predicted well. The correlations are stable over a variety of pool depths and sampling techniques. With improvements, such a methodology could be useful in evaluating systems such as World-Wide Web search engines, where the set of documents changes too often to make traditional collection construction techniques practical.
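The core idea of the abstract can be sketched as follows: draw a random sample of pooled documents to stand in for human relevance judgments, score each system against that sample, and rank the systems by the resulting scores. This is a minimal illustrative sketch, not the authors' implementation; the document IDs, system runs, pool size, and sample size are all invented for the example, and average precision is used as a representative effectiveness measure.

```python
import random

random.seed(0)

# Illustrative pool of documents retrieved for one topic (invented IDs).
pool = [f"doc{i}" for i in range(100)]

# Pseudo-relevance judgments: a random sample of pooled documents is
# treated as "relevant" in place of human judgments.
pseudo_qrels = set(random.sample(pool, 10))

def average_precision(ranking, relevant):
    """Average precision of a ranked list against a set of relevant docs."""
    hits, precision_sum = 0, 0.0
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0

# Two hypothetical systems' ranked result lists for the topic (invented).
system_runs = {
    "sysA": random.sample(pool, 50),
    "sysB": random.sample(pool, 50),
}

# Score each system against the pseudo judgments and rank the systems.
scores = {name: average_precision(run, pseudo_qrels)
          for name, run in system_runs.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

In the paper's setting, this system ranking would then be compared (e.g. by rank correlation) against the official ranking produced with human judgments, averaged over many topics and random samples.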

Original language: English (US)
Pages (from-to): 66-73
Number of pages: 8
Journal: SIGIR Forum (ACM Special Interest Group on Information Retrieval)
State: Published - 2001
Externally published: Yes
Event: 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval - New Orleans, LA, United States
Duration: Sep 9, 2001 - Sep 13, 2001

ASJC Scopus subject areas

  • Management Information Systems
  • Hardware and Architecture

