Refinement: Measuring informativeness of ratings in the absence of a gold standard

British Journal of Mathematical and Statistical Psychology 75 (3):593-615 (2022)

Abstract

We propose a new metric for evaluating the informativeness of a set of ratings from a single rater on a given scale. Such evaluations are of interest when raters rate numerous comparable items on the same scale, as occurs in hiring, college admissions, and peer review. Our exposition takes the context of peer review, which involves univariate and multivariate cardinal ratings. We draw on this context to motivate an information-theoretic measure of the refinement of a set of ratings – entropic refinement – as well as two secondary measures. A mathematical analysis of the three measures reveals that only the first, which captures the information content of the ratings, possesses properties appropriate to a refinement metric. Finally, we analyse refinement in real-world grant-review data, finding evidence that overall merit scores are more refined than criterion scores.
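The abstract does not reproduce the formal definition of entropic refinement, so the sketch below is illustrative only: it computes the Shannon entropy of the empirical distribution of a single rater's scores over the scale points they actually used, which is the kind of information-theoretic quantity the abstract describes. The function name and toy data are hypothetical and not drawn from the paper.

import math
from collections import Counter

def entropy_of_ratings(ratings):
    # Shannon entropy (in bits) of the empirical distribution of one rater's
    # scores over the scale categories they actually used. Higher entropy
    # means scores are spread more evenly over more categories, i.e. they
    # carry more information about how the rated items differ.
    counts = Counter(ratings)
    n = len(ratings)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy comparison (hypothetical data): one rater uses only two scale points,
# the other spreads scores over six points of the same scale.
coarse = [4, 4, 5, 4, 5, 5, 4, 5]
refined = [2, 4, 5, 7, 3, 6, 5, 4]
print(entropy_of_ratings(coarse))   # 1.0 bit
print(entropy_of_ratings(refined))  # 2.5 bits

In this toy comparison, the rater who uses only two scale points yields 1 bit, while the rater who spreads scores over six points yields 2.5 bits, matching the intuition that finer-grained use of the scale is more informative.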

Links

PhilArchive



External links

  • This entry has no external links.

Similar books and articles

Being and Being True. Michael Hymers - 1999 - Idealistic Studies 29 (1-2):33-51.
Assertion, inference, and consequence. Peter Pagin - 2012 - Synthese 187 (3):869-885.

Analytics

Added to PP: 2023-07-14

Downloads: 0 (last 6 months: 0)

Author's Profile

Carole J. Lee
University of Washington

Citations of this work

No citations found.

References found in this work

No references found.
