[article]
Title: Ranking Academic Research Performance: A Recipe for Success?
Document type: Article
Authors: Ruth Dixon, Author; Christopher Hood, Author
Publication year: 2016
Pages: p. 403-411
Languages: French (fre)
Categories: [Thesagri] recherche scientifique; [Thesagri] science; [Thesagri] université
Contents note:
Using the example of a governance system that allocates public funding for research on the basis of rankings of research quality and impact (as has been developed in the UK over the past three decades), this paper explores three conditions needed for such rankings to be effective as a basis for genuine performance improvement over time. First, the underlying metrics must be capable of meaningfully distinguishing the performance of the institutions being ranked. Second, the basis of assessment must be stable enough for changes in performance over time to be identified. Third, the ranking system should avoid perverse consequences arising from strategic responses by the institutions being assessed. By means of a hypothetical example of a series of research assessment exercises, this article demonstrates the difficulty of fulfilling all three conditions at the same time, and highlights the dilemma between reliability and validity that assessors face. This analysis is relevant to governance by indicators more broadly, because any comparative assessment of institutional performance faces similar issues.
in Sociologie du Travail > Vol. 58, n°4 (01/10/2016), p. 403-411