In these times, when many people frequently invoke concepts such as research quality and scientific quality as though it were simple to identify and select the research and knowledge that is «best», presenting some knowledge-based points about the what and how of research quality may prompt further reflection and nuance. Below follows an excerpt from the very first «policy brief» from the newly established centre for studies of research quality, R-QUEST (see Forskningspolitikk no. 3, 2016). The full text, with references, is available at http://www.r-quest.no/policy-briefs/.
R-QUEST, by LIV LANGFELDT (NIFU), KAARE AAGAARD (CFA), SIRI BRORSTAD BORLAUG (NIFU) and GUNNAR SIVERTSEN (NIFU)
1. The politics of research quality
One of the most prominent research policy features of the latest decade has been the ambition to promote high quality research – often under headings such as frontier, outstanding, excellent, groundbreaking and transformative research. Policies promoting high quality research are seen as (and justified as) a means for solving grand challenges, based on the assumption that you need world leading research groups and ground-breaking research to solve the challenges our society confronts. They are also the result of the obligation to ensure that public money for R&D is spent wisely: When allocating research funds, the most obvious choice – if you want value for money – is to prioritise the most successful scientists and the most promising projects. Finally, such policies may also reflect a political wish to maintain or improve national scientific standing and status – much in the same way as success in international sports competitions is emphasised. In sum, whatever the aim of research policy is, high quality research can be presented as the solution.
This is however too simplistic: You cannot support only the best; you also need to build up competences in new fields to solve societal challenges. Thus, there is a need to provide good general conditions for research, securing a broad base of rank-and-file scientists doing research across a variety of fields and topics. We cannot predict all that we will need in the future. More generally, diversity and excellence can be seen as complementary rather than contradictory considerations when allocating research resources.
Still, in general, the public can best be convinced that a research policy is successful if the funding agencies and the authorities can document that they help to foster and attract world leading research groups. The concept of high quality research is appealing – and persuasive – in terms of solving the grand challenges of our planet; in terms of ensuring value for public resources spent on research; and in terms of contributing to national competitiveness and pride (e.g. winning Nobel prizes, having highest-ranking universities, and brain gain in general).
2. Different aspects and perceptions of research quality
Summarising scholarly and empirical studies of research quality, we find three basic aspects of the concept:
(1) Plausibility/solidity, methodological soundness (and feasibility)
(2) Originality/novelty
(3) (a) Scientific and (b) societal value/significance
Each of these aspects may be specified and emphasised in different ways: both in different fields of research and in different evaluation contexts (reviewing grant proposals is different from assessing candidates for professorships or reviewing manuscripts for publishing). On a more general level, they derive from the definition of research: to qualify as scholarly research, the work should be (1) well-founded in scientific methods, (2) give new knowledge and (3a) be relevant to the research community and/or (3b) society. Some of the common concepts of research quality combine two or more of these aspects, such as «frontier research», which combines 2 and 3a in terms of generating valuable new knowledge at the frontier of science. And then, of course, it also needs to be solid (1) to be valuable.
Even if clear and comprehensible at this basic conceptual level, «research quality» is contested and elusive. While there is general consensus that good research is solid, original and significant, there is less consensus about what this means or how to identify good research. What is perceived as the most solid and significant contribution to a specific research field may vary between peers. Furthermore, numerous studies have pointed out biases in peer review, for instance that interdisciplinary and unconventional research is disfavoured. The outcome of peer review may even depend on the way the review is organised.
3. Identifying high quality
Then, what do public authorities do to identify and facilitate high quality research? And how can they document that they succeed in this? Even with its many limitations and potential biases, peer review is often the best – and only – option when it comes to identifying high quality research. Peer review is thus widely used for allocating project grants and for performance-based funding, as well as for evaluating the outcome of programmes and policy initiatives.
In recent years, peer review has increasingly been supplemented – and in some cases replaced – by bibliometrics and other quantitative indicators. These indicators are essentially based on (aggregated) peer assessments – on the outcome of peer review of papers submitted for publication, on the number of citations to published work, and/or on the outcome of review of grant applications. They form the foundation of performance-based funding, and are seen as indicators of policy success (e.g. when comparing countries or institutions, or the outcome of funding schemes). However, being based on the aggregated outcome of peer review, science indicators also risk reproducing the biases in peer review (e.g. discriminating against interdisciplinary and original research). Moreover, indicators based on citations primarily reflect scientific impact, which is only one of several aspects of research quality.
In addition, quantitative indicators come with the risk of producing dysfunctional incentives. If your future funding is based on your quantifiable output, you may easily give priority to quantity over quality in your research. As stated in the Leiden Manifesto for research metrics: with metrics «we risk damaging the system with the very tools designed to improve it» when they are used by «organisations without knowledge of, or advice on, good practice and interpretation». The first principle of the Leiden Manifesto is thus that quantitative evaluation should support, not substitute, expert assessment.
In sum, metrics can seldom overcome the limitations, biases and indecisiveness of peer review, and there are additional limitations and biases attached to them. In combination with expert advice/direct peer review, they may however still contribute to the identification of high quality research. Metrics have important benefits: they demand far fewer resources than peer review, and may challenge and inform peer review and trigger thorough expert panel discussions. On the other hand, there is also the risk that metrics misguide peer review or lead to less thorough panel discussions. Moreover, it should be kept in mind that the concept of research quality is multidimensional, that its operationalisation is often contested, and that scholarly research is dynamic by nature. This implies that a fixed «agreement» on what is the most solid and significant research may be counterproductive in the long run – even if policy makers may perceive a need for one. In the research community, diversity and open discussions are more important than agreement.