UKRI spends £8 billion of taxpayers’ money each year to foster a world-class research and innovation system for the UK that gives everyone the opportunity to contribute and to benefit. This investment is central to the UK’s prosperity and security. It is critical that we spend it wisely, with every pound delivering on multiple objectives.
UKRI’s responsibility is for the whole UK research and innovation system. This requires a balanced portfolio of investments in people, places, infrastructures, ideas, innovation and impacts. Deciding how to invest within and between these interconnected elements is core to what we do.
Taking ideas as an example, there will always be far more projects to fund than there is money to support them, so we have to decide which ones to fund, and which ones not to fund. UKRI makes thousands of these decisions each year, creating the breadth the system needs while retaining the agility to pivot to capture new opportunities and meet new challenges.
Peer review is a central tool in the decisions we make. The assessment of each proposal by experts able to judge its merits provides critical information for peer review panels, which typically have a remit to support a particular area of research and innovation and must rank the proposals in priority order for funding.
I have participated in and benefited from (yes, really) peer review for decades. I have experienced many different versions operated by funders large and small, in the UK and internationally. UKRI must ask, and ask regularly, which approaches are best to use, and under what circumstances, to ensure that the peer review systems we operate are fit for purpose.
Peer reviewers are unsung heroes
At the centre of UKRI’s peer review process are people: expert reviewers who give their time and expertise to guide the decisions we make. They are the reason the UK spends its research and innovation money as well as it does. I would like to thank everyone involved for their commitment to peer review. The willingness of experts to commit their precious time and energy to this process supports the whole research and innovation endeavour and is greatly appreciated and valued.
UKRI’s review of peer review
While peer review is widely considered to be an essential part of research and innovation assessment, it also attracts significant controversy. Among many thoughtful critiques, it is often argued that peer review:
- might be inherently risk averse
- might be prone to individual biases and group think
- is too time-consuming for everyone involved
- might not be able to distinguish reliably between proposals at the funding borderline.
To explore these issues and more, UKRI is embarking on a review of peer review, which will look at different approaches, such as double-blind stages and lottery-based selection at the funding borderline. We are also commissioning evidence synthesis to bring together what is already known about different peer review methodologies to inform changes to our processes. As part of this synthesis, there will be the opportunity to get involved in focus groups, workshops and interviews. We will also consult on the findings of the synthesis to explore the next steps for UKRI.
The review will have four strands of major activity:
- peer review versus other funding mechanisms
- increasing diversity (of people, places, ideas, outputs and more, including supporting interdisciplinary research) and considering risks for bias
- the culture of peer review (norms, roles and behaviours)
- definition of assessment criteria and their implementation.
Reflections on peer review
As we launch our review of peer review, I would like to offer the following reflections for consideration. UKRI funds a very wide range of projects across all disciplines including high risk blue skies exploration; development and refinement of existing concepts; meta-analyses; development of technologies, tools and resources; practice-based research and innovation; reproducibility studies; and translational work.
Our role is to support fields to flourish, and a balanced funding portfolio across these activities is essential to achieving this. One approach is to pre-define the desired balance and separate these types of activities into different competitions. However, many projects are difficult to classify into a single type, and beyond that there is significant benefit to including multiple project types in the same competition so that they can be tensioned against one another directly. This is common practice at least across parts of the spectrum of project types. This approach creates challenges in the context of the need for fair, clear and transparent assessment criteria.
There are two distinct issues funders must consider. First, we need to be clear and transparent about the need to support a balanced portfolio of activities. This should be explicit in assembling a rank-ordered list of projects for funding. It is not only the merits of each individual proposal that are relevant, but also the overall balance of activity across the funded portfolio.
Second, it is not possible to assess this diverse range of activity using the same criteria applied equally to all projects. To ensure consistent and transparent assessment we must explicitly recognise that not all criteria apply equally to all applications. We want to fund excellence. An excellent database development project, an excellent blue skies discovery project, and an excellent translational project will obviously not meet the same criteria, but all of them are excellent.
Delivering the most value from the taxpayers’ money with which we have been entrusted requires a balanced portfolio across the full range of these activities. To achieve this, for each proposal we need to apply a relevant subset of criteria from across a palette. The uniform application of a narrow set of criteria to all proposals inherently limits diversity and biases the system toward particular types of project.
Ensuring that excellence across all grant types is supported requires transparency about which criteria should be used, and which ignored, for each project. It would significantly help applicants if we made this clear in the feedback we provide. In my experience, it is common for applicants to assume their project was not funded because it was, for example, not applied enough, too high risk, or not hypothesis-driven, when these criteria were not relevant for their proposal.
Delivering our vision
Peer review is at the heart of what UKRI does, and so it is critical that we continue to monitor its effectiveness at delivering our vision for an outstanding research and innovation system in which everyone can participate and from which everyone benefits. This is the purpose of the review of peer review we are undertaking. I would like to reiterate my thanks to everyone who contributes to peer review, and to add my thanks in advance to everyone who will contribute to the review of peer review.