REF 2029 Briefing Note
Background
The Research Excellence Framework (REF) is the mechanism by which UKRI (UK Research and Innovation) allocates around £2 billion a year in research funding to academic institutions via a block grant known as ‘Quality-related Research’ (QR) funding.
The REF determines a significant part of university funding and also influences the perceived status of institutions, which indirectly affects other revenue streams including student recruitment. The REF is therefore a powerful tool shaping the incentives that universities face.
Higher education institutions devote substantial time and resources to the REF exercise itself. This investment has grown substantially since the introduction of the REF’s precursor, the Research Assessment Exercise (RAE), in 1986.
The headline cost of REF 2021 has been estimated at £421 million. The cost to higher education institutions of REF 2021 is estimated to have been around 50% higher than for REF 2014.
Initial decisions for the next REF
In 2023, Research England announced its initial decisions for REF 2028. Key changes included the proposal for a ‘People, culture and environment’ (PCE) element contributing 25% of the overall quality profile. This replaces the environment element of REF 2021, in which environment was defined as ‘the environment for supporting research and enabling impact within each submitting unit’ and was weighted at 15%.
The weighting of direct evaluation of research outputs will drop below 50% for the first time, to 45%. The ‘research outputs’ element has been renamed ‘contribution to knowledge and understanding’, comprising 45% for research outputs and 5% for a narrative statement ‘based on evidence of the broader contributions to the advancement of the discipline’.
Research outputs were weighted at 65% in 2014 and 60% in 2021. The reduction in the weighting of research outputs to 45% weakens the connection between research quality and REF results.
In addition, the previous requirement for Impact Case Studies (ICS) to be based on research of at least 2* quality has been dropped, further reducing the link between research quality and REF outcomes. This section has been renamed as ‘Engagement and impact’ and remains weighted at 25%. It will consist of impact case studies and an accompanying statement.
There has been no suggestion that the planned changes to the REF were driven by ministerial priorities.
Neither was the PCE proposal driven by the views of the higher education sector. It has been pointed out that “there seems to have been only very limited consultation with research universities’ senior management before the main contours (‘initial decisions’) of the new REF were announced and there is no real opportunity to object to them in the consultation”. Even the highly limited consultation of higher education staff carried out by UKRI does not obviously support the PCE proposal. The UKRI initial decisions report stated that “A significant minority of respondents to the consultation suggested that driving a positive research culture should be a core purpose of the REF.” This statement appears to be based on a small survey in which only 64 out of 248 respondents (26%) said that “research process” should be heavily weighted in the REF. In the same survey, 52 respondents (21%) thought that “research process” should be “weighted less heavily” or “not assessed”.
The initial decisions report references the UKRI-commissioned “Harnessing the Metric Tide” report for examples of data and evidence requirements that could be included in this element. The Harnessing report criticised the concept of excellence itself as “ill-defined” and, together with the original Metric Tide report, recommended “the adoption of indicators that support equality and diversity as a counterweight” to what it viewed as problematic aspects of research excellence assessment. It approvingly cited critiques centred on “the biases inherent in the concept of excellence” which sustain “epistemic injustice”.
The initial decisions report generated widespread opposition, including a response from the London Universities’ Council for Academic Freedom (LUCAF). LUCAF stated that ‘Ultimately, the best measure of an effective research culture is the production of excellent research’ and proposed that UKRI should: (1) include the active promotion of academic freedom as part of the assessment criteria for the “people, culture and environment” element in REF 2028; and (2) lower the weighting of the “people, culture and environment” element.
Some commentators took the more radical position that the time has come to scrap the REF entirely (e.g. Iain Mansfield; Day One).
2024 consultation on PCE indicators
In light of widespread concerns and doubts about the proposals, UKRI announced a consultation and delayed the planned date of the next REF to 2029.
The consultation included a survey of academics and others. The survey design seemed flawed due to the overly restrictive nature of some answer choices. For example, the question ‘Which of the following sector initiatives, concordats and accreditations do you think ought to play a role in the assessment of PCE?’ provided a list of schemes including Athena Swan and the Race Equality Charter (both run by Advance HE). The guidance stated ‘Please tick all that apply. If you have not heard of any of the initiatives, concordats and accreditations, simply leave them un-ticked’. The question therefore only allowed respondents to indicate either that they supported each scheme or that they had never heard of it; respondents could not indicate that they had heard of a scheme and did not favour its inclusion. There were also some obvious omissions. A question asking ‘How important do you consider each of the following elements of People, Culture and Environment to the task of supporting high-quality research, engagement and impact?’ did not list research time allowances and sabbaticals as potential elements. Nor were reducing the bureaucratic load on researchers and improving administrative support listed as possibilities. Overall, the consultation appeared to reflect a pre-existing view of what should count as ‘research culture’.
The pilot exercise assessment framework has now been published. There has been no transparency as to how this framework was arrived at, and it is unclear whether Research England’s consultation actually informed it in any meaningful way.
One might expect a publicly funded research body to share the results of its investigations in a timely way in order to inform discussion and decision making. Yet even the basic headline quantitative results have not been shared. A Freedom of Information request has been made on behalf of the London Universities’ Council for Academic Freedom (LUCAF), but Research England have refused to provide the information on the grounds that their consultation is ‘research’ which will be published in due course and is therefore exempt from disclosure under section 22A of the Freedom of Information Act 2000.
The suggestion made by LUCAF and others that the risks to academic standards, integrity and diversity entailed by the 25% PCE element could be mitigated by the inclusion of academic freedom as a PCE criterion has been ignored. No justification has been provided for this decision. This is particularly surprising, given that the consultation survey and associated workshops did include academic freedom as a potential PCE indicator. However, with no results from the survey having been released, it remains unclear how the decision to exclude academic freedom as a PCE indicator was informed.
Proposed Indicators
The pilot assessment framework has five strands: Strategy, Responsibility, Connectivity, Inclusivity and Development.
Strand 1 ‘Strategy’ demands ‘data on improvement as a result of strategic initiatives’ without specifying what the improvement would be in. The specified ‘qualitative evidence’ includes ‘documented evidence of the strategy and strategic priorities with coherent plans towards their achievement’. This section includes a demand for evidence of the use of external policy frameworks, with Athena Swan and the Race Equality Charter as examples. If implemented, this would effectively make it impossible for HEIs and academic units to leave these schemes, regardless of their own views of the balance of costs, risks and benefits of submission.
Linking research funding to participation in Advance HE schemes carries proven risks. For example, between 2011 and 2020 the National Institute for Health and Care Research (NIHR) required all institutions to obtain an Athena Swan silver award to be eligible for research funding. Owing to flawed advice from Advance HE, which runs Athena Swan, many institutions stopped diversity monitoring by sex, in violation of their Public Sector Equality Duty. A further serious example is provided by the Office for Students (OfS) case report into the University of Sussex regarding the case of Kathleen Stock. The University of Sussex was fined £585,000 for free speech and governance breaches. The university’s ‘Trans and non-binary equality policy statement’ was based on a template provided by Advance HE (a version of the template is available here). The adoption of this policy was found to be in violation of conditions of registration E1 (public interest governance) and E2 (management and governance). The OfS found that statements in the policy ‘have the effect of restricting lawful speech and content, including in the curriculum and course materials, and have created a chilling effect’. These statements included a requirement for ‘any materials within relevant courses and modules [to] positively represent trans people and trans lives’. Advance HE have now written to universities to withdraw the template on which the Sussex policy was based, and the OfS have written to universities asking them to review their policies to ensure compliance with their regulatory obligations.
Advance HE (formerly the Equality Challenge Unit, ECU) has a history of campaigning for gender self-identification, including making submissions to the UK and Scottish governments’ 2018 consultations on proposed reforms to the Gender Recognition Act 2004. These submissions strongly favoured gender self-identification, did not express support for any exemptions (for example for single-sex spaces or sports), and made no mention of women’s rights or of balancing the rights of distinct groups. The Advance HE submission states ‘Advance HE has been working with the sector to implement processes to support trans staff and students based on self-identification of gender… Our efforts would be greatly supported by a GRC process which is also based on self-identification.’ Advance HE policy has been developed in close collaboration with campaigners for gender self-identification, and their advice to the HE sector appears to have been grounded in this viewpoint rather than in respect for the law as it stands. For example, their advice incorrectly asserts that it is unlawful discrimination not to allow trans women to use female facilities. The fact that sex (meaning biological sex) is a protected characteristic under the Equality Act 2010 (EA2010), with gender reassignment also protected as a distinct characteristic, has now been confirmed by the Supreme Court. This further highlights the dangers of embedding Advance HE into the REF.
Strand 2 ‘Responsibility’ groups together a hodgepodge of ‘quantitative’ criteria, including carbon emissions data; it is not clear that this falls within the scope of the REF. This strand also seeks to micromanage promotions criteria. For example, specified ‘qualitative evidence’ includes ‘Documented evidence that membership of relevant committees or involvement in other relevant academic citizenship activities is appropriately recognised (e.g. in workloads or promotion criteria).’
Strand 3 ‘Connectivity’ clearly aims to support some worthwhile goals. However, the drive to reward particular ways of working as ends in themselves, rather than the research outcomes that may stem from them, appears to be a ‘one size fits all’ approach, promoting uniformity instead of diversity. This strand also makes incursions into promotions criteria while adding to the administrative burden on HEIs of documenting a long list of practices, for example ‘Documented evidence that activities where knowledge and expertise is shared are appropriately recognised (e.g. in workloads or promotion criteria).’
Strand 4 ‘Inclusivity’ asks that each unit ‘addresses equality, diversity and inclusion across all of its activities’. Quantitative measures include ‘percentage of eligible staff FTE as white, Black, Asian, other/mixed or unknown at institution level’. This appears to suggest rewarding units for their racial composition and (oddly) for non-response (unknown) in data on race. Suggested qualitative evidence includes ‘Pre- and post-training assessments (e.g. on equality, diversity and inclusion (EDI) principles for members of assessment panels, juries, committees and other decision-making bodies, implicit bias).’ Yet there is no evidence that such training is effective, and there is no acknowledgement of the contested nature of many aspects of EDI training, with implications for academic freedom. Similarly, ‘Support for the use of narrative CVs or other innovative approaches to internal assessment’ is suggested without evidence of benefit being provided, and ‘The creation of safe spaces’ is suggested without clarity regarding what this entails.
Strand 5 ‘Development’ suggests various quantitative data collection exercises, such as ‘Longitudinal data on share of staff and research students who completed an annual appraisal or equivalent review’. Suggested qualitative evidence again includes ‘pre and post training assessments’. It also makes an incursion into organisational promotion priorities: ‘Documented evidence that leadership of staff support networks is appropriately recognised (e.g. in workloads or promotion criteria)’.
Overall, the assessment framework includes a number of criteria whose merit might legitimately be debated, such as membership of the Athena Swan scheme and the Race Equality Charter, the creation of safe spaces and the inclusion of EDI activities in the promotion process. This suggests that Research England has pre-judged some approaches as ‘best practice’ when they are in fact highly contentious. If the assessment incorrectly judges which approaches work best, it risks actively harming the quality of research environments. University departments will effectively be prevented from making their own decisions on such matters, given the risk to their REF rankings if they do not comply.
The composition of the panel tasked with assessing People, Culture and Environment suggests a lack of balance. It includes a number of members who advocate for highly contested frameworks such as decolonisation and gender-identity theory, while there does not seem to be any member opposing these frameworks. Concerningly, one member has openly used derogatory slurs to refer to colleagues who hold legally protected gender-critical beliefs, and has argued against data collection on sex, despite the legal requirement under the Public Sector Equality Duty (PSED) to collect diversity monitoring data on sex. Another wrote a post for a university web page which appeared to attack JK Rowling with the comment ‘From government interventions to infamous authors being the poster child for a regressive movement, it is a toxic and challenging landscape. Does this sound historically familiar?’. Moreover, the panel does not appear to have been selected for its track record in promoting cultures of research excellence. It is unclear, for example, why its members’ views on what constitutes best practice in university strategy should be considered superior to those of the departments they are judging.
Conclusion
It is uncontroversial to say that the quality of research environments is important. However, there are a range of views on what matters for a good research environment, and academic departments, disciplines and contexts are diverse. No coherent justification has been provided for attempting to measure (certain dimensions of) research environments directly rather than focussing on research quality.
The proposed PCE indicators suggest a mountain of new bureaucracy and the imposition of numerous unevidenced practices. While some of the suggestions may be positive in supporting research culture, others may be harmful. Taken together, the overall effect will be to detract from research activities (as well as education) as organisations devote time and energy to documenting their compliance with what the REF PCE panel are looking for.
At a time of great economic difficulty for higher education, tying the hands of institutions in ways which will constrain innovation and efficiencies is unhelpful. Universities expect the REF to hold them accountable for the quality of the research they produce. However, the pilot PCE criteria seek to micro-manage institutional practices, including HR practices, while also creating a substantial administrative task in documenting and reporting these practices. No rigorous evidence base is provided for the list of practices in the pilot exercise, and there is no suggestion that the pilot exercise will provide such evidence. Indeed, the exercise seems predicated on the implausible notion that there is consensus on what constitutes best practice in research management.
Reducing institutional autonomy diminishes academic governance. Under this proposal, control of the university environment would rest with the REF assessment panel, further reducing the agency of academics and other staff within institutions. Moreover, matters such as promotion policies have as great an impact on the education function of universities as they do on research. The proposals will therefore extend the REF’s influence to areas considerably beyond the remit of UKRI.
Equality work in universities should be regulated by existing equality law and the Equality and Human Rights Commission (EHRC), which was expressly created for this purpose. There is no obvious basis for Research England to dictate the terms of this work. Tying research funding to Research England’s assessment of EDI quality exposes institutions to significant legal risk, as equality law is a complex and specialised area outside the expertise of Research England.
More generally, as there are serious concerns about the partisan nature of some EDI initiatives, making research funding contingent on taking a particular approach to EDI risks violating the impartiality of academic research.
Lord Vallance, Minister of State for Science, Research and Innovation, has described curiosity-driven research as the “goose that lays the golden egg” which must be protected if the UK is to continue to accrue economic benefits from science. Weakening the link between research quality and funding also weakens the case that research funding provides economic benefits.
Document date: 21/04/2025