Higher Education Grapples with AI’s Role in Research Assessment Reform
Universities Quietly Embrace AI Tools Ahead of REF Reform
Sep 10, 2025
As the UK’s national research assessment framework enters a period of critical reflection, commentators have raised concerns about whether the current model remains suitable for its intended purpose. In a recent educational blog post, Richard Watermeyer (University of Bristol), Tom Crick (Swansea University), and Lawrie Phipps (University of Chester) examined the evolving landscape of research evaluation, drawing attention to longstanding tensions between institutional accountability, funding allocation, and government ambitions for research-led economic growth.
Their reflections follow the Government’s decision to pause planning for REF2029, a move announced by Sir Patrick Vallance, Minister for Science, to allow time for reassessing the credibility and coherence of future assessment mechanisms. The authors noted that previous iterations of the national exercise, including the RAE and REF, have consistently prompted debate around how best to define and measure research excellence. While REF2014 was marked by the introduction of “impact” as a core element of assessment, REF2029 had initially been framed around research culture, later expanded into the proposed People, Culture and Environment (PCE) statements.
However, the recent pause has led some to speculate that research culture may be downplayed or removed entirely, a change which, for some in the sector, would simplify a process often viewed as overly complex, costly, and detached from the core purpose of research evaluation. Nonetheless, the overall credibility of the REF remains under scrutiny, particularly in light of the financial pressures currently facing UK higher education institutions.
Drawing on their ongoing research supported by Research England, the authors revealed that they had consulted leaders and REF specialists across 17 institutions. Their findings suggest that the REF is entering a phase of fundamental transformation, one that is increasingly shaped by the influence of artificial intelligence. Interviewees broadly agreed that, should the REF continue, it would likely need to incorporate AI-driven processes or even adopt partial automation. Examples cited include the use of AI tools to generate narratives, support evidence searches, and score outputs or impact case studies. These applications were presented as offering potential efficiencies, reducing costs, saving time, and enabling more continuous assessment, yet they also raised concerns. The authors acknowledged that AI systems remain opaque, prone to bias and manipulation, and may undermine transparency, reproducibility, and the integrity of the evaluation process.
Despite these concerns, the study found that both students and staff were already using AI tools, often discreetly, in preparation for REF submissions. There was also a growing expectation that university panels would adopt generative AI technologies by the time REF2029 is implemented, given the rapid pace of development. Some contributors suggested that such tools could even assist in evaluating research culture, an area where human judgment has proved challenging.
The authors cautioned, however, that the credibility of the REF now hinges on how institutions manage financial constraints, procedural complexity, and the rising appeal of AI-enabled research evaluation. While AI may offer practical benefits, its use should not be outsourced entirely to external vendors; instead, a transparent and accountable framework for integrating AI within the REF is required. They concluded that artificial intelligence can no longer be excluded from the future of research evaluation, but questioned whether a three-month delay would allow the sector sufficient time to address the scale of change now underway. The future credibility of the REF will depend on how effectively the sector reconciles innovation, integrity, and institutional realities.
Editor’s Note
The future of the UK's research assessment system, currently planned as REF2029, is uncertain: Sir Patrick Vallance's recent decision to pause its planning signals not a mere scheduling delay but deeper concerns about how the framework is working. Questions have been raised about its credibility, including whether it truly reflects the quality and impact of academic research across disciplines. Watermeyer, Crick, and Phipps argue that the REF, in its current form, may be struggling to serve its purpose. Their reflections on the rise and possible retreat of “research culture” as a measure of excellence are telling. On the one hand, its removal might simplify the process; on the other, it risks narrowing the definition of research quality and undermining the broader perspectives once promised through the proposed People, Culture and Environment statements. Equally important is their observation that artificial intelligence is no longer a prospect for the future: it is already being used quietly in REF planning. This raises essential questions about transparency and bias, and about whether machines should judge academic work in place of humans.
Skoobuzz points out that AI can undoubtedly bring efficiency, but it earns credibility only where ethical safeguards and clear accountability are in place. The focus now should be on using AI responsibly, without losing the core values that make research meaningful, fair, and high in quality.
FAQs
1. What is the role of artificial intelligence in research assessment?
Artificial intelligence is increasingly being considered as a tool to support and streamline research assessment processes. Its role includes automating the review of outputs, analysing citation patterns, and generating summaries of impact case studies. AI can assist in identifying trends, evaluating research quality, and reducing administrative burden. However, concerns persist around transparency, reproducibility, and the risk of bias, particularly when AI systems are used to score or rank submissions. As REF2029 planning evolves, the sector is actively debating whether AI should be embedded into future frameworks or remain a supplementary tool.
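To make “analysing citation patterns” concrete, the sketch below shows the kind of computation such tools automate: deriving an h-index and a per-year citation trend from a list of outputs. This is a minimal illustration only; the records are invented, and real assessment tools would draw on bibliographic databases rather than hand-typed data.

```python
# Minimal illustration of citation-pattern analysis: computing an
# h-index and a citations-per-year trend from a list of outputs.
# The records below are invented for illustration only.
from collections import defaultdict

outputs = [
    {"title": "Paper A", "year": 2019, "citations": 42},
    {"title": "Paper B", "year": 2020, "citations": 17},
    {"title": "Paper C", "year": 2021, "citations": 5},
    {"title": "Paper D", "year": 2021, "citations": 3},
]

def h_index(citation_counts):
    """Largest h such that h outputs each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
    return h

# Aggregate citations by publication year to expose trends over time.
per_year = defaultdict(int)
for o in outputs:
    per_year[o["year"]] += o["citations"]

print("h-index:", h_index(o["citations"] for o in outputs))
for year in sorted(per_year):
    print(year, per_year[year])
```

Automating even this simple aggregation at institutional scale is where the claimed reduction in administrative burden comes from, though the concerns above about scoring and ranking apply to anything more judgemental than counting.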
2. How is AI being used in higher education research?
In higher education, AI is being applied across multiple domains: from literature reviews and data analysis to predictive modelling and intelligent tutoring systems. Researchers use AI to extract insights from large datasets, generate hypotheses, and even assist in drafting academic papers. AI tools also support student learning by personalising content delivery and automating feedback. The rise of generative AI has prompted universities to explore its use in curriculum design, assessment, and research supervision, while also grappling with ethical and pedagogical implications.
3. What does REF2029 mean for students and education policy?
REF2029, while primarily focused on institutional research quality, has indirect implications for students and education policy. The framework influences how universities allocate resources, prioritise research areas, and shape academic careers, all of which affect the student experience. If REF evolves to include metrics on research culture and environment, it may prompt institutions to invest more in inclusive practices, staff development, and student engagement in research. Education policy may also need to adapt to ensure that research-led teaching remains aligned with national goals for innovation and skills development.
4. Can artificial intelligence improve the purpose of a research project?
Yes, AI can enhance the clarity, scope, and execution of a research project. It supports researchers in refining research questions, identifying gaps in existing literature, and modelling complex systems. AI tools can also assist in structuring proposals, synthesising findings, and visualising data. However, while AI can improve efficiency and insight, the intellectual purpose and originality of a research project must remain human-led. Responsible use of AI requires critical oversight, ethical safeguards, and transparent methodologies.
5. How are universities rethinking AI in research evaluation?
Universities are beginning to reassess how AI might be integrated into research evaluation, particularly in light of REF reform and growing administrative pressures. Some institutions are piloting AI tools to assist with narrative generation, citation analysis, and benchmarking. Others are developing internal policies to govern AI use, focusing on academic integrity, disclosure, and reproducibility. The sector is also exploring how AI might support longitudinal assessment models, moving away from static, cyclical reviews towards more dynamic and continuous evaluation frameworks.
6. What tools and research methods include AI?
AI-enabled research tools span several categories:
- Literature review platforms (e.g. Elicit, Consensus) for summarising and comparing studies
- Citation analysis tools (e.g. Scite.ai) for evaluating research influence
- Data visualisation and modelling software (e.g. Julius, Lumina)
- Writing and editing assistants (e.g. Wordvice AI, ChatGPT) for drafting and refining academic texts
- Predictive analytics for forecasting trends and outcomes
These tools often incorporate machine learning, natural language processing, and semantic search capabilities, supporting both qualitative and quantitative research methods.
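As a concrete illustration of the semantic search these platforms rely on, the sketch below ranks paper abstracts against a query by embedding both and comparing cosine similarity. It assumes the open-source sentence-transformers library and its all-MiniLM-L6-v2 model; the abstracts and query are invented, and commercial tools layer retrieval pipelines and summarisation on top of this core step.

```python
# Sketch of embedding-based semantic search, the core step behind many
# AI literature-review tools. Assumes the sentence-transformers library
# (pip install sentence-transformers); abstracts are invented examples.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose encoder

abstracts = [
    "We evaluate peer review consistency across national research exercises.",
    "A deep learning approach to protein structure prediction.",
    "Survey of research culture and working conditions in UK universities.",
]
query = "How is research culture measured in national assessment frameworks?"

# Encode the query and the abstracts into dense vectors, then rank the
# abstracts by cosine similarity to the query.
doc_vecs = model.encode(abstracts, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_vec, doc_vecs)[0]

for score, abstract in sorted(zip(scores.tolist(), abstracts), reverse=True):
    print(f"{score:.3f}  {abstract}")
```

Because matches are based on meaning rather than exact keywords, searches like this can surface relevant work phrased differently from the query, which is what distinguishes these platforms from traditional keyword databases.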
7. How does teaching reflection connect with AI in education?
Teaching reflection is increasingly supported by AI through tools that analyse classroom data, predict learning challenges, and offer feedback on instructional design. AI can help educators identify patterns in student performance, adjust teaching strategies, and evaluate the effectiveness of learning activities. Some platforms enable teachers to simulate scenarios, test pedagogical approaches, and receive AI-generated suggestions. However, meaningful reflection still requires human interpretation, and AI should be viewed as a facilitator, not a replacement, for professional judgement and pedagogical insight.
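To illustrate the kind of performance pattern such tools surface, here is a minimal sketch that flags topics where average student scores fall below a threshold. The scores and the threshold are invented for illustration; real platforms apply far richer statistical and predictive models.

```python
# Minimal sketch of performance-pattern analysis behind AI-supported
# teaching reflection: flag topics whose mean score falls below a
# threshold. Scores and the 60-mark threshold are invented examples.
from statistics import mean

scores_by_topic = {
    "recursion": [55, 48, 62, 51],
    "data frames": [78, 82, 74, 80],
    "hypothesis testing": [59, 64, 52, 57],
}

THRESHOLD = 60  # flag topics averaging below this mark

for topic, scores in scores_by_topic.items():
    avg = mean(scores)
    flag = "review suggested" if avg < THRESHOLD else "on track"
    print(f"{topic:>20}: mean {avg:.1f}  ({flag})")
```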