Survey Shows Academics Divided Over AI Use in Research Quality Review
Generative AI Could Cut Costs of Research Assessment, But Governance Needed
A new national report reveals that, for the first time, several UK universities are considering AI methods for reviewing the quality of their research. Such tools may be adopted across the sector to improve efficiency and cut costs, yet deep scepticism within academia has prompted calls for strong national oversight. The report stresses that UK institutions must prepare quickly for disruptive change in how higher education research is evaluated.
Focus on the Research Excellence Framework (REF)
The study, led by the University of Bristol under the aegis of Research England, focuses on the UK's Research Excellence Framework (REF), which determines the distribution of roughly £2 billion of public funding each year. Researchers found that some institutions are already using generative AI in preparing quality reviews, even as many academics oppose its use. The report therefore notes growing expectations that GenAI could play a role in REF 2029.
Cost and Efficiency Pressures
REF2021 cost an estimated £471 million, an average of around £3 million per university, and the report expects REF2029 to cost even more. GenAI, it suggests, could relieve some of this administrative pressure. The report does not present GenAI as a panacea, but it notes that its use might reduce the most time-consuming components of research evaluation. At the same time, the authors caution that AI could create new layers of bureaucracy requiring clearer protocols and governance.
Current Use of GenAI in Universities
Researchers interviewed 16 UK institutions, including Russell Group members and a number of newer universities. They found usage to be highly variable:
- Some institutions relied on GenAI to gather evidence about the impact of research, assist with writing impact case studies or prepare submissions.
- Others had built in-house systems to examine or score research outputs, drawing on machine learning, bibliometrics and scientometrics to support AI-powered evaluation of scholarly work.
- Resource availability continues to shape how far institutions can incorporate AI into these operations.
Academic Resistance and Survey Findings
A survey of almost 400 academics and professional staff revealed strong opposition to using GenAI in most parts of REF: between 54% and 75% strongly opposed its use. The highest support, just under 23%, was for using AI to help produce research impact case studies. Attitudes varied by discipline and experience, with the arts, humanities, and social sciences showing the greatest scepticism.
Leadership Perspectives
Interviews with 16 Pro Vice-Chancellors presented a mixed picture of attitudes towards generative AI in research assessment. Some leaders believed that GenAI would become important and useful in the future, arguing that UK universities should be proactive in demonstrating its benefits.
Others expressed doubts about its reliability and raised concerns about the possibility of an “AI bubble.” Many staff members emphasised that they lacked sufficient knowledge of GenAI, which limited their trust in its role within research evaluation. However, several leaders suggested that staff who learn to manage and adapt to these systems would be better positioned as the workplace continues to change.
Call for Governance and National Standards
According to the report, the use of AI in British universities for research evaluation must be guided by well‑defined governance. Participants stressed the need for national standards, clear transparency rules for disclosure, human oversight, and shared guidelines to ensure responsible use. Without these safeguards, institutions risk adopting uneven approaches, deepening inequalities, or depending on insecure public GenAI tools. To address these concerns, the authors recommend that universities publish their own institutional policies and provide training to staff on the responsible use of AI in research assessment.
REF2029 Proposals
The report's recommendations include establishing an overarching national AI governance framework for REF2029 and developing a shared, high-quality GenAI platform that all universities can access when carrying out research assessment. Such systems would also inform funding decisions through AI-based evaluation frameworks, alongside investment in secure institutional review platforms. These measures would help prevent fragmented and inequitable uses of AI in funding allocation.
International Context
Internationally, similar exercises, such as Excellence in Research for Australia and New Zealand's Performance-Based Research Fund, have recently been discontinued. According to the authors, this gives the UK a fresh opportunity to position itself globally in this reform by identifying both the strengths and the weaknesses of AI in academic research assessment.
Sector Response
Sector leaders echoed the report's position, stating that the right safeguards could help maintain excellence and ensure fairness as the UK adapts to new technologies. The University of Bristol emphasised that robust assessment still matters and that discussions about AI must be informed, balanced, and responsible.
Broadly, the report sees significant promise in GenAI for improving the quality and efficiency of research review, but it stresses that adoption must be gradual, transparent, and subject to strong national governance. Its findings should spur thoughtful modernisation across the UK higher education sector while balancing innovation with fairness and accountability.
Editor’s Note:
This latest report shows how artificial intelligence is already entering the evaluation of research quality at British universities. It states that AI may speed up research assessment and reduce its costs, but also notes that many academics have reservations and are calling for strong national regulation. Brexit changed the relationship universities had with Europe, and AI now stands to bring an even greater change. Some universities have begun experimenting with generative AI for case studies and submissions, while others are creating in-house models. At the same time, many staff continue to express concerns over fairness and trust, along with the potential for increased bureaucracy. The report makes it very clear that there need to be national standards, open rules, and human oversight in the application of AI. It states that it is indeed possible to build common platforms that give all universities equal benefits and curb uneven practices.
Skoobuzz believes that artificial intelligence offers real opportunities to improve research assessment, but it should be introduced cautiously. With appropriate safeguards, it could help modernise the system, saving time and money, while positioning British universities strongly in an ever-changing world.
FAQs
1. Can AI help universities assess research more fairly and efficiently?
Yes, AI can help universities by making research assessment faster and less costly. It can handle routine tasks such as gathering evidence, checking submissions, and organising data. However, fairness depends on strict rules and human oversight, since AI alone cannot judge the quality of research.
2. What are the risks of using AI for research evaluation?
The main risks include bias in algorithms, lack of transparency, and over-reliance on automated systems. AI could also create extra bureaucracy if not managed well. Without clear governance, universities may adopt uneven practices, which could deepen inequalities.
3. Will generative AI replace human peer review in research assessment?
No, generative AI is unlikely to replace human peer review. Peer review involves expert judgement, which AI cannot fully replicate. Instead, AI may support reviewers by reducing administrative work and helping with case studies, but human oversight will remain essential.
4. How much could AI reduce the cost of research evaluation for universities?
The report suggests AI could lower costs by cutting down time-consuming tasks. For example, REF2021 cost £471 million, with an average of £3 million per university. Using AI could ease this burden, though it is not a complete solution, and savings will depend on how responsibly it is applied.
5. Is AI accepted among academics for evaluating research outputs?
Acceptance is mixed. Surveys show that between 54% and 75% of academics strongly oppose using AI in most parts of research assessment. Support is higher, at around 23%, for using AI to help produce impact case studies. Attitudes vary by discipline, with the arts and humanities showing the strongest mistrust.




