The Next Big Thing in Research Assessment

The Australian Research Council sets out how it wants to measure research performance in its submission to the Universities Accord Interim Report.

“Recent advances in technology are approaching the point where it will be possible to transition to a data driven framework that will relieve the reporting burden for universities …a new framework offers opportunities to address other challenges – such as improving our capacity to answer new kinds of questions and to provide advice for the future,” the ARC says.

Specific issues include:

  • smart data harvesting with less effort: including “develop suitable engagement and impact indicators and refine existing university reporting requirements”
  • data curation: “to ensure data accuracy and organise vast amounts of research data into meaningful topics or disciplines … we have been working with data providers to develop an Artificial Intelligence algorithm to do this automatically” (a rough sketch of what that kind of automated classification can look like follows after this list)
  • research impact: “co-design with universities more effective and streamlined practices to develop case studies”
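
The ARC does not describe its algorithm. As a rough, purely hypothetical illustration of the general technique, the Python sketch below trains a small text classifier (TF-IDF features with logistic regression via scikit-learn) that maps publication titles to ANZSRC Field of Research divisions; the training examples and labels are invented for illustration and are not drawn from the ARC's or its data providers' work.

```python
# Hypothetical sketch of automated discipline tagging -- not the ARC's algorithm.
# Maps free-text publication titles to ANZSRC Field of Research (FoR) divisions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: (title, FoR division label).
titles = [
    "Deep learning for protein structure prediction",
    "Reinforcement learning for robotic control systems",
    "Monetary policy and inflation expectations in small open economies",
    "Labour market effects of minimum wage increases",
]
labels = [
    "46 Information and Computing Sciences",
    "46 Information and Computing Sciences",
    "38 Economics",
    "38 Economics",
]

# TF-IDF turns titles into term-weight vectors; logistic regression learns
# which terms signal which discipline.
model = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(titles, labels)

# Classify a new, unlabelled title.
print(model.predict(["Self-supervised learning for robotic grasping"]))
```

A real system would work at a vastly larger scale and draw on richer metadata (abstracts, citations, journal information), but the principle of assigning outputs to disciplines automatically is the same.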

How they got this far

Last year the ARC appointed an expert group to advise on data-driven research assessment, to replace the cumbersome and evidence-intensive Excellence in Research for Australia (ERA) and Engagement and Impact reports.

The group includes Cameron Neylon from the Curtin Open Knowledge Initiative (COKI), which has created research performance metrics based on open data.

“National assessment exercises, as well as many higher education providers, continue to rely on traditional, proprietary data sources for performance evaluation. Open data sources are competitive against proprietary counterparts and offer the potential for greater transparency, access, accuracy and completeness,” Neylon and colleagues wrote last year (Campus Morning Mail, September 21 2022).

The ARC does not specify the model it has in mind, but COKI demonstrates that a methodology built on openly harvestable data can work.
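
To make “scrapable” concrete: openly licensed scholarly metadata can be pulled programmatically from public services such as the OpenAlex REST API. The Python sketch below is not COKI's pipeline, just an illustration that assumes OpenAlex's documented works endpoint and filters (institutions.ror, publication_year) behave as its public documentation describes; the ROR identifier is a placeholder to be replaced with a real one.

```python
# Minimal sketch of harvesting open publication metadata -- not COKI's pipeline,
# just an illustration using the public OpenAlex API.
import requests

OPENALEX_WORKS = "https://api.openalex.org/works"

def fetch_institution_works(ror_id: str, year: int, per_page: int = 25) -> list[dict]:
    """Fetch one page of works affiliated with an institution, identified by ROR ID."""
    params = {
        "filter": f"institutions.ror:{ror_id},publication_year:{year}",
        "per-page": per_page,
    }
    response = requests.get(OPENALEX_WORKS, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["results"]

if __name__ == "__main__":
    # Placeholder: replace with the ROR identifier of the institution of interest.
    works = fetch_institution_works("https://ror.org/00000example", 2022)
    for work in works:
        print(work.get("display_name"), "-", work.get("cited_by_count", 0), "citations")
```

From metadata like this (output counts, citation counts, open-access status), institution-level indicators can be aggregated without relying on proprietary databases.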

What’s in it for everybody

For universities: “a system-wide understanding of performance, which is especially valuable to smaller or newer universities without the capacity to conduct their own evaluations. Priority evaluations allow for robust testing and promotion of priority areas.”

A data-based system would also save universities a great deal of time previously spent on classifying research and writing submissions – a key consideration for Education Minister Jason Clare when he cancelled ERA “in light of the sector’s concerns about workload” (CMM August 31 2022).

For government: “a variety of targeted evaluations based on government priorities,” and in the long term, “system-wide evaluation of Australian research performance, for public accountability and policy purposes. It will also provide options for funding allocation, if required.”

For the research community: “a programme that showcases evidence of research quality and impact”; this, the ARC says, “could build public appreciation of research through outreach and strategic communications. It would leverage the outcomes of priority evaluations and analytics derived from the improved data infrastructure.”

So will there be an Accord imprimatur? Who knows, although in their interim report Mary O’Kane and colleagues state they are “giving further consideration” to “improving the measurement of the quality and impact of Australian research, including by deploying advances in data science to develop a ‘light touch’ automated metrics-based research …”.
