The Commonwealth’s two major reviews are split on how to measure research performance. OA analytics might make the argument obsolete.
Research did not get many mentions in the Universities Accord Interim Report, but while the Accord team appears to assume that measurement of research performance is settled, others have different views.
Mary O’Kane and colleagues state they are “giving further consideration” to “improving the measurement of the quality and impact of Australian research, including by deploying advances in data science to develop a ‘light touch’ automated metrics-based research quality assessment system.”
Which is not exactly what the Sheil Review of the Australian Research Council Act appears to have in mind for reporting research performance.
QUT VC Margaret Sheil and colleagues recommend ending the metrics-driven Excellence in Research for Australia, and the ERA-influenced Engagement and Impact assessment, because “metrics can be biased or inherently flawed in the absence of expert review and interpretation.”
This, they suggest, is what the system wants, pointing to 22 submissions to their review that called for data-driven research reviews, but adding that, “when asked directly, 12 per cent of total submissions agreed that a data-based assessment would be helpful in evaluating Australian research outcomes.”
And so, “we are explicitly not recommending ERA and EI be replaced by a so-called light touch metrics-based exercise.”
Instead of data-driven metrics, they propose leaving research measurement to the ARC itself, recommending it “develops a framework for regular evaluation and reporting on the outcomes of the NCGP program over a timeframe that allows the full impact of research funding to be assessed and the public benefit explained.”
Assuming Professor O’Kane and colleagues do not change their minds in the final report, this could present Education Minister Jason Clare with a challenge in deciding which approach will meet his requirement for “impact data to enhance the reporting on the impact value of grants funded so that more robust evaluations of ARC funded programmes and initiatives can be undertaken” (Campus Morning Mail, August 31 2022).
But it won’t be the existing ERA, which Mr Clare put on hold before the Sheil review, citing universities’ concerns about its labour-intensive, citation-driven process.
So what does the Accord team have in mind? Perhaps something like the project from the Curtin Open Knowledge Initiative (COKI), which uses public datasets to create “ERA-like benchmarks and indicators.”
“National assessment exercises, as well as many higher education providers, continue to rely on traditional, proprietary data sources for performance evaluation. Open data sources are competitive against proprietary counterparts and offer the potential for greater transparency, access, accuracy and completeness …” the COKI team state (Campus Morning Mail, September 21 2022).
Given the data used is open access, perhaps the ARC could do the work instead of universities. Last September the ARC appointed a working group, which included COKI’s Cameron Neylon, “to transition ERA to a modern data driven approach.” It was due to report in December, but if it did, whatever it came up with was not released. However, work is still under way: the ARC reports a plan “has been developed” and “is now being considered alongside the Independent Review of the ARC.”
Something inspired by the COKI model for an open access research metric would help the ARC deliver what both Professors O’Kane and Sheil want.