
While sector headlines revolve around international student numbers, the fact that no research in Australia has been measured or evaluated by a national system for seven years continues to be swept under the carpet.
In the absence of a national measurement system or strategy for research, the sector relies on flawed ranking systems built on opaque data that cannot be replicated. This systemic weakness is conveniently neither of interest to nor understood by the public and, it appears, not a priority for successive Governments.
At the same time, the charm of generative AI doing all the reading for you, skimming research papers and highlighting the few the algorithm deems best for your needs, is reshaping usage patterns for research publications.
All of which makes a recent article by Stuart Macdonald from the University of Leicester particularly interesting – posing the uncomfortable question of what authorship really means if your life’s work is reduced by rankings systems, AI or internal promotion mechanisms to a bunch of citable tokens.
Will academia be reduced to a race for highly visible citation tokens – like a nerdy version of Pokemon GO, where staff are rewarded for ‘catching ’em all’ rather than for the quality of the creation and dissemination of knowledge?
“The role of the author in academic publishing is not quite what it might seem,” Professor Macdonald writes.
“Gone are the days when academics simply conducted research and published their findings. Now their papers are less valued for their content than for providing measures of academic performance. Citation is chief of these.”
“‘Publish or perish’ is misleading: academics perish if they are not cited. The academic paper is primarily a platform for citation. Wrong citations (inappropriate, irrelevant or simply non-existent) count just as much as right citations, and many citations are wrong – not really surprising when 80% of authors have never read the papers they cite.”
The 80% figure is quite extraordinary – and comes from a 2003 UCLA study, well before ChatGPT clocked on to start skimming papers for us.
The idea that the best papers are those most highly cited is deeply questionable, Professor Macdonald says.
“What was once the most cited paper of all is about cleaning test tubes, while the paper announcing the double helix, probably the most important discovery in biology for a century, was rarely cited for more than a decade.”
“The most ruthless players are often those with a standing to maintain – prestigious universities, reputable journals, distinguished academics, established publishers. For instance, coercive citation (editors making citation of their own journals a condition of publication) is particularly prevalent in top journals. Over 90% of their authors comply. Many journals expect something like 60% of a submission’s citations to be of the journal’s own papers.”
The examples of gaming the citation system are legion, notes Professor Macdonald, culminating in a fundamental flaw – scholarship is irrelevant to most rankings, with citations instead used as a proxy.
This paper should – but won’t – be mandatory reading for the guardians of Australia’s research activity. It mounts a powerful argument about a global problem with research evaluation, one that has contaminated the priorities, promotion processes and performance management of academic staff and institutions.
For a nation that has puttered along for years without any approach to replace the ERA system, it also presents an opportunity to reconsider what really matters when evaluating research impact.