Today a new “Open Edition” of the Leiden Ranking has been launched. But what does it mean that a ranking is “open” or “transparent”? Does it make a difference? Does anyone care? Should you care?
As someone who does research on research, I’ve always thought that the answer is self-evidently “yes”. After all, in research we hold ourselves to high standards of evidence, transparency and methodological rigour. Surely we must apply the same high standards to the evidence we use to make decisions about research?
But what I’ve learned over the years is that those principles are not enough. Universities pay attention to rankings, not because the numbers mean anything, but because they have real effects. Students pay attention to them, funders pay attention, and governments pay attention.
So, moving away from more abstract questions, what are the concrete reasons that universities, and in particular Australian universities, should care that the Leiden Ranking is now “open”? What does that mean in practice?
There are two main things that really do make a difference, and which provide real opportunities for the Australian HE sector. The first is that, for the first time, the actual underlying data for the ranking are available, right down to the individual outputs. You can see what is included, what is not, and where corrections are needed (more on that later).
But more than this, you can dig into the detail of where the opportunities lie. The citation indicators presented in the Leiden Ranking are threshold measures. In the old ranking, you would never know whether there was a whole set of outputs sitting just outside the top 10%; now you can examine all the detail. It is possible to model and test scenarios, and to use this openly available data to track progress over the course of a year rather than waiting for the next year’s results to drop.
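To make that kind of scenario modelling concrete, here is a minimal sketch that pulls an institution’s recent outputs from the OpenAlex API (the open data source behind the Open Edition) and looks at how many sit just below a citation cut-off. The ROR ID and date window are placeholders, and the within-set cut-off on raw citation counts is only a rough proxy, not the field- and year-normalised top-10% indicator the ranking itself uses.

```python
"""Sketch: how many of an institution's outputs sit just below a citation cut-off?

Assumptions: ROR ID and date window are placeholders; raw citation counts are a
rough proxy for the normalised top-10% indicator used in the Leiden Ranking.
"""
import requests

OPENALEX_WORKS = "https://api.openalex.org/works"
ROR_ID = "https://ror.org/XXXXXXXXX"  # placeholder: your institution's ROR ID


def fetch_citation_counts(ror_id: str, from_date: str = "2019-01-01") -> list[int]:
    """Page through all matching works and return their citation counts."""
    counts, cursor = [], "*"
    while cursor:
        resp = requests.get(
            OPENALEX_WORKS,
            params={
                "filter": f"institutions.ror:{ror_id},from_publication_date:{from_date}",
                "select": "id,cited_by_count",
                "per-page": 200,
                "cursor": cursor,
            },
            timeout=30,
        )
        resp.raise_for_status()
        data = resp.json()
        counts.extend(w["cited_by_count"] for w in data["results"])
        cursor = data["meta"].get("next_cursor")  # None when the last page is reached
    return counts


if __name__ == "__main__":
    counts = sorted(fetch_citation_counts(ROR_ID), reverse=True)
    threshold = counts[len(counts) // 10] if counts else 0  # naive within-set "top 10%" cut-off
    near_miss = sum(1 for c in counts if threshold * 0.8 <= c < threshold)
    print(f"{len(counts)} outputs; naive top-10% cut-off at {threshold} citations")
    print(f"{near_miss} outputs sit just below that cut-off")
```

The same loop, pointed at a different window of publication dates, is also a simple way to track movement during the year rather than waiting for the next release.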
After decades of expensive and extensive internal processes to understand and demonstrate research performance, and to contribute to reports like the now-defunct ERA, this ranking places a transparent, scientifically rigorous research performance methodology in the hands of universities, funders and the community all at once.
You can use the same methodologies to ask questions that might be better aligned with your institutional mission. What if you were more interested in the diversity of citations than the count? What if you were concerned about collaboration with civil society, or government, and wanted to compare yourself with a specific cohort of peers? Now you can, by adapting a proven and credible framework to your needs. This won’t happen overnight, and we will need to build up new literacies in the Australian HE sector to take full advantage of the opportunities, but they are there for the taking.
We can also tell different stories. Rankings are always framed around competition between institutions. But the output-level data lets us tell new stories about how Australian institutions are working together. Australian universities are great collaborators, with high levels of international and national collaboration. With the new data, we can not only identify the outputs that contributed to high performance, but also see whether they were the results of collaborations, which institutions are driving those interactions, and ultimately perhaps what is needed to make them more successful.
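As one illustration of the kind of collaboration question the output-level data supports, a short sketch, again against the OpenAlex API, counts joint outputs between two institutions. The ROR IDs are placeholders; repeating the institutions.ror filter is how the API expresses “affiliated with both”.

```python
"""Sketch: count co-authored outputs between two institutions via OpenAlex.
The ROR IDs passed in are placeholders for real institution identifiers."""
import requests


def joint_output_count(ror_a: str, ror_b: str, from_date: str = "2019-01-01") -> int:
    resp = requests.get(
        "https://api.openalex.org/works",
        params={
            # Repeating the same filter key asks for works matching both institutions.
            "filter": f"institutions.ror:{ror_a},institutions.ror:{ror_b},"
                      f"from_publication_date:{from_date}",
            "per-page": 1,  # only the total in meta.count is needed
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["meta"]["count"]


# Usage: run this across a cohort of peers to map who is collaborating with whom.
# print(joint_output_count("https://ror.org/XXXXXXXXX", "https://ror.org/YYYYYYYYY"))
```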
The availability of the detailed data, if accompanied by an effort to engage with it, offers a truly transformative opportunity to shape indicators and analyses that actually help us to plan and build a robust HE sector, one with real depth and breadth to deliver on national needs, both today and into the future. With the government signalling bold aspirations through the Accord process, it will be critical for the sector not just to react to, but to engage with and guide, the development of indicators that are fit for our purposes.
This points to the second major area of opportunity. We are used to thinking of both rankings and evaluation data as something that is done to us. In this new open data world, we all have a stake in making the data and analysis processes better. The data in the new Open Edition of the Leiden Ranking is not perfect, nor is the Web of Science data used in the traditional ranking. But if you correct your listing at the Research Organization Registry (ROR) or send a list of affiliation corrections to OpenAlex, those corrections become available to everyone globally.
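A natural first step in that correction workflow is simply checking how your institution is currently recorded. A minimal sketch, assuming the public ROR search API and its v1 response fields, with the query string as an example to substitute:

```python
"""Sketch: look up how an institution is recorded in the Research Organization
Registry before requesting corrections. Query string is an example value."""
import requests


def ror_lookup(query: str) -> None:
    resp = requests.get(
        "https://api.ror.org/organizations",
        params={"query": query},
        timeout=30,
    )
    resp.raise_for_status()
    for org in resp.json()["items"][:5]:
        print(org["id"], "-", org["name"], "- aliases:", org.get("aliases", []))


# ror_lookup("Curtin University")  # substitute your own institution
```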
If you disagree with the way that an indicator is calculated or displayed, you can test for yourself whether it makes a difference. You can put that change on your website and perhaps engage with a wider conversation about improvements in general. Or simply present the information the way you want, knowing that you can be transparent and clear about the changes you’ve made, and in doing so build a credible and trusted system of analysis.
All of this will require effort, and indeed investment (although there are also significant savings to be made, a topic for another day). But what is at stake is the difference between a future where the sector is merely reactive to outside assessments – whether they come from government or from foreign media companies – and one in which we take an active role in shaping and curating data and analytics that work for us, inform our missions, are responsive to Australian needs, and are capable of changing as those needs change.
Professor Cameron Neylon is co-convenor of the Curtin University Open Knowledge Initiative (COKI) and collaborated with CWTS to develop the Open Edition Ranking.