Margaret Sheil offers an observation about career metrics that could appropriately be named “Sheil’s Law of Performance Measures.”
“The more seemingly precise or informative a new indicator, especially if applied as a solution, the less scrutiny there is of the quality and opportunities for individuals to whom it is applied.”
It sets the context for comments by the QUT VC, in a paper for Elsevier, on metrics for career achievement and national research output.
When it comes to measuring individual merit, it is something she saw close-up as a young scientist, watching the ways research metrics discriminate against people with interrupted or non-linear careers, “mostly women with caring responsibilities.”
“Talent is broadly distributed; opportunity is not. So whether we are selecting for admission to our universities or recruiting staff, we must not start with the assumption that each has the same opportunity to develop or demonstrate their ability,” she writes.
But performance relative to opportunity is not easily identified – the H index may be an improvement on a straightforward list of publications, but it still does not adjust for people with less time for citations per paper to pile up. And even at the ECR stage, women do worse, because men get better breaks – opportunities for first authorship, for example – which show up in the metrics.
Professor Sheil points to problems with metrics at the macro level – the reporting consequences of the late, if not (by her) lamented, Excellence in Research for Australia, developed while she led the Australian Research Council. “A challenge for a national research measurement exercise is that it is impossible to not influence what you are trying to measure,” she warns, adding that ERA should not be replaced by a metrics-only assessment – a view which may not prevail in the current debate on post-ERA assessment.
So how to defeat the seductive power of metrics, which “provide the illusion of accuracy … save time and reduce the cost of assessment”?
Professor Sheil suggests DORA can deliver. The Declaration on Research Assessment rejects one-stop metrics “as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.” She writes that QUT is now working on principles “that will align with best practice in research assessment.”
It’s a stand taken in defence of equity: “we recognise the inevitability and increasing attractiveness of more sophisticated search and artificial intelligence tools, each of which may introduce discrimination and biases affecting different groups and individuals.”