“If peer review were a drug, it wouldn’t be allowed on the market because it has not been rigorously tested,” argue authors including Adrian Barnett from QUT in a new paper on research funding models.
The researchers critique various ways money is handed out and propose research topics on ways to decide who gets what. “We can think of no other industry that spends so little on evidence-based quality control and process improvement,” they write.
And so they propose:
- Reporting data on program success rates, including amounts distributed and the applicant pool.
- Results on the reliability of funding decisions, particularly across disciplines, with awareness of the “gameability” of indicators.
- Alternative evaluation. This could include merit indicators drawn from bibliometric data and citation counts, which would “democratise the evaluation process, ensuring that the direction of science is not dictated solely by a selected few.”
- Alternative distribution. For example, granting organisations could set themes to which researchers apply, or run lotteries among peer-reviewed applicants.
- Data on the cost of competition for grants by different disciplines and for applied and basic research.
- Assessing the “epistemological costs” of variously funding “risky research,” “normal science,” and supporting “proliferation and variety.”
- Considering social costs: “there is little research on the social costs incurred by individuals who miss out on grant competitions.”
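The lottery mechanism mentioned above is sometimes called a “modified lottery”: peer review screens for a quality threshold, and grants are then drawn at random from the shortlist. A minimal sketch of how such a draw might work is below; the function name, score scale, and threshold are illustrative assumptions, not anything specified in the paper.

```python
import random

def funding_lottery(applicants, review_scores, budget, grant_size,
                    threshold=3.0, seed=None):
    """Illustrative modified lottery: peer review acts only as a
    quality filter; awards among eligible applicants are random.
    All parameter names and the 0-5 score scale are hypothetical."""
    rng = random.Random(seed)
    # Shortlist everyone who clears the review-score threshold.
    shortlist = [a for a in applicants if review_scores[a] >= threshold]
    # Draw winners at random rather than ranking the shortlist.
    rng.shuffle(shortlist)
    n_grants = budget // grant_size
    return shortlist[:n_grants]
```

A draw seeded for reproducibility, say `funding_lottery(["A", "B", "C", "D"], {"A": 4.5, "B": 2.0, "C": 3.5, "D": 4.0}, budget=200, grant_size=100, seed=1)`, awards two of the three applicants who cleared the threshold; which two is deliberately left to chance.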
The authors suggest “greater funding dispersal is likely to be beneficial” – but they don’t know which distribution system would be best, which itself makes a strong case that further research on research is needed.