Research funding systems: more research is (really) needed

There is far more to analysing how research funding works than listing who gets what. An international team, including Adrian Barnett from QUT, set out to identify the big questions.

“Are countries with a higher share of competitive funding less good at simultaneously supporting risky research, normal science, variety, and stimulating the promising parts?” is one.

How to identify research that goes nowhere is another, given that “low-yield and fruitless ideas are still defended by communities of scientists and organizations who have made careers based on them.”

They don’t have the answers, but they do set out ways to investigate the issues that have to be addressed first.

Their proposals on the way resources are allocated seem obvious once clearly set out, and they address fundamental issues in the research system. There is a weather-like quality to complaints about the cost of competing for grants: nobody knows enough about how measures of achievement are set and outcomes determined to do anything about them.

And the established orthodoxy for deciding who gets what is not proven. “If peer review were a drug, it wouldn’t be allowed on the market because it has not been rigorously tested,” they write.

But instead of proposing alternative and unproven ways to allocate research funding, they set out issues that must first be addressed:

  • Data: on success rates, the applicant pool, and which promises in pitches actually deliver.
  • Reliability and predictive validity of decisions across disciplines, and who to include on selection panels, plus the impact of “noise and bias in decisions.” Some outcomes, for example “a nation’s contribution to technological leadership, or team leadership,” are ones bibliometrics cannot identify, although the authors acknowledge other indicators may also be “gameable.”
  • Alternative evaluation: less peer review and more assessment by theory-based, field-adjusted, purpose-built metrics. For a start they are easier to use, plus they “would in a way democratize the evaluation process, ensuring that the direction of science is not dictated solely by a randomly selected few.”
  • Different distribution: fewer investigator-initiated applications and more responses to funder-set challenges. Also lotteries, say to pick between peer-reviewed applications (a simple sketch of how a draw could work follows this list). Whatever is done, it needs to spread money around.
  • Why winners win: “has one funding organization proved to be better at funding breakthroughs than another organization?”
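
On the lottery idea, here is a minimal sketch of how a draw among applications that have already cleared peer review might run. It is illustrative only: the application fields, the budget rule and the seed are assumptions for the example, not anything set out in the paper.

```python
import random

def funding_lottery(shortlist, budget, seed=None):
    """Randomly fund applications from a peer-reviewed shortlist
    until the budget runs out. Purely illustrative."""
    rng = random.Random(seed)       # a fixed seed makes the draw auditable and repeatable
    pool = list(shortlist)
    rng.shuffle(pool)               # every shortlisted application gets an equal chance
    funded, remaining = [], budget
    for app in pool:
        if app["cost"] <= remaining:  # skip applications that no longer fit the budget
            funded.append(app["id"])
            remaining -= app["cost"]
    return funded, remaining

# Hypothetical example: ten applications that all cleared peer review, $2m to allocate
shortlist = [{"id": f"APP-{i}", "cost": 400_000} for i in range(10)]
winners, left_over = funding_lottery(shortlist, budget=2_000_000, seed=42)
print(winners, left_over)  # five winners, nothing left over
```
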

But there is one question they seem sure about, at least for now: AI isn’t the answer, certainly not while it keeps hallucinating. “Without robust evidence that AI provides substantial, clear benefits, overconfidence in its abilities and the surrounding hype may even have detrimental consequences,” they warn.
