Peer review is the “gold standard” for allocating research grants, states the National Health and Medical Research Council. The problem is, few agree on how it should work.
So the NHMRC established a Peer Review Analysis Committee (PRAC) to look at the process for the first two rounds of the Council’s new Investigator and Ideas grants.^
After analysing the data, PRAC's key conclusions include:
- There was a normal distribution of final scores, clustered around the mean;
- Small differences in scores around the cut-off determined results;
- There were marginally more “mostly generous” assessors (5 per cent) than “mostly critical” ones (2 per cent);
- There was no change in outcomes for 85 per cent of applications when specific individual assessments were left out;
- Outlier scores “may reflect the specific expertise and judgement of the assessor … training may help reduce variation but would not be expected to eliminate outliers”; and
- Influential scores, those that change outcomes if excluded, are not outliers.
The take-out: the Committee did not recommend major changes to existing processes.
However, it did call for “increased transparency”, adding: “every review counts. All assessors should be reassured of the importance of their work.”
^ Caroline Homer (UTS, chair) * Emily Banks (ANU) * Adrian Barnett (QUT) * Tony Blakely (Uni Melbourne) * Tanya Chikritzhs (Curtin U) * Philip Clarke (Uni Oxford and Uni Melbourne) * Peter Visscher (Uni Queensland) * Tania Winzenberg (Uni Tasmania)