Tuesday, April 24, 2018

Throw 'em down the stairs! (making grant review fair)

When I was active in the grant process, including serving as a panelist for NIH and NSF, I realized that reviews were usually scanty at best.  The workload was overwhelming, and there was a somewhat arbitrary sense that if any reviewer spoke up against a proposal, it was conveniently rejected without much, if any, discussion.  Applications are assigned to several reviewers to evaluate thoroughly, so the entire panel doesn't have to read every proposal in depth, yet each member must vote on each proposal.  Even with this division of labor, panel members simply cannot carefully evaluate the boxes full of applications for which they are responsible.  In my experience, once we got down to business, there would be some discussion of the proposals not immediately NRF'ed (not recommended for funding); but even then, with tens of applications still to evaluate, most panelists hadn't read a given proposal, and it seemed that even some of the secondary or tertiary assignees had only scanned it.  The rest of the panel usually sat quietly and then voted as the assigned readers recommended.  Obviously (sssh!), much of the final ranking rested on superficial consideration.

When a panel has a heavy overload of proposals it is hard for things to be otherwise, and one at least hoped that the worst proposals got rejected, that those with fixable issues were given some thoughtful suggestions for improvement and resubmission, and that the best ones were funded.

But there was always the nagging question of how true that hopeful view was.  We used to joke that a better, fairer reviewing system would be to put the proposals to the Stairway Test: throw them down the stairs, and the ones that landed closest to the bottom would be funded!

Well, that was a joke about the apparent fickleness (or, shall we say randomness?) of the funding process, especially when busy people had to read and evaluate far, far too many proposals in our heavily overloaded begging system, in which not just science but careers depend on the one thing that counts: bringing in the bucks.
The Stairway Test (technical criteria)

Or was it a joke?  A recent analysis in PNAS showed that randomness is perhaps the best way to characterize the reviewing process.  One can hope that the very worst proposals are rejected, but as for the rest.....the evidence suggests that the Stairway Test would be much fairer.

I'm serious!  Many faculty members' careers literally depend on the grant system.  Those whose grants don't get funded are judged to be doing less worthy work, and loss of a job can be the direct consequence, since many positions, especially in biomedical schools, depend on bringing in money (in my opinion a deep sin, but in the context of our venal science-support system, not an avoidable one).

The Stairway Test would allow those who did not get funding to say, quite correctly, that their 'failure' was one not of quality but of luck.  Deans and chairs would, properly, be less able to terminate jobs for failure to secure funding if they could not claim that the victim did inferior work.  The PNAS paper shows that the real review system is, in fact, no different from the Stairway Test.

So let's be fair to scientists, and to the public, and acknowledge honestly the way the system works.  Either reform the system from the ground up, to make it work honorably and in the best interest of science, or adopt a formal recognition of its broken nature: the Stairway Test.

1 comment:

Ken Weiss said...

And here's a new story in Nature (you know, the grocery check-out gossip magazine) that is related: https://www.nature.com/articles/d41586-018-04958-9?WT.ec_id=NATURE-20180426&utm_source=nature_etoc&utm_medium=email&utm_campaign=20180426&spMailingID=56487811&spUserID=MjM5NTM2MDQwOTg3S0&spJobID=1383950517&spReportId=MTM4Mzk1MDUxNwS2

More evidence of the luck of the draw, in this case the first-grant draw. When will reform happen?