Thursday, June 14, 2018

Thinking about science upon entering the field. Part III: Ethics and Responsibilities

Here is the third of a four-part series of posts by Tristan Cofer, a graduate student in chemical ecology here at Penn State.  He has been thinking about the profession he is being trained for, the broader setting in which that training is taking place, and the place he will have in it:

Growing up in a medical household, I remember being more than a little impressed by what seemed to me to be the many responsibilities that physicians were expected to have towards their patients. Serving on call every third or fourth night, working weekends and holidays, not to mention the years spent in school or as a resident and intern, seemed to me to go beyond the so-called Hippocratic imperative to ‘first, do no harm’, and instead to border on an ethical mandate that one should always strive to do the most good. I am, no doubt, engaging in some hero worship here, and I concede that the extent to which this mandate actually informs a physician’s conduct (much less whether it really exists) is debatable. However, I would argue that for many people, myself included, ‘good medicine’ by and large means medicine that does the most good.

This relationship between healthcare and ethical responsibility is perhaps unsurprising, given the influence that physicians have over our, and our loved ones’, mental and physical wellbeing. Simply put, we want to know that the people we trust with the things that are most important to us are indeed trustworthy. That being said, I find it somewhat curious that, by comparison, we in the scientific community are not held to a similar ethical standard. This raises, to me, an often-unconsidered, if not outwardly ignored, question: What are our social responsibilities as scientists?

Science, like medicine, is embedded in the culture(s) in which it is practiced. It is a humanistic enterprise in that we as humans undertake it, and like everything we do, it comes with baggage that oftentimes remains unchecked. I wouldn’t claim here that scientists give no consideration to the social frameworks in which they work (that would be both unfair and untrue); only that, based on my own experiences thus far in graduate school, discussions about a scientist’s social responsibilities have been mostly self-interested, concerning internal matters such as research ethics and the like. These conversations are no doubt valuable, in that we need to know that our colleagues are doing work that we can trust and build on; however, they hardly encourage us to think beyond our rather limited responsibilities to our chosen profession.

How much, for instance, should we expect our research to reflect the public’s values and interests? Because research is typically funded by taxpayer dollars, one might argue that, by extension, it is also carried out in the public’s name. Is it, therefore, ethically reprehensible to conduct research that does not directly benefit the public in some way? Are we not also obligated to set research objectives with minority or special interest groups in mind? What happens when our interests conflict with the public’s? For example, can we defend using public funding to conduct research in evolutionary biology, knowing that some groups vehemently oppose teaching evolutionary theory?

Moreover, how should we deal with situations in which our internal responsibilities to ‘Science’ and our external responsibilities to the public are at odds with each other? Is it permissible to develop technologies that can quite literally change the world without considering the people with whom we share it? Is this even possible? Are we even the best candidates to answer these questions, or should we consult ‘outsiders’ from the humanities and elsewhere? These discussions may seem like an unnecessary hindrance to scientific advancement, and perhaps they are. But maybe that’s what we need.

Admittedly, I might be barking up the wrong tree here. Yes, Science has the potential to greatly benefit and harm the public, but so too do politics, business, and any other enterprise with deep pockets and a global reach. As a friend much smarter than I once told me, maybe ‘Science is no more than a good way to keep smart people off the street’. At the end of the day, we all need to make a living, and conversations like these have the potential to make that harder to do. For better or worse, there is considerable pressure (both external and self-imposed) on scientists to do whatever they need to do in order to bring in grants, publish to get tenure and advance their careers, and appease the powers-that-be to protect their self-interests. Most people either don’t want to, or can’t, risk rocking the proverbial boat—especially when there is little precedent for doing so.

A new biomedical insight?

Here is a thoughtful and timely quote:
". . . . as no single disease can be fully understood in a living person; for every living person has his individual peculiarities and always has his own peculiar, new, complex complaints unknown to medicine—not a disease of the lungs, of the kidneys, of the skin, of the heart, and so on, as described in medical books, but a disease that consists of one out of the innumerable combinations of ailments of those organs. This simple reflection can never occur to doctors . . . . because it is the work of their life to undertake the cure of disease, because it is for that that they are paid, and on that they have wasted the best years of their life.  And what is more, that reflection could not occur to the doctors because they saw that they unquestionably were of use . . .  not because they made the patient swallow drugs, mostly injurious (the injury done by them was hardly perceptible because they were given in such small doses). They were of use, were needed, were indispensable in fact (for the same reason that there have always been, and always will be, reputed healers, witches, homÅ“opaths and allopaths), because they satisfied the moral cravings of the patient . . . . They satisfied that eternal human need of hope for relief, that need for sympathetic action that is felt in the presence of suffering, that need that is shown in its simplest form in the little child, who must have the place rubbed when it has hurt itself. The child . . . . feels better for the kissing and rubbing. The child cannot believe that these stronger, cleverer creatures have not the power to relieve its pain. . . ."
The language seems a bit arcane, and this is a translation, but its cogency in relation to today's Big Data feeding frenzy is clear.  People who are ill, or facing death, will naturally grasp at whatever straws may be offered them.  In one way or another, this has been written about even back to Hippocrates.

Of course, palliation or cure of those disorders that can be eased or cured should be the first order and obligation of medicine.  Where nothing like that is clearly known, trials of possible treatments are surely in order, if the patient understands at least the basic nature of the research--for example, that some participants are being given placebos while others receive the treatment under investigation.  Science doesn't know everything, and we often must learn the hard way, by trial and error.

Given that, perhaps the most important job of responsible science is to temper its claims, and to offer doses of the reality that life is a temporary arrangement, and that we need to get the most out of that bit of it we are privileged to have.  So research investment should be focused on tractable, definable problems, not grandiose open-ended schemes.  But promises of the latter are nothing new to society (in medicine or other realms of life).

The problem with false promises, by preachers of any type, is that they mislead the gullible, and in many cases this is known by those making the promises--or could and should be known.  The role of false promise in religion is perhaps debatable, but its role in science, while understandable given human ego and the struggle for attention, careers, and funding, is toxic.  People suffering from poverty, hardship, or disease seek and deserve solace.  But science needs to be protected from the temptations of huckstering, so that it can do its very important business as objectively as is humanly possible.

By the way, the quote is from about 150 years ago, from War and Peace, Tolstoy's 1869 masterpiece about the nature of causation in human affairs.

Wednesday, May 9, 2018

Are common diseases generally caused by 'common' variants?

The funding mantra of genomewide mapping is that common variants cause common disease (CV-CD).  This was convenient for HapMap and other association-based attempts to find genetic causation.  The approach didn't require very dense genotyping or massive sample sizes, for example. Normally, based on Mendel's widely known experiments and so on, one would expect anything 'genetic' to run in families; however, because of notions like low penetrance--a low probability of having the trait even if you've inherited the causal variants--small nuclear families can't work, as a rule, and big enough families would be too costly or even impossible to ascertain.  In particular, for traits due to the effects of many genes or environmental factors, and/or of variants with only weak effects, family studies would not really be practicable.

So, conveniently, when DNA sequencing on a genomewide scale became practicable, the idea was that sequence variants might not have wholly determinative effects but the effects might be enough that we just need to find them in the population as a whole, not the smallish families that we have a hard enough time ascertaining.  People carrying such a variant would have a higher probability of showing the trait.

It was a convenient short-cut, but there is a legitimate evolutionary rationale behind this:  the same mutation will not recur very often, so if there are many copies of a causative allele (sequence variant) in a population, they are probably identical by descent (IBD), from a single ancestral mutational event.  In that sense, genomewide association studies (GWAS) are finding family members carrying the same allele, but without having to work through the actual (inaccessible, very large, multi-generation ancestral) pedigrees that connect them.  If the IBD assumption were not basically true, then different instances of the same nucleotide change would have different local genomic backgrounds, and the effects would often vary among the descendants of different mutations, affecting association tests, though the analysis rarely, if ever, attempts to detect or adjust for this.

In principle it can work well if a trait really is caused by alleles at a tractably small number of genes.  That's a very big 'if', but assuming it, which is similar to assuming the trait is a classical Mendelian trait, one can find association of the allele with the trait among affected people, because, at that site at least, they are distant relatives.  The effect of a given allele does, however, have to be strong enough, and its frequency in the sample high enough, to pass a statistical significance test.  This is a potentially major issue: in very large samples searching countless sites across the genome, reaching significance demands many observations, which in effect requires an allele's frequency and/or individual impact to be high.
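To make that concrete, here is a minimal back-of-the-envelope sketch of the standard two-proportion sample-size calculation at the conventional genome-wide threshold of p < 5e-8.  All the numbers (allele frequencies, effect size) are illustrative assumptions, not estimates from any particular study:

```python
# Hedged sketch: rough sample size needed for a weak common variant to reach
# genome-wide significance, using the normal approximation for comparing
# allele frequencies between cases and controls.

def gwas_sample_size(p_control, p_case, z_alpha=5.45, z_power=0.84):
    """People per group for a two-proportion test.

    z_alpha ~ 5.45 corresponds to a two-sided p-value of 5e-8 (the usual
    genome-wide threshold); z_power ~ 0.84 gives about 80% power.
    """
    diff = p_case - p_control
    var = p_control * (1 - p_control) + p_case * (1 - p_case)
    n_alleles = var * ((z_alpha + z_power) / diff) ** 2
    return n_alleles / 2  # two alleles per diploid person

# A 'common' allele (30% frequency in controls) with a weak effect
# (32% in cases) -- a typical GWAS-scale signal:
people_per_group = gwas_sample_size(0.30, 0.32)
print(round(people_per_group))  # on the order of 20,000 cases (and as many controls)
```

Halving the frequency difference roughly quadruples the requirement, which is why ever-weaker effects drive the push toward ever-bigger samples.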

In essence, this is the underpinning and implicit justification for the huge GWAS empire.  There are many details, but one important assertion by the leaders of the new EverBigger (and more costly) AllOfUs project is that common diseases are their target.  Rare diseases generally just won't show up often enough to find statistically reliable 'hits'.

Of course, 'common' is a subjective term, and if one searches millions of genome sites whose allele frequencies vary in the sample, tons of them might be 'common' by such a definition.  And they will also have to have strong enough effects to be detectable under suitably convincing significance criteria.  So we might expect CV-CD to be a proper description of such studies.  But there is a subtle difference: the implication (and once, 20 years ago, the de facto expectation) was that one or a few common variants cause the common disease.

Obviously, if that assumption of convenience were roughly true, then one could think of pharmaceutical or other preventive measures to target the causal variants in these genes in affected persons.  In fact, we have largely based the nearly 20-year GWAS effort on such a wedge rationale, starting with smaller-scale projects like HapMap.  Unfortunately, that was a huge success!

Why unfortunately?  Because, no matter how you define 'common', what we've clearly found, time and again, trait after trait, is that these common diseases are in each case due to the effects of a different set of 'common' alleles whose effects are individually weak.  In that sense, an individual allele per se is not very predictive, because many unaffected people also carry that allele.  Every case is genetically unique, so one Pharma does not fit all.  It is, I would assert, highly misrepresentative if not irresponsible to suggest otherwise, as the common PR legerdemain does.

Instead, what we know very clearly is that in many or most 'common' disease instances, since each case is caused by different sets of alleles, not only is each case causally unique, but no one allele is, in itself, nearly necessary for the disease.  There isn't usually a single 'druggable' target of Pharma's dreams.  There was perhaps legitimate doubt about this 20 years ago when the adventure began, but no longer.

Indeed, it is generally rare for anything close to a majority of cases, compared to controls, to share any given allele, and even when that happens, the risk, as statistically estimated by comparing cases and controls, is usually only slightly attributable to that allele's effects.  Even then, most variation is typically not accounted for, as measured by the trait's estimated heritability, because it seems due to a plethora of alleles too weak or rare to be detected in the sample, even if they're there and are, collectively, the greatest risk contributor. And, of course, we've not mentioned lifestyles and other environmental factors, nor the often largely non-overlapping results from different populations, nor various other factors.

The non-Mendelian Mendelian reality of life
I think that as a community we were led into this causal cul de sac by taking Mendel too literally, or too hopefully.  To be sure, some traits are qualitative--they appear in two or a few distinct states, like green vs yellow peas--and these are basically the kinds of traits Mendel studied, because they were tractable.  In such cases each gene transmits in families in a regular way that, in his honor, we call 'Mendelian'.  And human genetics had great success identifying such traits and their causal genes (cystic fibrosis is one well-known example, but there have been many others).  However, common diseases are generally not caused by individual alleles at single genes.  Quantitative geneticists, such as agricultural breeders, have basically known about the complexity of most traits for a century, even if specific contributing genes couldn't be identified until methods like GWAS came along 15-20 years ago.

Since we know all this now, from countless studies, it is irresponsible to hijack huge funding for more and more of the same, based on a CV-CD promise that neither the public nor many investigators understand (or, if they do, dare acknowledge).  One might go farther and suggest that this makes 'CV-CD' a semantic shell-game that the Congress and public are still buying--bravely assuming that the administrators and scientists who are pushing this view actually understand the genomic (and environmental) landscape.

NIH Director Collins is busy and has to worry about his institute's budget.  He may or may not know the kinds of things we've mentioned here--but he should!  His staff and his advisors should!  We have not invented these facts, no matter whether we've explained them fully or precisely enough, and we have no vested interest in the viewpoint we're expressing.  But the evidence shows that research should now be capitalizing, so to speak, on what we've actually learned from the genomic mapping era, rather than just doing more of the same, no matter how safe that is for careers (a structural problem that society should remedy).

Instead of ever more wheel-spinning, what we really need is new thinking, different rather than just more of the same Big Data enumeration.  Until new ideas bubble up, neither we nor anyone else can specify what they should be.  Continuing to pay for ever bigger data serves several immediate interests very well: the academic enterprise whose lifeblood includes faculty salaries and overhead funding for research done in their institution, the media and equipment suppliers who thrive on ever-biggerness, and the administrators and scientists whose imagination is too impoverished to generate some actual ideas.  More is easier, more insightful is very much harder.

So, yes, common diseases are caused by common variants--tens or hundreds of them!  Enumerating them is becoming a stale, repetitive, costly business, and maybe 'business' is the right word.  The public is paying for more, but in a sense getting less.  Until some day, someone thinks differently.

Sunday, May 6, 2018

"All of us" Who are 'us'?

So the slogan du jour, All Of Us, is the name of a 1.4 billion dollar initiative being launched today by NIH Director Francis Collins.  The plan is to enroll one million volunteers in this mega-effort, the goal of which is, well, it depends.  It is either to learn how to prevent and treat "several common diseases" or, according to Dr Collins who talked about the initiative here, "It's gonna give us the information we currently lack" to "allow us to understand all of those things we don't know that will lead to better health care." He's very enthusiastic about All of Us (aka Precision Medicine), calling it a "national adventure that's going to transform medical care."  This might be viewed in the context of promises in the late 1900s that by now we'd basically have solved these problems--rather than needing ever-bigger longer-term 'data'.

And one can ask how the data quality can possibly be maintained if medical records of whoever volunteers vary in their quality, verifiability, and so on.  But that is a technical issue.  There are sociological and ontological issues as well.

All of Us?
Serving 'all of us' sounds very noble and representative.  But let's see how sincere this publicly hyped promise really is.  Using very rough figures, which will serve the point, there are 320 million Americans, so 1 million volunteers would be about 0.3% of 'all' of us.  First, then, we might ask: What about achieving some semblance of real inclusive fairness in our society, by making a special effort to oversample African Americans, Hispanics, and Native Americans, before the privileged, mainly white, middle class get their names on the rolls?  That might make up for past abuses affecting their health and well-being.

So, OK, let's stop dreaming but at least make the sample representative of the country, white and otherwise.  Does that imply fairness?  There are, for example, about 300,000 Navajo Native Americans in the country.  If All Of Us means what it promises, there would be about 950 Navajos in the sample.  And about 56 Hopi tribespeople.  And there are, of course, many other ethnic groups that would have to be included.  Random (proportionate) sampling would include about 600,000 'white' people in the sample.
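The arithmetic behind these figures is easy to reproduce.  A quick sketch, using the post's own rough, Google-level population estimates (they are illustrative assumptions, not official counts):

```python
# Proportional-sampling arithmetic for the All Of Us sample sizes above.
US_POP = 320_000_000   # rough US population
SAMPLE = 1_000_000     # planned number of volunteers

def proportional_share(subpop):
    """Expected count of a subpopulation in a proportionate random sample."""
    return SAMPLE * subpop / US_POP

print(f"sample as share of 'all of us': {SAMPLE / US_POP:.2%}")        # ~0.31%
print(f"Navajo (pop ~300,000): ~{proportional_share(300_000):.0f}")     # ~940
print(f"'white' (pop ~192 million, assumed): ~{proportional_share(192_000_000):,.0f}")  # ~600,000
```

The ~192 million figure is back-calculated here to match the post's ~600,000 estimate; the point survives any reasonable choice of inputs: proportionate sampling leaves small groups with a few hundred participants at most.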

These are just crude subpopulation counts from superficial Google searching, but the point is that in no sense is the proposed self-selected sample of volunteers going to represent All Of Us in anything resembling a fair distribution of medical benefits.  You can't get as much detailed genomewide (not to mention environmental) data from a few hundred sampled individuals as from hundreds of thousands.  To be fair and representative in that sense, the sample would have to be stratified in some way rather than volunteer-based.  It seems very unlikely that the volunteers who will be included are in any real sense going to be representative of the US, rather than, say, of university and other privileged communities, major cities, and so on--even if not because of intentional bias, but simply because such people are more likely to learn of All Of Us and to participate.

Of course, defining what is fair and just is not easy.  For example, there are far more Anglo Americans than Navajo or Hopi, so the Anglos might expect to get most of the benefits.  But that isn't what All Of Us seems to be promising.  To get adequate information from a small group, given the causal complexity we are trying to understand, that group should probably be heavily oversampled.  Even doing that would leave room for enough samples from the larger Anglo and African-American populations to be adequate for the kind of discovery we could anticipate from this sort of Big Data study of causes of common disease.

More problems than sociology
That is the sociological problem of claiming representativeness of 'all' of us.  But of course there is a deeper problem that we've discussed many times, and that is the false implied promise of essentially blanket (miracle?) cures for common diseases.  In fact, we know very well that complex causation, of the common diseases that are the purported target of this initiative, involves tens to thousands of variable genome locations, not to mention the environmental ones that are beyond simple counting.  Further, and this is a serious, nontrivial point, we know that these sorts of contributing causes include genetic and environmental exposures in the sampled individuals' futures, and these cannot be predicted, even in principle.  These are the realities.

And, even if the project were truly representative of the US population demographically, as a sample of self-selected volunteers there remains the problem of representing diseases in the population subsets.  Presumably this is why they are focusing on "common diseases", but the sample will still have to be stratified by possible causal exposures (lifestyles, diets, etc.) and ethnicity, and then they'll have to have enough controls to make case-control comparisons meaningful. So, how many common diseases, and how will they be represented (males/females, early/late onset, related to what environmental lifestyles, etc.)?  One million volunteers isn't going to be representative, nor a large enough sample once it has to be stratified for statistical analysis--especially if it also includes the ethnic diversity that the project promises.
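To see why stratification eats a million volunteers so quickly, here is a toy calculation; the strata and the 1% prevalence are hypothetical numbers chosen for illustration, not figures from the project:

```python
# Toy stratification arithmetic: cells multiply, and per-cell counts shrink.
SAMPLE = 1_000_000

# Hypothetical stratification scheme (counts are assumptions):
strata = {
    "ethnic groups": 10,
    "sex": 2,
    "early/late onset": 2,
    "lifestyle/exposure classes": 5,
}

cells = 1
for count in strata.values():
    cells *= count                 # 10 * 2 * 2 * 5 = 200 cells

per_cell = SAMPLE / cells          # ~5,000 people per cell, if perfectly balanced
cases_per_cell = per_cell * 0.01   # a disease with 1% prevalence: ~50 cases per cell

print(cells, round(per_cell), round(cases_per_cell))
```

Fifty cases per cell is already marginal for a case-control comparison, and a real volunteer sample will be nowhere near balanced across cells.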

And there's the epistemological problem of causation being too individualistic for this kind of hypothesis-free data fishing to solve--indeed, it is just this kind of research that has shown us clearly that it is not what we need now.  We need research focused on problems that really are 'genetic', and some movement of resources to new thinking, rather than perpetuating the same kind of open-ended 'Big Data' investment.

And more
In this context, the PR seems mostly to be spin for more money for NIH and its welfare clients (euphemistically called 'universities').  Every lock the Big Data lobby--or perhaps belief-system--has on Big Money excludes funding for focused research, for example on diseases that would seem tractably understandable by real science rather than by a massive hypothesis-free fishing expedition.

How could the 1.4 billion dollars be better spent?  A legitimate goal might be to do a trial run of a linked electronic records system as part of an explicit move towards what we really need, and what would really include all of us: a real national healthcare system.  This could be openly explained--we're going to learn how to run such a comprehensive system, etc., so we don't get overwhelmed with mistakes.  But then, for the very same reason, a properly representative project is what should be done.  That would involve stratified sampling and a more properly thought-out design.  But that would require new thinking about the actual biology.

Thursday, April 26, 2018

Gene mapping: More Monty Python than Monty Python

The gene for ...... (Monty Python)
Here's a link to a famous John Cleese (of Monty Python fame) sketch on gene mapping.  We ask you to decide whether this is funnier than the daily blast of GWAS reports and their proclaimed transformative findings: which is more Monty than the full Monty.

Why we keep spending money on papers that keep showing how MontyPythonish genomewide association with complex traits is, is itself a valid question.  To say, with a straight face, that we now know of hundreds, much less thousands, of genomewide sites that affect some trait--in some particular sample of humans, with much or most of the estimated heritability still unaccounted for--without saying that enough is enough, is almost in itself a comedy routine.

We have absolutely no reason--or, at least, no need--to criticize anything about individual mapping papers.  Surely there are false findings, mis-used statistical tests, and so on, but that is part of normal life in science, because we don't know everything and have to make assumptions.  Some of the findings will be ephemeral, sample-specific, and so on.  That doesn't make them wrong.  Instead, the critique should be aimed at authors who present such work with a straight face as if it were (1) important and (2) novel in any really novel way, (3) without acknowledging that, with so many qualitatively similar results by now, the paper itself shows why we should stop public funding of this sort of work.  We should move on to more cogent science that reflects, but doesn't just repeat, the discovery of genomic causal (or, at least, associational) complexity.

The bottom line
What these studies show, and there is no reason to challenge the results per se, is that complex traits are not to be explained by simple, much less additive, genetic models.  There is massive causal redundancy, with similar traits due to dissimilar genotypes.  But this shouldn't be a surprise.  Indeed, we can easily account for it in terms of evolutionary phenomena, related both to processes like gene duplication and to the survival protection that alternative pathways provide.

Even if each GWAS 'hit' is correct and not some sort of artifact, it is unclear what the message is.  To us, who have no vested interest in continuing, open-ended GWAS efforts with ever-larger samples, the bottom line is that this is not the way to understand biological causation.

We reach that view on genomic considerations alone, without even considering the environmental and somatic mutation components of phenotype generation, though these are often obviously determinative (as secular trends in risk clearly show).  We reach this view without worrying about the likelihood that many or perhaps even most of these 'hits' are some sort of statistical, sampling, analytic or other artifact, or are so indirectly related to the measured trait, or so environment-dependent as to be virtually worthless in any practical sense.

What GWAS ignore
There are also three clear facts that are swept under the rug, or just ignored, in this sort of work.  One is somatic mutation, which is not detected in constitutive genomewide studies but could be very important (e.g., in cancer).  The second is that DNA is inert and does something only in interaction with other molecules; many of those interactions relate to environmental and lifestyle exposures, which candid investigators know are usually dreadfully inaccurately measured.  The third is that future mutations, not to mention future environments, are unpredictable, even in principle.  Yet the repeatedly stressed objective of GWAS is 'precision' predictive medicine.  It sounds like a noble objective, but it's not so noble given the known and knowable reasons these promises can't be met.

So, if biological causation is complex, as these studies and diverse other sorts of direct and indirect evidence clearly show, then why can't we pull the plug on these sorts of studies, and instead, invest in some other mode of thinking, some way to do focused studies where genetic causation is clear and real, rather than continuing to feed the welfare state of GWAS?

We're held back by inertia and the lack of better ideas, but another important, if not defining, constraint is that investigator careers depend on external funding, and that leads to safe me-too proposals.  We should stop imitating Monty Python, and recognize that if the gene-causation question even makes sense, some new way of thinking about it is needed.

Wednesday, April 25, 2018

Improving access to healthcare can usually make malaria go away

Drug-resistant malaria has emerged in Southeast Asia several times in history and subsequently spread globally. When there are no other antimalarials to use, this has led to public health and humanitarian disasters, especially in high-transmission settings (parts of sub-Saharan Africa).

Currently there is a single effective antimalarial left: artemisinin. But malaria parasites in Southeast Asia are already developing resistance to it, leading many in the malaria research and public health communities to worry that we will soon be left with untreatable malaria.

One proposed solution to this problem has been to attempt to eliminate the parasite from regions where drug resistance consistently emerges. The proposed strategy uses a combination of increasing access to health care (so that ill people can be quickly diagnosed and treated, therefore reducing transmission) and targeting asymptomatic reservoirs by asking everyone who lives in a community where there is a large reservoir to take antimalarials, regardless of whether or not they feel ill (mass drug administration).

In Southeast Asia, malaria largely persists in areas that are remote and difficult to access. The parasite thrives in conflict zones and on the fringes of society. These are the areas that frequently don’t have strong healthcare or surveillance systems, and some have even argued that control or elimination would be impossible in such areas because of these difficulties.

Today, on World Malaria Day, my colleagues and I published the results of the first 3 years of an elimination campaign in Karen State, Myanmar.  The job is not complete. But this work has shown that it is feasible to set up a health care system even in remote and difficult-to-access areas, and that most villages can achieve elimination through beefing up the health care system alone. In places with high proportions of people carrying asymptomatic malaria, access to health care alone doesn’t suffice, and malaria persists for a longer period of time. With high participation in mass drug administration, which requires a large amount of community engagement, these communities are able to quickly eliminate the parasites as well. We are hopeful that similar programs will be expanded throughout Southeast Asia, regardless of the geographic and political characteristics of the regions, so that elimination can be achieved and sustained.

Malaria (P. falciparum) incidence in the target area over three years. The project expanded over the three years, and overall incidence has decreased.

Link to the main paper:
Effect of generalised access to early diagnosis and treatment and targeted mass drug administration on Plasmodium falciparum malaria in Eastern Myanmar: an observational study of a regional elimination programme

Link to a detailed description of the setup of the project:

Tuesday, April 24, 2018

Throw 'em down the stairs! (making grant review fair)

When I was active in the grant process, including serving as a panelist for NIH and NSF, I realized that the work overload, and the somewhat arbitrary sense that if any reviewer spoke up against a proposal it got conveniently rejected without much if any discussion, meant that reviews were usually scanty at best.  Applications are assigned to several reviewers to evaluate thoroughly, so that the entire panel doesn't have to read every proposal in depth, yet each member must vote on each proposal.  Even with this underwhelming level of consideration, panel members simply cannot carefully evaluate the boxes full of applications for which they are responsible.  In my experience, once we got down to business, for those applications not immediately NRF'ed (not recommended for funding), there would be some discussion of the surviving proposals; but even then, with still tens of applications to evaluate, most panelists hadn't read the proposal, and it seemed that even some of the secondary or tertiary assignees had only scanned it.  The rest of the panel usually sat quietly and then voted as the assigned readers recommended.  Obviously (sssh!), much of the final ranking rested on superficial consideration.

When a panel has a heavy overload of proposals it is hard for things to be otherwise, and one at least hoped that the worst proposals got rejected, those with fixable issues were given some thoughtful suggestions about improvement and resubmission, and at least that the best ones were funded.

But there was always the nagging question as to how true that hopeful view was.  We used to joke that a better, fairer reviewing system was to put the proposals to the Stairway Test: throw them down the stairs and the ones that landed closest to the bottom would be funded!

Well, that was a joke about the apparent fickleness (or, shall we say randomness?) of the funding process, especially when busy people had to read and evaluate far, far too many proposals in our heavily overloaded begging system, in which not just science but careers depend on the one thing that counts: bringing in the bucks.
The Stairway Test (technical criteria)

Or was it a joke?  A recent analysis in PNAS showed that randomness is perhaps the best way to characterize the reviewing process.  One can hope that the really worst proposals are rejected, but as for the rest..... the evidence suggests that the Stairway Test would be much fairer.

I'm serious!  Many faculty members' careers literally depend on the grant system.  Those whose grants don't get funded are judged to be doing less worthy work, and loss of jobs can literally be the direct consequence, since many jobs, especially in biomedical schools, depend on bringing in money (in my opinion, a deep sin, but in the context of our venal science support system, one not avoidable).

The Stairway Test would allow those who did not get funding to say, quite correctly, that their 'failure' was one not of quality but of luck.  Deans and Chairs would, properly, be less able to terminate jobs for failure to secure funding if they could not claim that the victim did inferior work.  The PNAS paper suggests that the real review system is in fact little different from the Stairway Test.
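A toy simulation makes the point.  The noise levels and panel sizes below are invented for illustration (they are not taken from the PNAS analysis): when reviewer noise is large relative to the real quality differences among the surviving proposals, the funded set overlaps the truly best set only modestly better than a random draw--the Stairway Test--would.

```python
# Toy model of noisy peer review: true quality plus reviewer noise,
# fund the top-scoring proposals, and ask what fraction of the truly
# best proposals actually got funded.
import random

random.seed(1)
N_PROPOSALS, N_FUNDED, TRIALS = 100, 15, 2000

def mean_overlap(noise_sd):
    """Average fraction of the truly best proposals that get funded."""
    hits = 0
    for _ in range(TRIALS):
        quality = [random.gauss(0, 1) for _ in range(N_PROPOSALS)]
        scores = [q + random.gauss(0, noise_sd) for q in quality]
        best = set(sorted(range(N_PROPOSALS), key=quality.__getitem__)[-N_FUNDED:])
        funded = set(sorted(range(N_PROPOSALS), key=scores.__getitem__)[-N_FUNDED:])
        hits += len(best & funded)
    return hits / (TRIALS * N_FUNDED)

# Pure Stairway Test (random selection) would fund any given top proposal
# with probability N_FUNDED / N_PROPOSALS = 0.15.
print(mean_overlap(0.1))  # near-noiseless review: overlap close to 1
print(mean_overlap(3.0))  # very noisy review: overlap sags toward 0.15
```

The interesting regime is the middle one: once noise is comparable to the quality spread, the difference between careful review and throwing proposals down the stairs is mostly cosmetic.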

So let's be fair to scientists, and the public, and acknowledge honestly the way the system works.  Either reform the system from the ground up, to make it work honorably and in the best interest of science, or adopt a formal recognition of its broken nature: the Stairway Test.