Showing posts with label research funding. Show all posts

Thursday, June 11, 2015

Occasionality, probability, ..., and grantsmanship

In a previous post some time ago, we used the term occasionality to refer to events or outcomes that arise occasionally, but are not the result of the kinds of replicable phenomena around which the physical sciences developed and for which probability concepts and statistical inference are constructed.  Here, we want to extend the idea as it relates to research funding.

There has long been recognized a kind of physics envy among biologists, wishing to have a precise, rigorous theory of life to match theories of motion, atomic chemistry, and the like.  But we argue that we don't yet have such a theory; or perhaps the theory of evolution and genetics that we do have, which is in a sense already a theory of occasionality, is close to the truth.

Instead of an occasionality approach, assumptions of repeatability are used to describe life and to justify the kinds of research being done, even though a core part of our science is that evolution, which generates genomic function, largely works by generating diversity and difference rather than replication.  Since individual genetic elements are transmitted and can have some frequency in the population, there is also, at that nucleotide level, some degree of repetition, even if no two genomes (the rest of that element's genomic environmental context) are entirely alike.  The net result is a spectrum of causal strength or regularity.  Because many factors contribute, the distribution of properties in samples or populations may be well-behaved, that is, may look quite orderly, even if the underlying causal spectrum is one of occasionality rather than probability.

Strongly causal factors, like individual variants in a particular gene, are those whose effects are usually manifest whenever the factor occurs, and which thus generate repeatability.  Such factors, and analysis of them, fit standard statistical concepts that are built upon the idea of repeatable causation with fixed parameters.  But that is a deception whose practice weaves the proverbial tangled web of deeper realities.  More often, and more realistically, each occurrence of an 'occasional' event arises from an essentially unique combination of causal factors.  The event may arise frequently, but the instances are not really repeats at the causal level.

This issue is built into daily science in various, sometimes subtle, ways.  For example, it appears as a fundamental factor in research funding.  To get a grant, you have to specify the sample you will collect (whether by observational sampling or experimental replicates, etc.), and you usually must show with some sort of 'power' calculation that if an effect you specify as being important is taking place, you'll have a good chance of finding it with the study design you're proposing.  But making power computations has become an industry in itself; that is, there are standard software packages and standard formulas for doing such computations.  They are, candid people quietly acknowledge, usually based on heavily, even fictitiously, favorable conditions in which the causal landscape is routinely oversimplified, the strength of the hypothesized causal factors exaggerated, and so on.
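The standard formula such software encodes is not mysterious.  Here is a minimal sketch, in Python, of the usual normal-approximation sample-size calculation for comparing two group means; the function name and default values are ours, purely for illustration, not from any particular package:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size for comparing two group means.

    effect_size is Cohen's d: the hypothesized mean difference divided
    by the assumed common standard deviation.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n)

n_moderate = sample_size_per_group(0.5)   # 'moderate' effect: 63 per group
n_small = sample_size_per_group(0.25)     # half the effect: 252 per group
```

The arithmetic makes the text's point directly: the required sample size scales as 1/d-squared, so even a modest exaggeration of the hypothesized effect size makes an otherwise unaffordable study look fundable.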

Power calculations and their like rest on axioms or assumptions of replicability, which is why they can be expressed in terms of probability, from which power and significance types of analysis are derived.  Hence study designs, and the decisions granting agencies make, often if not typically rest on simplifications that we know very well are not accurate, not usually close to what evidence suggests are realistic truths, and that are based on untested assumptions such as probability rather than occasionality.  Indeed, much of 'omics research today is 'hypothesis free', in that the investigator can avoid having to, or perhaps is not allowed to, specify any specific causal hypothesis beyond something safely vague like 'genes are involved and I'm going to find them'.  But how is this tested?  With probabilistic 'significance' or conceptually similar testing of various kinds, justified by some variant of 'power' computations.

If you are too speculative, you simply don't get funded.
Power computations often are constructed to fit available data, or what investigators think can be done within fundable cost limits.  This is strategy, not science, and everybody knows it.  Nowhere near the promised fraction of successes occurs, except in the sense that authors can always find at least something in their data that they can assert shows a successful result.  That essentially fabulous power calculations are accepted is also one reason that really innovative proposals are rarely funded, despite expressed intentions by the agencies to fund real science: power computations are hard to do for something that's innovative, because you don't know what the sampling or causal basis of your idea is.  But routine ones like those described above are safe.  That's why it's hard to provide that kind of justification for something really different--and, to be fair, it makes it hard to tell when something really different is really, well, whacko.

A rigorous kind of funding environment might say that you must present something at least reasonably realistic in your proposed study, including open acknowledgment of causal complexity or weakness.  But our environment leads the petitioning sheep to huddle together in the safety of appearances rather than substance.  If this is the environment in which people must work, can you blame them?

Well, one might say, we just need to tighten up the standards for grants, and not fund weak grant proposals.  It is true that the oversubscribed system does often ruthlessly cut out proposals that reviewers can find any excuse to remove from consideration, if for no other reason than massive work overload.  Things that don't pass the oversimplified but requisite kinds of 'power' or related computation can easily be dropped from consideration.  But the routine masquerade of occasionality as if it were probability is not generally a criterion for turning down a proposal.

What is done, to some extent at least, is to consider the proposals that are not outright rejectable, scoring them instead on their relative quality as seen by the review panel.  One might say that this is the proper way to do things: reject those with obvious flaws (relative to current judgment criteria), but then rank the remaining proposals, so that those with, say, weaker power (given the assumptions of probability) are just not ranked as high as those with bigger samples or whatever.

But this doesn't serve that well, either.  That's because of the way bureaucracies work: administrators' careers depend on getting more funding each year, or at least keeping the portfolio they have.  That means that proposals will always be funded from the top-ranked downward in scores until the money runs out.  This guarantees that non-innovative ideas will be funded if there aren't enough strong ideas.  And it's part of the reason we see the kinds of stories, based on weak (sometimes ludicrously weak) studies, blared across the news almost every single day.

We have a government-university-research complex that must be fed.  We let it grow to become that way.  Given what we've crafted, one cannot really push hard enough to get deeply insightful work funded while also refusing to pay for run-of-the-mill work; political budget-protection is also why a great many studies of large and costly scale simply will not be stopped.  This is not restricted to genetics.  Or to science.  It's the same sort of process by which big banks or auto companies get bailed out.

How novel might it be if it were announced that only really innovative or more deeply powerful grants were going to be funded, and that institute grant budgets wouldn't be spent otherwise!  They'd be saved and rolled over until truly creative projects were proposed.  In a way, that's how it would be if industry had, once again, to fund its own research rather than farm it out to the public to pay for via university labs.

For those types of research that require major databases, such as DNA sequence and medical data (e.g., to help set up a truly nationwide single medical records system and avoid various costs and inefficiencies), the government could obligate funds to an existing agency, like NCBI or CDC, to collect and maintain the data.  Then, without the burden of collecting the data, university investigators, whether with better ideas or even ideas about more routine analysis, would only have to be supported for the analysis.

History has basically shown that Big Data won't yield the really innovative leaps we all wish for; they have to come from Big Ideas, and those may not require the Big Expense that, to a great extent, is driving the system now, in which, regardless of how big your ideas are, if you have only small budgets you won't also have tenure.  That is a major structural reason why people want to propose big projects even if important, focused questions could be answered by small projects: you have to please your Dean, and s/he is judged by the bottom line of his/her faculty.  We've set this system up over the years, but few as yet seem ready to fight it.

Of course this will never happen!
We know that not spending all available resources is naive even to suggest.  It won't happen.  First, on the negative side, we have peer review, and peers hesitate to give weak scores to their peers if it means overall loss of funding.  If for no other reason (and there is some of this already), panel members know that the tables will be turned in the future and their proposals will be reviewed then by the people they're reviewing now.  Insiders looking out for each other is to some extent an inherent part of the 'peer' review process, although tight times do mean that even senior investigators are not getting their every wish.

But secondly, we have far more people seeking funding than are being funded, or than there are funds for, and we have the well-documented way in which the established figures keep the funds largely locked up, so they can't go to younger, newer investigators.  The system we've had for decades had exponential growth in funding, and in the numbers of people being trained, built into it.  In the absence of a cap on funding amounts or, better yet, on investigator age, the power pyramids will not be easy to dislodge (they never are).  And, one might say generically, the older the investigator, the less innovative and the more deeply and safely entrenched the ideas--such as probability-based criteria for things for which such criteria aren't apt--will be.  More than that, the powerful are the same ones inculcating their thoughts--and the grantsmanship they entrain--into the new up-and-coming who will constitute the system's future.

With the current inertial impediments, and the momentum of our conceptual world of probability rather than occasionality, science faces a slow evolutionary rather than a nimble future.

Wednesday, April 13, 2011

A healthy change for health research....or just day dreaming?

The problem, and a suggested solution
A commentary in the March 31 issue of the journal Nature has taken on the challenge of what to do about the NIH, the research behemoth that couldn't--couldn't deliver on its promise, despite decades of public largesse.  (And no, we're not criticizing Nature this time!)  The commentary is by the President of Arizona State University, Michael Crow, and suggests ways to take a huge operation that isn't doing its job in proportion to its funding, and reform it so it might.  Crow has done some other program turnarounds, including a serious reorganization at ASU, which gives him credibility in writing such a commentary.

From Crow, Time to rethink the NIH, Nature 471:569
NIH spends way more than anybody else on health research (on this planet, at least), but Americans have worse health and longevity, consistently and by many different measures, than many other countries.
This model for discovery and application in health care is failing to deliver. A 2009 report found that the United States ranked highest in health spending among the 30 countries that made up the Organisation for Economic Co-operation and Development (OECD) in 2007, both as a share of gross domestic product and per capita. In fact, the country spent 2.5 times the OECD average (see 'Big spender'). Yet life expectancy in the United States ranked 24th of the 30 countries... And on numerous other measures — including infant mortality, obesity, cancer survival rates, length of patient stays in hospital and the discrepancy between the care of high- versus low-income groups — the country fares middling to poor.
And it's not that these other countries exploit our research results better than we do ourselves.  To a great extent it's because our research isn't bureaucratically designed to improve health but to foster the interests of peer-reviewed research.  Crow suggests reorganizing and simplifying so that as much research attention is paid to actual improvements to health as to basic science, with accountability built into the system, which is not the case now.  He'd like to see a new NIH restructured around just three institutes: a fundamental biomedical systems research institute; a second institute focused on research on health outcomes, "measurable improvements in people's health" (fancy that!); and a third "health transformation" institute, whose funding would be contingent on success.

Of course, as we note regularly here on MT, the system is a System, interlaced with stubborn vested interests, from NIH's bureaucracy of portfolio-protecting officials and institutes, to investigators dependent on NIH grants regardless of whether anybody's health is improved or not, to universities (dare we say Dr Crow's included?) that hunger for the grants and the overhead which gives their administrators sustenance, to journals and news media who need 24/7 stories to sell, to the biotech industry that feeds on technology-glamorized research, to doctors who like or feel empowered by hi-tech tools and tests (some of which actually work, of course!), to social scientists and epidemiologists who do endless studies to evaluate and re-evaluate the health care system, to politicians who can preen themselves by playing the fear card ("I support research to help your health, so vote for me!").

A more radical solution
How to dislodge such a system and get back to science that works towards basic understanding of the world in a less self-interested way, and make health research about health, is not an easy question. Crow suggests that his ideas are radical, but one doubts that they are nearly radical enough, because truly radical change would have to undercut the bloat in self-proclaimed 'research university' 'research' activities.

Moving agencies like the Genome Institute to NSF would perhaps help.  NSF budgets are typically lower, and their grants pay less or no faculty salary, so tech-hungry investigators and overhead-hungry universities would object.  Many investigators at the NIH trough are paid on grants, not by their universities, a corrupt system that should never have been allowed to begin 30 or 40 years ago, but that has since become vital to so many of us today.  That salary dependency leads to wasteful, often dissembling research, in part because of the very understandable need always to have external funding--can't blame investigators for wanting to be paid!

Moving genetics research to NSF would force it to focus on real scientific problems, not ones based on exaggerated or even disingenuous promises of health miracles.  It would force NIH to do research on things that mattered to health (shocking idea!).  Some of that would certainly involve genes, for traits that really are 'genetic', but most would involve less glamorous, non-tech, boring things like nutrition and lifestyle changes (not research about such changes, as we already largely know what would deliver the most health for the buck, and that research, soft to begin with, leads to nothing but the claimed need for more follow-up studies).  NIH budgets for research could be greatly pared down with no loss.

If lifestyle changes were made, then diseases that are seriously genetic would be clearer, and they would be the legitimate targets of properly focused genetic research.  Meanwhile, researchers with reasonable ideas could do basic research funded by NSF--but, with less money available, they (we) would have to think more carefully, and not assume we'll always have a huge stream of funding, or that just more tech or more data means better science.

Universities would have to phase in a strange policy of actually paying their faculty, would have to provide at least modest lab-continuity support to allow investigators to think before they write grant applications, and would have to scale down, gradually having fewer faculty, more stress on actual teaching (another shocking idea!), less dependency on grant overhead, and less emphasis on 'research' (much of which demonstrably goes nowhere).

This could be good for everybody.   Science would be less frenetically and wastefully competitive.  The competition could be more about ideas, less about money and publication-counts.  Such changes could, in principle, put science back toward a path more closely connected to understanding nature, than to feeding Nature.  And the journals, including Science and Nature, could phase out the gossip columns (which, in the current careerist system, we naturally read hungrily--they are probably read far more than the science articles themselves) and get back to reporting rather than touting science in a way more closely connected to their articles' actual import.

Of course, the current system feeds many, and that is probably what it is really about.  So dream-world reforms are unlikely to happen, unless simply forced not by thoughtfulness but by a plain lack of funds.

Thursday, January 14, 2010

Accidents do happen, but....

Touching on what seems to have turned into our theme of the week, John Hawks links to a story in the Telegraph yesterday reporting that a third of academics would leave Britain if threatened cuts to 'curiosity-driven' grants go through. This comes on top of deep cuts in funding for higher education in Britain across the board. According to the story, future research will be funded based on its perceived social and economic benefits; close to 20,000 people have signed a petition protesting this change.
...critics claim the move risks wiping out accidental discoveries as university departments struggle to support professors working on the kind of ground-breaking experimentation that led to the discovery of DNA, X-rays and penicillin.
But hold on.  'Curiosity-driven' research is different from accidental discoveries.

Ken, Malia Fullerton and I wrote a paper not long ago saying that epidemiology isn't working, and, basically, suggesting that people recognize this and come up with some better ideas. We had in mind specifically epidemiology's turn to genetics to explain chronic diseases, including diseases like type II diabetes and asthma, for which, even if people do carry some genetic susceptibility, the more important risk factors are clearly environmental, as shown by the fact that incidence of these diseases has risen sharply in recent decades.

We called the paper "Dissecting complex disease: the quest for the Philosopher's Stone?" (Not the Philosopher's Stone of Harry Potter fame, our reference was to the alchemist's dream of a substance that could turn base metals into gold.) The paper was published as one of the point/counterpoint papers in the International Journal of Epidemiology.

This was an interesting exercise. The paper wasn't reviewed in the usual sense, with us able to correct and revise before publication. The paper was published just as we submitted it, followed by commentaries by prominent epidemiologists. We knew people could find holes in our argument, and we waited for months for the comments, imagining how devastating they were going to be, and how we'd respond. But, when we finally got the commentaries, we were amazed. We could have done a much better job of blasting our paper than any of the comments we got. This was somewhat reassuring in that no one said we were wrong, but disappointing because we had very much wanted to start a dialog on the issues.

How is this relevant to the 'curiosity-driven research' story? Well, one of the major defenses of the status quo in the commentaries on our paper--of spending hundreds of millions of taxpayer dollars on research that everyone knows isn't working--was that we can't cut the funding to epidemiology because everyone knows that good stuff is often found by accident. This strikes us as a very strange justification for maintaining a hugely expensive system in which researchers spend inordinate amounts of time and energy writing grants proposing research everyone recognizes isn't going to lead to much, never mind improve public health, and tie up equally inordinate amounts of time, energy and money on the part of reviewers, who are also expected not to say that the emperor has no clothes (or the Philosopher has no Stone)--all in the hope that somebody will stumble across something unexpected one day that really will be progress.

This is not the same as 'curiosity-driven research'. "Why is the sky blue?" is an honest question, and whether or not taxpayers should fund the research needed to answer it can be debated on its merits. If the UK has decided to no longer fund basic science, but only research that will lead to patents, or whatever 'social merits' are, that's very different from the idea that we should maintain a system that isn't working on the off chance that something good will come of it.  That decision can be debated, but at least it's an honest debate.