Some people feel that scientific issues should be kept separate from science politics. But is that even possible? The fear is often voiced that even if science does often work largely as a business, self-promotional components and all, that is just how things are, and that if anybody listened to what people like ourselves say (which is, of course, unlikely!), many large-scale projects, some older than your parents' first car (and just as rusty), might be phased out. That would be unfortunate, the argument goes, because it would mean losing all that valuable information accumulated over decades of careful effort.
But we think that is not reason enough to maintain large projects that may once have yielded valuable results but that are now running on fumes, kept alive for legacy reasons. Nobody would suggest that the data be discarded; they could be made available while funds moved to more promising approaches.
The unlimited insult
The widely touted 'new' idea of genomic 'precision' medicine is an example. Here is a quote we ran across from George Eliot's 1871-72 novel Middlemarch:
"I believe that you are suffering from what is called fatty degeneration of the heart, a disease which was first divined and explored by Laennec, the man who gave us the stethoscope, not so vary many years ago. A good deal of experience--a more lengthened observation--is wanting on the subject. But after what you have said, it is my duty to tell you that death from this disease is often sudden. At the same time no such result van be predicted. Your condition may be consistent with a tolerably comfortable life for another fifteen years, or even more. I could add no information to this, beyond anatomical or medical details, which would leave expectation at precisely the same point."
What's the point of this quote? After all, anyone can mine almost anything for juicy quotes that support their biases or the points they want to make, and invoking some prior author--not to mention a fiction writer!--is a rhetorical trick with little actual cogency. Who gives a hoot what George Eliot thought, after all?
When our NIH Director proclaimed a billions-of-dollars project by which we would finally do 'precision' medicine, it was an insult to every physician who has ever practiced medicine, back to Hippocrates, and of course to all the others who did not write books and are thus not known to us today. The reason is, of course, that every honorable physician throughout history was doing his or her best to be as precise as possible and (to borrow a previous advertising slogan for NIH) to do 'personalized' medicine.
We've written before about the vacuous or transparently lobbying nature of words like 'precision', and the point here is not that being as precise as possible in medical diagnosis, prevention, prediction, and treatment at any given time is anything other than wholly noble. The point is that blanket statements suggesting that genotypes are going to predict everything about a person are a costly way to divert funds from being directed more precisely, one might say, where that word is actually appropriate.
There are many disorders that are highly predictable from genetic data (sickle cell anemia, cystic fibrosis, muscular dystrophy, and many, many others). The genetics community should show that knowing this can lead to effective gene-based approaches to what is truly and clearly 'genetic', before we just spew resources out across the entire genomic landscape.
Even if it were clearly important to assemble genomic data as part of a unified health-care and health-research system, new large-scale databases should start fresh, without the legacy of past work imposing its frameworks on the future resource. Various rationales, almost amounting to 'we need practice building databases', have been suggested in defense of keeping elderly projects afloat, but these seem to us as much a political rationale for holding on to resources as a genuine argument that this is the best way to start building a national resource. Rather than just being cranky about this, though, we have various thoughts to offer about the inertia that is built into our current system.
The idea of better ways of identifying risk groups, for example, especially far in advance of when their risk becomes manifest as disease, is a proper major goal of public health research. Medicine usually deals with people when they are already ill or at high risk, but research can benefit from partitioned analysis of low- and high-risk individuals, where that is possible, and as early as possible. This can be useful, e.g., in determining risk alleles or environmental exposures, or who will respond well to a given drug or therapy. The earlier the better--and when genotypes at conception truly do have high predictive power, they are a proper kind of data to collect. But how often is that? After about a generation's worth of mapping studies, the answer is rather clear--if the politics is separated from the actual science.
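To make 'how often is that?' concrete, here is a minimal toy sketch (a simulated population with entirely made-up parameters, not any real study): under a liability-threshold model, even a 'perfect' polygenic score--the true genetic component itself--separates risk groups only modestly when heritability is low.

```python
# Toy liability-threshold simulation; all parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_loci = 100_000, 100
h2 = 0.2  # assumed heritability of liability; a made-up number

# Random allele counts (0, 1, or 2) per locus, with small additive effects
genotypes = rng.binomial(2, 0.3, size=(n_people, n_loci))
effects = rng.normal(0.0, 1.0, n_loci)

genetic = genotypes @ effects
genetic = (genetic - genetic.mean()) / genetic.std()

# Liability-threshold model: disease strikes the top 10% of total liability
liability = np.sqrt(h2) * genetic + np.sqrt(1 - h2) * rng.normal(0.0, 1.0, n_people)
affected = liability > np.quantile(liability, 0.9)

# Best possible score: the true genetic component itself
score = genetic
top = score >= np.quantile(score, 0.9)
bottom = score <= np.quantile(score, 0.1)
print(f"risk in top decile of score:    {affected[top].mean():.3f}")
print(f"risk in bottom decile of score: {affected[bottom].mean():.3f}")
print(f"population risk:                {affected.mean():.3f}")
```

With these assumed numbers, the top decile of the score carries perhaps a few-fold excess over the population average--real information, but nothing like the near-certain prediction possible for classically Mendelian disorders.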
We think there are several reasons why building huge databases to partition populations into high- and low-risk groups or, much harder, to identify at-risk individuals, is right-minded in principle but will have problems.
1. Inertia and Momentum. Without real change, in personnel and in projects, the gravitational pull of business as usual is huge. A system of science Patricians is established, and they become Geriatric Patricians. They are, as you know, getting the bulk of the grants (e.g., first-time NIH grant recipients are about age 45 according to some recently published data; the percentage of PIs age 36 and younger has been plummeting, and the percentage over age 66 has grown steadily since 1980, as the graph below shows). Any savvy scientist knows that big long-term projects are politically hard to phase out. That is science politics, not science, though it isn't specific to science. In science, however, it can impose inertia on current methods and concepts.
[Graph: ages of NIH grant recipients over time; source linked in the original post]
Occasionally a new idea, technology, method, or, today, 'omic does come along, and there is a swell of momentum as the herd rushes to adopt it. Again, however, the goal is to establish too-big-to-stop long-term projects. Of course, such change may sometimes be a very good thing, if the method, idea, or technology is truly beneficial. But often it is not much of an improvement, or perhaps the questions being asked are themselves conveniently changed as a funding stratagem. Do you not think this is an important part of the current system? Perhaps that's only to be expected, since real innovation is clearly hard to come by, which is nobody's fault. But the more inertially entrenched the system, the less the opportunity for real innovation may be.
For example, when everyone uses the same data sources or sequencing approach or statistical packages for analyzing data, there is a kind of channeling conformism. This is in part because lab equipment and software are complex and highly technical, and developing one's own is generally not feasible. That requires large, long-term funding, so the problem is not an easy one to solve. There are of course always innovators, and we should be grateful for that. But when struggling for one's career, or to keep continued funding, it is easier, for many reasons--in science as in other fields--to simply do what others are doing.
2. The 'Quantum Mechanical Effect.' In quantum mechanics, measuring one basic property of an elementary particle like an electron or photon disturbs the particle's other properties. The outcome is probabilistic, and measurement is a form of interference that generates change. You can't know the exact nature of the change without re-measuring--which then creates the same problem.
This Quantum Mechanical Effect has a kind of analogy in biomedical genetics and epidemiology. When even a bad study's findings are trumpeted to the media by the investigators and the journals, in full-throated self-promotion mode, and the media report them more or less without serious circumspection, ordinary people may change their behavior accordingly. One reason is that most doctors themselves cannot keep up, and since the findings are so oscillating and fickle, even researchers may not have a grasp on complex causation. As a result, diets and other habits change, and companies change their products, advertising, and even their labels (because, believing the results, the FDA insists). So the pattern of exposure to the purported risk factors changes, and hence so does the risk itself. That is, the findings of retrospective statistical data-fitting themselves constitute a kind of 'measurement' that affects the very risks being measured, as the toy sketch below illustrates.
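Here is a toy simulation of that feedback loop (all numbers invented for illustration): each 'study' measures population risk from an exposure, the ensuing publicity shifts behavior, and so the next study samples a population that the first study's estimate no longer describes.

```python
# Toy feedback-loop simulation; every parameter is an invented assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
baseline_risk = 0.05     # disease risk for the unexposed (assumed)
relative_risk = 2.0      # true relative risk of the exposure (assumed)
exposure_rate = 0.60     # fraction exposed before any headlines (assumed)

first_estimate = None
for study in range(1, 6):
    exposed = rng.random(n) < exposure_rate
    risk = np.where(exposed, baseline_risk * relative_risk, baseline_risk)
    disease = rng.random(n) < risk
    pop_risk = disease.mean()
    if first_estimate is None:
        first_estimate = pop_risk
    print(f"study {study}: exposure rate {exposure_rate:.2f}, "
          f"observed population risk {pop_risk:.3f}")
    # Publicity after each report: a fraction of the exposed quit the habit,
    # so the exposure pattern, and hence the risk, has changed by the next study.
    exposure_rate *= 0.7

print(f"study 1's estimate ({first_estimate:.3f}) no longer describes the population")
```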
3. The unknown unknowns. Donald Rumsfeld doesn't really deserve the ridicule he gets for his quip about the unknown unknowns (even if he may deserve it for other things). It is very clear from recent history, not just remotely distant lore, that lifestyles change in major ways, and very quickly. If it were just a matter of differing doses of the same old risk factors, we would still face problem #2 above: the exposure levels will change in unknowable ways.
This is true if the mix of exposures acts in additive ways (just add up the estimated risk of a change in this behavior, and then of that behavior). But even if this additivity assumption were accurate, what is more likely to be important is that exposures to entirely unforeseeable factors will arise. Nobody in the 1950s could have predicted the number of hours we'd spend watching flickering images, or flying in jets, or being CT-scanned, or eating new fad foods or new manufactured foods, or the kinds of infection or antibiotic exposures we would experience. Yet changes like these have made a huge difference in disease patterns just in the lifetime of us seniors: obesity, lung cancer (down in men, up in women), diabetes, autism, asthma, psychiatric disorders of all sorts. Some of these have gone from being not so important to being pandemic.
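For the additivity point above, a minimal numeric sketch (made-up absolute risks throughout): one-at-a-time risk estimates sum correctly only if the exposures do not interact.

```python
# Additive vs. interacting exposures; all figures are invented for illustration.
baseline = 0.05       # assumed baseline absolute risk
effect_a = 0.02       # added absolute risk from exposure A alone (assumed)
effect_b = 0.03       # added absolute risk from exposure B alone (assumed)
interaction = 0.04    # extra risk only when A and B co-occur (assumed)

additive_prediction = baseline + effect_a + effect_b
actual_joint_risk = baseline + effect_a + effect_b + interaction

print(f"additive prediction for A+B:  {additive_prediction:.2f}")
print(f"actual risk with interaction: {actual_joint_risk:.2f}")
```

And this is the easy case, where both exposures were at least known; the text's point is that the factors that matter most may not even exist yet when the database is designed.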
Some, of course, defend the system not just as the game we know but also by arguing that it is the best way to generate good science. This is where the serious debate should take place. Whether real change can be forced on the system is an open question.
Prof. Weiss, since Visscher and Deary et al. have found that epigenetic mechanisms are highly heritable, does it mean that epigenetics as an attack against genetic causation was a mistake? Some form of genetic determinism seems to be the case after all.
First, I would say genetic 'effects' rather than 'determinism'. But in principle, to the extent that epigenetic effects are transmitted in a transgenerational way (that is, not because the fetus experienced the causal environmental effects that produced the epigenetic marking in the parent), they will constitute an additional form of inheritance of 'genetic' factors. It would add to the range of mechanisms of genetic causation. It doesn't mean that from the moment of conception one can predict everything, but it probably does mean (if the effects could be identified) that prediction could be more precise than otherwise--though that doesn't mean the precision will always be high. Sometimes it will, sometimes it won't. It would be easiest for early-onset pediatric traits, but a serious challenge to assess, especially for effects that take years or decades of life to materialize.