Thursday, March 14, 2013

Is "more research" really needed?

The epidemiology roller-coaster
In our recent post "Wait! Wait! Don't tell me!" we discussed the roller-coaster nature of epidemiological studies of heart disease.  The attempt to identify factors that raise or lower risk has been largely farcical--tea, coffee, antioxidants, vitamin C, cholesterol, meat, processed meat, red wine, cruciferous vegetables, olive oil...and a host of other poisons or coronary salves that have been in and then out of favor when it comes to diet and heart disease.  Is the 'Mediterranean Diet' (the subject of another recent post) good for you or not??  (But, if it turns out you ate the wrong diet after all, here's some advice from The Onion: "Aspirin Taken Daily With Bottle Of Bourbon Reduces Awareness Of Heart Attacks"--H/T Dan Parker.)

This, despite countless large, long studies.**  And going along with that is a similar story regarding the genetics of heart disease: genes found to be 'causal' in some studies are ruled out in others.  This has become an all-too-typical scene.  Yet investigators keep demanding more funding for larger, longer studies.  In our previous post, we said that when investigators or news media reporting such studies conclude with "more research is needed", you should beware that your pocket is about to be picked.  Here's why.
 
Let's set aside, for now, the selfish motives of the investigators who want to leverage inconclusive results as a rationale for doing more, more, even much more of the same.  We regularly carp on the opportunism, rather than careful cogent thinking, by which epidemiologists and geneticists demand ever more funding.  In fact, there are a host of more scientifically legitimate reasons to say that the plea for more studies is misplaced and should be ignored.  Clinicians have a very hard time knowing what to tell their patients, even though we know enough about diet to know--as Hippocrates explicitly did 2500 years ago!--that moderation is the key.

Cochrane Reviews sets the whole dilemma, and the solution, to "Somebody That I Used To Know":

[embedded video]

(Confused as a practitioner?  Bottom line: tell your patients "Don't eat like a pig and go get some activity.")

If we just do larger studies, we'll be able to get off the roller coaster... Really?

Indeed, there are many real health problems to be solved.  So let's take the issues from a scientific point of view: the rationale for more research is based on the assumption that our uncertainties are due to the inadequate sizes of our samples, not to the nature of the complex causation that underlies traits like heart disease--if 'heart disease' is indeed a biologically meaningful trait in the context of finding causes for 'it'.
 
The underlying assumption is basically a statistical one, that these are true, more or less independent causes and that other factors will 'even out' if we have large enough samples.  That is another way of saying that the 'signal to noise ratio' will be favorable so that the bigger the sample the clearer the signal (the measured purportedly causal factor), because it will be more easily detectable against the background of measurement errors and other unaccounted complications.  Yet another way of saying this is that these factors really are causes in the scientific sense.
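
To make that 'even out'/'signal to noise' assumption concrete, here is a minimal simulation sketch (ours, not from any actual study; all numbers are illustrative): it asks how large each study group must be before a "large" 30% relative risk, on a 1% baseline disease risk, is reliably detected.

```python
# A minimal sketch of the "bigger sample, clearer signal" assumption.
# All numbers are illustrative, not from any real study.
import math
import numpy as np

rng = np.random.default_rng(0)

BASELINE_RISK = 0.01      # 1% disease risk in the unexposed group
RELATIVE_RISK = 1.30      # the exposed group's risk is 30% higher
N_SIMULATED_STUDIES = 500

def detection_rate(n_per_group):
    """Fraction of simulated cohort studies reaching p < 0.05 (two-sided z-test)."""
    hits = 0
    for _ in range(N_SIMULATED_STUDIES):
        cases_unexposed = rng.binomial(n_per_group, BASELINE_RISK)
        cases_exposed = rng.binomial(n_per_group, BASELINE_RISK * RELATIVE_RISK)
        pooled = (cases_unexposed + cases_exposed) / (2 * n_per_group)
        if pooled in (0.0, 1.0):
            continue  # no events at all; nothing to test
        se = math.sqrt(pooled * (1 - pooled) * 2 / n_per_group)
        z = (cases_exposed - cases_unexposed) / (n_per_group * se)
        hits += abs(z) > 1.96
    return hits / N_SIMULATED_STUDIES

for n in (1_000, 10_000, 50_000, 100_000):
    print(f"{n:>7} per group: signal detected in {detection_rate(n):.0%} of studies")
```

Under this toy model, roughly one study in ten detects the effect at 1,000 per group, about half at 10,000, and detection becomes near-certain only in the tens of thousands--and that is under the idealized assumption that the cause is real, constant, and measured without error, none of which holds in practice.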

The premise for the "more research is needed" claim is that by such enlarged studies we can slice and dice these putative causal factors to find out what in our diet, or, say, in the proverbial Mediterranean diet, are really responsible. Is it the skin or the meat of the almond?  Is it the tomato or some element of olive oil?  Reservatrol in red wine?  (And a lot of people with a lot invested in reservatrol certainly hope it's that.)

From philosophy of science and practical points of view alike, we can be pure empiricists: it doesn't matter what aspect of a sensible diet--and we all have a general idea what that means--is specifically responsible.  There is a time in science to accept a kind of ignorance once we know the main facts: even Isaac Newton didn't see a need to know what gravity 'is', once he knew how to understand its effects.

If we follow common sense, the problem will be eased.  We can turn our attention elsewhere.  Indeed, the cases of a disease like heart disease that remain, even after we do that, may be those that really are genetic or due to some specific, strong risk factor.

Why not?  Why not.
But there are many other reasons to say it doesn't matter, and that we don't need to chase the rainbows of this kind of hyper-reductionism.  And these reasons apply widely to biological causation, and to many other common diseases for which similar "more research is needed" demands are made.  These reasons are partly practical, and partly reflect an understanding of how evolution works and how genes affect traits:
1.  Traits like heart disease arise after decades of normal life that is in fact made possible by our plentiful lifestyle.
2.  For complex diseases, a cause whose effects don't appear for decades is a very weak one, with small biological effects, and so will naturally be hard to find.
3.  The effects can largely be avoided: environmental exposures, rather than genes, contribute the bulk of risk, but individual environmental components usually have very weak causal effects.
4.  Environmental (even more than genomic) components are notoriously hard and very expensive to identify, or to measure accurately in practice.
5.  Most environmental effects seem to be beneficial in some contexts and harmful in others (lowering the risk of some diseases while raising it for others).
6.  Environmental effects are estimated retrospectively, by studies like case-control comparisons.  But what we want is to predict the effects of exposures, and we don't know what the mix of future exposures will be.  Changes in other factors can shift the risk associated with a test factor--environment or gene--so much that the factor's specific risk may be wildly different.  Yet future environments cannot be predicted, even in principle.
7.  Genotypic risk (the target of 'personalized genomic medicine' and GWAS research) is based on many different genes.  Common variants in those genes will stay around, but many or most causal risk factors are (a) so weak that they depend thoroughly on the context of the rest of the person's genome, or (b) so rare that they will not even be present in the future population.  (c) New variants arise by mutation all the time, so that yesterday's genomic complexes are not tomorrow's.  Furthermore, because of sexual reproduction, segregation, and recombination (Mendel's principles), the same genotypes at the many contributing loci will not arise in new samples.
8.  As a result, especially if rare variants are a major part of the story, as they seem to be, we can't assume that the background genotypes of a given test gene will 'even out' even in large samples.
9.  Genetic as well as environmental risk factors are pleiotropic--they affect many different traits.  That is why they won't necessarily have an overall good or bad effect, but will have differing effects on different traits.
10.  Even large relative risks are typically very small absolute risks.  A 30% increase in risk (quite a large increase as these things go) will only change a typical disease risk of 1% to 1.3%.  The other uncertainties and inaccuracies mentioned in this list mean that such risk effects aren't stable and aren't worth the cost of identifying them, if we can get even better results from general lifestyle changes, which require no technical or clinical costs, as we know very well.  (A short numeric sketch of this point and the next follows this list.)
11.  Small risks would be very difficult to estimate accurately even if the estimates were stable, raising the question of whether they are worth estimating at all, given the issues above that cloud the picture.
12.  If, as we are repeatedly told, translation of research into public health improvement is the objective, we should take the overall-lifestyle rather than the nit-picking approach.  We don't need to identify each sub-sub-sub-element of, say, the Mediterranean or a moderation diet, if we know that such a diet can have major effects.
13.  Elimination of one disease just leads to an increase in others, sometimes of later onset perhaps, but often with longer, more miserable morbidity, higher treatment costs to the health care system, and so on.
14.  And the corollary: elimination of a dietary component identified as a risk factor for one disease (alcohol and breast cancer, e.g.) might mean elimination of a factor identified as protective against another (red wine and heart disease).
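
Here is the promised numeric sketch of points 10 and 11 (a back-of-the-envelope calculation using the post's own illustrative figures; the interval below is the standard log-based 95% confidence interval for a risk ratio, not a result from any actual study):

```python
# Point 10: a headline-grabbing relative risk is a tiny absolute change.
# Point 11: even a big study estimates it shakily. Illustrative numbers only.
import math

baseline_risk = 0.01          # typical disease risk: 1%
relative_risk = 1.30          # "30% increased risk!" in the headlines
elevated_risk = baseline_risk * relative_risk

print(f"risk rises from {baseline_risk:.1%} to {elevated_risk:.2%} "
      f"(absolute increase: {elevated_risk - baseline_risk:.2%})")

n = 10_000                                    # people per group
cases_exposed = round(n * elevated_risk)      # 130 cases expected
cases_unexposed = round(n * baseline_risk)    # 100 cases expected
rr = (cases_exposed / n) / (cases_unexposed / n)
se_log_rr = math.sqrt(1/cases_exposed - 1/n + 1/cases_unexposed - 1/n)
ci_low = math.exp(math.log(rr) - 1.96 * se_log_rr)
ci_high = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"estimated risk ratio = {rr:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
```

The absolute increase is three cases per thousand people, and even with 10,000 people per group the interval runs from essentially 1.0 (no effect at all) to about 1.7--as fragile as point 11 says.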
So let's fund real problems, ones we can actually solve
There are many really important problems--antibiotic resistance, the need for vaccines, other infectious diseases, and truly genetic diseases--that would be very suitable for focused (not 'omic') research, and where the resources would be well spent rather than thrown into the expensive chase to estimate micro-risks.  Of course, it is the lobbying of the university research system that keeps funds pouring into the latter.

Many studies that have been going on for decades are long past their shelf-date.  The first wave of them was valuable because we didn't know better, but now we know that their findings are not stable or even correct (which doesn't mean the studies were badly done in any way--quite the contrary--it's that they were basically done well that makes the point!).  But now we have to face what we know, and ask why these studies didn't generate definitive or clear answers.  Even the Framingham heart study, certainly one of the most prominent of them, shows this.  That study's primary findings, about factors such as cholesterol, are now coming under various types of question to a surprising extent.  Whether or to what extent we have been mistaken in our LDL/HDL/etc. obsessions about cholesterol, the study that decades ago set us onto the problem has long since stopped yielding sufficient new important information.

This is only one of the most prominent among a number of such studies, and the new onslaught of biobanks is the next generation of them.  They either long ago found their main result, or never found a cogent, durable result, or showed only that definitive results are elusive; in any case, they have reached the point of costly diminishing returns.

The time has come for resources to go elsewhere.  

---------------------------------
**A brief digression on epidemiology's failings: first, see this nice discussion by a statistician of the idea that "90% of epidemiology is wrong." Then, consider that there are many reasons that studies can be 'wrong.'  In part it depends on the definition of 'wrong'--methodology can be inapt or misapplied, or statistical analysis can be in technical error or wrongly applied or interpreted, so that the results don't in fact accurately answer the question posed by the study. 

Or, methodology and statistics are first-rate but the study isn't replicable, which, by definition, means 'wrong', even if the results accurately reflect the study population--that is, they're 'correct', but it's a weird sample and for that reason can't be replicated.  This is often true of genetic studies, because families or isolated populations or populations with a history of isolation might have their own causal mutation/s.  Or, samples can simply be different, with no errors involved.
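
To illustrate that population-heterogeneity point in code (a toy sketch of ours, with made-up variant frequencies and risks): a variant 'V1' that is genuinely causal in one population can be a mere neutral marker in another, where a different variant drives the same disease, so a perfectly sound association study in the first won't replicate in the second.

```python
# A toy sketch of non-replication due to population-specific causal variants.
# Frequencies and risks are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
N = 5_000  # people sampled per population

def odds_ratio(carrier, disease):
    """Simple 2x2 odds ratio for carrying a variant vs. disease status."""
    a = np.sum(carrier & disease)    # carriers with disease
    b = np.sum(carrier & ~disease)   # carriers without disease
    c = np.sum(~carrier & disease)   # non-carriers with disease
    d = np.sum(~carrier & ~disease)  # non-carriers without disease
    return (a * d) / (b * c)

# Population A: V1 carriers have triple the baseline risk.
carrier_a = rng.random(N) < 0.20
disease_a = rng.random(N) < np.where(carrier_a, 0.15, 0.05)
print(f"population A, testing V1: OR = {odds_ratio(carrier_a, disease_a):.2f}")

# Population B: V1 is equally common but neutral there; risk is driven by
# a different, unmeasured variant, so the same well-designed test finds nothing.
carrier_b = rng.random(N) < 0.20
disease_b = rng.random(N) < 0.07
print(f"population B, testing V1: OR = {odds_ratio(carrier_b, disease_b):.2f}")
```

Both studies are done 'correctly'; the second simply samples a population in which the first study's finding was never true.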

Or, because so many complex diseases are due to environmental factors, or to genetic interaction with environmental factors, the environment will change, and what was once causal no longer is, or vice versa--type 2 diabetes, e.g., is now endemic in Native American populations but was essentially unknown before WWII.  So, 'wrong' need not imply culpable failings by the investigators, but just that the subject matter is challenging--and that preconceptions or theoretical assumptions can subtly affect the chosen design or implementation, etc.
