That time hasn't come yet, and partly that's because we professors have to produce new studies to keep our grants and our jobs, and we do what we know how to do. But there are deeper reasons, without obvious answers, and they're important to you if you care about what science is, or what it should be--or what you should be paying for.
Last Thursday, we discussed some aspects of the problem that arises when a set of suspected causes seems to work only by affecting the probability of an outcome we're interested in. The cause may truly be deterministic, but we just don't understand it well enough, so we must view its effect in probability terms. That means we have to study a sample of many individuals exposed to the risk factor we're interested in, in the same way you have to flip a coin many times to see if it's really fair--if its probability of coming up Heads is really 50%. You can't just look at the coin or flip it once.
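To make the coin analogy concrete, here's a minimal simulation (Python is our choice here purely for illustration) of how often a perfectly fair coin can look biased in a small sample:

```python
import random

random.seed(1)

def fraction_heads(n, p=0.5):
    """Simulate n flips of a coin with P(heads) = p; return the observed fraction."""
    return sum(random.random() < p for _ in range(n)) / n

# How often does a truly fair coin *look* biased (>= 60% heads)?
for n in (10, 100, 1000):
    trials = [fraction_heads(n) for _ in range(2000)]
    misleading = sum(f >= 0.6 for f in trials) / len(trials)
    print(f"n={n:>4}: P(>= 60% heads | fair coin) ~ {misleading:.3f}")
```

With only ten flips, a fair coin shows 60% or more heads over a third of the time; the estimate settles near the truth only as the sample grows. That is exactly why single small studies of weak risk factors can mislead.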
Nowadays, reports are often of meta-analyses, in which, because no single study is believed to be definitive (i.e., reliable), we pool many studies and analyze their net result, to achieve sample sizes adequate to see what risk is really associated with the risk factor. It should be a warning in itself that the samples of many studies (funded because investigators claimed, and reviewers expected, them to be adequate to the task) are now viewed as hopelessly inadequate. Maybe it's a warning that the supposed causes are weak to begin with--too weak for this kind of approach to be very meaningful?
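For readers who haven't seen the mechanics, the standard fixed-effect approach pools studies by weighting each estimate by the inverse of its variance, so more precise studies count more. A minimal sketch--the effect sizes and standard errors below are invented for illustration, not taken from any real meta-analysis:

```python
import math

# Inverse-variance (fixed-effect) pooling: each study's estimate is
# weighted by 1/SE^2, so more precise studies count more.
# Hypothetical (effect estimate, standard error) pairs, e.g. log relative risks:
studies = [(0.30, 0.15), (0.10, 0.20), (-0.05, 0.25)]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f}  (95% CI: +/- {1.96 * pooled_se:.3f})")
```

Note what the formula cannot do: if the individual estimates are biased, or the studies measure subtly different things, the pooled number inherits those problems--wrapped in a deceptively narrow confidence interval.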
Why, after so many studies, don't we know whether HDL cholesterol does or doesn't protect against heart disease, or antioxidants against cancer, whether coffee or obesity is a risk factor, how to teach language or math, how to prevent student misbehavior, or whether criminality is genetic (or is a 'disease')? These are just a few of countless examples from the daily news, and you are paying for this: study after study without conclusive results, every day!
There are several reasons. These are serious issues, worthy of the attention of anyone who actually cares about what is true of the world we live in and its evolution. The results matter to our society as well as to our basic understanding of that world.
So, then, why are so many results not replicable?
Here are at least some reasons to consider:
1. If no one study is trustworthy, why on earth would pooling them be? Overall, when this is the situation, the risk factor is simply not a major one!
2. We are not defining the trait of interest accurately
3. We are always changing the definition of the trait or how we determine its presence or absence
4. We are not measuring the trait accurately
5. We have not identified the relevant causal risk factors
6. We have not measured the relevant risk factors accurately
7. The definition of the risk factors is changing or vague
8. The individual studies are each accurate, but our understanding of risk is in error
9. Some of the studies being pooled are inaccurate
10. The first study or two that indicated risk were biased (see our post on replication) and should be removed from the meta-analysis... and if that were done, the supposed risk factor would show little or no risk.
11. The risk factor's effects depend on its context: it is not a risk all by itself
12. The risk factor just doesn't have an inherent causal effect: our model or ideas are simply wrong
13. The context is always changing, so the idea of a stable risk is simply wrong
14. We have not really collected samples that are adequate for assessing risk (they may not be representative of the population at risk)
15. We have not collected large enough samples to see the risk through the fog of measurement error and multiple contributing factors
16. Our statistical models of probability and sampling are not adequate or are inappropriate for the task at hand (usually, the models are far too simplified, so that at best they can be expected only to generate an approximate assessment of things)
17. Our statistical criteria ('significance level') are subjective, but we are trying to understand an objective world
18. Some causes that are really operating are beyond what we know or are able to measure or observe (e.g., past natural selection events)
19. Negative results are rarely published, so meta-analyses cannot include them, and a true measure of risk is unattainable (the sketch after this list shows how badly this can inflate a pooled estimate)
20. The outcome has numerous possible causes; each study picks up a unique, real one (familial genetic diseases, say), but it won't be replicable in another population (or family) with a different cause that is just as real
21. Population-based studies can never in fact be replicated because you can never study the same population--same people, same age, same environmental exposures--at the same time, again
22. The effect of risk factors can be so small--but real--that it is swamped by confounding, unmeasured variables.
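Points 15 and 19 can be illustrated with a small simulation: many underpowered studies of a weak but real effect are run, and only the 'statistically significant' positive ones get 'published' and are thus available for pooling. The design and numbers are our own hypothetical assumptions, not drawn from any actual literature:

```python
import math
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1   # a weak but real effect, in standard-deviation units
N_PER_ARM = 50      # a typically underpowered study
N_STUDIES = 500     # hypothetical: 500 groups each run one small study

published = []
for _ in range(N_STUDIES):
    exposed = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_ARM)]
    control = [random.gauss(0.0, 1) for _ in range(N_PER_ARM)]
    diff = statistics.mean(exposed) - statistics.mean(control)
    se = math.sqrt(statistics.variance(exposed) / N_PER_ARM
                   + statistics.variance(control) / N_PER_ARM)
    if diff / se > 1.96:  # only 'significant' positive results see print
        published.append(diff)

print(f"true effect:             {TRUE_EFFECT}")
print(f"studies reaching print:  {len(published)} of {N_STUDIES}")
print(f"mean published estimate: {statistics.mean(published):.2f}")
```

A meta-analysis of the published studies alone recovers an effect several times the true one, with complete internal consistency--no single study need be fraudulent for the pooled answer to be badly wrong.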
This situation--and our list is surely not exhaustive--is typical and pervasive in observational, as opposed to experimental, science. (For the same kinds of problems, lists just as long exist to explain why some areas even of experimental science don't do much better!)
A recent Times commentary, and a post of ours, discussed these issues. The commentary says that we need to make social science more like experimental physical science, with better replications, study designs, and the like. But that may be the wrong advice. It may simply lead us down an endless, expensive path that fails to recognize the problem. The social sciences already consider themselves to be real science, and by presenting peer-reviewed work that way, they've got their fingers as deeply entrenched in the funding pot as, say, genetics does.
Whether coffee is a risk factor for disease, whether certain behaviors or diseases are genetically determined, why some trait evolved in our ancestry... these are all legitimate questions whose non-answers suggest that there may be something deeply wrong with our current methods and ideas about science. We regularly comment on the problem. But there seems to be no real recognition that there's an issue, given the forces that pressure scientists to continue business as usual--which means that we continue to do more, and more expensive, studies of the same things.
One highly defensible solution would be to cut support for such non-productive science until people figure out a better way to view the world, and/or to require scientists to be accountable for their results. No more "I write the significance section of my grants with my fingers crossed behind my back," knowing it's not the whole truth (while the reviewers, who do the same themselves, know it too).
As it is, resources go to more and more studies of the same things that yield basically little; students flock to large university departments that teach them how to do it, too; journals and funders build careers on reporting the results; and policy makers follow the advice. Every day, on almost any topic, you will see in the news that "studies show that..."
This is no secret: we all know the areas in which the advice leads little if anywhere. But politically, we haven't got the nerve to make such cuts, and in a sense we would be lost if we had nobody assessing these issues. What to do is not an easy call, even if there were the societal will to act.