Back on June 25, the New York Times ran a story by Carl Zimmer about the role that the scarcity of confirmation and refutation studies plays in how science proceeds these days. We're late blogging about it, but we didn't want to let it slide. The story highlights two recent papers that were heavily criticized, and describes what happened after the criticisms were in. One was the paper claiming that arsenic can substitute for phosphorus in bacterial DNA, suggesting that other forms of life might exist, perhaps extraterrestrially; the other was the extrasensory perception paper, claiming experimental proof that ESP exists. We blogged about both when they were published (here and here).
The arsenic bacteria paper was heavily criticized for its lack of methodological rigor, but, as the Times story points out, no one has the time, funding, or inclination to demonstrate the problems with new experiments; the paper has not been retracted, which means the results still stand. Three papers reporting a failure to reproduce the results of the ESP paper, on the other hand, have since been submitted to, and rejected by, the journal in which the original claims were made; the journal says it has a longstanding policy of not publishing replication studies. So, again, those results still stand.
Indeed, the point of the Times article is that journals want to publish New Discoveries, not replication or even refutation studies. The latter are not sexy. There are many ways a true study might fail to be confirmed by another study, and it can be hard to tease out why. And ours is not a me-too culture. Investigators, given their career (and funding) circumstances, don't want to spend time and resources trying to replicate a result. If they do, we'll ask, "Why did you bother to repeat somebody else's work?" If they don't, we'll say, "So what? That study was suspicious anyway!"
The problem of incorrect studies surviving far past their sell-by date is widespread in science, but it is usually not a matter of malfeasance or even incompetence. It is a matter of our go-too-fast, claim-too-much, self-promoting careerism in science. We've posted on this many times. John Ioannidis a few years ago made some statistical arguments about why ballyhooed results in major journals are usually wrong, and various authors have shown why the strength-of-effect claims in such studies are systematically overestimated.
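For a concrete feel for that kind of argument, here is a minimal sketch of the sort of calculation involved; the numbers are illustrative assumptions, not Ioannidis's own figures. The point is simply that when few of the hypotheses a field tests are actually true, most "significant" findings will be false, with no fraud or bias required.

```python
# A minimal sketch of an Ioannidis-style argument (hypothetical numbers).
# The positive predictive value (PPV) of a "significant" finding depends on
# the prior odds R that a tested hypothesis is true, the statistical power
# (1 - beta), and the significance threshold alpha.

def ppv(prior_odds, power=0.8, alpha=0.05):
    """Probability that a statistically significant finding is actually true."""
    true_positives = power * prior_odds   # true effects that reach significance
    false_positives = alpha               # null effects that reach significance
    return true_positives / (true_positives + false_positives)

# As the prior odds of a true hypothesis fall, so does the chance that a
# published positive result is real:
for odds in (1.0, 0.1, 0.01):
    print(f"prior odds {odds:>4}: PPV = {ppv(odds):.2f}")
# prior odds  1.0: PPV = 0.94
# prior odds  0.1: PPV = 0.62
# prior odds 0.01: PPV = 0.14
```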
When claims of this kind are trumpeted and believed, there can be major shifts in resources, as others rush to extend the findings to their own areas of expertise. Scientists, caught in careerist traps in which funding and publication depend largely on what's trendy, cannot be faulted for acting this way. But such lemming behavior takes funds away from other areas that might be much more informative or interesting. The whole system needs to slow down and sober up, if we want what gets published to have a stronger basis than it seems to now.
Examples like those cited in the article are the more visible ones, which is why they are known to Zimmer and referred to (and recognized by more readers, because the original stories got so much ink). But there is every reason to think these are the proverbial tip of the iceberg. Journals publish papers on genetic effects every day, for example, and around 140 per week deal with the genetics of human disease alone. What fraction of these, and which specific findings, are wrong, for one (legitimate) reason or another?
Science correctly makes a big deal about actual fraud, though cases of fraud seem (we believe, quite correctly) to be fortunately very rare. But uncorrected, incorrect, hasty science does seem to be a real problem, with serious consequences for science itself.