[Image: Blackbuck male and females; photo from Wikimedia, Mr Raja Purohi]
And there is the notorious problem that 'negative' results are not published very often. They're not glamorous and won't get you tenure, even though some of the most important findings in science are 'negative' ones, when they steer work toward valid rather than merely dreamt-of theories or hypotheses. Clinical trials are a major example, but less noticed are the ephemeral natural selection stories about evolution.
A paper published last year claiming support for extrasensory perception, or psi, for example, produced a major kerfuffle (we blogged about it at the time). The aftermath has been no less interesting, and informative about the world of publishing, as researchers who tried to replicate the findings but failed also failed to find publishers for their results. This led to a lot of discussion about the implications of negative results not being published, a discussion that has flared up frequently in academia, as well it should, though we're no closer than ever to resolving it.
“There are some experiments that everyone knows don't replicate, but this knowledge doesn't get into the literature,” says [Eric-Jan] Wagenmakers [mathematical psychologist at the University of Amsterdam]. The publication barrier can be chilling, he adds. “I've seen students spending their entire PhD period trying to replicate a phenomenon, failing, and quitting academia because they had nothing to show for their time.”

But we'll leave that issue for another time.
The question of why studies so often aren't replicable is a different, if related, one. And it's one that The Reproducibility Project, a large-scale collaboration of scientists from around the world, is addressing head on, as they attempt to replicate every study published in three major psychology journals in 2008, as described last month in the Chronicle of Higher Education.
For decades, literally, there has been talk about whether what makes it into the pages of psychology journals—or the journals of other disciplines, for that matter—is actually, you know, true. Researchers anxious for novel, significant, career-making findings have an incentive to publish their successes while neglecting to mention their failures. It’s what the psychologist Robert Rosenthal named “the file drawer effect.” So if an experiment is run ten times but pans out only once, you trumpet the exception rather than the rule. Or perhaps a researcher is unconsciously biasing a study somehow. Or maybe he or she is flat-out faking results, which is not unheard of.

According to Yong, the culture in psychology is such that experimental designs that "practically guarantee positive results" are perfectly acceptable. This is one of the downsides of peer review -- when all your peers are doing it, good scientific practice or not, you can get away with it, too.
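The file-drawer arithmetic is easy to make concrete. Here is a minimal simulation sketch in Python; the lab counts, sample sizes, and ten-tries-per-lab setup are made-up illustrative parameters, not anyone's actual data. When the effect being chased doesn't exist at all, a lab that quietly reruns the experiment up to ten times and reports only a 'hit' will have something publishable roughly 40% of the time.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_labs, n_runs, n_subjects = 1000, 10, 30  # hypothetical numbers
published = 0

for lab in range(n_labs):
    for run in range(n_runs):
        # Two groups drawn from the SAME distribution: the null is true,
        # so any "significant" difference is a false positive.
        a = rng.normal(0, 1, n_subjects)
        b = rng.normal(0, 1, n_subjects)
        _, p = stats.ttest_ind(a, b)
        if p < 0.05:
            published += 1
            break  # file drawer: report the hit, forget the misses

print(f"{published / n_labs:.0%} of labs 'find' an effect that isn't there")
# With up to 10 tries at alpha = 0.05, about 1 - 0.95**10, or roughly 40%,
# of labs end up with a publishable false positive.
```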
And once positive results are published, few researchers replicate the experiment exactly, instead carrying out 'conceptual replications' that test similar hypotheses using different methods. This practice, say critics, builds a house of cards on potentially shaky foundations.

So, if a study isn't replicated exactly (or however exactly it can be), it's possibly because the methods were not described in enough detail for the study to be replicated. Or, and this is a problem certainly not confined to psychology, the effect was small and significant by chance, as epidemiologist John Ioannidis suggested in a paper published in 2005 that garnered a lot of attention for saying most big-splash studies are false. He explained this in statistical terms, having to do with bias, the significance levels of studies of new hypotheses, and similar issues.
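For the curious, Ioannidis's core argument reduces to a short formula for the positive predictive value (PPV) of a statistically significant finding: it depends on the prior odds R that the tested hypothesis is true, the significance level alpha, the power 1 - beta, and a bias term u. Here is a sketch of that calculation in Python; the formula follows the 2005 paper, but the parameter values below are illustrative choices of ours, not his.

```python
def ppv(R, alpha=0.05, beta=0.2, u=0.0):
    """Positive predictive value of a 'significant' finding, following
    Ioannidis (2005). R is the prior odds the hypothesis is true,
    1 - beta is power, and u is the fraction of would-be-negative
    results that bias converts into positives."""
    true_pos = (1 - beta) * R + u * beta * R
    false_pos = alpha + u * (1 - alpha)
    return true_pos / (true_pos + false_pos)

# A well-powered test of a long-shot hypothesis (1-in-20 prior odds):
print(f"{ppv(R=0.05):.2f}")         # ~0.44: worse than a coin flip, no bias needed
# Add a modest amount of bias and it gets much worse:
print(f"{ppv(R=0.05, u=0.2):.2f}")  # ~0.15
```

The point of the exercise: when researchers test many novel, unlikely hypotheses, even honest use of the 0.05 threshold means most 'significant' findings are false, and any bias compounds the problem.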
As the Chronicle story says about non-replicability:
The researchers point out, fairly, that it’s not just social psychology that has to deal with this issue. Recently, a scientist named C. Glenn Begley attempted to replicate 53 cancer studies he deemed landmark publications. He could only replicate six. Six! Last December I interviewed Christopher Chabris about his paper titled “Most Reported Genetic Associations with General Intelligence Are Probably False Positives.” Most!

So, psychology is under attack. We blogged not long ago about an op-ed piece in the New York Times by two social scientists calling for an end to the insistence that the social sciences follow any scientific method. Enough with the physics envy, they said, we don't do physics. Thinking deeply is the answer. But would giving these guys free rein to completely make stuff up really be the solution? Well, it might just be, if their peers agree. But let's not just pick on psychology; the problem is rampant throughout the sciences.
Meanwhile, the motto seems to be: Haste makes... nutrition for scientists!