
Friday, May 18, 2012

Non-replicability in science: your antelope for the day

A piece in the May 17 Nature supports one of Ken's favorite observations, something he says while wearing his Anthropologist's hat -- "Journal articles are just an academic's antelope for the day."  We're still just hunter/gatherers -- our published papers are, more often than not, nothing more than the way we feed ourselves.  Our basket of berries -- eaten today, droppings tomorrow.

Blackbuck male, females; Photo from Wikimedia, Mr Raja Purohi
Ed Yong, in "Replication studies: Bad copy," reports that most published studies can't be replicated.  This is something we often talk about with respect to genetic studies, and there are many reasons for this that are specific to genetic data, but apparently it's even more rampant in psychology, for reasons also specific to the field.  

And there is the notorious problem that 'negative' results are rarely published.  They're not glamorous and won't get you tenure, even though some of the most important findings in science are 'negative' ones, steering work toward valid theories and hypotheses rather than dreamt-of ones.  Clinical trials are a major example, but less noticed are ephemeral natural selection stories about evolution.

A paper published last year claiming support for extrasensory perception, or psi, for example, produced a major kerfuffle (we blogged about it at the time).  The aftermath has been no less interesting, and informative about the world of publishing, as researchers who tried to replicate the findings but failed also failed to find publishers for their results.  This led to a lot of discussion about the implications of negative results not being published, a discussion that has flared up frequently in academia, as well it should, although we're no closer to resolving it than ever.
“There are some experiments that everyone knows don't replicate, but this knowledge doesn't get into the literature,” says [Eric-Jan] Wagenmakers [mathematical psychologist at the University of Amsterdam]. The publication barrier can be chilling, he adds. “I've seen students spending their entire PhD period trying to replicate a phenomenon, failing, and quitting academia because they had nothing to show for their time.”
But we'll leave that issue for another time.

The question of why studies so often aren't replicable is a different, if related, one.  And it's one that The Reproducibility Project, a large-scale collaboration of scientists from around the world, is addressing head on, as they attempt to replicate every study published in three major psychology journals in 2008, as described last month in the Chronicle of Higher Education.  
For decades, literally, there has been talk about whether what makes it into the pages of psychology journals—or the journals of other disciplines, for that matter—is actually, you know, true. Researchers anxious for novel, significant, career-making findings have an incentive to publish their successes while neglecting to mention their failures. It’s what the psychologist Robert Rosenthal named “the file drawer effect.” So if an experiment is run ten times but pans out only once, you trumpet the exception rather than the rule. Or perhaps a researcher is unconsciously biasing a study somehow. Or maybe he or she is flat-out faking results, which is not unheard of. 
According to Yong, the culture in psychology is such that experimental designs that "practically guarantee positive results" are perfectly acceptable.  This is one of the downsides of peer review -- when all your peers are doing it, good scientific practice or not, you can get away with it, too.
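The "file drawer effect" in Rosenthal's quote is easy to quantify.  A quick sketch (our illustration, using only the conventional 0.05 significance threshold, not numbers from any of the studies discussed): if an experiment on a true-null hypothesis is run ten times, chance alone will hand the researcher at least one publishable "success" about 40% of the time.

```python
# Our illustration of the 'file drawer effect': run a true-null
# experiment ten times at the conventional alpha = 0.05 and see how
# often chance alone produces at least one 'significant' result.
alpha = 0.05   # false-positive rate of a single experiment
runs = 10      # number of times the experiment is run

# P(at least one false positive) = 1 - P(no false positives in any run)
p_at_least_one = 1 - (1 - alpha) ** runs
print(f"P(at least one false positive in {runs} runs): {p_at_least_one:.2f}")
# prints 0.40
```

Trumpet the one "success," file the other nine runs in the drawer, and a 5% fluke becomes a finding.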
And once positive results are published, few researchers replicate the experiment exactly, instead carrying out 'conceptual replications' that test similar hypotheses using different methods. This practice, say critics, builds a house of cards on potentially shaky foundations.
So, if a study can't be replicated exactly (or as exactly as it can be), it's possibly because the methods were not described in enough detail for the study to be replicated.  Or, and this is a problem certainly not confined to psychology, the effect was small and significant by chance, as epidemiologist John Ioannidis suggested in a paper published in 2005 that garnered a lot of attention for saying most Big-Splash studies are false.  He explained this in statistical terms, having to do with bias in significance levels of studies of new hypotheses and similar issues.
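Ioannidis's point can be made with simple arithmetic.  A sketch (the numbers below are our illustrative assumptions, not his): if only a small fraction of tested hypotheses are actually true, and studies are underpowered, then most results that clear the significance bar are false positives.

```python
# Illustrative assumptions (ours, not Ioannidis's): few tested
# hypotheses are true, and studies are underpowered.
alpha = 0.05   # significance threshold (false-positive rate)
power = 0.35   # chance an underpowered study detects a real effect
prior = 0.05   # fraction of tested hypotheses that are actually true

true_pos = prior * power           # 'significant' results that are real
false_pos = (1 - prior) * alpha    # 'significant' results by chance
ppv = true_pos / (true_pos + false_pos)
print(f"Share of 'significant' findings that are real: {ppv:.2f}")
# prints 0.27 -- i.e., under these assumptions, most are false
```

Add the publication biases described above, which inflate the effective false-positive rate, and the share of real findings only drops further.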

As the Chronicle story says about non-replicability:
The researchers point out, fairly, that it’s not just social psychology that has to deal with this issue. Recently, a scientist named C. Glenn Begley attempted to replicate 53 cancer studies he deemed landmark publications. He could only replicate six. Six! Last December I interviewed Christopher Chabris about his paper titled “Most Reported Genetic Associations with General Intelligence Are Probably False Positives.” Most!
So, psychology is under attack.  We blogged not long ago about an op-ed piece in the New York Times by two social scientists calling for an end to the insistence that the social sciences follow any scientific method.  Enough with the physics envy, they said, we don't do physics.  Thinking deeply is the answer.  But, would giving these guys free rein to completely make stuff up really be the solution?  Well, it might just be, if their peers agree. But, let's not just pick on psychology.  The problem is rampant throughout the sciences. 

Meanwhile, the motto seems to be:  Haste makes....nutrition for scientists!

Saturday, April 4, 2009

Credible research

Marion Nestle, Professor of Nutrition and Food Studies at NYU, was on campus last week to speak, sponsored by the Penn State Rock Ethics Institute. Nestle is the author of a number of popular books about the politics of food, and an outspoken critic of the influence of the food industry on how and what we eat, and thus, on the health of the American population. She's particularly concerned with obesity in children and the role of advertising in promoting the consumption of excess calories even in children as young as two. She believes that any money researchers take from the food industry is tainted money. Her point is that it's impossible for a scientist to do unbiased research, however well-intentioned, if the money comes from a funder that stands to gain from the findings. Indeed, it has been found that results are significantly more likely to favor the funder when research is paid for by industry.

The same can be, and has been, said about the pharmaceutical industry and drug research, of course, and, though we don't know the particulars, it has to be equally true of chemistry or rehab or finance or fashion design. But, as we hope our posts about lobbying last week make clear, the problem of potentially tainted research doesn't start and stop with the involvement of money from industry. Research done with public money can be just as indebted to vested interests, its credibility equally questionable. It can feel somewhat different, because researchers tend not to feel indebted to the actual source of the money -- the taxpayer -- but research done on the public dollar can be just as likely to confirm the idea or approach the funding agency supports.

Even when money isn't the motivation, there are many reasons that research might not be free from bias -- the rush to publish, the desire to be promoted or get a pay raise, commitment to particular results, prior assumptions, unwillingness to be shown wrong. Many prominent journals won't publish negative results, and of course journals and the media like to tout, if not exaggerate, positive findings. There is pressure to produce positive findings -- and quickly -- to use in getting one's next grant (and salary). This is one reason it is commonly said that one applies for funds to do what's already been done. It makes science very conservative and incremental when careers literally depend on the march of funding, whatever its source.

Besides the pressure to conform and play it safe, there is a subtler problem: such bias doesn't necessarily make the science wrong, but it does make it more difficult to know how or where it's most accurate and worthy. And it can stifle innovative, truly creative thinking. Some of the most important results are likely to be negative results, because they can tell us what isn't true or important, and guide us to what is. But that isn't necessarily what sponsors, especially corporate sponsors, want, and it isn't what journals are likely to publish.

So, while it's essential, as Marion Nestle and others consistently point out, to eliminate the taint of vested interest from research, it's impossible to rid research of all possible sources of bias. And the reality is, at least for now, that it's only those most secure in their jobs who can speak out about these issues (as Nestle said, she has tenure and doesn't need money to do her work, so she can say anything she wants to) -- and they don't have the leverage to change the biases built into our bottom-line, market- and career-driven system.