We commented a week or so ago on a piece in Science about misconduct in science. That piece emphasized, as do its two subjects, Drs Ferric Fang and Arturo Casadevall, who have done so much to highlight the issue, that outright deliberate fraud is not the only problem. Indeed, an opinion piece in the latest issue of The Lancet, "What is the purpose of medical research?"; a story in Science in mid-January about a hugely expensive research debacle; and the latest chapter in the arsenic-life story all make this abundantly clear.
From The Lancet:
About US$160 billion is spent every year on biomedical research. In a 2009 Viewpoint, Iain Chalmers and Paul Glasziou estimated that 85% of research is wasteful or inefficient, with deficiencies in four main areas: is the research question relevant for clinicians or patients? Are design and methods appropriate? Is the full report accessible? Is it unbiased and clinically meaningful?

They go on:
When asked about the purpose of medical research most people would hopefully reply: to advance knowledge for the good of society; to improve the health of people worldwide; or to find better ways to treat and prevent disease. The reality is different. The research environment, with its different players, is now much less conducive to thinking about such noble goals.

The journal didn't state an obvious underlying aspect of this, which is that scientists have strong professional and vested interests in doing what they know how to do. Finding nothing is not highly rewarded by the system, yet claiming more than one finds (much less faking data) is not nearly so strongly sanctioned. Scientists have to earn a living by the kind of hustling that the system we have today demands.
The journal says it will examine these issues further in an upcoming series, so stay tuned. Yes, The Lancet is a journal largely for medical practitioners, and if you aren't one you might think this doesn't apply to you, but the issues are indeed generalizable -- too many negative results are never published, clinical trials never reported, studies poorly conceived and carried out. (As UK physician and writer Ben Goldacre puts it in his campaign to get all clinical trials published: if you flipped a coin 50 times, reported only the flips that came up heads, and didn't report the total number of flips, the record would look uniformly positive; this matters when physicians come to decide which drugs or other treatments to use.*)
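Goldacre's coin analogy is easy to make concrete. Here is a minimal simulation in Python (the function names and trial counts are ours, purely for illustration): flip a fair coin many times, "publish" only the heads, and the published record looks perfect even though the true rate is 50%.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def run_trials(n_trials=100, flips_per_trial=50):
    """Goldacre's analogy: each 'trial' is 50 fair coin flips.
    If only the heads (positive results) are reported, the published
    record looks as if the coin always comes up heads."""
    all_flips, reported = [], []
    for _ in range(n_trials):
        flips = [random.random() < 0.5 for _ in range(flips_per_trial)]
        all_flips.extend(flips)
        # selective reporting: only the heads make it into print
        reported.extend(f for f in flips if f)
    return all_flips, reported

all_flips, reported = run_trials()
print(f"true heads rate:     {sum(all_flips) / len(all_flips):.2f}")  # about 0.50
print(f"reported heads rate: {sum(reported) / len(reported):.2f}")    # exactly 1.00
```

The reported rate is 1.00 by construction, which is precisely the point: without knowing how many flips (trials) happened in total, the selective record is uninterpretable.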
You are a stakeholder as a taxpayer, and as a patient as well. Physicians are understandably not usually very sophisticated readers of highly technical scientific results, which scientists know very well; in any case, they get home too exhausted from actual work to do more than grasp the bottom line of papers in journals like The Lancet. Have some sympathy for them!
Too big to fail?
The piece in Science tells a specific story of good intentions but egregious waste. The "National Children's Study" was funded 12 years ago to collect data on environmental influences on health. The study has already spent $1 billion -- billion! -- and has yet to get under way. And when it does, it is likely to be so compromised that results will be much less useful than they might have been in a better organized and conducted study.
The original idea was to follow 100,000 children for 25 years, at a total cost of $2.7 billion. Pregnant women were to be identified, and their children followed to document toxic exposures and health effects from pregnancy on. (Whether this would have been money well spent even at the time is a valid question, but not the one we're addressing here.)
Then the bureaucracy grew, and became large and unwieldy; study design got reworked, including whom to sample, how to sample them, and what questions to ask. The study was broadened to include 28 different hypotheses, such as whether video games cause violence and whether genes affect response to environmental toxins.
The study "became the vehicle for most hypotheses related to children's health," says Jonathan Samet, an epidemiologist at the University of Southern California in Los Angeles and an adviser at the time. But that was a problem, says epidemiologist Lynn Goldman of George Washington University in Washington, D.C., who, as an EPA official in the 1990s, helped conceive the project: "It was a tabula rasa. Whatever you wanted it to be, it would be that."

But now, abruptly, the 40 university-led sites that have spent millions on setting up and preparing to interview subjects and collect samples are being closed.
Large contractors are taking over the pilot work. The study's overseers at NIH's National Institute of Child Health and Human Development (NICHD) say that to contain costs they had no choice but to pull the plug and start fresh. But their still-evolving new plan is being challenged.

The NCS is still spending $3 million every week, and yet, 12 years after it was first funded, no data have been collected. It's quite possible that what is finally collected will be so compromised by having been designed by committee that it will be, at best, hard to interpret. Indeed, the current director has announced that the study will no longer be hypothesis-driven (that is, nobody will have to state any idea about what they think is out there) but will be pure data collection instead, which worries some investigators -- even if hypothesis-free studies are all the rage.
This seems to be a case of a study that's too big to fail. Francis Collins, head of the NIH, the parent funding agency, says, according to the piece in Science, that
The $1 billion spent so far was worth it... "You would not want to undertake this without being really sure that the model was going to work."
We can speak from decades of direct professional, as well as indirect, experience: epidemiologists are smart enough to know that bigger and longer is safer. Like AIG, too big to terminate. NIH program officers don't want to admit to millions (or billions) in lost funds. NIH leadership doesn't want Congress to see such things -- even in normal times, much less in our tight funding era! Keep the cuttable safely out of sight!
The track record of big, bigger, biggest, long, longer, longest epidemiological studies is now quite extensive. It shows clearly that the approach may find some things of importance, but even then some of those findings are often later shown (sometimes by the continuation of the same study) to have been incorrect, inaccurate, or overstated. There are well-known inherent biases toward false positives in huge, hypothesis-free studies. Often the studies' primary and most useful findings were made quickly, and their continuation should provide textbook illustrations of diminishing returns. Strong epidemiological effects are relatively easy to find, without requiring elephantine studies. But the academic welfare system simply won't fess up.
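The false-positive bias of hypothesis-free studies is just the arithmetic of multiple testing. A caricature in Python (the numbers are ours, for illustration only): test 1,000 exposures that have no real effect on an outcome, each at the conventional 5% significance level, and dozens of them come out "significant" anyway.

```python
import random
import statistics

random.seed(1)  # fixed seed for reproducibility

def one_null_test(n=50):
    """Compare two groups drawn from the SAME distribution (no real
    effect); return True if a crude two-sided z-test on the difference
    of means crosses |z| > 1.96 (nominal alpha = 0.05)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs(diff / se) > 1.96

# 1,000 exposure-outcome "hypotheses", none of them real
hits = sum(one_null_test() for _ in range(1000))
print(f"'significant' findings out of 1000 null hypotheses: {hits}")
# roughly 50 (about 5%) -- every one a false positive
```

Scan enough hypotheses without correcting for the number scanned and "findings" are guaranteed; which is one reason the headline results of mega-studies so often fail to hold up later.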
Is peer review the best we can do?
Remember the arsenic-based life paper published in Science in 2010? The NASA-funded study claimed that bacteria replaced the phosphorus in their cells with arsenic, which, if true, would have meant that fundamental properties of life as we know it don't always hold.
The study was subsequently roundly discredited, as pretty much everyone agrees. Several biochemists tried to replicate the work and could not; their results were also published in Science (here and here). The reviews of the original paper were released to USA Today in January through the Freedom of Information Act and examined by, well, let's not say peers, but experts in the field, and they show how inadequate a system peer review can be. The three reviewers pretty much missed the essential problems with the biochemistry and bought the authors' claims, outlandish as they were. As the Feb 2 story in USA Today reports, the reviewers wrote:
"The results are exceptional," said Reviewer 1.
"It's a pleasure to get a well-conceived and carried-out study to review," said Reviewer 2.
"Reviewing this paper was a rare pleasure," said Reviewer 3, adding later on: "Great job!"

Apparently Reviewer 2 did raise some concerns about the biochemistry, but not enough to halt publication or to require additional evidence.
This is an example of egregiously poor reviewing, it's true, but reviewers overlook real problems all the time. They (we) are often already overwhelmed; the pace and volume of publication have heated up to the level of a nuclear reactor, so we must review in haste; and we often don't have the expertise to catch the depth and extent of scantily explained and/or highly technical methods, despite being 'peers'. This is true for grant reviews as well. Surely anyone who has had a grant reviewed has stories of reviewer comments showing how poorly someone understood the proposal (or whether they even read it beyond the abstract). And of course a reviewer doesn't want to pan a study proposing an approach that s/he him/herself is funded for!
And these problems with peer review don't even touch on the rewards of not rocking the boat, or the reluctance to accept innovative methods or ideas.
Honest malfeasance--truth or oxymoron?
So, scientific malfeasance can be completely honest, if you consider it malfeasance to spend large amounts of taxpayer money on studies that are poorly designed and carried out, or to ask superficial questions that, even if answered, contribute little. And we do. These kinds of problems can be blamed in part on the System -- universities dependent on overhead from grants, careers dependent on those grants, and peer review itself. It's a you-scratch-my-back-and-I'll-scratch-yours kind of system, in which it's to everyone's advantage not to challenge the status quo. Except the innovator's.
Open access and open review, such as what happens at the likes of the arXiv website, or PLoS ONE, look more and more appealing. But that won't fix the whole system, including the problem of too-big-to-terminate funding.
*Goldacre and others are campaigning to make the release of all clinical trials data mandatory. You can sign their All Trials Release petition here.