Monday, August 30, 2010

Trust in science

The NYTimes has been carrying articles about the investigation of Marc Hauser, a behavioral psychologist at Harvard, who has been accused of fabricating data in published research papers (here, here, and here, if you're not totally sick of the subject yet; the story is also being covered by Nature and Science, if you subscribe, and elsewhere).  The situation had apparently caused uneasiness among his students and collaborators for years until someone finally blew the proverbial whistle.

We don't know anything other than what's in the news about this case, but it does serve as a reminder of how fragile the underpinnings of science can be.  Everything we do in science depends heavily on the work of others: the equipment and chemicals we use, the published literature, laboratory procedure manuals, the recorded provenance of the samples we use, and much else.  No matter how important your work is, you still draw on this past legacy even just to design what you are going to do next, including the very questions you choose to ask.

Blatant fraud in science seems to be very rare indeed.  But in our self-righteous feelings at the discovery of someone who pollutes the commons, we do have to be aware of where truthfulness lies, and where it doesn't.

Every experiment involves procedures, protocols, data collection, data analysis, and interpretation.  It usually takes a crew of students, technicians, faculty, and others to do a project, and we often need collaborative teams of specialists.  They can't all know everything the others are doing, and can't all know enough to judge the others' work.  We can replicate others' work only to a very limited extent.  We have to rely on trust.

Everyone is fallible.  Study designs are rarely perfect, their execution usually has areas in which various kinds of mistakes can be made, and those mistakes are very hard to catch.  We all know this, and thus have to judge carefully the studies we see, but we tend to view sources of error as simply random, and usually minor or easily caught: minor inefficiencies in science, and reasons for quality control.

But that is a mistaken way to view science, because science, like any human endeavor, involves integrity.  Aspects of many or even most studies involve shadings that can alter the objective facts.  The authors have a point of view, and they may color their interpretations accordingly, telling the literal truth but shaping it to advance their theories.  Very unusual or unrepresentative observations are often discarded, and barely mentioned if at all in the scientific report (a practice justified by calling them 'outliers').  Reconstructions, like assembling fossil fragments into a hip bone or a skull, are fallible and may reflect the expectations or evolutionary theories of the paleontologist involved.

These are not just random errors; they are biases in science reporting, and they are ubiquitous.  They are, in fact, often things the investigators are well aware of.  Conflicting work, or work by an investigator's rivals or competitors, may go uncited in a paper, for example.  Cases may be presented in ways that don't cover all that's known, because of the need to get the next grant.  Negative results may not be published.  Some results involve judgment calls, and these can lie in the eye, and the expectations, of the observer.  Investigators are not always oblivious to these things, and reported results are not always as honest as we'd like to believe: while not outright lies, they may shade the known truth.

So is intentional fabrication of data worse, because it is more complete?  Clearly yes: we have to constrain our tendencies to promote our own ideas, rigidly prevent the making up of data, and preserve as much as possible our ability, at least in principle, to replicate an experiment.  But we are perhaps more tolerant than we should be of the shadings of misrepresentation that do exist, especially those of which investigators are aware.

We don't know whether Dr Hauser did what he's accused of.  But we do feel that if the accusations are true, he should be prevented from gainful employment by any university.  We have to hold the line, strictly, on outright fraud.  It's simply too great a crime against all the rest of us.

But at the same time, we need to recognize both the central importance of trust and truthfulness in science and the range of ways in which truth can be undermined.  Fortunately, most science seems to be at least technically truthful.  But our outrage against outright fraud should be tempered by knowledge of the many subtle ways in which bias and dissembling, along with honest human error, are part of science, and that must lead us to question even the past work on which all of our present work rests.


Jennifer said...

What will happen if we lose our trust in science? I suppose the same thing that happens when we lose our trust in government or health care... We try to be more self-reliant and depend more on our own intuition - for better or for worse. Like the belief my mother-in-law used to follow: "don't confuse me with facts, I already have my opinion". This article would just confirm her belief in that. If you can't trust the facts, why bother changing your ideas? Also, the "facts" aren't always facts - depending on where science is at the time.

Texbrit said...

Tell me it ain't so! If I lose my faith in science, what else is left for me to consider as my religion? We found out God was a hypocrite (if not a liar)...and now scientists too?

Ken Weiss said...

We know that science is very successful in many areas, especially in technology, where things either work or they don't (whether or not we really understand the reasons).

The non-blatant issues we raise here apply to complex questions where, perhaps especially these days, there are many pressures to 'produce'.

We're not saying that all science is bad, or dishonest. Not at all! But we are saying that we're all only human, we hunger for success, and we see things through those lenses. We're in such a system these days that many scientists are well aware of the truth-shaping that goes on and that they do. But some of it is done unawares, perhaps more so than in the less pressured past, when science was more the playground of the idle wealthy.

One of the first to recognize this kind of thing in a systematic way was Ludwik Fleck, who in the 1930s wrote a book about how syphilis was approached and a test for it developed. The most famous later presentation of similar ideas was Thomas Kuhn's book about scientific 'revolutions.'

These works changed the nature of the history and philosophy of science, from studies of how we discover the objective facts of nature to studies of how our attempts at such discoveries are part of a social fabric that affects what gets done and how it's interpreted.