The NYTimes has been carrying articles about the investigations of Marc Hauser, a behavioral psychologist at Harvard, who was accused of fabricating data in published research papers (here, and here and here, if you're not totally sick of the subject yet, but the story is also being covered by Nature and Science, if you subscribe, and elsewhere). This had apparently caused uneasiness among his students and collaborators for many years until finally someone blew the proverbial whistle.
We don't know anything other than what's in the news about this case, but it does serve as a reminder of how fragile the underpinnings of science can be. Everything we do in science depends heavily on the work of others: the equipment and chemicals we use, the published literature, laboratory procedure manuals, the recorded provenance of the samples we use, and much else. No matter how important your work, you still depend on this past legacy even to design what you are going to do next, including the very questions you choose to ask.
Blatant fraud in science seems to be very rare indeed. But in our self-righteous feelings at the discovery of someone who pollutes the commons, we do have to be aware of where truthfulness lies, and where it doesn't.
Every experiment involves procedures, protocols, data collection, data analysis, and interpretation. A project usually involves a crew of students, technicians, faculty, and others, and we often need collaborative teams of specialists. They can't all know everything the others are doing, and can't all know enough to judge the others' work. We can replicate others' work only to a very limited extent. We have to rely on trust.
Everyone is fallible. Study designs are rarely perfect, and their execution usually leaves room for mistakes of various kinds that are very hard to catch. We all know this, and thus have to judge carefully the studies we see, but we tend to view sources of error as simply random, and usually minor or easily caught: minor inefficiencies in science, and reasons for quality control.
But that is a mistaken way to view science, because science, like any human endeavor, involves integrity. Many or even most studies involve shadings that can alter the objective facts. The authors have a point of view, and they may color their interpretations accordingly--telling the literal truth, but shaping it to advance their theories. Very unusual or unrepresentative observations are often discarded, and barely mentioned if at all in the scientific report (a practice justified by calling them 'outliers'). Reconstructions--like assembling fossil fragments into a hip bone or skull--are fallible and may reflect the expectations or evolutionary theories of the paleontologist involved.
These are not just random errors; they are biases in science reporting, and they are ubiquitous. They are, in fact, often things investigators are well aware of. Work that disagrees with a paper's conclusions, or work by the investigator's rivals or competitors, may simply not be cited. Cases may be presented in ways that don't cover all that's known, because of the need to get the next grant. Negative results may not be published. Some results involve judgment calls, and these can be in the eye, and the expectations, of the observer. We are not always wholly oblivious to these things, but reported results are not always as honest as we'd like to believe--not outright lies, but authors may be tempted to shade the known truth.
So is intentional fabrication of data worse, because it's more complete? The answer is clearly yes: we have to constrain our tendencies to promote our own ideas, rigidly prevent the making up of data, and preserve as much as possible our ability, at least in principle, to replicate an experiment. But we are perhaps more tolerant than we should be of the shadings of misrepresentation that exist, especially those of which investigators are aware.
We don't know whether Dr Hauser did what he's accused of. But we do feel that if the accusations are true, he should be barred from employment at any university. We have to hold the line, strictly, against outright fraud. It's simply too great a crime against all the rest of us.
But at the same time, we need to recognize both the central importance of trust and truthfulness in science and the range of ways in which truth can be undermined. Fortunately, most science seems to be at least technically truthful. But our outrage against outright fraud should be tempered by knowledge of the many subtle ways in which bias and dissembling, along with honest human error, are part of science--and that must lead us to question even the past work on which all of our present work rests.