We're doing a series of posts on the subject of 'evidence-based' medicine, prompted by a flurry of items in the news. They have to do with three cancer-screening tests: PSA tests for prostate cancer in men, and Pap smears (cervical cancer) and mammograms (breast cancer) in women. (The issues are much, much broader, and of a similar nature for many other things, even things like climate change, where people who strongly proclaim themselves to be evidence-based behave in just the same way.) Here's the latest on the cancer stories from the New York Times.
In all cases, the problem is that early diagnosis can lead to treatment of tumors that would either never reach clinical significance (prostate cancers in particular) or would regress on their own. The treatments have costs, including the trauma of the diagnosis in the first place, the financial costs to the individual and to the health-care system, and the trauma of surgery or of other treatments like radiation or chemotherapy.
But people seem to be quickly deciding to carry on as usual anyway. Why? Doctors will presumably order tests at least in part to avoid lawsuits. Patients will presumably continue because the fear of cancer outweighs the more abstract and distant fear of the treatments and the diagnostic tests, because insurance may cover the costs, and because they still believe screening to be best. And it takes a hell of a lot of guts to say "don't bother to check me," or "well, let's not intervene in this early tumor right now; let's wait and see what happens."
But science is about evidence. The purpose of statistical significance tests (which these studies largely rest on, since risks, costs, and benefits are always based on statistics) is to inform decision-making -- that's the entire rationale behind such studies. So if we're really as evidence-based as we think we are, why not respond to the new data by changing our behavior?
There are many reasons, but in a nutshell, clearly we are not just evidence-based. We choose a significance level (the famous p value) in an essentially arbitrary, conventional way, and that doesn't mean we must follow the result rigidly. But if we don't follow it, why not?
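To make the role of that conventional cutoff concrete, here is a toy sketch of the kind of two-proportion significance test a screening trial ultimately rests on. All the numbers are invented for illustration; the point is that the very same effect size falls on one side or the other of the p &lt; 0.05 line purely as a function of sample size, which is part of why the cutoff is a convention rather than a law of nature.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: returns (z statistic, two-sided p-value)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical trial: 30 vs. 44 deaths per 10,000 in screened
# vs. unscreened groups (a ~32% relative reduction, numbers invented).
_, p_small = two_proportion_z(30 / 10000, 10000, 44 / 10000, 10000)
_, p_large = two_proportion_z(30 / 10000, 30000, 44 / 10000, 30000)

# Same effect size in both cases, but only the larger trial
# crosses the conventional 0.05 threshold.
print(p_small > 0.05)  # fails the 0.05 cutoff: "not significant"
print(p_large < 0.05)  # passes the 0.05 cutoff: "significant"
```

The decision rule is the whole point: a study that "informs decision-making" does so by comparing p to a threshold someone chose, and nothing in the arithmetic itself tells you that 0.05, rather than 0.01 or 0.10, is the right place to draw the line.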
Fear, for example, is part of the equation. Fear of disease. Fear of lawsuits. Fear of the consequences of a missed diagnosis. Fear of changing how we do things. These outweigh the fear of wrong decisions that can never be proven wrong (once you carve out a tumor that would have regressed on its own, you can never know that, so you can always feel you did the right thing).
Of course, when studies regularly come to inconsistent, differing conclusions on the same question, as we see in this field, there is also the subjective matter of deciding which study to believe. Is the most recent one definitive? It is not easy to decide which bit of evidence is the evidence that counts.
Relevant to this is that, in none of these cases, as far as we can tell, do those who want to stand pat challenge the studies themselves or their p values. That would be a different matter, and it sometimes happens, most properly when a single new study challenges accepted wisdom. But that's not the case here.
The 'scientific method' is not nearly so straightforward as we tend to say in our classrooms and in the media. It's nobody's fault. But decision-making, even by scientists, is substantially subjective. It would be better if we recognized the problem. How we might then change our criteria for change is anybody's guess, but it might help.