"Evidence-based medicine" is much bruited about these days, but was medicine ever not evidence-based? And aren't hunches still an important part of the art of medicine? Perhaps the question in our era is whether what is counted as evidence, even by those who mean data formally gathered by science, has gotten easier to evaluate? In this regard, several stories about the efficacy of various drugs and even vitamins have showed up in the news this week. One is a report of a study of the effectiveness of cholesterol-lowering medication vs. vitamin B12, published in the New England Journal of Medicine, and the other a report of the association of vitamin D insufficiency with, well, just about everything. After Holly's post last week about the effectiveness, or not, of her homeopathic treatment, it seems fitting to look at these reports in a little more detail.
The common criticism leveled by scientists at alternative medicine is that it is not evidence-based, which presumably means that formally designed studies have not shown a statistically significant benefit, that no mechanism is known, or that no studies have been done at all. Or, in practice, it's that doctors wildcat--treat patients their own way for their own reasons--rather than following some centrally specified protocol.
It is at least slightly strange that placebo effects, even when real, are not credited as a form of medicine because they are not rigorously understood--e.g., in 'dose-response' terms. In any case, evidence-based medicine (EBM) is defined in Wikipedia as an approach that
aims to apply the best available evidence gained from the scientific method to medical decision making. It seeks to assess the quality of evidence of the risks and benefits of treatments (including lack of treatment).

EBM came of age in the 1980s, as an attempt to standardize medical care. It's probably not entirely coincidental that this was about the same time that reliance on expensive technological diagnostic tools was increasing and that malpractice lawsuits, and settlement amounts, began to rise--though we have no evidence of actual cause and effect here. A doctor is expected to follow locally accepted practice, but that criterion itself requires formalized documentation.
Science as we know it is a method that our society has evolved over the past 200-300 years. It is based on replication and on the attempt to isolate individual variables (e.g., by averaging over all other variables) and to show that their variation is causally associated with variation in target outcomes such as disease.
In former times different kinds of evidence were accepted. For example, it was part of Aristotle's worldview that, in effect, our minds were built to have a true picture of the nature of the real world, so that deductive reasoning had some empirical cogency. We don't accept that today. And, spiritual experiences or opinions, not being subject to the same kind of technical scrutiny, are not considered evidence.
We're not arguing that informal experience should count as evidence, or that mystical water-cures should replace surgery. Novocaine alone is a strong enough argument for western science!
But people do tend to accept views based on current study results rather uncritically, even though history shows how often such views turn out to be wrong--sometimes diametrically wrong.
We can't know the true truth until we know it, and knowledge may always be incomplete. But how do we know how to implement 'evidence-based' criteria, if this does not at its core rest on subjective judgment about 'evidence'? If it is that we should base action on what we know today from a particular kind of evidence (that is described as 'scientific'), that is perfectly defensible in principle as simply being the best we can do. But even then, the evidence is not so easy to interpret.
Evidence-based decisions?
Ezetimibe and niacin
The current cholesterol study getting so much play reports that both treatments being tested reduce LDL (the 'bad' cholesterol), but that the (pricey) medication under review, ezetimibe, contained in the cholesterol meds Zetia and Vytorin, isn't as good at reducing arterial plaque as the cheaper Niaspan (niacin, or vitamin B3), which also raises HDL (the 'good' cholesterol). Because millions of people control their cholesterol levels with ezetimibe drugs, this is of some concern.
But, and there are always buts in these stories because perfect studies are few and far between, the study was halted early (that is, before all the subjects had completed the course of treatment) because, according to the investigators, it would have been unethical to continue once it became clear that Niaspan was out-performing Zetia. The problem with this, as critics note, is that continuing the study through to its planned conclusion might have led to a different outcome--the study was supposed to follow a design that had been explained, justified, and passed by peer review in the grant proposal, presumably for good reasons. The study was also fairly small--only 208 of the 363 enrolled subjects completed it, which reduces its statistical power. And, perhaps most important in terms of how this study will be evaluated, even though it tells us nothing about the science per se, it turns out that the study was funded by Abbott Laboratories, the maker of Niaspan, and several of the investigators, including the principal investigator, have received thousands of dollars in speaking fees from Abbott. Does this prove that the study is biased? No, but given the funder's vested interest, it's obviously difficult to trust the results.
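To make the power issue concrete, here is a minimal simulation--with a made-up standardized effect size and a simple two-arm split, since we don't have the trial's actual parameters--showing how the chance of detecting a real difference drops when 363 enrolled subjects shrink to 208 completers:

```python
# Minimal power simulation: how a trial shrinking from 363 enrolled
# subjects to 208 completers loses statistical power. The effect size
# and variability are invented for illustration; they are not values
# from the actual study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effect = 0.3       # assumed standardized treatment difference
n_sims = 5000      # simulated replications of the trial

def power(n_per_arm):
    """Fraction of simulated trials reaching p < 0.05 (two-sample t-test)."""
    hits = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_arm)
        treated = rng.normal(effect, 1.0, n_per_arm)
        hits += stats.ttest_ind(treated, control).pvalue < 0.05
    return hits / n_sims

for n_total in (363, 208):
    print(f"{n_total} subjects ({n_total // 2} per arm): "
          f"power ~ {power(n_total // 2):.2f}")
```

With these invented numbers, the detection probability falls from roughly 80% to under 60%; the exact figures depend entirely on the assumed effect size, but the direction of the loss does not.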
This is the story getting so much play in the press. One wonders why this is, given all the good reasons to doubt it. (A nice laying out of many of those reasons can be found here.) It's tempting to assume that it's primarily because a lot of money is at stake--out of the pockets of patients and insurance companies and Merck and Co., the makers of Zetia, and into the pockets of Abbott Labs--as is the health of millions of people with high cholesterol. But, even without the issues about vested interest, are the results clear cut?
Vitamin D
The latest vitamin D studies suggest that vitamin D deficiency is the cause of everything that ails us--depression, heart disease, diabetes, autoimmune diseases, some cancers, TB, even 'early death'--and more. According to these studies, two thirds of Utahns don't have enough vitamin D, and that's what most studies of vitamin D say about most populations studied. Some researchers even say that 100% of us have insufficient vitamin D--sometimes called 'subclinical insufficiency', meaning that a threshold has been set and most of us don't meet it, whether or not we show any ill health effects.
The Utah study divided a sample of 27,000 people into three categories by their vitamin D levels--high, sufficient, or deficient--and followed them for a year. Those with low vitamin D were much more likely to die, and to suffer from depression, heart disease, and so on.
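For readers who want to see how such a cohort comparison is typically summarized, here is a sketch of a crude risk-ratio calculation. The counts are invented for illustration; they are not the Utah data.

```python
# Crude (unadjusted) risk ratios for a three-category cohort study.
# All counts are invented for illustration; they are not the Utah data.
deaths  = {"high": 40, "sufficient": 60, "deficient": 150}
at_risk = {"high": 9000, "sufficient": 9000, "deficient": 9000}

risk = {group: deaths[group] / at_risk[group] for group in deaths}
baseline = risk["sufficient"]

for group, r in risk.items():
    print(f"{group:>10}: 1-year risk = {r:.4f}, "
          f"risk ratio vs. sufficient = {r / baseline:.2f}")
```

A risk ratio well above 1 in the deficient group is the kind of result the Utah investigators report--but, crucially, a crude calculation like this adjusts for nothing, which is where the trouble starts.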
Now, no one can doubt that vitamin D is essential to our health. Vitamin D deficiency really can cause rickets: the vitamin is involved in calcium uptake, and the weakened, softened bone of rickets is usually the result of inadequate calcium uptake due to low vitamin D. That's been known for a long time, and it's been known for even longer that exposure to the sun can cure it, as can cod liver oil, one of the few dietary sources of vitamin D.
But these were observational studies, which by themselves can't establish cause and effect. Take depression and its purported association with vitamin D deficiency. Our primary source of vitamin D is exposure to the sun, and it's not unreasonable to suppose that people suffering from depression aren't spending a lot of time outdoors soaking up rays. So which comes first, vitamin D deficiency or depression? If we're all deficient, and depression and heart disease and cancer are common, how do we determine actual cause and effect? A prospective study might help--following people whose vitamin D levels are sufficient at the outset, and collecting data on their vitamin D levels and disease experience over time--and in fact it looks like this is the next stage of this study. But even when low vitamin D comes before the cancer, does that make it determinative?
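The reverse-causation worry is easy to demonstrate in a toy simulation. In the sketch below every parameter is invented, and by construction vitamin D has no effect at all on depression--depression simply keeps people indoors, which lowers their vitamin D. An observational snapshot still shows a strong association.

```python
# Toy simulation of reverse causation: depression -> less time outdoors
# -> lower vitamin D, with NO causal arrow from vitamin D to depression.
# Every parameter here is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

depressed = rng.random(n) < 0.15                   # assumed ~15% prevalence
hours_outdoors = rng.normal(2.0, 0.5, n)
hours_outdoors[depressed] -= 1.0                   # depressed people go out less
vitamin_d = 10 * hours_outdoors + rng.normal(0, 3, n)  # D driven by sun alone

# An observational study would still find lower vitamin D among the depressed:
print("mean vitamin D, depressed:    ", round(float(vitamin_d[depressed].mean()), 1))
print("mean vitamin D, not depressed:", round(float(vitamin_d[~depressed].mean()), 1))
print("correlation(vitamin D, depression):",
      round(float(np.corrcoef(vitamin_d, depressed)[0, 1]), 2))
```

The simulated deficiency-depression association is strong, yet supplementing vitamin D in this toy world would do nothing for anyone's mood. Only a design that fixes the time order--randomization, or a prospective cohort--can tell the two causal directions apart.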
Basing decisions on what kind of evidence?
So, what does evidence-based medicine do with this kind of evidence? These studies are just a few among the many yielding evidence that is not at all easy to know what to do with. An underlying problem is that there are too many unmeasured, and hence possibly confounding, variables, along with too many gaps in our knowledge of human biology, to know how to evaluate such studies definitively. That is nobody's fault, and we can only keep working to improve.
In an intersection of New Age and old--and thus bringing up the question of what counts as health care, and what as evidence--the BBC reports that meditation reduces the risk of stroke and heart attack. A nine-year case-control study of its effects was carried out by researchers at the Medical College of Wisconsin and the Maharishi University in Iowa. The study showed that the risk of death, heart attack, and stroke was reduced by 47% in the group who meditated. This study now enters the pool of evidence for evidence-based medicine. It was a state-of-the-art study, yielding statistically significant and potentially reproducible results. Even though it tells us nothing about how meditation works, and might even run counter to how some think western medicine should work, that doesn't matter: demonstrating effect is enough for EBM. Unless, of course, doctors aren't comfortable recommending meditation in spite of the evidence.
Another example of just that, which we'll take up in another post for other reasons, is the question of the right practice for using mammograms to detect breast cancer. Several recent studies have led a government science panel to conclude that, for younger women, the benefits of early detection are not worth the costs of over-treatment, and to recommend less screening for women in their 40s. Yet today the NY Times reports some doctors saying they won't change their screening practices because they are not persuaded:
“It’s kind of hard to suggest that we should stop examining our patients and screening them,” said Dr. Annekathryn Goodman, director of the fellowship program in gynecological oncology at Massachusetts General Hospital. “I would be cautious about changing a practice that seems to work.”

This shows the elusive, at least partly subjective and judgment-based idea of 'evidence' and how to use it. Two criteria are implicitly mixed here: the evidence from formal studies, and the doctor's belief that current practice 'seems to work'. The challenge is to design evaluations that bring these criteria into agreement. What happens with non-traditional medicine is a more complicated question....