Friday, January 29, 2010

Expert advice

Jerome Groopman, a Harvard Medical School physician and writer, has an interesting piece in the February New York Review of Books. Called "Health Care: Who Knows Best?", it discusses some of the recommendations built into the health care legislation now stalled (dead?) in the US Congress.
One of the principal aims of the current health care legislation is to improve the quality of care. According to the President and his advisers, this should be done through science. The administration's stimulus package already devoted more than a billion dollars to "comparative effectiveness research," meaning, in the President's words, evaluating "what works and what doesn't" in the diagnosis and treatment of patients.
The idea is that science can determine what works and what doesn't, and the government can then mandate that doctors and hospitals using these 'best practices' get more money than those that don't. As Groopman points out, this is in part paternalistic, in (large) part driven by insurance industry interests, and it is also bad science.

Over the past decade, federal "choice architects"—i.e., doctors and other experts acting for the government and making use of research on comparative effectiveness—have repeatedly identified "best practices," only to have them shown to be ineffective or even deleterious.
For example, Medicare specified that it was a "best practice" to tightly control blood sugar levels in critically ill patients in intensive care. That measure of quality was not only shown to be wrong but resulted in a higher likelihood of death when compared to measures allowing a more flexible treatment and higher blood sugar. Similarly, government officials directed that normal blood sugar levels should be maintained in ambulatory diabetics with cardiovascular disease. Studies in Canada and the United States showed that this "best practice" was misconceived. There were more deaths when doctors obeyed this rule than when patients received what the government had designated as subpar treatment (in which sugar levels were allowed to vary).

Why is it bad science? There are several answers. First, everyone is different -- just as single genes don't explain a disease in everyone who has it, a single practice doesn't work equally well in everyone with the same condition. Second, there are many epistemological issues in how 'successful outcomes' are defined and measured.
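A toy simulation makes the first point concrete. The numbers here are invented for illustration and come from no study: suppose a "best practice" helps most patients but harms a sizable minority. The average effect looks like a clear win, which is all a mandate based on average comparative effectiveness ever sees.

```python
import random
import statistics

random.seed(0)

# Hypothetical, purely illustrative numbers: 80% of patients respond well
# (+2 on some outcome scale), 20% are harmed (-3) by the same "best practice".
def simulate(n=100_000):
    effects = []
    for _ in range(n):
        responder = random.random() < 0.80
        effects.append(2.0 if responder else -3.0)
    return effects

effects = simulate()
avg = statistics.mean(effects)
harmed = sum(1 for e in effects if e < 0) / len(effects)

print(f"average treatment effect:   {avg:+.2f}")    # ~ +1.00: looks like a win
print(f"fraction of patients harmed: {harmed:.0%}") # ~ 20%: hidden in the average
```

The point is not these particular numbers but the structure: a positive average is compatible with substantial harm to a subgroup, and a one-size-fits-all rule can't tell the two apart.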

In addition, and perhaps overriding all other issues, is the notion of 'experts' and how we dub them and use them. In a field like, say, calculus or Plato, it is rather easy to envision what an expert is. The body of knowledge, though perhaps extensive or technical, is relatively circumscribed (yes, we know calculus is a big field and not fully worked out, but one can still conceive of an expert mathematician).

In other fields knowledge is so vast, or changeable, or epistemically uncertain, that experts tend to be socially prominent practitioners of a field--medicine, genetics, evolutionary biology, public health. They may know tons about their field, tons more than the people they're asked to advise. But there is too much for anyone to know in detail, and there is too much room for disagreement at levels too close to the policy issues on which the experts are consulted.

Like 'complexity', expert status is hard to pin down, and yet the need for experts is often great. One British investigator evaluates potential experts on their past track records, weighting their current advice accordingly. This may narrow the range of estimates of critical policy parameters by discounting wild estimates from those with less past success. But even that assumes that past success reflects knowledge and skill rather than luck, and that it predicts future success -- a big (and largely untestable) assumption.
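Here is a minimal sketch of that track-record-weighting idea. Everything in it is invented for illustration -- the experts, their error histories, their estimates, and the inverse-mean-error weighting rule; real calibration schemes for expert elicitation are considerably more elaborate.

```python
import statistics

# Hypothetical experts: each has a track record (past absolute errors on
# questions whose answers later became known) and a current estimate of
# some policy parameter. All names and numbers are made up.
experts = {
    "A": {"past_errors": [0.5, 0.3, 0.4], "estimate": 10.2},
    "B": {"past_errors": [2.0, 1.5, 2.5], "estimate": 14.0},
    "C": {"past_errors": [0.2, 0.4, 0.3], "estimate": 9.8},
    "D": {"past_errors": [5.0, 6.0, 4.0], "estimate": 25.0},  # the wild card
}

def weight(past_errors):
    # Inverse of mean past error: the better the track record, the more weight.
    return 1.0 / statistics.mean(past_errors)

weights = {name: weight(e["past_errors"]) for name, e in experts.items()}
total = sum(weights.values())

pooled = sum(weights[n] * experts[n]["estimate"] for n in experts) / total
naive = statistics.mean(e["estimate"] for e in experts.values())

print(f"unweighted mean estimate: {naive:.1f}")   # 14.8, dragged up by expert D
print(f"track-record-weighted:    {pooled:.1f}")  # ~10.7, discounts the wild estimate
```

Notice what the scheme quietly assumes: that the past errors were scored on questions comparable to the current one, and that a low error history reflects skill rather than luck. The arithmetic narrows the range; it cannot validate those assumptions.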

So we face very challenging problems in deciding, as in this case, what the 'best' medical practices should be. We have to constrain uses of the system in some ways, but how to do it is as much a sociopolitical as a scientific decision.
