Tuesday, February 22, 2011

Why? Risk and decisions about risk

Breast cancer isn't even that rare, unfortunately, but there is apparently still a lot that isn't clear about best practice when it comes to diagnosis and treatment.  We have posted earlier about stories on whether mammograms are worth the risk--whether the radiation will induce too many cancers relative to the number detected, or whether they mainly detect tumors that would regress on their own.  A positive mammogram leads to some sort of follow-up, and this has its own risks and morbidities.

A story about a new study reports that more extensive 'open' biopsies are being done far too often, rather than less costly and less traumatic needle biopsies.  If a tumor is detected, surgery is usually required, but in the former case this means two surgeries, which the story says is considerably harder on the patient than a needle biopsy followed by one surgery.

There was a recent related story saying that lymph node biopsy or removal (in the armpit area through which breast cancer often spreads when it metastasizes) was not worth doing, as judged by the subsequent course of the disease.  And another story claims, at least, the discovery of yet another breast-cancer-related gene--another type of test which, depending on the risk estimates it yields, will then lead to further decisions about further tests or treatment.

We know that when an absolute risk is very small, and must be assessed from aggregate results over very large numbers of instances, it is difficult to make, much less evaluate, policy.  In the case of radiation, we can estimate the per-dose effect at high doses, but must extrapolate the dose-response curve to make a guess at what the low-dose risk, if any, might be.  This is the case with mammograms, and even more so with exposures of radiation workers, dental x-rays, CT scans, and the like.
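
To make that extrapolation concrete, here is a minimal sketch in Python of what a linear no-threshold extrapolation looks like; the dose and risk numbers are made up purely for illustration and are not taken from any actual study.

import numpy as np

# Hypothetical excess-risk observations at high doses (all values illustrative)
high_doses = np.array([500.0, 1000.0, 2000.0])   # dose in mGy
excess_risk = np.array([0.005, 0.011, 0.019])    # excess lifetime risk

# Least-squares slope through the origin: risk = beta * dose
beta = np.sum(high_doses * excess_risk) / np.sum(high_doses ** 2)

# Extrapolate that slope down to a mammogram-scale dose (assumed ~0.4 mGy)
low_dose = 0.4
print(f"fitted slope: {beta:.2e} excess risk per mGy")
print(f"extrapolated risk at {low_dose} mGy: {beta * low_dose:.2e}")

The straight line is the whole point of contention: at low doses the true curve might flatten out at a threshold, bend downward, or even bend upward, and the aggregate data are too sparse to tell.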

The same issues arise in GWAS or in efforts to detect natural selection at the gene level.  Very small effects are difficult to detect, evaluate, or prove.  We usually try to do so with statistical significance criteria, but often even large samples are not adequate because too many sources of variation impair the ability to convincingly detect the effect.  Things that are real but small can go undetected, and tests for them are vulnerable to interpreting fluke positives as true positives.
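
The sample-size problem can be illustrated with a small simulation (a sketch only, with arbitrary illustrative numbers, not a model of any particular GWAS): compare many pairs of groups where the true effect is tiny, and many where it is zero, and count how often each crosses the usual p < 0.05 line.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_tests = 500, 1000
true_effect = 0.05   # a shift of 0.05 standard deviations -- a very weak signal

hits_real = hits_null = 0
for _ in range(n_tests):
    # a real but tiny difference between groups
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_effect, 1.0, n_per_group)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        hits_real += 1
    # no difference at all
    c = rng.normal(0.0, 1.0, n_per_group)
    d = rng.normal(0.0, 1.0, n_per_group)
    if stats.ttest_ind(c, d).pvalue < 0.05:
        hits_null += 1

print(f"detection rate for the tiny real effect: {hits_real / n_tests:.2f}")
print(f"'significant' results when there is no effect: {hits_null / n_tests:.2f}")

With these numbers the real effect is detected only a small fraction of the time, while the no-effect comparisons still come up 'significant' about 5% of the time; run thousands of such tests, as a genome scan does, and the flukes and the weak true signals become hard to tell apart.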

These are challenging issues for science, because we're very good at picking up strong signals that behave well relative to statistical evaluation criteria (like significance testing).  That ability itself may lure us to try--and expect--to be successful with very weak effects if we can but collect huge enough samples.  At present, it's not working very well: at least, reaching consensus is not easy.

And the problems apply even to breast cancer which, in this context, is not even that rare.
