Thursday, November 19, 2009

Mammography: Grim tales of real life.

The use of x-rays to detect breast cancer, known as mammography, started around 1960. The idea was that x-rays could give an in-depth picture of the breast that would be superior to palpation for detecting small tumors that had not yet become obvious or symptomatic. It seemed like a very good idea that could save many lives, and became not just widespread but formally recommended as part of preventive care.

This was based on the belief, or perhaps even dogma, that tumors are 'transformed' cells that are out of control and will continue dividing without the normal context-specific restraint. Tumors induce vascularization that nourishes their cells, and eventually cells flake off into the blood or lymph systems, to be carried along to other sites where they lodge, spreading the tumor (this is called metastasis). If anything, treatment or simple competition would lead this disseminating clone of transformed cells to gain an increasing evolutionary advantage over the woman's (or, in much rarer instances, the man's) normal tissue: tumor cells would continue to accumulate mutations at the regular or even an accelerated rate, giving them even further growth advantage.

Sometimes tumors seemed to regress, but this was difficult to explain, and often it was thought that perhaps the initial diagnosis had been wrong. If the tumor had escaped immune destruction when it was only one or a few cells in size, what could later make it regress?

Thus arose the general dogma in cancer biology that the earlier a tumor was caught, the less likely it was to have spread. That also meant the earlier in life one was screened, the better: local surgery could then cure the disease.

But there was a problem: the same x-radiation used to detect the density differences between tumor and normal tissue is also a very well-known mutagen and cause of cancer!

Worse, the more actively cells are dividing, the more liable they are to mutation, and thus to passing mutations on to an expanding line of daughter cells in the tissue. Since breast tissue proliferates with every menstrual cycle, pre-menopausal women would be particularly vulnerable to iatrogenic carcinogenesis. Yet the idea was that earlier screening was better!

Even further, early onset cases are more likely to be or to become bilateral (affecting both breasts) or multiclonal (multiple independent tumors), and it was suspected and is now known that some of this, at least, is due to inherited susceptibility mutations (in BRCA1 and BRCA2 and a few other genes). These mutations put a woman at very high risk, so earlier and more frequent screening--but higher total radiation doses!--could be important.

Especially after the atomic bombing of Japan in World War II, the subsequent fallout from bomb tests and nuclear reactors, and the proliferation of diagnostic x-rays, many extensive studies were done to document the risk; chest x-rays used in routine tuberculosis screening, for example, were shown to raise the risk of cancers, including breast cancer.

So, to screen or not to screen? The obvious answer to this dilemma was a grim cost-benefit analysis: how many cancers are detected and cured versus how many are caused by mammographic screening? Even grimmer, this could be evaluated by age, so that recommendations could be made based on a judgment as to how favorable the age-specific balance between cause and cure was. And there's more: radiation-induced carcinomas take years to develop before they appear as clinically detectable tumors, so evaluating and attributing risk was (and is) not easy.
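To make the shape of that calculation concrete, here is a minimal sketch in Python. Every number in it is a hypothetical placeholder, not a real epidemiological estimate; the point is only the structure of the trade-off: for each age band, weigh cancers detected and cured against cancers induced by the screening radiation itself.

```python
# Hypothetical illustration of an age-specific screening cost-benefit
# calculation. None of these numbers are real estimates; they only
# show the structure of the trade-off described above.

# Per 100,000 women screened in each age band:
# (cancers detected and cured, cancers induced by screening radiation)
HYPOTHETICAL_RATES = {
    "40-49": {"detected_and_cured": 50,  "radiation_induced": 30},
    "50-59": {"detected_and_cured": 120, "radiation_induced": 25},
    "60-69": {"detected_and_cured": 180, "radiation_induced": 20},
}

def net_benefit(age_band, rates=HYPOTHETICAL_RATES):
    """Cancers cured minus cancers caused, per 100,000 screened."""
    r = rates[age_band]
    return r["detected_and_cured"] - r["radiation_induced"]

for band in HYPOTHETICAL_RATES:
    print(f"{band}: net benefit {net_benefit(band):+d} per 100,000")
```

Even this toy version hides the hard part noted above: a radiation-induced carcinoma shows up years after the dose that caused it, so the "radiation_induced" column for any age band can only be estimated long after the fact.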

Breast cancer is unfortunately quite common, but the differences being considered, among many additional variables known and unknown, are small. That means very large, long-term studies are needed even to come to a tentative rational ('evidence-based') conclusion. The result was recommendations of occasional mammograms for women in their 40's, with more frequent screens in their 50's and beyond.
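Why must the studies be so large? A rough back-of-the-envelope sketch using the standard normal-approximation sample-size formula for comparing two proportions makes the point; the incidence figures here are invented purely for illustration.

```python
from statistics import NormalDist
import math

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided test of the
    difference between two proportions (normal-approximation formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Invented figures: telling a 0.50% long-term cancer-mortality rate
# apart from 0.55% takes enormous trial arms.
print(n_per_arm(0.0050, 0.0055))  # ~330,000 women per arm
```

The smaller the difference between the two outcomes, the larger the denominator shrinks and the larger the required study grows, which is exactly the bind screening trials are in.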

This made sense...until a few studies recently began to appear with curious results. Several studies showed that the number of cancers in women who had not been screened was lower than in women who had been. How can this be? The answer appears to be that screening leads to detection, reporting, and treatment of tumors that would eventually disappear on their own. So screening led to interventions of various types, some rather grim in themselves, with their associated cost and trauma, in a substantial fraction of cases that would have gone away without any treatment.
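One way to see how screening can inflate the cancer count is a toy simulation. The parameters below (tumor rate, fraction that would regress untreated) are made up purely for illustration; the logic is that a screened arm counts every tumor it finds, while in an unscreened arm the self-regressing tumors never surface.

```python
import random

random.seed(42)

N = 100_000           # women per arm (hypothetical)
TUMOR_RATE = 0.01     # chance of developing a small tumor (made up)
REGRESS_FRAC = 0.25   # fraction that would regress untreated (made up)

def count_diagnosed_cancers(screened):
    cancers = 0
    for _ in range(N):
        if random.random() >= TUMOR_RATE:
            continue  # no tumor develops
        regresses = random.random() < REGRESS_FRAC
        if screened:
            cancers += 1      # screening catches it before it can regress
        elif not regresses:
            cancers += 1      # unscreened: only persistent tumors surface
    return cancers

print("screened arm:  ", count_diagnosed_cancers(screened=True))
print("unscreened arm:", count_diagnosed_cancers(screened=False))
# The screened arm reports more 'cancers' even though the underlying
# biology in the two arms is identical: the excess is overdiagnosis,
# tumors that would have disappeared on their own.
```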

The same has been found recently in PSA testing of men for prostate cancer, so it's not a fluke of the study design. Scars of remitted tumors have been found, showing clearly that they regressed without diagnosis or treatment.

So now a panel of experts has recommended backing off and screening less often (except for those who, in a grim kind of good luck, know they carry a high-risk mutation and hence need to be checked carefully, and often, where early detection can more clearly be effective).

Now if that isn't 'evidence', what is? Yet this is controversial, because it goes against accepted practice. Wednesday's NY Times reports that some physicians don't plan to change their recommendations (what will insurance companies--our most noble citizens, and the entities that will actually drive this decision--do?). The Secretary of Health and Human Services also backed away from the new report. This is curious to say the least, and relevant, of course, to the notion of 'evidence based' medicine that we discussed in a recent post, and to why we think the notion of evidence is actually rather slippery.

This strikes close to home for many of us who have very close relatives who have died of breast cancer. For us, research on this subject could hardly be more important. If you're a young woman, you face these grim or even terrifying choices. But in real life, rather than in fairy stories, there's no easy answer.

