
Wednesday, March 29, 2017

The (bad) luck of the draw; more evidence

A while back, Vogelstein and Tomasetti (V-T) published a paper in Science in which it was argued that most cancers cannot be attributed to known environmental factors, but instead were due simply to the errors in DNA replication that occur throughout life when cells divide.  See our earlier 2-part series on this.

Essentially the argument is that knowledge of the approximate number of at-risk cell divisions per unit of age could account for the age-related pattern of increase in cancers of different organs, if one ignored some obviously environmental causes like smoking.  Cigarette smoke is a mutagen and if cancer is a mutagenic disease, as it certainly largely is, then that will account for the dose-related pattern of lung and oral cancers.
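The quantitative logic can be illustrated with a toy calculation (my own sketch, not V-T's actual model, with made-up numbers): if each stem-cell division carries some tiny, fixed chance of a transforming replication error, lifetime risk tracks the cumulative number of divisions, which differs enormously between tissues.

```python
# Toy illustration (hypothetical numbers, not V-T's model): assume each
# stem-cell division carries a tiny probability p of a transforming
# replication error.  Lifetime risk then grows with the cumulative number
# of at-risk divisions d as: risk = 1 - (1 - p)^d
p = 1e-9  # hypothetical per-division transformation probability

for tissue, divisions in [("low-turnover tissue", 1e6),
                          ("high-turnover tissue", 1e12)]:
    risk = 1 - (1 - p) ** divisions
    print(f"{tissue}: lifetime risk ~ {risk:.4f}")
```

The point is only qualitative: a millionfold difference in division counts turns a negligible per-division error rate into a near-certainty, with no tissue-specific environmental carcinogen required.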

This got enraged responses from environmental epidemiologists whose careers are vested in the idea that if people would avoid carcinogens they'd reduce their cancer risk.  Of course, this is partly just the environmental epidemiologists' natural reaction to their ox being gored--threats to their grant largesse and so on.  But it is also true that environmental factors of various kinds, in addition to smoking, have been associated with cancer: some dietary components, viruses, sunlight, even diagnostic x-rays if done early and often enough, and other factors.

Most risks associated with agents like these are small compared to smoking, but not zero, and a legitimate objection to V-T's paper might be that it suggests environmental pollution, dietary excess, and so on don't matter when it comes to cancer.  I think V-T are saying no such thing.  Clearly some environmental exposures are mutagens, and one would have to be a really hard-core reactionary to claim that mutations are unrelated to cancer.  Other external or lifestyle agents are mitogens; they stimulate cell division, and it would be silly not to think they could have a role in cancer.  If and when they do, it is not by causing mutations per se.  Instead, mitogenic exposures simply stimulate cell division, which is dangerous if the cell is already transformed into a cancer cell.  But it is also a way to increase cancer by just what V-T stress: the natural occurrence of mutations when cells divide.

There are a few who argue that cancer is due not to mutations but to transposable elements moving around and/or inserting into the genome where they can cause cells to misbehave, or to other, perhaps unknown, factors such as disruptions of tissue organization.

These alternatives are, currently, thought to account for a rather minor fraction of cancers.  In response to their critics, V-T have just published a new multi-national analysis that they suggest supports their theory.  They attempted to correct for the number of at-risk cells and so on, and found a convincing pattern that supports the intrinsic-mutation viewpoint.

This is at least in part an unnecessary food-fight.  When cells divide, DNA replication errors occur.  This is well-documented; indeed, Vogelstein did work years ago showing evidence of somatic mutation--that is, DNA changes that are not inherited--by comparing genomes of cancer cells to normal cells of the same individual, and this has been known in various levels of detail for decades.  Of course, showing that this is causal rather than coincidental is a separate problem, because the fact that mutations occur during cell division doesn't necessarily mean that the mutations are causal.  However, for several cancers the repeated involvement of specific genes, the demonstration of mutations in the same gene or genes in many different individuals, or of the same effect in experimental mice, and so on, is persuasive evidence that mutational change is important in cancer.

The specifics of that importance are in a sense separate from the assertion that environmental epidemiologists are complaining about.  Unfortunately, to a great extent this is a silly debate.  Professional pride and careerism aside, the debate should not be about whether mutations are involved in cancer causation but whether specific environmental sources of mutation are identifiable and individually strong enough, as x-rays and tobacco smoke are, to be identified and avoided.  Smoking targets particular cells in the oral cavity and lungs.  But exposures that are more generic--individually rare, not associated with a specific item like smoking, and unavoidable--might raise the rate of somatic mutation generally.  Just having a body temperature may be one such factor, for example.

I would say that we are inevitably exposed to chemicals and so on that can damage cells, mutation being one such effect.  V-T are substantially correct, from what the data look like, in saying that (in our words) namable, specific, and avoidable environmental mutagens are not the major systematic, organ-targeting cause of cancer.  Vague and/or generic exposure to mutagens will lead to mutations more or less randomly among our cells (though, depending on the agent, perhaps differently depending on how deep in our bodies the cells are relative to the outside world or other routes of exposure).  The more at-risk cells, and the longer they're at risk, the greater the chance that some cell will experience a transforming set of changes.

Most of us probably inherit mutations in some cancer-related genes from conception, and have to await other events (whether mutational or of another nature, as mentioned above).  The age patterns of cancers seem very convincingly to show that.  The real key question here is the degree to which specific, identifiable, avoidable mutational agents can be found.  Resisting that idea seems silly or, perhaps as likely, mere professional jealousy.

These statements apply even if cancers are not all, or not entirely, due to mutational effects.  And, remember, not all of the mutations required to transform a cell need be of somatic origin.  Since cancer is mostly, and obviously, a multi-factor disease genetically (not a single mutation as a rule), we should not have our hackles raised if we find what seems obvious, that mutations are part of cell division, part of life.

There are curious things about cancer, such as our large body size but delayed onset ages relative to the occurrence of cancer in smaller, shorter-lived animals like mice.  And animals of different lifespans and body sizes, even different rodents, have different lifetime cancer risks (some of which may be the result of details of their inbreeding history, or of inbreeding itself).  Mouse cancer rates increase with age, and hence with the number of at-risk cell divisions, but their substantial risk at very young ages, despite many fewer cell divisions (yet similar genome sizes), shows that even the spontaneous-mutation idea of V-T has problems.  After all, elephants are huge and live very long lives; why don't they get cancer much earlier?

Overall, even if correct, V-T's view should not give too much comfort to our 'precision' genomic medicine sloganeers, another aspect of budget protection, because the bad-luck mutations are generally somatic, not germline, and hence not susceptible to Big Data epidemiology, genetic or otherwise, that depends on germline variation as the predictor.

Related to this are the numerous reports of changes in life expectancy among various segments of society, driven by behaviors--most recently, for example, the opioid epidemic among whites in depressed areas of the US.  Such environmental changes are not specifically predictable, not even in principle, and can't be built into genome-based Big Data, or into the budget-promoting promises coming out of NIH about such 'precision'.  Even estimated lifetime cancer risks associated with mutations in clear-cut risk-affecting genes, like BRCA1 mutations and breast cancer, vary greatly from population to population and study to study.  The V-T debate, and their obviously valid point, regardless of the details, is only part of the lifetime cancer risk story.

ADDENDUM 1
Just after posting this, I learned of a new story on this 'controversy' in The Atlantic.  It is really a silly debate, as noted in my original version.  It tacitly makes many different assumptions about whether this or that tinkering with our lifestyles will add to or reduce the risk of cancer and hence support the anti-V-T lobby.  If we're going to get into the nitty-gritty and typically very minor details about, for example, whether the statistical colon-cancer-protective effect of aspirin shows that V-T were wrong, then this really does smell of academic territory defense.

Why do I say that?  Because if we go down that road, we'll have to say that statins are cancer-causing, and so is exercise, and kidney transplants and who knows what else.  They cause cancer by allowing people to live longer, and accumulate more mutational damage to their cells.  And the supposedly serious opioid epidemic among Trump supporters actually is protective, because those people are dying earlier and not getting cancer!

The main point is that mutations are clearly involved in carcinogenesis, cell division life-history is clearly involved in carcinogenesis, environmental mutagens are clearly involved in carcinogenesis, and inherited mutations are clearly contributory to the additional effects of life-history events.  The silly extremism to which the objectors to V-T would take us would be to say that, obviously, if we avoided any interaction whatsoever with our environment, we'd never get cancer.  Of course, we'd all be so demented and immobilized with diverse organ-system failures that we wouldn't realize our good fortune in not getting cancer.

The story and much of the discussion on all sides is also rather naive even about the nature of cancer (and how many or of which mutations etc it takes to get cancer); but that's for another post sometime.

ADDENDUM 2
I'll add another new bit to my post, that I hadn't thought of when I wrote the original.  We have many ways to estimate mutation rates, in nature and in the laboratory.  They include parent-offspring comparisons in genomewide sequencing samples, and there have been sperm-to-sperm comparisons.  I'm sure there are many other sets of data (see Michael Lynch in Trends in Genetics 2010 Aug; 26(8): 345–352).  These give a consistent picture, and one can say, if one wants to, that the inherent mutation rate is due to identifiable environmental factors, but given the breadth of the data that's not much different from saying that mutations are 'in the air'.  There are even sex-specific differences.
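The parent-offspring approach can be sketched as a back-of-envelope calculation (my own illustrative numbers, roughly in the range such studies report, not figures from any particular study): count the de novo mutations in a child that are absent from both parents, and divide by the number of callable diploid sites.

```python
# Back-of-envelope trio-based mutation-rate estimate (hypothetical but
# plausible numbers): de novo single-nucleotide variants found in one
# child, divided by the number of callable sites on both genome copies.
de_novo_snvs = 70             # hypothetical count of new mutations in a child
callable_sites = 2 * 3.1e9    # both inherited copies of a ~3.1 Gb genome
rate = de_novo_snvs / callable_sites
print(f"per-site, per-generation mutation rate ~ {rate:.1e}")
```

A count of a few dozen de novo variants per genome yields a rate on the order of 10^-8 per site per generation, consistent with the picture the review literature describes.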

The numerous mutation-detection and repair mechanisms built into genomes add to the idea that mutations are part of life--that they are not, for example, a product of modern human lifestyles.  Of course, evolution depends on mutation, so the mutation rate cannot be, and never has been, reduced to zero--a species that couldn't change doesn't last.  Mutations occur in plants and animals and prokaryotes, in all environments, and, I believe, generally at rather similar species-specific rates.

If you want to argue that every mutation has an external (environmental) cause rather than an internal molecular one, you are merely saying there's no randomness in life or imperfection in molecular processes.  That is as much a philosophical as an empirical assertion (as perhaps any quantum physicist can tell you!).  The key, as asserted in the post here, is that for the environmentalists' claim to make sense--to be a mutational cause in the meaningful sense--the force or factor must be systematic, identifiable, and tissue-specific, and it must be shown how it gets to the internal tissue in question and not to other tissues on the way in, etc.

Given how difficult it has been to chase down most environmental carcinogenic factors to which exposure is more than very rare, that the search has been going on for a very long time, and that only a few have been found that are, in themselves, clearly causal (ultraviolet radiation, human papillomavirus, ionizing radiation, the ones mentioned in the post), whatever is left over must be very weak, non-tissue-specific, rare, and the like.  Even radiation-induced lung cancer in uranium miners has been challenging to prove (for example, because miners also largely were smokers).

It is not much of a stretch to say that even if, in principle, all mutations arising in our body's lifetime were due to external exposures, and the relevant mutagens could be identified and shown in some convincing way to be specifically carcinogenic in specific tissues, in practice the aggregate exposures to such mutagens are unavoidable and epistemically random with respect to tissue and gene.  That, I would say, is the essence of the V-T finding.

Quibbling about that aspect of carcinogenesis is for those who have already determined how many angels dance on the head of a pin.

Tuesday, July 12, 2016

In Memoriam: Al Knudson, a modest, under-recognized founder of cancer genetics (and more)

My first job as a young faculty member was in the Graduate School of Biomedical Sciences at the University of Texas Health Science Center in Houston.  Our small Center for Demographic and Population Genetics was part of the Graduate School, and it was small enough that we got to know, and interact with, the Dean.  And what a dean he was!

The great and good Al Knudson (1922-2016).  Google images.
It was a small graduate school, so Dr Knudson was still active in research--cancer research. One of the first talks I heard down there in Houston, when I still didn't have my first pair of cowboy boots, y'all, presented an interesting idea about the causes of cancer.

Radiation was a known carcinogen, as were some chemicals, and there were various ideas about how carcinogenesis worked at the gene level. The basic idea was that these agents caused genetic mutations that led cells to misbehave and, though abnormal, escape detection by the immune system. More mutations meant more cancer risk, and this was consistent with 'multi-hit' ideas of cancer. More mutations took longer to accumulate, which was consistent with the increasing risk of cancer with age.  But genetics was still very rudimentary then, compared to now, and direct testing was primitive at best. And there were some curious exceptions.  An interesting fact was that some cancers seemed familial, arising in close relatives, and typically at earlier ages than the sporadic versions of what seemed to be the same type of tumor.  Why?

One example was the eye cancer retinoblastoma (Rb), which arose in children or young adults, mostly in isolated cases; but there were affected families in which Rb was often present at birth.  Knudson's idea was that in affected families one harmful allele was being transmitted, but the disease did not arise until a second mutation occurred.  Al published a quantitative mutational model of the onset-age pattern in a PNAS paper in 1971, just before I arrived in Houston, but by chance I had heard him present his work at the time of my job interview.

The basic idea was a 2-hit hypothesis: you could inherit one Rb mutation, and then only had to 'wait' for one of your embryonic retinal cells to suffer the bad luck of a hit in the normal copy for a cancer to develop.  That waiting time accounted for the earlier onset of familial cases: they only had to 'wait' for one mutation, whereas sporadic cases needed to experience two Rb hits in the same cell lineage.
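The waiting-time logic can be sketched with a minimal Monte Carlo simulation (my own illustration, not Knudson's actual 1971 model; the 30-year mean waiting time is a made-up number): familial cases wait for one exponentially distributed hit, sporadic cases for the sum of two, which pushes their onset ages markedly later.

```python
import random

# Minimal Monte Carlo sketch of the 2-hit idea (illustrative only):
# each 'hit' arrives after an exponential waiting time.  Familial cases
# inherit one hit and wait for one more; sporadic cases must accumulate
# two hits in the same cell lineage.
random.seed(1)
MEAN_WAIT = 30.0  # hypothetical mean years until a hit in the at-risk lineage

def onset_age(inherited_hits):
    hits_needed = 2 - inherited_hits
    return sum(random.expovariate(1 / MEAN_WAIT) for _ in range(hits_needed))

familial = sorted(onset_age(1) for _ in range(10_000))
sporadic = sorted(onset_age(0) for _ in range(10_000))
print("median familial onset:", round(familial[5_000], 1))  # ~ 30*ln2 ~ 21 yr
print("median sporadic onset:", round(sporadic[5_000], 1))  # much later
```

The qualitative pattern, early familial onset versus later sporadic onset, is exactly the signature Knudson exploited in the retinoblastoma data.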

This was a profound insight.  It accounted for cancer genetic findings in which some forms of cancer clustered in families (e.g., some breast and colorectal cancers), yet most cases were sporadic. It was shown at roughly that time, by clever work in those crude days of human genetics, that tumors were clonal--the tumor, even when it had spread, was the descendant of a single aberrant (mutated) cell.

It did not take long for this sort of thinking, along with various methods for detection, to find the Rb gene--and other genes related to cancer. Work that eventually included genomewide tests for loss of detectable variation, based on microsatellite sites, continued to confirm the idea, far beyond those types of cancer that seem to be caused largely by changes in a single gene. The idea of somatic mutation caused by environmental factors was complemented by the idea that it is common to inherit genotypes that are partially altered but insufficient by themselves to cause cancer, so that the tumor only arises later in life, after environmentally caused (or stochastic) further mutations occur.

Knudson's basically 2-hit idea was quickly generalized to 'multi-hit' models of cancer, and the discovery that cancers in a given individual were clonal led to models in which combinations of inherited mutations (present in every cell) and those that occurred somatically seemed to account for the basic biology of cancer.  Many of the individual genes whose mutation puts a person at very elevated risk of one or more forms of cancer have since been identified, and newer technology has allowed their functional nature (and the reason for their role in cancer) to be found.  Some are involved in DNA repair or control of cell division, and it's understandable why their mutational loss is dangerous.

The sources of variation in these genes may vary, but cancer as a combination of inherited and somatically generated mutations is a, if not the, prevailing general model for its biological nature and epidemiology, and it shows why tumors are somatic evolutionary phenomena at the gene level.  Knudson's nugget of an idea triggered much broader work in human genetics that, once technology caught up to the challenge, led to our understanding (and, too often, convenient ignoring) of the role of combined inherited and somatically induced variation as a major cause of the common, complex disorders for which genomewide mapping has become a routine approach.

I was still in Houston when Dr Knudson moved to the Fox Chase Cancer Center in Philadelphia.  We missed him, but over the following decades he continued to contribute to the understanding of cancer.  His inspiring, gentle, and generous nature was an exception in the snake-pit that has become so common in the 'business model' of so many biomedical research circles.

Al's foundational work earned him many honors.  But he didn't get one that I think he richly deserved: his quiet, transformative role in understanding cancer, and the much broader impact on human genetics that followed, deserved a Nobel Prize.

Monday, May 30, 2016

Cancer moonshot and slow-learners

Motivated by Vice President Biden's son's death at an early age from cancer, President Obama recently announced a new health initiative he's calling the cancer 'moonshot'.  This is like a second Nixonian 'war' on cancer, but using a seemingly more benign metaphor (though cancer is so awful that treating it as a 'war' seems apt in that sense). Last week the NYTimes printed an op-ed piece that pointed out one of the major issues and illusions hidden by the rhetoric of the new attack on cancer, as with the old:  curing one cancer may extend a person's life, but it also increases his or her chances of a second cancer, since risks of cancer rise with age.

Cancers 'compete' with each other for our lives
The op-ed's main point is that the more earlier onset cancers we cure, the more late onset, less tractable tumors we'll see.  In that sense, cancers 'compete' with each other for our lives.  The first occurrence would get us unless the medical establishment stops it, thus opening the door for some subsequent Rogue Cell to generate a new tumor at some later time in the person's life.  It is entirely right and appropriate in every way to point this out, but the issues are subtle (though not at all secret).

First, the increase in risk of some cancers slows with age.  Under normal environmental conditions, cancers increase in frequency with age because they are generally due to the accumulation of multiple mutations of various sorts, so that the more cell-years of exposure, the more mutations will arise.  At some point, one of our billions of cells acquires a set of mutational changes that leads it to stop obeying the rules of restraint in form and cell-division appropriate for the normal function of its particular tissue. A tumor results from a combination of exposure to mutagens and mutations that occur simply by DNA replication errors--totally chance events--when cells divide.  As the tumor grows it acquires further mutations that lead it to spread or resist chemotherapy, etc.

This is important, but the reasons are subtle.  The attack on cells by lifestyle-related mutagens like radiation or chemicals in the environment becomes less intense as people age and simplify their lives, reducing their exposure to these risk factors. Moreover, cell division rates--and cell divisions are when mutations arise--themselves slow down, so the rate of accumulation of new mutations, whether by chance or by exposures, slows.  This deceleration of risk with age at least tempers the caution that curing cancers in adults will leave them alive for many years and hence at risk for many more cancers (though surely it will make them vulnerable to some!).


Apollo 11, the first mission to land humans on the moon; Wikipedia

Competing causes: more to the story, but nothing at all new
There's an important issue not mentioned in the article, but one that is much more important in an indirect way--perhaps the op-ed's authors didn't think of it, or as specialists simply weren't aware of it.  It's not at all secret, and indeed is something we ourselves studied for many years, and have blogged about here before: anything that reduces early-onset disease increases the amount of late-onset disease.  So, curing cancer early on (which is what the op-ed was about) increases risk for every later-onset disease, not just cancer.  In the same way, as we've noted before, reducing heart disease or auto accident rates or snake-bite deaths will increase dementia, diabetes, and cancer--all other later-onset diseases--simply because more people will live to be at risk.  This is the Catch-22 of biomedical intervention.
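The competing-causes arithmetic can be sketched in a few lines (hypothetical hazards, not real disease rates): give each person independent onset ages for an "early" disease and a "late" one; whichever strikes first is the one they experience, so curing the early disease necessarily swells the count of late-onset cases.

```python
import random

# Hedged sketch of competing causes (made-up exponential hazards):
# each person draws independent onset ages for an 'early' and a 'late'
# disease; whichever comes first is the one they get.
random.seed(2)
N = 100_000
early = [random.expovariate(1 / 50) for _ in range(N)]  # mean onset age 50
late = [random.expovariate(1 / 80) for _ in range(N)]   # mean onset age 80

late_cases_before = sum(l < e for e, l in zip(early, late))
late_cases_after = N  # early disease cured: everyone lives to face the late one
print("late-onset cases before cure:", late_cases_before)
print("late-onset cases after cure:", late_cases_after)
```

With these rates, only about 38% of people would have reached the late-onset disease before the cure; afterward, everyone does. The specific numbers are arbitrary, but the direction of the effect is what demographers' competing-risks theory guarantees.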

In this sense all the marketing rhetoric about 'precision' genomic medicine is playing a game with the public, and the game is for money--research money among other things.  There's no cure for mortality or the reality of aging.  Whether due to genetic variants or lifestyle, we are at increasing risk for the panoply of diseases as we age, simply because exposure durations increase.  And every victory of medicine at earlier ages is a defeat for late-age experience.  Even were we to suppose that massive CRISPRization could cure every disease as it arose, and people's functions didn't diminish with age, the world would be so massively overpopulated as to make ghastly science fiction movies seem like Bugs Bunny cartoons.

But the conundrum is that because of the obvious and understandable fact that nobody wants major early-onset diseases, it seems wholly reasonable to attack them with all the research and therapeutic vigor at our disposal. The earlier and more severe the disease, the greater the gain in satisfactory life-years.  But the huge investment that NIH and its university clients make in genomics and you-name-it related to late-age diseases is almost sure to backfire in these ways.  Cancer is but one example.

People should be aware of these things.  The statistical aspects of competing causes have long been part of demographic and public health theory.  Even early in the computer era, many leading demographers were working on the quantitative implications of competing causes of death and disease, and similar points were very clear at the time.  The relevance to cancer, as outlined above, was also obvious.  I know this first-hand, because I was involved in it myself early in my career.  It was an important part of theorizing, superficial as well as thoughtful, about the nature of aging and species-specific lifespan, and much else.  The hard realities of competing causes have been part of the actuarial field since, well, more or less since the actuarial field began.  It is a sober lesson that apparently nobody wants to hear.  So it should not be written about as if it were a surprise, or a new discovery or realization.  Instead, the question--and it is in every way a fair question--should be why we cannot digest this lesson.  Is it because of our normal human frailty of wishful thinking about death and disease, or because it is not convenient for the biomedical industries to recognize this sober reality front and center?

It's hard to accept mortality and that life is finite.  Some people want to live as long as possible, no matter the state of their health, and will reach for any life-raft at any age when they're ill.  But a growing number are signing Do Not Resuscitate documents, and the hospice movement, to aid those with terminal conditions who want to die in peace rather than wired to a hospital bed, continues to grow.  None of us wants a society like that in Anthony Trollope's 1882 dystopian novel The Fixed Period, where at age 67 everyone is given a nice comfortable exit--at least that was the policy until it hit too close to home for those who legislated it.  But we don't want uncomfortable, slow deaths, either.

The problem of competing causes is a serious but subtle one, but health policy should reflect the realities of life, and of death.  I wouldn't bet on it, however, because there is nothing to suggest that humans as a collective electorate are ready or able to face up to the facts, when golden promises are being made by legislators, bureaucrats, pharmas, and so on.  But, science and scientists should be devoted to truth, even when truth isn't convenient to their interests or for the public to hear.

Tuesday, January 12, 2016

Cancer--luck or environment? Part II: Nothing to food-fight over

Yesterday we commented on the 'controversy' over whether cancer is mainly due to environmentally (lifestyle) or inherently (randomly) arising mutations.  This is a tempest in a teaspoon.

Mutations, whatever their individual cause, must accumulate among dividing cells until one cell has the bad luck to acquire a set of changes that 'transforms' it into a misbehaving cancer cell.  The set of changes varies even among tumors of the same organ, because many different genes and their expression-regulation contribute to the growth, or restraint of growth, even within the same tissue. That is, not all breast, colon, or lung cancers are caused by the same set of mutations.  The transformed cell then proliferates, rapidly dividing and thus acquiring more mutational changes that enable it to do things like metastasize to other parts of the body, or develop resistance to drug treatment.  The more rapidly it grows and spreads, the more rapidly such things can happen.

Even if the first transformation were due entirely to environmentally induced mutations, the real dangers that ensue during the tumor's lifespan are relatively rapid additions to the original tumorigenic process, and so in a sense the main dangers of cancer are primarily, if not nearly exclusively, due to inherent mutation among cancer cells.  If you get lung cancer and then stop smoking, your lung cancer will still evolve. Indeed, if environment contributes, it may make things worse--if that "environment" is radiation or chemotherapy: radiation definitely causes mutations, and chemotherapy weeds out cells that haven't experienced resistance mutations, leaving or even making room for tumor-lineage cells that do have resistance mutations.  Finally, things that stimulate cell division can facilitate new mutations or simply make a tumor spread more rapidly.

So clearly cancer is not all due to environmental changes, nor all to inherently occurring ones.  These and other factors comprise multiple, interacting causes, and attributing cause solely to environment or to inherency is misleading.  But what if cancer were in fact entirely due to lifestyle factors that stimulate cell division or directly cause mutation?  Of course this would be very good for the Big Data epidemiologists and their studies, and threatening to industries that produce mutagenic waste or products, etc.  But suppose epidemiologists were to continue to find major carcinogenic environmental factors (that is, that the major ones, like smoking, aren't already known), and suppose further that avoidance behavior were to follow the announcement of the risks (not an obvious thing to assume, actually; the tobacco industry is still thriving, after all).  Then what?

Epidemiologists would say their work has prevented cancer, and would claim victory over the to-them strange idea that cancer is due to inherent mistakes in DNA replication and is inevitable if one lives long enough.  A lifestyle-change-based reduction in cancer would clearly be a very good thing.  But it would not be an unalloyed victory: one thing it would do is keep the non-exposed person alive (because s/he didn't get cancer!), and that in turn means s/he would be at higher risk of (1) other age-related deteriorative diseases that dying of cancer would have precluded, many of which are waiting in the wings at ages when cancers arise, and (2) eventually getting cancer at some older age.  In the first case, the rates of other diseases like stroke and diabetes would necessarily go up, as would the risk of slowly petering out in increasingly bad shape in an intensive nursing unit.  That would, of course, lower the lifetime cancer risk, but not in a very pleasant way.

In other words, lifestyle changes can delay cancer, but even assuming that the per-year exposure to environmental mutagens were reduced, the consequently longer exposure to those mutagens might mean that their lifetime total would go up, so whether or not this decreased the lifetime risk of cancer would be an open question.  However, what it would do, by removing environmental causes, is raise the fraction of cancers that are due to inherent mutation--increasing the fraction of Vogelstein-Tomasetti cases!

It's undoubtedly good to get cancer later rather than earlier in life, but not an unalloyed good.  In any case, what these points show is that the argument over the particular fraction of cancers due to environment vs inherent mutation is rather needless.  At most it might be relevant to ask how much funding investment in big epidemiological studies is going to pay off, rather than spending on other, clearer issues (especially if the major environmental mutagens are already known).  There have already been scads of massive long-term studies of almost anything you can name, aimed at identifying carcinogenic exposures. With some very important exceptions, by now well known, these studies have largely come up empty, or with now-it-is/now-it-isn't conclusions, in the sense that risk factors are either weak, or, if strong, are rare and hard to find embedded in the broad mix of chronic disease risk factors.  Environments are always changing, with new possible carcinogenic exposures arising, but those with strong effects usually show up on their own--as multiple cases of a particular cancer type in some specific location or among workers in a particular industry, in in vitro mutagenesis studies, and the like.

If causation is too generic, don't get your hopes up
If comparisons among countries, for example, show that the same cancer can have very different age patterns or incidence rates, this may point to lifestyle differences as major risk factors. But that is far from saying that the causal elements are individually strong or simple enough to be enumerated by the usual Big Study epidemiological approach; one can be extremely doubtful that this would be the case.

Saying something is 'environmental' because, for example, it varies among populations is like saying something is 'genetic' because it varies among relatives.  If the situation is like that of genetic factors as documented by countless GWAS studies--many different, correlated or even independent contributors--then each person's cancer will be due to a different, complex set of experiences plus the luck of the mutational draw.  As with GWAS and related approaches, it is far from clear that large, long-term environmental studies, even more mega than we've already had for decades, are the appropriate way to approach the problem.

Indeed, to a considerable extent, if each case is causally unique, arising from a different combination of factors with different strengths in each individual, then epistemologically this is not very different from saying that cancer occurs randomly--which, though for a different sort of reason, is what V and T said.  There won't, for example, be a specific environmental change you can make, any more than a specific gene you can re-engineer, to make the disease go away or even to change its frequency or age of onset very much.

Food fights like this one are normal in science and often have to do with egos, investment in one 'paradigm' or another, how research is supported, and how expert advice is conveyed to the public.  But such disputes, though very human, are rather off the point.  We often basically ignore risks we know, as in the proliferation of CT and other radiation-based scanning and medical testing, which can be carcinogenic.  Life is mutagenic, one way or another.  So while you have life, enjoy your food--don't waste it by throwing it at each other!  There are better questions to argue about.

Monday, January 11, 2016

Food-Fight Alert!! Is cancer bad luck or environment? Part I: the basic issues

Not long ago Vogelstein and Tomasetti stirred the pot by suggesting that most cancer is due to the bad luck of inherent mutational events in cell duplication, rather than to exposure to environmental agents.  We wrote a pair of posts on this at the time. Of course, we know that many environmental factors, such as ionizing radiation and smoking, contribute causally to cancer because (1) they are known mutagens, and (2) there are dose or exposure relationships with subsequent cancer incidence. However, most known or suspected environmental exposures do not change cancer risk very much, or if they do, the effect is difficult to estimate or even to prove.  For the purposes of this post we'll simplify things and assume that what transforms normal cells into cancer cells is genetic mutation; though causation isn't always so straightforward, that won't change our basic storyline here.

Vogelstein and Tomasetti upset the environmental epidemiologists' apple cart by using some statistical analysis of cancer risks related, essentially, to the number of cells at risk, their normal time of renewal by cell division, and age (time as correlated with number of cell divisions).  Again simplifying, the number of at-risk actively dividing cells is correlated with the risk of cancer, as a function of age (reflecting time for cell mutational events), and with a couple of major exceptions like smoking, this result did not require including data on exposure to known mutagens.  V and T suggested that the inherently imperfect process of DNA replication in cell division could, in itself, account for the age- and tissue-specific patterns of cancer.  V and T estimated that except for the clear cases like smoking, a large fraction of cancers were not 'environmental' in the primary causal sense, but were just due, as they said, to bad luck: the wrong set of mutations occurring in some line of body cells due to inherent mutation when DNA is copied before cell division, and not detected or corrected by the cell.  Their point was that, excepting some clear-cut environmental risks such as ionizing and ultraviolet radiation and smoking, cancer can't be prevented by life-style changes, because its occurrence is largely due to the inherent mutations arising from imperfect DNA replication.
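The flavor of the correlation V and T relied on can be sketched with a toy calculation.  To be clear, this is our own illustration, not their actual model, and every number in it is hypothetical: if each stem-cell division carries some tiny probability of producing a fully transformed cell, then lifetime risk rises with the total number of at-risk divisions, so tissues with more stem cells or faster turnover should show more cancer, with no environmental term needed.

```python
# Toy sketch of the divisions-vs-risk idea; NOT V-T's actual model.
# u (the per-division chance of a transforming event) and both tissue
# parameter sets below are hypothetical values chosen for illustration.

def lifetime_risk(n_stem_cells, divisions_per_cell, u=1e-13):
    """P(at least one transforming event) = 1 - (1 - u)^(total divisions)."""
    total_divisions = n_stem_cells * divisions_per_cell
    return 1 - (1 - u) ** total_divisions

# Two made-up tissues differing only in cells at risk and turnover rate:
small_slow = lifetime_risk(n_stem_cells=1e6, divisions_per_cell=100)
large_fast = lifetime_risk(n_stem_cells=1e8, divisions_per_cell=5000)
print(f"small, slow-renewing tissue: {small_slow:.4%}")
print(f"large, fast-renewing tissue: {large_fast:.4%}")
```

On these made-up numbers, the fast-turnover tissue's lifetime risk comes out to a few percent while the slow one's is orders of magnitude lower--the qualitative pattern V and T report across real tissues, produced here with no exposure data at all.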

Boy, did this cause a stink among environmental epidemiologists!  One, we think, undeniable factor in this food fight is that environmental epidemiologists and the schools of public health that support them (or, more accurately, that the epidemiologists support with their grants) would be put out of business if their very long, very large, and very expensive studies of environmental risk (and the huge percentage of additional overhead that pays the schools' meal-tickets) were undercut--defunded, with the money going elsewhere.  There is also lost pride, which is always a factor in science because it's run by humans: all that epidemiological work would go to waste, to the chagrin of many, if it was based on misunderstanding the basic nature of the mutagenic, and hence carcinogenic, process.

So naturally the V and T explanation has been heavily criticized from within the industry.  The critics also raise the point, and it's a valid one, that we clearly are exposed to many agents and chemicals that are products of our culture, not inevitable, and known to cause mutations in cell culture; these certainly must contribute to cancer risk.  The environmentalists naturally want the bulk of causation to be due to such lifestyle factors because (1) they do exist, and (2) they are preventable, at least in principle.  They don't deny that inherent mutations arise and can contribute to cancer risk, but they assert that most cancer is due to bad behavior rather than bad luck, and hence that we should concentrate on changing our behavior.

Now, in response, a paper in Nature ("Substantial contribution of extrinsic risk factors to cancer development," Wu et al.) provides a statistical analysis of cancer data as a rebuttal to V and T.  The authors present various arguments against the claim that most cancer can be attributed to inherent mutation, arguing instead that external factors account for 70 to 90% of risk.  So there!

In fact, the rebuttal rests on a variety of technical arguments, and you can judge which seem more persuasive (many blog and other commentaries are also available, as this question hits home on important issues--including vested interests).  But nobody can credibly deny that both environment and inherent DNA replication errors are involved.  DNA replication is demonstrably subject to uncorrected mutational change, and that (for example) is what has largely driven evolution--unless epidemiologists want to argue that for all species in history, lifestyle factors were the major mutagens, which is plausible but very hard to prove in any credible sense.

At the same time, environmental agents do have mutational effects of various sorts, and higher doses generally mean more mutations and higher risk.  So the gist of the legitimate argument (besides professional pride, territoriality, and preservation of public health's mega-studies) is really the relative importance of environment vs inherent processes.  The territoriality component is reminiscent of the angry assertion among geneticists, about 30 years ago, that environmental epidemiologists and their very expensive studies were soaking up all the money so that geneticists couldn't get much of it.  That is one reason geneticists were so delighted when cheap genome sequencing and genetic epidemiological studies (like GWAS) came along, promising to solve problems that environmental epidemiology wasn't answering--to show that it's all in the genes (and so that's where the funding should go).

But back to basic biology 
Cells in each of our tissues have their own life history.  Many or most tissues are maintained by specialized stem cells, which divide so that one daughter cell differentiates into a mature cell of that tissue type.  This is how, for example, the actively secreting or absorbing cells in the gut are produced and replaced during life.  Various circumstances, inherent and environmental, can affect the rate of such cell division. Stimulating division is not the same as being a direct mutagen, but there is confounding, because more cell division means more inherent mutational accumulation.  That is, an environmental component can increase risk without being a mutagen, the mutations being due to inherent DNA replication error.  Cell division rates vary quite a lot among our tissues: some tissues renew continually during life, others less so, and some renew under specific circumstances (e.g., pregnancy or hormonal cycles).

As we age, cell divisions slow down, also in patterned ways.  So mutations accumulate more slowly, and they may be less likely to cause an affected cell to divide rapidly.  After menopause, breast cells slow or stop dividing.  Other cells, as in the gut or other organs, may still divide, but less often.  Since mutation, whether caused by bad luck or by mutagenic agents, affects cells when they divide and copy their DNA, mutation rates and hence cancer rates often slow with advancing age.  So the rate of cancer incidence is age-specific, as well as related to the size of organs and to lifestyle stimuli to growth or mutation.  These are at least general characteristics of cancer epidemiology.

It would be very surprising if there were no age-related aspect to cancer (as there is with most degenerative diseases).  The absolute risk might diminish with lower exposure to environmental mutagens or mitogens, but the replicability and international consistency of the basic patterns suggest an inherent cytological etiology.  That does not, of course, in any sense rule out environmental factors working in concert with normal tissue activity; as noted above, it's not easy to isolate environmental from inherent causes.

Wu et al.'s analysis makes many assumptions, the data (on exposures and cell counts) are suspect in many ways, and it is difficult to accept any particular analysis as definitive.  In any case, since both types of causation are clearly at work, what is the importance of the particular percentage of risk due to each?  Clearly, strong avoidable risks should be avoided; but just as clearly, we should not chase down every minuscule risk or complex unavoidable lifestyle aspect, when we know inherent mutations arise and we have a lot of important diseases to try to understand better, not just cancer.

Given this, and without discussing the fine points of the statistical arguments, the obvious bottom line that both camps agree on is that both inherent and environmental mutagenic factors contribute to cancer risk. However, having summarized these points generally, we would like to make a more subtle point about this, that in a sense shows how senseless the argument is (except for the money that's at stake). As we've noted before, if you take into account the age-dependency of risk of diseases of this sort, and the competing causes that are there to take us away, both sides in this food fight come away with egg on their face.  We'll explain what we mean, tomorrow.

Thursday, November 5, 2015

Red meat makes a good, scary cancer story....but is it?

It's off again, on again:  don't eat processed meat, don't eat red meat, or you'll get colon cancer!! Eat fish (well, unless it has mercury) or chicken (unless it has salmonella), or 'the other white meat': pork (remember the billboards?). They're safe!

A few years ago we seemed to have been given some relief when stories suggested that red meat (beef) was OK after all (of course, the lives of the cows were awful, and eating beef meant you doped up on antibiotics, but at least it didn't give you colon cancer).

Recently, a statement (now apparently offline) released by the International Agency for Research on Cancer, a part of the World Health Organization, asserted that eating processed meat and red meat 'causes' cancer.  Actually, the report was a bit more nuanced than the headlines, but journalists have to make a living, no?

Bacon, Stock photo

In response to a strong backlash, the WHO was quickly forced to 'clarify' their clarion call to vegetarianism -- here's a link to their Q&A on the subject.  They now acknowledge, or 'clarify', that what they had done was simply add these meats to a list of known nasties that cause cancer.  Putting meat on a causal list is one thing, but dishing it out to the media is another, and a rather irresponsible way to play for publicity (of course, if the news media made an exception and actually did their job of being skeptics, this wouldn't have unwound as it did).

In any case, the bottom line was basically that even two strips of bacon a day increase your colon cancer risk by 18%.  That sounds like a whopping and terrifying difference!  The WHO put this in the same carcinogenic-substance category as asbestos and tobacco. As they quickly clarified, that list is in a sense a warning list, but the 18% figure is what got into the news, and it may have, at least temporarily, slammed the bacon and hamburger industry, if anybody still listens to the daily Big Warnings. However, let's hold all cynicism for the moment, hard as that is to do, and look a bit more closely at what was said.

First, there seems little doubt that processed meats 'cause' cancer.  That doesn't mean an innocent-looking strip of bacon will give you cancer.  Instead, it means that various high-quality studies have found a dose-response pattern in which higher or longer exposure earlier in life is associated with higher cancer incidence later on.  We know that correlation is not the same as causation, and that lifestyle factors are highly correlated with each other.  Thus, for example, those in dire poverty don't eat tons of processed meat, and those who eat less salami also eat more brussels sprouts, take vitamins, don't smoke, and lay off the double gin and tonics.....and of course go to the ashram regularly, to get your mind off the bacon you didn't eat at breakfast and the aftertaste of your dinner's brussels sprouts, and to say a mantra to stay calm after you've given up everything that's fun.

Now, in the West, the lifetime risk of colon cancer is about 5%.  That is, if you tote up the probability of having cancer at age 40, 45, 50, .... 100, and if the 18% figure is credible, the risk is about that much higher in those who dose up on pastrami and burgers--the estimate was based on eating two strips of bacon, or the equivalent, every day.  Of course, by far most of these cancers occur in older people (over the age of 60, say).  That means the risk figures mainly apply to you if you live to old age; for those who die earlier of other things, the actual risk turned out to be zero--they enjoyed their visits to McD's and the deli!  That's why smoking is, in a literal epidemiological sense, preventive relative to colon cancer (smoking will kill you of something else first).  There's no joking about cancer, but the basic idea is that of those who live long enough, about 5% get colon cancer at some age.  Meat-eating habits aside, risks have been declining in recent years in developed countries (and, I think, increasing in other countries as they westernize).

Eat meat and lower your risk!
At a baseline of 5%, an 18% increase means a lifetime risk of about 6%.  Now if you hog up even more, your risk will go higher, perhaps much higher.  But wait a minute: how many people actually dish up so heavily on processed and red meat (including steak and burgers)?  Surely some do.  In fact, we don't know exactly where the lifetime risk estimate of 5% comes from; if it's from a population sample, then it wouldn't have regressed out meat-eating, and the figure would already include meat-eaters. However, let's ignore these potential confounding or confusing issues and just consider the 18% figure on its own, as a given risk difference between abstainers and sausage gluttons.

Now, in modern countries with health-care systems, one routine procedure is regular colonoscopy in older adults.  A recent estimate is that regular colonoscopy can prevent about 53% of colon cancers, because precancerous polyps are found and excised so that they can't transform into cancer.  You can find even more dramatic estimates of the preventive effectiveness if you scan the web.  Likewise, you'll find many other lifestyle factors widely cited as having protective effects, including exercise, vitamins, eating vegetables, and the like.

Let's just do a bit of back-of-the-envelope numerology to make the point: if you're a bacon hog but have regular screening, get your exercise and all that, and thereby reduce your meat-elevated risk by 50%, then your net risk is around 3%--about half the 'average' of 5%.  One can surmise that if you stop your bacon fix but then figure you're fine and skip the other preventives, many of which are likely to be wanting in the meat-hog's normal lifestyle anyway, the actual effect of your 'healthy' baconless diet change will be to increase your cancer risk!
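The envelope arithmetic, using the figures quoted above (a ~5% baseline lifetime risk, the reported 18% relative increase for daily processed meat, and the ~53% of cases estimated to be preventable by colonoscopy), looks like this:

```python
# Back-of-the-envelope risk arithmetic using the figures from the post.
baseline = 0.05                 # ~5% lifetime colon cancer risk in the West
meat_relative_increase = 0.18   # reported 18% increase for ~2 strips of bacon/day
screening_reduction = 0.53      # estimated fraction of cases colonoscopy prevents

bacon_hog = baseline * (1 + meat_relative_increase)         # ~5.9% lifetime risk
bacon_hog_screened = bacon_hog * (1 - screening_reduction)  # ~2.8% lifetime risk

print(f"abstainer, unscreened: {baseline:.1%}")
print(f"bacon hog, unscreened: {bacon_hog:.1%}")
print(f"bacon hog, screened:   {bacon_hog_screened:.1%}")
```

The screened bacon hog ends up well below the unscreened 'average'--which is the whole point: the relative sizes of these effects, not the scary-sounding 18% on its own, determine the absolute risk.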

This is a lesson in complex causation and oversimplified news stories.  Processed meat may be a risk factor for colon cancer, but throwing irresponsibly simplified figures like raw meat to the news media leads to worse, rather than better information for the public.

So, as Hippocrates said, moderation in all things.  Eat your reuben (OK, yes, along with some broccoli).  But go one better than Hippocrates:  get scoped!

Friday, October 9, 2015

The Elephant (not) in the Cancer Ward

Recently, Tomasetti and Vogelstein (the latter a senior and highly regarded cancer geneticist) suggested that most cancer is due just to bad luck.  We discussed that study here.  When cells divide, DNA is copied, but the copying is a molecular process that isn't perfect (see the discussion of Wednesday's Nobel Prize in Chemistry, awarded for the discovery of DNA repair mechanisms and their association with cancer).  There are mutation-detection mechanisms of various sorts (the BRCA1 gene, whose mutations are associated with breast and some other cancers, has that sort of function).  The more at-risk cell divisions, the more mutations, and the higher the likelihood that one cell will experience a combination of mutations that (along with inherited variation) transforms it into the founder of a cancer.  T and V's assertion, based on statistical analysis of the numbers of cells at risk, their division rates in given tissues, and age-of-onset patterns, was that random mutation was a major contributor to cancer, and that inherited genotype and environmental exposures cannot account for this substantial fraction of cases.

Naturally, those whose grant fortunes depend on the idea that cancer is 'genetic' and/or 'environmental' roared in opposition to an idea that could threaten their perspective (and empires). Some of the T and V paper's statistical methods were questioned, and perhaps their paper was over-stated or less definitive than claimed.  Nobody can doubt that genetic variation, and environmental exposures that make cells more likely to experience mutations, play a role in cancer.  But in any practical sense it is hard to deny that luck plays a role too (even with environmental exposures, because if they cause mutations, they basically strew them randomly across the genome rather than causing them in any particular gene).

But we mentioned then an important issue raised 40 years ago by the epidemiologist Richard Peto.  Essentially, it is that other mammals, like mice, experience a similar array of cancer types, with similarly increasing risk with age....but that increase is roughly calibrated to their life span. In fact, mice have far fewer stem cells in, say, their intestine or blood than humans, yet their risk of cancer in these tissues increases far more rapidly (in years) than human risk does, even though we have orders of magnitude more at-risk cells and cell divisions.  This became known as Peto's Paradox.  It has not really been answered, though there are attempts to determine how different species, of different sizes, calibrate their cancer risk in relation to their typical observed lifespans.
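The scale of the paradox is easy to see with order-of-magnitude numbers.  The cell counts and division totals below are rough placeholders we have assumed for illustration, not measured values: if lifetime risk simply tracked total at-risk cell divisions, humans should vastly out-cancer mice.

```python
# Naive expectation behind Peto's Paradox, using assumed round numbers.
# All four figures are hypothetical placeholders, for illustration only.
human_cells, human_divisions_per_cell = 3e13, 5e3
mouse_cells, mouse_divisions_per_cell = 3e9, 5e2

# If risk ~ total divisions, the predicted human/mouse risk ratio is huge:
naive_ratio = (human_cells * human_divisions_per_cell) / \
              (mouse_cells * mouse_divisions_per_cell)
print(f"naive human/mouse lifetime-risk ratio: {naive_ratio:.0e}")
# Observed lifetime cancer risks in the two species are instead broadly similar.
```

A naive prediction of a roughly hundred-thousand-fold difference, against broadly similar observed risks, is the paradox in a nutshell.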

"Elephas maximus (Bandipur)" by Yathin S Krishnappa - Own work. Licensed under CC BY-SA 3.0 via Commons - 

For example a 2014 paper in Nature Reviews Genetics by Gorbunova et al. documents the very different typical lifespans of rodent species, and suggests some plausible genetic mechanisms that may protect the longer-lived species from cancer.  There must be some such mechanism, or else we misunderstand something very important in the carcinogenesis process.

Now a new JAMA paper, discussed in a commentary in the NY Times, makes similar genetic arguments for the very out-of-line cancer-free longevity of elephants.  Based on their numbers of at-risk cells, elephants should drop over with cancer at a very young age, but instead they typically live for a very long time.  How can this be?


The JAMA authors, Abegglen et al., focused on a gene called TP53 that is clearly related (when mutated) to cancer susceptibility in humans and in experimental assays, at least in part because its product detects and effectively kills misbehaving mutated cells.  The study included humans with Li-Fraumeni syndrome (LFS), a genetic disorder that greatly increases the risk of developing cancer and has long been known to be associated with variants in TP53, and blood samples from Asian and African elephants.


The study needs close scrutiny for methodological issues, but the authors make what they feel, reasonably, is a relevant finding: there is only one copy of the TP53 gene in humans, but in elephants there are 20, and in blood-cell assays the gene's activity was higher in elephants than in humans.  The inference is that elephants' longevity relative to cancer is due to this gene. If that is indeed the (or at least an) explanation for the elephants' cancer-related longevity, it raises some other important questions, which should keep our ever-present skepticism on alert.


Questions raised by the results

As in the rodent paper cited above, single-gene mechanisms for complex traits are appealing and publication-worthy, but in a sense such claims raise questions about themselves.  Elephants also live long lives with respect to other diseases that have little if anything to do with cancer: heart disease, dementia, stroke, kidney failure, liver disease, neuromuscular and joint disease, and waning immune systems.  Are all of these traits due to having more TP53?  That seems unlikely.

Then again, whales apparently do not have multiple TP53 duplicates, and I don't know about other very large animals like rhinos, giraffes, and so on.  A standard argument would be that in ecological circumstances where natural selection favors longer lives for some species, it uses whatever mechanism happens to be available--selection has no foresight and can't simply choose genes to duplicate.  Each species will have experienced its longevity advantage in its own local time, place, and ecosystem.  Just as the genes whose mutations yield resistance to malaria in humans vary from continent to continent, so will longevity-related genes favored by selection.


So, Peto's Paradox remains curious.  If each species has its own protective mechanism (and perhaps several, for its different organs and physiological systems), then we can account in a reasonable way for longevity patterns.  There is no need to find, or even to expect, the same thing in all species' evolution: the response to selection can vary by organ system, by species, and by location even within species.  This is exactly the sort of thing we should expect when we think of the complexity of genomic mechanisms--and what has consistently been found by genome mapping studies (GWAS) of late-onset traits (and, for that matter, even early-onset ones).


In turn, that means that each paper claiming, subtly or overtly, to have found 'the', or even a widespread, important mechanism related to aging needs to be taken circumspectly.  Aging and lifespans are complex phenomena.  We will learn from each example we document but, as with GWAS results, a simple anti-aging strategy can't be inferred.  There is not likely to be a single magic bullet.

Tuesday, March 10, 2015

The lucky ones were vaporized: Hiroshima, Nagasaki, and genetics

I was alive when the two bombs, Little Boy and Fat Man, were dropped on Japan, but I was too young to know about it.  Only years later did I know in some abstract sense what dropping the "A- bomb" was all about.  It was a merciful act, designed to end WWII before hundreds of thousands of soldiers on both sides were slaughtered or disarmed (literally), no?

Bombs away.  Little Boy (wikipedia)


I'm reading a new book, Hiroshima Nagasaki, by Paul Ham.  According to the author, by the time our testing was over and we were ready to deliver our babies, both the Allies and the Japanese knew that Japan was finished.  The Nazis had just surrendered, having suffered but a fraction of what they deserved, and the Allies' attention could be turned full-bore on Japan.  Pincers from China, Russia, the US, and Britain were closing in on the besieged Japanese islands.  But if those suicidal maniacs (as we were told they were) decided to fight to the last samurai, countless good guys, as well as suicidal maniacs, would suffer.  If we let the Japanese have a look at what we could do, then they'd surrender quickly and no invasion would be needed.

To make sure they got the message, the Allies decided to give them a very close look--not a bomb dropped offshore or in a harbor, but the first dropped directly on Hiroshima. The second was insurance. About 100,000 were killed outright and 100,000 injured.  Hundreds of thousands survived exposure to one (or in a few cases both) of the bombs, and for those who did not die of radiation sickness, life would never be the same. While some managed to have basically normal, or even happy, lives, for many or most there was a huge lifelong trauma of radiation-related disease, disfigurement, social ostracism, suicide, and other forms of misery.  PTSD hadn't even been defined, but in some ways the lucky ones were vaporized.




MT readers may not know some of the biomedical aftermath, and the way these events not only ended a war but began a search for understanding of the biological effects of radiation exposure on humans.  At war's end, the Atomic Bomb Casualty Commission (later renamed the Radiation Effects Research Foundation, or RERF) was established to do research on the surviving victims.  The victims' locations when the bombs fell, their shielding (e.g., in a house, out in the open), and the resulting dose exposures were estimated for tens of thousands of bomb survivors.  Their health has been followed by RERF staff with regular examinations, as has the health of their children, in the nearly 70 years since the events.

Two leaders of the studies of these effects were the late Jim Neel and still-kicking Jack Schull.  They were leading geneticists at the time (indeed, were among the founders of the American Society of Human Genetics).  I was fortunate to be Jim's post-doc at Michigan and to have Jack on my PhD committee and then as my department director in Houston for 13 years.  I was not directly involved in the RERF, but I did co-author with Jack a book-length report on radiation and cancer for a UN radiation agency.  So while I've never been to Japan, I've followed some of what has gone on there, from a genetic point of view, for many years.

Why genetics?
Why would geneticists get involved in studies of A-bomb survivors?  Wouldn't trauma surgeons, plastic surgeons, and the like be the ones involved in the post-bomb medical studies?  The answer takes us deep into the nature of genetics and the evolutionary theory of the time.

Before the war, it had been shown experimentally that atomic radiation was mutagenic.  Whatever genes were, radiation could induce heritable changes in them.  Ionizing nuclear radiation was one source, but we were exposed to others, such as cosmic rays, chemicals, and who knew what else.  If mutations caused disease or deformity, then a species had to have a way to purge them from its population if it were not to be mutated into extinction.  Some theory held that natural selection worked to preserve favorable genetic variants by making them dominant, and hence to protect the diploid organism from new mutations, which were held mainly to be harmful.  But if most new mutations were thus recessive, then as they rose in frequency, matings would eventually generate recessive homozygotes, and hence severely defective offspring.  Humans had adapted to the 'load' of mutations from these normal exposures, but could we sustain a huge additional dose imposed by modern life?  The survivors of Hiroshima and Nagasaki provided a kind of gruesome natural experiment to answer this question.

At the time it seemed that there was plenty to fear.  The A-bombs were a lesson, but after the war we were experiencing plenty of routine exposures: nuclear testing, and the fear of damage from fallout that was wind-borne from above-ground tests to much of the globe.  There were growing diagnostic and therapeutic exposures to radiation.  There were casual uses, such as machines in shoe stores to see how things fit your feet (kids like me played with them while our parents shopped for shoes).  We were getting annual chest x-rays to look for tuberculosis, and dentists were firing away (I once had a job evaluating the calibration of dentists' x-ray machines--which was highly variable). Radiation, though itself a carcinogen, is even a treatment for cancers that arose for other reasons.  Nuclear power stations had employees, and the uranium that fueled them had to be mined, processed, and shipped.  How much radiation could workers safely be exposed to?

So, questions about the nature of exposure were cogent at the time.  Indeed, Neel's studies of the Yanomami and other Amazonian tribes, on the results of which I did some work early in my career, were in part designed to see what the genetic load was like in indigenous populations today, who like our species' ancestors were not exposed to these industrial-world dangers, to compare with what we were finding in ourselves--including by the RERF.

In the first decades after the attacks, studies of inherited mutation used various indirect indicators, such as stillbirths or birth defects; later, a few protein polymorphisms were found whose variation could be detected by electrophoresis. These tests for gonadal (germ-line) mutation were done on the births of women pregnant when exposed, and on the children of exposed survivors who had not been so disfigured or shunned that they could not marry. Basically no excess mutations were found in these children, nor did they suffer any unusual or early diseases, or shorter lives, that might suggest heritable effects of radiation.

In a sense, this result was presented as 'no mutations were found', a surprise given the expectations at the time, based on the known mutagenic role of radiation.  What was clearly found, rather quickly, was that survivors were at greatly increased risk for cancer.  First to appear were leukemias, an excess of which showed up in the first decade or so.  Then excess cases of some solid tumors arose, with somewhat lower risk and longer latency, and this excess continued throughout the survivors' lives.  There were and still are major debates about the dose-response relationships, but the carcinogenic effect of exposure in survivors was very clear.

The techniques for germ-line mutation detection were crude by today's standards, but I think the result (no real excess of mutations) still generally holds.  After all, gonadal tissue is a small target in terms of numbers of at-risk cells, relative to lungs and other organs.  And harmed gametes might just not have a chance to compete for fertilization and gestation, and never show up as a viable mutated baby who could carry the change into future generations.

But the cancer findings showed clear evidence of radiation as a mutagen, which makes sense given that cancers are to a fundamental extent diseases of changes that transform individual body cells so they no longer behave as they should for their particular organ context. They divide and spread.  So cancer is a genetic disease, and radiation is a mutagen, and that's why it's also a carcinogen.

Findings like these for cancer fit into a genetics research perspective, and with huge improvements in sequencing and genotyping technologies we have gradually grown to view genes as the major elements of biological causation.  Even in the Japanese survivors, however, an important fact is that except for leukemias, solid tumors didn't arise for many years, usually decades, after the bombings.  This must reflect the fact that it takes many genetic changes to turn cells that were normal enough to enable gestation and postnatal life into tumor cells.  The victims who experienced post-exposure cancers may have inherited some risk variants, but the latency implies that they had to await some number of additional, non-radiation-related mutations to supplement those caused by the exposure, before a tumor arose.

Relevance to today's genetics
These things are relevant to the insistence today that genes are responsible for major diseases.  For decades, starting with Archibald Garrod's pioneering use of Mendel's ideas to show that some metabolic diseases clustered among close relatives ('segregated') the way Mendel's selectively chosen pea traits did, human genetics was about clear-cut, basically single-gene pediatric traits.  Indeed, Neel related in his autobiography (Physician to the Gene Pool) that as a medical student he had been discouraged from doing genetics as a research clinician because there was nothing beyond rare, incurable pediatric traits to study.  Fortunately, he ignored that advice.

A 1954 book by Neel and Schull pioneered the idea that genetic factors might be contributing to late-onset traits like cancer (not associated with radiation) and other diseases that aggregated but didn't segregate in families, that is, genes with weak effects (because if they had strong effects they would segregate, a point still not well-learned by today's army of geneticists).  This foreshadowed the era of genetic epidemiology that has gradually led to the notion that your inherited genome is like your palm-lines in foretelling your life and fortunes.  That genetic variation could 'cause' traits that take decades to develop was a strange thought, and we should recognize why we have such trouble finding major factors for such complex, non-segregating late-onset diseases.  The A-bomb studies clearly showed that somatic mutation was a result of the exposure of survivors.  That is one reason I've personally been interested in somatic mutation and its sources and consequences, and why I am skeptical about what I think are highly exaggerated notions about specific genetic causation of complex traits.

A commenter on the first published version of this post noted, correctly, that Jack's 1990 book Song Among the Ruins gives another treatment of postwar Japan, including much more about the biomedical studies and experiences, and the mutational findings.  Plus, it's very movingly written.

I can't judge Ham's book and its political or historical inferences.  He asserts that the bombs were dropped perhaps needlessly, for reasons of global politics involving competition among the winners for domination of Asia.  Dropping the bomb let us get there first.  For me, though, the book presents a disturbing, sad chronicle of what the survivors experienced, even though it is mainly about the geopolitical history.  It is easy for us to have great sympathy for the horrified, scarred, often socially shunned survivors of events that the rest of the Japanese population would rather forget or pretend never happened.  It is easy to criticize the political decisions made at the time, when fallible mortal leaders had to make judgment calls in a very complex political web, with countless military and civilian people being horrified, mangled, or killed on a daily basis.  Other nations on both sides were known to be working on a nuclear bomb.  And the Japanese were not innocent babes; they had committed unconscionable cruelty and horror on their enemies.

Of course, as always the ordinary citizen was caught up in global affairs beyond his or her control--as we are today.  The lesson isn't to blame countries for their wartime atrocities, because no country is angelic under such stress.  The lesson is to prevent such conflict in the first place.  That seems to be a lesson nobody can learn, given what we see today, even in Europe and Asia, even among the WWII belligerent countries, always pressing, edging toward conflict.  We are prisoners of our emotions and our short memories.

But geopolitics and the shackles of history aside,  it is interesting to think about the ways that radiation mutagenesis and carcinogenesis have molded our thoughts more broadly, when it comes to the causes of death and disease even in ordinary times like ours.


[this has been edited from the original posting, to correct grammatical and phrasing mistakes]

Thursday, February 26, 2015

Digesting yeast's message

A new paper in Nature by Levy et al. reports on the genomic consequences of large-scale selection experiments in yeast.  Yeast reproduce asexually, and clones can be labeled with DNA 'barcode' tags and followed in terms of their relative frequency in a colony over time.  This study was able to deal with very large numbers of yeast cells, and because the investigators used barcodes they could practicably follow individual clones without needing large-scale genome sequencing.  Before this, such an experiment was prohibitively costly and laborious.  So the authors add to findings from selection experiments in bacteria, flies, and so on, where mostly aggregate responses could be identified.

In this case, nutrient stress was imposed, and as beneficial mutations occurred and gave their descendant cells (identified by their barcode) an advantage, the dynamics of adaptation could be followed.  The authors showed, in essence, that at the beginning the fitness of the overall colony increased as some clones, bearing advantageous mutations, rose rapidly in relative frequency.  Then the overall colony fitness stabilized, and subsequent advantageous mutations were largely kept at low frequency (most eventually went extinct).  Overall, the authors found thousands of clones carrying different advantageous variants; most fitness effects amounted to only a small (or, for the majority, very small) percentage.  Once a large set of 'fit' variants had become established, new ones had a difficult time making any difference, and hence staying around very long.
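The qualitative dynamics are easy to reproduce in a toy model.  Here is a minimal deterministic sketch (my own illustration, not the Levy et al. model; the clone counts and fitness values are made up): clone frequencies are re-weighted by relative fitness each generation, and mean colony fitness first climbs and then plateaus once the fittest clone dominates.

```python
# Toy replicator-dynamics sketch (my illustration, not the Levy et al. model):
# barcoded clones with small fitness advantages compete in a fixed environment.
# Each generation, clone frequencies are re-weighted by relative fitness:
# x_i(t+1) proportional to x_i(t) * w_i (deterministic selection, no drift).

def step(freqs, fitness):
    """One generation of deterministic selection on clone frequencies."""
    weighted = [f * w for f, w in zip(freqs, fitness)]
    total = sum(weighted)
    return [w / total for w in weighted]

def mean_fitness(freqs, fitness):
    return sum(f * w for f, w in zip(freqs, fitness))

# 1000 neutral founder clones plus a few advantaged ones (hypothetical values).
n_neutral = 1000
fitness = [1.0] * n_neutral + [1.02, 1.05, 1.10]
freqs = [1.0 / len(fitness)] * len(fitness)

history = []
for _ in range(300):
    freqs = step(freqs, fitness)
    history.append(mean_fitness(freqs, fitness))

# Mean colony fitness climbs early as advantaged clones expand, then plateaus
# once the fittest clone dominates -- the qualitative pattern in the paper.
print(round(history[0], 4), "->", round(history[-1], 4))
```

In this toy, a clone with a 2% advantage arising after the 10% clone has taken over would actually be at a relative disadvantage and stay rare--the flavor of the paper's finding that late-arising beneficial mutations are held at low frequency.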

This study will be of value to those interested in evolutionary dynamics, though I think the interpretation may be rather more limited than it should be, for reasons I'll suggest below.  But I would like to comment on the implications beyond this study itself.

Who cares about yeast (except bakers, brewers, and a few labs)?  You should!
This is interesting (or not) you might say, depending on whether you're running a yeast lab, or in the microbrew or bakery business. But there are important lessons for other areas of science, especially genomics and the promises being made these days.  Of course, the lesson isn't a pleasant one (which, you might correctly assume, is why we're writing about it!).

This study has important implications for basic evolutionary theory perhaps, but also for much that is going on these days in human biomedical (and also evolutionary) genetics, where causal connections between genomic genotypes and phenotypes are the interest.  In evolution, selection only works on what is inherited, mainly genotypes, but if causation is too complex, the individual genotype components have little net causal effect and as a result are hardly 'seen' by selection, and evolve largely by chance.  That's important because it's very different from Darwin's notions and the widespread idea that evolution is causally rather simple or even deterministic at the gene level.

Put another way, genomic causation evolved via the evolutionary process.  If natural selection didn't or couldn't refine causation to a few strong-effect genes, that is, to make it highly deterministic at the individual gene level, then biomedical prediction from genome sequences won't work very effectively.  This is especially true for traits, disease or otherwise, that are heavily affected by the environment (as most are) or for late-onset traits that were hardly present in the past or arose post-reproductively and hence didn't affect reproductive fitness and are not really 'specified' by genes.

There was considerable genomic variation between the authors' two replicate yeast experiments.  As one might say, meta-analysis would have some troubles here.  Likewise, from cell lineage to cell lineage, different sets of mutations were responsible for the fitness of the lineage in this controlled, fixed environment. This means that even in this very simplified set-up, genomic causation was very complex.  No 'precise' yeastomic prognostication!

In real biological history, even for yeast and much more so for sexually reproducing species in variable environments, selection has never been unitary or fixed, and genomes are much more complex.  Human populations have until very recently been very much smaller than the 10^8 cells in the yeast experiments; recent population expansion has made the number of low-frequency variants much greater, and with recombination we are vastly more genomically unique.

The bottom line here is that our traits should be much less predictable from genotypes than traits in yeast. We have not reached, nor did our ancestors ever reach, the kind of fitness equilibrium reached in the yeast study under controlled selection, and fixed environments.

Somatic mutation
The authors also compare the large numbers of cells they were able to follow with their barcode-tagging method to the evolution of genetic variation in cancer and microbial infections, where there are even larger numbers of cells in an affected person and, importantly, clones expanding because of advantageous mutations.  From the yeast results, these clonal advantages may not generally be due to one or two specific mutations (with, perhaps, exceptions when chemotherapy or antibiotics exert far stronger selection than was imposed in the yeast experiment).  But the general complexity of such clonal expansions presents major challenges, because they may end up with descendant branches distributed throughout the body, where even in principle the responsible variation can't be directly assessed.

But the implications go far beyond cancer.  As we've recently posted, cancer may be only the clearest single manifestation of a more general phenomenon: the accumulation of somatic mutations that occur in body cells during life and can, in aggregate, have systemic or organism-level consequences.  The older we get, the more likely we are to generate such clones all over the body, and it seems likely that they can become manifest not just as individually ill-behaving cells, but as disease for the whole person.

But it's not just late-onset implications that the yeast work may forebode.  The early embryo and fetus already contain huge numbers of cells, and their descendant clades of cells grow many, many fold by adulthood.  There is no reason not to expect that each of us carries clades of cells in our tissues that function differently than normal.  Let age, environmental exposure, and further mutations add to this, and disease or age-related degeneration can result.  Yet none of this can be detected in an individual's 'genome' as currently viewed.  This is a potentially important fact that, for practical reasons, or what one might call reasons of convenience, is ignored in the wealth of mega-sequencing projects being lobbied for (precision prediction being the most egregious claim).

So a bit of brewer's yeast may be telling us a lot--including a lot that we don't want to hear. Inconvenient facts can be dismissed.  Oh, well, that's just yeast!  They evolve differently!  That was just a lab experiment!  Brewers and bakers won't even care!

So let's just ignore it, as if it only applies to those rarefied yeast biologists.  Eat, drink, and be merry!

Tuesday, January 6, 2015

Is cancer just bad luck? Part II. It's a genetic, but usually unpredictable, disease

Yesterday, we discussed some history of research on the cause and predictability of cancer.  Today, we'll try to raise some questions that seem to have been overlooked in the recent Tomasetti and Vogelstein paper in Science that argues that much or most cancer, with a few notable and clear exceptions, does not arise from inherited genetic mutations, nor from lifestyle exposures, but arises just by bad luck during the countless cell divisions that occur during our lives.  Much reaction to the paper has overlooked these issues as well.

In the usual use of the term, cancer is not genetic because there are only a few types of cancer that are clearly due to inherited variations in known individual genes. Even these are usually only a subset of all instances of cancer of the particular organ in question.  Most breast cancer does not involve inherited variation in the BRCA1 or BRCA2 genes, for example.

At the same time, some cancers, most notably breast but also colorectal and some other cancers, show family correlations of risk, suggesting that multiple contributing inherited variants might be involved. By far the bulk of cancers are 'sporadic' in the sense that they arise without detectable genetic risk factors.  Even large-scale GWAS type studies find very few genome sites that contribute more than individually very small, barely detectable, risk.

Before the frenetic genome mapping era began around 20 years ago, it seemed clear that with few exceptions (those perhaps mainly due to viruses) cancer was the archetype of a lifestyle-related disease.  Smoking caused a very clear risk of lung cancer.  Some viral exposures caused cancers. Colorectal cancers were largely due to low-roughage western diets, and various things like hormone drugs, coffee, and you-name-it, were suspects.  In addition, we knew clearly that ionizing radiation such as in x-rays and in uranium miners caused cancer risk.

The genome-wielders largely took over, of course, but that was as much a sociopolitical coup as one based on any serious science.  DNA was fashionable, sequencers were fancy (and expensive), and we could search the whole genome for the culprit variants.  This turned out largely to be a big, low-payoff bust, though not all geneticists are candid enough to admit it.  Still, to many, with the few known exceptions, cancer has been seen as not a genetic disease.

But it's 'genetic' nonetheless!
This may all be true--it certainly is so empirically.  It gives the impression that cancer is not really a genetic disease in the usual sense of the word, meaning due to inherited risk.  But another sense of the word refers to mechanism, and cancer generally does seem clearly to be genetic in that sense.  It's just that the source of the variation is among cells within the body rather than among people (really, conceptuses) in a population.  Or, more properly, it's a mix.  In fact, if cancer were strictly genetic in the inherited, deterministic sense, the fetus would not develop properly, so one should never expect a single inherited variant to 'cause' cancer by itself.  In this sense, cancer really is, if anything, the archetype of a genetic disease.  Here's why.

Diseases all must arise in some way or other from the behavior of cells.  Usually it is some collective aspect of cells: say, the pancreas's cells as a whole just don't make enough insulin, or, through the way diet and other factors affect them, the bloodstream carries too much of the wrong kinds of fats, which clog arteries.

But cancer is a disease of a single cell that goes awry, and of its cellular descendants.  The reason is that its genes are not responding in the usual self-restrained way for their local tissue environment.  The changes could be induced by viral insertions, or by somatic mutations (that is, mutations occurring in body cells that were not in the sequences the individual inherited at conception).  The mutations cause the cells to divide without the usual orderly constraints.

Somatic mutations are not in the germ line and are not transmitted from parents to offspring.  They don't generate family risk correlations.  They can't be found by GWAS or other studies based on sequencing inherited genomes.  But they are genetic changes nonetheless, and many studies have shown that tumor cells share mutational changes not found in normal tissue from the same person, and that as a tumor grows, spreads, and develops drug resistance, the cells in different descendant parts of the cancer acquire even further mutational changes.

So while the fact that most cancer is not predictable from inherited genotypes is a disappointment, at least for genetic epidemiologists, it's a genetic disease nonetheless.  It's just hard or impossible to detect the individual cells whose combination of 'wrong' changes founds a tumor lineage.

At the same time, there is no reason to doubt that countless inherited genetic variants can affect risk and make a cell more vulnerable to transformative somatic mutations.  It's just that, as GWAS types of research show, the majority of these have individually very small effects--because they only have an effect when some other unlucky mutation(s) happen to arise in the same cell during the person's life.  There can be countless such heritable weak-effect genotypes, and that's why current mapping techniques simply can't find them.

And, yes, it's 'environmental'
Yesterday, we started this series stimulated by the Tomasetti and Vogelstein paper, in which they related the number of dividing cells in an organ to the risk and age of onset of cancers of that organ.  They showed statistically that, with the few known exceptions such as smoking and lung cancer, cancer rates correlated pretty well with these considerations.  Since we ourselves were working with cancer site-specific and worldwide age-patterns, and formulating somatic-mutational models, in those pre-genetic days, these ideas were already rather well established.  The new paper uses newer data and seems very good and apt, but the idea isn't as new as the headlines and attention made it seem.  If anything, the profession at large should never have got to the point of expecting better tumor predictability than was at hand.

Still, environmental risk factors are not ruled out by that analysis.  Environmental or life-history risk factors, like diet or reproductive history and so on, stimulate cell divisions and in that way can affect the risk of mutations arising in the way Tomasetti and Vogelstein suggested: simply the normal errors in DNA copying.  Since the exposure has to affect a cell in a given tissue and in a particular relevant gene being used by that tissue, it is no surprise that the exposure's net effect, and hence predictability, is usually very small.  Still, exposure to environmental agents must contribute to mutations if the agent is known to be mutagenic or to stimulate cell-division.  So epidemiologists may be right that mutagenic or mitogenic exposures can have carcinogenic effect, but Tomasetti and Vogelstein are right that this will be essentially undetectable.  In no way does their analysis relate to the carcinogenic effect per se, just to the net magnitude.  Indeed, we know that such predictions, except relating to a few risk factors like smoking and UV light and HPV virus, haven't proven to be very powerful or reliable.  So there's nothing new here, except to the extent that genetic or environmental epidemiologists are in denial.

But actually, there are very clear environmental factors related to cancer risk.  They have to do with the subtle concept of competing causes.  If mutations arising by chance during cell division ultimately lead to transforming genotypes in some cell, then the longer one lives, the more likely such changes are to arise in at least one cell in the person.  This is generally why most cancer rates rise with age in ways correlated with rates of cell division.
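The arithmetic behind this can be sketched in a few lines.  The numbers below are hypothetical, chosen only to illustrate the shape of the argument, not estimates from the paper:

```python
# Back-of-the-envelope sketch of the 'bad luck' logic (hypothetical numbers,
# not Tomasetti and Vogelstein's estimates): if each cell division carries a
# tiny chance p of producing a fully transformed cell, then after d lifetime
# divisions the chance of having escaped is (1 - p)**d, so risk climbs with
# cumulative divisions -- and hence with both tissue turnover and age.

def lifetime_risk(p_per_division, divisions):
    """P(at least one transforming event) after the given number of divisions."""
    return 1.0 - (1.0 - p_per_division) ** divisions

p = 1e-10  # hypothetical per-division chance of a transforming event
for d in (1e8, 1e9, 1e10):  # cumulative divisions, for tissues of different turnover
    print(f"{d:.0e} divisions -> lifetime risk {lifetime_risk(p, d):.3f}")
```

Living longer, or surviving competing causes like heart disease, simply raises the number of divisions a person's tissues accumulate, which is the competing-causes point in miniature.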

So, if we were to obtain wonderful preventive measures that eliminated heart disease and stroke, cancer rates would go dramatically up!  That is simply because those who no longer died of the former would be alive to await the latter.  That is environmental causation, even if indirect!  Likewise, if we really want to reduce the risk of cancer, all we need do is keep eating McBurgers in greater and greater amounts, start some wars, or continue to over-use antibiotics: then we'll all die of other causes before we're old enough to get cancer.

Among the many things said, now unattributed, by many people including myself: because of the somatic mutational nature of cancer, if you were to live long enough you would get cancer of every organ you have.

Yes, luck is involved!
Indeed, even in inheritors of risk alleles, it need not be that a case of the associated cancer is due to that allele--this is obvious in the sense that the same tumor can arise in people without the allele, usually the vast majority of cases.  So other factors are involved, and neither the natural occurrence of mutations in cell division, nor environmental mutagens or promoting agents, changes the fact that which exposed person gets the wrong mutations in the vulnerable cells is simply a matter of luck.  An environmental mutagen has to hit the wrong set of genes in the wrong cell.  Naturally and fortunately, the odds are against such bad luck.

Tomasetti and Vogelstein are essentially saying that only the internal luck of DNA mis-copying causes cancer.  But environmental factors contribute to those errors, even if any individual exposure has very weak effects relative to a given type of cancer.  Relative to all cancers it's harder to say, because through most of history few people lived long enough to generate the kind of data needed; and since the risk per cell per division is small, and cell division generally slows with age, the newer evidence in an aging population will be statistically weak: cancer rates taper off, cancers grow more slowly, and the elderly have more urgent problems to deal with, as a rule.

But even if these findings are true, if not revolutionary, not so fast!
The idea of risk per at-risk cell per cell division, on which Tomasetti and Vogelstein based their analysis, makes sense, even if it's something that was essentially known decades ago.  We ourselves built multi-hit mutation-accumulation models that seemed to provide reasonably good fits to the known age-onset patterns of specific cancers.  These were based on somatic mutations.  But the T and V paper's analysis actually raises some issues suggesting that the authors may have given too 'pat' an explanation.

Even in the mid-20th century it was known that different species of animal also got a similar array of cancers, but that their accelerating age-specific risks, in principle related to the relative number of cells, were correlated with the species' typical lifespan.  And this had little if anything to do with environmental exposures, since the animals involved were typically those we managed or that had rather uniform environments.  This is not a trivial observation!

For example, inbred animals tell the tale for tissues with a particular life-history of mitosis.  Mice housed in essentially identical conditions develop an array of tumors at age-specific rates.  But mice get them in months, while we get them in decades.  This problem was raised around 1970 by the prominent epidemiologist Richard Peto, but seems to have basically just been (conveniently) ignored.  There are also strain-specific cancer risks in mice and other animals (including dogs and cats) suggesting that inherited vulnerability genotypes may be involved, but not single-gene variants.  If the number of cells at risk, or their division rates, were all that mattered in the just-bad-luck theory, then tiny mice should never get cancer!  And elephants and cows should be dropping over with huge tumors very early in life.

This raises another interesting issue about theory vs. data in understanding cancer.  Among the transformative ideas of the late 1900s was that cancer is a 'multistage' disorder, arising only after several events have occurred in some unlucky cell lineage in the body (or are inherited).  Early results suggested that only 2 events might be responsible.  A number of biostatistical epidemiologists began fitting, or I'd say 'forcing', 2- or 3-stage models to the data.  That is, they had a specific theory, based on the fragmentary evidence then available, and fit the data to it, to estimate, for example, the rates at which the events occurred.  Then they had to explain what those events were, say, a cell-division inducer and a mutation.  But there was very little substantial evidence that this was the general story of cancer, and the evidence was far weaker than the commitment to the model.

Ranajit Chakraborty and I took a different approach.  We applied a more open multiple-hit model and let the data speak for themselves; that is, we estimated (rather than pre-specified) the number of hits required.  We got, I think, better fits and better explanations.  The number of hits was higher, though at the time nothing was known about what they were.  Around 1990, Adam Connor and I suggested that the age pattern of cancer could be accounted for by the age-related probability that some individual cell would acquire some critical set of changes; here we didn't specify the number.  This, too, seemed to fit the age-patterns, and both approaches suggested that cancers as a group were due to similar genetic processes (whether or not they affected different genes in each instance--there were no useful data at the time), but left open the number of events involved.  Since then, it has become clear that many different genes, and different combinations in different instances of cancer of the same organ (lung, stomach, etc.), are involved.  In all, these facts and findings account for the complexity of cancer (and, indeed, of many other common normal or abnormal traits).
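For readers who want the gist of the 'let the data speak' idea, here is a sketch of the classic multistage logic (my illustration, not our original analysis): if a cell lineage must accumulate k independent rare events, the age-specific hazard rises roughly as age to the power k-1, so the slope of log incidence against log age estimates k-1, letting the data suggest the number of hits rather than pre-specifying it.

```python
# Sketch of the classic multistage ('multi-hit') logic (my illustration, not
# our original code): with k required hits, the age-specific hazard rises
# roughly as age**(k - 1), so the slope of log(incidence) against log(age)
# estimates k - 1 -- one way to estimate, rather than assume, the number of hits.
import math

def multistage_hazard(age, k, scale=1e-9):
    """Approximate k-hit hazard, proportional to age**(k-1)."""
    return scale * age ** (k - 1)

# Generate idealized 'data' from a 5-hit process, then recover k by
# least-squares regression on the log-log scale.
k_true = 5
ages = list(range(30, 81, 5))
incidence = [multistage_hazard(a, k_true) for a in ages]

xs = [math.log(a) for a in ages]
ys = [math.log(i) for i in incidence]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

k_est = slope + 1
print(f"estimated number of hits: {k_est:.2f}")  # recovers k_true = 5
```

Real incidence data are noisier and the recovered number of hits depends on the model's assumptions, but the exercise shows why the steepness of the age-incidence curve carries information about how many events are required.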

But if 'luck' means that some individual cell has, for whatever reason, acquired an initiating set of mutations or growth stimuli, then we can expect that to a great extent, each transformed cell is transformed for a different genotypic reason, and no one gene need be involved, or is sufficient.  You just get a bad roll of the mutational dice in one of your cells, regardless of whether the mutation is only due to DNA copying or has been affected by external agents.  The difference would be rather slight, and the main correlation (as in Tomasetti and Vogelstein) related to how many cell-turnovers are at risk.

But the species differences show that something other than just 'luck', or luck affected by lifestyle factors, is involved, and what that is, is basically not known.  That suggests that the Tomasetti and Vogelstein interpretation is itself missing something important (though it won't change the empirical fact that neither inherited genotypes nor most environmental exposures have highly predictive effects).

In sum
Cancer is more, and less, than pure luck.  And its causes are still poorly understood.  We think as we've said that the Tomasetti and Vogelstein paper points to many things that are shown by new data--but little if anything that wasn't known, shown, and understood for the right reasons a generation ago.  The love affair with inherited genotypes, enabled, encouraged, and funded by a variety of enthusiasms, opportunities, and vested interests, has distracted attention from working from what we knew.  The problem is that the somatic mutational nature of cancer doesn't lead to tidy prediction, prevention or interventions, at least not with current thinking.  But that's where future thinking should be going.