Wednesday, June 17, 2015

Remembrance of things past--in your genes? Part III: Was Lamarck so laughable?

A favorite sport of those holding strict Darwinian views (to the extent they understand Darwin) is to ridicule Jean-Baptiste de Lamarck (1744-1829), he of the stretchy giraffe neck.

Lamarck
Lamarck has gotten a very bad name, at least partly undeservedly.  As are we all, he was a product of his time, his academic environment, and the knowledge then available.  He was apparently a quirky personality and got crosswise with other powerful French biologists, notably Georges Cuvier.  For various reasons it became important for Darwinians to distance themselves from Lamarck as an important intellectual ancestor, in particular, to avoid crediting him for his insight about evolution.  Was this intentional PR-spinning to advance the Darwinians (and British over French science)?

Lamarck did his best to clarify, very explicitly, that he was seeking material explanations, not being mystical as he is essentially accused of being.  His basic idea of the inheritance of acquired characteristics was not even new: it was the obvious thing to infer from the data available at the time, and the idea was commonly held as far back as Hippocrates (classics scholars can probably find it elsewhere as well).

In fact, Lamarck was very clear that he wanted a strictly materialistic explanation for species evolution and diversity. This is interesting, and if it weren't for the rather smug glee with which Lamarck is so universally ridiculed by biologists (whether or not they've read his actual work or even much of Darwin's), we might not want to make the following points.  But under the circumstances, we think it's merited, especially in light of the interpretations being given to widespread reports of various sorts of epigenetic inheritance, that is, of DNA marking rather than sequence change during life.

In our two prior posts in this series (here and here), we took the usual view and stated that any suggestion that epigenetic inheritance is Lamarckian inheritance is trying too hard to be revolutionary, because epigenetic inheritance is imposed by the environment, not by some mystic inner drive on the part of the organism as Lamarck is supposed to have suggested as the cause of adaptive evolutionary change. So, as the usual view has it, even if genome marking is inherited, it is not Lamarckian.  But is that actually so?

"Laws of Nature": who was right?
As we use the term, a Law of Nature is a concept that grew out of the so-called Enlightenment period in European cultural history, beginning around the mid-1600s.  Darwin's view of natural selection was that it was a Law of Nature that Isaac Newton might have been proud to recognize.  In the amended introduction to the 6th edition of the Origin of Species, Darwin added a review of the history of evolutionary thinking, and there he couldn't have expressed his views better: Darwin said that Lamarck "...did the eminent service of arousing attention to the probability of all change in the organic, as well as in the inorganic world, being the result of law, and not of miraculous interposition."  (italics mine)

But in fact Darwin was quite wrong, reflecting his own ideological commitment, not Lamarck's.  This is because Lamarck said something far more important, in my view, than the way Darwin thought, and in fact the exact opposite.  Here's a fundamental point about the central characteristic of a Law of Nature, which Newton himself made very clear in the Principia in 1687:
"Those qualities of bodies that . . . belong to all bodies on which experiments can be made should be taken as qualities of all bodies universally."
That is, if you find something to be true in a local, restricted setting or 'sample', such as dropping an apple or the orbit of the moon around the earth, the same would be true everywhere else that you didn't or couldn't study.  That was the very essence of what it meant for some phenomenon to be a 'law'.  In his books and other writings it is repeatedly crystal clear that Darwin accepted this Newtonian view: natural selection is a law of nature the way the law of gravity is.  Indeed, in the autobiography he penned for his children near the end of his life, he couldn't have been more clear, writing "...now that the law of natural selection has been discovered..." and "Everything in nature is the result of fixed laws."

Lamarck was closer to Newton in time than Darwin was, but what were his views about laws of nature?  I know not, but Georges Cuvier gave a scathing 'eulogy' upon Lamarck's death, a bitter attack that poisoned posterity against Lamarck's reputation, and in it Cuvier noted of Lamarck: "He had meditated on the general laws of physics and chemistry, on the phenomena of the atmosphere, on those of living bodies, and on the origin of the globe and its revolutions."  If accurate, Lamarck shared the prevailing idea of laws of Nature with Darwin.  Yet, when it came to life, Lamarck said something in his book very cogent, which Darwin and his intellectual descendants seem not fully to realize, even to this day:
"In dealing with nature, nothing is more dangerous than generalizations, which are nearly always founded on isolated cases: nature varies her methods so greatly that it is difficult to set bounds to them."
Lamarck wrote this basically before the widespread development of statistical thinking, but it expresses a fact that has still not been absorbed by most biologists in evolutionary or biomedical genetics.  Lamarck said that when the environment changes, that change in turn induces responses in the behavior of organisms.  His theory was about the consequent importance of (1) habit, (2) the use and disuse of traits, (3) the inheritance of acquired characters, and (4) the very slow process of adaptive evolution.

As Lamarck described things, organisms have ways of life that depend on their circumstances.  They seek out resources, like food, that they are able to find, and the resulting 'habits' are essentially their ways of life.  Traits that are used seem to become more important over time but traits that are not used seem to wither or disappear.  Traits acquired during life are passed down to descendants.  The process is very slow, almost unimaginably so.

None of this seems to be at all forced, invented, or strange, and in fact, Darwin adopted all of these ideas in his own way.  As noted above, the idea that one's characteristics were controlled by some sort of transmitted substance is ancient, and the idea of evolution of species was hypothesized in classical times and here and there after that.

[If you are skeptical of our take on this, given Lamarck's clown-like image (due in large part to Darwin and his pals, and to Cuvier), that would not be surprising.  But that image is wrong, as you can see if you check his book itself (the 1984 U of Chicago Press English translation of Lamarck's Zoological Philosophy, with very informative introductions by two highly respected evolutionary scholars, Hull and Burkhardt), or Stephen Jay Gould's The Structure of Evolutionary Theory, or Ernst Mayr's The Growth of Biological Thought.]

So, what was so laughable about Lamarck?
Lamarck is routinely sneered at because, among other things, Darwin and his colleagues were motivated essentially to claim more credit for evolutionary ideas (I'm not the first to suggest this). Lamarck was a human with all the associated failings, but his work is derided because he suggested that the very striving or habits of life caused the associated heritable changes.  By contrast, the Darwinian idea is that new variation arises randomly relative to any need it might or might not have (one can debate how clearly Darwin understood or held such a view).  

However, Lamarck was trying to explain the same phenomena as Darwin, and to do so in terms of natural, historical evolutionary processes, rather than individual events of divine creation.

In our two previous posts in this series we basically took the Darwinian view, that Lamarck was laughable and any attempt to say that epigenetic inheritance was Lamarckian was equally wrong, trying too hard to challenge standard evolutionary theory. In fact, one can argue that epigenetic inheritance really is Lamarckian, based on what he actually said rather than what Darwin said about him, and adjusting for what was known in Lamarck's time.

(1) Habit: How do epigenetic changes arise?  
They arise because of the conditions and behavior of the organism: where they live, what they eat, the stresses they are exposed to, etc., and how their bodies respond to those exposures.  That is, they are the effects of the habits, as one could say, of the organisms.

And such changes are obviously adaptive if they allow the organism to persist and reproduce! If epigenetic changes are important and persist, over time they will be built into the characteristics of the species. Indeed, there are means by which such traits can eventually be built into the genome in the usual DNA-sequence way (one term for this is 'genetic assimilation').  Over time, nothing strange need be involved for epigenetic changes to be wholly compatible with our understanding of evolution.

(2) What about use and disuse?
In modern theory, 'disuse' means that eventually mutational or gene-expression changes (even if due to epigenetic mechanisms) lead a function or a gene to become more degenerate--as Darwinians would say, because there's no selection pressure to maintain it or even because it's costly if not useful and selection will favor its disappearance. And 'use', of course, would mean of adaptive value.  All perfectly compatible with Darwin (and part of his own theory).

(3) What is epigenetic inheritance?  
When and/or if it occurs, it is the modification of DNA (or the contents of cells) that arises in gametes (sperm or egg) during a parent's life and is transmitted to offspring.  The modifications of interest affect gene usage and hence the traits of the organism.  That is, this is the inheritance of acquired characteristics.

(4) What about the pace of evolution?
As to time, Lamarck was every bit as clear as Darwin about the slow, gradual nature of evolution.  Both stressed this, recognizing the need to avoid creationist explanations.

So many of Lamarck's basic ideas were similar to Darwin's (again, historians, not just I, have pointed this out).  What matters is not what someone said 200 years ago.  Instead, the bottom line is that if transgenerational inheritance by way of epigenetic changes acquired during life occurs and is functionally relevant, it is basically Lamarckian, but it is also just a different form of 'mutation'.  And the differential proliferation of successful inherited traits, however acquired, will be a natural form of selection.

In that sense, it is Lamarck who is being misrepresented, and whose work, given his context, is not risible.  That doesn't make Lamarckism entirely 'true'; there are wildly wrong things in Lamarck (but also in Darwin).  Cuvier, himself grossly wrong about life in many ways, cruelly portrayed Lamarck as a real nut case.  Despite Lamarck's sometimes free-wheeling ideas, that is not the judgment of history.  In any case, based on what Lamarck wrote in regard to the issues here, if epigenetic inheritance does turn out to have long-term relevance, which current evidence has not yet established, it does not in any serious way undermine Darwin.  What it does do is undermine ideological Darwinism.  And that is a very good thing for science.

Tuesday, June 16, 2015

Remembrance of things past--in your genes? Part II: some curious inconsistencies

Yesterday, we discussed the idea of 'Lamarckian' inheritance in modern guise, and some curious evidence suggesting that, at least in a rather restricted set of contexts, gene-usage traits (though not new genes) that reflected an individual's life experience could be inherited by its offspring, and maybe its offspring's offspring, or possibly even further descendant generations.  Today, we want to discuss some curious new data on this subject.

The June issue of Cell has three articles on this subject, and an overview commentary about those papers by von Meyenn and Reik, titled "Forget the Parents."  Experiments on mice and data on humans show that, first of all, a new embryo quickly isolates its future germline (sperm or egg) cell lineage, the primordial germ cells (PGCs), from the rest of its cells.  Those germline cells divide and differentiate into ovary and testis tissue, to be used as gametes later in life.  During this time the differentiation of the cells, and even just their maintenance, requires interaction with other cells (such as those that supply blood and hence nutrition), so they aren't entirely isolated; but from a genetic point of view, that is, the stream of developmental cell division and hence genetic inheritance, they are separate lineages of cells.

The papers show that by and large the epigenetic marking, that is, the instructions for which genes to use or to silence, is basically stripped away early in germline lineage formation.  The phrase they use is 'global erasure'.  The authors of one reviewed paper show that early on this marking is 'progressively erased genome wide...to the lowest levels...observed in the human genome to date'.  Another paper shows that a similar pattern occurs in mice, so that the reprogramming of the germline is highly conserved in mammals.  While erasure is not 100%, the evidence shows that the 'erasure of epigenetic memory is a key purpose' of this stripping away of marking.  Forget your parents, indeed!

This can be seen, in principle at least, as a great thing!  It means that we are not hide-bound by our parents' experience in a Lamarckian way.  It means that expression effects that accumulated during their whole lives before they made us do not force us into any particular path, except as basically enabled by our genes proper.  That's good, because our genes are the result of 3+ billion years of evolution and there's a reason that they are as they are, a reason one would not want to trifle with based on our parents' particular experiences.  We each get a fresh start, by forgetting our parents, which lets evolution screen our genomes for fitness to current situations, that is, enables our patrimony to evolve.  Or do we each start life fresh?

The Cell papers and commentary also report that some regions 'evade' epigenetic resetting; these are called 'escapees'.  The figure from Tang et al. in the June 4 Cell shows that the escapees are relatively concentrated in regions where classical protein-coding genes are.  That is, there are different proportions of escapees in genome regions that are replete with functionless repeat sequences, compared to those that are repeat-poor; the latter contain a higher fraction of genes proper.  It is, of course, just a difference in proportions, not a qualitative one.  Whether or how these differences indicate meaningful or important functional consequences is a separate question I can't comment on.


These regions are reported as including segments associated with some brain and growth-related physiological functions, and one paper suggests they are also associated with obesity, schizophrenia, and multiple sclerosis.  What are we to make of such statements and findings?  My first temptation is to say that this is cherry-picking based on known associations, many of which have proven to be minor or fickle, while ignoring other things that are simply not known but could be important.  The size of genomes is such that any substantial subset will likely include, just by chance, what may in retrospect seem to be of particular importance.  Some may be, but with limited knowledge such apparent major findings may be inevitable, unless we resist the temptation to tell Just-So adaptive stories.

Furthermore, there are reports that much or even most of the genome is transcribed into various types of non-coding RNA (reports from the ENCODE project, for example), so the conservation/non-conservation distinction becomes rather unclear: it is taken as remarkable that some fraction of 'real' genes are not erased, while the bulk of the genome, which many are arguing does have function, is erased.  And Tang et al. report that these escapee regions mainly became re-marked later in development...and might contribute to transgenerational epigenetic inheritance.  Why?  One can speculate that this is because they can be marked, during gestation, as a result of the mother's environmental exposures.

Making things even more curious, most of these regions were found not to show sequence conservation.  Since sequence conservation is widely taken as the criterion for evolutionary importance, the Just-So story temptation must be tempered.  In essence, the papers can be seen as correctly reporting data, but trying too hard to attach importance to them.

Obvious questions remain as to the mechanisms by which various levels and patterns of developmental erasure are managed, and how they evolve (these issues were discussed, with substantial findings, in the Cell papers, but are beside our point here).

Playing word games?
In a sense the findings seem really to say "Forget your parents....except when you remember them!"  It is a post hoc explanation, offered to rescue a theory.  But if a theory says something's true except when it isn't, that isn't much of a theory: it's a tautology.  It may be true, but what it means is that there is a mix of effects that we don't yet understand.

This phenomenon, in which in utero experiences based on the mother's state affect the later lives of exposed fetuses, has generated lots of excitement, claims of paradigm shifts, and the like.  This is how our society works.  But as noted by Heard and Martienssen, such effects have not yet convincingly been shown to be truly transgenerational, that is, to affect unexposed descendant generations.  So the epigenetic-claims temperature probably needs to be cooled--which is not how our hyperbolic society treats things these days.

No one can be faulted for interest in a subject or for our inadequate knowledge, since the unknown is what science is about.  However, what we need is some formal, regular understanding of what 'except' means, that is, of when and where in genomes we should find marking conserved and when and where it is stripped or re-set.  Clearly something more specific in the way of knowledge is needed.  This is not to denigrate the science, because we can only know what we've learned so far.  But it shows our general unwillingness to acknowledge our imperfect knowledge, and the temptation to overstate what we do know.  In fact, it would be important enough just to say that epigenetic inheritance seems possible and can be important when it occurs.

Whatever's here isn't Lamarckian
In this case, the Tang paper's authors offer the caveat that the escapee regions "may be sensitive to environmentally induced variations in methylation [marking] in individual embryo [sic] that could persist over a short term or might even become heritable.  As such, they are potential candidates for transgenerational epigenetic inheritance."  This is unexceptionable, but again it is the same tautology, phrased to hint at something remarkable.

The idea is often raised that such transgenerational phenomena, affected by experience, would overthrow Darwin, cause a 'paradigm shift', and be truly remarkable if not dramatic.  The phenomenon is interesting, and important to the extent that it happens, and it would change some aspects of evolution from our usual idea of it, but there is nothing all that new and nothing at all 'Lamarckian' about it.  So to whatever extent marking patterns are inherited and a response to experience, they don't challenge evolutionary theory in that way.

For example, an important fact about evolutionary generalization, one that probably only a small fraction of biologists expounding on the heresy of Lamarckism or proclaiming a conceptual revolution realize, is that plants do regularly transmit acquired traits.  Germ lines in many plants develop from the meristem cells, at the ends of the same individual plant's branches.  Whatever environmentally based genome marking, or mutations and their effects, have occurred during the life of a given branch will be transmitted to its gametes.  What happens in the other meristems of the plant may or may not be similar, as they are independent cell lineages branching--literally!--from the original seed.

But this, while important and different from how we usually think of inheritance, is not all that different from our own condition.  Germline lineages divide into separate egg or sperm cells, and mutations can occur in one but not the others of these cells in an individual.  Presumably, marking can do that as well.

A core epigenetic question is how permanent these settings are, and how specifically they are set and removed.  Hard-to-change settings, like hard-to-mutate genetic variants, could be a part of evolution that would modify our theory in what may seem like a kind of Lamarckian way, but in fact they are still 'Darwinian' in that they are set by environmental conditions imposed on the individual; that is, the set-determining conditions are external rather than internal.

Right now, enough is unknown that we cannot take seriously any claim that this is a fundamentally different aspect of inheritance that will modify evolutionary theory except, at most, in some details.  But such modification is always happening to any scientific theory, and is healthy for science.  Epigenetic inheritance would simply join DNA-sequence inheritance as a mechanism of evolutionary change.  We'd just have to welcome it, and wonder why we didn't think it happened.  But there would be little if any real 'paradigm shift'.  Future work will show what's true.

Indeed, if we think about this carefully, we might not so easily dismiss things, sneeringly, as being 'Lamarckian'.  Tomorrow, we'll examine a bit more closely what it was that Lamarck actually said, and to what extent he deserves the ridicule that is so consistently directed at him.

Monday, June 15, 2015

Remembrance of things past--in your genes? Part I: Is epigenetics 'Lamarckian'?

Marcel Proust's epic novel, Remembrance of Things Past, was a 20th century masterpiece of thoughtful reflections.  It is about small things that trigger recollections of events that happened earlier in life but that were otherwise lost to memory (an alternative title translation from the French is In Search of Lost Time).  For Proust it was the madeleine, a small cake he dipped in tea (the most famous example), or any number of other unexpected nostalgic triggers.  These evoke times past without one explicitly trying to dig into one's memory: they just come up.  We all have that experience, I assume: an odor, a piece of music, or a food triggers a particular birthday or Christmas or girlfriend's name, or an event at the beach.  In some instances, Proust's protagonist remembered things about his parents or other relatives.  But wistful as such memories were, they are things in the past that can be retrieved in memory but not in reality.  Or can they?

Forget the parents?
I'm not talking about, say, reincarnation, or strangely eerie experiences like deja vu.  Instead, I'm thinking of reports over the years, and increasingly these days, of genomes that 'remember' events that happened in their, and their ancestors', past.

The consensus idea about genome evolution, supported by overwhelming evidence of all sorts, is that genetic mutations, that is, changes in DNA sequence, occur through various chemical processes that are random with respect to any functional effect they may have.  It is chance and various forms of selection (see our series of posts on these, starting here) that determine which changes proliferate over the generations.

Before the nature of genetic inheritance was understood, the reasonable idea, going back from Darwin's wild guesses about gemmules, to Lamarck's famously dissed ideas about the inheritance of acquired characteristics*, even to Hippocrates, was that your traits were somehow the result of elements (now we would call them 'molecules') that traveled to your gonads to be transmitted to offspring.  They were molecular images, so to speak, of who you were.  It made sense, but Mendel's work and much else showed clearly that that is not how inheritance works, or evolves.  Lamarckian thinking of that sort is out, and not because the Mendelians are bullies, but because there isn't any evidence for it!  Or is there?

Lamarck redux, or trying too hard?
There are many incentives for scientists to try hard to be the next Darwin, and to press their hopeful ideas on the public.  But most such claims are hopeful monsters, quickly shown not to be true.  For example, ten years ago two Purdue plant geneticists published a paper in Nature reporting that the plant Arabidopsis, related to mustard, had a self-correcting mechanism that for generations could restore 'good' gene versions, replacing 'bad' mutants.  This, if true, would be a kind of Lamarckism, in that the organism could remember what was good and impose it (as I recall, the authors were not claiming to be resuscitating Lamarck, but rather that this was a new or different kind of inheritance).

It quickly turned out that the results were due to experimental artifact: pollen contamination and/or other problems undermined them, and nobody, probably not even the original authors, believes them any longer (the authors have tried to suggest that the artifacts didn't explain everything, but this hasn't been convincing, and not even the authors seem to be following it up).  Nice try, but no cigar.

At the same time, over recent years there have been findings suggesting that experiences acquired during life, involving gene expression, could be transmitted to offspring without being encoded in DNA sequence.  Instead, they were epigenetic, that is, they involved chemically modifying the DNA, without changing its sequence, in a way that affected which genes were being used in given contexts.  Clear examples involved coat color genetics in mice and some physiological responses related to obesity and associated traits.  The idea is that a fetus could acquire the change in gene usage while in utero, which would affect its traits, and then 'remember' the gene-usage setting and transmit it to its offspring.

There have long been suggestions that offspring can, even as adults, have physiological traits that resemble their parents', not through inheriting genetic variation but through inheriting physiological states themselves.  Examples included traits like blood pressure, where a mother's state during pregnancy set her children on a related path, such as toward high blood pressure.  Terms like 'set point' were used to describe how the infant's body was 'set' to respond to its life experience in a way that resembled the mother's state, but not because of her specific genotype.  These results could not be related to known genes at the time, but technology has improved, there are now several examples, and some of the genetic basis seems to be becoming known.

Epigenetic changes are real
In short, what we know has to do with gene usage, not gene sequence.  Gene usage is affected by very well-known mechanisms that modify the chemical state of a given DNA region in ways that enable a nearby gene to be used (or, depending on the mark, prevent its usage).  This is known as epigenetic marking because it is not due to mutations in DNA sequence.  The difference between Lamarckian and modern inheritance ideas is basically that epigenetic changes can be directed--that is, set or removed--based on experience.  Generally, the idea is that epigenetic changes are responses that relevant cells 'know' how to make in a given environmental context.

These effects thus seem 'Lamarckian' in a restricted sense, but that has been thought to be very restricted.  Your body's cells, say muscle or heart cells,  may respond to their environment (e.g., what's passing by in the blood stream) by context-specific mechanisms that use epigenetic marking to turn some genes on or shut others down.  This may be inherited by the body's cells, when they divide, unless changing circumstances lead them to change the genes they're using.  But that would not necessarily be inherited, because all you transmit to your offspring is a sperm or egg cell.  There is no reason to think that, for example, nutritional components that affect how insulin is used or how fat cells store energy would affect the use of energy-storing genes in sperm or egg cells.

This might happen, however, if the body cells, or even all the cells of a fetus including the germ line, also sense the maternal environment, and that in turn induces similar changes in the fetal cells.  The physiological settings of the fetus would reflect the mother's experience, not by being transmitted in her egg cell but by the effects of her blood constituents via the placenta.

A fine review of the state of knowledge as of 2014 is by Heard and Martienssen ("Transgenerational Epigenetic Inheritance: Myths and Mechanisms," Cell, 3/27/14, p99). This post today and Monday's post take some selective bits from that, but if you are interested in this subject, it seems to be an excellent source to read carefully.

This figure shows how exposure in utero can transmit to the offspring (F1 generation) or grandchildren (F2), but only if it also appears in the next generation in unexposed individuals is it really now incorporated into the genome for future generations.  The figure labels stress and nutrition as possible causes of gene-expression change that could get into the germline.

Figure from Heard and Martienssen, 2014, Cell
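The generation-counting logic behind "truly transgenerational" can be sketched as a toy function (the labels and structure here are my own illustration, not from the paper): when a pregnant female (F0) is exposed, the F1 fetus and even that fetus's germ cells (the future F2) experience the exposure directly, so only an effect persisting to F3 demonstrates transmission to individuals never exposed at all; through the male line, F2 already qualifies.

```python
# Toy sketch (my own illustration, not from the Cell papers) of the
# generation-counting rule for 'truly transgenerational' inheritance.

def directly_exposed(exposed_individual):
    """Generations whose cells experience an exposure first-hand."""
    if exposed_individual == "pregnant female F0":
        # her body, the F1 fetus, and the fetus's germ cells (the future F2)
        return {"F0", "F1", "F2"}
    if exposed_individual == "adult male F0":
        # his body and his germ cells (the future F1)
        return {"F0", "F1"}
    raise ValueError("unknown exposure scenario: " + exposed_individual)

def is_truly_transgenerational(exposed_individual, generation_with_effect):
    """An effect counts only if seen in a generation never directly exposed."""
    return generation_with_effect not in directly_exposed(exposed_individual)

# An F2 effect after maternal exposure could just be direct exposure in utero:
print(is_truly_transgenerational("pregnant female F0", "F2"))  # False
# Only persistence to F3 rules that out:
print(is_truly_transgenerational("pregnant female F0", "F3"))  # True
# Through the male line, F2 already counts:
print(is_truly_transgenerational("adult male F0", "F2"))       # True
```

This is just bookkeeping, but it is exactly why the maternal-exposure examples need a third generation before anyone should use the word 'transgenerational'.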

If altered epigenetic settings affected all the fetus's cells, then the settings could be inherited by its offspring, that is, the mother's grandchildren.  Even with no further environmentally induced effects, this could indeed be transmitted for multiple generations, and be more truly 'Lamarckian' inheritance into posterity.  Unless the pattern really shows up in the 3rd generation, it will not be considered truly transgenerational.  But even then, or with the examples, there are at least four problems to consider.

First, we know that these set-points are generally changeable during life.  Circumstances set and un-set them, as cells respond to their environment.  Indeed, epigenetic changes are in large part responsible for how tissues differentiate--into stomach, lung, brain, skin, etc.--during embryonic development.  That's how you become a differentiated organism.  Having gene usage too rigidly programmed during an adult's (the parent's) life could prevent the offspring from even becoming an offspring.  So there is likely to be a re-set mechanism so that a fertilized egg can start life anew.  Why would your Mom's experience override your own responses?

Secondly, the evidence to date suggests, at least, that even the persistent marking that's been observed fades or disappears eventually.  It is not as permanent as changes in the DNA sequence itself (the genes proper).  The problem that raises is that it won't be part of long-term evolution.  Unless--unless a phenomenon called 'genetic assimilation' occurs.  That's when something not engraved in DNA persists for whatever favoring reason and eventually some 'real' mutations--DNA sequence changes--with similar effect arise.  In that case, the actual hard-wired changes can persist, with the 'good' trait being produced even when the epigenetic marking has long gone.  But that then is a form of ordinary rather than Lamarckian selection, and how often would such things arise in the world? This has been debated since CH Waddington's advocacy of the idea as an important evolutionary process, in the mid-20th Century, and indeed the idea was proposed in the late 1800s before any actual genes were known, in a somewhat different context and known as the Baldwin effect.
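The genetic-assimilation scenario can be caricatured in a few lines of simulation (all parameters invented for illustration; this is a sketch of the idea, not of any published model): an environmentally induced trait is favored but not inherited, while a rare DNA mutation produces the same trait heritably.  Because carriers of the mutation always show the favored trait whereas induction is unreliable, ordinary selection gradually hard-wires what began as a plastic response.

```python
import random

# Toy model of 'genetic assimilation' (invented parameters, for illustration).
def simulate(advantage, pop_size=1000, mu=0.001, induction=0.8,
             generations=300, seed=42):
    """Return the final fraction of the population carrying the DNA variant."""
    rng = random.Random(seed)
    pop = [False] * pop_size  # True = carries the hard-wired (DNA) variant
    for _ in range(generations):
        # Carriers always show the trait; others show it only when induced.
        weights = [advantage if (carrier or rng.random() < induction) else 1.0
                   for carrier in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        # Offspring inherit the DNA variant; epigenetic induction is NOT inherited.
        pop = [carrier or (rng.random() < mu) for carrier in parents]
    return sum(pop) / pop_size

with_selection = simulate(advantage=1.5)  # the trait aids survival/reproduction
no_selection = simulate(advantage=1.0)    # the trait is selectively neutral
print("hard-wired fraction, trait favored:", with_selection)
print("hard-wired fraction, trait neutral:", no_selection)
```

Whether such assimilation is common in nature is exactly what has been debated since Waddington; the sketch only shows that the logic requires nothing beyond ordinary, non-Lamarckian selection.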

Third, and at least as important, too much epigenetic change could prevent new mutations--'real' evolutionarily relevant change--from having effects, if gene usage patterns, which are an important part of evolution, were too rigidly entrenched by DNA marking.

Fourth, sperm and egg cells develop in particular cell lineages in a fetus, initially called primordial germ cells (PGCs), lineages that are isolated from the cell lineages that form the rest of the body, and vice versa.  So how is it that something specifically affecting gene usage in a particular organ, like a blood vessel, kidney or eye, would also be 'set' in the gonadal cells?  How exotic would such an information-passing mechanism have to be?  And if it exists, would we have to re-think our skepticism about Lamarckian inheritance--because if such specific mechanisms exist, they would really be the transmission, in the patrimony, of things acquired by experience during life?

In fact, one recent paper has suggested that this may be occurring.  Male rats exposed to a particular odorant known to activate one particular odor-receptor ('olfactory receptor', or OR) gene, and at the same time exposed to a mild fear-inducing stress, were conditioned to activate that OR when exposed to the stress. These males were mated to unconditioned females, and the investigators report that the offspring males respond strongly to that odorant (via the specific OR gene).  But when such offspring males, who had not been exposed to the conditioning fear stress, were mated with unconditioned females, the next, grand-rat, generation also showed the preferential usage of this OR gene.

This latter study is remarkable if you think of the fact that a rat has about 1000 different OR genes, two instances of each (one inherited from each parent), and they are scattered in sets of varying numbers across most of its chromosomes.  So how is it that the rat's body 'knows' how to just mark that one particular OR gene, not just in its nose cells but also in its sperm cells, so the marking can be inherited?

If this experiment is to be believed, and it is remarkable enough that it must be carefully tested, and the effect shown to last for further generations, then we have to wonder what mechanism might underlie it. Is it really Lamarckian?  There is a sort of precedent, in the worm C. elegans, in which olfactory gene switching can be inherited for multiple generations and may have adaptive function.  But if this is true in mammals, is it general enough to challenge the axiom about inheritance on which current evolutionary theory rests?

Adaptive function would seemingly be something a species would 'want' to be permanent and not easily erasable; yet, put in a more mechanistic way, a freeze-in-place mechanism would eventually work against adaptation if circumstances changed.  And how would such decision-making mechanisms work?  Wouldn't genetic assimilation remove this from epigenetic control?

Some new papers in the June 4th issue of the journal Cell raise questions, or perhaps raise serious problems, about how to interpret these various results on gene-usage effects and their inheritance, and we'll talk about them tomorrow.
-------------------------------------


Friday, June 12, 2015

In memoriam

My father passed away last night, at the over-ripe old age of 99.92 years.  He was just another guy, not a celebrity, though he might wistfully have wished otherwise.  He wondered near the end if he'd led a good enough life, remembering that doing things for others was how his college, Antioch College, at the time of its glory days, defined a good life. He had been a news reporter, career civil servant, a folk artist making enamels and origami, and a writer of doggerel.  Lots of doggerel!

As his various systems began falling apart, literally from head to toe, Dad several times told me he had had enough, though he couldn't (or wouldn't, he wasn't clear) take his own life.  He just wanted his misery to be over with.  At his age and in his condition, he hardly had the means even if in his terrorized panics he might wish to end it all.  Over many weeks, he faded in and out of partial awareness of people, surroundings, memories, just as his physical abilities and his physiology were also heading down the road to Nowhere.

Our Pincushion Finales
For a while, my father was largely trapped in a Sartre-like Hell, confined not forever with unsavory people, but within the abstract techno world of modern biomedicine.  Its obsession with plumbing, and its failure, despite the very best of intentions, to be just plain decent to others and to ourselves when it runs out of pins to insert almost everywhere, cause huge distress.  This, even if the reason is that it can keep us going in one gear or another for a long time, as our various systems experience their accelerated decline.  The most visible current commenter on this state of affairs from inside the medical system is Atul Gawande, whose work is very much worth reading--and his suggestions on making the most meaningful kind of end of life worth heeding.

As he was (the person, not the horse)
The obstructionist troglodytes who impede assisted-suicide measures are in part responsible for the nation's increasing Agony of the Aged, a tragedy that makes the Greeks' tragedies seem farce by comparison.  Our relentless spending on huge projects designed to prevent or cure disease is only going to exacerbate things.  One can't denigrate the desire to prevent or at least cure serious diseases, but one can critique the lack of more comprehensive thought about the problem of life as a finite proposition.  Rather than focusing intensely, with laser-like precision, on finding treatments or even real cures for early-onset, life-devastating diseases, our widely publicized stress on 'precision' medicine is going to leave ever more people in miserably decrepit states for increasing numbers of years. Pinpointing cause, genetic or otherwise, leads to pincushion outcomes and distracts from the real 'point', which should be the overall quality of life.

The problem is real and getting worse.  We are setting many or most of us up for ending up as medical pincushions.  We simply do not have a humane policy about aging and disease, because not only are the issues challenging, technically as well as philosophically, but they are too deeply interleaved with material interests.

"Affordable care"
My father, though not really wealthy, had a good or even de luxe health care plan--as a civil servant he had what many Congressmen have but cruelly deny to the rest of society, and those without it often suffer a fate one can hardly bear to think about.  But in Dad's case money was not a barrier, and in the end it became clear that he was not going to get any better, suffer less discomfort, or experience less terror in a hospital.  So the rest of his family were able to arrange for hospice care.

We can all be thankful, very thankful, for the hospice movement.  What they do seems incredibly honorable.  We were lucky, if one can use that ironic term in a non-cynical way, because my father was eventually moved out of the ER, out of the hospital pincushion ward, and into a hospice-care section of the retirement complex where he had lived for the past 20 years.  Being 'downstairs' wasn't the same as being in the recognizable--even to one in and out of partial confusion--comforting environs of his apartment, but at least he had calm, knowledgeable, and supportive care that enabled him to fade away in relative peace.  My sister and step-mother were at his side during these last weeks, though he often didn't recognize even them.

Even then, he was handed from one regime of medical responsibility to another, assessment and palliative measures weren't uniformly or rapidly applied, one nursing or medical hand didn't entirely know what the other was doing.  All were helpful and conscientious, but these are daunting circumstances.  The net result: a distraught person helplessly wanting it to be over with.  Here is a poignant recounting by our daughter, his granddaughter, of her visit to him, fortunately long enough before the very end.

At least he had skilled nursing care and some intervention by the hospice team.  Even so, in his last days, wraith-like and hardly able to move or respond, he had some flailing, sleep-interrupting terrors that for some reason, even under hospice care, were not prevented.

How many are even as lucky as he was?  How many end their lives multiply intubated and pinned like a collector's butterfly, largely left alone in a rush-rush, impersonal hospital room?  How many have little or perhaps even none of my father's level of care? How many doctors know or care enough, or are free enough from fearing judgment or lawsuits, to ease their patients' way out?  How many of our many elderly contemporaries even have access to any semblance of hospice care when it becomes clear that 'medicine' can no longer help?

It requires no cynicism about our medical system to recognize that we face a huge dilemma, about which we've recently posted.  Even those with faith in an afterlife seem to cling to this worldly one. Most of us would not give up on medical treatment, or research, where it can really do good; and no one would want to just write off those suffering even from avoidable diseases related to lazy or indulgent lifestyles.  Even when lifestyle choices are largely responsible, we wouldn't want to stop considering genetic or other causes of vulnerability. We don't want to ignore antibiotic resistance, because infectious diseases take even those for whom it is not a favor to ease them out of misery.

But the inevitable hard fact is that the more our success in such preventive or therapeutic work, the more we condemn ourselves to deteriorating in our every bodily system and, in the absence of a better way, being degraded, more like objects than subjects, in our final experiences.

The more we live well into senior years, the more this will be our dragged-out fate, and those who for various reasons, both sincere and self-interested, promise otherwise are committing culpable acts of thoughtless or indirect cruelty.  There is no obvious solution, but there is an obvious issue, and it has or should have major policy implications. This is a real moral conundrum that our society hasn't really yet faced, except here and there, subordinated as it is to the enamored rush of 'science'.

RIP, Dad
Well, for my Dad this is now moot. He has found peace, as the euphemism goes.  In this age of self-declared 'precision', I guess he can claim to have lived to be 100; after all, rounding up 99.92 is far less a violation of 'precision' than what we're being promised!  And in that spirit, at least he should have the final word, or as he might put it, the last laugh, typical of his humor rather than anything morbid.  Here is a verse he published in his book Now I am Ninety....But It's Not My Fault:

Now I am Ninety
I am ninety.  It’s not my fault.
I got there from zero.
People just fed me—
My mom and my pop.
Oats they threw in,
Fish and fowl and brisket,
And even fruit they threw in
And spinach—ugh—and
Turnips—ugh—they threw in.
And rhubarb—ugh!

I am ninety.  It’s not my fault.
School marms and men taught me
ABC’s and threw in 6 times 7,
English and higher math too.
Calculus—ugh—and differential
Equations they threw in—ugh—
Twisting my brain in knots.
Dates they taught me—1776 and 1812.
And Caesar and et tu Brute
They taught me.  My head gorged with stuff.
Surely it was not my fault.

I am ninety.  I am not to blame.
They sent me to war.
They taught me to shoot—ugh—
Sent  me on forced marches—ugh—
Taught me to swim in oil-slick water,
And identify strange planes in the sky,
And fight fires on aircraft carriers.
I came home as others did not.
It is not my fault
That I am here at ninety.

On my own, I think, came love
And coupling and kids
And diapers—ugh—
And  careers and dear ones died—sigh—
And  it was not my fault.
DNA, perhaps, and more love
And twenty odd-shaped pills—ugh—
And I am here at 90,
And as I have said,
Through no fault of my own,

I am here.

Thursday, June 11, 2015

Occasionality, probability, ....., and grantsmanship

In a previous post some time ago, we used the term occasionality to refer to events or outcomes that arise occasionally, but are not the result of the kinds of replicable phenomena around which the physical sciences developed and for which probability concepts and statistical inference are constructed.  Here, we want to extend the idea as it relates to research funding.

There has long been recognized a kind of physics envy among biologists, who wish to have a precise, rigorous theory of life to match theories of motion, atomic chemistry, and the like. But we argue that we don't yet have such a theory; or perhaps the theory of evolution and genetics that we do have, which is in a sense already a theory of occasionality, is close to the truth.

Instead of an occasionality approach, assumptions of repeatability are used to describe life and to justify the kinds of research being done, even though a core part of our science is that evolution, which generates genomic function, largely works by generating diversity and difference rather than replication.  Since individual genetic elements are transmitted and can have some frequency in the population, there is at the nucleotide level some degree of repetition, even if no two genomes--the rest of that element's genomic context--are entirely alike.  The net result is a spectrum of causal strength or regularity.  Because many factors contribute, the distribution of properties in samples or populations may be well-behaved, that is, may look quite orderly, even if the underlying causal spectrum is one of occasionality rather than probability.
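To make that last point concrete, here is a toy simulation, with purely invented numbers, of a population in which each individual's trait value is the sum of effects from its own subset of many weak causal factors.  No two individuals share the same causal configuration, and yet the sampled trait distribution looks quite orderly:

```python
import random

random.seed(1)

N_FACTORS = 300   # hypothetical number of weak causal factors
N_PEOPLE = 5000

# Each factor gets a small, randomly assigned effect on the trait.
effects = [random.gauss(0, 1) for _ in range(N_FACTORS)]

traits = []
configs = set()
for _ in range(N_PEOPLE):
    # Each individual carries its own subset of the factors:
    # its unique causal 'configuration'.
    carried = tuple(random.random() < 0.5 for _ in range(N_FACTORS))
    configs.add(carried)
    traits.append(sum(e for e, c in zip(effects, carried) if c))

mean = sum(traits) / N_PEOPLE
sd = (sum((x - mean) ** 2 for x in traits) / N_PEOPLE) ** 0.5

# No two individuals share a causal configuration...
unique = len(configs) == N_PEOPLE
# ...yet the fraction of the sample within one standard deviation of the
# mean is roughly what a tidy Gaussian distribution would give (about 0.68).
within_1sd = sum(abs(x - mean) < sd for x in traits) / N_PEOPLE
print(unique, round(within_1sd, 2))
```

The point is not that any real trait works exactly this way; it is that an orderly-looking distribution is fully compatible with causation that never repeats itself at the individual level.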

Strongly causal factors, like individual variants in a particular gene, are those whose effects are usually manifest when the factor occurs; such a factor generates repeatability.  It, and analysis of it, fit standard statistical concepts that rely on, indeed are built upon, the idea of repeatable causation with fixed parameters. But that is a deception whose practice weaves the proverbial tangled web of deeper realities.  More often, and more realistically, each occurrence of an 'occasional' event arises from an essentially unique combination of causal factors.  The event may arise frequently, but the instances are not really repeats at the causal level.

This issue is built into daily science in various, sometimes subtle, ways.  For example, it appears as a fundamental factor in research funding.  To get a grant, you have to specify the sample you will collect (whether by observational sampling or experimental replicates, etc.), and you usually must show, with some sort of 'power' calculation, that if an effect you specify as being important is taking place, you'll have a good chance of finding it with the study design you're proposing.  But making power computations has become an industry in itself; that is, there is standard software, and there are standard formulas, for doing such computations.  They are, candid people quietly acknowledge, usually based on fictitiously favorable conditions in which the causal landscape is routinely oversimplified, the strength of the hypothesized causal factors exaggerated, and so on.
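For readers unfamiliar with what such a computation involves, here is a minimal sketch of the standard two-sample power formula, with invented numbers for illustration.  Note how modestly exaggerating the assumed effect size on paper turns an underpowered design into a seemingly well-powered one:

```python
import math

def norm_cdf(x):
    # Standard normal cumulative distribution function, via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(effect_size, n_per_group):
    """Approximate power of a two-sided, 5%-level two-sample z-test.
    Note the built-in assumption: a fixed, repeatable effect size."""
    z_crit = 1.959964  # critical z for alpha = 0.05, two-sided
    ncp = effect_size * math.sqrt(n_per_group / 2.0)
    # The lower-tail rejection region contributes negligibly and is ignored.
    return 1.0 - norm_cdf(z_crit - ncp)

# A realistically weak effect (standardized difference d = 0.1), 200 per group:
weak = power_two_sample(0.1, 200)
# The same design, with the effect optimistically doubled on paper:
optimistic = power_two_sample(0.2, 200)
print(round(weak, 2), round(optimistic, 2))   # about 0.17 vs about 0.52
```

The exaggerated version is the one that looks fundable; the realistic one is the study that actually gets run.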

Power calculations and their like rest on axioms or assumptions of replicability, which is why they can be expressed in terms of probability from which power and significance types of analysis are derived.  Hence study designs and the decisions granting agencies make often if not typically rest on simplifications we know very well are not accurate, not usually close to what evidence suggests are realistic truths, and that are based on untested assumptions such as probability rather than occasionality.  Indeed, much of 'omics research today is 'hypothesis free', in that the investigator can avoid having to, or perhaps is not allowed to, specify any specific causal hypothesis except something safely vague like 'genes are involved and I'm going to find them'.  But how is this tested?  With probabilistic 'significance' or conceptually similar testing of various kinds, justified by some variant of 'power' computations.

If you are too speculative, you simply don't get funded.
Power computations often are constructed to fit available data or what investigators think can be done within fundable cost limits.  This is strategy, not science, and everybody knows it.  Nowhere near the promised fraction of successes occurs, except in the sense that authors can always find at least something in their data that they can assert shows a successful result.  That essentially fabulous power calculations are accepted is also one reason that really innovative proposals are rarely funded, despite expressed intentions by the agencies to fund real science: power computations are hard to do for something that's innovative, because you don't know what the sampling or causal basis of your idea is.  But the routine ones described above are safe.  That's why it's hard to provide that kind of justification for something really different--and, to be fair, it makes it hard to tell when something really different is really, well, whacko.

A rigorous kind of funding environment might say that you must present something at least reasonably realistic in your proposed study, including open acknowledgment of causal complexity or weakness.  But our environment leads the petitioning sheep to huddle together in the safety of appearances rather than substance.  If this is the environment in which people must work, can you blame them?

Well, one might say, we just need to tighten up the standards for grants, and not fund weak grant proposals.  It is true that the oversubscribed system does often ruthlessly cut out proposals that reviewers can find any excuse to remove from consideration, if for no other reason than massive work overload.  Things that don't pass the oversimplified but requisite kinds of 'power' or related computation can easily be dropped from consideration.  But the routine masquerade of occasionality as if it were probability is not generally a criterion for turning down a proposal.

What is done, to some extent at least, is to consider proposals that are not outrightly rejectable, instead scoring them based on their relative quality as seen by the review panel.  One might say that this is the proper way to do things: reject those with obvious flaws (relative to current judgment criteria), but then rank the remaining proposals, so that those with, say, weaker power (given the assumptions of probability) are just not ranked as high as those with bigger samples or whatever.

But this doesn't serve all that well, either.  That's because of the way bureaucracies work: administrators' careers depend on getting more funding each year, or at least on keeping the portfolio they have.  That means that proposals will always be funded from the top-ranked downward in scores until the money runs out.  This guarantees that non-innovative ideas will be funded if there aren't enough strong ideas. And it's part of the reason we see the kinds of stories, based on weak (sometimes ludicrously weak) studies, blared across the news almost every single day.

We have a government-university-research complex that must be fed.  We let it grow to become that way.  Given what we've crafted, one cannot really push hard enough to get deeply insightful work funded and yet stop paying for run-of-the-mill work; political budget-protection is also why a great many studies of large and costly scale simply will not be stopped.  This is not restricted to genetics. Or to science.  It's the same sort of process by which big banks or auto companies get bailed out.

How novel might it be if it were announced that only really innovative or more deeply powerful grants were going to be funded, and that institute grant budgets wouldn't be spent otherwise!  They'd be saved and rolled over until truly creative projects were proposed.  In a way, that's how it would be if industry had, once again, to fund its own research rather than farm it out to the public to pay for via university labs.

For those types of research that require major databases, such as DNA sequence and medical data (e.g., to help set up a truly nationwide single medical-records system and avoid various costs and inefficiencies), the government could obligate funds to an agency that currently exists, like NCBI or CDC, to collect and maintain the data.  Then, without the burden of collecting the data, university investigators with better ideas, or even ideas about more routine analysis, would only have to be supported for the analysis.

History has basically shown that Big Data won't yield the really innovative leaps we all wish for; they have to come from Big Ideas, and those may not require the Big Expense that is to a great extent what is driving the system now, in which, to some extent regardless of how big your ideas are, if you only have small budgets, you won't also have tenure. That is a major structural reason why people want to propose big projects even when important, focused questions could be answered by small ones: you have to please your Dean, and s/he is judged by the bottom line of his/her faculty.  We've set this system up over the years, but few as yet seem ready to fight it.

Of course this will never happen!
We know that not spending all available resources is naive even to suggest.  It won't happen.  First, on the negative side, we have peer review, and peers hesitate to give weak scores to their peers if it means loss of funding overall. If for no other reason (and there is some of this already), panel members know that the tables will be turned in the future, and their proposals will be reviewed then by the people they're reviewing now.  Insiders looking out for each other is to some extent an inherent part of the 'peer' review process, although tight times do mean that even senior investigators are not getting their every wish.

But second, we have far more people seeking funding than are being funded, or than there are funds for, and we have the well-documented way in which established figures keep the funds largely locked up, so they can't go to younger, newer investigators.  The system we've had for decades had exponential growth in funding and in numbers of people being trained built into it.  In the absence of a cap on funding amount or, better yet, on investigator age, the power pyramids will not be easy to dislodge (they never are).  And, one might say generically, the older the investigator, the less innovative and the more deeply and safely entrenched the ideas--such as probability-based criteria for things for which such criteria aren't apt--will be.  More than that, the powerful are the same ones inculcating their thoughts, and the grantsmanship they entrain, into the new up-and-coming who will constitute the system's future.

With the current inertial impediments, and the momentum of our conceptual world of probability rather than occasionality, science faces a slow evolutionary rather than a nimble future.