Tuesday, May 31, 2016

Genes: convenient tokens of our time

My post today, perhaps typically cranky, was triggered by an essay at Aeon about the influence that the film Still Alice has had on thinking about Alzheimer's Disease (AD).  As the piece puts it, the film presents AD as a genetic disease with a known cause and a simply predictable, doom-like course.  The authors argue that the movie is more than entertainment.  It's a portrayal that raises an important ethical issue, because it is very misleading to leave the impression that AD is a predictable genetic disease.  Clear genetic causation, and thus the simple 'we can test for it' representation, applies only to a small fraction of AD cases.  The film badly misrepresents the overall reality of this awful disease (a good treatment of Alzheimer's disease and its history is Margaret Lock's thoughtful The Alzheimer Conundrum, 2013, Princeton University Press).

While focusing on AD, the Aeon piece makes strong statements about our obsession with genes, in ways that we think can be readily generalized.  In a nutshell, genes have become the convenient tokens of our time.

Symboling is a key to making us 'human'
If there is any one thing that most distinguishes our human species from others, it may be the use of language as a symbolic way to perceive the world and communicate with others.  Anthropologists have long said that symboling is an important key to our evolution and to the development of culture, which is itself based on language.

Symbol and metaphor are used not just to represent the world and to communicate about it, but also to sort out our social structure and our relationships with each other and the world.  Language is largely the manipulation or invocation of symbols.  In a species that understands future events and generalities, like death and sex, in abstract terms, the symbols of language can be reassuring or starkly threatening.  We can use them to soothe ourselves or to manipulate others, and they can also be used in the societal dance around who has power, influence, and resources.

Symbols represent a perception of reality, but a symbol is not in itself reality.  It is our filter, on or around which we base our interactions and even our material lives.  And, science is as thoroughly influenced by symbols as any other human endeavor.

Science, like religion, is a part of our culture that purports to help us understand and generalize about the world.  But because science is itself a cultural endeavor, it is also part and parcel of the hierarchy- and empire-building we do in general, part of a cultural machinery that includes self-promotion and mutually reinforcing service industries, including the news media and even scientific journals themselves.

The current and even growing pressures to maintain factory-like 'productivity', in terms of grants coming in and papers going out, are largely at odds with the fundamental purpose of science (as opposed to 'technology').  Unlike designing a better product, in the important, leading-edge areas of science we don't know where we're going.  That is indeed what makes it science.  Exploring the unknown is what really good science is about.  That's not naturally an assembly-line process, because assembly lines depend on using known facts.  However, our society is increasingly forcing science to be like a factory, with a rather short-term kind of fiscal accountability.

Our culture, like any culture, creates symbols to use as tokens as we go about our lives.  Tokens are reassuring or explanatory symbols, and we naturally use them in the manipulations for various resources that culture is often about.  Nowadays, a central token is the gene.

DNA; Wikipedia

Genes as symbols
Genes are proffered as the irrefutable, ubiquitous cause of things, the salvation, the explanation, in ways rather similar to the way God and miracles are proffered by religion.  Genes conveniently lend themselves to manipulation by technology, and technology sells in our industrial culture.  Genes are specific rather than vague, are enumerable, and can be seen as real core 'data' with which to explain the world.  Genes are widely used as ultimate blameworthy causes, responsible for disease, which comes to be defined as what happens when genes go 'wrong'.  Being literally unseen, like angels, genes can take on an aura of pervasive power and mystery.  The incantation by scientists is that if only we are enabled to find them, we can even fix them (with CRISPR or some other promised panacea), exorcising their evil.  All of this invocation of fundamental causal tokens is particulate enough to be marketable for grants and research proposals, great for publishing in journals and for news media to gawk at in wonder.  Genes provide impressively mysterious tokens that scientists can promise to work near-miracles by manipulating.  Genes stand for life's Book of Truth, much as sacred texts have traditionally done and, for many, still do.

Genes provide fundamental symbolic tokens in theories of life: its essence, its evolution, human behavior, good and evil traits, atoms of causation from which everything follows.  They lurk in the background, responsible for all good and evil.  So at this point in human history, it is not surprising that reports of finding genes 'for' this or that have unbelievable explanatory panache.  It's not a trivial aspect of this symbolic role that people (including scientists) have to take others' word for what they claim as insights.

This token does, of course, have underlying reality
We're in the age of science, so that it is only to be expected that we'll have tokens relevant to this endeavor.  That we have our symbols around which to build other aspects of our culture doesn't mean that the biology of genes is being made up out of whole cloth.  Unlike religion, where things can be 'verified' only by claims of communication with God, genes can of course, at least in principle, be checked and claims tested.  Genes obviously do have major and fundamental roles in life.  If that isn't true, we are really misperceiving fundamentals of our existence.  So, even when complexities of causation are daunting, we can claim and blame what we want on genes and in a sense be correct at least at some level.  That enhances and endorses the token value of genes.

Genes do have great sticking power.  The Aeon piece about AD is just one of countless daily examples.  A fraction of AD cases are so closely associated with known variants in a couple of genes that true causation--whatever the mechanism--seems an entirely plausible explanation.  Likewise, there are hundreds or thousands of disorders that seem clearly to be inherited as the result of the malfunction of one or two specific genes.  The cultural extension we are stressing here is from these clearly causative findings to the idea that causation can generally be enumerated in convenient ways from people's inherited genomes, while other aspects of biological causation are treated as rather superficial or incidental.  That, in a sense, is typical of deeply held cultural icons or tokens.

The problem with genes as tokens is that they are invoked generally or generically in the competition for cultural resources, material and symbolic.  Personally, we think there are issues, genetic issues in fact, that deserve greater investment than the easier-to-invoke bigger-is-better approach.  They include a much more intense attack on the many traits that we already know, without any serious doubt, to be tractably genetic--due to one or only a couple of genes--and which real genetic therapy might therefore treat or prevent effectively.  By contrast, most traits, even if they are affected by genetic variation as all traits must be, are predominantly due to environmental or chance causative factors.  We have ways to avoid many diseases that don't require genetic approaches, but as vaguely defined entities such traits are perfect subjects for invoking the gene token, and health policy in the industrial world clearly shows this.

Some progress does of course occur because of genetically based research, but the promise far outpaces the reality of genetic cures.  Yet genes are the material tokens that keep the motor running far beyond the actual level of progress.  They effectively reflect our time--our molecular, computer, technological culture and its imagery, our love of scale and size, and the material grandeur they generate.

Every culture, every generation has its tokens and belief systems.  Genes are among ours.  They're never perfect.  People seek hope, and what velvet robes and gilded cathedrals and mosques provide for many, the humming laboratories provide for a growing number of others.

Tokens, symbols and metaphors: they drive much of what people do, even in science.

Monday, May 30, 2016

Cancer moonshot and slow-learners

Motivated by Vice President Biden's son's death at an early age from cancer, President Obama recently announced a new health initiative which he's calling the cancer 'moonshot'.  This is like a second Nixonian 'war' on cancer but with a seemingly more benign metaphor (though cancer is so awful that treating it as a 'war' seems apt in that sense).  Last week the NYTimes printed an op-ed piece that pointed out one of the major issues and illusions hidden by the rhetoric of the new attack on cancer, as of the old:  curing one cancer may extend a person's life, but it also increases his or her chances of a second cancer, since risks of cancer rise with age.

Cancers 'compete' with each other for our lives
The op-ed's main point is that the more earlier-onset cancers we cure, the more late-onset, less tractable tumors we'll see.  In that sense, cancers 'compete' with each other for our lives.  The first occurrence would get us unless the medical establishment stops it, thus opening the door for some subsequent Rogue Cell to generate a new tumor at some later time in the person's life.  It is entirely right and appropriate to point this out, but the issues are subtle (though not at all secret).

First, the rise in the risk of some cancers slows with age.  Under normal environmental conditions, cancers increase in frequency with age because they are generally due to the accumulation of multiple mutations of various sorts, so that the more cell-years of exposure, the more mutations will arise.  At some point, one of our billions of cells acquires a set of mutational changes that lead it to stop obeying the rules of restraint in form and cell division that are appropriate for the normal function of its particular tissue.  A tumor results from a combination of mutations caused by exposure to mutagens and mutations that occur simply by DNA replication errors--totally chance events--when cells divide.  As the tumor grows it acquires further mutations that lead it to spread or resist chemotherapy, etc.

This is important, but the reasons are subtle.  The attack on cells by lifestyle-related mutagens, like radiation or chemicals in the environment, becomes less intense as people age and simplify their lives, slowing down a lot of the exposures to these risk factors.  In addition, cell division rates--cell divisions being the times when mutations arise--themselves slow down, so the rate of accumulation of new mutations, whether by chance or by exposures, slows as well.  This decrease in the increase of risk with age at least tempers the caution that curing cancers in adults will leave them alive for many years and hence at risk for many more cancers (though surely it will make them vulnerable to some!).
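
To make that argument concrete, here is a minimal simulation sketch--our own toy construction, not the op-ed's analysis--of an Armitage-Doll-style multistage model.  Every number in it (the number of mutational 'hits' required, the per-year hit rate, the age at which cell divisions slow) is made up purely for illustration; the point is only that sequentially accumulating hits make risk climb steeply with age, and that slowing cell division late in life tempers that climb.

```python
# Toy multistage (Armitage-Doll-style) cancer model.  All parameter values
# are invented for illustration; nothing here is fitted to real data.
import random

K = 6        # mutational 'hits' a cell lineage needs before a tumor arises
MU = 0.08    # per-year probability of acquiring the next hit

def age_at_tumor(divisions_slow):
    """Simulate one person; return the age at the Kth hit (or 110 if none)."""
    hits, age = 0, 0
    while hits < K and age < 110:
        age += 1
        # If cell divisions slow after 60, the per-year hit rate is halved.
        mu = MU * (0.5 if (divisions_slow and age > 60) else 1.0)
        if random.random() < mu:
            hits += 1
    return age

random.seed(1)
for label, slows in [("constant division rate", False),
                     ("divisions slow after 60", True)]:
    ages = [age_at_tumor(slows) for _ in range(50_000)]
    print(f"{label}: tumor by 70: {sum(a <= 70 for a in ages)/len(ages):.3f}, "
          f"by 90: {sum(a <= 90 for a in ages)/len(ages):.3f}")
```

Under the constant-rate assumption, late-life risk keeps compounding; halve the hit rate after 60 and the late-life excess shrinks, which is the tempering effect just described.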


Apollo 11, the mission that first landed humans on the Moon; Wikipedia

Competing causes: more to the story, but nothing at all new
There's an issue not mentioned in the article that is, in an indirect way, much more important.  Perhaps the authors of the op-ed didn't think of it, chose not to mention it, or, as specialists, simply weren't aware of it.  But it's not at all secret; indeed it is something we ourselves studied for many years, and have blogged about here before: anything that reduces early-onset diseases increases the number of late-onset diseases.  So, curing cancer early on (which is what the op-ed was about) increases risk for every later-onset disease, not just cancer.  In the same way, as we've noted before, reducing heart disease or auto accident rates or snake bite deaths will increase dementia, heart disease, diabetes, and cancer--all other later-onset diseases--simply because more people will live to be at risk.  This is the Catch-22 of biomedical intervention.
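
The demographic logic is easy to demonstrate with a minimal competing-risks sketch.  The hazard functions below are invented for illustration, not estimated from any real data; the only point is the direction of the effect.

```python
# Competing risks, with made-up hazards: eliminating an earlier-onset cause
# of death raises the lifetime count of later-onset disease, simply because
# more people survive long enough to be at risk.
import random

def cause_of_death(cure_early_cancer):
    for age in range(1, 111):
        # Toy age-specific hazards (per year); purely illustrative values.
        early = 0.0 if cure_early_cancer else (0.01 if 40 <= age < 60 else 0.0)
        late = 0.000003 * age * age   # later-onset diseases, rising with age
        r = random.random()
        if r < early:
            return "early cancer"
        if r < early + late:
            return "late-onset disease"
    return "other"

random.seed(2)
for cured in (False, True):
    deaths = [cause_of_death(cured) for _ in range(100_000)]
    print(f"early cancers cured: {cured} | "
          f"early-cancer deaths: {deaths.count('early cancer')}, "
          f"late-onset deaths: {deaths.count('late-onset disease')}")
```

Run it and the 'cured' population shows thousands more late-onset deaths: no paradox, just more survivors left at risk.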

In this sense all the marketing rhetoric about 'precision' genomic medicine is playing a game with the public, and the game is for money--research money among other things.  There's no cure for mortality or the reality of aging.  Whether due to genetic variants or lifestyle, we are at increasing risk for the panoply of diseases as we age, simply because exposure durations increase.  And every victory of medicine at earlier ages is a defeat for late-age experience.  Even were we to suppose that massive CRISPRization could cure every disease as it arose, and people's functions didn't diminish with age, the world would be so massively overpopulated as to make ghastly science fiction movies seem like Bugs Bunny cartoons.

But the conundrum is that, because of the obvious and understandable fact that nobody wants major early-onset diseases, it seems wholly reasonable to attack them with all the research and therapeutic vigor at our disposal.  The earlier and more severe the disease, the greater the gain in satisfactory life-years.  But the huge investment that NIH and its university clients make in genomics and you-name-it related to late-age diseases is almost sure to backfire in these ways.  Cancer is but one example.

People should be aware of these things.  The statistical aspects of competing causes have long been part of demographic and public health theory.  Even early in the computer era many leading demographers were working on the quantitative implications of competing causes of death and disease, and similar points were very clear at the time.  The relevance to cancer, as outlined above, was also obvious.  I know this first-hand, because I was involved in this myself early in my career.  It was an important part of theorizing, superficial as well as thoughtful, about the nature of aging and species-specific lifespan, and much else.  The hard realities of competing causes have been part of the actuarial field since, well, more or less since the actuarial field began.  It is a sober lesson that apparently nobody wants to hear.  So it should not be written about as if it were a surprise, or a new discovery or realization.  Instead, the question--and it is in every way a fair question--should be why we cannot digest this lesson.  Is it because of our normal human frailty of wishful thinking about death and disease, or because it is not convenient for the biomedical industries to put this sober reality front and center?

It's hard to accept mortality and that life is finite.  Some people want to live as long as possible, no matter the state of their health, and will reach for any life-raft at any age when they're ill.  But a growing number are signing Do Not Resuscitate documents, and the hospice movement, which aids those with terminal conditions who want to die in peace rather than wired to a hospital bed, continues to grow.  None of us wants a society like that in Anthony Trollope's 1881 dystopic novel The Fixed Period, where at age 67 everyone is given a nice comfortable exit--at least that was the policy until it hit too close to home for those who legislated it.  But we don't want uncomfortable, slow deaths, either.

The problem of competing causes is a serious but subtle one, and health policy should reflect the realities of life, and of death.  I wouldn't bet on it, however, because there is nothing to suggest that humans as a collective electorate are ready or able to face up to the facts when golden promises are being made by legislators, bureaucrats, pharmas, and so on.  But science and scientists should be devoted to truth, even when truth isn't convenient to their interests or for the public to hear.

Thursday, May 19, 2016

Another look at 'complexity'

A fascinating and clear description of one contemporary problem for sciences dealing with 'complexity' can be found in an excellent discussion of how brains work, in yesterday's Aeon magazine essay ("The Empty Brain," by Robert Epstein).  Or rather, of how brains don't work.  Despite the ubiquity of the metaphor, brains are not computers.  Newborn babies, Epstein says, are born with brains that can learn, respond to the environment and change as they grow.
But here is what we are not born with: information, data, rules, software, knowledge, lexicons, representations, algorithms, programs, models, memories, images, processors, subroutines, encoders, decoders, symbols, or buffers – design elements that allow digital computers to behave somewhat intelligently. Not only are we not born with such things, we also don’t develop them – ever.
We are absolutely unqualified to comment on the neurobiological details.  Indeed, even the author himself doesn't provide any sort of explanation of how brains actually work, using general hand-waving terms that are almost tautologically true, as when he says that experiences 'change' the brain.  Such change involves countless neural connections (it must, since what else is there in the brain that is relevant?), and would be entirely different in two different people.

In dismissing the computer metaphor as a fad based on current culture, which seems like a very apt critique, he substitutes vague reasons without giving a better explanation.  So, if we don't somehow 'store' an image of things in some 'place' in the brain, somehow we obviously do retain abilities to recall it.  If the data-processing imagery is misleading, what else could there be?

We have no idea!  But one important thing this essay reveals is that the problem of understanding multiple-component phenomena is a general one.  The issues with the brain seem essentially the same as the issues in genomics that we write about all the time, in which causation of the 'same' trait in different people is not due to the same causal factors (and we are struggling to figure out what those factors are in the first place).

A human brain, but what is it?  Wikipedia

In some fields, like physics, chemistry, and cosmology, each item of a given kind--an electron, a field, a photon, a mass--is identical to the others, and their interactions are replicable (if current understanding is correct).  For complex patterns, like the motions of many galaxies, each with many stars, planets, and interstellar material and energy, the computational and mathematical details are far too intricate and extensive for simple solutions.  So one has to break the pattern down into subsets and simulate them on a computer.  This seems to work well, however, and the reason is that the laws of physics apply equally to every object or component.

Biology is composed of molecules, and at their level of course the same must be true.  But at anything close to the level of our needs for understanding, replicability is often very weak, except in the general sense that each person is 'more or less' like the next in physiology, neural structures, and so on.  At the level of underlying causation, we know that we're generally each different, often in ways that are important.  This applies to normal development, health, and even behavior.  Evolution works by screening differences, because that's how new species and adaptations and so on arise.  So it is difference that is fundamental to us, and part of that is that each individual with the 'same' trait has it for different reasons.  The reasons may be nearly the same or very different--we have no a priori way to know, and no general theory that is of much use in predicting.  And we should stop pouring resources into projects that nibble away at tiny details, a convenient distraction from the hard thinking that we should be doing (as well as from addressing many clearly tractable problems in genetics and behavior, where causal factors are strong and well known).

What are the issues?
There are several issues here, and it's important to ask how we might think about them.  Our current scientific legacy has us trying to identify fundamental causal units, and then to show how they 'add up' to produce the trait we are interested in.  'Add up' means they act independently, and each may, in a given individual, have its own particular strength (for example, variants at multiple contributing genes, with each person carrying a unique set of variants, and the variants having some specifiable independent effect).  When one speaks of 'interactions' in this context, what is usually meant is that (usually) two factors combine beyond just adding up.  The classical example within a given gene is 'dominance', in which the effect of the Aa genotype is not just the sum of the A and the a effects.  Statistical methods allow for two-way interactions in roughly this way, by including terms like zA×B (some quantitative coefficient z times the A and B states in the individual), assuming that z is the same in every A-B instance.
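
As a concrete illustration of this generic setup, here is a small sketch using simulated (not real) data: an ordinary least squares fit of a trait y on A, B, and their product, where the coefficient on the product term is the z of the description above.  The effect sizes are arbitrary choices.

```python
# Generic additive-plus-interaction model: y = a*A + b*B + z*(A*B) + noise.
# Simulated data with arbitrary effect sizes, just to show the z term.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
A = rng.integers(0, 3, n)    # e.g., genotype at one locus coded 0/1/2
B = rng.integers(0, 3, n)    # genotype at a second locus
y = 1.0 * A + 0.5 * B + 0.8 * (A * B) + rng.normal(0, 1, n)   # true z = 0.8

X = sm.add_constant(np.column_stack([A, B, A * B]))
fit = sm.OLS(y, X).fit()
print(fit.params)   # roughly recovers [0, 1.0, 0.5, 0.8]
```

Note that the fitted z is a single number, assumed identical for every individual carrying the A-B combination--exactly the kind of simplification discussed next.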

This is very generic (not based on any theory of how these factors interact), but for general inference that they do act in relevant ways, it seems fine.  Theories of causality invoke such patterns as paths of factor interaction, but they almost always make various convenient simplifying assumptions: that interactions are only pair-wise; that there is no looping (the presence of A and B sets up the effect, but A and B don't keep interacting in ways that might change it, and there's no feedback from other factors); that effect sizes are fixed rather than different in each individual context.

For discovery purposes this may be fine in many multivariate situations, and that's what the statistical package industry is about.  But the assumptions may not be accurate, and/or the number and complexity of interactions may be too great to be usefully inferred from practical data: too many interactions for achievable sample sizes, parameters affected by unmeasured variables, individual effects too small to reach statistical 'significance' but in aggregate accounting for the bulk of effects, and so on.

These are not newly discovered issues, but often they can only be found by looking under the rug, where they've been conveniently swept because our statistical industry doesn't and cannot adequately deal with them.  This is not a fault of the statistics except in the sense that they are not modeling things accurately enough, and in really complex situations, which seem to be the rule rather than the exception, it is simply not an appropriate way to make inferences.

We need, or should seek, something different.  But what?
Finding better approaches is not easy, because we don't know what form they should take.  Can we just tweak what we have, or are we asking the wrong sorts of questions for the methods we know about?  Are our notions of causality somehow fundamentally inadequate?  We don't know the answers.  But what we now do have is knowledge of the causal landscape that we face.  It tells us that enumerative approaches are what we know how to do, but also, as we now know, not an optimal way to achieve understanding.  The Aeon essay describes yet another such situation, so we know that we face the same sort of problem, which we call 'complexity' as a not very helpful catchword, in many areas.  Modern science has shown this to us.  Now we need to use appropriate science to figure it out.

Monday, May 16, 2016

What do rising mortality rates tell us?

When I was a student at a school of public health in the late '70s, the focus was on chronic disease. This was when the health and disease establishment was full of the hubris of thinking they'd conquered infectious disease in the industrialized world, and that it was now heart disease, cancer and stroke that they had to figure out how to control.  Even genetics at the time was confined to a few 'Mendelian' (single gene) diseases, mainly rare and pediatric, and few even of these genes had been identified.

My field was Population Studies -- basically the demography of who gets sick and why, often with an emphasis on "SES," or socioeconomic status: the effect of education, income and occupation on health and disease.  My Master's thesis was on socioeconomic differentials in infant mortality, and my dissertation was a piece of a large study of the causes of death in the whole population of Laredo, Texas over 150 years, with a focus on cancers.  Death rates in the US, and in the industrialized world in general, were decreasing, even if ethnic and economic differentials in mortality persisted.

So, I was especially interested in the latest episode of the BBC Radio 4 program The Inquiry, "What's killing white American women?"  Researchers, used to decades of increasing life expectancy in all segments of the population, paid close attention when they noted that mortality rates were actually rising among less educated, middle-aged American women.

A study published in PNAS in the fall of 2015 by two economists was the first to note that mortality in this segment of the population, among both men and women, was rising enough to affect mortality rates among middle-aged white Americans in general.  Mortality among African American non-Hispanics and Hispanics continued to fall.  If death rates had remained at 1998 levels, or continued to decline, among white Americans in this age group with no more than a high school education, half a million deaths would have been avoided--more, says the study, than died in the AIDS epidemic through the middle of 2015.

What's going on?  The authors write, "Concurrent declines in self-reported health, mental health, and ability to work, increased reports of pain, and deteriorating measures of liver function all point to increasing midlife distress."  But how does this lead to death?  The most significant causes of mortality are "drug and alcohol poisonings, suicide, and chronic liver diseases and cirrhosis."  Causes associated with pain and distress.


Source: The New York Times

The Inquiry radio program examines in more detail why this group of Americans, and women in particular, are suffering disproportionately.  Women, they say, have been turning to riskier behaviors--drinking, drug addiction and smoking--at a higher rate than men.  And half of the increase in mortality is due to drugs, including prescription drugs, opioids in particular.  Here they zero in on the history of opioid use during the last 10 years, a history that shows in stark relief that the effects of economic pressures on health and disease aren't due only to the income or occupation of the target or study population.

Opioids, prescribed as painkillers for the relief of moderate to severe pain, have been in clinical use since the early 1900's.  Until the late 1990's they were used only very briefly after major surgery or for patients with terminal illnesses, because the risk of addiction or overdose was considered too great for others.  In the 1990's, however, Purdue Pharma, the maker of the painkiller OxyContin, began to lobby heavily for expanded use.  They convinced the powers-that-be that chronic pain was a widespread and serious enough problem that opioids should and could be safely used by far more patients than traditionally accepted.  (See this story for a description of how advertising and clever salesmanship pushed OxyContin onto center stage.)

Purdue lobbying led to pain being classified as a 'vital sign', which is why any time you go into your doctor's office now you're asked whether you're suffering any pain.  Hospital funding became partially dependent on screening for and reducing pain scores in their patients.

Ten to twelve million Americans now take opioids chronically for pain.  Between 1999 and 2014, 250,000 Americans died of opioid overdose.  According to The Inquiry, that's more than the number killed in motor vehicle accidents or by guns.  And it goes a long way toward explaining rising mortality rates among working-class middle-aged Americans.  Note that this rising mortality rate has nothing to do with genes.  It's basically the unforeseen consequence of greed.

Opioids are money-makers themselves, of course (see this Forbes story about the family behind Purdue Pharma, headlined "The OxyContin Clan: The $14 Billion Newcomer to Forbes 2015 List of Richest U.S. Families;" the drug has earned Purdue $35 billion since 1995), but pharmaceutical companies also make money selling drugs to treat the side effects of opioids: nausea, vomiting, drowsiness, constipation, and more.  Purdue just lost its fight against allowing generic versions of OxyContin on the market, which means both that cheaper versions of the drug will be available and that other pharmaceutical companies will have a vested interest in expanding its use.  Indeed, Purdue just won approval for use of the drug in 11-17 year olds.

In a rather perverse way, race plays a role in this epidemic, too, in this case a (statistically) protective one even though it has its roots in racial stereotyping.  Many physicians are less willing to prescribe opioids for African American or Hispanic patients because they fear the patient will become addicted, or that he or she will sell the drugs on the street.

"Social epidemiology" is a fairly new branch of the field, and it's based on the idea that there are social determinants of health beyond the usual individual-level measures of income, education and occupation.  Beyond socioeconomic status, to determinants measurable on the population-level instead; location, availability of healthy foods, medical care, child care, jobs, pollution levels, levels of neighborhood violence, and much more.

Obviously the opioid story reminds us that profit motive is another factor that needs to be added to the causal mix.  Big Tobacco already taught us that profit can readily trump public health, and it's true of Big Pharma and opioids as well.  Having insinuated themselves into hospitals, clinics and doctors' offices, Big Pharma may have relieved a lot of pain, but at great cost to public health.

Wednesday, May 11, 2016

Darwin the Newtonian. Part V. A spectrum, not a dogma

Our previous installments on genetic drift (a form of chance) vs natural selection (a deterministic force-like phenomenon), and the degree to which evolution is due to each (part 1 here), led to a few questions that we thought we'd address to end this series.

First, there is no sense in which we are suggesting that complex traits arise out of nowhere, by 'chance' alone.  There is no sense in which we are suggesting that screening for viability or utility does not occur as a regular part of evolution.  But we are asking what the nature of that screening is, and what a basically deterministic, Newtonian view of natural selection--a view that we believe is widely, if often tacitly, held--implies, and how accurate it may be.

It's also important here to point out something that is obvious.  The dynamics of evolution, at both the trait and genome levels, comprise a spectrum of processes, not a single one that should be taken as dogma.  A spectrum means that there is a range of relative roles for what can be viewed as determinism and chance, that the two are not as distinct as they may seem, and that even identifying, much less proving, what is going on in a given situation is often dicey.  Some instances of strong selection, like some of chance, seem reasonably clear, and those concepts are apt.  But much, perhaps most, of evolution is a more subtle mix of phenomena, and that is what we are concerned with.

Secondly, we have discussed our view of natural selection before, in various ways.  In particular, we cite our series on what we called the 'mythology' of selection, a term we used to be provocative, in the sense of hopefully stimulating readers to think about what many seem to take for granted.  Yes, we're repeating ourselves some, but the issues are important, and our ideas haven't been refuted in any serious way, so we think they're worth repeating.

A friend and former collaborator took exception to our assumption that people still believe that what we see today is what was the case in the past.  He felt we were setting up a straw man.  The answer is somewhat subjective, but we believe that if you read many, many descriptions of current functions and their evolution, you'll see that they are often if not usually just equated de facto with being 'adaptations', which means that doing what they do now came about because it was favored by the force of selection in the past.  We think it's not a straw man at all, but a description of what is being said by many people much of the time: very superficial, dogmatic assumptions both that selection is determinative and that we can infer the functional reason for a trait.

Of course everyone acknowledges that earlier states had their own functions, that today's came from earlier ones, and that functions change (bat wings used to be forelegs, e.g.), but the idea is that bat flight is here because the way bats fly was selected for.  One common metaphor, going back to an article by Gould and Lewontin, is that evolution works via 'spandrels', traits evolved for one purpose or incidentally part of some adaptation, that are then usable by evolution to serve some new function.  Yes, evolution works through changing traits, but how often are they 'steps' in this sense--or is the process more like a rather erratic escalator, if we need a metaphor?

There are ways for adaptive traits to arise that have nothing to do with Darwinian competition for limited resources, and that are perfectly compatible with a materialist view.  Organismal selection occurs when organisms that 'like' a particular part of their environment tend to hang out there.  They'll meet and mate with others who are there as well.  If the choice has to do with their traits--ability to function at high altitude, or whatever--then over time this trait will become more common in this niche than among their peers elsewhere, and eventually mating barriers may arise, and with them a new species with what appears to be a selected adaptation.  But no differential reproduction is required--no natural selection.  It's natural assortment instead.

All aspects of our structure and function depend on interaction among molecules.  If two molecules must interact for some function to occur, then mutant versions may not serve that purpose and the organism may perish. This would seem most important during embryonic development.  An individual with incompatible molecular interactions (due to genetic mutation) would simply not survive.  This leaves the population with those whose molecules do interact, but there is no competition involved--no natural selection.  It's natural screening instead.

Natural selection of the good ol' Darwinian kind can occur, leading to complex adaptations in just the way Darwin said 150+ years ago.  But if the trait is the result of very many genes, the individual variants that contribute may be invisible to selection, and hence come and go essentially by chance.  This is what we have called phenogenetic drift.  Do you doubt that?  If so, then why is it that most complex traits that are mapped can take on similar values in individuals with very different genotypes?  This is, if anything, the main bottom-line finding of countless very large and extensive mapping studies, in humans and even bacteria.  This is basically what Andreas Wagner's work, which we referred to earlier in the series, is about.  It rather obviously implies that which of equivalent variants proliferates is the result of chance.  There's nothing non-Darwinian about this.  It's just what you'd expect instead.

We'd expect this because the many factors with which any species must deal will challenge each of its biological systems.  That means many screening factors (better, we think, than calling them selection 'pressures', as is usually done).  Most of these are affected by multiple genes.  Genes vary within a population.  If any given factor's effects were too strong, it would threaten the species' existence.  At least most must be relatively weak at any given time, even if they persist over very long time periods.  Multiple traits and multiple contributing genes in this situation mean that, relative to any one trait or gene, the screening must be rather weak.  That in turn means that chance affects which variant proliferates.  There's nothing non-Darwinian about this.  It's essentially why Darwin stressed the glacial slowness of evolution.
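
A simple Wright-Fisher simulation--our own toy construction, with arbitrary parameter values--shows what 'rather weak' means in practice: when a variant's selection coefficient s is small relative to 1/N, its fate is statistically indistinguishable from pure drift.

```python
# Wright-Fisher haploid model: allele A starts at frequency 0.5 and each
# generation is a binomial draw weighted by a selection coefficient s.
# Parameter values are illustrative only.
import numpy as np

def a_fixes(rng, N, s, max_gen=20_000):
    count = N // 2
    for _ in range(max_gen):
        p = count / N
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))   # selection-weighted draw
        count = rng.binomial(N, p_sel)
        if count == 0 or count == N:
            break
    return count == N

rng = np.random.default_rng(3)
N = 1000
for s in (0.0, 0.0001, 0.01):      # neutral; Ns = 0.1; Ns = 10
    fixed = sum(a_fixes(rng, N, s) for _ in range(500))
    print(f"s = {s}: allele A fixed in {fixed}/500 replicates")
```

With s = 0.0001 (Ns = 0.1), the fixation rate is barely distinguishable from the neutral 50%, while s = 0.01 (Ns = 10) behaves like the deterministic, force-like selection of the textbooks.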

There is, however, the obvious fact that known functional parts of DNA are far less variable than regions with no known function.  This can be, and usually is assumed to be, the expected evidence of Darwinian natural selection.  But factors like organismal dispersion or functional (embryonic) adequacy can account for at least some of this.  Longer-standing genes and genetic systems would be expected to be more entrenched because they can acquire fewer differences before they won't work with other elements in the organism.  This is at least compatible with the view we've expressed, and there could be some ways of testing the explanation.

This view means we need not worry about whether a variant is 'truly' neutral in the face of environmental screening.  We could even agree that there's no testable sense in which a variant evolves by 'pure' chance. Even very tiny differences in real function can evolve in a way that is statistically 'neutral'.  Again, this can be the case even if the trait to which such variants contribute is subject to clear natural or other forms of selection.

This view is also wholly compatible with the findings of GWAS, the evidence that every trait is affected by genetic variation to some extent, the fact that organisms are adapted to their environment in many ways, and the fact that prediction based on genotyping is often a problematic false promise.  And because this is a spectrum, randomly generated by mutation, some variants, and/or the traits they affect, will be very harmful or helpful--and will look like strong, force-like natural selection.  Such variants and traits led to Mendel's results, and to the default, if often tacit, assumption that natural selection is the force that explains everything in life.

Further, and for all the same sorts of reasons, the shape of the spectrum--the relative amount of a given level of complexity--is not based on any distribution we know of and hence is not predictable, generally because it is the result of a long history of random and local contexts and contingencies, of various unknown strengths and frequencies (about the past, we can estimate a distribution, but that doesn't mean we understand any real underlying probabilistic process that caused what we see).  This is interesting, because many aspects of genetic variation (and of the tree of life) can be fitted reasonably well to various probability distributions (see Eugene Koonin's paper or his book The Logic of Chance).  But these really aren't causal parametric 'laws' in the usual sense; they are descriptions after the fact, without rigorous causal characteristics.  Generally, prediction of the future will be weak and problematic.

In the view of life we've presented, evolution will show only weak or unpredictable directional tendencies, and the same goes for genetic specificities (and hence predictive power).  It is the trait that is in a sense predictable, not the effects of individual genes.

We think this view of evolution is compatible with the observed facts but not with many of the simplified ideas that are driving life sciences at present.

Our viewpoint is that the swarm of factors, environmental and genomic, means that chance is a major component even of functional adaptations, in the biodesic paths of life.

Tuesday, May 10, 2016

Darwin the Newtonian. Part IV. What is 'natural selection'?

If, as I suggested yesterday, genetic drift is a rather unprovable or even metaphysical notion, then what is the epistemological standing of its opposite: not-drift?  That concept implies that the reproductive success of the alternative genotypes under consideration is not equal. But since we saw yesterday that showing that two things are exactly equal is something of a non-starter, how different is its negation?  

Before considering this, we might note that to most biologists, those who think and those who just invoke explanations, non-drift means natural selection.  That is what textbooks teach, even in biology departments (and in schools of medicine and public health, where simple-Simon thinking is alive and well).  But natural selection implies systematic, consistent favoring of one variant over others, and for the same reason.  That is by far the main rationale for the routine if unstated assumption that today's functions or adaptations are due to past selection for those same functions: we observe today and retroactively extrapolate to the past.  It's understandable that we do that, and it was a major indirect way (along with artificial selection) in which Darwin was able to construct an evolutionary theory that didn't require divine ad hoc creation events.  But there are problems with this sort of thinking--and some of them have long been known, even if essentially stifled by what amounts to a selectionist ideology, that is, a rather unquestioning belief in a kind of single-cause worldview.

What does exactly not-zero mean?
I suggested yesterday that drift, meaning exactly no systematic difference between states (like genotypes), was so elusive as to be essentially philosophical.  Zero-difference is a very specific value and may thus seem especially hard to prove, while non-zero is essentially an open-ended concept and might thus seem trivially easy to show.  But it's not!

One alternative to two things being equal is simply that they have some difference.  But need that difference be specifiable or of a fixed amount?  Need it be constant or similar over instances of time and place?  If not, we are again in rather spooky territory, because not being identical is not much if any help in understanding.  One wants to know by how much, and why--and whether it's consistent or a fluke of sample or local circumstance.  But this is not a fixed set of things to check.

Instead of just 'they're different', what is usually implied is that the genotypes being compared have some particular, specific fitness difference, not just that they differ.  That is what asserting different functional effects of the variants largely implies, because otherwise one is left asserting that they are different....sort of, sometimes, and this isn't very satisfying or useful.  It would be normal, and sensible, to argue that the difference need not be precisely, deterministically constant, because there's always a luck component, and ecological conditions change.  But if the difference varies widely among circumstances, it is far more difficult to make persuasive 'why' explanations.  For example, small differences favoring variant A over variant B in one sample or setting might actually favor B over A in other times or places.  Then selection is a kind of willy-nilly affair--which probably is true!--but much more difficult to infer in a neat way, because it really is not different from being zero on average (though 'on average' is also easier to say than to account for causally).  If a difference is 'not zero', there are an infinity of ways that might be so, especially if it is acknowledged to be variable, as every sensible evolutionary biologist would probably agree is the case.

But then looking for causes becomes very difficult, because all the variants in a population, and all the variation in individual organisms' experience, mean that there may be an open-ended number of explanations one would have to test to account for an observed small fitness difference between A and B.  And that leads to serious issues about statistical 'significance' and inference criteria.  That's because most alleged fitness differences are essentially local and comparative.  In turn that means the variant is not inherently selected; its fate is context-dependent.  Fitness doesn't have a universal value, like, say, G, the universal Newtonian gravitational constant in physics, and to me that means that even an implicitly Newtonian view of natural selection is mistaken as a generality about life.
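
A small simulated example--ours, with made-up numbers--shows how slippery this is in practice: give A a genuinely non-zero but tiny advantage, and the estimated advantage in replicate finite samples flips sign from sample to sample, so A appears favored in some 'studies' and B in others.

```python
# Estimate the selection coefficient s of allele A from replicate
# one-generation frequency changes in a finite population.
# The true s and the population size N are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(7)
true_s, N, p0 = 0.002, 2000, 0.5    # tiny real advantage of A over B

favors_b = 0
for _ in range(20):
    p_sel = p0 * (1 + true_s) / (p0 * (1 + true_s) + (1 - p0))
    p1 = rng.binomial(N, p_sel) / N                 # frequency a generation later
    s_hat = (p1 / (1 - p1)) / (p0 / (1 - p0)) - 1   # s estimated from the odds ratio
    favors_b += s_hat < 0
print(f"true s = {true_s}, yet the estimate favored B in {favors_b}/20 samples")
```

Binomial sampling noise here has a standard deviation roughly twenty times the frequency shift that selection produces in one generation, so nearly half the replicate estimates point the 'wrong' way.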

If selection were really force-like in that sense, rather than an ephemeral, context-specific statistical estimate, its amount (favoring A over B) should approach the force's parameter, analogous to G, asymptotically: the bigger the sample and greater the number of samples analyzed the closer the estimated value would get to the true value.  Clearly that is not the way life is, even in most well-controlled experimental settings.  Indeed, even Darwin's idea of a constant struggle for existence is incompatible with that idea.

There are clearly many instances in which selective explanations of the classical sort seem specifically or even generally credible.  Infectious disease and the evolution of resistance is an obvious example.  Parallel evolution, such as the independent evolution of, say, flight, or of similar dog-like animals in Australia and Africa, may be taken to prove the general theory of adaptation to environments.  But what about all the not-dogs in these places?  We are largely in ad hoc explanatory territory, and the best of evolutionary theory clearly recognizes that.

So, in what sense does natural selection actually exist?  Or neutrality?  If they are purely comparative, local, ad hoc phenomena largely demonstrable only by subjective statistical criteria, we have trouble asserting causation beyond constructing Just-So stories.  Even with a plausible mechanism, this will often be the case, because plausibility is not the same as necessity.  Just-So stories can, of course, be true... but they are usually hard to prove in any serious sense.

Additionally, in regard to adaptive traits within or between populations or species, if genetic causation is due to contributions of many genes, as typically seems to be the case, there is phenogenetic drift, so that even with natural selection working force-like on a trait, there may be little if any selection on specific variants in that mix: even if the trait is under selection, a given allelic variant may not be.

Some other slippery issues
Natural selection is somewhat strange.  It is conceptually a passive screen of variation, but it is often treated as if it were an inherent property of a genotype (or an allele), even though its value depends on what else is at the same locus in the population.  That is, it's treated as an inherent and unchanging property of the genotype...until any competing genotypes disappear.  As the favored allele becomes more common, its amount of advantage will increasingly vary because, due to recombination and mutation, the many individuals carrying the variant will also vary in the rest of their genomes, which will introduce differences in fitness among them (likewise, early on most carriers of the favored 'A' variant will be heterozygotes, but later on more and more will be homozygotes).  When the A variant becomes very common in the population, its advantage will hardly be detectable, since almost all its fellows will have the same genotype at that site.  Continued adaptation will have to shift to other genes, where there still is a difference.  Some AA carriers will have detrimental variants at another gene, say B, and hence reduced fitness.  Relatively speaking, some A's, or eventually maybe all A's, will have become harmful, because even in classical Darwinian terms selection is only relative and local.  So selection, even in the force-like sense, is very non-Newtonian, because it is so thoroughly context-dependent.

Another issue is somatic mutation.  The genotypes that survive to be transmitted to the next generation are in the germ line.  But every cell division induces some mutations, and depending on when and where during development or later life a mutation occurs, it could affect the traits of the individual.  Even if selection were a deterministic force, it screens individuals, and hence its effects include any effects of somatic mutation in those individuals.  But somatic mutations aren't inherited, so even if the mechanism is genetic, their effects will appear as drift in evolutionary terms.

Most models of adaptive selection are trait-specific.  But species do not evolve one trait at a time, except perhaps occasionally when a really major stressor sweeps through (like an epidemic).  Generally, a population is always subject to a huge diversity of threats and opportunities, contexts and changes.  Every one of our biological systems is always being tested, often in many ways at once.  Traits are also often correlated with one another, so pushing on one may be pulling on another.  That means that even if each trait were being screened for separate reasons, the net effect on any one of them must typically be very, very small, even if it is Newtonian in its force-like nature.

The result is something like a Japanese pachinko machine.  Pachinko is a popular type of gambling in Japan.  A flurry of small metal balls bounces down from the top, more or less randomly, through a jungle of pins and little wheels, before finally arriving at the bottom.  The balls bounce off each other on the way, in basically random collisions.  The payoff (we could say it's analogous to fitness) is based on the balls that, after all this apparent chaos, end up in a particular pocket at the bottom.  In the biological analogy, each ball can represent a different trait, or perhaps an individual in a population.  The balls bounce around rather randomly, constrained only by the walls and objects there--nothing steers them specifically.  What's in the pocket is the evolutionary result.

Pachinko machine (from Google images)
 (you can easily find YouTube videos showing pachinkos in action)

All similes limp, and these collisions are probably in truth deterministic, even if far too complex for us to predict the outcome.  Nonetheless, this sort of dynamics among individuals, with their differing genes of varying and context-specific effects, in diverse and complex environments, suggests why change related to a given trait will be a lot like drift: there are so many traits that if each were too strongly force-like, extinction would be the more likely result.  Further, since most traits are affected by many parts of the genome, the intensity of selection on any one of them must be reduced to be close to the expectations of drift.  Adaptive complexity is another reason to think that adaptive change must be glacially slow, as Darwin stressed many times, but also that selection is much less force-like, as a rule.  After the fact, seeing what managed to survive, the result looks compatible with force-like, straight-line selection.

Here, the process seems to rest heavily on chance.  But in a 2014 series of posts on the modes and nature of natural selection, we likened the course that species take through time to the geodesic paths that objects take through spacetime, paths that are determined (and there it really does seem to be 'determined') by the splattered matter and energy at any point they pass through.

An overall view
This leaves us in something of a quandary.  We can easily construct criteria for making some inferences, in the stronger cases, and test them in some experimental settings.  We can proffer imaginative scenarios to account for the presence of organized traits and adaptations.  But evolutionary explanations are often largely or wholly speculative.  This applies comparably to natural selection and to genetic drift as well, and these are not new discoveries, although it seems to be in few people's interest to acknowledge them fully.

Darwin wanted to show by plausibility argument that life on earth was the result of natural processes, not ad hoc divine creation events.  He had scant concepts of chance or genetic drift, because his ideas of the mechanism of inheritance were totally wrong.  Concepts of probabilism and statistical testing and the like were still rather new and only in restricted use.  Darwin would have no trouble acknowledging a role for drift.  How he would respond to the elusiveness of these factors, and that they really are not 'forces', is hard to say--but he probably would vigorously try to defend systematic selection by arguing that what is must have gotten here by selection as a force. 

The causal explanation of life's diversity still falls far short of the kind of mathematical or deterministic rigor of the core physical sciences, and even of more historical physical sciences like geology, oceanography, and meteorology.  Until someone finds better ways (if they indeed are there to be found), much of evolutionary biology verges on metaphysical philosophy for reasons we've tried to argue in this series.  We should be honest about that fact, and clearly acknowledge it.

One can say that small values are at least real values, or that you can ignore small values, as in genetic drift.  Likewise one can say that small selective effects will vary from sample to sample because of chance and so on.  But such acknowledgments undermine the kinds of smooth inferences we naturally hunger for.  The assumption that what we see today is what was the case in the past is usually little more than an assumption. This is a main issue we should confront in trying to understand evolution--and it applies as well to the promises being made of 'precision' prediction of genomic causation in health and medicine.  The moving tide of innumerable genotypic ways to get similar traits, at any time, within or between populations, and over evolutionary time, needs to be taken seriously. 

It may be sufficient and correct to say, almost tautologically, that today's function evolved somehow, and we can certainly infer that it got here by some mix of evolutionary factors.  Our ancestors and their traits clearly were evolutionarily viable or we wouldn't be here.  So even if we can't really trace the history in specifics, we can usually be happy to say that, clearly, whales evolved to be able to live in the ocean.  Nobody can question that.  But the points I've tried to make in this series are serious ones worth thinking seriously about, if we really want to understand evolution, and the genetic causal mechanisms that it has produced.

Monday, May 9, 2016

Darwin the Newtonian. Part III. In what sense does genetic drift 'exist'?

It has been about 50 years since Motoo Kimura, and King and Jukes, proposed that a substantial fraction of genetic variation can be selectively neutral, meaning that the frequency of such an allele (sequence variant) in a population or among species changes by chance--genetic drift--and, furthermore, that selectively 'neutral' variation and its dynamics are a widespread characteristic of evolution (see Wikipedia: Neutral theory of molecular evolution).  Because Darwin had been so influential with his Newtonian-like deterministic theory of natural selection, neutral evolution was and still is referred to as 'non-Darwinian' evolution.  That's somewhat misleading, if convenient as a catch-phrase, and often used to denigrate the idea of neutral evolution, because even Darwin knew there were changes in life that were not due to selection (e.g., gradual loss of traits no longer useful, chance events affecting fitness).

First, of course, is the 'blind watchmaker' argument.  How else can one explain the highly organized functionally intricate traits of organisms, from the smallest microbe to the largest animals and plants?  No one can argue that such traits could plausibly just arise 'by chance'!

But beyond that, the reasoning basically coincides with what Darwin asserted.  It takes a basically thermodynamic belief and applies it to life.  Mother Nature can detect even the smallest difference between bearers of alternative genotypes, and in her Newtonian, force-like way will confer better success on the better genotype.  If we're material scientists, not religious or other mystics, then it is almost axiomatic that a mutation changes the nature of the molecule, if for no other reason than that it requires the use of a different nucleotide, and hence the use and/or production of at least slightly different molecules and at least slightly different amounts of energy.

The difference might be very tiny in a given cell, but an organism has countless cells--many billions in a human, and what about a whale or a tree!  Every nonessential nucleotide has to be provided for each of those billions of cells, and renewed each time any cell divides.  A mutation that deleted something with no important function would make the bearer more economical in its need for food and energy.  The difference might be small, but those who don't waste energy on something nonessential must on average do better: they'll have to find less food, for example, meaning less time spent out scouting and hence exposed to predators, and so on.  In short, even such a trivial change will confer at least a tiny advantage, and, as Darwin said many times in describing natural selection, nature detects the smallest grain in the balance (scale) of the struggle for life.  So even if there is no direct 'function,' every nucleotide functions in the sense of needing to be maintained in every cell, creating a thermodynamic or energy demand.  In this Newtonian view, which some evolutionary biologists hold or invoke quite strongly, there simply cannot be true selective neutrality--no genetic drift!
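
To see the shape of that energetic argument in numbers, here is a deliberately crude sketch.  Every figure in it is invented, chosen only to show that a per-division cost summed over a lifetime of cell divisions is real and nonzero, yet minuscule relative to the whole budget.

```python
# A toy rendering of the 'every nucleotide costs something' argument.
# Every number below is invented; the only point is the shape of the
# reasoning: a per-division cost, summed over a lifetime of divisions,
# is real and nonzero, yet a vanishing fraction of the total budget.
atp_per_spare_nucleotide = 50       # hypothetical synthesis cost per copy
lifetime_cell_divisions = 1e16      # hypothetical divisions in a large body
lifetime_atp_turnover = 1e25        # hypothetical total ATP spent in a lifetime

extra_cost = atp_per_spare_nucleotide * lifetime_cell_divisions
print(f"extra lifetime cost: ~{extra_cost:.0e} ATP "
      f"({extra_cost / lifetime_atp_turnover:.0e} of the whole budget)")
```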


The relative success of any two genotypes in a population sample will almost never be exactly the same, and how could one ever claim that there is no functional reason for the difference?  Just because a statistical test doesn't find a 'significant' difference, in the probabilistic sense that the observed result would not be particularly unusual if nothing were going on, tiny differences can nonetheless be real.  For example, a die that's biased in favor of 6 can, by chance, come up 3 or some other number more often in an experiment of just a few rolls (the reason for dice being biased this way is interesting, but beyond our point here).  Significance cutoff values are, after all, nothing more than subjective criteria that we have chosen as conventions for making pragmatic decisions.
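
A quick simulation makes the dice point vivid; the particular bias toward 6 below is an arbitrary choice.

```python
# Sketch of the biased-die point: a die with a real bias toward 6 can
# still look fair, or even 6-poor, in a short experiment.  The bias of
# 0.20 (vs ~0.167 for a fair die) is an arbitrary illustrative choice.
import random

faces = [1, 2, 3, 4, 5, 6]
weights = [0.16, 0.16, 0.16, 0.16, 0.16, 0.20]   # gently loaded toward 6

random.seed(1)
for n_rolls in (12, 120, 120_000):
    rolls = random.choices(faces, weights=weights, k=n_rolls)
    print(f"{n_rolls:>7} rolls: six came up {rolls.count(6) / n_rolls:.3f} "
          f"of the time (true probability 0.200)")
# A dozen rolls cannot distinguish 0.200 from 1/6; only very large
# samples reliably reveal the bias, yet the bias was real all along.
```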

But what about the lightning strikes?  They are fortuitous events that, obviously, strike individuals in a population at random, unrelated to their genotypes, thus adding some 'noise' to their relative reproductive success, and hence to allele (genetic variant) frequencies in the population over time.  That noise would also be a form of true genetic drift, because it would be due to causes unrelated to any function of the affected variants, whose frequencies would change, at least to some extent, by chance alone.  A common, and not unreasonable, selectionist response is to acknowledge that, OK, there's a minor role for chance, but nonetheless, on average, over time, the more efficient version must still win out in the end: 'must', for purely physical/chemical energetic reasons if no others.  That is, there can be no such thing as genetic drift on average, over the long haul.  Of course, 'overall' and 'in the end' carry many unstated assumptions.  Among the most problematic is that sample sizes will eventually be sufficiently great for the underlying physical, deterministic truth to win out over the functionally unrelated, lightning-strike types of factors.

On the other hand, the neutralists argue in essence that such minuscule energetic and many other differences are simply too weak to be detected by natural selection--that is, too weak to affect the fitness of their bearers.  Our survival and reproduction are so heavily affected by those genotypes that really do influence them that the remaining variants simply are not detectable by selection in life's real, finite, daily hurly-burly of competition.  Their frequencies will evolve just by chance, even if the physical and energetic differences are real in molecular terms.
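
There is a standard way to make 'too weak to be detected' precise: Kimura's diffusion-theory approximation for the probability that a new mutant with selection coefficient s eventually fixes in a diploid population of effective size N.  The sketch below, with illustrative values of N and s, shows the fixation probability collapsing onto the neutral benchmark 1/(2N) once 4Ns is much smaller than 1.

```python
# Kimura's diffusion approximation for the probability that a new mutant
# with selection coefficient s eventually fixes in a diploid population
# of (effective) size N.  As 4Ns -> 0 the answer approaches the purely
# neutral 1/(2N): 'effective neutrality'.  N and s values are illustrative.
import math

def fixation_probability(s, N):
    """P(fixation) for a new mutant starting at frequency 1/(2N)."""
    if s == 0:
        return 1 / (2 * N)                       # strictly neutral limit
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000
print(f"neutral benchmark 1/(2N) = {1 / (2 * N):.2e}")
for s in (1e-2, 1e-4, 1e-6, 1e-8):
    print(f"s = {s:.0e}:  P(fix) = {fixation_probability(s, N):.2e}"
          f"   (4Ns = {4 * N * s:g})")
```

For s small enough that 4Ns is well below 1, the advantaged variant's fate is numerically indistinguishable from the strictly neutral case: selection is 'real' in the formula, but invisible in the outcome.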

But to say that variants that are chemically or physically different do not affect fitness is itself a rather strong assertion!  It is at best a very vague 'theory', resting on strong assumptions about how Newtonian (classical physics) deterministic principles play out in life.  It is by no means obvious how one could ever prove that two variants have no effect.


So we have two contending viewpoints.  Everyone accepts that there is a chance component in survival and reproduction.  The selectionist view sees that component as trivial in the face of the basic physical fact that two things that are different really are different, and hence must be detectable by selection; the neutralist view holds that true equivalence is not only possible but widespread in life.

When you think about it, both views are so vague and dogmatic that they become largely philosophical rather than actual scientific views.  That's not good, if we fancy that we are actually trying to understand the real world.  What is the problem with these assertions?

Can drift be proved?
Maybe the simplest thing, in an empirical setting, would be to rule genetic drift out: to show that even if the fitness difference between two genotypes is small, there is always at least some difference.  But it might seem easier to take the opposite approach and prove that genetic drift exists.  To do that, one must compare carriers of the different genotypes and show that, in a real population context (because that's where evolution occurs), there is no--that is, zero--difference in their fitness.  But to prove that something has a value of exactly zero is essentially impossible!


[Image: dice.  Is each outcome equally likely?  How to tell?]


Again, to use a dice-rolling analogy: a truly unbiased die can still come up 6 a different number of times than 1/6th of the number of rolls: try any number of rolls not divisible by 6!  In the absence of any true theory of causation, or perhaps to contravene the pure thermodynamic consideration that different things really are different, we have to rely on statistical comparisons among samples of individuals carrying the different competing genotypes.  Since the lightning strikes supply at least some irrelevant chance effects, and since there is no way to know all the possible ways the genotypes' effects might differ truly but only slightly, we are stuck making comparisons of the realized fitness (e.g., number of surviving offspring) of the two groups.  That is what evolution does, after all.  But for us to make inferences we must apply some sort of statistical criterion, like a significance cut-off value ('p-value'), to decide.  We may judge the result to be 'not different from chance', but that is an arbitrary and subjective criterion.  Indeed, in the context of these contending views, it is also an emotional criterion.  Really proving that a fitness difference is exactly zero, without any real external theory to guide us, is essentially impossible.

All we can really hope to do, without better biological theory (if such were to exist), is to show that the fitness difference is very small.  But if there is even a small difference, and it is systematic, that is the very definition of natural selection!  Showing that the difference is 'systematic' is easier said than done, because there is no limit to the causal ideas we might hypothesize.  We cannot repeat the study exactly, and statistical tests relate to repeatable events.
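
Here is a rough sketch of the statistical bind.  The standard error of an estimated fitness difference shrinks only with the square root of the sample size, so a finite study can tighten the bound around zero but never close it; the offspring-count spread used below is a purely hypothetical figure.

```python
# Why 'exactly zero' is out of reach: the standard error of an estimated
# fitness difference shrinks only as 1/sqrt(n), so a finite study can
# only bound the difference, never pin it to zero.  The offspring-count
# spread (sd = 2.0) is a purely hypothetical figure for illustration.
import math

sd_offspring = 2.0                  # hypothetical spread in offspring number
for n in (100, 10_000, 1_000_000, 100_000_000):
    se_diff = sd_offspring * math.sqrt(2 / n)   # SE of a two-group mean difference
    print(f"n = {n:>11,} per genotype: "
          f"cannot exclude a difference up to ~{1.96 * se_diff:.1e}")
# The excluded region tightens forever without ever reaching zero, so
# 'neutral' can only mean 'smaller than this study could detect'.
```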

There's another element that makes a test of real neutrality almost impossible.  We cannot sample groups of individuals who have this or that variant but who do not differ in anything else.  Every organism is different, and so are the details of their environments and lifestyle experiences.  So we really cannot ever prove that specific variants have no selective effect, except by this sort of weak statistical test, averaging over non-replicable other effects that we assume are randomly distributed in our sample.  There are so many ways selection might operate that one cannot itemize them all in a study and rule each of them out.  Again, selectionists can simply smile and be happy that their view is, in a sense, irrefutable.

A neutralist riposte to this smugness would be to say that, while it's literally true that we can't prove a variant to confer exactly zero effect, we can say that it has a trivially small effect--that it is effectively neutral.  But there is trouble with that argument besides its subjectivity: the variant in question may, in other times and in other genomic or environmental contexts, have some stronger effect, and not be effectively neutral.


A related problem comes from the neutralists' own finding that by far most sequence variants seem to have no statistically discernible function or effect.  But that is not the same as having no effect.  Genomes are judged to be loaded with nearly or essentially neutral variants by the usual bioinformatic sampling strategies, such as the observation that putatively neutral sites show greater variation within populations, or between species, than clearly functional elements do.  But this in no way rules out the possibility that combinations of these do-almost-nothings might together have a substantial or even predominant effect on a trait and the carriers' fitness.


After all, is that not just what countless very large-scale GWAS studies have shown?  Such studies repeatedly, and with great fanfare, report tens, hundreds, or even thousands of genome sites with very small but statistically identifiable individual effects, and yet even these together still account for only a minority of the heritability, the estimate of the overall contribution that genetic variation makes to the trait's variation.  That is, it is likely that many variants that individually are not detectably different from neutral may contribute to the trait, and thus potentially to its fitness value, in a functional sense.
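
A toy simulation shows the pattern.  Every parameter below is invented, but the qualitative outcome is the one GWAS keeps reporting: variants that are individually negligible yet jointly account for much of a trait's variance.

```python
# Toy polygenic trait: 500 variants, each with a tiny effect, plus noise.
# Individually each variant explains a negligible sliver of the trait's
# variance, yet jointly they account for much of it.  Every parameter
# here is invented purely for illustration.
import random
import statistics

random.seed(2)
n_people, n_variants, effect = 2_000, 500, 0.05

genotypes = [[random.randint(0, 2) for _ in range(n_variants)]   # 0/1/2 allele copies
             for _ in range(n_people)]
genetic_value = [effect * sum(g) for g in genotypes]
phenotype = [gv + random.gauss(0, 1.0) for gv in genetic_value]  # environmental noise

var_g = statistics.variance(genetic_value)
var_p = statistics.variance(phenotype)
var_one = effect ** 2 * statistics.variance([g[0] for g in genotypes])
print(f"all {n_variants} tiny effects together: {var_g / var_p:.0%} of trait variance")
print(f"any single variant alone:            ~{var_one / var_p:.2%} of trait variance")
```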


Such findings reflect one of the serious, and I think deeply misperceived, implications of the very high levels of complexity that are clearly and consistently observed.  They raise the question of whether the concept of neutrality makes any empirical sense, or whether it remains a rather metaphysical or philosophical idea.  This is related to the concept of phenogenetic drift that we discussed in Part II of this series, in which the same phenotype, with its particular fitness, can be produced by a multitude of different genotypes, the underlying alleles being exchangeable.  So are they neutral or not?

In the end, we must acknowledge that selective neutrality cannot be proved, and that there can always be some, even if slight, selective difference at work.  Drift is apparently a mythical or even mystical, or at least metaphoric concept.  We live in a selection-driven world, just as Darwin said more than a century ago.  Or do we?  Tune in tomorrow.

Friday, May 6, 2016

Darwin the Newtonian. Part II. Is life really 'Newtonian'?

In yesterday's post I suggested that Darwin had a Newtonian view of the world; that is, he repeatedly and clearly described the organisms and diversity of life as the product of evolution due to natural selection, viewed as a force that, in an implicit way, he likened to gravity.  At the same time, he knew that there was widespread evidence of various kinds for long-term evolutionary stasis, which a prominent geologist has recently called "Darwin's null hypothesis of evolution": the idea that evolution does not occur if the environment stays the same.

That suggests that a changing environment leads to a changing mix of the organisms that live in it, including of their genotypes.  It makes evolutionary sense, of course, because environments screen organisms for 'fitness'.  However, its inverse (no change in the environment implies no evolution) doesn't make sense, and badly misrepresents what we widely assume we know about evolution.  Even if we define evolution, as textbooks often do, as 'change in gene frequencies', such change clearly occurs even in stable environments.  Mutations always arise, and selectively neutral variants (those that don't affect the fitness of their bearers) change in frequency by chance alone, not by natural selection, which means that at the genomic level evolution still occurs.  It's curious that organisms not only can stay very similar in what seem like static environments, but can also stay similar even in changing environments.

The idea of dual environmental-genetic stasis is an inference made from species that seem to stay similar for long time periods in environments that also appear similar--but how similar are they really?

Indeed, there are several problems with the widely if often implicitly assumed 'null hypothesis':

  1.  It rests on a very narrow meaning of 'evolution', implying that the term refers only to functionally important traits or their underlying genotypes.  As we will see, there are ways for genetic change (and even trait change) to occur in static environments, so an unchanging environment doesn't imply no biological change.
  2.  It implies that 'the environment' actually stays the same, although 'environment' is hard to define.
  3.  It implies a tight, essentially one-to-one fit between genotype and adaptive traits, so that in unchanging environments there will be no functional genomic change.

All of these assumptions are wrong.  In essence, there cannot be 'the', or even 'a', null hypothesis for evolution.  Sexual reproduction, horizontal transfer, and recombination occur even without new sequence mutation.  To ignore that, while also assuming a stationary environment, and to adopt a null hypothesis with anything like mathematical or Aristotelian rigor, would be to reduce evolution's basis to something like this not-very-profound tautology: everything stays the same, if everything stays the same.

So let's look at this a little more closely
From the fossil record, we infer that some species stay the 'same' for eons, sometimes millions of years.  Then they change.  Gould and Eldredge called this 'punctuated equilibrium', and it was taken as a kind of updated version of Darwinism--mistakenly, because Darwin recognized the pattern very clearly, at least by the 6th edition of his Origin.  And while some aspects of animals and plants can hardly change in appearance for long time periods, close inspection shows that only some aspects of what can be preserved in fossils stay similar; other aspects typically change.  Also, speciation events occur, and some descendants of an early form do change in form, even if the older species seems not to change.  So we should be very careful even to suggest that environments or species really are not changing.

But mutations certainly occur, and that means that even for a set of fossils that look the same, the genomes of the individuals would have varied, at least in non-functional sequence elements.  That itself is 'evolution', and it is misleading to restrict the term only to visible functional change.  But genetic drift is just the tip of the molecular-evolution iceberg.  It is now very clear that there are many ways for an organism to produce what appears to be the same trait, at both the molecular and morphological levels.  That is, even a trait that 'looks' the same can be produced by different genotypes.  I wrote about this long ago in a rather simple vein, calling it phenogenetic drift, and Andreas Wagner in particular has written extensively about it, with sophisticated technical detail, in his book The Origin of Evolutionary Innovation, and this paper.  (The images are of my general paper and Wagner's book, given just to break up the monotony of long text!  He has also written a more popular-level book, Arrival of the Fittest, which is a very good introduction to these ideas.)

[Image: Wagner's book.  Recent exploration, with great detail]

[Image: my paper.  A modest statement of principle]


Wagner explores this in many ways.  Among his views is that the ability of organisms to evolve innovative traits rests on the huge number of overlapping, essentially similar ways an organism can carry out its various functions, which allows mutations to add new function without jeopardizing current ones.  Redundancy protects against environmental change, as well as enabling new traits to arise.

This is in a sense no news at all.  It was implicit in the very foundational concept of 'polygenic' control--the determination of a trait by the independent or semi-independent contributions of many different genes.  The way complex traits are thus constructed was clear to various biologists more than a century ago, even if the specific genes could not be identified (and the nature of a 'gene' was unknown).  A fundamental implication of the idea, for our current purposes, is that each individual with a given trait value (say, two people with the same height or blood pressure) can have his or her own underlying multi-locus genotype, which can vary among individuals.  Genotypes may predict phenotypes, but a phenotype does not accurately predict the underlying genotype--a deep lesson that many who promote simplistic models of causation in biomedical contexts should have learned in school, and one that the trivial counting exercise sketched below makes plain.
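
The counting behind that lesson is elementary.  In a deliberately tiny, hypothetical additive model, one and the same trait value can be reached by thousands of distinct multi-locus genotypes:

```python
# Phenotype does not predict genotype: in a tiny, hypothetical additive
# model (10 loci, each contributing 0, 1, or 2 units to the trait),
# count how many distinct multi-locus genotypes all produce the very
# same trait value.
from collections import Counter
from itertools import product

n_loci = 10
trait_counts = Counter(sum(genotype)
                       for genotype in product((0, 1, 2), repeat=n_loci))

target = n_loci        # the middling trait value
print(f"{3 ** n_loci:,} genotypes collapse onto just {len(trait_counts)} trait values")
print(f"{trait_counts[target]:,} different genotypes all give trait value {target}")
```

With just ten loci, thousands of genotypes yield the identical middling trait value; with the hundreds or thousands of loci behind real traits, the degeneracy is combinatorially vast.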

And none of this even considers environmental effects, though we know very well that for most characters of interest, normal or pathological, 'genetic' factors account for only a modest fraction of the variation.  And if it's hard to identify contributing genetic variants, it's at least as difficult to identify the complex environmental contributors that make inference of phenotype from genotype so problematic.  That is, genotype does not reliably predict phenotype, nor phenotype genotype, and the idea that either does so with 'precision' (to use today's fashionable branding phrase) is very misleading.

In turn, these considerations imply that even if we accept the idea of natural selection as a Newtonian, deterministic force, it works at the level of the achieved trait, and it can ignore (indeed, is blind to) the underlying causal genetic mechanism.  There can be extensive variation in that mechanism within populations, and change in it over time.  Just because two individuals, now or in the past, have a similar trait does not imply that they have the same underlying genotype, and hence does not imply that there's been no 'evolution' even in that stable trait!

In this sense, evolution could be Newtonian, driven by force-like selection, and still not be genetically static.  But there's more.  How can there actually be stasis in a local environment?  If organisms adapt to conditions, that in itself changes those conditions.  Even within a species, as more and more of its members take on some adaptive response to the environment, they change their own relative fitness by changing the mix of genotypes in their population, and that in turn will affect their predators and prey, their mate selection, and the various ways the mix of resources is used in the local ecology.  If, say, the members of a species become bigger, or faster, or better at smelling prey, then the distribution of energy and of species sizes must also change.  That is, the 'environment' cannot really remain the same.  That ecosystems are fundamentally dynamic has long been a core principle of population ecology.

In a nutshell, it must be true that if genotypes change, that changes the local environment because my genotype is part of everybody else's 'environment'. In that sense, only if no mutations are possible can there be no 'evolution'. Even if one wants to argue that all mutations that arise are purged in order to keep the species the 'same', there will still be a dynamic mix of mutational variants over time and place.

One could even assert that an essence of Darwinism, literally interpreted, is that environments cannot stay the same, because the adaptation of one species affects the fitness of others, even were no new mutations arising.  That is what his idea of the relentless struggle for existence among species meant, so stasis did cause him a bit of a problem, one he recognized in the later editions of the Origin.

I think that, in essence, Darwin viewed natural selection as a basically deterministic force, like gravity, corresponding to Newton's second law of motion; the idea of stasis corresponds to Newton's first law, of inertia.  Today even many knowledgeable biologists seem to think in that way (for example, invoking drift only as a minor source of 'noise' in otherwise force-like adaptive evolution).  Selective explanations are routinely offered as true, and the word 'force' is routinely used to explain how traits got here.

But there are deep problems even if we accept this view as correct.  Among other things, if natural selection is really force-like, or if genetic drift exists as a moderating factor, then these factors should have properties that we could test, at least in principle.  But as we'll see next time, it's not at all clear that that is the case.