
Monday, May 9, 2016

Darwin the Newtonian. Part III. In what sense does genetic drift 'exist'?

It has been about 50 years since Motoo Kimura, and King and Jukes, proposed that a substantial fraction of genetic variation can be selectively neutral, meaning that the frequency of such an allele (sequence variant) in a population or among species changes by chance--genetic drift--and, furthermore, that selectively 'neutral' variation and its dynamics are a widespread characteristic of evolution (see Wikipedia: Neutral theory of molecular evolution). Because Darwin had been so influential with his Newtonian-like deterministic theory of natural selection, neutral evolution was and still is referred to as 'non-Darwinian' evolution. That's somewhat misleading, if convenient as a catch-phrase, and it is often used to denigrate the idea of neutral evolution, because even Darwin knew there were changes in life that were not due to selection (e.g., gradual loss of traits no longer useful, chance events affecting fitness).

The first objection to neutrality, of course, is the 'blind watchmaker' argument.  How else can one explain the highly organized, functionally intricate traits of organisms, from the smallest microbe to the largest animals and plants?  No one can argue that such traits could plausibly just arise 'by chance'!

But beyond that, the reasoning basically coincides with what Darwin asserted.  It takes a basically thermodynamic belief and applies it to life.  Mother Nature can detect even the smallest difference between bearers of alternative genotypes, and in her Newtonian force-like way will confer better success on the better genotype.  If we're material scientists, not religious or other mystics, then it is almost axiomatic that a mutation changes the nature of the molecule, and hence must make at least some difference, if for no other reason than that it requires the use of a different nucleotide and hence the use and/or production of at least slightly different molecules and at least slightly different amounts of energy.

The difference might be very tiny in a given cell, but an organism has countless cells--many many billions in a human, and what about a whale or tree! Every nonessential nucleotide has to be provided for each of the billions of cells, renewed each time any cell divides.  A mutation that deleted something with no important function would make the bearer more economical in terms of its need for food and energy. The difference might be small, but those who then don't waste energy on something nonessential must on average do better: they'll have to find less food, for example, meaning spend less time out scouting and hence exposed to predators, etc.  In short, even such a trivial change will confer at least a tiny advantage, and as Darwin said many times to describe natural selection, nature detects the smallest grain in the balance (scale) of the struggle for life.  So even if there is no direct 'function,' every nucleotide functions in the sense of needing to be maintained in every cell, creating a thermodynamic or energy demand.  In this Newtonian view, which some evolutionary biologists hold or invoke quite strongly, there simply cannot be true selective neutrality--no genetic drift!


The relative success of any two genotypes in a population sample will almost never be exactly the same, and how could one ever claim that there is no functional reason for this difference?  Just because a statistical test doesn't find a 'significant' difference--in the probabilistic sense that the observed difference would not be particularly unusual if nothing were going on--tiny differences nonetheless obviously can be real.  For example, a die that's biased in favor of 6 can, by chance, come up 3 or some other number more often in an experiment of just a few rolls (why dice behave this way is interesting, but beyond our point here).  Significance cutoff values are, after all, nothing more than subjective criteria that we have chosen as conventions for making pragmatic decisions.

But what about the lightning strikes?  They are fortuitous events that, obviously, work randomly against individuals in a population in a way unrelated to their genotypes, thus adding some 'noise' to their relative reproductive success and hence to allele (genetic variant) frequencies in the population over time.  That noise would also be a form of true genetic drift, because it would be due to a cause unrelated to any function of the affected variants, whose frequencies would change, at least to some extent, by chance alone. A common, and not unreasonable, selectionist response to that is to acknowledge that, OK! there's a minor role for chance, but nonetheless, on average, over time, the more efficient version must still win out in the end: 'must', for purely physical/chemical energetics if for no other reason.  That is, there can be no such thing as genetic drift on average, over the long haul.  Of course, 'overall' and 'in the end' carry many unstated assumptions.  Among the most problematic is that sample sizes will eventually be sufficiently great for the underlying physical, deterministic truth to win out over the functionally unrelated, lightning-strike types of factors.

On the other hand, the neutralists argue in essence that such minuscule energetic and many other differences are simply too weak to be detected by natural selection--that is, to affect the fitness of their bearers.  Our survival and reproduction are so heavily affected by those genotypes that really do affect them, that the remaining variants simply are not detectable by selection in life's real, finite daily hurly-burly competition. Their frequencies will evolve just by chance, even if the physical and energetic facts are real in molecular terms.
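To make the neutralist claim concrete, here is a minimal simulation sketch (ours for illustration, not anything from the papers discussed): a Wright-Fisher model in which allele frequencies change by binomial sampling each generation, with an optional tiny selective advantage.  When the selection coefficient is small relative to 1/(2N), the outcomes are nearly indistinguishable from strict neutrality; the allele's fate is dominated by drift.  The population size, coefficient, and replicate counts below are arbitrary choices.

```python
import numpy as np

def wright_fisher(N, s, p0, max_gens, rng):
    """Final allele frequency in a Wright-Fisher population of N diploids.

    Fitnesses are 1+s for the A allele and 1 for the alternative (s = 0 is
    strict neutrality). Returns 0.0 if A is lost, 1.0 if it fixes, or wherever
    it sits after max_gens generations.
    """
    p = p0
    for _ in range(max_gens):
        p_sel = p * (1 + s) / (1 + p * s)         # expected change from selection
        p = rng.binomial(2 * N, p_sel) / (2 * N)  # binomial sampling supplies the drift
        if p in (0.0, 1.0):
            break
    return p

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N = 1000
    for s in (0.0, 1e-4):   # strictly neutral vs. 2Ns = 0.2, 'effectively neutral'
        fates = [wright_fisher(N, s, p0=0.5, max_gens=20_000, rng=rng)
                 for _ in range(400)]
        fixed = np.mean([f == 1.0 for f in fates])
        print(f"s={s}: fraction of replicates where A fixed = {fixed:.2f}")
```

With these arbitrary numbers the two cases fix the allele in roughly half the replicates either way; distinguishing them would take far more replicates than any real population 'runs'.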

But to say that variants that are chemically or physically different do not affect fitness is actually a rather strong assertion! It is at best a very vague 'theory', and a very strong assumption of Newtonian (classical physics) deterministic principles. It is by no means obvious how one could ever prove that two variants have no effect.


So we have two contending viewpoints.  Everyone accepts that there is a chance component in survival and reproduction; the selectionist view sees that component as trivial in the face of the basic physical fact that two things that are different really are different, and hence must be detectable by selection, while the other view holds that true equivalence is not only possible but widespread in life.

When you think about it, both views are so vague and dogmatic that they become largely philosophical rather than actual scientific views.  That's not good, if we fancy that we are actually trying to understand the real world.  What is the problem with these assertions?

Can drift be proved?
Maybe the simplest thing in an empirical setting would be to rule out genetic drift, by showing that even if the differences between two genotypes are small in terms of fitness, there is always at least some difference.  But it might seem easier to take the opposite approach, and prove that genetic drift exists.  To do that, one must compare carriers of the different genotypes and show that in a real population context (because that's where evolution occurs) there is no difference--that is, zero difference--in their fitness. But to prove that something has a value of exactly zero is essentially impossible!


Is each outcome equally likely?  How to tell?


To return to the dice-rolling analogy, a truly unbiased die can still come up 6 a different number of times than 1/6th of the number of rolls: try any number of rolls not divisible by 6!  In the absence of any true theory of causation, or perhaps to contravene the pure thermodynamic consideration that different things really are different, we have to rely on statistical comparisons among samples of individuals with the different competing genotypes.  Since there is the lightning-strike source of at least some irrelevant chance effects, and no way to know all the possible ways the genotypes' effects might differ truly but only slightly, we are stuck making comparisons of the realized fitness (e.g., number of surviving offspring) of the two groups.  That is what evolution does, after all.  But for us to make inferences we must apply some sort of statistical criterion, like a significance cut-off value ('p-value'), to decide. We may judge the result to be 'not different from chance', but that is an arbitrary and subjective criterion.  Indeed, in the context of these contending views, it is also an emotional criterion.  Really proving that a fitness difference is exactly zero, without any real external theory to guide us, is essentially impossible.
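As a toy version of the dice point, the following hypothetical sketch simulates repeated 'experiments' with a fair die and with a slightly loaded one, and applies a conventional significance test to each.  The loaded die usually passes as fair at modest sample sizes, while the fair die is occasionally flagged as biased, which is just what a 5% cutoff guarantees.  The numbers (600 rolls, a 0.18 bias) are arbitrary.

```python
import math
import random

def p_value_sixes(n_rolls, p_six, rng):
    """Roll a die with P(six) = p_six n_rolls times and return a two-sided
    normal-approximation p-value for the null hypothesis P(six) = 1/6."""
    sixes = sum(rng.random() < p_six for _ in range(n_rolls))
    p0 = 1 / 6
    se = math.sqrt(p0 * (1 - p0) / n_rolls)
    z = (sixes / n_rolls - p0) / se
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided tail probability

if __name__ == "__main__":
    rng = random.Random(42)
    for p_six, label in [(1 / 6, "fair die"), (0.18, "slightly loaded die")]:
        rejections = sum(p_value_sixes(600, p_six, rng) < 0.05 for _ in range(2000))
        print(f"{label}: rejected 'fair' at p<0.05 in {rejections / 2000:.1%} of experiments")
```

The point is not the particular rates but that the test's verdict depends on the sample size and the cutoff we chose, not on whether the die 'really' is biased.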

All we can really hope to do without better biological theory (if such were to exist) is to show that the fitness difference is very small.  But if there is even a small difference, and it is systematic, that is the very definition of natural selection!  Showing that the difference is 'systematic' is easier to say than to do, because there is no limit to the causal ideas we might hypothesize.  We cannot repeat the study exactly, and statistical tests relate to repeatable events.

There's another element making a test of real neutrality almost impossible.  We cannot sample groups of individuals who have this or that variant and who do not differ in anything else.  Every organism is different, and so are the details of their environment and lifestyle experiences.  So we really cannot ever prove that specific variants have no selective effect, except by this sort of weak statistical test averaging over non-replicable other effects that we assume are randomly distributed in our sample.  There are so many ways that selection might operate, that one cannot itemize them in a study and rule out all such things.  Again, selectionists can simply smile and be happy that their view is in a sense irrefutable.

A neutralist riposte to this smugness would be to say that, while it's literally true that we can't prove a variant to confer exactly zero effect, we can say that it has a trivially small effect--that it is effectively neutral.  But there is trouble with that argument besides its subjectivity: the variant in question may, in other times and genomic or environmental contexts, have some stronger effect, and not be effectively neutral.


A related problem comes from the neutralists' own finding that by far most sequence variants seem to have no statistically discernible function or effect.  That is not the same as no effect.  Genomes are loaded with variants judged nearly or essentially neutral by the usual criteria used in bioinformatic analysis, such as the observation that neutral sites show greater variation within populations or between species than clearly functional elements do.  But this in no way rules out the possibility that combinations of these do-almost-nothings might together have a substantial or even predominant effect on a trait and the carriers' fitness.


After all, isn't that just what countless very large-scale GWAS have shown? Such studies repeatedly, and with great fanfare, report tens, hundreds, or even thousands of genome sites that have very small but statistically identifiable individual effects, but even these together still account for only a minority of the heritability, the estimate of the overall contribution that genetic variation makes to the trait's variation.  That is, it is likely that many variants that individually are not detectably different from neutral may contribute to the trait, and thus potentially to its fitness value, in a functional sense.
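A rough simulation of that GWAS situation, with made-up numbers rather than any real study: a trait built from a thousand loci, each with a tiny effect, plus environmental noise.  Testing each site on its own finds essentially nothing at a genome-wide significance threshold, even though the sites jointly account for half the trait's variance by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 5_000, 1_000

# Hypothetical genotypes (0/1/2 copies of an allele at frequency 0.5 per site)
# and tiny per-site effects; by construction the sites jointly explain half
# the trait's variance (heritability 0.5).
genotypes = rng.binomial(2, 0.5, size=(n_people, n_snps)).astype(float)
effects = rng.normal(0, 1, n_snps)
genetic = genotypes @ effects
genetic = (genetic - genetic.mean()) / genetic.std()
trait = np.sqrt(0.5) * genetic + np.sqrt(0.5) * rng.normal(0, 1, n_people)

# Per-SNP test: correlation of each site with the trait, converted to a z-score.
g_std = (genotypes - genotypes.mean(axis=0)) / genotypes.std(axis=0)
t_std = (trait - trait.mean()) / trait.std()
r = g_std.T @ t_std / n_people
z = r * np.sqrt(n_people)

genomewide_hits = np.sum(np.abs(z) > 5.45)        # roughly p < 5e-8
print(f"individually 'significant' sites: {genomewide_hits} of {n_snps}")
print(f"variance explained by all sites jointly: "
      f"{np.corrcoef(genetic, trait)[0, 1]**2:.2f}")
```

With these settings essentially no single site clears the threshold, yet the aggregate genetic contribution is anything but trivial.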


This is one of the serious, and I think deeply misperceived, implications of the very high levels of complexity that are clearly and consistently observed.  It raises the question of whether the concept of neutrality makes any empirical sense, or remains rather a metaphysical or philosophical idea.  This is related to the concept of phenogenetic drift that we discussed in Part II of this series, in which the same phenotype, with its particular fitness, can be produced by a multitude of different genotypes--the underlying alleles being exchangeable.  So are they neutral or not?

In the end, we must acknowledge that selective neutrality cannot be proved, and that there can always be some, even if slight, selective difference at work.  Drift is apparently a mythical or even mystical, or at least metaphoric concept.  We live in a selection-driven world, just as Darwin said more than a century ago.  Or do we?  Tune in tomorrow.

Friday, January 10, 2014

Whooza good gurrrrrl? Whoozmai bayyyy-bee boy?

I think we should stop feeling sheepish about treating our dogs like kids. After all, blurring the line between dogs and people is not exactly new to just the sickos of our modern world.
One of these sickos with @ElroyBeefstu
Many cultures, across space and time, made wolves and dogs the stars of their origin myths. Their human origin myths. Canids weren't just nursing the infant founders of Rome, they were actually part of some peoples' conception too. [For more check out a great read by Marion Schwartz.]

Sure, sometimes the same people that lifted dogs up chowed down on them too. Long the fate of Chow Chows, dog meat's been a big hit on myriad ancient menus. We could even make the case that Paleo Dieters should put Spot on a spit. [Aside: Paleo Dieters should also try second harvest if they're earnest, but that's even less likely to catch on than literal hot dogs.]

Anyhow, this dog-crazed world that's gone to the dogs, head over heels for the dogs, didn't just poof out of the blue.

Without a legacy of dog obsession we wouldn't have them as they are now. Even if the first tens of thousands of years of their domestication was mostly unintentional, our enduring relationship with dogs was a natural precursor to this:

source
Evolving right alongside the pop culture popularity of dogs is the flourishing scientific interest in them. Part of that has to do with the simple fact that they're built to cooperate with human experimenters, like with this dog swimming study going swimmingly. And this reminds me of a story I read a while ago (not here but this is where I relocated it) in which a dog that's so eager to please lifts her paw to shake with the human carting her off to be vivisected.

Aw. Sorry. Let's pause and shake out those bad thoughts....
source, with so many more
OK. Much better.

It's thanks to the enthusiastic dissemination of the increasing amount of dog research that, for example, I could learn about the dog visual spectrum when I was curious and googled for it:
Source and here too
See? All those red chew toys for Rover are much more visually appetizing to the primates who purchase them.

Also, not too long ago, I learned via Twitter...
... that when the magnetic fields aren't obscured by clouds, pooping dogs seem to align with the north and south poles. Creates a great opportunity to use "polar vortex" on a daily basis.

It's the behavioral studies that seem to get so much play in the media, especially the cognitive ones and the ones that speak to our relationship with dogs. And I'm a sucker for all those but I'm an even bigger sucker for the evolutionary ones. The ones that do all that but also try to help tell our co-evolutionary tale.

And one of these that really sucked me in is, "Paedomorphic Facial Expressions Give Dogs a Selective Advantage" by Waller et al.

It's such a well-written piece and so simple... too simple, perhaps, but they acknowledge it well and they don't overstate their findings.

The group of researchers wanted to know whether dogs make faces* that are more or less attractive to humans. They were particularly interested in any facial expressions that might enhance the already paedomorphic faces of many dogs--a trait traditionally blamed on selection for cuteness and selection against aggression that's supposedly genetically linked to dog face, head, and ear morphology.

They developed a tool to objectively observe doggie facial musculature changes on film, based on one already in use for humans (FACS --> DogFACS). To collect the data on their dog sample, they stood fairly neutrally outside each enclosure at an adoption shelter and filmed each animal under these conditions for two minutes. To minimize the confounding effects of vastly different dog breed craniofacial morphology, they stuck to one group: the bulls and bull mixes. This was probably also the breed group with the biggest sample size at the shelter. Just a hunch.

Then they tracked how long these dogs had to wait to get adopted. That was the measure of human preference or "selection." It's not perfect; one could think of many things that could factor into the time a dog stays in a shelter. But the authors make a strong case that this is as close to a proxy for selection in our co-evolutionary history as we might get as humans interested in reconstructing that history. We're talking about all that history when we weren't actively breeding short-legged corgis, but instead just co-existing with dogs.

Lip pucker, lip corner puller, nose wrinkler, eye closure, blink, mouth stretch, jaw drop: These are some of the expressions they captured, but none so much as the inner brow raiser. It's the face you make when you flex your medial frontalis and that a dog makes when flexing its levator anguli oculi medialis. This trait was the focus of the paper. The authors say that by raising the inner brow, a dog's eyes appear larger, which is more puppylike, more paedomorphic. This simple maneuver also reveals their white sclera--tissue that's long been assumed to be an important signal for non-verbal communication among our kind.

Elroy's doing it more, but we're both raising our inner brows and showing our sclera.
Therefore, you want to adopt us. 
The code for inner brow raise is AU101 and here's how the data per dog plot out.  X-axis shows AU101 frequency during filming and y-axis is the length of time on the adoption market.
Figure 2 from Waller et al.
"Relationship between frequency of AU101 and days before re-homing in the dog shelter.
Curved line shows the power estimation."
A quick glance shows an apparent pattern in which dogs that make this facial expression more frequently are adopted faster. And that could be the case, but with this small sample it's not a strong case. If we remove only those five dogs in the blue circle that I slapped on there, it's not much of a relationship at all. Still, the paper acknowledges this in so many ways and the whole paper just makes so much sense I can't even be mad about the sample size.
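To illustrate how fragile a correlation in a sample of about 30 dogs can be, here is a sketch using purely invented numbers (not the Waller et al. data): most of the hypothetical points show no relationship between expression frequency and waiting time, and five extreme points carry the whole apparent trend.

```python
import numpy as np

rng = np.random.default_rng(7)

# Purely hypothetical numbers: 27 'dogs' with essentially no relationship
# between AU101-style frequency and days waiting, plus 5 dogs that combine
# very frequent brow-raising with quick adoption.
freq = np.concatenate([rng.uniform(0, 10, 27), rng.uniform(15, 25, 5)])
days = np.concatenate([rng.uniform(20, 120, 27), rng.uniform(5, 15, 5)])

def corr(x, y):
    """Pearson correlation between two 1-D arrays."""
    return np.corrcoef(x, y)[0, 1]

print(f"correlation, all 32 dogs:      {corr(freq, days):+.2f}")
print(f"correlation, dropping those 5: {corr(freq[:27], days[:27]):+.2f}")
```

The same handful of influential animals can turn "no relationship" into a striking curve, which is exactly why the small sample deserves the caution the authors themselves give it.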

You might not be mad either if you ever adopted a dog from a shelter or if this passage from the end of the article speaks to you:
In humans, the equivalent facial movement to AU101 is AU1 (inner brow raiser), which features heavily in human sadness expressions [20]. It is possible, therefore, that human adopters were responding not to paedomorphism, but instead to perceived sadness in the dogs looking for adoption.
Guilty as charged. This is the sad, pensive face I saw standing behind the bars at the shelter. All the other dogs were hurling themselves towards me and this little girl stood back and did this:

My sweet sad-faced Murphy.
Sure, I could have been homing in on her paedomorphic expression and merely overlaying onto it my perception of her sad, thoughtful nature, which was also so appealing; but these could all be part of the same phenomenon.

Here's the rest of that paragraph from the paper, to round out the discussion about other explanations besides a preference for straight-up cuteness or paedomorphy with this facial expression:
However, it is also possible that the human sadness expression is itself derived from paedomorphism, and that sadness is attributed to this specific facial movement because it enhances paedomorphism and thus perceived vulnerability. Another possibility is that humans are responding to the increase in white sclera exposed in the dogs as the orbital cavity is stretched through AU101 action. Visible sclera is a largely unique human trait [27] (which likely contributes to our extensive gaze following abilities) and people are more likely to cooperate or behave altruistically when exposed to cues of being watched [28], [29]. It is unclear, however, whether it is the sclera specifically or simply the presence of eyes per se which has such a powerful effect on human behavior and attention, and so this is more a complementary hypothesis as opposed to an alternative.
So her white sclera guilted me into taking her home. Maybe that's what it was. The Eckleberg Effect.
Something tells me I'm using this symbol wrong here. But forgive me, I didn't read this with a teacher to tell me how to interpret it.

Whatever it is, it worked then and she still looks like this and it still works on me. 

In closing, I'm wondering whether these expressions are simple and genetic, or are enhanced by positive conditioning during life. It's unclear from this paper, but that's a whole other paper (or career). I wish this paper included a survey with the study, asking the humans who adopted these particular dogs, Why? And, of course, the tiny elephant in the room is that all these dogs were taken home by somebody eventually... so is this as good a proxy for selection over evolutionary time as we're tempted to think it is? I'm not sure. I haven't let this sit with me for long enough and clearly I'm way too excited about how it explains my own adoption story, with Murphy, at least with so many related options. I feel psychoanalyzed. It's thrilling! I'm just glad I wasn't asked to review this paper. What a conflict of interest!

On a final note. It's often said that humans are self-domesticated, or just plain domesticated, animals. And within that discussion we talk about how we're cute and have especially cute babies because we preferred them and cared for them, and hence they were able to survive and replicate their cuteness. But doing that to dogs, which are without a doubt domesticated, is a bit different from doing it to ourselves, don't you think?

Taking credit for dog cuteness--be it their facial expressions or the way their structures scream squeeee! when they're puppies and into adulthood--seems not unreasonable given what we've unintentionally and then intentionally done to dogs recently, happily encouraging breeding in some and snippily discouraging breeding in others.

But to take credit for our own cuteness the way some of the just-so stories answer 'why are babies cute'... that's a little harder for me to fathom. Instead the best explanation for why babies are cute seems to be not that they are inherently so, but that our perceptions of their cuteness are inherently so. If we didn't behold them as adorable, lovable little creatures, we might be in trouble given how long they depend on us before they make cute little adorable creatures for themselves. And it's these perceptions that we can redirect as preferences onto other creatures, spreading our love all over the wild kingdom, but mainly, for now, all over dogs.
source
source


*If you misread "dogs make faeces" here you're not alone. That dog-poop-polar-alignment story really sticks!**
** If you read that as "really stinks!" then you and I need to find the same shrink.

Wednesday, September 5, 2012

Getting hip about hips

Holly Dunsworth's recent paper and very fine posts here on MT (the lead-in, the story, what it means for your pregnancy, and cultural and philosophical implications) skillfully discussed important issues about the kinds of explanations people offer for various of our traits.  One of the most subtle, and important, was her explanation of the fact that there can be variation among individuals around a species' characteristic trait (which she discusses here).

In this case, as she pointed out, not all women have the same gestation length, there is variation in metabolism within species, and so on.  Yet, there are general characteristics of a given species that differ from the similar trait in other species.  In other words, the trait is not rigidly fixed, and variation within species and species differences can both be consistent with the same general accounting for the trait.

One way to account for this is to consider the genetic basis of what we call metabolism.  Metabolism, like many if not most traits, is the result of many different processes taking place, and these are brought about by the action of many different genes (often hundreds of them), reacting to conditions in the environment. In this case, those conditions involve diet, type and level of activity, perhaps climate and more.

Different environmental experiences can add variation to a trait like metabolic activity and so on, modifying the exact trait value--like days of gestation for an individual fetus.  Random effects (chance) also almost always play a role, since no two humans are identical.  Similarly, all the contributing genes, and the regulation of their timing and level of expression, are subject to mutation and vary among individuals.

Successful gestation and maternal survival and infant care can be affected by this genetic variation and for obvious reasons the selective effects can be brutally stark.  Maternal or fetal mortality are huge potential fitness effects.  So there is no doubt that gestation and delivery can be under strong natural selection.  However, that doesn't mean selection favoring one precise direction or trait, or one gene.

Most selection will, in essence, bear strongly on women, or on fetuses that do not 'behave' successfully enough to be born.  But each instance will involve different genotypes, and the selection affecting any given variant in a given gene will likely be very small.  The distribution of the trait in the population--such as hip-width or gestation length--will have some mean (average) value and some amount of variation.  These can change gradually over time, or tend to hover around some rather stable values.

If selection occurs for some other functions that may involve the trait (that is, something about the skeleton, or diet and metabolism), the distribution of the trait can change.  If this were to happen differently in isolated populations, they would diverge and eventually become different species.  Even just chance meandering of the trait values could leave them different between species.

As a result, each species will have its own mean trait value, while individuals within the species can vary.  This is, I think, just the pattern Holly was describing in her recent post.  It is largely the consequence of causal complexity.  This is not a yes/no, Mendelian pea-like kind of trait in which there are only 2 or 3 states in the population, one of which is favored and the other lethal, so that different species evolve to have only one or the other variant.  It's the result of many different contributing factors.
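A small sketch of that pattern, with arbitrary parameters of our own choosing: two isolated populations start with identical allele frequencies at many trait-affecting loci, drift independently with no selection at all, and end up with different mean trait values while each still shows plenty of variation among its individuals.

```python
import numpy as np

rng = np.random.default_rng(3)
n_loci, N, gens = 200, 500, 2_000
effects = rng.normal(0, 1, n_loci)          # per-locus additive effects

def drift_frequencies(p, N, gens, rng):
    """Let allele frequencies at independent loci drift for `gens` generations."""
    for _ in range(gens):
        p = rng.binomial(2 * N, p) / (2 * N)
    return p

def sample_trait(p, n_individuals, rng):
    """Draw individual genotypes given allele frequencies, add environmental noise."""
    genotypes = rng.binomial(2, p, size=(n_individuals, n_loci))
    return genotypes @ effects + rng.normal(0, 2, n_individuals)

p_ancestral = np.full(n_loci, 0.5)
for name in ("population A", "population B"):
    p = drift_frequencies(p_ancestral.copy(), N, gens, rng)
    trait = sample_trait(p, 1_000, rng)
    print(f"{name}: mean trait = {trait.mean():7.1f}, within-pop s.d. = {trait.std():5.1f}")
```

No selection is applied anywhere in this sketch, yet the two population means wander apart while individuals within each population continue to vary, which is the pattern described above.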

And this, naturally, ties into the whole issue of the complex nature of genetics underlying traits that, like the ones considered here, are not just simple.  That's why it is difficult to identify 'the' gene or a few genes 'for' the trait, because that's not how the trait is produced.  And that is the problem with GWAS or other attempts to simplify the basis of what we are or what happens to us.

Wednesday, April 11, 2012

The next challenge in malaria control - artemisinin resistant parasites

Anopheles mosquito, Wikimedia Commons
Sometimes the news about malaria is good, as recently when deaths from malaria were reported to be decreasing, even if inexplicably, and sometimes it's not so good.  Last week saw two not-so-good stories -- one in The Lancet and one in Science -- about the increase in anti-malarial resistance in the Plasmodium falciparum parasite.  The Lancet paper documents this on the border between Thailand and Burma, and the Science paper reports the identification of the genome region in the parasite that is responsible for this newly developing resistance.  Because the parasites are becoming resistant to the best anti-malarial in use today, artemisinin, this is a serious issue.

The Science paper sets the stage:
Artemisinin-based combination therapies (ACTs) are the first-line treatment in nearly all malaria-endemic countries and are central to the current success of global efforts to control and eliminate Plasmodium falciparum malaria. Resistance to artemisinin (ART) in P. falciparum has been confirmed in Southeast Asia, raising concerns that it will spread to sub-Saharan Africa, following the path of chloroquine and anti-folate resistance. ART resistance results in reduced parasite clearance rates (CRs) after treatment...
As the BBC piece about this story says, "In 2009 researchers found that the most deadly species of malaria parasites, spread by mosquitoes, were becoming more resistant to these drugs in parts of western Cambodia."  This will make it much harder to control the disease in this area, never mind eradicate it.

Most malaria deaths occur in sub-Saharan Africa, and the spread of resistance to this part of the world would have disastrous public health consequences.  There is no therapy waiting in the wings to replace ACTs.  Whether the newly identified resistance is because infected mosquitoes have moved the 500 miles from the initial sites where resistance was found toward the border or because the parasites spontaneously developed resistance on their own is not known.  If the latter, this suggests that resistance is likely to arise de novo anywhere that artemisinin is in use -- and that's everywhere malaria is found, as ACTs are the most effective treatment currently in use.

This is, of course, evolution in action: artificial selection in favor of resistant parasites.  It's artificial because we're controlling 'nature' and how it screens.  Normally, selection that's too strong for the reproductive power of the selected species can mean doom -- extinction.  Blasting the species with a lethal selective factor can do that.  In this case, we'd like to extinctify the parasite.  But wiping out a rapidly reproducing species by selection is difficult, because if any resistance mutations exist, the organisms bearing them have a relative smorgasbord of food -- hosts not hosting other parasite individuals -- and this can give them an enormous selective advantage.  So the artificial selection against susceptibility is also similarly strong selection for resistance.
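The speed of such a sweep is easy to sketch with the standard one-locus selection recursion (the parameter values below are arbitrary illustrations, not estimates for P. falciparum): under drug-scale selection a rare resistance allele can go from one-in-a-million to near-fixation in a few dozen generations, where weak selection would take on the order of a couple of thousand.

```python
def sweep_time(s, p0=1e-6, threshold=0.99):
    """Generations for a resistance allele with selective advantage s to go
    from frequency p0 to `threshold` under simple one-locus selection."""
    p, gens = p0, 0
    while p < threshold:
        p = p * (1 + s) / (1 + p * s)   # standard selection recursion
        gens += 1
    return gens

if __name__ == "__main__":
    for s in (0.01, 0.1, 1.0):   # weak selection vs. drug-scale selection
        print(f"s = {s:4}: ~{sweep_time(s)} generations to near-fixation")
```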

Unfortunately the development of resistance is inevitable when a strong selective force such as a drug against an infectious agent is in widespread use against a prolific target.  And it shows why it is short-sighted to claim that Rachel Carson was personally responsible for millions of deaths from malaria because, in her 1962 book Silent Spring, she pointed out the harmful environmental effects of DDT, an insecticide that effectively kills non-resistant mosquitoes.  If DDT's use against mosquitoes had been widespread and sustained, it would long ago have lost its efficacy.

The inevitable rise of resistance to treatment is why prevention or, even better, eradication are the preferred approaches.  Unfortunately developing a vaccine against malaria is proving to be a scientific challenge, and similar evolutionary considerations will apply; and eradication, while doable in theory, is a political and economic challenge, and could involve the same resistance phenomenon if not done right.  So, the documented rise of drug-resistant P. falciparum on the Thai-Burma border is a severe blow.

We don't happen to know what, if any, intermediate strategies are being considered or tried.  Multiple moderate attacks, with different pesticides or against various aspects of the ecology or life-cycle, might not wipe individuals out so quickly, but may 'confuse' them so that no resistance mechanism can arise, because those bearing a new mutation protecting against agent X would remain vulnerable to agent Y.  A complex ecology of modest selective factors could possibly reduce the parasite population to a point where it really did become lethally vulnerable to some wholesale assault.

Or would it be necessary to accept some low-level, but not zero, rate of infection to prevent major resistance?   Smallpox and polio would seem to suggest that real eradication is possible, but how typical that can be expected to be is unknown (to us).

Friday, March 23, 2012

You look just like.... well, almost like

Female hoverfly on cistus flower; Wikimedia

Ken mused a few weeks ago about mimicry, the traits that many species have evolved as defense against predators, according to Darwinians -- the butterfly with an eye spot on the back of its wing, the moth that has taken on the coloring of tree bark.  A commentary in this week's Nature, accompanying a paper about mimicry ("A comparative analysis of the evolution of imperfect mimicry", Penney et al.), quotes the British naturalist, Henry Walter Bates, who described these kinds of traits 150 years ago as "a most powerful proof of the theory of natural selection". 

Ken's point in his post was that, yes, Darwinians have explained these as examples of very clever adaptation, a way to outwit predators and increase the odds of survival, but that in fact these ruses aren't always all that successful.  As he said, after describing some examples of very effective mimicry in butterflies that he had come across in his travels, "...while I did see the effectiveness of protective coloration in these two instances, I also did, after all, see the butterflies.  I wasn't completely fooled."

Indeed, if it's so effective, why haven't all species evolved protective coloration?  The Darwinian's answer is that it's only one kind of adaptation, and there are many others.  Each trait has its own adaptive purpose, and it is the job of science to uncover what that is.  But, as Ken also pointed out, most adaptive explanations can't be confirmed, no matter how plausible they seem.  Furthermore, whether an organism is eater or eatee may typically be largely due to chance, and the genetic contribution is usually very slight, and essentially undetectable (even industrial melanism in moths, recently confirmed statistically with new data, was a lot of work).  And the assumption that natural selection detects all functional genetic variation is simply an assumption, one that makes Just-so stories about adaptation unfalsifiable.

We read the Nature pieces on mimicry, then, with this in mind.  The question posed in the Penney et al. paper is why some harmless hoverflies are such good mimics of stinging Hymenoptera (wasps or bees), and others are much less convincing.  They point out that evolutionary theory about mimicry predicts that copies should be pretty exact, yet examples of inexact copying abound (though there does come a point where one would ask how we're sure it's in fact a mimic, or what 'exact' means in this kind of situation). 

Explanations of poor mimicry include that it may look poor to us but be good enough to fool a predator; that imperfect mimicry is even more adaptive than perfect mimicry; that imperfect mimicry benefits conspecifics (this is a kin selection argument); that there are constraints on the evolution of a more perfect copy; or relaxed selection, whereby selection for mimicry becomes weak enough that it is "counteracted by weak selection or mutation", that is, that there's no selective benefit to refining the mimic further. 

To try to determine which of these explains the poor hoverfly mimics, Penney et al. used "subjective human rankings of mimetic fidelity...across a range of species" of two different taxa (Syrphidae and Hymenoptera) and compared them for consistency against a statistical analysis of trait values. They found a strong positive relationship between body size and mimetic fidelity, and suggest that "body size influences predation behaviour and thereby the intensity of selection for more perfect mimicry."

The idea is that the larger the prey, the more benefit to the predator, and thus the more urgently the prey needs to figure out a way to avoid the predator.  Smaller or more abundant hoverflies needn't spend so much energy trying to fool a predator, as each insect is less likely to be preyed upon because there are more of them to choose from, or because they are less of a mouthful, and so less desirable. 

So, they explain imperfect mimicry as the relaxation of selection on mimicry, though they do not find this counteracted by weak selection or mutation, and they do not reject the constraints hypothesis.  They conclude that "reduced predation pressure on less profitable prey species limits the selection for mimetic perfection."

The same explanation always, but always different
Notice that each of the 5 possible explanations they offer assumes that selection of some sort must be the explanation, if only they knew which.  This is the Darwinist assumption, that the ground-state of life is competitive natural selection.  If one selection story is shown to be wrong, then it must be another, as we've seen in this case.  This explanatory tack is very widely accepted, indeed, assumed without question.  But the assumption of selective adaptation is not itself ever questioned.  Is it true?

More accurate than that assumption is that the ground state of life is existence or, over time, persistence.  Whatever reproduces, reproduces.  This is an assumption, too, and is testable... but isn't very helpful at all in and of itself.  We can go a bit further: there is differential reproduction among differing genotypes, but even in a totally non-selective world this would be the case (in formal terms, genetic drift is inevitable).  Sometimes success may be due to a systematic relationship between the state that succeeds and its success, and that's natural selection, but this need not be the case.  The question is when and to what extent predictable, non-random change is occurring, and that is not at all easy to show most of the time.

More profoundly, selection need not be (as Darwin seriously thought, and as most modern biologists accept without thinking seriously about) the force-like phenomenon it is usually, if often tacitly, assumed to be.  It can be weak, ephemeral, local, moveable, and even probabilistic.  Even from a purely selectionist point of view, all sorts of species with all sorts of variation are reproducing in all sorts of ways in relation to all sorts of factors -- including each other. There is no reason to expect that single factors, alone, will necessarily motor on through with some clear-cut force-like trajectory of change.  These statements are not at all controversial, yet seem to be in effect ignored when each investigator's favorite trait is being evaluated 'on the margins' as one would say in statistics: that is, evaluated in isolation, on its own.

We have a great parallel here with the polygenic causation that is so pervasive and that frustrates GWAS, as we've said countless times here.  With polyfactorial ecologies, what oozes through the blizzard of factors will not necessarily be simple or explicable in terms of one factor on its own -- say, looking like something else.  This is a very different perspective from trying to analyze everything as if there is a type of selection we have to identify, or as if we must explain why, surprisingly, it's not perfect.

Think of it in this very simple way.  It is almost always possible to change most traits.  Experimentally, this is reflected by the fact that most traits respond to artificial selection.  In this case, that means it should always be possible for natural selection to lead to change in ways that make any species that somebody else eats look more like the background of where it lives (even against bacterial predators, some form of camouflage defense at the molecular level should always be possible).  If the selectionist stories are accurate--that camouflage increases your odds of living to frolic another day--then every species should be camouflaged and should normally dwell on its match. 

This is so clearly not how nature is that one wonders why fundamentalistic Darwinism ever took hold, even by Darwin himself.  Why isn't everything camouflaged?  The answer, which we referred to casually as the 'slop' of nature in earlier posts (e.g., here), is that evolution is persistence in the entire ecology of each organism, and sometimes something seemingly so obvious as mimicry clearly seems to happen. But not most of the time.  Or, each trait in each organism can be argued to have some such story. That is so ad hoc, or perhaps post hoc, it has a resemblance to creationism--in the epistemological sense of being something in which every individual observation has the same explanation (selection made it so), no matter what the details.  If we assume selection, we can always adjust our explanations for what's here to fit some selection-story.  One simple component of this, obviously, is that the predators are evolving their detection ability as well.  It's all an endless dance, and this is not controversial biology, but is within what we know very well.

Biology should grow up.  The ground state of life is persistence, however it happens.  And stuff happens.  There are lots of ways to persist.  Selection is one, but it's only one, and it is itself a subtle, moveable feast.

Tuesday, March 29, 2011

Learning the lessons of the Land: part I

This post is inspired by the latest issue of The Land Report, the thrice-yearly report by The Land Institute, of Salina, Kansas.  This organization is dedicated to research into developing sustainable agriculture that can conserve water and topsoil, reduce industrial energy dependence, and yet produce the kind of large-yield crops that are needed by the huge human population. 

Minnesota cornfield, Wikimedia Commons
An article in the spring 2011 issue concerns an approach called molecular breeding.  Here the idea is to speed up traditional empirical breeding to improve crops, as a different (and, they argue, better) means than traditional GM transgenic approaches, which insert a gene--often from an exotic species such as a bacterium--into the plant genome.

The important point for Mermaid's Tale is that crop breeders have been facing causal complexity for millennia, and from a molecular point of view for decades.  Their experience should be instructive for the attitudes and expectations we have for genomewide association studies (GWAS) and other 'personalized genomic medicine.'   To develop useful crop traits means to select individual plants that have a desired trait that is genetic--that is, one known to be transmitted from parent to seed and to replicate the trait (at least under the highly controlled, standardized conditions in which agricultural crops are grown).  For this to work, one needs to be able to breed, cross, or inter-breed seeds carrying desired traits, to proliferate those traits into a constrained, strain-specific gene pool.  Traditionally, this requires generations of breeding, and selection of seed from desired plants, repeated for many generations.

The idea of molecular breeding is to use genome-spanning sets of genetic markers--the same kinds of data that human geneticists use in GWAS--to identify regions of the genome that differ between plants with desirable, and those with less desirable, versions of a desired trait.  If the regions of the genome that are responsible can be identified, it is easier to pick plants with the desired genotype and remove some of the 'noise' introduced by the kind of purely empirical choice during breeding that farmers have done for millennia.

Relating phenotype to genotype in this way, to identify contributing regions of the genome and select for them specifically, is in a sense like personalized genome-based prediction.  As discussed in the article, 'Biotech without foreign genes', by Paul Voosen in The Land Report (which, unfortunately, doesn't seem to be online), molecular breeding is a way to greatly speed up the process of empirical crop improvement.  What we mean by empirical is that the result uses whatever genome regions are identified, without worrying about finding the specific gene(s) in the regions that are actually responsible (this means, in technical terms, using linkage disequilibrium between the observed 'marker' genotype and the actual causal gene).
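Here is a toy version of that marker-based logic, with invented numbers: the breeder never sees the causal locus, only a nearby marker that is correlated with it through linkage disequilibrium, yet selecting plants on the marker alone still shifts the trait in the desired direction.

```python
import numpy as np

rng = np.random.default_rng(11)
n_plants = 10_000

# Hypothetical single causal locus affecting yield, plus a nearby marker that
# is merely correlated with it (linkage disequilibrium), produced here by
# copying the causal allele and 'recombining' it away 10% of the time.
causal = rng.binomial(1, 0.5, n_plants)
recomb = rng.random(n_plants) < 0.10
marker = np.where(recomb, rng.binomial(1, 0.5, n_plants), causal)

yield_ = 100 + 5 * causal + rng.normal(0, 10, n_plants)   # the trait, e.g. yield

selected = marker == 1            # the breeder keeps plants by MARKER genotype only
print(f"mean yield, all plants:          {yield_.mean():6.1f}")
print(f"mean yield, marker-selected set: {yield_[selected].mean():6.1f}")
```

The marker is useful only because, and only as long as, it travels with the causal variant; the weaker the linkage disequilibrium, the smaller the gain from selecting on it.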

For crops with small genomes, like rice, breeders have been more readily able to identify specific genes responsible for desired traits.  But for others, the large size of the genome has yielded much more subtle and complex control that is not dominated by a few clearly identifiable genes. Sound familiar?  If so, then we should be able to learn from what breeders have experienced, as it may apply to the problem of human genomic medicine and public health.

We'll discuss that in our next post.