Friday, March 30, 2012

Forget bipedalism. What about babyism?

First, a little time travel experiment...

Here's a newspaper headline and blurb from 709,987 CE: 

Hominin fossils dated to 2,012 CE show arboreality

Artists have made a realistic reconstruction (above) of an early human species based on the anatomy of the latest fossil discovery in paleo-Alabama by a team of paleontologists. These primitive hominins were still climbing trees!

In yet another reconstruction of this primitive species (above), an adult forages for honey by scrambling nimbly up a tall tree trunk. 

In other paleontology news, a team of scientists determined that a Moroccan variety of domestic goat, from roughly the same primitive era, could adeptly climb small bush-like trees.

I promise that's not sarcasm or postmodernism! That's just me trying to jostle you loose from some traditional assumptions and spark you to wonder about how we know what we know about the functional relationships between anatomy and behavior.

I'm thinking about this because of the latest and greatest news in the world of paleoanthropology, published this week in Nature. Here's a nice video to bring you up to speed:

If this Burtele foot fossil represents a hominin, it's different enough in anatomy (and the functional interpretation of that anatomy) to be considered something separate from other hominins on record.

It's this whole other animal.

As of now, all hominins at this time in the mid-Pliocene (the foot is dated to 3.4 mya) belong to the genus Australopithecus and are only known from sites in East and South Africa. Perhaps the Burtele foot is another species that is not A. afarensis (which supports long-existing arguments that there are two lineages at this time), or maybe this is an ancestor to the Paranthropus radiation that occurs in the early Pleistocene. Or maybe it's not even a hominin, because the foot is very ape-like. More so than any australopiths on record.

Chimpanzee displaying its non-human-like hallux (big toe). 
Can we tell what this animal is? Without teeth and cranial bones--our gold standard for identification and distinction--can we surmise what kind of creature this Burtele foot belonged to?

For starters, if it's to be deemed a "hominin," we should determine what exactly about this foot is human-like.

In terms of metrics, the Burtele foot shares traits with humans, but these are also shared by gorillas and sometimes chimpanzees and sometimes Old World monkeys too. The striking similarities between human and gorilla feet have been known since the earliest comparative anatomy studies, and it was only a matter of time before a new primitive fossil brought the problem to heightened prominence.

For the qualitative traits, more similarities appear to be shared with the famous "Ardi" skeleton from about a million years earlier in time (belonging to Ardipithecus ramidus) than with extant humans.

According to the paper, there appear to be few derived, hominin-specific features in this foot. And that's even if you are on board with calling Ardi a hominin. Granted, we expect more primitive hominins to share few of those traits with us, and traits evolve mosaically, even within the foot. And granted, not all hominins contributed directly to our evolution, so we might find species on the hominin branches of the tree of life that share no derived features with humans. Still, the case isn't very strong for this Burtele foot being a hominin.

One of the reported hominin-like traits has to do with the shape and orientation of the proximal phalangeal joint where it articulates with the second metatarsal head. This appears to indicate that the foot experienced more toe-off during walking, more like human walking than like ape walking, since it's not the condition found in ape feet. This functional interpretation seems to be based on the one made of Ardi's foot. Unfortunately, I don't think anyone has looked yet at the biomechanics of this joint in apes (and ape feet) as they walk bipedally. (But I could be wrong!)

Also, although the degree of torsion in the second metatarsal is ape-like, the torsion in the first metatarsal is more human-like. (Torsion describes the orientation of the proximal joint surface relative to the orientation of the head. The Burtele foot's MT2 and MT4 are reported to have less torsion than that seen in African apes (where this is associated with a grasping orientation), yet are still significantly twisted compared to human metatarsals.) The torsion in the big toe (hallucial metatarsal) is reported to be unlike that of African apes, and presumably more like humans? It's not clear. The actual measure of the degree is not included in Table 1, nor is it demonstrated in the Supplementary section as implied. So we have to take their word for it. And by the look of the Burtele first metatarsal in the photos, that's probably not a problem.*

So there are some leads away from extant African apes and weak ones toward Ardipithecus and possibly extant humans, but what the paper demonstrates better than the hominin status of the Burtele foot is just how difficult it is to pin down hominin-ness.  We don't always know it when we see it.

For most of us, and I assume for the authors of the paper, this is what makes paleoanthropology so fun! But it's also why we fight.

That's not because we don't know how anatomy and behavior are linked in extant humans, apes, monkeys, etc... we seem to have a pretty good handle on that. At least big picture. The problems arise when you zoom in. Are chimpanzees really arboreal? Yes. Are they really terrestrial? YES. Same for gorillas.

So reconstructing an evolutionary scenario in which our hominin ancestors went from a state of arboreality to a state of terrestriality is not that simple. The Burtele foot describers definitely understand this. It's just the popular media that doesn't have the time or the inclination to get this. Hence my intro above.

But that intro up there with the futuristic newspaper wasn't purely reaction-against-media-reaction. It's also me reflecting on issues closer to home for paleoanthropologists.

Do primitive "arboreal" traits in hominin skeletons really correlate with arboreal behaviors? Or are they ancestral relics? We don't do much climbing anymore but we have all kinds of anatomy that links us to our tree-hugging relatives, and presumably to our shared tree-hugging ancestors.

So at what point do we recognize primitive traits as being only that? Why must all traits we observe in extinct animals be so USEFUL?

The answer is.... because that is the only way to go about these functional studies. By definition functional anatomy indicates FUNCTION. So we go in with a bias toward identifying function, NOT with an aim to identify primitive relic anatomy that just happens to work for the animal. The latter is so much harder to perceive although one might argue it's what our null hypothesis should be. It's easier to assume that if an animal has the anatomy, then it's functional and therefore it's "for" doing whatever it is that it does with that anatomy.

Who cares about these subtly different approaches to comparative anatomy and evolutionary reconstruction?

Well if you're trying to determine when bipedalism became habitual and when hominins stopped relying on trees for foraging, fleeing, socializing, and nesting, then it matters whether functional anatomy is function or whether functional anatomy is just hanging on. (pun intended) Welcome to the nightmare that is paleoanthropology!

So where does the Burtele foot leave us?
1. Looks like we've got a separate ape-y thing that's not Lucy's species (A. afarensis) living around the same time and place as Lucy and her ilk.
2. Looks like whatever this animal is it doesn't have the few derived features found in some australopith feet. (those are even debated...)

But speciation, co-habitation, and bushy hominin phylogenies aren't even the coolest part of this story.

The two distinct foot morphs offer another kind of insight.

Lemme show you.

First of all, what is arboreality and how much arboreality is enough to require specific grasping adaptations in the big toe?

We know that cercopithecines (like macaques and baboons) vary pretty widely in their degrees of arboreality and terrestriality, but regardless, they're good in the trees, and they're good in the trees despite their short, diminutive big toes! Arboreal behavior can be accomplished via many different evolutionary processes revealed to us via skeletal anatomy.

Here are two baboons showing off their small big toes.

Baboons are considered to have terrestrial adaptations in their feet that are similar to those in humans, like their shorter phalanges, but they're also the opposite of humans with their short, not long, big toe. As with arboreality, terrestriality works via many different evolutionary processes that are revealed via the skeletal anatomy...and this goes beyond primates. (The crux of it all for human evolutionary reconstructions is determining how bipedal terrestriality differs from quadrupedal terrestriality.)

Gibbons (apes) are super arboreal, but their locomotion relies on grasping hands, not feet. However, their suspensory behavior does recruit the grasping big toe.

Here are some gibbon feet displaying their big big toes.

Here's a film showing a gibbon using its grasping big toe:

Conclusion: The arboreal use of the strong grasping big toe is not necessarily about climbing or even walking on tree limbs;** it's probably more about suspension. Suspensory behavior typifies the gibbons, orangutans (many of them, at least), and the rest of the apes, and so does the grasping big toe. That's the strongest functional explanation for the ape's thumb-like big toe.

Now, larger apes are not suspending as much with their toes. And they can get up into the trees just fine without grasping with their big toe.****

DeSilva (2008)
So think about who's doing much of the suspension, particularly in African apes: juveniles.

Suspending from vines and tree branches is one thing, but also, climbing trees with immature musculature is certainly helped by a grasping big toe.
And then of course, grasping not just trees and vines, but grasping mother is important as well.

Is there any way we can investigate this ontogeny-based hypothesis for a grasping hallux?

Let's consider the foot as a whole first, and then get back to the big toe as a grasping tool.

We can see whether foot size varies during ontogeny in different primates.

I have looked at the size of macaque (monkey), gorilla and chimpanzee (Pan) feet through ontogeny (granted, a cross-sectional sample, not a longitudinal one).***

Because macaques grow up faster than the two apes (which share a developmental pace), here's how I lined up the age groups (1-5; 1 = infants and 5 = adults) so that I could fairly compare growth among them.

And here are the results where I compared relative foot size during ontogeny (over age groups 1-5). Femur length was my measure for overall body size.

The box and whisker plots show how macaques are born with big feet relative to body size and relative to adult proportions. The ontogenetic changes in proportions that they experience, over the age groups I've constructed, are significant according to the ANOVA.
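For the statistically curious, that comparison can be sketched in a few lines. The ratios below are invented for illustration (the real data are in the dissertation cited in the footnote), and the F statistic is computed by hand rather than with a stats package:

```python
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of samples."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # between-group sum of squares (weighted by group size)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Invented macaque-like ratios (foot length / femur length) for age
# groups 1 (infant) through 5 (adult): relatively big feet at birth,
# shrinking toward adult proportions.
macaque = [
    [0.95, 0.98, 0.92],  # group 1: infants
    [0.88, 0.90, 0.86],
    [0.80, 0.82, 0.79],
    [0.74, 0.76, 0.73],
    [0.70, 0.71, 0.69],  # group 5: adults
]
print(f"F = {one_way_anova_f(macaque):.1f}")  # large F: proportions change with age
```

With made-up numbers this trending pattern yields a large F, i.e., relative foot size differs significantly across age groups; flat ratios across groups would give an F near zero.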

By contrast (and granted, I had small samples of chimp and gorilla infants), those apes are born with adult foot-size proportions, which are small relative to what macaque infants are working with. Larger, more adult feet at birth contribute to macaques' relative behavioral precociality compared to young gorillas and chimpanzees, who are relatively altricial. With those big feet, macaques can better navigate a big world.

Chimps (and especially humans) are relatively behaviorally altricial at birth compared to macaques. Neurological development factors into this, but the small feet (in a big world) are also part of that altricial package.

Now, to bring this back around to grasping thumb-like big toes in apes!

During their longer periods of infant and juvenile dependency, gorillas and chimpanzees spend quite a bit of time clinging to mom. Big feet for body size would help with clinging to mother, as in macaques. On a relatively small foot for body size, and in a relatively more dependent baby that takes longer to achieve independence and requires more mother-infant care, bonding, and learning, the thumb-like grasping big toe would come in handy (or footy).

(We've got some residual ability still...see here.)

In this scenario, the grasping hallux is an adaptation for surviving the earliest stages of life as a small-footed, more vulnerable, slower-developing, extensively dependent ape infant.

So what does this have to do with Pliocene hominins?

First of all, it means that morphology should be considered in an ontogenetic context so that our adaptive hypotheses and evolutionary reconstructions are as robust as possible.

And second of all, if you're basically a dedicated biped and you've got free hands to help carry your baby (rather than demand it hang on for dear life), then selection for grasping ability in babies might be relaxed. That means relaxed selection on grasping foot anatomy could have preceded any selection for all the derived features we associate with bipedalism. This isn't new territory. Most of us assume that bipedal behavior preceded the refined (as much as it can be considered refined) bipedal anatomy.

To sum up, at some point hominin infants lost the ability to cling to mother and at some point we lost our foot thumbs. These evolutionary events might be related.

What does this mean for Lucy and her kind (with their non-grasping feet) and for the Burtele foot's kind (with their foot thumbs)?

Lucy would have had to care for infants more intensely than the Burtele gang. These species would have had different mother-infant interactions.

Babyhood and motherhood...that's a whole lot more profound than straight up metatarsal anatomy.

*I was derided for not demonstrating claims like this strongly enough in my one attempt at publishing fossil foot bone descriptions. I trust this was an oversight in moving information to the supplementary section or collateral damage due to some other editorial process. Or it's my inability to read this paper properly!
**Wunderlich (1999) measured metatarsal head pressure during walking on the ground and a pole (branch) in chimps and found that the peak pressures for MT1 were higher on the ground than on the pole. And that's not just absolutely, but relative to the other MTs.
***Dunsworth HM. 2006. Proconsul heseloni feet from Rusinga Island, Kenya. Doctoral dissertation, Pennsylvania State University.
****Correction: should have better worded what I see as a diminished role of the grasping big toe, not the elimination of it.


Note on the missing images: Apologies for the now dead links to once-adorable images! When I originally posted this piece, I didn't know how to best post images that could potentially disappear in the future from their source sites. 

Thursday, March 29, 2012

Dollo's Law: made to be broken?

Maybe it's time to retire Dollo's Law, the idea that once a trait has disappeared from a lineage, it can't reappear.  Louis Dollo was a Belgian paleontologist who proposed in the late 19th century that once gone, a trait was lost forever.  Evolution could not repeat itself.  We have blogged about this before, e.g. when a paper appeared in the journal Evolution in 2011 suggesting that mandibular teeth had reappeared in a frog lineage after more than 200 million years.

Holly brought a new paper in Evolution to our attention, also detailing instances in which traits long lost have reappeared.  Most previous examples have been of hard tissue reversions, but in this paper, Rui Diogo and Bernard Wood document numerous instances of reversions to previous muscle structures in primates, and suggest what this might mean about development and evolution.

Diogo and colleagues have been involved in a long-term comparative study of the anatomy of non-primate vertebrates, and of primates, looking at "homologies and evolution of the head, neck, pectoral and forelimb muscles of all major groups...based on dissection of hundreds of specimens and on a review of the literature".  They used this extensive data set to do parsimony and Bayesian cladistic analyses (statistical methods for classifying organisms into biologically similar groups based on whatever traits are of interest) of the muscle data for primates.  They built a phylogenetic tree based on the cladistic analysis of 166 characters of head, neck, pectoral and upper limb muscles. 
...of the 220 character state changes unambiguously optimized in the most parsimonious primate tree, 28 (13%) are evolutionary reversions, and of these 28 reversions six (21%) occurred in the nodes that lead to the origin of modern humans; nine (32%) violate Dollo's law.
Without going into the anatomical details covered in the paper, suffice it to say that they found more anatomical reversions to an earlier state in head and neck muscles than in the chest or upper limb, and they conclude that evolutionary reversions were significant in primate and human evolution.  Their explanation for why so many exceptions to Dollo's Law have been documented is that the developmental pathways that formed these structures were maintained over evolutionary time, perhaps because the pathways were used in the development of other structures, so that they could be readily recruited for the re-development of a once-lost trait.  Chickens, for example, still have some of the developmental pathways for teeth, although they haven't actually had teeth for 60 million years.  Some constraint on the pathways would explain their continued existence. 
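For readers who like things concrete, the "reversion" tally quoted above amounts to flagging character-state changes that return a lineage to a state it had previously abandoned. Here's a toy sketch of that counting logic along a single lineage (not Diogo and Wood's actual tree-based optimization):

```python
def count_reversions(states):
    """Count changes that return a character to a previously abandoned state
    along one lineage (a toy stand-in for a Dollo's-law violation).
    `states` is the character state at successive ancestors, root to tip."""
    reversions = 0
    abandoned = set()           # states the lineage has left behind
    current = states[0]
    for s in states[1:]:
        if s != current:
            if s in abandoned:  # returning to a lost state: a reversion
                reversions += 1
            abandoned.add(current)
            current = s
    return reversions

# A muscle present at the root, lost, then regained two nodes later:
print(count_reversions(["present", "absent", "absent", "present"]))  # prints 1
```

On a real phylogeny the states at internal nodes aren't observed; they are inferred by parsimony or Bayesian optimization, which is where the "unambiguously optimized" caveat in the quoted passage comes in.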

It has also been found that during ontogeny, say of the hand muscles, various muscles develop that are subsequently lost as the embryo grows.  One example is the contrahentes muscle that extends to various fingers in an early human embryo, but is then lost later in development.  This is a muscle that adult chimpanzees do have, though adult humans do not.  Diogo and Wood report this same developmental story for multiple muscles.  This means that the developmental pathway has been retained, even if the specific trait has not -- 'hidden variation'. 
According to some authors, cases where complex structures are formed early in ontogeny just to become lost/indistinct in later developmental stages (the so called 'hidden variation') may allow organisms to have a great ontogenetic potential early in development, that is if there are for instance external perturbations (i.e., change in the environment, e.g., climate change, environment occupied by new species, etc.) evolution can use that potential (adaptive plasticity) (e.g, West-Eberhard 2003).
Others (Stephen J Gould, e.g.) have argued that rather than an argument for plasticity, this means that evolution is constrained, contingent on what is already there: the embryo couldn't develop properly if these pathways to nowhere were to change.  Some argue that hidden variation is not responsible for evolutionary novelty, though, as Diogo and Wood suggest, it can explain the reappearance of traits.

In essence, Dollo's "law" is a principle that something genetically complex is difficult to undo, because mutation will remove order if not opposed by some form of selection.  The more steps removed, the more 'canalized' a trait would tend to be and the less flexible.  Indeed, 'canalized' is a word used in the mid-20th century by C. H. Waddington for the persistence of fundamental traits.   Reversals may also be apparent rather than real: new mechanisms might bring about a similar appearance at the trait level without being a literal 'reversal'.  The nature of evolution is to avoid being put in a box out of which organisms couldn't evolve.

So whatever the explanation for these specific instances of violations of Dollo's Law, it's clear yet again that any evolutionary law is made to be broken.

Wednesday, March 28, 2012

Where are the COPs?

A recent popular book called Bad Science, by British op-ed columnist and physician Ben Goldacre, presents a clear and readable treatment of the way in which poor quality science, intentional misrepresentation, or poor understanding of science on the part of consumers, journalists, companies and even some scientists lead to, well, bad science.

He takes on those who claim that homeopathy works, he discusses controversies like the idea that MMR vaccines are dangerous, and he discusses the way Pharma manipulates and maneuvers with their bottom line rather than fidelity to facts (or society at large) in mind.  He points out ways in which study designs and statistical interpretations can represent, or misrepresent, a causal situation.  It's a good read, and would be especially good for anyone, especially students, in any field of science (including social science as well as genetics, medicine, or public health) to read.

Cost of Opportunity Patrols (COPs)
One of the issues that Goldacre raises has to do not with whether a given bit of science is bad, intentionally or not, but with the consequences.  After all, scientists tend to dismiss much of the criticism leveled at them and at over-claims and the like, by saying that science is a self-correcting process.  Incorrect results, whether dishonest or inadvertent, fail to have influence or mistakes are caught because people try to replicate the results and fail, then have to figure out why.

But that is only a partly good argument, one that probably does more to support the status quo against stronger oversight and stricter standards than to justify tolerance of bad science.  Of course, we can't catch all bad science when it's done, because much of science is far too specialized, costly, and complex for an experiment to be checked or replicated.  Bad results do eventually lose any luster they might have had, but history shows that that can take decades or (in the case of things like alchemy and phlebotomy) centuries.

The issue Goldacre raises is one that we have raised as well, here on MT and elsewhere.  It is this:  for every direction science takes, there is a cost.  That cost can be seen in the form of the lost opportunity to do something else, that might have panned out better.  The lost opportunity isn't just, or perhaps isn't mainly, due to 'bad science', but to the consequences of decisions to invest here rather than there, no matter how honorable and legitimate the science itself is.

We have very little in the way of Cost of Opportunity Patrols to look out for and identify investments that should be diverted elsewhere, whether because a project is too costly, too unlikely to have the payoff its funders expect (e.g., real health improvements), too marginally incremental, or simply because there are more urgent issues that need solving or are more likely to actually be solved in the predictable future.

In reality, such COPs would either not be tolerated, or would be manned (and womanned) by the same people who do the work that needs COPing. Science can be judged best by specialists, but they have to be disinterested, and in these days such people are in limited supply.

Further, suppose the COPs really tried to stop a major direction of science and direct the funds elsewhere.  The same kind of  waste would arise in the new area (because all the same people would flock to it, as happens now whenever something potentially new and fundable arises), often providing rationales that give their interests apparent 'fit' to the new area.  It's a hydra phenomenon.  But can we be blamed?  The problems are that our livelihood is at stake, and we've become so specialized that we can hardly even open a door with a new kind of handle.

Worse, the political opposition to loss of funds by current recipients would make bureaucrats shake in their boots, lest Senators would quietly threaten to curtail their budgets if the change is made, etc.  History shows that bureaucrats back away from real change or rationalize inaction and delay.  So rather than do something different, the pressures are to hyper-hype what you're doing now.

We are investing large amounts of money in genetics-related projects like biobanks and GWAS that we and others think ought to be outed by the COPs: they use a lot of funds for little payoff, relative to what could be gained by focusing on things that really are genetic, at least until we show that genetic knowledge really can lead to the elimination of such diseases.  Right now, the record is pretty weak in that department, but it should be and could be much better.

The problem is that while peer review is vested-interest review through and through to a great extent, finding better directions to pursue involves both new sets of similar vested interests and, from a scientific viewpoint, true uncertainties.  The riskiest kind of grant to apply for, as everybody knows very well, is one built on the riskiest ideas.  How can we really know that this or that specific new direction would pay off better, per dollar spent, than GWAS or whatever else will?  After all, none are complete busts.

Scientists love to rationalize their low-payoff work by saying that next century will show that our fine insights led to a worldwide transformation.  One example we recently heard is the defense of the expense of the Large Hadron Collider (which physicists are already hyping for LHC mega-projects beyond the discovery of the shy Higgs boson expected this year, to keep those bosons spinning).  After all, they say, Maxwell worked out the theory of electromagnetism, and that led to the transformation of the world by electric power and electronics a century later.  Think of your iPod and microwave oven.  The argument that the money could feed large numbers of children and give them longer, productive lives is just dismissed as if it were irrelevant (is it?).

Of course, there are such examples, and it's understandable that physicists would make use of them.  The same way that NASA justifies its budget in searching for life on Alpha Centauri (or Mars) because the moon project gave us Tang and Teflon and the profits from Apollo 13, The Movie.

Maybe a solid historiographical analysis of the past would give us criteria by which to set the COPs in motion, to know when to stop or cut back on projects that have been around for a while, and shift the investment elsewhere.  Maybe even just shaking up the system so it cannot become entrenched would do that to a useful degree.  Not, of course, as long as faculty salaries depend on grants and universities on grant overhead money.  But this kind of reform would be no different from letting industries thrive or shrivel depending on whether they generate good products that somebody wants, which seems perfectly wonderful in our society today.  Science, by contrast, really shouldn't be driven just by the short-term bottom line, but we clearly see the need for COPs where lines of work really do become stodgy, despite excited reviews written to lobby for increased support.

It's easy to complain about limited successes in science, when serious science attacks tough problems that are truly hard to solve--as those in genetics and evolution are.  But the more the hype, the more grandiose the promise, and the more the funds, perhaps the more should be the accountability.

Tuesday, March 27, 2012

Molecular Evolution -- Dr Masatoshi Nei's newly launched blog

It may be of interest to some MT readers that Dr Masatoshi Nei has just started a blog called Molecular Evolution.  His plan is that he and other contributors will post a link to a paper or even a book that should be of interest to the molecular evolution community, and add a few descriptive paragraphs and perhaps a critique, in the hope that readers will engage in discussion of the points.  His first post, now up, is about the genetics of caste production and altruism in bees.  It will surely be an interesting blog to follow.  This is hosted by one of the major-league centers of modern molecular evolutionary genetics, so it may be a bit technical for nonprofessional MT readers.  But it will be up to date and thoughtful.

Hay, you! The humble, unsuspected source of the modern world

We have many fancy theories to explain the rise of civilization as we know it.  It was thumbs that led to tool use, that led to agriculture, that led to settled populations.  But no! It was the brain and its ability to enable hunting or gathering parties and war strategies, that led one group to gain control and grow at the expense of others.  No!  It was our longevity that evolved so we could take care of our relatives and more of us could live in the population at any time.  No, no, no!  It was the domestication of draft animals.  No again!  It was the Foxp2 gene and the capacity it gave us for language. No, not that! It was how some races became more intelligent than others, and could live in colder climates than in our African homeland....and thus conquered the world with advanced civilizations.  Yes?

Well, no!  It was after all just the result of a weed, that had nothing to do with our evolution as smart and facile beasts.  What made the world as we know it was hay.  It's the Hay Theory of History, as described by the physicist Freeman Dyson (and cited by ecologist John Lawton on BBC Radio 4's The Life Scientific the other day -- and Dyson says he got it from someone else, who surely got it from someone else):
The most important invention of the last two thousand years was hay. In the classical world of Greece and Rome and in all earlier times, there was no hay. Civilization could exist only in warm climates where horses could stay alive through the winter by grazing. Without grass in winter you could not have horses, and without horses you could not have urban civilization. Some time during the so-called dark ages, some unknown genius invented hay, forests were turned into meadows, hay was reaped and stored, and civilization moved north over the Alps. So hay gave birth to Vienna and Paris and London and Berlin, and later to Moscow and New York.
The idea is that the Roman Empire didn't need hay (and thus didn't have hay) because grass could grow all year in the Mediterranean climate, and so horses could always be put out to pasture.

Trajan's column, Rome
This theory doesn't go entirely unchallenged, as you might imagine.  See this thread on the theory from 2005, e.g., in which one commenter cites no less an expert than his cousin, said to be an authority on Roman history.  Cousin X was thoroughly unconvinced and among other challenges, said this:
Cato the Elder (234-149 BCE) makes mention of hay in his 'De Agri Cultura', LIII, in a fairly offhand manner, indicating that this is no new-fangled or exotic process:
"Cut hay in season, and be careful not to wait too long. Harvest before the seed ripens, and store the best hay by itself for the oxen to eat during the spring ploughing, before you feed clover."
Panel from Trajan's Column, depicting haystacks
Indeed, a column, erected in AD 113, was built to commemorate the emperor Trajan, who ruled the empire from AD 98-117, and his victory in the Dacian wars.  It's decorated in a spiral bas-relief, and the very first panel shows what seem to be haystacks (on left).

So, another theory about the rise of civilization bites the dust.  But it does sound good ... until you probe a bit. (And, incidentally, we couldn't find a cousin who would comment.)

If there is a serious point to be made, it is how easy it is to make up a simple one-factor story to explain a complex event, occurrence, or trait. Causation in human life is regularly so complex, indirect, and changeable that we should not be surprised to find weak prediction from things we think we know, and concomitantly, regular discoveries of things with major effect that we hadn't suspected. That is why the Hay Theory of History can sound so appealing, as can all the theories we cited above -- and all those gene 'for' disease or trait discoveries that come and go. One factor may be first, or important, but it leads other factors to develop, or to other changes that then become causal factors in a continuing cultural evolution. That's how life really is.

Hay, you wouldn't be here otherwise!

Monday, March 26, 2012

The HKPP Hall of Fame

We posted last week about hypokalemic periodic paralysis (HKPP), a disorder that causes weakness and sometimes even temporary paralysis due to a drop in blood potassium levels.  As it happens, one of our favorite BBC radio programs, In Our Time, discussed the 18th-century philosopher Moses Mendelssohn last week as well.  He is considered one of the foremost architects of the Jewish Enlightenment, who helped bring Judaism into mainstream European culture, and he was a strong proponent of religious tolerance.

Because neither of us knew much about Mendelssohn (grandfather of the composer, Felix Mendelssohn), Ken googled him. Among other facts about his life, the Wikipedia article about him had an arresting paragraph about an affliction he was known to suffer from.
In March 1771 Mendelssohn's health deteriorated so badly that Marcus Elieser Bloch, his doctor, decided his patient had to give up philosophy, at least temporarily. After a short and restless sleep one evening, Mendelssohn found himself incapable of moving and had the feeling of something lashing his neck with fiery rods, his heart was palpitating and he was in an extreme anxiety, yet fully conscious. This spell was then broken suddenly by some external stimulation. Attacks of this kind recurred. The cause of his disease was ascribed to the mental stress due to his theological controversy with Lavater. However, this sort of attack, in milder form, had presumably occurred many years earlier.  
Could this be another addition to the HKPP Hall of Fame (we've previously written about Elizabeth Barrett Browning and the possibility that she had this disorder)?  We're on the trail.

Friday, March 23, 2012

You look just like.... well, almost like

Female hoverfly on cistus flower; Wikimedia

Ken mused a few weeks ago about mimicry, the traits that many species have evolved as defense against predators, according to Darwinians -- the butterfly with an eye spot on the back of its wing, the moth that has taken on the coloring of tree bark.  A commentary in this week's Nature, accompanying a paper about mimicry ("A comparative analysis of the evolution of imperfect mimicry", Penney et al.), quotes the British naturalist, Henry Walter Bates, who described these kinds of traits 150 years ago as "a most powerful proof of the theory of natural selection". 

Ken's point in his post was that, yes, Darwinians have explained these as examples of very clever adaptation, a way to outwit predators and increase the odds of survival, but that in fact these ruses aren't always all that successful.  As he said, after describing some examples of very effective mimicry in butterflies that he had come across in his travels, "...while I did see the effectiveness of protective coloration in these two instances, I also did, after all, see the butterflies.  I wasn't completely fooled."

Indeed, if it's so effective, why haven't all species evolved protective coloration?  The Darwinian answer is that it's only one kind of adaptation, and there are many others.  Each trait has its own adaptive purpose, and it is the job of science to uncover what that is.  But, as Ken also pointed out, most adaptive explanations can't be confirmed, no matter how plausible they seem.  Furthermore, whether an organism is eater or eatee may typically be largely due to chance, and the genetic contribution is usually very slight and essentially undetectable (even industrial melanism in moths, recently confirmed statistically with new data, took a lot of work to demonstrate).  And the assumption that natural selection detects all functional genetic variation is just that, an assumption, but it makes Just-so stories about adaptation unfalsifiable.

We read the Nature pieces on mimicry, then, with this in mind.  The question posed in the Penney et al. paper is why some harmless hoverflies are such good mimics of stinging Hymenoptera (wasps or bees), while others are much less convincing.  They point out that evolutionary theory about mimicry assumes that some copies are pretty exact, but that examples of inexact copying abound (though there does come a point where one would ask how we're sure something is in fact a mimic, or what 'exact' means in this kind of situation). 

Explanations of poor mimicry include: that it may look poor to us but is good enough to fool a predator; that imperfect mimicry is even more adaptive than perfect mimicry; that imperfect mimicry benefits co-specifics (a kin selection argument); that there are constraints on the evolution of a more perfect copy; or relaxed selection, whereby selection for mimicry becomes weak enough that it is "counteracted by weak selection or mutation" -- that is, there's no selective benefit to refining the mimic further. 

To try to determine which of these explains the poor hoverfly mimics, Penney et al. used "subjective human rankings of mimetic fidelity...across a range of species" of two different taxa (Syrphidae and Hymenoptera) and compared them for consistency against a statistical analysis of trait values. They found a strong positive relationship between body size and mimetic fidelity, and suggest that "body size influences predation behaviour and thereby the intensity of selection for more perfect mimicry."

The idea is that the larger the prey, the more benefit to the predator, and thus the more urgently the prey needs to figure out a way to avoid the predator.  Smaller or more abundant hoverflies needn't spend so much energy trying to fool a predator, as each insect is less likely to be preyed upon because there are more of them to choose from, or because they are less of a mouthful, and so less desirable. 

So, they explain imperfect mimicry as the relaxation of selection on mimicry, though they do not find this counteracted by weak selection or mutation, and they do not reject the constraints hypothesis.  They conclude that "reduced predation pressure on less profitable prey species limits the selection for mimetic perfection."

The same explanation always, but always different
Notice that each of the 5 possible explanations they offer assumes that selection of some sort must be the explanation, if only they knew which.  This is the Darwinist assumption: that the ground-state of life is competitive natural selection.  If one selection story is shown to be wrong, then it must be another, as we've seen in this case.  This explanatory tack is very widely accepted, indeed assumed without question.  But the assumption of selective adaptation is not itself ever questioned.  Is it true?

More accurate than that assumption is that the ground state of life is existence, or over time, persistence.  Whatever reproduces, reproduces.  This is an assumption, too, and is testable....but isn't very helpful at all in and of itself.  We can go a bit further: There is differential reproduction among differing genotypes, but even in a totally non-selective world this would be the case (in formal terms, genetic drift is inevitable).  Sometimes success may be due to a systematic relationship between the state that succeeds and its success, and that's natural selection, but this need not be the case.  The question is when and to what extent predictable, non-random  change is occurring, and that is not at all easy to show most of the time.
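To make the drift point concrete, here's a minimal sketch of a neutral Wright-Fisher model -- our own illustration, not from any paper, with arbitrary population size, generation count, and seeds:

```python
import random

def wright_fisher(n=100, p0=0.5, generations=1000, seed=0):
    """Neutral Wright-Fisher model: each generation, 2N allele copies are
    drawn at random from the previous one.  No selection at all, yet the
    frequency wanders, and usually fixes at 0 or 1."""
    random.seed(seed)
    p = p0
    for _ in range(generations):
        copies = sum(1 for _ in range(2 * n) if random.random() < p)
        p = copies / (2 * n)
        if p in (0.0, 1.0):  # allele lost or fixed; drift has done its work
            break
    return p

# Replicate populations, identical except for chance, end up scattered:
outcomes = [wright_fisher(seed=s) for s in range(20)]
print(outcomes)
```

Every run starts at a frequency of 0.5 with no genotype favored, yet the populations diverge, many fixing one allele or the other -- differential reproduction with no selection anywhere in the model.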

More profoundly, selection need not be (as Darwin seriously thought, and as most modern biologists accept without thinking seriously about) the force-like phenomenon it is usually, if often tacitly, assumed to be.  It can be weak, ephemeral, local, moveable, and even probabilistic.  Even from a purely selectionist point of view, all sorts of species with all sorts of variation are reproducing in all sorts of ways in relation to all sorts of factors -- including each other. There is no reason to expect that single factors, alone, will necessarily motor on through with some clear-cut force-like trajectory of change.  These statements are not at all controversial, yet seem to be in effect ignored when each investigator's favorite trait is being evaluated 'on the margins' as one would say in statistics: that is, evaluated in isolation, on its own.

We have a great parallel here with the polygenic causation that is so pervasive and that frustrates GWAS, as we've said countless times here.  With polyfactorial ecologies, what oozes through the blizzard of factors will not necessarily be simple or explicable in terms of one factor on its own -- say, looking like something else.  This is a very different perspective from trying to analyze everything as if it is the type of selection we have to identify, or to explain why, surprisingly, it's not perfect.

Think of it in this very simple way.  It is almost always possible to change most traits.  Experimentally, this is reflected by the fact that most traits respond to artificial selection.  In this case, that means that it should always be possible for natural selection to lead to change in ways that make any species that somebody else eats look more like the background of where it lives (even against bacterial predators, some form of camouflage defense at the molecular level should always be possible).  If the selectionist stories are accurate, that camouflage increases your odds of living to frolic another day, then every species should be camouflaged and should normally dwell on its match. 

This is so clearly not how nature is that one wonders why fundamentalistic Darwinism ever took hold, even by Darwin himself.  Why isn't everything camouflaged?  The answer, which we referred to casually as the 'slop' of nature in earlier posts (e.g., here), is that evolution is persistence in the entire ecology of each organism, and sometimes something seemingly so obvious as mimicry clearly seems to happen. But not most of the time.  Or, each trait in each organism can be argued to have some such story. That is so ad hoc, or perhaps post hoc, it has a resemblance to creationism--in the epistemological sense of being something in which every individual observation has the same explanation (selection made it so), no matter what the details.  If we assume selection, we can always adjust our explanations for what's here to fit some selection-story.  One simple component of this, obviously, is that the predators are evolving their detection ability as well.  It's all an endless dance, and this is not controversial biology, but is within what we know very well.

Biology should grow up.  The ground state of life is persistence, however it happens.  And stuff happens.  There are lots of ways to persist.  Selection is one, but it's only one, and it is itself a subtle, moveable feast.

Thursday, March 22, 2012

Random events result in order -- how?

Development is ultimately very organized and predictable -- children look like their parents, legs are generally where they belong, and a lion never gives birth to a whale -- but yet another paper describes the randomness of the processes at the cellular level.  How can this be?

The paper is in the April BioEssays: "Genes at work in random bouts", by Alexey Golubev.  Golubev says that things that go on inside cells are generally thought to be determined by the interaction of different molecules, which is itself determined by the concentration of those molecules in the cell.  Ordinary differential equations (ODEs) describing all this can be written, and, Golubev says, "ODE solutions may be consistent with oscillatory and/or switch-like changes in molecule levels and, by inference, in cell conditions."  This begins to make intracellular processes sound determined and law-like.
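To illustrate the kind of switch-like ODE behavior meant here, consider a toy sketch -- our own, with purely illustrative parameters, not a model from Golubev's paper -- of a single self-activating gene.  The steep Hill term is positive feedback, and it gives two stable steady states:

```python
def simulate(x0, steps=20000, dt=0.01,
             basal=0.2, vmax=4.0, K=1.0, n=4, decay=1.0):
    """Euler integration of  dx/dt = basal + vmax*x^n/(K^n + x^n) - decay*x.
    The product x activates its own production; with a steep Hill function
    (n=4) the system is bistable."""
    x = x0
    for _ in range(steps):
        dxdt = basal + vmax * x**n / (K**n + x**n) - decay * x
        x += dt * dxdt
    return x

low = simulate(0.1)   # starts below the threshold: settles into a low state
high = simulate(3.0)  # starts above the threshold: settles into a high state
print(round(low, 2), round(high, 2))
```

Two trajectories, identical equations, different starting concentrations: one ends low, the other high.  This is the deterministic, law-like picture that the rest of the paper complicates with stochasticity.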

But, the article is basically about the stochastic (random, or probabilistic) events occurring in cells that affect their gene expression patterns, and hence the cycle between cell divisions, or the time it takes the cell to express the genes related to its particular tissue.  This variation, the author notes, makes stem cells--cells not committed to just one cell-type--plastic and flexible. 

But, as he points out, the idea of molecular concentrations is only true at the level of populations of cells, not in single cells themselves.  There's a lot of randomness in terms of what's going on in single cells, in cell differentiation and cell proliferation, particularly with respect to when genes are turned on or off, and thus which proteins are available, and what happens when. 

The question becomes, then, given all this stochasticity in cellular activity, how development is so organized.  The apparent problem is that once one reaction has taken place, it affects the next reaction, and this includes hierarchical changes such as changes in gene expression in the cell.  Thus, the cell is not just a mix of things, each in large numbers, that will 'even out' over time.  Differences that can be occasioned by chance in a cell can add up. Of course, if the cell continues to detect the same external conditions, its response may adjust so that things do even out.  But it doesn't need to happen. 

On the other hand, most tissues in most organisms are comprised of many cells of the same type.  Each may be experiencing stochastic changes, but their tissue-specific behavior may usually 'even out' because the variation will be slight and in different directions among the cells, so that on average they are doing the same, appropriate, thing.  In unusual circumstances, if this doesn't happen, the organism may be very different from its peers....or  it may not survive.
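As a toy illustration of this 'evening out' -- our own sketch, not from the paper, with arbitrary switching rates -- imagine each cell's gene toggling on and off at random in 'bouts' of expression:

```python
import random

def tissue_output(n_cells=2000, steps=200, p_on=0.1, p_off=0.1,
                  rate=5.0, seed=42):
    """Each cell's gene switches on and off stochastically.  Returns a
    per-cell total of product made over `steps` time steps -- a crude
    stand-in for a protein level."""
    random.seed(seed)
    outputs = []
    for _ in range(n_cells):
        on = False
        total = 0.0
        for _ in range(steps):
            # stochastic switching between expression states
            if on and random.random() < p_off:
                on = False
            elif not on and random.random() < p_on:
                on = True
            if on:
                total += rate
        outputs.append(total)
    return outputs

out = tissue_output()
mean = sum(out) / len(out)
# Any single cell is noisy, but the tissue-level average is stable:
# with symmetric switching, cells are 'on' about half the time.
print(min(out), round(mean, 1), max(out))
```

Individual cells range widely, but the average across a couple of thousand of them sits reliably near the expected value -- the many-cell tissue behaves predictably even though no single cell does.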

This perhaps reflects a fundamental property of populations, known as the 'law of large numbers'.  The theory behind this (which Ken was just reading about in The Taming of Chance, by Ian Hacking, Cambridge University Press, 1990) comes from the study of populations of individually differing members, whose aggregate behaviors have regular distributions: the 'normal' or bell-shaped--or at least orderly--distributions of stature, incomes, and so many other things.  In another common phrase, they have 'central tendencies.'  'Normal' meant that most were near the norm.  This statistical idea was worked out over the 18th and 19th centuries, and it raises interesting questions about causation.  Hacking's book shows how people had to learn that causation was not about precisely fore-ordained laws but about probabilities, and that this applied to society.

The classic cases had to do with things like suicide.  One can't predict who will commit suicide in a given year, or by what means.  But the numbers, and the number who do it by each method, are very similar from year to year in a given population.  Likewise, life expectancy is an average, and nobody lives exactly that long: some die younger, some older.  The same goes for many social facts, like political affiliations and so on.

Why is this?  It is the net result of many different individuals, each with slightly varying characteristics.  The historical explanations offered are beyond this post, but in essence there are many contributing factors of diverse kinds, mostly unknown; a few individuals are exposed to many of them, others to only a few, but most of us to some 'average' amount.  The fraction exposed to many such factors is the fraction of individuals who are taller, more intelligent, ...., or who commit suicide.

The law of large numbers is a statistical fact that can be proven mathematically under rather general conditions.  This leads to central tendencies.  That is why population statistics took on a central role in social sciences, where often the underlying causal factors and their specific effects are unknown or hard to estimate accurately.  Social sciences can 'understand' society--at least predict some things about it--without understanding causation in the strict sense.  And in some situations, these things don't work very well--economics is one, in which stability of population outcomes occasionally, at least, takes a quick left-turn.
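A quick simulation shows the idea -- our own illustration; the number of factors and the population size are arbitrary.  A 'trait' built as the sum of many small, unmeasured factors yields a bell-shaped population with a stable mean, even though no individual's value is predictable:

```python
import random

random.seed(0)

def trait(n_factors=50):
    """A 'trait' as the sum of many small, unmeasured factors, each
    pushing the value up or down a little."""
    return sum(random.uniform(-1, 1) for _ in range(n_factors))

population = [trait() for _ in range(20000)]
mean = sum(population) / len(population)
# Most individuals land near the norm; a few sit at the extremes.
near_norm = sum(1 for t in population if abs(t) < 4) / len(population)
print(round(mean, 3), round(near_norm, 3))
```

The population mean sits near zero with roughly two-thirds of individuals within about one standard deviation of it, and that regularity holds without our knowing anything about which factors hit which individual -- prediction about the aggregate without causal understanding of any single case.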

Probably the same applies to the populations of cells that make up a tissue.  If so, even with a high amount of probabilistic events making each cell different, there can be a central tendency--for the kidney to filter blood in similar ways, and so on.  Because of local differences among cells, different genotypes, and different life-experiences, kidney function differs among people.  Some are at the extremes, and we call that 'disease', but most are roughly near the norm.

Biologists routinely speak of chance, but often act as if they believe that genes 'determine' the organisms the way a program determines what a computer does.  They know about variation in populations, and how, for example, polygenic traits like stature or blood pressure (the darlings of the GWAS world) vary, even if they are driven to enumerate all the underlying causes that vary among individuals in the population.  In a sense, the population concept applied to tissue is of the same sort, and provides another source of variation between genotype and trait.

Wednesday, March 21, 2012

The plot thickens -- you are what you eat in more ways than one

Your genome on lettuce
Banish the thought that you've got complete control over expression of your own genes.  It turns out that what you eat is also a player.  A paper in the April issue of BioEssays, "Beyond nutrients: Food-derived microRNAs provide cross-kingdom regulation" (Jiang, Sang and Hong), reports that not only do we derive nutrients from food, but that food-derived micro RNAs can affect expression of our genes.  Yet another instance of inter-species cooperation.

Micro RNAs are a relatively recently discovered component of the genome that modulate gene expression.  They silence protein-coding genes by binding to transcribed mRNA and preventing its translation.  As Jiang et al. report, more than 15,000 miRNA loci have been found in 140 species, and are annotated in the miRBase database.  These are miRNAs found inside cells, but recently micro RNAs have also been found in blood serum and plasma, urine, saliva and other body fluids.  While RNA is known to be easily degraded and rather fleeting, it seems that these circulating RNAs are highly stable, and resistant to the usual destructive elements: RNases, and high or low pH or temperature.  And, say Jiang et al., these miRNAs seem to be highly correlated with diseases such as cancers and diabetes, and with tissue injury, which suggests they could be of use as biomarkers.

But, the paper only notes this in passing, primarily focusing on these miRNAs as signaling factors.
Despite numerous reports of the detection of secreted miRNAs, the exact mechanisms of how these miRNAs are transported and act as signaling molecules are not clear. They have been implicated in stem cell function, hematopoiesis, and immune regulation. Recently, several lines of evidence have suggested that miRNAs are selectively packaged into microvesicle (MV) compartments to function efficiently in mammalian cells. MVs are membrane-covered vesicles and can be released by various kinds of cells.
It may be that being packaged in MVs is what gives these miRNAs their stability, as they are sequestered from RNAses and so on.  And, the packaging process seems to be selective, as only specific types of miRNAs have been found packaged in this way.

Jiang et al. write that "cross-kingdom regulation through miRNA/double-stranded RNA (dsRNA) has been observed in many organisms and engineered systems."  And it often alters gene expression in the host organism.  Examples include planaria and parasitic nematodes, which can take exogenous double-stranded RNA into their cells, as do insects when fed plants expressing it.
For example, when cotton bollworm larvae are fed on plant material expressing dsRNA targeting CYP6AE14, whose gene product helps the insect to counteract the deleterious cotton metabolite gossypol, the transcript level of this gene is decreased and causes larval growth retardation.
It has recently been demonstrated (and we have to take Jiang et al.'s word for it, because the paper by Zhang et al. is in Chinese, unreadable by us) that "mature single-stranded plant miRNAs are present in the serum and tissues of mammals that use plants such as rice as their food sources."  They verified that these are plant RNAs, and that they survived passage through the mouse gastrointestinal tract intact.
Moreover, the authors identified the low-density lipoprotein receptor adapter protein 1 (LDLRAP1) as a target for MIR168a, a plant miRNA that was present at a relatively high level in human sera. The presence of exogenous pre-MIR168a or mature MIR168a miRNA can significantly reduce LDLRAP1 protein level in culture. Furthermore, feeding mice with rice that produces MIR168a reduced the amount of LDLRAP1 protein in liver, which in turn resulted in an elevation of the LDL level in mouse plasma. Both the decrease of LDLRAP1 and the increase of LDL in plasma, however, could be blocked by the addition of an anti-MIR168a antisense oligonucleotide. These elegantly executed experiments not only confirm the role of circulating miRNAs in intercellular communication, but also suggest that miRNAs can transport and function in a cross-kingdom manner.
How miRNAs would survive digestion and be absorbed is a question without simple answers.  The first issue has to do with how they survive digestion and cross the gut.  A second is why we haven't evolved means to detect and degrade them; after all, our immune system is very able to recognize foreign stuff.  Jiang et al. describe the possible conditions under which plant miRNAs can survive this passage, and we won't replicate that here.  Suffice it to say that plant miRNAs are packaged differently from mammalian miRNAs, and that mammalian intestinal epithelial cells 'somehow' ingest them.  The authors wonder if there's a receptor or some such on the mammalian cell surface that might recognize plant miRNAs and pave the way.  After being taken up by intestinal cells, these miRNAs are then passed to downstream cells, such as the liver, where they are involved in gene regulation.

There's a lot of hand waving going on here, but if true, this is a thought-provoking example of the synergy between organisms.  As this field matures, you can be sure that potential medical uses won't escape pharmaceutical companies.  

Regular readers may notice that in describing these newly discovered miRNAs, we've invoked a number of principles that we think are at the core of life, and that we've recently enumerated on MT -- sequestration, modularity, cooperation, signaling, chance, adaptability.  It's always gratifying when these principles apply in circumstances that were not known at the time we compiled them. 

Whose genome is it, anyway?
But what about evolution?  If we interact with genes (miRNAs are coded from the originating species' DNA) from other species, and in at least many cases depend on that, then perhaps the view of a genome as all contained within a species' cells is misleading.  Perhaps 'our' genome includes that of the E. coli we need in our gut, or, in each location, 'our' genome includes miRNAs from foods we depend on for survival.

Normally, one would expect us to lose genetic mechanisms if they are replaced by something else.  If we depend on exogenous genetic information, then mutations in our own genes that do the same thing would have no selective disadvantage, and those genes could disappear.  In that sense, the two species' genomes become joined at the proverbial hip.

Far-fetched?  Well, long ago mitochondria and chloroplasts entered cells and evolved from parasite to necessary cellular components.  miRNA and bacterial genomes and so on aren't so thoroughly incorporated (yet), though some viral genomes are.  So there is probably a gradient of intergenomic dependency among species.  This is an extension of predator-prey dependencies, but is similar in concept, just more localized in genes.

Once again, this discovery (if confirmed and shown to be more than trivial) will add to the causal complexity of human traits.

Tuesday, March 20, 2012

Control issues

Happiness yoga image, Facebook
We have listened to a recent BBC radio interview on The Life Scientific with Michael Marmot, the epidemiologist who has published many books and papers showing that social status is correlated with health.  This is more than just that poor people live in dirtier, more dangerous places, and have dirtier, more dangerous lives, and eat junkier diets.

Marmot is known for his innovative, and well-replicated, studies, the "Whitehall studies", about the role of stress in health and longevity.  Before his work on British civil servants, it had universally been assumed that the boss has the most stressful job and that the price you pay for status and the higher pay grade is a lower health grade: more coronaries-under-stress.  Better to lay low, and stay healthy!

Well, that's not what his research has shown.  Instead, and to the contrary, the high stress of the high job is accompanied by better health and longer life!  It isn't that the boss has less stress, because the boss typically has more.  But the boss is in control, and that is the difference.

The bottom line is that the threat to health in this context is not the presence of stress but the absence of control.  Frustrating constraints are killers!  The problem, in Marmot's mind, is that our society gives most people little control over their lives, and he would like to see society adopt economic and social policies and conditions that liberate a large fraction of society from health-destroying servitude.

This is a particular issue, but it is highly relevant to many topics we deal with here on MT, which relate to what we believe are very simplistic ideas about genetic causation and how to find it.  If stress of the kind we're discussing here has a major effect on physiology that is associated with disease--or even death!--then clearly it isn't a genotype making the major difference.  More importantly, epidemiologists can (claim or try or hope to) measure salary, diet, marital status, exercise, smoking, drinking, drug use, and so on.  But stress related to control of one's life experiences is much harder to measure, even if you think to look for it, and it would clearly be omitted in the massive-scale case-control studies that make up our current commitment to blimped-out GWAS.

There is no reason to think that a given genotype would have similar effects in environments that, overall, make substantial differences to health.  Even in regard to stress alone, would similar effects of a genotype be expected whether the person is a machinist, government minister, religious minister, railroad employee, or school teacher?

When something so profound, perhaps fundamental, and subtle as a feeling of control--as distinct from normal measures of life experience like education, salary, neighborhood and the like--can make such a difference, then we need to make much more circumspect claims, and develop much better study designs, than is the case nowadays.  Of course, such more responsible studies are not in the cards, because they'd be too much trouble to do.....

Monday, March 19, 2012

Periodic paralysis -- a single gene disorder striking close to home

One of the things Ken and I did while we were in New York last week was to have lunch with Dr Jacob Levitt, the head of the Periodic Paralysis Association (PPA).  The periodic paralyses are a rare set of ion channel disorders that are still not well understood.  Partly, of course, that's because they are so rare (prevalence is on the order of 1 in 100,000), and partly because the normal functioning of ion channels isn't itself well understood.  Channelopathies themselves are not rare -- epilepsy and cystic fibrosis are better-known examples of ion channel dysfunction.

As the PPA website says,
Periodic Paralysis is a group of disorders whereby patients become weak due to triggers such as rest after exercise or certain foods.  These disorders are part of a broader class of disorders called ion channelopathies, in which a genetic defect in a muscle ion channel results in symptoms of episodic stiffness or weakness in response to certain triggers. 
We had a fine meeting, and, among other things, were inspired to learn more about ion channels, how they work normally, and how they can go awry.  Why?  Because our daughter has hypokalemic periodic paralysis (HKPP), and it is a life changer.  And not in a good way.  Dr Levitt, a dermatologist, has HKPP himself and he runs the PPA.

There are various periodic paralyses (hypo- and hyperkalemic pp, and Andersen-Tawil syndrome), and they are often difficult to diagnose.  Indeed, many people go for years without a diagnosis.  Most physicians may have heard of them once, long ago (or slept through that part of med school, or forgot their physiology, or have simply never seen a case of these rare disorders).  Even now, and especially in the past, people with these disorders could live a lifetime with neither diagnosis nor therapy.  An extensive bit of sleuthing has led us to think that the famous pioneering Victorian poet Elizabeth Barrett Browning had HKPP; she was notoriously debilitated by a mysterious disease about which she wrote prolifically in her love letters to the poet (and her future husband) Robert Browning, as we surmised in detail here.  The disorder wasn't recognized when she was alive, so it's no surprise that EBB's doctors were completely at a loss as to what was causing her perpetual weakness.  It's more of a surprise when the diagnosis is missed today, as it needn't be.  But it too often is.

As regular readers of MT know, we write a lot about complex diseases, and in particular about how the idea of genes 'for' disease can be a naive one.  For many traits, perhaps most traits, in organisms, multiple genes contribute and most of the genetic aspect of variation of the trait is due to multiple, small contributions from many different genes.  Each individual with a given trait value (like, say blood pressure, height, glucose or cholesterol levels) has a unique genotype that contributes to that value (not to mention environmental contributors).  The hunger to find simple causation that we often write about is manifest, and understandable, even if the reality is different.  That hunger is what feeds the GWASification of everything, that is currently at such a fevered pitch.

So, it is a bit ironic that we have a daughter with what has generally been considered to be a monogenic condition -- a condition caused by a single mutation.  To date, causative mutations have been identified in a handful of ion channel genes, that disrupt the structure of the channel so that it malfunctions in response to specific environmental triggers.  Some are sodium channel genes, and at least one is a calcium channel gene, which is interesting because calcium channels don't seem to even be used by skeletal muscles, as sodium channels are, so it's difficult to understand why disrupted calcium channels can shut down these muscles.  Insulin is also related to the process, but it interestingly doesn't seem to be related to diabetes.

The problem exemplifies the importance of partial sequestration and modularity, and others of the basic principles of life that we often write about.  An ion channel is used by a cell to sense and relate to its environment: to shove excess negative or positive molecules out or import them in, to keep the ionic or pH (chemical) balance suitable for the reactions that must occur inside the cell, and an appropriate difference from the outside world of, say, the blood stream.  In simplified terms, if the cell is too salty relative to the blood stream, or too unsalty, the cell can burst, or be drained of water, or be unable to import needed ingredients or export waste, etc.  It's a fundamental way that cells relate to their environment.  And many different genes are involved in the ion channels, or chemical pores, through which these molecules shuffle in and out.

But, as we've blogged about before (here, e.g.), even these 'simple' processes are complex.  Many genes are involved, but it is not always the case that multiple minor contributions from different genes add up to trouble.  In some cases, and hypokalemic periodic paralysis (HKPP) may be one, there is what is called multiple unilocus causation: in a given case, only one variant gene may be responsible, but different cases implicate different genes -- only one gene per case.
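The logic of multiple unilocus causation can be made concrete with a toy simulation (a minimal sketch; the gene and case counts are made up for illustration, not taken from HKPP data).  Each affected individual's disorder is fully explained by one gene, yet no single gene explains the affected population:

```python
import random

random.seed(1)

N_GENES = 5     # hypothetical candidate ion-channel genes
N_CASES = 1000  # affected individuals

# Under multiple unilocus causation, each case is caused by a
# mutation in exactly one gene, but which gene varies by case.
causal_gene = [random.randrange(N_GENES) for _ in range(N_CASES)]

# Count how many cases each gene accounts for.
counts = [causal_gene.count(g) for g in range(N_GENES)]
for g, c in enumerate(counts):
    print(f"gene {g}: explains {c / N_CASES:.0%} of cases")

# Every case is 'monogenic', yet no gene explains all cases.
assert sum(counts) == N_CASES and max(counts) < N_CASES
```

A case-control study that pools all affected individuals would see each gene associated with only a fraction of cases, even though within any one family the inheritance looks perfectly Mendelian.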

Some people can trace this particular disorder through generations of their family, while others are the only known affected family member.  The same mutation within a single family can produce very different symptoms, from infrequent or even no attacks of weakness to waking daily with paralysis, and different people with the same variant can likewise differ in severity.  At the same time, essentially the same phenotype, or at least the same spectrum, is due in different individuals to mutations in different genes.  Other examples of such multiple unilocus causation include retinitis pigmentosa, an inherited disease that leads to blindness in middle age, and congenital deafness.

Some individuals have none of the known mutations.  This is known because a physician in Germany, Dr Frank Lehmann-Horn, generously donates genotyping and sequencing services to anyone who has been diagnosed with one of these disorders.  Affected individuals naturally would very much like to know the cause of their disorder, and when the cost of whole genome sequencing really is $1000 per genome, they will likely have their genomes sequenced so that a systematic hunt for causation can be undertaken.

Of course, finding the causative mutation in such situations, with hundreds of ion-channel genes, and their regulation, to search through, won't be easy when, as in our daughter's case, there aren't other affected family members to compare.  We all differ from each other at millions of loci in our genomes, and determining which variant causes a given case is daunting, even focusing on ion channel genes alone.

Affected individuals don't need to know what causes their disorder in order to treat it, it's true, because it is the ion concentration that's the trait, regardless of its origin -- at least up to a point, as understood today.  Indeed, knowing the gene that causes a monogenic disease is rarely useful in treatment: hundreds of such 'Mendelian' traits are known, but few are really treatable based on the gene in question.  But patients do worry that in the future some doctor won't believe their diagnosis unless they have an identified mutation, so the identification can be comforting in that sense.  And identifying as completely as possible the suite of mutations that cause this disorder could be useful in understanding how things go awry, and could in principle lead to better treatment.

Of course, we study and write about aspects of genetic causation and generally see complexity when others yearn for simplicity, so there is the danger that when the story strikes close to home, we, like others, might naturally drift toward a search for simple causation -- making the very 'gene for' mistake we note when others make it.  We are interested in understanding more about these cellular disorders, but have to be wary lest we fall into that trap.  Indeed, that is a major reason for writing about this issue here.

So, while we do think that complex traits should not be treated as though they were simple, traits that really are relatively simple are a different matter.  The search to understand the genetic basis of complex multilocus disease is challenging.  The search to understand multiple unilocus traits, and to know whether they are only the clearest subset of multilocus versions in the population, is somewhat different -- single gene changes might be easier to track and confirm when they are inherited.  The unexplained cases, like the unexplained heritability that we've written about, may be those due to multiple, individually minor, genetic variants.  As we have often said, the truly genetic disorders are where the money should go, at least if the aim is to show that understanding causation at the gene level is an important way to approach life.

Similar issues apply to evolution.  A multiple unilocus trait favored by natural selection could arise in different individuals in a population because of mutations in different genes with similar effect.  Over time, the population could come to be made of individuals who had the favored trait.  But this doesn't mean that they share the same genotype or that there would be detectable evidence for natural selection in any specific part of the genome -- because many different genes could each have experienced only weak selection in the population as a whole.  If there are many roads to Toledo, none of them need to be superhighways.
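A toy forward simulation shows why selection spread over interchangeable loci leaves little trace at any one of them (a minimal sketch; the locus count, starting frequency, and selection coefficient are assumed for illustration, and reproduction is simplified to asexual copying):

```python
import random

random.seed(3)

N_LOCI = 10        # interchangeable loci; any variant confers the trait
POP = 1000
GENERATIONS = 50
S = 0.05           # assumed fitness edge for trait carriers

# Each individual: 0/1 allele at each locus, starting at low frequency.
pop = [[1 if random.random() < 0.05 else 0 for _ in range(N_LOCI)]
       for _ in range(POP)]

for _ in range(GENERATIONS):
    # Carriers of the trait (any causal variant) get a small edge;
    # the next generation is sampled in proportion to fitness.
    weights = [1 + S if any(ind) else 1.0 for ind in pop]
    pop = [random.choices(pop, weights)[0][:] for _ in range(POP)]

freqs = [sum(ind[i] for ind in pop) / POP for i in range(N_LOCI)]
print("per-locus frequencies:", [round(f, 2) for f in freqs])
print("trait frequency:", sum(1 for ind in pop if any(ind)) / POP)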

Friday, March 16, 2012

The shelf life of a banana

We're in New York, where I'm working with collaborators on a project in which we're developing software to simulate complex genetic systems, so that, knowing the 'truth', we can investigate some of the questions being pursued these days in both epidemiological and evolutionary genetics.  Simulations don't generate actual reality, but when they generate something very similar to it, one can hope to make inferences about the truth, which in this arena is in some ways unknowable.  The idea of simulation is to improve our ability to guess the truth from data when things are complex, as they clearly are here.
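The appeal of this approach can be seen in even a stripped-down sketch (not our actual software; the locus count, effect-size distribution, and noise level are assumptions for illustration).  Because we plant the per-locus effects ourselves, the 'truth' is known exactly, and any mapping method can then be judged by how well it recovers those planted effects:

```python
import random

random.seed(42)

N_LOCI = 100    # loci with small additive effects (assumed)
N_PEOPLE = 500

# The 'truth': each locus gets a small effect size, known to us
# only because we set it -- unlike in real data.
effects = [random.gauss(0, 0.1) for _ in range(N_LOCI)]

def simulate_person():
    # Genotype: 0, 1, or 2 copies of the minor allele per locus.
    geno = [random.choice([0, 1, 2]) for _ in range(N_LOCI)]
    genetic = sum(g * e for g, e in zip(geno, effects))
    environment = random.gauss(0, 1)  # non-genetic noise
    return geno, genetic + environment

people = [simulate_person() for _ in range(N_PEOPLE)]
phenos = [p for _, p in people]
print(f"mean simulated phenotype: {sum(phenos) / N_PEOPLE:.2f}")
```

With the simulated cohort in hand, one can run any analysis -- a GWAS-style scan, a heritability estimate -- and compare its answers against the effects that were planted.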

For some years our collaborators (Joe Terwilliger and Joe Lee at Columbia University) have been saying that much of the discussion of these problems is needlessly about limitations in the available genetic data.  GWAS and other similar genomic approaches to the genetic causes of traits like disease, and to the results of evolution, have used increasingly extensive kinds of data.  For example, more and more sites of the genome are used to try to infer causation in nearby parts of the genome (we can refer to these sites as 'mapping markers').

Genetics has been going from one fad to another.  We first had mapping with limited sets of markers, which suggested regions of the genome that could be causally involved in some question (like adaptation to diet, or the risk of diabetes).  But the implicated regions of the genome were large, with many functional elements in each, and we couldn't easily identify the actual causal site or sites within them.

Then someone discovers that the genome is used for more than an intermediate code for protein (that's messenger RNA, mRNA): some RNA has direct function itself; one class, called microRNA (miRNA), affects the translation of mRNA into protein.  Someone else discovers that gene regulatory sites in the genome are important in controlling the expression of protein-coding regions.  Somebody else probes interactions among genes, claiming that these 'networks' are higher-order functional units.  Then it's found that chunks of DNA are duplicated or lost in some people but not others (called 'copy number variation', or CNV).  Others explore the chemical modification of DNA in cells, which affects which genes are expressed; this is called 'epigenetics'.

All of these things become 'omicized' -- in the scramble for money, attention, and, yes, even to actually do some real science, analytic platforms are developed for detecting these various elements in the genome: special genomewide tests for epigenetic sites or non-coding RNA, or rapid sequencing methods for the protein-coding parts of the genome (called 'exome sequencing').

Many people recognize that these are temporary stop-gaps.  Meanwhile, those who think that CNV will be the killer discovery that explains a huge fraction of the causation of diabetes or autism, or that miRNA is the key to regulation, argue about and develop special molecular and statistical tools for detecting their favored element.

Even those specializing in one or another of these fads, or subsets of genetically related causation, know that the tools are rather temporary.  The individual applications are discovering complex causal elements, but everyone knows that in total they still promise to account for only a fraction of the causation of the traits of interest.  They are holding actions, and this is openly acknowledged.  They keep the funds and research moving, and feed the technology companies so they can develop the next generation of genomic methods.  We know this, but we are institutionalized, so we can't wait for better tools.  We must keep the factories moving with these methods.

But they have the shelf-life of a banana.

It's easy, and perhaps correct, to be cynical about the great hype machinery that keeps the system in high gear.  It's costly, but we need to be paid, and the tech companies need to sell something today while they develop a tool for tomorrow.  We need to keep the graduate student pipeline flowing.

For years, in our various talks and mini-courses and papers, Joe Terwilliger and I have been saying that rather than spending too much time arguing about the best way to use and analyze these kinds of bananas, we should just assume that whole genome sequences will be available for everybody in the population.  Given the lock that the science establishment has on funds, and the way that technology keeps making serious increases in the amount of data we can generate and analyze, which in turn drives costs down, it seems likely that, barring international catastrophe, ubiquitous sequence data will become available.

This is now viewed by some as a kind of inevitable end point: finally we'll have all the data we need from a genomic point of view, and we can then really, truly, identify all the genetic causation that is involved in diabetes, cancer, how you vote, or whether you respond negatively to being sexually abused.

There are a couple of problems with this view.  First, the system will need to continue to produce and sell new gear, so new things will surely be discovered that need documenting.  That itself means that even whole genome sequencing is likely to have the shelf life of a banana.  The kinds of things we measure will be shown to be incomplete, and in that sense 'out of date'.  The way we measure the trait -- like diabetes or cancer -- will be elaborated in the same way, so that prior measures will be denigrated as primitive.  We'll have to do the same megastudies all over again.

But no matter how comprehensive these tools and data will become--and some have already gone beyond genes, even whole genomes, to enumeration of all cellular processes and so on, as though in recognition that genomes really aren't the answer, but that the research machinery must march on--they will not in themselves solve the main issue that we face:  causation is often clearly complex, changeable, statistically elusive, and not really reducible to an enumerable set of causes.  In addition to making sure that each step is only a partial step--rarely if ever reaching the point where something is actually 'solved' (because that would put us out of business)--we have not really come to grips with the fluid and complex nature of causation of the traits we're interested in.

The more contributing 'causes' there are for a trait, and the weaker each is on its own, the more unstable their actual effects will be, the harder they will be to estimate accurately, and the less useful such estimates will be as predictors.  In a sense, the trait may be the same while its causes are always substantially different.  The real problem is dealing with the trait, rather than attempting to enumerate its ephemeral causes.  How to do that is for the future, if we are really to come to grips with it.  That, we think, is where real innovation will have to come, rather than from the kinds of technical improvements that are steadily being made.
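The statistical side of this point can be illustrated with a small simulation (a sketch only; the sample size, noise level, and naive carrier-vs-non-carrier estimator are assumptions made for clarity).  Holding the total genetic signal fixed while spreading it over more loci makes each per-locus effect estimate far less reliable relative to its true value:

```python
import random
import statistics

random.seed(7)

def estimate_error(n_loci, n_people=400, reps=20):
    """Mean relative error in estimating one locus's effect when
    the same total signal is spread over n_loci loci (assumed)."""
    true_effect = 1.0 / n_loci  # total signal held fixed
    errors = []
    for _ in range(reps):
        genos, phenos = [], []
        for _ in range(n_people):
            g = [random.choice([0, 1]) for _ in range(n_loci)]
            y = sum(gi * true_effect for gi in g) + random.gauss(0, 1)
            genos.append(g)
            phenos.append(y)
        # Naive single-locus estimate: mean phenotype difference
        # between carriers and non-carriers at locus 0.
        carriers = [y for g, y in zip(genos, phenos) if g[0] == 1]
        others = [y for g, y in zip(genos, phenos) if g[0] == 0]
        est = statistics.mean(carriers) - statistics.mean(others)
        errors.append(abs(est - true_effect) / true_effect)
    return statistics.mean(errors)

for n in (1, 10, 100):
    print(f"{n:3d} loci: mean relative error {estimate_error(n):.2f}")
```

The sampling noise in each per-locus estimate stays roughly constant while the true effect shrinks, so with a hundred weak contributors the estimates are mostly noise -- which is the sense in which many weak causes make poor predictors.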

Whether, in a population that is already 7 billion strong, it will be good to continue to nibble away at the causes of traits that affect us as we age, or whether we'll just be creating countless new problems due to stress on resources and so on, is essentially a philosophical question.

Thursday, March 15, 2012

Integrators! Mount Up

Andrew Moore, Editor-in-Chief of BioEssays, recently called for greater recognition of what he calls "integrators."

These are scientists who earn fewer grants, churn out fewer results, and publish fewer papers than their peers. Instead, they synthesize and contextualize the work of others, often with a cross-disciplinary perspective, and often coming up with new tunes of their own.

Both kinds of contributions are crucial, but the former tends to earn higher scientific regard than the latter.  Moore asks us to drop the bias against this second level of analysis.  No matter what it looks like, integrators are creators too, and they're not taking the easy way out either.

Plus, as data and results pile up -- as it becomes increasingly impossible for anyone to keep in lock step with the output of highly specialized fields, and for specialists to keep up with what others are doing -- integrators are needed more than ever.

But they can't be just any geeks off the street; Moore suggests that universities take an active role in training integrators.  Perhaps anyone interested in training integrators should look to their local anthropology department.  There, they might find some good role models, because of the cross-disciplinary, context-aware, and synthetic nature of the research.