Friday, January 31, 2014

Identifying and writing bad and good evolutionary scenarios (A classroom activity)

I'm always looking for ways to improve evolution education in my classroom.  So I want to share something with you all that I tried this week in my Sex and Reproduction in Our Species (APG 310) class and that worked pretty well. 

Here's the activity (with my commentary in red) and it's yours for the taking/modifying as you wish. Cheers!
Classroom Activity:
Identifying and writing bad and good evolutionary scenarios 

Part 1. In class, in groups


1. What is evolution?

2. What are the four main mechanisms? (just names for now please)
3. What is evolution not?
4. Write an evolutionary scenario, without invoking any form of natural or sexual selection, to explain how humans lost our body fur and are now what’s often called ‘the naked ape.’
5. What are the components of natural selection scenarios?
6. What are the components of sexual selection scenarios? 
7. Could sexual selection explain human body fur loss? Why or why not?
8. What are some potential benefits to body fur loss? In what ways are they dependent on context/environment?

Students worked in groups of four through all the questions above. Then we held a discussion as a class and I wove in helpful slides to illustrate things better than I can on the whiteboard, like family resemblance with simultaneous uniqueness (and, hence, perpetual mutation/change/evolution). And like how we can blame (at least inchoate) genetic drift for Mitt Romney's Y chromosome existing at a relatively high frequency in the next generation.



Part 2. Homework, to be discussed next class
All of the numbered paragraphs below are answers that APG 201 students wrote for the following final exam question:  

“Give a plausible explanation, in Darwinian terms (i.e. using the components of selection, or if you want, sexual selection), for how humans lost our body fur and are now what’s often called ‘the naked ape.’ There are many ways to answer this for full credit as long as you incorporate all the components of selection properly.”

Read each of their answers and find at least one error, mistake, or less-than-ideal part in each one (it must involve evolutionary theory, not grammar, etc.). Then give each one an overall thumbs up or thumbs down.


From a strict (and arguably proper) position, sexual selection only explains sexually dimorphic traits, not things found in both sexes. However, I introduced sexual selection to my intro students as something broader than that (not a practice I plan to continue): as a term to hypothetically explain things like naked human skin or giraffes' long necks if something like runaway selection is invoked, with not just mate preference but maybe just mate recognition. I'm still struggling with how to conceptualize, let alone effectively apply (let alone effectively teach), these theoretical distinctions. Regardless, this opened up a great opportunity to discuss the strict Darwinian definition with my class before they evaluated the answers below.

1.    Hypothesis explaining why humans lost their/our body fur: Cloths were developed enough to the point that humans did not need their body “fur” to rely on for warmth, they used the skin and fur of other animals to keep warm thus eliminating the need for a coat of fur. Over time that trait became less and less necessary for human survival and fitness. Male and females no longer needed their fur to enable their survival. They could reproduce without fur making the fur a diminishing trait among humans, and over time was replaced with a “naked skin” which was kept warm unnaturally through use of clothing and other heat sources.

Let's hope they all see the similarities here with their answers to #4 in Part 1.

2.    Random mutations occur in the genome of every organism. The random mutations could have favored a specific group of people that lived in a warm environment, over time allowing them to lose their fur. This happens because genomes act with the environment that an organism lives with. This eventually created variation among groups; those with less fur in a warm climate were able to live longer because the environment favored them. Food and water may have been scarce and with less fur, these humans were able to release less body heat and hold on to more water, allowing them to survive longer. Humans with less fur could have also camouflaged better which allowed them to be preyed on less. Over time the humans that never lost their fur died off, and only humans with less fur, or no fur were left to mate with one another.

3.    Over time humans have slowly evolved into the modern humans we are today. Sexual selection was the cause for the loss of a lot of the old traits our ancestors had. Over time we continued to reproduce and meiosis and mitosis played a part. Genes our ancestors had didn’t necessarily carry the same genes that their parents carried which is why it wasn’t pass on. We didn’t need body fur the way we were adapting anyways, although still to this day we have hair on our bodies. There are specific genes that get passed on to us from our ancestors and it typically is only the stronger one. If we needed body fur, there wouldn’t have been a change in our genes and we would have continued to reproduce with body fur.

4.    Over time there has been an apparent loss of body fur in humans which can be contributed to natural selection. This “loss” of body fur has become the source for our existence as “the naked ape.” Fur was originally a biological method of protecting the skin against harsh climate, weather, and other outside threats. As alternative methods of protection were developed such as knowledge of constructing shelter and tools, body fur became less vital to survival. Because of variation within a population due to mutations in the genome, some humans were born with more body fur than others. While body fur was previously a necessity for survival and, as a result, passing on of traits, changing environmental situations and social/technological situations no longer demanded the presence of such a trait, and perhaps resulted in the trait as a genetic disadvantage. Those who were genetically less “furry” became more successful in their environment, allowing them to stand up to competition, obtain food, and ultimately mate. Humans with less body fur thrived and survived, meaning they were naturally more fit for the “selection” of their environment, allowing them to pass traits for lesser body fur, which led to a decline in the presence of the fur trait over generations as a result of the natural and sexual success of those with less or no body fur. Those with the trait became less for the environment, so they were less likely to survive to the point or reproducing and without the continual passing on of the trait, it eventually became significantly inconvenient, prevented survival success, and as a results, slowly became uncommon and completely overwhelmed by the presence of those carrying the trait for no body fur.

5.   Humans are the product of evolution over deep time, and took up until 200,000 years ago for us to be anatomically and biologically the way we are today. As time has progressed, so has temperatures. One component of natural selection is adaptation. Humans adapted to the warm climates and saw that the extra hair on their bodies was heating them up quickly. Another proponent is the fact that the sun perhaps started to burn off the hair and our DNA realized that it was no use to generate more hair, due to it being burned off. So it developed another mutation, having the result of less hair. With the bodies of Homo sapiens evolving, so have they psychologically and sexually. Homo sapiens became more “choosier” in their mate and found less hairier bodies to be more attractive. Thus, mating with less hairier mates, producing less hairier offspring.

6.    The Homo sapiens species may have begun to lose body fur in warmer areas such as the Africa regions. The body fur may have held in unneeded warmth which may have led to overheating. The overheating may have damaged the brain or hindered the general efficiency of the individual. It may have been noticeable to females of a group that certain males with less fur were stronger and more fit. This means that those individuals with less hair were more reproductively successful. This mutation of less hair is therefore selected for and inherited by descendents.

7.    Humans lost their body fur due to natural selection. Human fur was naturally unfavor because it no longer serve a purpose. As the earliest humans migrated from rainforest like environment to more savannah related environment, the need for body fur was longer desire or needed. Over time as the earliest humans reside in such climate of savannah their body fur became infested with insects, and since it was no longer purposeful over time body fur slowly became extinct in humans.

8.    A possible explanation of why we lost our body fur can be explained through natural selection. First, there is constantly variation found among individuals as a result of genetic mutations and recombination during meiosis. Perhaps an ancestor received a variation for less fur. This trait in some way helped the individual to successfully live and reproduce; it served as an adaptation for their environment. Maybe this ancestor with less fur could cool more efficiently and not have as great a chance of dying prematurely of heat stroke since this individual has survived, their offspring can inherit this advantageous trait. Due to the force of differential reproduction, more or less individuals could inherit this trait. Perhaps individuals with this trait were able to produce more offspring (say because they are living longer because they aren’t dying from the heat) than their hairier counterparts. This would then likely cause the trait to be passed on in greater numbers. If the trait continued to be advantageous, it would have been passed on and resulted in a common trait of a species, in this case, loss of body fur for humans.

9.   Throughout a species this is always variation. No two organisms are exactly alike and differences occur through combination of genes (as in reproduction) or mutations. During humans lineage, a hominid was born with less body hair than usual. This trait benefited the hominin by keeping cool when it needed to travel long distances for resources. Therefore this trait became adaptive because it helped the hominin live. Since the hominin was able to live, it was also able to reproduce and so the trait was passed on through the generations because the hominin s with less body fur lived longer and were able to reproduce more than hominins who had more body fur. The trait was “selected” because it was beneficial to the species.

When discussed as a class the next meeting, students should be able to tell the difference between selection and drift, the importance of mutation and differential reproduction for both, and issues with using sexual selection as an explanation for non-dimorphic traits. Thanks to help from the accompanying readings (linked), they will hopefully be critical of the agency-infused, and also quasi-Lamarckian, language that they need to play up in #1 and avoid entirely in #2 in Part 3 below. 


Readings to accompany Parts 2 and 3 as homework.


Part 3. Homework, to be submitted and discussed next class

1. Write a scenario for the evolution of either human bipedalism (upright walking and running) or large human brains (encephalization) by making evolution and selection into agents that might have preferences, feelings, thoughts, intentions, decisions, needs, etc…

2. Write a scientific scenario for the evolution of either human bipedalism (upright walking and running) or large human brains (encephalization). You may use a known hypothesis or invent one.

The goal is to have this final question (#2, Part 3) answered exquisitely, that is, scientifically and conservatively: all-natural, all modern synthesis, Darwin-friendly, use/disuse-free, and agency-free.



Thursday, January 30, 2014

Why can we count?

Remember in 4th grade when your teacher tried valiantly to teach your whole class to play the recorder?  Learning the fingering was hard enough, but then you had to learn note values and how to count them, and halves of them, and quarters of them, and rests, and how to keep the beat and so forth.  This is one of the most frustrating aspects of learning music for just about any beginner.  But, by the time students actually enjoy playing their instruments, the whole issue of counting notes, and time between notes and all of that, has become second nature.  They've internalized something that the rest of us never really did.

Recorder; Wikipedia

Our son-in-law Nicola Barbieri, an Italian professional double bass player who plays baroque and classical music, put it this way when I asked him about the experience of keeping time in an ensemble, including about the rubato, an expressive stretching of the beat. I quote him at length because it's such a beautiful and evocative description.
Staying together in synch is not usually a conscious thing and I would say that the more the ensemble we play with is good, or close, or “harmonious”, the less you have to think about that and the less it stays conscious… Or the less you have to want it, I would say, because it happens by “itself”… I wouldn’t say it is something automatic, either, because it is very far from any idea of being something stuck or rigid, but the feeling is more something towards a fluid idea of a continuous chain of perceiving and reacting, detecting and responding, more like a very relaxed dialogue… 
Sometimes, from the top of a bridge you overlook down into the water of the river and you see the flowing of it, the general flowing, you perceive a whole fluid movement of a big mass… But some other times you watch better and you can notice several small currents and streams that seem to have independent “will” from the main one… They seem to slow down and then accelerate and then move sideways, as if a part of the water is going to move away from the rest… That’s just an impression, because all the big mass is still traveling as one... 
This is the flowing of the time, in orchestra and this is the rubato, I would say… Of course in a good group, where nice things happens without effort and with a sort of natural relax (“sprezzatura", we used to call it in the Italian baroque era), like the one of the flowing of a river...
A new paper just published in the Journal of the Royal Society ("Optimal feedback correction in string quartet synchronization," Wing et al.) takes a look at how professional musicians correct lapses in synchronicity. Sensorimotor synchronization happens in many organisms (fireflies that synchronize their pulsing, e.g.), so synchronization is of long-standing interest for a variety of reasons.  Musicians do sometimes have lapses -- indeed, sometimes it's intentional -- but they also are adept at getting back in synch.  How?  What have they internalized?

Wing et al. analyzed two different professional string quartets playing the fourth movement of Haydn's  quartet Op. 74 no. 1.



The question was how the members of each quartet responded to expressive, unrehearsed variations in timing.  Generally, members of a string quartet follow the lead of the first violinist, although with more or less strict adherence to this rule, depending on interpersonal dynamics and the philosophy of the group and so on.  So, correcting timing may be a matter of getting back in synch with the first violinist, or it may be more fluid than that.  In any case, the authors propose a feedback mechanism to correct timing that gets off, a linear phase correction.
Time series analysis of successive tone onset asynchronies [that is, musicians who aren't in time with each other at the start of a tone] was used to estimate correction gains for all pairs of players. On average, both quartets exhibited near-optimal gain. However, individual gains revealed contrasting patterns of adjustment between some pairs of players. In one quartet, the first violinist exhibited less adjustment to the others compared with their adjustment to her. In the second quartet, the levels of correction by the first violinist matched those exhibited by the others. These correction patterns may be seen as reflecting contrasting strategies of first-violin-led autocracy versus democracy. The time series approach we propose affords a sensitive method for investigating subtle contrasts in music ensemble synchronization.
If musicians always played the music in front of them exactly as scored, it could be dull.  And, they aren't automatons -- they hear a piece in their own particular way and want to express what it means to them and they have plenty of freedom to do this, within limits.  We, the audience, give them that freedom, and also adjust whatever internal metronomes we listen to music with (even those of us who failed 4th grade music have developed a sense of timing) to go with the flow of the music we're listening to -- within limits.  This is the rubato.  But there are limits, and apparently they are internalized.

According to Wing et al., musicians use linear phase correction to regain synchronicity, whether with other musicians or with the tick of a metronome.  That is, because the beat has been set previously, by the relationship between note values and the time between notes, they can tell when their count is off and adjust to get back onto the beat.  Musicians learn their skill by spending tens of thousands of hours counting notes; that they internalize the beat, and can correct themselves when they're off, is no surprise.
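For readers who want to see the idea concretely, here is a minimal simulation sketch of linear phase correction between two players. This is my own illustration, not Wing et al.'s model or code, and all of the numbers (period, noise level, correction gains) are invented for the example.

```python
import random

def simulate_duo(n_tones=200, period=0.5, noise_sd=0.01,
                 gain_a=0.25, gain_b=0.25, seed=1):
    """Return the asynchronies (player A onset minus player B onset) over time."""
    random.seed(seed)
    t_a, t_b = 0.0, 0.02          # start slightly out of synch
    asynchronies = []
    for _ in range(n_tones):
        async_ab = t_a - t_b       # positive means A sounded late relative to B
        asynchronies.append(async_ab)
        # Each player schedules the next onset one period later, plus a bit of
        # motor noise, and corrects by a fraction (the gain) of the asynchrony
        # they just heard: A shortens its next interval, B lengthens its own.
        t_a = t_a + period - gain_a * async_ab + random.gauss(0, noise_sd)
        t_b = t_b + period + gain_b * async_ab + random.gauss(0, noise_sd)
    return asynchronies

drift = simulate_duo(gain_a=0.0, gain_b=0.0)     # no correction at all
locked = simulate_duo(gain_a=0.25, gain_b=0.25)  # mutual correction
print(f"final asynchrony without correction: {drift[-1]:+.3f} s")
print(f"final asynchrony with correction:    {locked[-1]:+.3f} s")
```

With both gains set to zero the asynchrony just wanders (a random walk), while modest mutual gains keep it bounded; estimating those gains from real tone-onset time series is essentially what the paper's analysis does.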

It is often said that musicians are good mathematicians and vice versa.  The earliest formal musical theory came from the Pythagorean school of mathematics in ancient Greece, which figured out the mathematics of basic harmonies.  But many musicians never took mathematics, or barely squeaked by in it.  So are they doing time-series math in their heads implicitly?  How?

And try this on for size:



This is an excerpt of an ensemble playing John Adams' "Shaker Loops", with its fiendishly minimalist, relentlessly repetitive and syncopated measures (with endless slight deviations) that go on for around 25 minutes.  We heard an ensemble play this a few years ago at Ithaca College, and a violist friend of ours told us about the stress and horrors of trying to stay in synch.  But they did, beautifully, and in that instance (unlike the YouTube performance) they had no conductor!

In this general vein, what explains this curious phenomenon that I've observed numerous times, after decades of knitting?  Just last night I was casting on stitches to make a scarf.  The pattern called for 85 stitches.  There are too many distractions for me to be able to count stitches as I cast on, so I just take time out to count them a few times as I go along.  Remarkably -- at least I think so -- last night when I stopped to count I had cast on exactly 85 stitches.  And I've had this happen when the pattern called for 285 stitches, too.  No phase correcting there; do I have an internal counter that turns itself on as I start to cast on, and then alerts me when I've met the target number?  If so, though, it seems like a rather frivolous way to spend brain cells, even if useful.

Casting on; Wikipedia

I remember once being at my daughter's youth orchestra rehearsal when one of the violin teachers told me to watch the conductor when he stopped to talk to the players.  "He'll pick up the beat right where he left off," she told me, and indeed he did.  After years of conducting, he had developed an internal metronome that kept on ticking even when he wasn't waving his baton.

And then there's the internal alarm clock that always goes off 2 minutes before the alarm we'd set.  I don't remember the last time I've heard an alarm -- except when I couldn't figure out how to turn the bloody thing off on my phone.

So, this mathematical explanation for musicians correcting themselves when they get off the beat.  It might well be a good description of what happens, but it's not an explanation of what they are doing or feeling. To us, the deeper question is why we're able to do all this counting and time keeping anyway.

It's said that infants can count.  Or at least have numerical awareness.  As do non-human primates, and even dogs. Crows can count, parrots can count to six, and even have a concept of zero, according to at least one source.  Apparently even frogs can count.



So the ability to count must have evolved long before humans. But why?  Or better put, what kind of ability is it really?  It's easy to imagine adaptive scenarios (the duck had to be able to count her ducklings to shepherd any stragglers away from predators, the wolf had to know its pack was intact, and so forth), but, like most such stories, they're impossible to test.

But maybe it's not an adaptation at all, really.  Maybe it's just one of the many ways we take in information about our surroundings, a by-product of there being more than one of any of us, or of any food, or of anything else in our environment.  Maybe recognizing that there are two lions over the crest of the hill is exactly the same as noting their color or even just that they are there.  It's just another observation; when we turn it into a number, that's when it becomes higher math.

Many pre-agricultural cultural groups are reported to have only the numbers one, two, and many in their language, which may be consistent with the idea that formal 'math' and counting are recent cultural add-ons.  It is interesting that math and music were seen by evolution's co-discoverer Alfred Wallace as attributes that could not have evolved because, after all (he argued), our primitive ancestors didn't need them and so could not have been selected for them.   He used this as a reason to invoke the existence of God and human exceptionalism.

We wouldn't go that far, and instead are grateful for music, which, if it is but an evolutionary spandrel, is one of the more beautiful things we do with our ability to count.

Wednesday, January 29, 2014

Age of disease onset is telling us something

Age of onset and what it tells us about disease
We tend to think of diseases as either genetic or not genetic, in the latter case due to infection or self-inflicted exposure to things we know we shouldn't do like smoke or overeat, or just plain bad luck.  Well, except that hundreds of millions of dollars have been spent trying to find genetic causes for these 'not genetic' diseases, to almost no avail.  But maybe there's a way to expand our understanding of causation a bit, by taking into account information we've had for a long time but don't always consider, either because we don't think to or because we don't know what it means.

Many or even most traits, including diseases and disorders but also normal traits, have characteristic age of onset patterns, albeit with more or less variation around the mean.  Cystic fibrosis usually manifests in infancy, while Huntington's disease usually begins to show its effects in middle age, though with a lot of variation.  Most cancers have typical age of onset patterns, whether or not they are familial forms of the disease.  Epilepsy can first occur at any age.  Puberty generally starts at, well, puberty, and menopause in middle age.

And age at onset may well tell us something about causation -- for example, early onset can (though need not) indicate a strong genetic contribution to the disease, as with dementia or colon or breast cancer.  But even diseases associated with clear, strong genetic risk, such as Huntington's disease or familial breast cancer, can have delayed and variable age of onset, for both known and unknown reasons.  These diseases are not congenital, not present at birth, and this means that something has to happen, in addition to the expression of a faulty gene, to cause disease.

Age at onset for most disorders shows 'smooth', regular patterns in populations -- the disorders don't just wait until some age and then pop up.  This regularity suggests a pathogenic process at work, rather than just a 'cause'; it can indicate the extent to which exposure to environmental factors contributes to risk, even when there's a strong genetic component.  The genetic variant doesn't just 'give' you the disease, but instead it affects the pathogenic process.  The figure shows the regular increase, but with details specific to each organ site, for cancers.  Inherited risk typically increases the slope of the risk function, that is, accelerates risk to higher levels more rapidly than in persons who haven't inherited elevated risk.


Cancers strike at different ages, though with a characteristic increase with age (log-log scale). Weiss KM, Chakraborty R, 1989.

The general idea is that the process is in part due to the accumulation of mutations, during life, in the tissue involved.  If the disease requires some number of such 'hits', but you've inherited one or more of them, you need to acquire fewer hits than someone who inherited none, and it thus takes less time for the disease to develop in you.  However, the details show that this is not a totally simple story by any means.
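To make the 'hits' arithmetic concrete, here is a toy sketch. It is my own illustration, not taken from the figure's source, and the per-year hit rate and the number of hits required are invented round numbers: if a cell lineage needs some number of independent rare hits that accumulate with age, inheriting one of them at birth raises risk at every age and pulls the age at which risk becomes appreciable much earlier.

```python
# Toy "multiple hits" model of onset age. Entirely illustrative: the per-year
# hit rate and the number of hits required are invented round numbers.

def cumulative_risk(age, hits_needed, hit_rate_per_year=0.005):
    """Rough probability that a cell lineage has accumulated `hits_needed`
    independent rare hits by `age`, assuming each hit's chance grows roughly
    linearly with age (a small-probability approximation)."""
    chance_of_one_hit = hit_rate_per_year * age
    return chance_of_one_hit ** hits_needed

for age in (20, 40, 60, 80):
    sporadic = cumulative_risk(age, hits_needed=3)   # no inherited hit
    familial = cumulative_risk(age, hits_needed=2)   # one hit inherited at birth
    print(f"age {age:2d}:  sporadic ~ {sporadic:.1e}   familial ~ {familial:.1e}")
```

With these made-up numbers, the carrier of one inherited hit reaches any given level of risk decades earlier than a non-carrier, which is the qualitative point; real risks depend on cell numbers, tissue-specific mutation rates, and much else.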

Retinoblastoma
Retinoblastoma is a childhood tumor of the retina of the eye.  It is found in children with a genetic predisposition due to variants in the RB1 gene, so named because it was discovered in this context, though of course its natural function is not 'for' eye cancer.  Children can be born with one or more tumors in one or both eyes, but the disease also occurs in children without familial risk.  Age at onset is generally before age three, and younger in children with the familial form of the disease.  The onset patterns of this disease prompted Al Knudson, in about 1970, to propose the two-hit model of cancer -- his idea was that tumors develop only when a cell suffers two harmful mutational 'hits'.  In this case, as it turned out, the two hits are in RB1 itself, one in each copy that the fetus carries (one copy inherited from each parent).  After childhood, retinal cells don't grow and aren't really vulnerable to being 'transformed' into cancer precursors.  If you inherit two good copies, it is most unlikely that any single retinal cell will have the bad luck to experience mutational damage to both copies, so spontaneous RB is rare.

But if the fetus has inherited one damaged and one good copy, there are enough millions of retinal cells ('retinoblasts') that it is likely that more than one, and/or at least one in each eye, will suffer this single new hit required to transform the cells.  Since the disease appears more or less at birth, the idea was that this is a two-hit, single-gene type of cancer.  There are other cancers with early childhood onset that seem mainly to involve one gene.  (How just one gene gone awry can lead to something as complex as cancer is generally not known, though there are some ideas.)
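A back-of-the-envelope version of Knudson's reasoning may help here. The numbers below are invented, round figures chosen only to show the orders of magnitude involved, not measured values.

```python
# Toy two-hit arithmetic for retinoblastoma. N and mu are invented round
# numbers used only to illustrate the orders of magnitude involved.

N = 1e7       # retinoblasts at risk during retinal growth (assumed)
mu = 1e-6     # chance that one RB1 copy in a given cell is knocked out (assumed)

# Sporadic case: the same cell must lose both copies independently.
expected_sporadic = N * mu * mu
# Familial case: every cell already carries one broken copy, so one new hit suffices.
expected_familial = N * mu

print(f"expected transformed cells, two good copies inherited: {expected_sporadic:.5f}")
print(f"expected transformed cells, one bad copy inherited:    {expected_familial:.0f}")
```

On these assumptions, a carrier ends up with several independently transformed cells on average (hence multiple tumors, often in both eyes, and early onset), while a non-carrier's expected count is far below one (hence sporadic retinoblastoma is rare), which is the kind of pattern that led to the two-hit model.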

Familial breast and ovarian cancers
By contrast, some variants in the BRCA1 and 2 genes strongly raise the risk of breast and ovarian cancers, and even though age of onset is generally younger in women who inherit one of the risk variants than in women without a clear family history, tumors still don't develop for decades after puberty, when breast tissue begins to grow. This is because it requires numerous mutational changes, either inherited or acquired during a woman's lifetime, before any single breast cell is transformed and then divides and divides, forming a cancer.  Presumably exposure to hormones after menarche, during pregnancy and lactation, which also stimulates division of breast cells, perhaps components of the diet, and other possible risk factors all increase risk, and exposure to these factors varies among at-risk women.  And not all familial cases are early onset -- some only strike their at-risk victims at the more typical ages for non-familial breast cancer.

In addition, though these variants can confer very high lifetime risk, even in women who inherit a seriously damaging BRCA1 or 2 variant this by itself is not enough to cause disease: 10 to 40% or so of women with a variant associated with disease won't in fact have developed breast cancer by age 70. Indeed, BRCA1 and 2 are not cancer genes per se; they don't cause cells to divide uncontrollably.  They are genes that code for proteins that repair damaged DNA, or destroy cells that can't be repaired, preventing them from becoming cancerous.  So, in a very real sense, even if it's genes gone awry that directly lead to cancer, it's environmental exposures -- age of menarche, number of births, breast-feeding, dietary factors -- that one might say enable cancer to arise in these at-risk women.  So life history as well as random exposure factors can affect the age at which such a partially genetic trait can arise.  And cancers are not the only example.

Psychiatric diseases
Many psychiatric diseases -- schizophrenia, bipolar disorder, severe depression and so forth -- characteristically strike in late adolescence, though with a lot of variation in onset age.  Why this is so isn't at all clear.  The genetic contribution to these sorts of traits is not likely to fit a two-hit kind of model -- genetic risk triggered by an interacting environmental factor -- because if genes are an important contributor to these traits, they seem to involve multiple genes, each with small effect, as well as, presumably, environmental factors.  The puberty effect is, one would think, due to changes in gene expression and cell behavior that happen then, but how or why this works is not known.  However, to the extent that genes are involved, the process must be more complex than just mutations in genes, and indeed the search for major relevant genes for many psychiatric diseases has been largely fruitless.

Some, however, such as many severe intellectual impairments that are essentially present from birth, are, not surprisingly, largely due to variants of single genes that affect normal brain development and function.  Again, this is consistent with a process involving a mix of effects in the delayed-onset cases, and a more direct developmental-genetic effect in the cases present at birth.  Supporting this is that in the latter instances we can often find the responsible gene.

Huntington's disease
Huntington's is very clearly a genetic disease, with a known cause -- at least a predominant and highly predictive one -- in the number of 3-nucleotide (CAG) repeat sequences in part of the 'Huntingtin' gene.  The number of copies of this triplet repeat had for some years been thought to be highly predictive both of the occurrence and of the onset age of HD.  The reason was not really understood, but in any case onset usually is after age 40 or 50 -- though it can be as early as infancy or as late as old age -- and with such strongly predictive gene variants it was curious that their effect could take so long to be manifest in a serious way. What process is involved is not yet clear, even though mouse models do exist that can be studied experimentally.

Even more perplexing, recent careful study of more ample data has shown that the onset-age/copy-number relationship is far less clear than had originally been thought, and various modifier genes have been identified that may have an effect.  Still, why the genetic anomaly is present at birth and yet the disease doesn't manifest for decades isn't entirely clear, as far as we know.  Neurological damage of some kind clearly needs to accumulate first.  According to a 5-year-old paper, it may have to do with oxidative stress.
Although no one specific interaction of mutant huntingtin has been suggested to be the pathologic trigger, a large body of evidence suggests that, in both the human condition and in HD mice, oxidative stress may play a role in the pathogenesis of HD. Increased levels of oxidative damage products, including protein nitration, lipid peroxidation, DNA oxidation, and exacerbated lipofuscin accumulation, occur in HD. Strong evidence exists for early oxidative stress in HD, coupled with mitochondrial dysfunction, each exacerbating the other and leading to an energy deficit.    
This is, however, as much guessing as knowledge.  One obvious consideration is whether HD is actually multigenic.  The genomic background -- variation in many other genes -- may affect the speed at which the neural problems arise and become more widespread during the victim's life.  The lifetime risk in those inheriting a damaged Huntingtin gene is manifest in ways that depend on other genes with which huntingtin interacts in the brain.

Not too long ago, type 2 diabetes, the non-insulin-dependent form of the disease, was typically a disease of middle age.  Indeed, it was called 'adult' diabetes.  But once again the water becomes murkier, because even though there may be a strong, if generally unidentified, genetic component, environmental risk factors clearly have a major effect: children are now getting this disease in large numbers, presumably because of the sharp rise in childhood obesity.  Some, at least, of the physiological mechanisms are known.  One can hypothesize that if obesity were controlled, not only would there be a large drop in lifetime risk, but the age of onset would shift back to much later once again.  Those who, while not obese and eating healthfully, still got early type 2 diabetes would be expected to have strong genetic risk factors.  But that's hard to find when so much of our population overeats and under-exercises.

But what about when we can't explain it?
The above are diseases for which age at onset can be fairly convincingly explained, at least in general terms.  A disease can develop in utero or not long after birth because genetic susceptibility is pretty much all it takes, or the disease doesn't develop for decades because environmental exposures must accumulate, or must interact with genetic risk (anomalous protein, anomalous error correcting, and the like).  In such cases, a disease or risk of occurrence may be accelerated by genetic risk, but we think it's not accurate to say it's 'caused' by it.  The genetic variant 'contributes' to the risk process.  Indeed, there are people with 'causal' variants of many even devastating diseases who are disease-free.  How frequent this is is unknown, but accumulating information from the ever-increasing number of whole genomes of healthy people makes it clear that it's not insignificant.  Most of us seem to be 'at risk' of at least one genetic disease we haven't got.  

But, again, there are what seem to be single-gene diseases that are latent until, say, puberty, even though the abnormal protein is present at birth.  The periodic paralyses are an example.  Rare, apparently single-gene disorders, this family of diseases causes attacks of weakness, or partial to total paralysis, under specific, often predictable conditions -- heat, dietary triggers, illness, exercise, rest after exercise and so forth.  We've written before about our theory that the English poet Elizabeth Barrett Browning may have had this disorder.

There are three forms of periodic paralysis: hyperkalemic and hypokalemic periodic paralysis, and Andersen-Tawil syndrome.  All involve disruptions in how cells respond to voltage differentials across the cell membrane, which are determined by potassium and sodium concentrations on either side of the membrane.  Attacks are often sudden and severe, but can be interrupted with an infusion of either potassium or sugar, usually oral, depending on which form of the disorder the person has.

Variants in three different ion channel genes have been linked to these disorders.  Each variant causes the ion channel -- or rather, the millions of copies of the ion channel in skeletal muscle cell membranes throughout the affected person's body -- to respond aberrantly.  Usually a person has only one copy of the defective gene, so that half of the ion channels coded for by this gene would be normal, and half aberrant.

But still, half the ion channels have been defective since birth.  Why do so many people with these disorders not experience weakness or paralysis until puberty?  What is it about being awash in hormones that sets things off?  Age at onset must be telling us something about how this, and other diseases with a similar onset pattern, are triggered.  It's just not clear what.  And, equally perplexing, why do some family members have the presumptive causal variant and never have attacks?  A very similar story can be told about many forms of epilepsy, which involve ion channel genes expressed in neural cells.

And then, what causes puberty?  That is, what triggers the cascade of hormones that results in such dramatic physical growth and change 10 or 12 years after birth -- or, in recent decades, as early as age 8, a sharp decrease that suggests there are at least some environmental triggers involved?  Genes that trigger hormonal secretions have been identified (e.g., the hypothalamic Kiss1 system described here and here), but what triggers the triggers a decade or so after birth?

The age at which a trait or disease is manifest has a lot to say about the relative importance of life experience in the disease.  And this can be true whether or not the trait or disease is thought to be due to a single gene, many genes or no genes.  

Tuesday, January 28, 2014

When good cholesterol isn't

First we were told to lower our cholesterol.  This was back in the 1960's, when the first results of the then major new epidemiological project, the Framingham Heart Study, were released.  If the Framingham Heart Study taught us anything, it was that high cholesterol was a major risk factor for heart disease.  So we all started eating oat bran and granola and eschewing beef.

And then,  in the 80's we were told that it's not all cholesterol we need to be concerned about, that there's a good and a bad cholesterol, and we should be raising one and lowering the other.  How?  Eat healthy -- no eggs, no butter, no red meat.

Egg in a spiral eggcup; Wikimedia

And then it turned out that the people living on the Mediterranean had known all along what eating healthy is -- everything in moderation, except for olive oil and red wine, two foods that we should all be consuming more of.  From the Mayo Clinic:
Key components of the Mediterranean diet

The Mediterranean diet emphasizes:
  • Eating primarily plant-based foods, such as fruits and vegetables, whole grains, legumes and nuts
  • Replacing butter with healthy fats, such as olive oil
  • Using herbs and spices instead of salt to flavor foods
  • Limiting red meat to no more than a few times a month
  • Eating fish and poultry at least twice a week
  • Drinking red wine in moderation (optional) [not clear which is optional here, the red wine or the moderation]
So, follow these new rules and live as long as the Italians do.  Somewhere along the line, though, eggs were taken off the list of forbidden foods, and the usual American replacement for butter, margarine, turned out to contain trans fats, which are bad for you -- and anyway, saturated fats, the stuff that's hard at room temperature like margarine, aren't good for you in any form.

And it turns out it's hard to lower your bad cholesterol with diet.  So, maybe try doing it with drugs.  Statins are good.  Indeed, the more people taking statins the merrier.  But whether statins are lowering all the risky components of LDL is still open to question (e.g., this paper).  Statins are designed to control circulating lipids (fats), which confer heart-disease risk. They inhibit an enzyme called 'HMG-CoA reductase' which is expressed in liver cells as they produce cholesterol from raw ingredients and secrete it into the blood stream. Lower enzyme activity, lower circulating lipids. Whether this is what they are doing is still not entirely clear, however.  There is evidence that statins may be reducing inflammation in irritated arteries and veins, which may be what reduces risk of heart disease rather than any effect on cholesterol.  Perhaps heart disease is an inflammatory process more than one affected by cholesterol levels, after all.

Oh, but then a rather confusing study was published last year, showing that Australian men who substituted polyunsaturated fat for the saturated fat in their diets did in fact lower their LDL, but they were also more likely to die of a heart attack than those who hadn't changed their diets.  Indeed, most people who have heart attacks don't have high LDL.

But ok, assuming the cholesterol model of heart disease, along with lowering LDL, it would make sense to also raise your HDL, the good cholesterol.  But now it turns out that it's possible to have too much of a good thing.  A new paper in Nature Medicine (paywall) reports that while HDL normally should keep arteries clear and protect against heart disease, in arterial walls, HDL acts quite differently from circulating HDL, and can lead to arterial blockage and heart attack.

The BBC reports that the authors say people should still "eat healthily".  But, what this means, when the definition of a healthy diet keeps changing, and today's healthy diet can be the cause of ill health, is not at all clear.

Everything in moderation seems good to go with.

Monday, January 27, 2014

The Mismeasure of Dog

When you see a study that claims to predict behavior variation with cranial variation your anthropology radar might start pinging. If your anthropology radar's especially sensitive, it will ping regardless of the organism. And when that organism is the domestic dog, you might be tempted to ready the torpedoes before you've even read the abstract.

That's because, for one, people often like to compare human races to dog breeds. Conversations rooted in eugenics or racism often involve judgments of blood purity, much as American Kennel Club members and Westminster Dog Show contestants owe their worthiness to their pure bred ancestors. This isn't a perspective that many anthropologists abide.

And, secondly, there's a long history of measuring human heads, divvying them up by race, showing that they're distinct by race, and then explaining differences in behavior at the race-level in the general or specific context of those race-level differences in the head. Another practice that few anthropologists abide.

We're going to cover some ground on human heads before getting to the dogs'. So please... Sit... Stay... And if you do, we'll get to the dogs in just a second.

Brain size, or cranial capacity, has long been a favorite measure of folks interested in human variation. The cephalic index (CI) has too. These have been used in the name of science by racists, racialists, and folks who are neither. CI might be a favorite because not only does it appear to separate race categories, and even populations within them, but it's easily, cheaply, and noninvasively obtained from live humans, while also avoiding phrenological subjectivity.

How phrenology-free is CI? It's not completely clear because, to my knowledge, no one has linked CI to behavior any better than they can with a lumpy left parietal. Remember, however, that many uses of CI have not been, and are not, for explaining behavior. CI is often measured to study human variation between or within populations, and maybe relatedness, and maybe change over time (evolution).

In anthropology, CI involves the ratio of the breadth to the length of the cranium.

Superior view of a human cranium. Red lines show the two parts of the cephalic index (CI = breadth/length x 100). (photo source)
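For concreteness, here is the computation from the caption above as a tiny code sketch, together with the conventional classification terms discussed next. The cutoff values are the commonly cited approximate ones; sources differ a bit on the exact boundaries.

```python
def cephalic_index(breadth, length):
    """Cephalic index: maximum cranial breadth as a percentage of maximum
    cranial length (use the same units for both measurements)."""
    return 100.0 * breadth / length

def classify(ci):
    # Commonly cited approximate cutoffs; exact boundaries vary by source.
    if ci < 75:
        return "dolichocephalic (long, narrow)"
    if ci <= 80:
        return "mesocephalic (intermediate)"
    return "brachycephalic (short, broad)"

ci = cephalic_index(breadth=140, length=180)   # made-up measurements in mm
print(f"CI = {ci:.1f} -> {classify(ci)}")      # CI = 77.8 -> mesocephalic
```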
People even created terms for different CIs: wide, broad skulls are brachycephalic and narrow, long ones are dolichocephalic. CI is influenced greatly by genes (accumulated in ancestry through any number of evolutionary mechanisms). The trait is complex and polygenic, sharing genes that affect other traits. For example, stature is linked to skull length: the taller a person, the more dolichocephalic they might be. Environment also contributes to a person's CI, which might explain apparent population-level changes seen between parents and offspring:

Depicting Franz Boas's CI data collected from 13,000 European-born immigrants and their American-born children in the New York area. (Figure 10.12 from Human Variation: Races, Types, and Ethnic Groups, 6th ed. by Stephen Molnar)
Notice how the changes in the American-born children weren't all the same sort of change. That's probably one of the reasons that Corey Sparks and Richard Jantz wrote "A reassessment of human cranial plasticity: Boas revisited." Boas's big classic dataset had long been used to demonstrate the strength of environmental effects on cranial variation. But in Sparks and Jantz's new analysis of the data, people of similar ancestry have head shapes more similar to one another, whether foreign-born or American-born, than foreign-born people have to American-born people overall; ancestry predicts head shape better than birthplace.

Figure 1 from Sparks and Jantz (2002)
Sparks and Jantz make this point by extending the comparison:
"In America, both Blacks and Whites have experienced significant change in cranial morphology over the past 150 years but have not converged to a common morphology as might be expected if  environmental plasticity plays a major role (29).
We send out our genes wherever our genes may go, and that includes, apparently, how our offspring's adult heads will be shaped, roughly, regardless of where those heads live.

In conclusion, Sparks and Jantz discuss the context a hundred years ago:
"We also must consider the attitude of Boas toward the scientific racism of the day. Evidence of Boas’ disdain for the often typological and racist ideas in anthropology have been reviewed previously (45) and are evident also in his later publications (46–48). Boas’ motives for the immigrant study could have been entwined in his view that the racist and typological nature of early anthropology should end, and his argument for dramatic changes in head form would provide evidence sufficient to cull the typological thinking. We make no claim that Boas made deceptive or ill-contrived conclusions. In
Fig. 1 it is evident that there are differences between American and European-born samples. What we do claim is that when his data are subjected to a modern analysis, they do not support his statements about environmental influence on cranial form."
Acknowledging the strong possibility that genes are more powerful determinants of head shape than environment isn't offensive to many of us today, even those of us who are sensitive about these sorts of data because of their past uses or their potential for future abuses.

But maybe the environment affects different head shape genotypes differently? Why assume that similar environment will result in similar phenotypes if the underlying genotypes are different to start? Isn't the change in phenotype enough to show environment matters enough to consider it important? Perhaps Boas's data aren't sufficient for answering these questions.

And maybe humans just don't have very varied head shapes in the grand scheme of things, in the big picture of whatever "variation" is, and our perception of such typological differences is just what we're good at doing with things like human heads.

But humans and our variation and how we explain and perceive it aren't the main reason we're here today! 

I've introduced human CI, human cranial shape variation in general, and the history of typological race studies linking human head shape and human behavior, because...


One of the main findings of a paper recently published in PLOS ONE is that CI predicts some dog behavior:
link to article
[Now do you see what I meant by my opening paragraph? If not, there's a whole literature out there. Brace's Race is a Four-Letter Word is where you might start.]

For brevity, we're going to ignore the height and weight stuff in this paper and stick to the skull shape parts. But I recommend reading the paper for yourself.

The authors cite some papers to establish that skull shape might be linked to behavior.
One "noted that the morphology of working dogs' heads clustered according to their breed's original purpose. This observation was later supported by a series of studies focused on cephalic index."
"CI is correlated with a tendency for retinal ganglion cells to be concentrated in a form of an area centralis rather than a visual streak. This feature of short-skulled dogs means that they have more visual acuity in the centre of their visual field but less in the periphery."
And this is hypothesized to be linked to their being,
"more likely to follow a human pointing gesture, suggesting that the arrangement of retinal cell may link with aspects of canine social cognition."
Also, they cite a study of dog brain MRIs, linking skull shapes (particularly CI) to,
"progressive pitching of the brains, as well as with a downwards shift in the position of the olfactory lobe." 
This established their premise that differences in head shape reflect differences in brain organization and that,
"CI may be associated with changes in the way dogs perceive stimuli and possibly process information."
It's hard to write this through my insane jealousy, but they traveled to dog shows to collect their metric data. Photographs of the superior view of dogs' heads were taken and then used to measure CI.

They stuck to pure bred animals. Then they looked at a dog behavior survey called C-BARQ to see if CI predicted anything in the same breeds there. The survey's owner-reported, but of course the authors explicitly acknowledge this source of bias that they can't do anything about (except not use it for science).

And lo and behold, CI predicts a few things:

Self-grooming, chasing, dog-directed aggression, allo-grooming, stranger-directed fear, persistent barking, compulsive staring, stealing food. 

Not all of these are exactly the kinds of behaviors that have been used to describe human races and populations, but some are.

Anyway, that's not what I really want to talk about. I want to talk about CI and its use for predicting behavior.

Here's how they measured it in the dogs:


Dog CI: "The length was measured from the fingertip to the tip of the nose, and the width was measured from each zygomatic arch, which was displayed by the tape placed around the widest part of the dog's head." (Figure 1 from McGreevy et al., 2014 with my blue lines added)

That's fine as a measure of something like the head. What're you gonna do otherwise at a dog show?

But how's that getting at brain shape? Remember, if the behavior's at all going to be about inbred biology as the premise of the paper sets us up to ask...

That is, if the behavior's not due to conditioning based on how the animal looks to humans and other environmental factors influencing behavioral development which are rightly acknowledged in the paper...

And if changes in behavior during life don't matter (which I don't believe are acknowledged in the paper but is obvious to anyone who's lived with a dog)...

Then we've got to be getting somehow at genes or brain shape or something with CI in order to use it to predict behavior.

But look at what their CI measures:

Boxer cranium with anthropology's CI (left; compare to human at top of post) and with McGreevy et al.'s CI (right). photo source 
Anthropology's CI gets at brain shape much better than McGreevy et al.'s. In fact, their measure isn't getting at brain shape any more than it's getting at meat helmet thickness (all that space between the zygomatic arches and the braincase) and snout length.

With this metric, the paper could be entirely rewritten to discuss not brain shape but meat helmet size as a predictor of behaviors instead.

There are many ways to take this discussion beyond merely pointing out that this measure of brain shape isn't. And I'll wrap up with a few of those here.

First of all, I'm not an expert on the dog literature, but it did stick out that they didn't cite a nice paper in American Naturalist from just a couple of years ago--"Large-Scale Diversification of Skull Shape in Domestic Dogs: Disparity and Modularity" by Drake and Klingenberg--that lends some support to their approach to dog head variation, by breed, and whether it's linked to behavioral variation by breed:
"The amount of shape variation among domestic dogs far exceeds that in wild species, and it is comparable to the disparity throughout the Carnivora. Moreover, domestic dogs occupy a range of novel shapes outside the domain of wild carnivorans."
Look at all the morphospace that dogs' heads are taking up! (from Drake and Klingenberg, 2010)
But Drake and Klingenberg's study also discovers and raises some really important issues that were not considered by McGreevy et al in PLOS ONE:
"The disparity among companion dogs substantially exceeds that of other classes of breeds, suggesting that relaxed functional demands facilitated diversification. Much of the diversity of dog skull shapes stems from variation between short and elongate skulls and from modularity of the face versus that of the neurocranium. These patterns of integration and modularity apply to variation among individuals and breeds, but they also apply to fluctuating asymmetry, indicating they have a shared developmental basis. These patterns of variation are also found for the wolf and across the Carnivora, suggesting that they existed before the domestication of dogs and are not a result of selective breeding."
So we're left wondering why this paper didn't even attempt to deal with genetics and development, that is, the phylogeny of dog breeds. If head shapes vary according to phylogeny, but within those clusters we see variation in behavior, then there's not such a strong link.

I think that phylogeny's ignored because it's not yet worked out. I just went back into the paper and found they do at least bring it up: "A cluster-based analysis of full genomes of these different breeds may prove helpful in this domain."

It doesn't look like we're very far along in dog breed phylogenetics, according to Parker, Shearin and Ostrander's review of dog genetics (one of my oft go-to pieces for teaching examples of "simple" genetics).

"An unsupervised cluster analysis of dogs and wolves. Using clustering algorithms with more than 43,000 single-nucleotide polymorphisms (SNPs), 85 dogs, representing 85 different breeds, along with 43 wolves from Europe and Asia, were assigned to 2–5 populations (inner circle to outer circle, respectively) based solely on genomic content. Each column represents a single individual divided into colors representing genomic populations. Blue indicates a wolf-specific signature, and red indicates a dog-specific signature. Note that the majority of crossover lies between ancient dog breeds and Chinese or Middle Eastern wolves. Figure originally published in Nature (158)." (Figure 1 from Parker et al., 2010)
That's not resolved well enough to be useful for McGreevy et al. (But I could be behind the times.) And granted, the figure I pasted there is aimed at seeing which dogs are most primitive or most like wolves and, maybe then, most like the earliest dogs. But domestic dog breed relatedness must be better worked out than this, given the things Wisdom Panel (using Ostrander's dog breed markers!) can do to pinpoint the breed ancestry of mutts like my pals Elroy and Murphy. (And here's where I learned from the horse's mouth what this test does.)
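To give a flavor of what "assigned to populations based solely on genomic content" means in practice, here is a toy sketch of unsupervised clustering of individuals from a SNP genotype matrix. The data are simulated, and scikit-learn's KMeans is only a stand-in for the model-based clustering methods such studies actually use.

```python
# Toy illustration of unsupervised clustering of individuals from genotypes.
# The genotype matrix is simulated, and KMeans is only a stand-in for the
# model-based (STRUCTURE-like) clustering such studies typically use.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_snps = 500

# Two made-up source populations with somewhat different allele frequencies.
freq_a = rng.uniform(0.1, 0.9, n_snps)
freq_b = np.clip(freq_a + rng.normal(0, 0.2, n_snps), 0.01, 0.99)

def sample_genotypes(freqs, n_individuals):
    # Genotypes coded as 0/1/2 copies of the reference allele at each SNP.
    return rng.binomial(2, freqs, size=(n_individuals, n_snps))

genotypes = np.vstack([sample_genotypes(freq_a, 40),   # "population A"
                       sample_genotypes(freq_b, 40)])  # "population B"

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(genotypes)
print(labels[:40])   # first 40 individuals mostly share one cluster label
print(labels[40:])   # second 40 mostly share the other
```

The point is simply that, given enough markers, individuals group by shared ancestry without any labels being supplied; resolving a full breed phylogeny from such data is a much harder problem.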

It does make one wonder, though, whether the dogs that have more wolf-like heads behave more like wolves. That kind of evidence would go a long way to support interest in linking domestic breed heads to their non-wolf-like behavior. But it's not in McGreevy's paper as far as I can tell.

But again, if you've got sensitive anthropology radar, it's pinging. First of all, isn't asking which dogs are most like wolves the same as asking which humans are most like chimps? Uh, no. But, yeah, kind of. And I'm in no state of mind to expound on that one here today, given the trolled-up comment thread that would surely ensue.

But, second, it's troublesome that papers with easy measures and hypothetical (at best) or false (at worst) correlations between anatomy and behavior are published so far in advance of the best sort of data to answer their questions (e.g. a fully resolved phylogeny, let alone mapped genes for the brain shape or the genes for the behavior, etc etc...).

A deterministic and scientifically impatient (to be kind) approach to variation is what still haunts us about the history of physical anthropology. Seeing it potentially play out in other organisms like dogs need not raise any of our politically correct hackles, but I hope it arouses our scientifically critical ones.

Friday, January 24, 2014

Colony collapse disorder and TRSV - an answer or just another data point?

A new study of what's killing honey bees, reported a few days ago in the New York Times and elsewhere, suggests that a virus that normally infects plants might be involved.  The virus has apparently mutated and jumped to bees, much like influenza can jump from birds and pigs to humans.

The virus was found by serendipity, in a screen of the viral load of a sample of bees and the pollen they had collected.  Researchers were looking for both rare and frequent viruses and found the tobacco ringspot virus (TRSV), which is presumed to be transmitted via pollen.  The virus presumably works by attacking the bee's nervous system.

It seems that bees pick up TRSV when they forage for pollen. They then may share it with larvae when they feed them "bee bread", a mixture of saliva, nectar and pollen. In addition, mites that feed on the bees may also be part of the chain of transmission, presumably of other infectious agents as well as TRSV.

Tobacco ringspot virus; Forestry Images

Other RNA viruses are involved in colony collapse, but this is the first one that has been found to be transmitted by pollen.  TRSV infects many different species of plant, and can be devastating.
Of a number of plant diseases caused by TRSV, bud blight disease of soybean (Glycine max L.) is the most severe. It is characterized by necrotic ring spots on the foliage, curving of the terminal bud, and rapid wilting and eventual death of the entire plant, resulting in a yield loss of 25 to 100%.
Bees and other pollinators can transmit the virus between plants, but infected seeds are another mode of transmission. The virus has been found throughout the honeybee body, and in an ectoparasite of the bee, Varroa destructor. And it is correlated with winter colony loss. This is the first RNA virus found to infect both plants and animals.
Of ten bee colonies included in this study, six were classified as ... strong colonies and four were classified as weak colonies. Both TRSV and IAPV [Israeli acute paralysis virus] were absent in bees from strong colonies in any month, but both were found in bees from weak colonies. As with other detected viruses, TRSV showed a significant seasonality. The infection rate of TRSV increased from spring (7%) to summer (16.3%) and autumn (18.3%) and peaked in winter (22.5%) before colony collapse. .... The bee populations in weak colonies that had a high level of multiple virus infections began falling rapidly in late fall. All colonies that were classified as strong in this study survived through the cold winter months, while weak colonies perished before February
The authors of this study carefully do not claim to have found the cause of colony collapse disorder.  Instead, they suggest they've found a new mode of transmission of viruses to insects, and further suggest that TRSV is but one possible cause of bee decline.

Fundamental questions remain -- are weak colonies weak for other, unknown reasons, and thus susceptible to viral and parasitic infections, or do viral and parasitic infections weaken colonies?  Can bees and colonies withstand a certain amount of infection, while beyond some threshold further infection or parasite infestation becomes devastating?  Is there still a single cause of colony collapse to be found? 

Wednesday, January 22, 2014

The graveyard shift? You're not kidding!

A BBC report of a new study by sleep researchers suggests that night shift workers have a higher risk of various health problems than do we daytime doodlers: heart attacks, cancers and type 2 diabetes.  This is because the expression patterns of many genes are tied to the day-night cycle, and the 'chrono-chaos' of night work upsets lots of body functions, the story says.

The study, published in the Proceedings of the National Academy of Sciences, found that mistimed sleep significantly disrupted rhythmic gene expression.  Genes affected included those having to do with circadian rhythms, that is, with the maintenance of our sleep/wake cycles.

One can't be totally surprised, although one might expect that the graveyarders would eventually adjust to their inverted day-night cycle and do just fine.  One also has to wonder whether there is something about who chooses night work, or who has no other option, such that nightshifting is a consequence rather than a cause.  In that case, nightshifting would be a marker confounded with the real causes of the health problems, rather than a cause itself.

The point here is a brief one, and one that we and many others have repeatedly made.  If these kinds of variables are not known or taken into account, or if there isn't enough of a given risk factor detectable in the study sample, then attributions of causation for what is measured will be inaccurate or misleading. This is one of the challenges of epidemiological research, including the search for reliable risk factors in the genome.
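To make the confounding worry concrete, here is a toy simulation in Python with entirely invented numbers (nothing here comes from the sleep study or any real data): some background factor pushes people both toward night work and toward disease, and a naive comparison then makes night work look harmful even though, in this pretend world, the shift itself does nothing at all.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical confounder: something (call it "strain") that both pushes
# people toward night work and independently raises disease risk.
strain = rng.binomial(1, 0.3, n)
night_shift = rng.binomial(1, np.where(strain == 1, 0.5, 0.1))
disease = rng.binomial(1, np.where(strain == 1, 0.20, 0.05))  # risk depends only on strain

# Crude comparison that ignores the confounder: night work looks risky.
print("night-shift risk:", disease[night_shift == 1].mean())
print("day-shift risk:  ", disease[night_shift == 0].mean())

# Stratify by the confounder and the apparent effect of night work vanishes.
for s in (0, 1):
    mask = strain == s
    print(f"strain={s}: night", disease[mask & (night_shift == 1)].mean(),
          "vs day", disease[mask & (night_shift == 0)].mean())

Real studies are messier, of course, but this is the basic arithmetic behind the worry.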

Then there is the question, related to the point above, of whether any genetic risk factors lead their bearers to seek night work, and hence appear to be associated with some health outcome only indirectly.  What about variants in the chrono-genes?  Many such questions come to mind.

Inferential chaos?
Maybe, therefore, the real chrono-chaos is a different form of disorder: informational and inferential, a disorder of incorrectly done studies.  As we know, many results of association studies, genetic or otherwise, are not confirmed by attempts to replicate them (and here we're not referring to the notorious failure to report negative results, which exacerbates the problem).  We don't know whether the 'fault' lies in the study design, in the claimed finding of the first study, in other biases, or in plain bad statistical luck.

A piece in Monday's New York Times laments the high fraction of scientific results that are not replicable.  This topic has not gone unnoticed; we've written about different reasons for nonreplicability over the years ourselves.  The degree of confidence in each report as it comes out is thus surprising, unless one thinks in terms of careerism, a 24/7 news cycle and so on.

Tuesday, January 21, 2014

Why do cholla cacti use torture?

Ken and I were in Arizona and thereabouts last week visiting friends.  It was lovely -- warm, sunny, lots of good food and good conversation.  We saw some excellent shows -- the Charles Harbutt exhibit at the Center for Creative Photography in Tucson, the Chihuly exhibit at the Desert Botanic Gardens in Phoenix, Penn and Teller in Las Vegas.

Chihuly glass sculpture, Desert Botanic Gardens, Phoenix

And we had some fine desert hikes.


The variety of cacti is astonishing, at least to a northeasterner.  So many ways to live in the same harsh environment.

Now, everyone warned us not to touch the cholla cactus.  Or the jumping cholla cactus, or the teddybear cholla cactus, all close relatives.  They all have sharp spines, and whole arms of the plant seem to want to leap off the plant and embed themselves in your skin if you get too close.  Is it too much of a stretch to say it's almost as if the plant were striving to do so, like yesterday's bean plants?


Do not touch!

The arms do fall off easily and around the base of many cholla cacti you can see where pieces have fallen, taken root and started a new plant.

So, at the foot of the Superstition Mountains outside of Phoenix, I was walking along, minding my own business, when I lightly brushed up against one of these guys without even realizing it, and suddenly I had a 6-inch piece of cactus biting into the skin just above my elbow.  I only wish I'd had the presence of mind to take a picture, never mind video the whole experience like this guy did.



But I didn't, because it hurt and I just wanted it -- them -- out.  It hurt particularly because each spine has tiny barbs all along the shaft that make pulling it back out really hard.

Wikipedia

I'll spare you the details and just say that, happily, unlike in the video, the piece that attacked me had a stem that we could hold onto and pull.  But you have to really pull.  We have since learned that we should take a comb into the desert -- apparently it's easier to disentangle yourself from one of these things using a comb.  Next time.

But it did make us think.  Why would a plant do this?  This brings us to one of our common topics to write about: the problem of the adaptationist assumption, that everything in nature has to be here because of natural selection and that we can infer the reason for that selection.  At least the second part often seems to be assumed when 'the' explanation is offered.

Here conventional wisdom would probably say the thorns are a defense mechanism.  Once poked, twice shy: animals would shun the cholla like the plague.  But why would plants 'want' to be left alone?  Plants, including cacti, can afford to lose a lot of themselves and still survive.  Why spend energy on growing all these spines?  And many plants build in attractors, not repellers -- flowers, aromas, colors, even hallucinogens or flavors. Being eaten, shaken, browsed, and so on is great for them and their potential to bear offspring.  So maybe that's not the answer.

A second explanation also seems obvious:  so many desert plants have spines that the spines must have something to do with water retention.  Otherwise, if they're just for defense, why aren't all temperate or tropical forest plants spiny?  The preponderance of spininess in the desert almost shouts 'succulence!' at you.  And maybe there aren't even as many animals browsing around in the desert as in rainier forests.

Or maybe it's a self-dispersal mechanism -- stick to a bear's or woolly mammoth's coat and fall where you may.  Cholla blobs that land on the ground take root.   But this doesn't automatically ring true, because the mechanism is such overkill that it's hard to imagine how these spiny blobs could fall out on their own.  So pity the poor javelina who gets one of these in its nose and then tries to paw it out.  Near certain death -- though nice fertilizer for the plant.  Maybe it was planning ahead.  And if dispersal is the selected trick, why are most desert plants so short-spined?  Are those just for protection against animals?

The problem is that there may be no single reason, nor even any single kind of history, involved here.  Maybe all these reasons, and perhaps many others, are or were true in the evolutionary past.  Botanists must have much clearer ideas about this than we do, of course.  But we think this illustrates why the assumption that a trait is 'adaptive'--that is, here ultimately because of natural selection--is hard to prove, and why in particular the reason for the selection is hard to be sure about.