Friday, November 30, 2012

Can we or can't we explain common disease?

Rare variants don't explain disease risk
We're still catching up on readings after a long Thanksgiving weekend, so we're just getting to last week's Science.  Here's a piece that's of interest -- 'Genetic Influences on Disease Remain Hidden,' by Jocelyn Kaiser -- in part because it touches on a subject we often write about here, and in part because it seems to contradict a story getting big press, published in this week's Nature.

Kaiser reports from the Human Genetics meetings in San Francisco that finding genes for common disease is proving to be difficult.  GWAS, it turns out, are finding lots of genes with little effect on disease.  This is of course not news, though the Common Variant/Common Disease hypothesis -- the idea that there would be many common alleles that explain common diseases like heart disease and type 2 diabetes -- died far too slowly given what was obvious from the beginning (it never had a serious rationale, as some of us said clearly at the time, we may not-so-humbly add), and the rare variants hypothesis that replaced it is rather inexplicably still gasping.  Or, as Kaiser writes, "...a popular hypothesis in the field—that the general population carries somewhat rare variants that greatly increase or decrease a person's disease risk—is not yet panning out."

Apparently the idea, then, is that there's still hope.  Indeed, many geneticists believe that larger samples are the answer: studies that include tens or hundreds of thousands of individuals, which in theory would be powerful enough to detect any strong effects rare variants have on disease, explaining the risk in the middle of the graph from the piece, which we reproduce here.  Kaiser cites geneticist Mark McCarthy of the University of Oxford: "We're still in the foothills, really. We need larger sample sizes."  Further, he says, "The view that there would be lots of low frequency variants with really big effects does not look to be well supported at the moment."

Fig from Kaiser. New studies failing to explain the genetics of common disease.  


Even with larger sample sizes, it turns out that some variants are so rare that they're seen only once.  And they probably explain only a small proportion of risk anyway, even in that single individual.  And they certainly can't be used to predict disease.  But this doesn't stop geneticists from wanting to increase sample sizes, at this point usually by doing exome sequencing (sequencing all the exons, or protein-coding regions) of tens of thousands of people and looking for rare variants with large effects.  Ever hopeful.  McCarthy, a decidedly non-disinterested party to any such discussion, is not likely to give up on ever-larger scale operations; that would be research-budget suicide, regardless of the plausibility of the rationales.
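
To make the sample-size argument concrete, here is a rough power sketch -- a toy calculation with invented numbers, not anything from Kaiser's piece.  It asks how often a simple case-control allele-count test would detect, at genome-wide significance, a variant carried by 0.5% of controls that doubles the odds of disease:

```python
# Toy power calculation for detecting one rare variant in a case-control
# comparison; all parameter values are invented for illustration.
from scipy.stats import norm

def power(n_cases, n_controls, p_ctrl, odds_ratio, alpha=5e-8):
    """Approximate power of a two-proportion z-test on allele counts,
    using the conventional genome-wide significance threshold."""
    odds = odds_ratio * p_ctrl / (1 - p_ctrl)
    p_case = odds / (1 + odds)               # case frequency implied by the OR
    n1, n0 = 2 * n_cases, 2 * n_controls     # two alleles per person
    p_bar = (n1 * p_case + n0 * p_ctrl) / (n1 + n0)
    se0 = (p_bar * (1 - p_bar) * (1 / n1 + 1 / n0)) ** 0.5   # SE under the null
    se1 = (p_case * (1 - p_case) / n1 + p_ctrl * (1 - p_ctrl) / n0) ** 0.5
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf((abs(p_case - p_ctrl) - z_crit * se0) / se1)

for n in (5_000, 50_000, 500_000):
    print(f"{n:>7,} cases and controls: power ~ {power(n, n, 0.005, 2.0):.2f}")
```

With 5,000 cases the test almost never succeeds; with 50,000 it almost always does.  That, in a nutshell, is the argument for ever-bigger studies -- though the calculation says nothing about whether such variants are actually out there to be found.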

Rare variants do explain disease risk
Which brings us to the big news story of the week, a paper in Nature by geneticist Josh Akey et al., described in a News piece by Nidhi Subbaraman in the same journal, 'Past 5,000 years prolific for changes to human genome.'  The idea is that the rapid population growth of the last 5,000 years has resulted in many rare genetic variants, because every generation brings new mutations, and that these are the variants that are most likely to be responsible for disease because they haven't yet been weeded out of the population for being deleterious.

The research group sequenced 15,336 genes from 6,515 European Americans and African Americans and determined the age of the 1,146,401 variants they found.  "The average age across all SNVs was 34,200 ± 900 years (±s.d.) in European Americans and 47,600 ± 1,500 years in African Americans..."  They estimated that the large majority of the protein-coding, or exonic, single nucleotide variants (SNVs) "predicted to be deleterious arose in the past 5,000-10,000 years."  Genes known to be associated with disease had more recent variants than did non-disease genes, and European Americans "had an excess of deleterious variants in essential and Mendelian disease genes compared to African Americans..."

They conclude that their "results better delimit the historical details of human protein-coding variation, show the profound effect of recent human history on the burden of deleterious SNVs segregating in contemporary populations, and provide important practical information that can be used to prioritize variants in disease-gene discovery."  Indeed, the proportion of SNVs in genes associated with Mendelian disorders, complex diseases and "essential genes" (those for which mouse knockouts are associated with sterility or death) that were 50,000 to 100,000 years old was higher in European Americans than in African Americans.  The authors propose that this is because these variants are associated with the Out-of-Africa bottleneck as humans migrated into the Middle East and Europe, which "led to less efficient purging of weakly deleterious alleles."

The researchers conclude:
In summary, the spectrum of protein-coding variation is considerably different today compared to what existed as recently as 200 to 400 generations ago. Of the putatively deleterious protein-coding SNVs, 86.4% arose in the last 5,000 to 10,000 years, and they are enriched for mutations of large effect as selection has not had sufficient time to purge them from the population. Thus, it seems likely that rare variants have an important role in heritable phenotypic variation, disease susceptibility and adverse drug responses. In principle, our results provide a framework for developing new methods to prioritize potential disease-causing variants in gene-mapping studies.  More generally, the recent dramatic increase in human population size, resulting in a deluge of rare functionally important variation, has important implications for understanding and predicting current and future patterns of human disease and evolution. For example, the increased mutational capacity of recent human populations has led to a larger burden of Mendelian disorders, increased the allelic and genetic heterogeneity of traits, and may have created a new repository of recently arisen advantageous alleles that adaptive evolution will act upon in subsequent generations.
This does seem to contradict the Kaiser piece we mention above, which concludes that rare variants with large effect will not turn out to explain much common disease.  This paper suggests they will -- which we don't think is right, for reasons we write about all the time.  But it does lend support to the idea that the Common Variant/Common Disease hypothesis is dead and buried. 

Serious questions
It is curious, and serious if true, that Africans harbor fewer rare variants than Eurasians.  African populations have expanded rapidly since agriculture, just as Eurasians have.  It could be that Africa is more dangerous to live in, even for carriers of only mildly harmful variants, but that seems like rather post-hoc rationalizing.  Rapid expansion -- human gene lineages have expanded a million-fold in the last 10,000 years -- will lead to many slightly harmful variants being around at low frequency, because slight effects aren't purged by selection as fast as they are generated in an expanding population.
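
The purging argument can be made concrete with a toy Wright-Fisher simulation -- a minimal sketch with invented parameters, nothing fitted to real data.  Weakly deleterious mutations enter an exponentially growing population every generation, and we ask what the pool of surviving variants looks like at the end:

```python
# Toy simulation: new, weakly deleterious mutations in a growing population.
# All parameters are invented; the point is the qualitative pattern.
import numpy as np

rng = np.random.default_rng(1)
s = 0.005                        # weak selection against each variant
new_per_gen = 200                # new mutations entering per generation
N = 10_000                       # starting diploid population size

freqs = np.array([])             # current frequency of each segregating variant
ages = np.array([], dtype=int)   # generation in which each variant arose

for gen in range(400):           # ~10,000 years at 25 years per generation
    N = min(int(N * 1.02), 5_000_000)              # ~2% growth per generation
    if freqs.size:
        f = freqs * (1 - s) / (1 - s * freqs)      # deterministic selection
        f = rng.binomial(2 * N, f) / (2 * N)       # random drift
        keep = (f > 0) & (f < 1)                   # drop lost and fixed variants
        freqs, ages = f[keep], ages[keep]
    freqs = np.append(freqs, np.full(new_per_gen, 1 / (2 * N)))  # new singletons
    ages = np.append(ages, np.full(new_per_gen, gen))

print(f"{freqs.size} variants still segregating")
print(f"fraction now rare (<0.5%): {np.mean(freqs < 0.005):.0%}")
print(f"fraction that arose in the last 100 generations: {np.mean(ages >= 300):.0%}")
```

Run it and essentially every surviving variant is rare, and the large majority arose recently -- the qualitative pattern the Nature paper reports, generated by nothing more than growth, drift, and weak selection.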

In a sense the deluge has been not of functionally important but of functionally minimal variants.  Maybe the raised probability that a person will carry some combination of such variants matters, and massive samples could find the variants.  But then their individual effects probably aren't worth the cost of finding them, as a rule.

But where's the nod to complexity?
But, environments change, and genes now considered to be deleterious may not have been so in previous environments, or may even have been beneficial.  And African Americans don't represent a random sample from the entire African continent, as their ancestry is predominantly West African, and SNV patterns are likely to differ in different parts of Africa.  And, numerous studies have found that healthy people carry multiple 'deleterious' alleles, so the idea that the 86.4% of putatively deleterious SNVs that arose recently will lead to disease is probably greatly exaggerated.  Geneticists just can't bring themselves to acknowledge that complexity trivializes most individual genetic effects.

The more likely explanation for complex disease continues to be, "It's complex."

Thursday, November 29, 2012

Where should reductionism meet reality?

The dawn of empiricism
The march of modern science began, roughly speaking, about 400 years ago, when observational science replaced more introspective, theoretical attempts to characterize the world.  The idea was that we can't just imagine how the world must be, based on some ancient and respected thinkers like Aristotle (and the 'authority' of the Bible and church).  Instead, we must see what's actually out there in Nature, and try to understand it.

Designs for Leeuwenhoek's microscopes, 1756; Wikipedia
Another aspect of the empirical shift was the realization that Nature seemed to be law-like.  When we understood something, like gravity or planetary motion or geometry, there seemed clearly to be essentially rigid, universal laws that Nature followed, without any exception (other, perhaps, than miracles--whose existence in a way proved the rule by being miracles).

Laws?  Why?
Why do we call these natural patterns 'laws'?  That is a word that basically means rules of acceptable behavior that are specified by a given society.  In science, it means that for whatever reason, the same conditions will generate the same results ever and always and everywhere.  Why is the universe like that?  This is a subject for speculation and philosophers perhaps, because there is no way to prove that such regularities cannot have exceptions.  Nowadays, we just accept that at its rudiments, the material world is law-like.

What it is about existence that makes this the way things are is either trivial (how else could they be?) or so profound and wondrous that we can do no more than assert that this is how we find them to be.  But if the laws are real, and if, as new technologies like telescopes showed, classical thinkers like Aristotle had been wrong to assume that the laws were so intuitive that we could just think about Nature to know them, then we need to find them outside rather than inside our heads.  That way was empiricism.

The idea was that by observing the real world enough times and in enough ways, the ineluctable regular patterns that we could describe with 'laws' could be discovered.  Empirical approaches led to experimental, or controlled, observation, but what should one 'control', and how are observations or experiments to be set up to be informative, so that we could feel we knew enough to formulate the actual laws we sought?  As the history unfolded, the idea grew that the way to see laws of Nature clearly was to reduce things to the fundamental level where the laws took effect.  In turn, this led to the development of chemistry and our current molecular reductionism: if absolutely everything in the material world (the only world that actually exists?) is based on matter and energy, and these are to be understood in their most basic, particulate (or basic wave-like) existence, then every phenomenon in the world must in some sense be predictable from the molecular level.

The alternative was, and remains, the notion that there are immaterial factors that cause things we observe in the material world.  Unless science can even define what that might mean, we must reject it.  We call it mysticism or fantasy.  Of course, there may be material things we don't know about, along with things we're just learning about (like 'dark' matter and energy, or parallel universes), but it is all too easy to invoke them, and almost impossible for that to be more useful than just saying 'God did it' -- useless for science.

If anything, reductionism that assumes that atoms and primal wavelike forces are all that there is could be like saying everything must be explained in terms of whole numbers, assuming that no other kinds of numbers (like, say, 1.005) exist.  But science tries, at least, to explain things as best we can in terms of what we actually know exists, and that, at present, is the best we can do.

But 'observe' at what level?
Ironically, this view of what science is and does runs into trouble of its own.  That is the case for at least two primary reasons.

First, the classical view of things and forces is a deterministic one: if we had perfect measurement, we could make perfect predictions.  Instead, it is possible or even likely that some things are inherently probabilistic: even with perfect observation, we can't make perfect predictions.  In what is not actually a true example, but illustrates the point, even if we know which face of a coin is 'up' when we flip it, we can't predict how the coin will land.  All we can say is something like: half the time it will land with heads up.

There is lots of debate about whether things that seem inherently probabilistic, each event not exactly predictable, merely reflect our ignorance (as they do in coin flipping!), or whether electron or photon behavior really is probabilistic.  At present, it doesn't matter: we can't tell, so we must do our work as if that's the way things are.  One positive thing is that the history of science includes the development of sampling and statistical theories that help us at least understand such phenomena.

But this means that reductionism runs into problems, because if individual events are not predictable, then things of interest that are the result of huge numbers of individually probabilistic events become inherently unpredictable except, at best, in a probabilistic sense, like calling coin-flips.  But with coins we know, or can rather accurately estimate, the probability of the only two possible outcomes (or three, if you want to include landing on the rim).  When there are countless contributing 'flips', so that the result is, for example, the product of billions of molecules buzzing around randomly, we may not know the range of possible outcomes, nor their individual probabilities.  In that case, we can really only make very general, often uselessly vague, predictions.
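
The coin-flip case is worth pinning down, because it shows what the best possible 'prediction' looks like when individual events are random but the outcome space is known.  A minimal sketch, in plain Python:

```python
# With a known two-outcome process we can bound the aggregate very tightly,
# even though no single flip is predictable.
from math import comb

n = 1000
prob = sum(comb(n, k) for k in range(450, 551)) * 0.5 ** n
print(f"P(450 <= heads <= 550 in 1,000 fair flips) = {prob:.4f}")  # ~0.999
```

That near-certainty about the aggregate is available only because we know the outcomes and their probabilities in advance.  For billions of molecular 'flips' we usually know neither, which is why the predictions there remain uselessly vague.
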
Pallet of bricks; Wikipedia

Second, reductionism may not work because even if we could predict where each photon or electron might be, the organization of the trait we're interested in is an 'emergent' phenomenon that simply cannot be explained in terms of the components alone.  A building simply cannot be predicted from itemizing the bricks, steel beams, wires, and glass it is made of.

Complexity in this emergent sense is a problem science is not yet good at explaining -- and this applies to most aspects of life; e.g., we blogged last week about the genius of Bach's music as an emergent trait.  It, too, is something we can't understand by picking it apart, reducing it to single notes.  In a sense, the demand or drive for reductionism is a struggle against any tendency toward mysticism.  We say that yes, things are complicated, but in some way they must be explicable in reductionist terms unless there is a magic wand intervening.  The fundamental laws of Nature must apply!

Herein lies the rub.  Is this view true?  If so, then one must ask whether our current methods, designed basically for reductionist situations, need revision in some way, or whether some entirely new conceptual approach must rise to the challenge of accounting for emergent traits.

This seems to be an unrecognized but currently fundamental issue in the life sciences in several ways, as we'll discuss in a forthcoming post.

Wednesday, November 28, 2012

The more we know, the more complex the web

In light of our recent post on helminth infection and its possible protective effects, Dan Parker, our resident infectious disease expert, alerted us to a new paper (Nov 15, open access) in Malaria Journal from a group in French Guiana on the effects of helminth infection on malaria infection and transmission.  Titled "Helminth-infected patients with malaria: a low profile transmission hub?", the paper proposes that concomitant malaria and helminth infection is a special problem.
Studies in humans have shown increased malaria incidence and prevalence, and a trend for a reduction of symptoms in patients with malaria. This suggests that such patients could possibly be less likely to seek treatment thus carrying malaria parasites and their gametocytes for longer durations, therefore, being a greater potential source of transmission. In addition, in humans, a study showed increased gametocyte carriage, and in an animal model of helminth-malaria co-infection, there was increased malaria transmission. These elements converge towards the hypothesis that patients co-infected with worms and malaria may represent a hub of malaria transmission.  

If this is true, because helminth infection and malaria overlap in much of the world, helminth control could be an important aspect of malarial control (though, does that then set people up for the chronic autoimmune diseases that seem to be associated with helminth control in the industrialized world?). The question of co-infection has not been ignored in the last decade or so, the authors write, but the results have been somewhat confusing. Infection with hookworm, for example, has been associated with increased malarial infection while infection with Ascaris is associated with lower incidence of malaria and decreased severity. Mechanisms have been proposed, such as that anemia increases susceptibility, or that helminth infection modulates immune responses, and differing study designs may account for some of the discrepancy.

But, if co-infection really is a factor in increased transmission and higher prevalence of malaria, this shows yet again the insidious complexity of malarial control.  The new 'longform' digital magazine, Aeon, recently posted an essay by David Barash, evolutionary biologist "and aspiring Buddhist", that, among other things, described a project to understand why in some years trees are more infested with gypsy moths than in others.  Gypsy moths cause great damage to the northeastern forests of the US, but only periodically.

The suspicion was that gypsy moth periodicity was somehow connected with acorn crops, which vary greatly from year to year.  High crop years were associated with low gypsy moth infestations, and vice versa.  High crop years, as it turns out, are also associated with high white-footed mouse populations, as the mice love acorns.  They also love gypsy moth larvae, and feast on both.

But, white-tailed deer also love acorns, and they carry deer ticks, often infected with the bacterium that causes Lyme disease.
What are the practical implications? Foresters might be tempted to try to distribute additional acorns, inhibiting gypsy moth outbreaks in order to improve lumber yields. But this might bring about Lyme disease epidemics: more mice mean fewer gypsy moths, but also more ticks. Alternatively, public health officials who want to reduce Lyme disease might look into various ways of chemically suppressing mast production, which might in turn bring about gypsy moth infestations. Finally, it’s possible that Lyme disease outbreaks might be correlated, oddly enough, with how many acorns are produced that year by the forest.
The interconnected, ecological nature of life is, in part, what makes controlling deadly diseases like malaria, or debilitating helminth infections, so difficult. Each tweak has potential unintended consequences, to some degree because we don't really understand the whole web in the first place. Which only means we need to get better at seeing the whole.

Tuesday, November 27, 2012

"A vast conspiracy of scientists"? Well, it depends.

It is now often said by our nation's ostriches, that is, those afraid of the real world who want to live in  Disneyland, that disturbing ideas such as evolution and climate change are just a vast conspiracy of scientists.  Since this is almost exclusively the territory of 'conservatives,' scientists and liberals have a field day ridiculing them.  And, generally, the ridicule is muted relative to what it perhaps should be.

The idea that science is making all of this up for some dark conspiratorial reason resembles the idea that the Elders of Zion, in some vast but deeply secret conspiracy, control the world and keep wealth in the hands of Jews, or the various similar theories about the Masons, which have been around for a long time.

Anyone can make up a conspiracy theory, since they are inherently unprovable.  If they could be exposed, they could be dealt with, after all.  Secrecy makes alleging them unfalsifiable.  That comforts those who would rather not live in the real world, and who want votes, so long as the 'real world' they offer is what they think their potential voters want -- a cotton-candy land.  Religions, of course, sometimes promise such a place in the afterlife, but based on dogma that challenges the facts of this one.

Those with their minds open know that this conspiracy theory is just plain false.  The 'conservatives' who proffer it are either so ignorant that perhaps they should be institutionalized for everyone's safety, or are immoral demagogues playing the fear card to get votes to advance their own personal careers and those with whom they conspire(!) for power and all that goes with it.

In fact, there is no mechanism by which this sort of quiet, unobservable agreement to hoodwink the world into a false doctrine, as if orchestrated by the devil, could take place in science.  And why would it?  We can't even say that it's because it's good for getting grants, no matter how greedy science is, because any sound theory would get grants.  If it weren't 'evolution', we would still have to explain the panoply of complex living organisms, or the dynamics of the atmosphere.  We would be just as greedy and materialistic, no matter the theory, because we have our egos, our jobs to protect, the equipment-makers and students and technicians who depend on science, and the journals that churn out our results to avid reporters, who exaggerate them for their own livelihood needs.

And yet....

On the other hand
But we do find something interesting and relevant now, and in the entire history of science.  In any era, and any science, there is an accepted view -- call it a 'paradigm' or a 'theory' or whatever else.  Scientists share this view as a rule.  We're taught it in school, and we structure our research around it, to show things it predicts or that are consistent with it.  The grant and publication systems are fitted to it.  Textbooks codify it.  It is, in a rather real sense, a dogma.

It is a dogma because we usually ostracize or ridicule dissenters from it.  We question their ability, sincerity, or even sanity.  Curmudgeonly dissenters, especially those offering some new general theory, are always around.  But they are a threat to orthodoxy just as dissenters are to any tribal identity.  Mostly they are wrong, perhaps, or even on the wacky fringe.  But history shows that some of them have been right, and their ideas are adopted as the new working theory of their age.

It has this attribute in another sense as well.  No one scientist can know all the evidence.  We can try to understand the literature, but we can't re-do everyone's experiments.  We must rely on, yes, faith that what is reported is essentially truthfully reported.  We don't take publications as unchallengeably perfect but we do rely on their honesty.  In that sense, the theories we agree to are those reported in this literature, and we agree for that reason and because the theory fits the data that we do actually and directly know about.  But there is that element of faith--and it's not a 'conspiracy'.

So, in a sense, science does have some of the traits that could be seen, by outsiders whose views or desires are threatened by the theory du jour, as conspiracies.  After all, to them all the scientists in power somehow seem to hold to the same view!  How else could that happen, if they aren't agreeing surreptitiously, by some back-room meeting, on what to say?

Of course that is not what happens, but the thought control that the system does impose has some of the characteristics of a 'conspiracy', the kind of in-group handshake you have to know to be a member of the group.

The difference, which is of course profound, is that the 'conspiracy' of science is vulnerable to the actual facts of the world.  Eventually, if those facts just don't fit the theory of the day, a new theory will arise.  It may have to overcome what can seem like conspiratorial resistance but unlike what is alleged by the conservative movement (a conspiracy itself?), science is in fact done mainly out in the open, for all to see or replicate if they can or want to.

Science may be a dogma to a substantial extent, despite its pious denials to the contrary, but it isn't a secret conspiracy and it does, even if too slowly, have to fit the facts of the world.

Evolution and climate change are based on current theoretical frameworks of science.  They are often held or invoked by scientists who don't know, or don't think about, enough facts to be as critical of elements of the sciences as perhaps they should.  After all, no science now or ever completely explains everything perfectly.

One just has to talk to ordinary citizens who are not involved in science to see how utterly uninformed they often are (this is separate from the astounding fact that apparently nearly half the US population doesn't 'believe' in evolution).  This is an awful failure of our educational system.  One would like to say that we need to have scientists spend much more time educating the public.  Unfortunately, that could lead to a lot of dogma being purveyed, given the sometimes nearly ideological or cardboard view many even professional scientists have of 'evolution', a subject on which we expound regularly.

In any case, scientists should be as careful as we can not to be exclusionary in the way other dogmas are, and to own up willingly to our limitations and to aspects of our science where doubt is really in order.  Our posts routinely criticize the excesses and dogmas that we personally think are too influential these days, and we try to explain why we think as we do.

But areas not yet understood and even the role of dogma in science notwithstanding, the notion that science is a conspiracy in the way that the 'conservatives' allege is a complete fantasy.  Based on the evidence currently available, evolution happened.  Based on the evidence currently available, the earth's climate is warming.  Based on the evidence currently available, human activities are an important contributor.

And, sadly, based on all the evidence, Disneyland is just a theme park.

Monday, November 26, 2012

The beta globin complex and malaria


Blood disorders are a fascinating viewpoint from which to study human microevolution.  Some disorders, perhaps especially hemoglobin S, the abnormal hemoglobin variant responsible for the sickle cell trait and sickle cell disease, represent textbook cases of balancing selection and 'heterozygote advantage'.  Briefly, balancing selection occurs when a polymorphism is maintained in a population at levels higher than expected by chance.  In the case of hemoglobin S, this occurs because heterozygotes have an adaptive advantage over either homozygote.  Homozygotes for hemoglobin S aren't likely to survive long because their blood cells are deformed, leading to various medical problems.  Conversely, heterozygotes have only a mild form of the sickle trait while also being protected against severe malaria.  In malarious regions the balance is therefore between the benefits of being a heterozygote and the severe costs of being a homozygote.
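
For readers who like to see the arithmetic, here is the textbook deterministic version of that balance -- a minimal sketch with invented fitness values, not estimates from real populations.  Writing t for selection against the normal homozygote (malaria) and s for selection against the sickle homozygote (anemia), the equilibrium frequency of the S allele is t/(s + t):

```python
# Classic one-locus heterozygote-advantage model.  The fitness values are
# made up for illustration only.
w_AA, w_AS, w_SS = 0.85, 1.00, 0.15   # relative fitnesses of the genotypes

t = 1 - w_AA / w_AS                   # selection against AA (malaria deaths)
s = 1 - w_SS / w_AS                   # selection against SS (sickle cell disease)
print(f"predicted equilibrium S frequency: {t / (s + t):.2f}")

# Check by iterating the standard recursion for the S-allele frequency q:
q = 0.01
for _ in range(500):
    p = 1 - q
    w_bar = p * p * w_AA + 2 * p * q * w_AS + q * q * w_SS   # mean fitness
    q = q * (q * w_SS + p * w_AS) / w_bar
print(f"the recursion converges to:        {q:.2f}")
```

With these made-up numbers the S allele settles at about 15%, of the same order as frequencies reported for some heavily malarious regions; the point is simply that strong selection against one homozygote can hold a harmful allele at high frequency indefinitely.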

But the complexity of blood and blood-related disorders, and their interactions with malaria parasites, is usually glossed over.  A little background on hemoglobin:

Normal hemoglobin is composed of two α-globin proteins and two β-globin proteins, with each protein capable of transporting oxygen (which binds to each globin's heme) throughout the body.  'Normal' individuals have two copies each of 'α' and 'β' genes, located within the α and β gene complexes, which together contain the genetic information needed to construct the four proteins of each hemoglobin molecule (a tetramer).  The α globin gene complex is located on chromosome 16 while the β globin gene complex is located on chromosome 11, meaning that globin production must be nicely synchronized across two very different genomic locations.

All vertebrates have α and β globins in their blood.  Most fish, amphibians, and reptiles appear to have their globin genes lumped into one cluster, on the same chromosome.  It is thought that, over evolutionary time, gene duplications, changes in regulatory regions, and splitting of the genes onto different chromosomes allowed for the developmental, stage-specific expression of the globin genes we now see in humans.[1,2]
However, not all hemoglobin is created equal.  Adult hemoglobin (as opposed to fetal or embryonic hemoglobin) is heterogeneous, with α chains combining with either β or δ chains (see images below).  Fetal hemoglobin is also heterogeneous, with a mixture of α and γ chains, and with the γ chains being either γG (for glycine at position 136) or γA (for alanine at position 136).    
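
As a quick reference, here are the combinations just described (the standard textbook compositions; the embryonic forms involving the ε chain are left out):

```python
# The hemoglobin tetramers mentioned above: two alpha chains plus two
# beta-type chains in each case.
hemoglobins = {
    "HbA  (adult, major form)": "2 alpha + 2 beta",
    "HbA2 (adult, minor form)": "2 alpha + 2 delta",
    "HbF  (fetal)":             "2 alpha + 2 gamma (either G-gamma or A-gamma)",
}
for name, chains in hemoglobins.items():
    print(f"{name}: {chains}")
```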

For now, I will only focus on the β globin gene complex, which is sandwiched in between some olfactory receptor genes on the short arm of chromosome 11.  Moving left to right, we have a locus control region (which is essential for the expression of genes within the β globin complex), the ε globin gene, two γ globin genes (G and A), the ψβ1 ‘pseudogene’ (which is preserved in most mammals that have been studied), the δ globin gene, and finally, the β globin gene.


Which genes are expressed at which time is heavily controlled by the locus control region, which regulates the transcription levels of genes in the β globin gene complex.  Both human genetic disease studies and deletion studies in mice have shown this region to be imperative for β globin gene expression; however, the exact mechanism through which it promotes expression isn't fully known.  Some studies have indicated that DNase I hypersensitive sites within the locus control region are what control gene exposure to transcription factories, and/or that the locus control region may form a chromatin loop, actually coming into direct contact with the genes within the β globin complex.[3,4]

Regardless of how the process works, there is a symphony-like succession of instrumental players, which work together but are chronologically switched on and off, mostly during the early parts of the lifespan.  Hemoglobin switching is what occurs when one gene largely ceases to be expressed while another's expression is increased.  This switch isn't exactly binary: there are cline-like changes that ultimately shift the proportions of the globins present in hemoglobin.[5]

For example, see the image below.  You can see that embryonic (ε) globin is present very early after conception but largely ceases to be produced after ~6 weeks post-conception.  Adult (β or δ) globin is present very early, but at low levels.  It rises slowly until there is a switch somewhere (probably early) during the first year of life.  At that point fetal (γ) globin has fallen to a proportion of total hemoglobin lower than that of adult hemoglobin, and it continues to decrease.  γ globin will probably continue to be produced, at very low levels, throughout the lifespan of this individual.  Notice that this progression follows the ordering of the genes within the gene complex, which correspond chronologically to developmental stages.

Image adapted from Weatherall and Clegg 2001


There is a lot of natural (harmless) variation in the β globin genes.  But there are also a lot of hemoglobin pathologies associated with the β globin gene complex, especially the β globin gene itself (and to some extent the δ globin gene and the expression of fetal hemoglobin).

For example, if one copy of the β-globin gene isn't expressed, then there is an imbalance in hemoglobin chain production, and the result is β thalassemia.  Most people with β thalassemia continue to produce higher levels of fetal hemoglobin than do people with normal hemoglobin production.  If neither copy of the β-globin gene produces globins, then severe disease begins to manifest during the first year after birth, when fetal hemoglobin has decreased and adult hemoglobin would normally have increased to a high proportion of total hemoglobin.  In the absence of blood transfusions, the outlook for these individuals is pretty grim.  Regular blood transfusions may prolong life.[5]

There are several other things that can go wrong in the β-globin gene.  Hemoglobin S (sickle cell), hemoglobin C, and hemoglobin E, for example, are all related to variation in the β globin gene.  

While all of these hemoglobin disorders are an interesting story in themselves, our evolutionary history may have influenced their distribution both regionally and globally.  β thalassemia, as well as each of the other disorders that I've mentioned here, historically occurs in regions with heavy malaria burdens and appears to have protective benefits, in the heterozygous state, against malaria morbidity or infection.  [There is still debate about how that actually works and I may return to this in a later post.]  β thalassemia, which is quite diverse in its symptoms and genetics, occurs throughout much of Africa, the Mediterranean, Southern and Southeast Asia, and into the Pacific.[6]

But the story doesn't end there.  I mentioned that fetal hemoglobin is frequently present at elevated levels in individuals with β thalassemia.  However, sometimes it is present in adulthood at the same high levels that you would see in the first weeks of life, a condition known as hereditary persistence of fetal hemoglobin (HPFH).  HPFH appears to have no (or at least very minor) side effects, meaning that you could potentially have the genetic susceptibility to severe β thalassemia without ever experiencing it.  The same is true for other β globin disorders.

Some forms of HPFH are the result of deletion of the δ and β globin genes.  However, there is also heterogeneity in HPFH, likely related to different deletions in different populations and potentially to mutations in gene repressors.  Perhaps one of the most interesting things about HPFH, aside from its protective effects against harmful variants in the adult hemoglobin genes, is that malaria parasites appear to have a hell of a time getting into red blood cells containing fetal hemoglobin.[7,8]

Furthermore, while the potential protective effects of β thalassemia itself (with regard to malaria) haven't really come to light, it has been suggested that it is actually the extra production of fetal hemoglobin (especially in early life) that is protective.  Most mortality from malaria occurs in early life.

And to add to the complexity, the aforementioned traits can be inherited together.  Because you have two copies of the β globin gene, you could potentially have both hemoglobin S and β thalassemia.  The compound hemoglobin E plus β thalassemia trait is actually relatively common in Southeast Asia (hemoglobin E doesn't usually have severe side effects).  For that matter, since HPFH can sometimes be the result of over-expression of fetal hemoglobin genes, it's at least possible to have more than two of these hemoglobin disorders at the same time.  And we're only talking about one half of the globin story here; I haven't even discussed the α globin gene complex.  Finally, there are several other blood and blood-related disorders that could be simultaneously inherited: G6PD deficiency, Southeast Asian ovalocytosis, or even complement receptor 1 polymorphisms.  The story with regard to inherited malaria immunity, when you consider all of the potential combinations, is inherently complicated.  And we're not even talking about acquired immunity to malaria here.

After a half century of work on the blood disorders related to this region of the genome, we know quite a lot.  What is clear, however, is that the story is super complex and that there is still a lot that we don't know.  Given the harmful effects of being a homozygote for most of these traits, and the fact that their distribution largely matches historically malarious regions, I'm fairly certain that they are an example of adaptation to a pretty severe environmental stress.  However, I think this is also an example of the complexities inherent in understanding ecology and evolution.  Even in relatively clear-cut cases of human evolution there aren't simple universal fixes to environmental stressors (like malaria), and in fact, simplistic evolutionary explanations aren't likely to be fully correct.

References
1. Hardison R (1998) Hemoglobins from bacteria to man: evolution of different patterns of gene expression. The Journal of experimental biology 201: 1099–1117. 
2. Gillemans N, McMorrow T, Tewari R, Wai AWK, Burgtorf C, et al. (2003) Functional and comparative analysis of globin loci in pufferfish and humans. Blood 101: 2842–2849.  
3. Dean A (2006) On a chromosome far, far away: LCRs and gene expression. Trends in genetics 22: 38–45.  
4. Fleetwood MR, Ho Y, Cooke NE, Liebhaber SA (2012) DNase I hypersensitive site II of the human growth hormone locus control region mediates an essential and distinct long-range enhancer function. The Journal of biological chemistry 287: 25454–25465.
5. Weatherall D, Clegg J (2001) The Thalassaemia Syndromes. 4th ed. Wiley-Blackwell.
6. Williams TN, Weatherall DJ (2012) World distribution, population genetics, and health burden of the hemoglobinopathies. Cold Spring Harbor perspectives in medicine 2: a011692.  
7. Pasvol G, Weatherall DJ, Wilson JM (1977) Effects of foetal haemoglobin on susceptibility of red cells to Plasmodium falciparum. Nature 270: 171–173.
8. Shear H, Grinberg L, Gilman J, Fabry M, Stamatoyannopoulos G, et al. (1998) Transgenic mice expressing human fetal hemoglobin are protected from malaria by a novel mechanism. Blood 92: 2520–2526. 





Friday, November 23, 2012

Thanksgiving poems

A long-time and much-appreciated reader of this blog, Edward Hessler, has kindly included us again in his yearly sharing of a Thanksgiving poem.  When we wrote to thank him, telling him that the poem brought tears to our eyes, he sent another.  Which made us laugh.  He has happily consented to us thanking him here, and sharing his choices.


Perhaps the World Ends Here

The world begins at a kitchen table. No matter what, we must eat to live.

The gifts of earth are brought and prepared, set on the table. So it has been since creation, and it will go on.

We chase chickens or dogs away from it. Babies teethe at the corners. They scrape their knees under it.

It is here that children are given instructions on what it means to be human. We make men at it, we make women.

At this table we gossip, recall enemies and the ghosts of lovers.

Our dreams drink coffee with us as they put their arms around our children. They laugh with us at our poor falling-down selves and as we put ourselves back together once again at the table.

This table has been a house in the rain, an umbrella in the sun.

Wars have begun and ended at this table. It is a place to hide in the shadow of terror. A place to celebrate the terrible victory.

We have given birth on this table, and have prepared our parents for burial here.

At this table we sing with joy, with sorrow. We pray of suffering and remorse. We give thanks.

Perhaps the world will end at the kitchen table, while we are laughing and crying, eating of the last sweet bite.

--Joy Harjo, The Woman Who Fell From the Sky, W. W. Norton & Company, 1994.

Joy Harjo is from Tulsa, Oklahoma, a member of the Muscogee (Creek) nation and of Cherokee descent. She has also played alto saxophone with a band called Poetic Justice. She has played a powerful role in what has been described as the American Indian poetic renaissance. In January of 2013, Harjo will join the faculty of the American Indian Studies Program at the University of Illinois at Urbana-Champaign.



The Turkey Shot Out of the Oven

The turkey shot out of the oven
and rocketed into the air,
it knocked every plate off the table
and partly demolished a chair.

It ricocheted into a corner
and burst with deafening boom,
then splattered all over the kitchen,
completely obscuring the room.

It stuck to the walls and the windows,
it totally coated the floor,
there was turkey attached to the ceiling,
where there'd never been turkey before.

It blanketed every appliance,
it smeared every saucer and bowl,
there wasn't a way I could stop it,
that turkey was out of control.

I scraped and I scrubbed with displeasure,
and thought with chagrin as I mopped,
that I'd never again stuff a turkey
with popcorn that hadn't been popped.


--Jack Prelutsky



Thursday, November 22, 2012

Variations on a theme

It's Thanksgiving here in the US, and so we thought we'd illustrate our usual subject -- complexity --  much more pleasantly today, while we eat turkey rather than complain about the ones that get published.

In that holiday spirit, here's a video depicting Johann Sebastian Bach's "Crab Canon."  We stumbled across it on YouTube; it's attributed to Jos Leys and Xantox.



The mechanics of this canon are beautifully illustrated in this video rendering.  Musical canons come in a variety of forms but are essentially compositions in two or more voices, in which each voice can stand alone, yet all weave together harmoniously.  Bach's Crab Canon, with its double counterpoint, is an example of one of the most challenging canonical structures to compose.  And when it came to canons, Bach was the mad genius at following rules so intricate that they make our ordinary heads spin.  As the video shows, this piece is Möbius strip-like in the way the two voices interweave.  (Indeed, here are instructions for making your own Möbius strip of this piece.)

But we wanted to understand more about canons, and specifically Bach's canons, so we asked our violinist daughter, Amie, some further questions, and she in turn sent us on to her polymath musician friend, Ben Grow.  He's a conductor, composer/arranger, teacher and multi-instrumentalist in New York City.

Ben writes:

"Bach is s a composer who, even during improvisations, could squeeze canons out of even the most unlikely-sounding musical material. Much of his music is imitative even if it isn't described as a canon, so when he set out to write a proper canon his powers were huge. That said, the "Crab Canon" is unusual even for Bach.

"A defining characteristic of canons is that their voices enter at less predictable time intervals than simple rounds and they often enter at intervals besides the unison. Also, canons can be imposed over an independent bass line. The canons of the Goldberg Variations, for example, occur at every diatonic interval through the major 9th and all but one have a separate bass that doesn't participate in the canon (the canon at the 9th is only two voices, so both participate; the bass line outlines the same chord progression as the aria that is used in every movement).



"The 'Crab Canon' is unusual because all the dissonances must be functional in both directions. Because we expect dissonances to resolve a certain way, it's difficult to give them a double role – it's even more difficult to make the thing sound like real music!

"The way a canon works is built on certain laws of tonality (the function of notes in dissonance and consonance), which I guess are slightly different between musical styles/eras, but are generally the same. Certainly, Bach's artistry is what's most impressive about the canon because the form seems academic and dry, but I don't think he stretched the rules. Canons either work or they don't, but the real test is if they sound like music or or something similar. This is where most others would follow the rules of tonality and counterpoint but create something awkward. Bach's mastery is that his ear and taste are completely connected to the more mathematical elements of music – I think his filters only showed him the beautiful options. Since he was such a master improviser the mere time it took to write music down was more than enough to edit the work to perfection. Canons are also rare, and I can't think of other composers of the time who wrote so many separately in addition to fragments nested within bigger works."

So, there are strict rules of musical form, but whether a piece of music works, and works beautifully, does not depend simply on the rules being followed.  Indeed, following the rules is easy.  As Ben says:

"If you'd like to try to write a canon, just write a measure of music into one voice and then copy it a measure later into the second voice. The second measure of the first voice will need to fit over the second voice's first measure (which is the same as the first voice's first measure). You can trial-and-error stitch together a canon this way if you don't have the rules of harmony to guide you – you can just use your ear!"

So I did just that.  I have never composed music of any sort before, so this really was just following the rules to see what happened.  This was not, of course, the experience true composers are said to have, of simply transcribing the music they hear in their heads!  To make it easier, and in keeping with the genetic-y theme of this blog, Amie suggested I use just the letters A, C, G, and T -- though, there not being a note T, I substituted E instead (to stretch this analogy to the breaking point, we could say that's rather like RNA using U, uracil, instead of T.)
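
In code form, Ben's recipe looks something like the sketch below -- a throwaway illustration, with the 'notes' as bare letters and the hard part (writing measures that actually harmonize when overlapped) left entirely to the composer's ear:

```python
# Build a two-voice canon by copying one melody into a second voice,
# delayed by one measure; '-' marks a resting measure.
melody = [
    ["A", "C", "G", "E"],   # measure 1
    ["C", "E", "A", "G"],   # measure 2: must sound good over measure 1
    ["G", "A", "E", "C"],   # measure 3: must sound good over measure 2
]
rest = ["-"] * 4

voice1 = melody + [rest]    # the leader finishes while the follower catches up
voice2 = [rest] + melody    # the follower: the same melody, one measure later

for name, voice in (("voice 1", voice1), ("voice 2", voice2)):
    print(name + ":  " + " | ".join(" ".join(m) for m in voice))
```

Reading the printed voices measure by measure against each other shows the constraint Ben describes: each new measure has to fit over the one before it.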

The result?  The ACGT Canon, composed by me and recorded by Amie.  She decided to play it pizzicato, and here, if you'll forgive the hiss in the recording because she recorded it on her computer, is what we got.



You'll agree that it follows the rules.  But, ahem, you'll also agree that clearly not just anyone could produce the kind of beauty that Bach did, over and over and over; music that transcends time and touches souls.  But if his greatness came not from stretching the rules in ways that others hadn't thought of, but from writing within the rules in ways that others didn't or couldn't, what made him great?  Perhaps that is an emergent property made up of, but not explained by, an assemblage of parts: in Bach's case, the result of a combination of instruments or voices, conception, inspiration and genius, the whole being greater than the sum of these parts.

That is, just like life, it's complex, it's irreducible.  But achingly beautiful.  An organism is an assemblage of atoms rigidly following the universal rules of chemical interaction.  Slap some nucleotides together, following the rules, and you'll get something, but the rules themselves don't tell you what it will be.

These atoms make up gargantuan molecules, like DNA, that, too, interact rigidly following chemical rules.  In turn, massive complexes of DNA, RNA copies, and proteins literally by the thousands interact following physical rules but in locally organized regions, and surrounded by millions of other molecules that make a cell membrane -- again, each rigidly following the rules of molecular interaction.  And if that weren't enough, a human is made of countless billions of cells that interact mechanically.

In all this, there is a chaotic buzz of chance factors, variations on the theme. But it is not chance that organizes these elements into the emergent, unpredictable, awesome, beautifully organized trait that is you -- you who, using your assemblage of countless molecules, can appreciate beautiful music.

Despite our hubris, whether we will ever truly understand how this happens is quite a thought to chew thankfully on.  So, whether it's a holiday for you today or not, we hope you simply enjoy this venture into the beauty of Bach.  Thanks, Ben.

Wednesday, November 21, 2012

Hold the presses! Growth factor genes do NOT affect head shape!

We tend to wax rather cynical about mapping studies (GWAS and the like) that find many miniscule effects and only an occasional major, replicable one.  We feel that this approach has played out (or been spent out) and that better, more thoughtful and focused science is in order these days.

One near-daily occurrence is the breathless announcement of the finding of a gene 'for' some trait or other, be it disease or even whether you voted for Romney or were one of the misguided bearers of the 'liberal' gene.  Extra!  Extra!  Read all about it!  New gene for xxxx discovered!!!

We, too, are guilty!
Well, we must confess:  we are also involved in gene mapping.  Though, in fact, we've confessed this before (here, e.g.), if in an MT sort of way.  In our work, we are using head-shape data on a large set of baboons, deceased members of a huge and completely known genealogy housed in Texas, and on over 1,000 mice from a well-controlled cross between two inbred parental strains.  Now these data sets are about as well-controlled as you can get.  In each case, marker loci across the genome were typed on all animals and a GWAS-like approach was taken to identify markers in genome regions in which variation was associated with various head dimensions (as shown in the figure).  These are then regions that affect the trait, and are worth exploring.

We regularly see proclamations of discoveries of genes 'for' craniofacial shape dimensions.  Two recent papers report effects of Bmp genes, thought to be involved in head development, and many others have reported effects of FGF (fibroblast growth factor) genes on major shape dimensions.  These findings have been so casually, or even perhaps carelessly, accepted as to have become part of the standard lore.

But based on our large, very well-controlled study of two species we beg to differ!

But, no--this time Extra! Extra!  We really mean it: NO!!
After careful examination of signals from our mapping of baboons, and separately of the inbred mouse cross, we have come to the startling finding, with high levels of statistical significance rarely matched by other GWAS on this subject, that neither Bmp nor FGF genes are involved in head-shape development!

Scanning the genome markers separately in both large data sets shows clearly that there is no effect of these genes on any head shape dimension.  None!

In fact, this is not so unusual a finding, as many studies simply fail to find even what were thought to be clear-cut causal genes.  Yet, these findings are neither eagerly sought nor published by Nature, Science, or The New York Times (the leading scientific outlets these days--we do not include People in our list, despite its typically similar level of responsible fact-reporting).

We are using lots of exclamation points, but we are totally serious.  Though we shout from the rooftops that we have found no evidence for these genes' involvement in head shape, we will have no hearers.  It seems just not in anyone's interest to see their previous, hard-won findings debunked. 

Or is there a different kind of lesson here?

What evolution leads us to expect
The lack of reporting of negative results is something of a scandal, because negative evidence tends to undermine previous findings by failing to support them.  That seems important, but even to skeptics like us there are important issues that should be understood.

A widely proclaimed tenet of modern science is called 'falsificationism'. The idea is that all the positive findings in the world don't prove that something is true, but a single failure to find it proves that the idea was just plain wrong.  Just because the sun has risen every day so far does not by itself prove it will rise tomorrow.  Night does not cause day!

But some negative findings don't really falsify previous positive findings.  There can be sampling issues or experimental failure and so on.  So our not finding FGF effects on head shape doesn't falsify previous findings.  We could be wrong.

But even if we are wholly right, as we're pretty sure we are, this does not in any way whatsoever undermine prior findings, which seemed quite solid, that these genes are in fact involved.  This is because standard ideas like falsification are based on a kind of science in which repeat experiments are expected to give statistically similar results: rolling dice, working out planetary orbits or chemical reactions are examples of things that follow natural laws and are reliable.

Here, however, we're dealing with life, and life is not replicable in that sense!  Each sample really is different: different sets of people have different sets of genotypes.  A negative finding is not a refutation of a prior assertion.  Negative findings are expected in this area of science!

The reason we did not find FGF effects in our baboon or mouse studies is not that the FGF genes were uninvolved in head shape during development but that there was no relevant variation in those genes in our particular sample.  Neither of our inbred mouse strains carried variants at those genes that affect head traits.  But they did both use FGF genes in making their own heads, and the intercross animals did, too.

Even if a gene is involved in a trait in a functional way, there is no reason to expect that the same gene varies in a given study at all, or enough to have a detectable effect.  Indeed, sometimes such genes are so central that they aren't free to vary without deleterious or fatal consequences.  It's one of the problems of mapping that it is not a direct search for function. Instead it is just a search for variation that may lead us to function. If by bad luck our sample has no variation in a particular gene, it can't lead us to those genes by case-control kinds of statistical comparisons.
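
Here's the point as a toy simulation -- entirely made-up numbers, just to show the logic.  'Gene A' below genuinely builds the trait but happens to be monomorphic in the sample; 'gene B' has only a small effect but varies, so it is the only one an association test can see:

```python
# Mapping finds variation, not function: a monomorphic gene is invisible
# to association tests no matter how important it is developmentally.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
n = 1000
geno_A = np.zeros(n)                 # functional gene, but no variation here
geno_B = rng.binomial(2, 0.3, n)     # variable gene with a small effect

# Gene A's term is constant in this sample, so it adds nothing to the variance.
trait = 5.0 * geno_A + 0.3 * geno_B + rng.normal(0, 1, n)

for name, g in (("gene A", geno_A), ("gene B", geno_B)):
    if g.std() == 0:
        print(f"{name}: monomorphic in sample -- no association test possible")
    else:
        r, p = pearsonr(g, trait)
        print(f"{name}: r = {r:.2f}, p = {p:.1e}")
```

Gene A is in the developmental 'program' but contributes no variance, so no case-control or regression comparison can point to it.  That is the sense in which our negative findings leave the developmental role of the FGF and Bmp genes untouched.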

Dang it!  Despite how important and under-appreciated these points are, this means that our negative findings aren't really negative, and won't be of any interest to the Times.

However, this doesn't mean that mindless GWASing is OK after all.  We've explained our view on this many times before.

But it does mean we need to contact the Editor at People.

Tuesday, November 20, 2012

Not much more than an excuse to see an amazing goal

If you want to maximize your chances of success today (unless you're competing in sports), you'd better banish the color red from anywhere around you, because red is associated with failure.  These results are reported in a paper published in 2007 ("Color and Psychological Functioning: The Effect of Red on Performance Attainment," Elliot et al.), but we just heard about them on a repeat of the Nov 16 BBC radio program, "The Why Factor," on which a panel discusses the cultural meanings of the color blue.

From the paper abstract:
Red is hypothesized to impair performance on achievement tasks, because red is associated  with the danger of failure in achievement contexts and evokes avoidance motivation. Four experiments demonstrate that the brief perception of red prior to an important test (e.g., an IQ test) impairs performance, and this effect appears to take place outside of participants’  conscious awareness. Two further experiments establish the link between red and avoidance motivation as indicated by behavioral (i.e., task choice) and psychophysiological (i.e., cortical activation) measures. The findings suggest that care must be taken in how red is used in achievement contexts and illustrate how color can act as a subtle environmental cue that has important influences on behavior.
Presumably this is known to psychologists, but we didn't know it -- there's a lot of empirical research done on the effects of color on performance and productivity, and some older theoretical work on color and psychological functioning.  A 1942 paper described physiological effects of certain colors in psychiatric patients, with, e.g., red and yellow being experienced as "stimulating and disagreeable" and serving to "focus the individual on the outward environment," whereas green and blue are "quieting and agreeable" and focus individuals inward.

Apparently not much work was done on this between then and the work reported in 2007, and replicated a number of times since then (though much of it by Elliot and colleagues).  Various hypotheses about the effect of color on mood had been proffered, but based, according to Elliot et al., on suspect premises and not rigorously tested.  Naturally enough, people have tried to find differential effects of pink vs blue, but with no significant results.  So, in 2007, the time was ripe for a rigorous test of the effect of color on performance.  (The things people can get funding for.)

The idea they tested, in six different experiments, was that red is culturally associated with danger -- stoplights, fire alarms (fire alarms?), and warning signs -- and red marks on school papers are associated with the psychological danger of failing, so that exposure to red will impair performance.  The idea that the association becomes causation seems a bit of a stretch, and even a logical fallacy, but apparently the psychological literature is replete with examples of just this: exposure to a stimulus associated with failure evokes the motivation to avoid failure, which produces anxiety, which impairs performance -- i.e., causes failure.  And of course this is all subconscious.

Well, so much for logical fallacies, apparently.  The experiments, described at length in the paper, "provide strong support for our hypothesized effect of red on performance."  Not only did the authors measure the effect of color on achievement test results, but, with EEG, they also found that different parts of the brain were activated after red vs green exposure.

On the 20-question IQ test, the 15 participants in the red conditions did significantly worse than those in the green conditions; the former got an average of about 10 questions correct, while the latter averaged more than 13.  The same basic trend was observed in each of the experiments.  Not, it must be said, wildly different, but a trend.  Elliot et al. suggest that their findings have social consequences, as achievement tests are "filtering devices in society."  They propose that cultural influences on achievement, such as our response to color, are profound.
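Whether a roughly three-point gap between 15-person groups is statistically meaningful depends heavily on the spread of the scores, which isn't given here.  A quick, purely illustrative sketch in Python (the standard deviations below are our own guesses, not the paper's numbers) makes the point:

```python
from scipy.stats import ttest_ind_from_stats

# Hypothetical standard deviations -- the paper's spread isn't reported here.
for assumed_sd in (2.0, 3.0, 5.0):
    res = ttest_ind_from_stats(mean1=10.0, std1=assumed_sd, nobs1=15,
                               mean2=13.0, std2=assumed_sd, nobs2=15)
    print(f"assumed SD = {assumed_sd}: t = {res.statistic:.2f}, "
          f"p = {res.pvalue:.3f}")
# An SD of 2 or 3 makes the 3-point gap look significant; an SD of 5 doesn't.
```

With a tight spread the gap looks convincing; with a wide one it doesn't, which is worth keeping in mind for what follows.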

If these results are true -- and it's hard to tell from the paper whether the effect on IQ scores themselves would be significant -- and if IQ scores are so readily influenced by psychological factors, then what are these tests actually measuring?  Most arguments about IQ have to do with 'racial disparities' -- differences in mean scores by race, and whether they are innate or cultural -- but if, e.g., a test proctor is wearing red, and red does have an effect, it would affect everyone in the room, not just one race.  That is, everyone from a culture for whom red is a danger signal.  If the effect is real -- and even if it is, the explanation offered in the Elliot paper is not necessarily correct -- these results would point to an inherent instability in IQ scores due to unmeasured confounders, whether or not you believe IQ itself is real.  This, of course, is something that plagues many, many types of studies.

The persistence of gene-worship in the face of these facts is somewhat striking.  A gene devotee would say that we already knew that not all of a trait is genetic: the 'heritability,' or fraction of the variation that is due to genetic variation, is usually around 40%.  That doesn't mean a particular genotype accurately predicts a particular trait, like IQ; it means there are other factors.  Perhaps seeing red is one of them.  This should pull some of the obsession away from genetics, because we don't know these factors well, there are many of them, and, unlike genes, they may change rapidly in our culture.  And because genetic effects are mediated by these other, changeable factors, the predictive power of genes changes correspondingly.
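A toy variance decomposition (entirely our own illustration, with no real data behind it) shows the arithmetic: with heritability around 40%, even a perfect genetic predictor leaves the majority of trait variation to those other factors.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Simulate a trait whose variation is 40% genetic and 60% everything else.
g = rng.normal(0.0, np.sqrt(0.4), n)   # genetic contribution
e = rng.normal(0.0, np.sqrt(0.6), n)   # environment, culture, chance...
trait = g + e

h2 = np.var(g) / np.var(trait)
print(f"heritability: {h2:.2f}")       # ~0.40

# Even knowing each person's full genetic value exactly, most individual
# variation in the trait remains unexplained.
unexplained = np.var(trait - g) / np.var(trait)
print(f"variance left unexplained by genes: {unexplained:.2f}")   # ~0.60
```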

In any case, a quick review of the literature suggests that in sports red is associated with success -- apparently this is well known.  In competitive sports, that is: red swimsuits are good, as are red soccer shirts, red t-shirts for any sport.  Red either makes you win or makes your opponent lose, depending on your interpretation of the studies.  If the latter, then clearly the competitor sporting red just can't look down.

Hey, it worked when Ibrahimovic was playing for Barcelona!

Zlatan Ibrahimovic, when he played for Barcelona

But here, Ibrahimovic is in gold, and the goalie in red... (and, by the way, he's scoring what some have said is the best goal ever.)



Enough said?


Monday, November 19, 2012

"Evidence" swamped by human foibles -- again

Here we are in the age of "evidence-based medicine," when medical practice is supposed to be standardized and based on empirical results.  So how is the kind of story in yesterday's New York Times still possible?  In 'How Back Pain Turned Ugly,' Elisabeth Rosenthal tells how a batch of injectable steroids destined to treat chronic lower back pain ended up killing 32 people and injuring 438.  And even on a good day, it's not clear how many patients this treatment would have helped.

There are numerous aspects of this story that just shouldn't have happened.  Of course, the malfeasance at the compounding facility, the New England Compounding Center, is number one.  They were so lax in their quality oversight, Rosenthal writes, that some of the steroid vials they shipped to clinics were contaminated enough that a visible white fuzz was floating in them.  If this is true, it is a tragic example of people seeing only what they expect to see.  The evidence that this was a tainted drug was right before the eyes of the medical personnel who took the vials out of their refrigerators and filled syringes with the contaminated stuff, but they didn't see it.  Too often, what constitutes 'evidence' is what we expect to see.

Source: forgetomori.com

This picture of a hamburger, for example, is made up of shades of red and grey, but if you aren't color blind you are seeing a red tomato, green lettuce, yellow cheese and a beige-ish bun.  Those colors aren't actually on your screen; your brain is filling them in.  This is a point made by Edwin Land, founder of Polaroid.

In the clinic, a doctor will see sterile medicine when that's what she or he expects.  (Yes, experimental results can get interpreted in this way, too -- as can projected election winners, and, say, whether Fox News should have called Ohio for Obama so early in the evening of Nov 6.)

A second interesting aspect of the back pain story is that, speaking of shades of grey, how to treat back pain is still a grey area in medicine.  When should it be treated, and with surgery or with steroid injections?  Doctors pretty much agree that the pain should have lasted 4-6 weeks before any treatment.  The evidence?  As Rosenthal says,
... steroid shots are not a cure-all, even for the conditions for which doctors agree an attempt is worthwhile: low back pain accompanied by signs of nerve injury like tingling or weakness in a leg. One-third of such patients will get better, one-third will show some improvement and some will show no improvement at all...
This is just the kind of situation we write about all the time here on MT with respect to prediction of disease from genes; just as every genome is unique, every case of lower back pain is unique, because every person has a different history of back injury, stresses, muscle strength, pain tolerance, and so forth, making it very difficult if not impossible to predict who will respond well to which treatment.  Thus the probability of getting better is about 1/3, but so is the probability of no improvement at all.  And the probabilities are likely to be much the same with no treatment whatsoever.

Similarly, genetics wrestles with the degree to which a given genotype predicts 'the' trait, and with trying to specify some probability of that.  But if the trait is itself variable, perhaps the issue would be to predict, say, a range of severity or manifestation.  That, though, is a very slippery, often self-confirming way to think, given that we can imagine symptoms if we expect them, or can force a diagnosis we want to see.

And this is just like trying to predict who will have a heart attack, a stroke, develop type 2 diabetes, or Alzheimer's disease, based on the best evidence we have -- genetic indicators and lifestyle risk factors.  That is, we're not yet very good at it except when very strong risk factors are involved.  Indeed, even one of the strongest environmental risk factors we know of, smoking, isn't all that useful in predicting on an individual level who will or won't get sick from it.

"Evidence-based" assessment and treatment of back pain shares something else with genetics.  And that is that it's equally confounded by, entangled with and driven by financial or other material or career or professional kinds of interests considerations. As Rosenthal writes:
The shots — which may include a steroid and an anesthetic — are often dispensed at for-profit pain clinics owned by the physicians holding the needle. “There’s a lot of concern about perverse financial incentive,” Dr. Friedly added.
The increase in treatment has not led to less pain over all, researchers say, and is a huge expense at a time of runaway health costs. “There are lots of places doing lots of injections for conditions that haven’t been shown to benefit,” says Dr. Janna Friedly, a researcher at the University of Washington, who added, “Sadly, some of the patients who got meningitis were probably in that category — they did not have conditions where steroid injections were indicated.”
And when that's true, it takes a lot of evidence to show that a technology or treatment that a lot of people have invested in really does not work, and shouldn't be done.  Sounds rather reminiscent of direct-to-consumer genetic prediction companies to us.  Indeed, Rosenthal says that "studies are at best inconclusive about exactly which groups of back-pain patients are likely to benefit from steroid shots."  Studies. Evidence. Murky conclusions.  

What does the evidence unequivocally show?  That medical practice is just as confounded by human foibles, including greed, as genetic research is, and that complexity makes evidence very hard to interpret.

Friday, November 16, 2012

ASTMH 2012


I’m just back from the American Society of Tropical Medicine and Hygiene (ASTMH) 2012 meeting in Atlanta, Georgia, and once again my brain is swimming with the vast amount of information I’ve taken in.  The common theme to these meetings concerns diseases, mostly infectious, that afflict the tropical and subtropical regions of the world.  Attendees consist of medical doctors, veterinarians, disease modelers, public and global health workers, nurses, other health care workers, and social scientists, and they (we) come from all over the world.  And there appears to be a healthy mixture of students, professionals, and well-seasoned researchers.

I enjoy this meeting because many researchers and research teams present cutting edge research that I end up reading about in major journals for the rest of the year and beyond.  I was only there for two days this time around, but there were several really interesting talks to mention.

There was a lot of buzz about disappointing results with the new malaria vaccine (RTS,S), some disappointing results with a dengue vaccine (it is not easily administered and doesn’t work for all strains), and decreasing sensitivity to artemisinins (blogged about here) in Africa, South America (Suriname), and some new places in Southeast Asia (Vietnam and Myanmar).  And there is debate about the best way to approach global health efforts.

One symposium that I found particularly interesting concerned house architecture and vector control.  Essentially, some styles of housing are more prone to letting mosquitoes in than others.  For example, many of the houses in Southeast Asia are built on stilts, and many of the important vectors prefer to fly low to the ground.  In the Thai villages where I work, people frequently keep livestock under their houses too, and these other large, warm-blooded mammals may also act as a diversion for blood-feeding insects that favor large mammals.


Another important consideration with regard to housing and diseases concerns the comfort level of houses.  A frequently mentioned reason for not using bed nets is that they cut down on air flow at night.  I can attest that trying to sleep in the already stuffy tropics can be difficult in the absence of bed nets, let alone with them.  

Another very interesting symposium discussed some historical correlations between (frequently illegal) mining areas and malaria.  Mining areas tend to be rife with disease, perhaps especially mosquito-borne disease.  These are places that have plenty of standing puddles of water (sometimes from the actual mining efforts), poor living conditions, poor sanitation, and potentially a high proportion of people who, for a variety of reasons, aren’t likely to seek and follow through with adequate medical treatment.  The symposium presenters discussed similar situations in Southeast Asia, South America, and Africa.  Anthropologists have long discussed the influence of land-use patterns on human morbidity and mortality, but I think this area of research is ripe for in-depth investigation.

One of the problems with developing a vaccine for dengue is the lack of a good animal model of the disease.  Several decades ago, however, there was no such problem: medical researchers had ready access to actual human models in the form of ‘volunteers’ (frequently from prisons!).

Dr. Albert Sabin, who developed the oral, live attenuated polio vaccine, also worked toward the development of a dengue vaccine.  He infected over a hundred volunteers, kept track of their symptoms (including the timing of fevers), and then later re-infected them with another strain in order to look for immunity (strains DENV-1 and DENV-2).  He wrote a famous paper on his results in 1952, but the details of his experiments weren’t made public.[1]  However, he bequeathed his lab notebook(s) to Duane J. Gubler at the Duke-NUS Graduate Medical School in Singapore, and some of the results (including some truly fascinating hand-drawn plots) were presented at the meeting by Gubler and several colleagues.

One surprising finding was that none of the volunteers developed dengue hemorrhagic fever upon subsequent infection.  People who have been infected by one of the four strains of dengue appear to have some immunity.  However, that immunity is strain-specific, and reinfection with another strain is associated with much more severe symptoms, including dengue hemorrhagic fever and dengue shock syndrome.  For some reason, the volunteers in Sabin’s experiments didn’t develop these more severe symptoms.  (The presenter also noted that the vast majority of volunteers were middle-aged white males.  It could be that these strains don’t act as the others do, that there are some important human host factors, or a variety of other possibilities.)  Sabin’s research also indicated that cross-strain immunity appears to wane quickly (8-10 weeks).

And one last symposium that I’ll mention was titled “Adventures in Tropical Dermatology”.  I learned a few things from this discussion, all of which I’m glad I learned in a nice hotel in Atlanta, Georgia rather than first-hand in the field.  One of the more surprising lessons concerned lobomycosis, a pretty terrible-looking skin disease that humans can contract from dolphins.  I’ll spare you the nasty pictures, but a simple Google image search will turn up plenty.  So if you’ve ever had dreams of swimming with dolphins, perhaps especially in Latin America, you might reconsider…

Monday night there was a discussion by Michael Alpers, one of the researchers who discovered kuru, a severe brain disease resulting from exposure to infectious prions.  Prions are misfolded proteins, and prion infection seems to occur when these misfolded proteins come into contact with normal proteins, leading those proteins to misfold as well.  The result is porous neural tissue that resembles Swiss cheese.  Kuru has an extremely long incubation period, taking many years (5-20) to actually set in.  The scientists who discovered prions had to be extremely patient in their research, and had to persistently put their bold ideas about a new pathogenic agent in front of a frequently harsh scientific audience.  Alpers noted that Prusiner (who proposed the prion hypothesis) received a lot of abuse from scientific audiences when he began talking about the potential existence of prions.

While prion infection is a widely accepted concept today, there is still a whole lot we don’t understand.  I think it’s relatively safe to say the same about most diseases, even the ones we’ve known about for centuries.  

My takeaway lessons from ASTMH 2012 are as follows:

  • Universal fixes to diseases almost always remain elusive
  • Long-term fixes for most any disease are probably only going to come from multiple, diverse approaches
    • This means that collaboration across sub-disciplines and fields is extremely important!
    • And many approaches will need to be place-specific
  • While we might be able to eradicate diseases from some areas, it is important to have community involvement and to address more distal, upstream socio-economic and political issues so that those diseases don't simply return
  • Disease ecology, like most of life, is extremely complex

***I have clearly glossed over a LOT of other really good symposiums, posters, and talks***

1. Sabin AB (1952) Research on dengue during World War II. American Journal of Tropical Medicine and Hygiene 1: 30-50.




Thursday, November 15, 2012

Treating autoimmune disease the low-tech way?

Helminths and asthma
A map of asthma prevalence around the world shows that it is higher in industrialized parts of the world: higher in urban than in rural areas, including in urban Africa and South America; higher in what was West Germany than in what was East Germany after the wall came down; higher in temperate zones than in the tropics; and so on.  The question of why has been the subject of much research, much of it focusing, at least in tropical regions, on the lowly helminth -- parasitic worms that infect the gut of a high fraction of rural children living in poverty.  The generic explanation for this has been the 'hygiene hypothesis,' which we wrote about here; basically, too much cleanliness can be a very bad thing.

Worldwide prevalence of asthma; from 'Global Burden of Asthma,' 2004


So, what's the mechanism that could explain the benefit of chronic helminthic infection?  The idea is that the parasites may suppress allergic inflammation, thus protecting against asthma, and indeed against the allergy often associated with it.

A 2002 paper in The Journal of Translational Immunology, e.g., suggests:
There is good evidence that the expression of inflammation caused by helminth infections can be modulated by the host immune response, and that the failure of the expression of similar mechanisms among individuals predisposed to allergy may be responsible for the clinical expression of allergic disease. Further, there is accumulating evidence that helminth infections, particularly those caused by intestinal helminth parasites (or geohelminths) may be capable of modulating the expression of allergic disease.
Helminths and autoimmune disease in general
It turns out that a map of the prevalence of any autoimmune disease around the world would show much the same trend as that of asthma -- higher prevalence in richer countries than in poorer ones -- and presumably this is a true effect, not simply ascertainment bias due to poor access to health care in poorer parts of the world.  Thus, the same question has been asked of other autoimmune 'diseases of westernization' -- inflammatory bowel diseases such as Crohn's, rheumatoid arthritis, type 1 diabetes and multiple sclerosis.  There are even suggestions that perhaps a third of the cases of autism could be due to autoimmune disease, as described in this piece in The New York Times in August.  Could helminth infection be protective?  Many studies of preventing or treating these diseases with infection have been reported in mouse models, and a few in humans, including some self-experimentation; in many of them, infection prevented disease entirely or alleviated symptoms (here's a pretty extensive table of the studies that have been done, in Parasitology Research Monographs).

Now a piece in Nature ("Autoimmunity: the worm returns") reports the work of a gastroenterologist endeavoring to determine the effects of helminth infection specifically on people with inflammatory bowel disease and multiple sclerosis.  The author, Joel Weinstock, has worked for decades on inflammatory bowel disease, long wondering why it has become so prevalent in the last century.  He also knows his parasites, so thinking about the possible connection between the elimination of parasite infections and disease was not at all far-fetched.

Of course, as Weinstock also points out, parasite infections can have disastrous consequences -- damaging the liver, the bladder, or eyesight, e.g. -- so he had to proceed with caution.  But, as he describes, history and the map of the US seem to lend support to the idea of too much hygiene being a dangerous thing, so this was an insight he couldn't not pursue.
In the United States and Europe, Crohn's disease first emerged in affluent populations living in hygienic conditions in the more northerly latitudes, where colder temperatures are less hospitable to soil-borne helminths. One of the last US groups to present with Crohn's disease was African Americans, who are, on average, poorer than their white counterparts. Similarly, in Europe, autoimmune diseases are more common in the richer Western Europe than in Eastern Europe.
Today, Native American reservations, which have relatively high rates of infection with parasitic worms, also have lower rates of inflammatory bowel disease. Latinos born and raised in South America rarely develop this gut disorder. If their children are born in the United States, where conditions are often more sanitary, they have a much higher risk of the disease.
Correlation does not equal causation, however, and the link had to be demonstrated. So, he began giving helminths to the mice in his lab that were models for inflammatory bowel disease, and did in fact show that they were protected against disease. He then moved on to treating volunteers, in whom he saw no adverse effects, and usually actual attenuation of disease, both bowel disease and MS. Pharmaceutical companies are now becoming interested, and double-blind studies of the effect of helminth infection on autoimmune disease are now being done.

How might parasites be protective?
Weinstock suggests that worms 'seem to have three major effects on the immune system.'  First, they cause changes in regulatory T cells so that these cells tone down the immune response, including autoimmune responses.  Second, they 'seem to act on other cells -- dendritic cells and macrophages,' which prevents the ramping up of the inflammatory response.  Yes, this is redundant with the first effect, as Weinstock has shown in experimental studies.  And third, they 'seem to alter the bacterial composition of intestinal flora,' in a way similar to ingesting 'probiotics,' helping to maintain intestinal health.

So if this work is right, and if cleanliness is next to godliness, it's starting to look as though the gods don't mind having a whole lot of sick people at their sides.  Weinstock is not suggesting that the industrialized world return to the heavy parasite loads of the recent past, but rather that controlled infection might be a good thing.

So low-tech, and yet with the potential to eliminate a huge disease burden.  And not a word about genes!  One can, of course, expect the massive, heavy-handed vested gene industry to start arguing about genetic variation in susceptibility to the parasites.  There will be some of that, but it is likely to be more GWAS minutiae than major causal factors.

BUT!
Here we must add a caution, however.  One-size-fits-all explanations are rife these days, and it seems unlikely that intestinal worms could explain so many increasing disorders of such different types.  The miracle discovery usually turns out to be a mirage, relevant in some particulars, but usually minor ones.  In this case, neither the immune system, nor autoimmune diseases, nor how the immune system responds to helminth infection is well enough understood for the cause and effect here to be convincing.

Empirical data seem suggestive, but the tropics/temperate-zone gradient is also associated with numerous other factors, which has led, e.g., to the sun/vitamin D exposure hypothesis with respect to multiple sclerosis; and clustering of cases has been suggestive of infectious causation.  The hygiene hypothesis is not confirmed in all studies, data quality is surely not comparable across regions of the world, and so on.  A lot of caveats.  We'll just have to see how this one turns out.