Wednesday, February 29, 2012


People adhering to a particular faith, say X, are often called Xists or those who believe in Xism.  Sometimes this is just a descriptor, but often it's used as a criticism, or worse.  That's what happens when pole-headed right-wingers call President Obama a 'socialist' (something the pole-heads apparently know nothing about).
Sistine Chapel; Wikimedia Commons

This kind of ism/ist description applies not only in cultural combat but also in science.  Thus people can be reductionist, frequentist, or falsificationist and so on, in the words of their detractors -- so these aren't compliments.  People generally would not use such a word about themselves if they felt that others used it in a derogatory way.

Creationists (and they really are ists in the sense of holding tightly to a specific ideology or creed!) often denigrate people who understand the real world as 'evolutionists', though it is not clear exactly what these terms mean, which can be important.  It is a castigating characterization when used by creationists, and it may imply that one is an ideologue about, rather than an explorer of, the subject of evolution.

We often refer critically on MT to evolutionary or Darwinian or genetic 'determinists' or 'determinism.'   However, our meaning is important to understand.  At a seminar in our department a couple of years ago, when we questioned the nature of the genetic determinism being invoked in some Darwinian adaptive Just-So story-telling, someone stated that he was a determinist--"and isn't everyone?" Well, in science the answer is clearly yes, and no.

Here's where language gets in the way.  If determinism means that Nature is causal and that every effect must have some cause, then most scientists would plead guilty to the charge.  To invoke effects without cause is to be mystic, and certainly that has nothing to do with science.  That doesn't mean that we have identified or understood the causation of the effects under discussion, and that is where the legitimate issues lie.

Darwin, Museum of Natural History, London; Wikimedia Commons
Things are less transparent when it comes to 'selectionism'.  If one makes the Darwinian assumption that whatever is here had to have got here by adaptive natural selection, then it is perfectly legitimate to be called a selectionist.  Much in evolutionary reconstruction is of this type, and it often includes behavioral evolution or even morphology, where the traits themselves can be hard even to define. If the trait must (by assumption) be the result of specific adaptive natural selection, then our task is to identify that selection.  But we can always find some such reason, since the function today can be equated to having been the advantageous function in the past.  This is entirely circular, and it's not science!

This doesn't mean that eyes or birds' wings or hominid locomotory apparatus got here 'by chance', as creationists still often falsely suggest, but it can mean that some functional elements ended up in our genomes by chance, if their initial harmfulness or helpfulness was slight relative to the size of the populations they were in (in genetic terms, they got into the genome by 'drift').  Duplicate genes that have no harmful effect but provide redundancy that can subsequently be used for new function constitute one of many examples.

Selection must build on what's there, however it got there.  Such chance-installed elements don't suddenly produce wings out of reptilian forelimbs, because complex traits involve too many changes.  But the elements themselves need not have got here by selection, since most will have slight effect.  Likewise, truly harmful things are eliminated by not being viable, and that is a form of selection, but not Darwinian or adaptive selection, since the defunct forms weren't really competing with anybody or anything.  They just didn't work.

The bottom line here is that determinism depends on the degree to which (1) truly probabilistic causes exist, and/or (2) one believes that a specific cause under consideration perfectly predicts a specific outcome, and/or (3) the cause acts alone but has predictive power only if very accurately measured, and we can't get such quality measures.  If a causal effect is truly probabilistic, it does not in the usual sense 'determine' the outcome.  And if the cause is but one of many contributors, and hence has weak predictive power, or has weak power because it cannot be measured accurately, then arguing for 'determinism' stretches the truth and merits criticism.  It doesn't mean there is no cause, but in these instances we cannot reliably or accurately predict the outcome from observing the cause.

Likewise for adaptive selectionism or 'Darwinism'.  If selection is weak, sporadic, erratic, or distributed over many different contributing factors, or if there is no selection but only drift, or if there is selection but we have no serious way to argue what its mechanism was, then the adaptationist argument stretches the truth and merits criticism.  Knowing the genes involved in a trait doesn't predict their change from one generation to the next and, indeed, different genotypes can generate the same phenotype, so that we cannot infer the cause from the result.  Again, this doesn't mean there is no cause, but it does mean that selectionism is over-stated.

What we argue when criticizing what we think are excessive claims of genetic determinism or selectionism is that the assertion being made does not bear scrutiny in these above senses.  We're not arguing for mystical causation or effects that are not 'determined' by physical causes.  We may be arguing that we have little idea or way of knowing what the cause(s) was or were, or that the assumption of a kind of causation can be made self-fulfilling rather than really scientifically testable.  It often seems to be true that based on methods and criteria we use today, some of these causal situations simply cannot in principle be worked out beyond some very imperfect level of precision.  Things too probabilistic, or too weak to be understood from the kinds of samples we can actually collect, are simply not accurately predictable from observing putative causes, and that also means we have inadequate ways of even identifying the causes.

We do seem to live in an orderly causal world.  There are 'laws' of Nature, even if the word is a human one that doesn't imply a law-giver.  Whether causation can be truly probabilistic, and whether there may be causal aspects whose very existence humans have not learned to detect or characterize, we have no idea.  Nor does anybody else.  There are wild theories of multiple universes to get around pure probabilistic causes, and things like dark matter and energy to get around some of what we observe in ordinary matter.  Who knows what else someone may some day discover.  Given this, we believe that more circumspection is in order about causal claims in the life sciences.  In part this is because science has practical implications for society, can be used towards evil or harmful ends (even if unintentionally), and costs resources that could be used for other things, if we had a less lobbying-based or ideological social environment in terms of making such decisions.

To argue that someone is a 'determinist' is not to label them with a slur as if they should instead be a mystic or crystal ball reader.  It is to argue that assertions should be tempered, and we should take more seriously the things that are clearly inadequately known but could be quite fundamental.  To be a 'Darwinian' or 'adaptationist' can mean not just that one recognizes the clear truth of evolution as a fact, and that survival requires success by definition, but can refer to someone who goes beyond that and assumes what is to be shown, and that certainly is not good science.  One wants to have an interpretive framework, without which science would be difficult if not impossible, but the framework needs to be tested rather than assumed.

Assuming a framework--being an 'ist'--may be good for hustling attention or grants, but not for a more serious--if avowedly less complete--understanding of things than the ists of this world would lead you to believe we now have.

Tuesday, February 28, 2012

Progress -- complex diseases are still complex

Ciliopathies are a class of disorders recognized only relatively recently.  They are genetic disorders that affect the function of the primary, or non-motile, cilium, of which most mammalian cells have one. The normal function of these organelles still isn't well-understood, and they were long thought to be vestiges of the eukaryotic cell's evolutionary past, but now they are thought to be 'cellular antennae', involved in sensing a wide variety of signals -- chemical sensing, temperature sensing, and the sensing of movement, at least, and in vertebrate development.  Here's a useful description of primary cilia.

Eukaryotic cilium.

A number of rare diseases have been associated with ciliary dysfunction, including spina bifida, some forms of retinitis pigmentosa, some obesity, some diabetes and liver disease, some breathing disorders, and so forth.  A paper in last week's Science by Lee et al. describes one ciliopathy, Joubert syndrome, a rare genetic disorder that affects the cerebellum, and thus balance and coordination.  This is of general interest because, as a commentary in the same issue points out, it elucidates just one aspect of why complex diseases can be so difficult to understand.

Lee et al. identified a gene, a TMEM (transmembrane) gene, that seemed to be responsible for Joubert syndrome in 5 of the 10 families in their study.  The disorder in the other families, who did not carry the same gene variant, seemed to be phenotypically identical, so the authors resequenced the region around the gene in question to look for possible causative variants nearby.  Sequencing of the 'exome' has become de rigueur in recent years (that is, all the exons, or coding regions, in a genome; as this is only ~1% of the genome, it's a lot cheaper and faster than sequencing the entire genome).  But, as this paper and commentary point out, restricting the search only to exons can miss important variants.

Indeed, Lee et al. found mutations in the neighboring, related TMEM gene.  Both genes, TMEM138 and TMEM216, encode transmembrane proteins, that is, proteins that span cell membranes with one part sticking out into the space surrounding the cell, where it can monitor aspects of the environment, and the other part remaining inside the cell.  But the authors found no homologous regions in the genes or the resulting proteins, and thus nothing that explained why the disease could be the same in all families.  This prompted them to look for shared sequence in the regulation of the expression of the two genes.
To test for coordinated expression, we examined tissue-expression patterns of human TMEM138 and TMEM216 using the microarray database and in situ hybridization of human embryos. We found tight coexpression values of human TMEM138 and TMEM216 across the major tissues, including the brain and kidneys, and similar expression patterns in various tissues, including the kidneys, cerebellar buds, and telencephalon, at 4 to 8 gestational weeks (gw) of human embryos. To test whether this coordinated expression was due to the adjacent localization, we compared mRNA levels in zebrafish versus mice, representing species before and after the gene rearrangement event. Using quantitative polymerase chain reaction (qPCR), we detected tightly coordinated expression levels in mice compared with those in zebrafish (correlation coefficient r = 0.984 versus 0.386), which suggests that TMEM138 and TMEM216 might share regulatory elements (REs) within the ~23-kb intergenic region. We further examined several experimental features and found that regulatory factor X 4, a transcription factor regulating ciliary genes, binds a RE conserved in the noncoding intergenic region to mediate coordinated expressions of TMEM138 and TMEM216
Further analysis leads them to suggest that both genes are necessary for normal development of the cilium, and that this is because they are regulated by a shared intergenic region, a 'cis-regulatory module', or CRM, a binding site for transcription factors that regulate nearby genes but that is not itself part of those genes.  How these modules arise or how the coordinated expression of genes evolves is not well-understood, but this CRM seems to explain the pattern Lee et al. found in the Joubert syndrome families they studied.
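The coexpression test in the quoted passage boils down to a correlation of the two genes' expression levels across tissues.  Here is a minimal sketch of how such an r is computed; the expression values below are invented for illustration and are not the paper's microarray data:

```python
# Sketch of a coexpression comparison like the one described above.
# Expression values are hypothetical (arbitrary units), not the paper's data.
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical expression of the two genes across five tissues.
tmem138 = [8.1, 6.4, 9.0, 3.2, 5.5]
tmem216 = [7.9, 6.6, 9.2, 3.0, 5.3]   # tracks tmem138 closely, so r is high

r = pearson_r(tmem138, tmem216)
print(round(r, 3))
```

A tightly coregulated pair gives r near 1, as Lee et al. report for mouse (r = 0.984); genes expressed independently give a much lower value, like their zebrafish comparison (r = 0.386).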

Aravinda Chakravarti and Ashish Kapoor say in their commentary on this paper, and on Mendelian disease in general, that this work represents a maturing of the understanding of complex genetic disease.  The genetics community should no longer be focused on single gene mutations, or even exomes (the protein-coding sections of 'genes'), but should instead recognize that complex diseases will require complex explanations.
Mutation analyses of single-gene defects have identified two puzzles: One is that not all individuals with a specific disorder have identifiable coding mutations; the other is that not all individuals with identical mutations, even in the same family, are equally affected, and some may be symptom-free. The first mystery has many suspected causes: The disorder may be due to another gene—even the adjacent one, as Lee et al. demonstrate—or arise from mutations in a gene's regulatory sequences, or be a phenocopy (a trait that is not of genetic origin but is environmentally induced and mimics the phenotype produced by a gene). This is a persistent challenge in studying an outbred organism like humans; just because a disorder is monogenic does not imply that it is monocausal. The second problem is more mysterious and far less understood. Phenotypic discordance, or variation in disease penetrance, between identical mutation bearers could result from differential environmental exposures (such as normal intelligence versus mental retardation in diet-treated versus untreated phenylketonuria). 
The goal remains to determine causation as well as to predict disease.  The pendulum keeps swinging between the search for common and rare variants with which to do this -- as this commentary says, "Studies of Mendelian disease should also move from its preoccupation with rare variants to a focus on common polymorphisms, particularly at regulatory sequences affecting either rare disorders like Hirschsprung disease or common disorders like myocardial infarction."

One reason for the current focus -- should we call it a 'fad'? -- on rare variants is that the heavily touted promise that everything in the universe would be explained by common genetic variants (and hence attractive to pharmaceutical companies and useful for widespread risk prediction) has proven largely to be a bust.  So, since nobody will give up on predictive genotyping, the move was to rare variants, which, not incidentally, will require extensive DNA sequencing, databases, analysis, and the grants that go with them, to find and document.  It's difficult not to wax cynical in this way.

As we have written many times, here and elsewhere, when there are many pathways to the same phenotype, including gene by environmental interactions, and when everyone is genetically unique, the idea that most cases of rare or common diseases can be explained or predicted is likely to be an unattainable goal. As a rule, causation involves a spectrum of strong and weak, common and rare, interacting effects.

Still, it is a sign of progress when major players, not just those of us working on a smaller scale or even those on the sidelines, are cautioning about the ineffectiveness of looking for answers only at single genes or coding regions, or in enormous studies. 

Monday, February 27, 2012

The ethics of artificial meat

A lab at Maastricht University in the Netherlands has just announced the successful use of stem cells to grow muscle artificially.  The researchers took some stem cells from a cow and grew them in tissue culture, and now have strips of muscle 2 cm long, 1 cm wide and 1 mm thick.  The tissue is currently very pale in appearance, but they will eventually add fat and blood to better mimic actual meat.  Getting it to taste like meat is another matter, but the researchers believe it's possible.

Why make artificial meat?  As the world population grows, the demand for meat grows as well, but raising meat the natural way is very costly, and is likely to only get more so.  Agriculture is in many ways a destructive enterprise -- as currently practiced it requires many tons of fertilizers, herbicides, pesticides, and the diversion of much water, not to mention land, to produce what we eat.  And, no matter how you feel about killing animals to eat, the production of meat is particularly inefficient and wasteful -- 100g of vegetable protein yields only 15g of animal protein -- it's only 15% efficient.  Mark Post, the Dutch researcher whose lab is making the meat, believes that artificial meat can be 50% efficient.  It would also have less of an impact on natural resources.  And, the stem cells from just one cow (it's not clear from the reports whether the cow has to be killed for the stem cells to be harvested) can potentially feed millions.
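To make the efficiency arithmetic concrete (the 15% and 50% figures are the ones quoted above; the rest is a back-of-the-envelope illustration, not data from Post's lab):

```python
# Rough illustration of the protein-conversion arithmetic in the text.
# 100 g of vegetable protein -> 15 g of animal protein: 15% efficient.
conventional_efficiency = 15 / 100   # conventional meat, per the figure quoted above
cultured_efficiency = 50 / 100       # Post's estimate for lab-grown meat

feed_protein_g = 100.0               # grams of vegetable protein going in
print(feed_protein_g * conventional_efficiency)  # grams of meat protein out: 15.0
print(feed_protein_g * cultured_efficiency)      # grams of meat protein out: 50.0

# Equivalently, feed needed per gram of meat protein:
print(1 / conventional_efficiency)   # ~6.7 g of feed protein per g, conventional
print(1 / cultured_efficiency)       # 2.0 g of feed protein per g, cultured
```

In other words, at the estimated 50% efficiency, cultured meat would need roughly a third of the feed protein that conventional meat does for the same output.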

From a humanitarian point of view, with artificial meat no cow needs to suffer feed-lot crowding and the kind of manipulation and minimal life that big herding often involves.  No fear, no line-up to the abattoir.  We presume that while muscle cells 'want' to live and stay alive, they do not have consciousness or sentient fear or suffering of the kind that whole animals with brains have.  So, regardless of efficiency considerations, this seems like a good use of science.

But then, one can ask, why make artificial meat when we can just not eat any meat at all?  That would be an even more truly efficient solution.  The demand for meat is rising especially rapidly in countries with a rapidly expanding middle class, such as India and China.  To expect them not to eat meat, when those of us in countries that have long been rich have been eating our fill for a long time, would be just like expecting them to take equal responsibility for global warming, or not to destroy the rainforest when what they really need is land to plant vegetables.  Arrogant on our part.

Is this a proper use of technology, or should one resist it because it simply avoids facing up to the real global needs for food efficiency?  McDonald's will have its answers to these questions, but you as world citizens must have yours -- and express it.

Friday, February 24, 2012

Claiming more? Show it! Faster than a speeding neutrino!

Marcello Truzzi, sociologist and founder or co-founder of a number of organizations investigating extraordinary claims, is said to have coined the phrase, "Extraordinary claims require extraordinary proof."  Carl Sagan popularized this phrase as "Extraordinary claims require extraordinary evidence."  And so it is with the recent claim that neutrinos can exceed the speed of light, the proverbial c in E=mc^2.

We posted about this remarkable claim when it first came out (here), and then when the same group claimed to have replicated their own results (here), and of course the story was all over the web.  As it should have been, because if true, it would have overturned one of the most robust theories in physics.  The finding was not only extraordinary, it was revolutionary.  (Ethan Siegel's blog, Starts With a Bang, has a bunch of very fine, accessible and detailed explanations of the whole story as it has unfolded.)

Flaws in the experiment are now being reported.  Here's the piece in Nature, but it's also everywhere -- like on the BBC.  Science Insider broke the news, and Siegel explains the possible alternative scenarios here.

The same group, OPERA, that made the original finding is now identifying their likely errors.  As the Nature piece explains it:
...according to a statement OPERA began circulating today, two possible problems have now been found with its set-up. As many physicists had speculated might be the case, both are related to the experiment’s pioneering use of Global Positioning System (GPS) signals to synchronize atomic clocks at each end of its neutrino beam. First, the passage of time on the clocks between the arrival of the synchronizing signal has to be interpolated and OPERA now says this may not have been done correctly. Second, there was a possible faulty connection between the GPS signal and the OPERA master clock.
The Science Insider story writes that if the group does the equivalent of rebooting their computer -- simpler, actually: just tightening the connection of the fiber optic cable that runs to their GPS receiver -- that one fix would add those missing 60 nanoseconds back to the neutrino travel time.


The BBC, though, tells the story differently.  As they tell it, OPERA says there's another possible explanation, which has to do with "the oscillator used to produce the events time-stamps in between the GPS synchronizations.  These two issues can modify the neutrino time of flight in opposite directions."   The BBC says that tightening the connection would increase the apparent already ultra-fast speed, while fixing the oscillator would slow it down.  That is, either they are more right, or they know why they're wrong.
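For a sense of scale: assuming the roughly 730 km CERN-to-Gran Sasso baseline, those disputed 60 nanoseconds amount to a fractional speed excess of only a few parts in 100,000.  A back-of-the-envelope sketch (not OPERA's actual analysis):

```python
# Back-of-the-envelope version of the OPERA discrepancy (not their analysis).
c = 299_792_458.0      # speed of light in vacuum, m/s
baseline_m = 730e3     # approximate CERN-to-Gran Sasso distance, m
early_s = 60e-9        # neutrinos reported arriving ~60 ns early

light_time = baseline_m / c                 # light's travel time, ~2.4 ms
fractional_excess = early_s / light_time    # implied (v - c) / c, to first order

print(f"light travel time: {light_time * 1e3:.3f} ms")
print(f"implied (v-c)/c:   {fractional_excess:.2e}")
```

So the whole revolution-or-not hinges on a timing error of about one part in 40,000 of the flight time, which is exactly why a loose cable or a drifting oscillator matters so much.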

So, apparently, within the group there's still hope of a revolution.  They'll keep us posted.  And while they work on tightening up experimental conditions, a group at Fermilab in Illinois and a group in Japan are hoping to test this themselves.

While we leave this to the physicists to sort out, there are still some lessons to be learned for the rest of us. The speed of light is a given in physics -- it has been tested without serious challenge for a century, and any suggestion that it can be exceeded must be met with skepticism. There are simply too many direct experiments, and zillions more indirect ones, that seem consistent with the theory. Were he alive today, Marcello Truzzi would surely have written the neutrino results up in his journal, The Zetetic (The Skeptic).

We don't have the same kinds of laws in Biology, as a rule, but evolution and the nature of genes are aspects that seem to come about as close to fundamental theory as we currently can get. And there are important lessons to learn about life at large, compared with what we learn from the ultra-tiny neutrino.

Whether the speed of light is actually 100.000000000000% constant in every 'vacuum' and every part of the universe apparently has to do with various theoretical issues or explanatory frameworks that are beyond what we know anything about. However, it is a close enough and robust enough finding that we can argue about whether a neutrino can violate this law at all. Any deviation, no matter how tiny, will grab major headlines and be good for the physics professor business!

We are not qualified to say whether a quadzillionth of a percent deviation from the proverbial c will change much that is of even theoretical importance. Does every single last photon always stream along at exactly the same speed all the time? That kind of constancy would be basically unprecedented in the world of even science's everyday life. What if it simply showed that c is not an eternally totally fixed value, but that photon-travelers, like neutrinos, sometimes hustle, sometimes dawdle a tiny tad?

Be that as it may, we have little if anything that is anywhere near so precise, exact, and universal about life or evolution. The proof of this is how easily--routinely, even--professors and their reporter-acolytes proclaim essentially revolutionary, major, dramatic, or transformative new findings.

A new fossil often is claimed to entirely overturn everything we said we knew about human evolution, or so the media and the discoverer will have you believe (as we have commented recently in MT). A fossil found sucking its thumb would be argued to completely revolutionize our understanding of the evolution of thumbs (and depending on its age at death, perhaps also about the length of childhood in our ancestors!).

In contemporary genetics, which is Gee!Wash in GWAS, first the idea that would revolutionize everything was that common variants cause common disease; then that to a great extent the same gene variants cause the same disease in all populations; then it was that rare variants are the culprits, to be discovered with whole-genome sequencing; then epigenetics; then copy-number variation; then gene regulatory networks. The chain of 'omics' revolutions is, so far, endless. Medicine will be revolutionized by being genomically personalized.

We are truly learning a lot about life, but the major changes in views claimed for each new finding or paper show clearly that we simply do not have our theoretical 'neutrinos'. Our knowledge is too easily 'revolutionized' by the next technology that comes down the pike to be considered theoretically very sound.

Now, physicists will be melodramatic about whatever is found in those little hyper-travelers, just as biologists are about every new genetic variant they discover that will guarantee immortality. And, not least of the physicists' worries in all this is whether it will affect their funding. They have their sick side just as we do: that finding out the truth will determine whether we can keep our jobs--even though our jobs are, supposedly, to find out the truth!

The educated public, and scientists ourselves, need to realize and acknowledge how very far we are from a physics-like understanding of life. And given that, and the topsy-turvy, claim-laden recent history of genetics, medical genetics, and evolutionary biology, there should be some slowing down, taking stock, and tempering of our claims. When we are this far from absolute truths, with no really sound underlying theory, we have no business over-promising, much less being in such a frenzied (fund-seeking-based) race.

Perhaps it's time for some medicine for our ailment: some sanctions for claiming too much and not acknowledging the depth of our own loose connections. Or some accountability for real progress in (say) curing disease, rather than the moving target of promises not met. If we had some accountability, or didn't rush for headlines and snow-jobs at every turn, we might temper our thoughts as well as our claims, and spend more time and effort understanding how multiple, variable, hard-to-measure causal elements work together, and vary, in relation to biological traits, normal and diseased, and their evolution.

We face very challenging and legitimate issues in biology, both to understand evolution and to bring about major biomedical advances. We should be able to make much better progress if we knuckle down more intensely to understand the complexity of life, rather than slicing and dicing it up into this or that one-size, large-scale, comprehensively enumerative ('omic') approach that essentially promises to turn complexity into simplicity. We know the professional pressures that push us in the latter direction, and we all feel them, and we inculcate new members of the guild into that environment. But nobody seems to be resisting these pressures.

Basically, like physicists, we should own up to our rather large array of loose connections. Until we do, there is something well-known that's Faster than a Speeding Neutrino. It is the speed with which biologists rush to call a press conference to announce their latest Discovery.

Thursday, February 23, 2012

A modest proposal: Please make us teach creationism

Many of you want us to teach other theories alongside evolution in science classes.

No problem!

We agree whole-heartedly and apologize that we have appeared to resist you, causing such consternation and turmoil.

Evolution is fundamentally important to teaching the natural sciences (as you've probably heard us say a million times), but teaching other theories alongside it can be extremely effective. So we are blushing with embarrassment that this curricular adjustment continues to be thwarted at the local, state, and federal levels.

Granted, there's a deep history of strife and for good reason. In order to get evolution into science classes, we demanded that creationism be taken out. But now with all that behind us and with evolution in the science classes where it belongs, not only are we cool with including the other theories, but we want you to know that we need them!

See, those of us who teach from an epistemological perspective try our best to convey to students not just what we know but more importantly how we know what we know. This is an eye-opening and empowering way to learn which is why we try to create this experience for students. The only way to do that is to teach about evolution now--how scientists have come to understand it--compared to how people used to explain the natural world, which greatly influenced unscientific beliefs about nature that people still hold today. This epistemological approach means that we strive for a non-dogmatic and non-indoctrinating presentation of the material, upholding prized scientific ideals that aren't shared by many of those who support creationism. 

Here's a taste of the experience. 

Look around the world. (You don't have to go anywhere. Click here, and look at pangolins, cuttlefishes, polar bears, wolverines, belugas, aye-ayes, gorillas, etc.) Now that you've made observations, what can you make of all those common traits, features, trends, and behaviors that we see among all living things?

With generations upon generations of humans making these observations, there are still only two main theories for explaining them: 

1. At some point in history, a divine or supernatural force created all the living things exactly as we see them today. Patterns of similarity among groups are only the whim of the creator, nothing more: CREATION.

2. Similarities among groups indicate shared ancestry, just like family resemblance but on a larger scale. Any two living creatures on Earth share a common ancestor at some point in history. And so on. If you trace all lineages back far enough, everything alive today shares a common ancestor. Organisms did not always look as they do now because over deep time, all lineages have accumulated changes generation after generation: EVOLUTION.

For #2 there is a subset of important and exciting theories--some more popular and better supported than others--as to how evolution unfolds and how separate lineages arise. And that's partly because there is more than one way to evolve. Darwin's adaptation by natural selection is a good example of a process that's got lots of supporting evidence. Lamarck's ideas, which aren't so different but receive less support, are also useful in the classroom for contrasting with natural selection, for putting modern genetics in context, and for introducing our burgeoning understanding of epigenetics. Mutation, genetic drift and gene flow are some other processes that are fundamental to evolution, so are concepts of deep time, competition, cooperation, symbiosis, and metabolic and developmental constraint.  

But given all those scientific theories, creation theory is the most effective way to put evolution in scientific context. It's a perfect foil for evolution, illuminating the scientific nature of evolutionary theory by demonstrating what science is not and what the scientific method cannot address.

It's not just hand-waving: creation theory is also arms-in-the-air-tossing and shoulder-shrugging. To apply such a dead-end theory to all lines of inquiry would prevent all science, not just the natural sciences, from advancing with our young people. This is powerful stuff that students can appreciate when we're free to be open about creationism during evolution lessons.

So, again, we're really embarrassed that it's come to such political fisticuffs. There's really no need for all this fighting because we're all on the same side. Please sign those scientific curricular demands into laws. We want you to force us to teach creationism and intelligent design in science classes. Being not only free but legally required to cover competing theories will only strengthen science education!

Thanks for all your efforts to strengthen science education.

Note:  "We" refers to Holly Dunsworth and anyone else who supports her by commenting here or elsewhere.

Wednesday, February 22, 2012

How many genes can we live without?

It's well known that 'the' human genome doesn't exist -- despite the hype about the complete sequencing of this thing (and despite the incompleteness of its sequencing).  In fact, each of us has a collection of DNA variants that, added together, means that our genome has never been seen before in the 3.8-billion-year history of genomes, and will never be seen again.  And it's no mean collection; we all differ from each other at something like 3 million loci.

We need to understand, first of all, that 'the' human genome is not from one person, and that what it is, is a reference sequence -- useful for comparing other data, but not definitive of our species.  The donors of the DNA were healthy at the time of donation, but that's about it; they were not particularly special in any way.

Human genome by functions; Wikimedia Commons
In fact, as it turns out, not only do we differ at single nucleotides -- you have a T where I have an A, I have a G where you have a C, and so on, times 3 million -- but we are each carrying around a not insignificant number of variants that result in loss of function (LoF) of some of our protein coding genes.  And many of these are genes we think of as essential.

A paper published in last week's Science, "A Systematic Survey of Loss-of-Function Variants in Human Protein-Coding Genes," by MacArthur et al., estimates that each of us carries around 100 of these LoF variants, 20 of which result in complete loss of a gene. The authors looked at three pilot data sets from the 1000 Genomes Project (58 Yoruba from Nigeria, 60 European Americans from Utah, 30 Chinese from Beijing, and 30 Japanese from Tokyo), plus a European genome, and, after filtering a larger initial set of candidate LoF variants, finally analyzed 1285 variants they found likely to cause protein-coding genes to lose function.

As MacArthur et al. point out, other recent studies of the complete DNA sequences of healthy individuals have also found many LoF variants -- from 200 to 800.  These are people walking around perfectly normal, so far in their lives, but without the use of substantial numbers of their genes.  The specifics vary, and what you can do without -- which of your genes aren't working -- likely depends on what is working.

Before the advent of complete genome sequencing, LoF variants were thought to be rare, largely associated with severe Mendelian disorders such as cystic fibrosis or Duchenne muscular dystrophy.  The finding that they aren't so rare after all suggests to MacArthur et al. "a previously unappreciated robustness of the human genome to gene-disrupting mutations and [this has] important implications for the clinical interpretation of human genome–sequencing data."
LoF variants found in healthy individuals will fall into several overlapping categories: severe recessive disease alleles in the heterozygous state; alleles that are less deleterious but nonetheless have an impact on phenotype and disease risk; benign LoF variation in redundant genes; genuine variants that do not seriously disrupt gene function; and, finally, a wide variety of sequencing and annotation artifacts. Distinguishing between these categories will be crucial for the complete functional interpretation of human genome sequences.
After they weeded out false positives (which were due to sequencing errors; of course, false positives are a problem in their own right in a clinical setting), the variants included indels (insertions or deletions of one or more nucleotides) that changed the splicing of the gene (and thus the amino acids that got strung together in the resulting protein), single nucleotide variants that introduced a stop codon into the gene sequence (that is, that caused translation of the gene's message to halt prematurely), and large deletions that removed some of the coding sequence.  Some of the variants were found to affect all known protein-coding transcripts of the affected gene, and some affected only some of the coding transcripts.  That is, some transcripts were normal.
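For readers who like to see logic spelled out in code, here is a toy sketch -- in Python, with invented field names, nothing like the authors' actual pipeline -- of how variants might be binned into the LoF categories just described:

```python
# Hypothetical, simplified classifier for the loss-of-function (LoF)
# variant categories described above. The record format is invented
# for illustration; the MacArthur et al. pipeline is far more involved.

def classify_lof(variant):
    """Bin a variant dict into a rough LoF category."""
    vtype = variant["type"]  # 'snv', 'indel', or 'deletion'
    if vtype == "snv" and variant.get("introduces_stop"):
        # A premature stop codon halts translation early
        return "stop-gain"
    if vtype == "indel" and variant.get("disrupts_splice_site"):
        # Altered splicing changes the resulting protein
        return "splice-disrupting"
    if vtype == "indel" and variant.get("length", 0) % 3 != 0:
        # A non-multiple-of-3 indel shifts the downstream reading frame
        return "frameshift"
    if vtype == "deletion" and variant.get("removes_coding_sequence"):
        return "large deletion"
    return "not LoF"

examples = [
    {"type": "snv", "introduces_stop": True},
    {"type": "indel", "length": 2},
    {"type": "deletion", "removes_coding_sequence": True},
]
print([classify_lof(v) for v in examples])
# → ['stop-gain', 'frameshift', 'large deletion']
```

The real analysis also has to contend with sequencing artifacts, annotation errors, and transcripts that escape the damage, which is exactly why MacArthur et al. spent so much effort on filtering.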

The authors find that the common gene-disrupting variants described in this study are not a significant cause of complex disease.  Most of the LoF variants identified that are associated with complex diseases, all but one of which were heterozygous in these subjects (they had one functional copy, one not), are at low frequency, presumably due to purifying selection -- that is, selection against alleles that are severely harmful, thus preventing them from reaching high frequencies in any population. 

Individuals in this study have about 120 LoF variants, about 100 of these heterozygous and 20 of them homozygous -- both the person's copies are non-functional.  It's the homozygous LoF's that seem to have no effect that are of most interest.  The individuals in this study are healthy -- so either their LoF's truly have no effect, or they haven't yet had an effect.  If they are truly what MacArthur et al. are calling LoF-tolerant genes, and don't lead to disease, the authors suggest that they can be used to "define the functional and evolutionary characteristics that distinguish these genes from severe recessive disease genes."
We examined the 253 genes containing validated LoF variants that were found to be homozygous in at least one individual. These LoF-tolerant genes are significantly less conserved and have fewer protein-protein interactions than the genome average. They are also enriched for functional categories related to chemosensation, largely explained by the enrichment of olfactory receptor genes in this class (13.0% versus 1.4% genome-wide), and depleted for genes involved in embryonic development and cellular metabolism.
The finding that so many of the genes that are 'allowed' to vary are olfactory receptor (OR) genes isn't much of a surprise, as we all carry many OR genes that are pseudogenes, ORs that no longer function.  So the researchers eliminated these from the set of genes that could lead to Mendelian disease, and then compared the remaining 213 LoF-tolerant genes with 858 known recessive disease genes.  They found these two categories to be very different, and suggest that the characteristics of the recessive disease genes that are not shared with the LoF-tolerant genes could be used to prioritize candidate disease genes.

This can be important because we all have so many variants, and identifying which of these is or are contributing to a disease we may have is often impossible without data from other affected family members.  If likely candidates can indeed be prioritized based on the results of this study, this could be very helpful.  But, many genes are only deleterious in a given environment, or after years of exposure to environmental factors, and one person's LoF-tolerant gene might be another's disease gene.

To us, the fact that so many genes can apparently be disrupted with no discernible ill effect is further evidence of the adaptability that evolution has built in -- DNA replication errors are common, the timing and locale of gene expression are often imprecise, and environments frequently change.  We're redundant, buffered, pretty hardy creatures!  The fact that all this can happen and we can live on with no ill effect is a beautiful fact of life.

Tuesday, February 21, 2012

It could might IS! A HOME run! Harry Caray science

This post was triggered by things said at genetics meetings we have attended, both in casual conversation and in formal presentations.  The attitudes are in no way particular to the participants of a given meeting, but have become everyday kinds of comments in the sciences we are involved in.  So we thought we would comment in turn.

Sports Lead-in
Harry Caray was one of baseball's best-ever announcers (his son Skip was also a prominent sportscaster).  Caray was with the St Louis Cards when I was studying meteorology in St Louis and the Cards won a fantastic World Series (against the hated Yankees). Then, after a bit of hanky-panky with a daughter-in-law of the Anheuser-Busch family (the Cards' owners), he had to get out of town, and he ended his career working magic with the Chicago Cubs.

He was a wonderful broadcaster in the good old days of sultry summer radio listening (before designated hitters and pampered players).  It was true excitement and, yes, totally partisan.  But MT is about science, which is not supposed to be partisan, so what's the connection, besides something to write about on a slow news day?

Is anything worth saying, worth exaggerating?
We often write critically about what we believe are excess claims made by scientists -- to the media, in papers, and in grant applications.  While we try to have responsible and legitimate reasons for this criticism, we are sometimes accused of just being negative, jealous of others' big grants, or just plain dour.  There is no getting away from personal feelings about life, but we have been sufficiently and consistently successful, even if we have not run a big science factory or large operation, that we don't believe our views can be chalked up to jealousy.  Rather, our views on this subject are, as much as possible, based on what we think are appropriate rather than venal reasons.  Big science can ignore such criticisms, and does so, but many in science are going to be squeezed out of the game, and put on the bench, by what is going on.

The science of today is to a great extent a competitive rat-race in which we rats (and our students) are trained to claim more than is justified, and to almost openly admit that exaggeration, and indeed even outright dishonesty, is part of the 'game'.  This is often called grantsmanship, or sometimes even marketing.   "It's the new business model",  "I say this to funders with my fingers crossed behind my back", "It'll be good for technology development even if we know it won't do what we say",  "Yes, it won't work, but it's a holding action til the right technology comes along",  "I'll say that here in this meeting, but I wouldn't say it when I testify before Congress.....",  "We needed to find another high-throughput sequencing project to keep our lab running...".  Because of their obviously dicey ethics, these things are often said with knowing smiles.

In these and many other similar ways, science is seen as a game, about pouring out papers and scrambling for the media (or profits, or patents, etc), and claiming major discoveries because......because everybody else is doing it.  We have to process many graduate students and post-docs, whether or not there will be jobs for them, because we need them as workers to keep our labs running.  We'll get ours, and leave the exponential growth problem for the future to deal with.

There's scant sense of shame in these aspects of modern industrialized, institutionalized science.  Maybe we're lucky that our society is so used to this kind of self-promotion that few people feel they actually have to cheat or outright lie, so that hopefully the wrongness component in science is dissembling rather than data fabrication.  That does seem to be the case.  Even science's Liar's Club does not tolerate pure invention!  We're lucky for that.

And, in any culture, the way of doing business is what it is, and never lives up to its self-proclaimed ideals.  So, science has its share of the resource pie, based on promises and practices that are not wholly honored, self-perpetuating rather than fully accountable, and only imperfectly adjusted to the societal needs it purports to satisfy.   We guard that share carefully, of course.  Perhaps one should just accept human imperfection and not worry about it.  In that sense, honesty is an ideal we compromise to the extent society and our guild's success let us get away with it.

"Holy cow!" every time!
One of Harry Caray's famous phrases, screamed with true enthusiasm after every play favorable to the home team, was "Holy cow!".  His enthusiasm was unrelenting, not dampened by whether success was as real as his expressions about it.  Indeed, he went untempered from the perennial winners the St Louis Cards, to the perennial losers the Chicago Cubs (sob, sob!).  But he kept on hoping, kept on striving, kept on exclaiming.  Caray died in 1998, but of course the Cubbies, like scientists, are still looking for the right answers.

Scientists need jobs like anybody else who's not in the upper class, but science should also be about seeking the right answers.  We should always be questioning and groping, because trying to understand Nature's puzzles is tough, and most findings, sadly, are minor and have little influence.  Those are demonstrable facts about the research bubble we're living through.

Science should not be like Harry Caray, describing what he knew was going to be a home run with feigned uncertainty: "It might be... it could be...", as became his very appealing formula.   That was great for baseball.  But there is too much equating of might be with must be in science these days.  Too much pretending that we are just testing Nature, when what we really do, too often, is design studies to prove some preconceived point, or to guarantee the incremental result that will justify the next, bigger incremental project.  This, for example, is what a lot of the GWAS and similar advocacy is about, at its root.  It's a costly way to do things.

But as is now almost openly acknowledged, today's science-as-business worldview is a substantial change from the more academic conditions of the past.  While the old system was never perfect, and science always has to consider how to get its costs covered, it has come to be about over-heated 'productivity' in the name of ever-more grant funding.  It's about the science establishment promoting the science establishment.  A lot of the pressure is due to low funding probabilities, the addiction of universities to overhead funds, and the weed-like growth of administrative bureaucracy -- in the funding as well as the recipient organizations.  But is this all just fine, or is it Hara-Kiri rather than harmless Harry Caray enthusiasm?  There seems to be little pressure to alter the system.

All of this may be the consequence of the democratization of universities, where more than a tiny elite can take part -- a system of institutionalized careers that may be no worse than any other, and probably better than most.  We need to earn our living, and we're in a culture fervently devoted to the ethos of competition.  That may or may not be good for science itself, or for what science does to improve the lives of the citizenry that pays for it.  When it came to baseball, nobody ever got tired of Harry Caray.  But until or unless we decide to alter this environment in science -- one that society may eventually tire of -- perhaps one can't blame us scientists for the way we, with Harry Caray's enthusiasm, excitedly proclaim "Holy Cow!! A Home Run!" every time we get a bunt single.

Monday, February 20, 2012

You are what you're infected with?

Rats infected with the parasite Toxoplasma gondii do crazy things.  They find the scent of cat urine sexy and attractive, and they don't run from the actual beasts; they are also more active in running wheels, increased activity that may more readily attract a cat's attention. When an infected rat is eaten by a cat, the T. gondii is passed on in the cat's feces to infect again; T. gondii can only reproduce inside the cat.  Great survival strategy on the part of the parasite, this trick of making the rat no longer fear cats -- now that's really building a better mouse-trap! Did this strategy evolve by adaptive selection, or is it just something that happened?

Czech biologist Jaroslav Flegr thinks T. gondii infections do much the same to humans -- his story is told in the March 2012 Atlantic Monthly.  Toxoplasmosis, the infection caused by T. gondii, affects a significant segment of the world's population -- perhaps 20% of Americans are infected, but 55% of French people, probably because the French diet includes more rare or raw meat than the American diet.  The usual mode of transmission is from a member of the cat family to another warm-blooded animal via ingestion of feces from an infected cat, but raw or rare meat can be another source.  It can also be transmitted from mother to fetus, and can result in serious complications in an infected fetus, including stillbirth.  This is why pregnant women are told to avoid litter boxes.

Infection has long been supposed to cause mild flu-like symptoms in otherwise healthy individuals, but then it was assumed that the parasite lay dormant in cysts sequestered away inside brain cells.  People with weakened immunity were at greater risk, however; in the days before antiretroviral drugs for treating HIV, toxoplasmosis infections are thought to have caused much of the dementia in patients with end-stage AIDS.

But maybe the parasite actually does more damage than has been thought.
...if Flegr is right, the “latent” parasite may be quietly tweaking the connections between our neurons, changing our response to frightening situations, our trust in others, how outgoing we are, and even our preference for certain scents. And that’s not all. He also believes that the organism contributes to car crashes, suicides, and mental disorders such as schizophrenia. When you add up all the different ways it can harm us, says Flegr, “Toxoplasma might even kill as many people as malaria, or at least a million people a year.” 
Flegr's hypothesis comes directly from his own experience.  He wondered for years why he was willing to take risks that others wouldn't, like crossing a street in the middle of traffic, or speaking out against communism in Communist Czechoslovakia.  Entirely by chance, he was tested for T. gondii by someone in his institution looking for infected people on whom to test a diagnostic kit they were developing, and he was discovered to be positive. To him, this explained his bizarre risk-taking behavior.

He reasons that T. gondii is not the only parasite that affects behavior.  The rabies virus incites fury in infected animals, ensuring that they bite others and thus pass on the infection.  Ants infected with parasitic Cordyceps fungi do all kinds of bizarre, self-destructive things, including climbing onto a blade of grass and then clamping on with their mandibles. Soon the fungus consumes the ant's brain, and fungal fruiting bodies sprout from the ant's head and burst, releasing spores into the air, to settle and find a home in another unsuspecting, soon-to-be-robotic ant. Apparently the Cordyceps fungi release chemicals that change an ant's pheromone reception, which alters its sense of navigation. Is this coincidence, not specific enough to have evolved per se?  Or is it a specific adaptation?

Another example of zombie ants involves infection by the lancet liver fluke, Dicrocoelium dendriticum.  When infected, the ant again climbs onto a blade of grass, where it clamps on, there to be eaten by a grazing sheep or cow.  The ant does this only in the evening, when the air cools, and if it survives the night uneaten, it climbs down and behaves normally again until the following evening, when the fluke regains control. Again, this is remarkable, but is it specific enough and frequent enough to be a Darwinian adaptation?  And what's in it for the poor manipulated ant?

Behaviors that seem (to human observers) so bizarre would be expected to involve some balance of costs and benefits, or else the victim species would have evolved resistance.  These examples raise many questions, and there are many more like them.

But in any case, parasite-induced behavior changes are not unprecedented.  Could T. gondii really do the same?
In the Soviet-stunted economy, animal studies were way beyond Flegr’s research budget. But fortunately for him, 30 to 40 percent of Czechs had the latent form of the disease, so plenty of students were available “to serve as very cheap experimental animals.” He began by giving them and their parasite-free peers standardized personality tests—an inexpensive, if somewhat crude, method of measuring differences between the groups. In addition, he used a computer-based test to assess the reaction times of participants, who were instructed to press a button as soon as a white square popped up anywhere against the dark background of the monitor.
The subjects who tested positive for the parasite had significantly delayed reaction times. Flegr was especially surprised to learn, though, that the protozoan appeared to cause many sex-specific changes in personality. Compared with uninfected men, males who had the parasite were more introverted, suspicious, oblivious to other people’s opinions of them, and inclined to disregard rules. Infected women, on the other hand, presented in exactly the opposite way: they were more outgoing, trusting, image-conscious, and rule-abiding than uninfected women.
Flegr confirmed these surprising findings with further research, finding that infected men were suspicious, sloppy dressers, and introverted, while infected women were well-dressed and gregarious.  Reaction times of infected people were considerably slower than those of the uninfected, and he found that infected people were 2 1/2 times more likely to be in traffic accidents -- a statistic that has been replicated in other countries.  Flegr says that the personality changes are generally subtle, detectable only on a statistical basis.  But, it turns out that a fairly substantial percentage of people diagnosed with schizophrenia are T. gondii positive.

What's the mechanism? 
Many schizophrenia patients show shrinkage in parts of their cerebral cortex, and Flegr thinks the protozoan may be to blame for that. He hands me a recently published paper on the topic that he co-authored with colleagues at Charles University, including a psychiatrist named Jiri Horacek. Twelve of 44 schizophrenia patients who underwent MRI scans, the team found, had reduced gray matter in the brain—and the decrease occurred almost exclusively in those who tested positive for T. gondii.
That's not clearly a mechanism, however, as the shrinkage could be entirely unrelated to schizophrenia.  Indeed, only about a quarter of the patients tested showed reduced gray matter.  Anything more convincing?

Apparently, sequencing of the T. gondii genome suggests that it has two genes that can make the infected animal increase production of dopamine, and elevated dopamine levels are a mark of schizophrenia. Infection also, apparently, increases the infected animal's gregariousness, and in humans increases sociability -- this is true even of infection with the influenza virus.  Infection can, apparently, even increase a person's (or a rat's) sex drive, and because many of these infections can be transmitted sexually, this improves their chances of being passed on.  This holds for any kind of infection that has been tested, not just T. gondii.

As it turns out, schizophrenia has been associated with a number of infections ("maternal rubella (German measles), influenza, Varicella zoster (chicken pox), Herpes (HSV-2), common cold infection with fever, or poliovirus infection while in childhood or adulthood, coxsackie virus infection (in neonates) or Lyme disease (vectored by the Ixodes tick and Borrelia burgdorferi) or Toxoplasmosis" -- from a 2011 paper by C.J. Carter).  And in fact, while genomewide association studies haven't found genes with major effects for schizophrenia itself, or reliably replicated what they have found, they have found 600 genes with small effect, many associated with inflammatory response, others implicated in the life cycle of the associated pathogens.  The same paper suggests that:
Schizophrenia may thus be a “pathogenetic” autoimmune disorder, caused by pathogens, genes, and the immune system acting together, and perhaps preventable by pathogen elimination, or curable by the removal of culpable antibodies and antigens.
That is, the authors suggest that the susceptibility genes code for proteins homologous to the pathogen's proteins, and that the latter might intermingle with or replace endogenous proteins; being different enough to disrupt normal function, they could lead to disease.
Pathogens' proteins may act as dummy ligands, decoy receptors, or via interactome interference. Many such proteins are immunogenic suggesting that antibody mediated knockdown of multiple schizophrenia gene products could contribute to the disease, explaining the immune activation in the brain and lymphocytes in schizophrenia, and the preponderance of immune-related gene variants in the schizophrenia genome. 
All of the pathogens implicated in schizophrenia express proteins with homology to multiple schizophrenia susceptibility gene products. The profile of each individual pathogen is again specific for different types of gene product, but all target key members of the schizophrenia network including dopamine, serotonin and glutamate receptors as well as neuregulin and growth-related or DISC1 related pathways.
So, the idea is that our genomes, our particular DNA variants, determine which human/viral matches we carry, and thus which pathogens we're susceptible to damage from.  So, in that sense, Carter, and others, suggest, schizophrenia and other behavioral disorders may be 'genetic', but environmental exposures, our vaccination history and so on determine the pathogens we might be infected with, and our immune system determines how we respond.

To be sure, these are statistical findings, and there are so many genes associated with schizophrenia -- or, perhaps more accurately, so many genes not clearly but weakly, possibly, maybe, yet not replicably associated -- that one could almost always find some potential association with these pathways.  That makes it hard to evaluate the infectious scenario.

One clear point, though, is that even when what we are is genetic, the genes need not be those we were born with.  Bacteria, and hence their genes, are vital to our survival, and that appears to be just for starters.  When parasites affect our gene expression or function, their genomes become part of ours.  And from a biological point of view, our genetic battle for persistence -- for staying alive -- may have more to do with microbial challenges than with wearing out, which is basically what many GWAS targets (cancer, diabetes, etc.) are about.

Even more important, perhaps, and a hint that we need to pay more attention to, is that many GWA kinds of studies are finding genes in immune-related systems, or those related to 'inflammation', for what appeared to be totally non-infectious and non-behavioral diseases, including diabetes, intestinal disorders, retinal disorders, and many others.  These would be genetic in the sense that genetic susceptibility is involved, but not in the sense of intrinsically harmful genetic variants.

Is this behavioral parasite work definitive?  Do we now know that schizophrenia, and other disorders, are infectious in origin?  No.  Many questions have yet to be answered.  Maternal or early childhood exposure seems to be associated with risk, but why does schizophrenia have such a relatively late age of onset, given early exposure?  And why so stereotypically in late adolescence?  And so on.

But, it's intriguing that many GWAS have found an albeit small proportion of risk of many diseases explained by immune genes.

Friday, February 17, 2012

First we were snapped, now we're SNP'd

I got a grant from my university to purchase genotyping kits and analysis with 23andMe for all 130 of my biological anthropology students this semester.

Can of annelids
I know what you're thinking: CAN OF WORMS!

But maybe that's not what you're thinking at all.  Because if you follow this blog you may be familiar with how poorly we understand the causal relationships between genes and so many of our emotionally, socially, and politically charged behaviors, disease risks, and causes of death. That is, at least for the genes we know about. And what's a 'gene' these days anyway? It's not necessarily restricted to a stretch of nucleotides.

But it's only nucleotides, or more specifically SNPs (single nucleotide polymorphisms; pronounced "snips"), that 23andMe analyzes. They genotype thousands of SNPs that vary among people, including those that link us to our relatives around the world, to interesting phenotypic traits (like whether our pee stinks when we eat asparagus), and to diseases and risks of developing diseases (like diabetes and Alzheimer's).
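For the curious: the raw data a customer can download from 23andMe is just a plain text table -- comment lines starting with '#', then tab-separated columns for rsid, chromosome, position, and genotype. Here's a minimal Python sketch of reading such a file; the sample rows below are invented for illustration, not anyone's real results:

```python
# Minimal sketch of parsing 23andMe-style raw data
# (tab-separated: rsid, chromosome, position, genotype;
# '#' lines are comments). Sample data invented for illustration.
import io

raw = io.StringIO(
    "# rsid\tchromosome\tposition\tgenotype\n"
    "rs4988235\t2\t136608646\tAG\n"
    "rs713598\t7\t141673345\tCC\n"
)

genotypes = {}
for line in raw:
    if line.startswith("#"):
        continue  # skip header/comment lines
    rsid, chrom, pos, genotype = line.rstrip("\n").split("\t")
    genotypes[rsid] = genotype

print(genotypes["rs4988235"])  # → AG
```

In practice you'd open the downloaded file instead of an in-memory string, but the format really is that simple -- which is part of why a classroom full of students can work with their own data directly.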

Ah, there it is...CAN OF WORMS!

But can a DNA test tell you that you're going to die? No, you know that already. Struggling with death is a part of life and certainly not a topic new to college classrooms.

But if wild classroom worms make you squirm beyond your comfort zone, then this definitely is not the right curriculum for you to implement. On the other hand, if you're like me and you believe that opening cans of worms is an often awkward but crucial part of your job, then... you're with me. Still, maybe you're wondering about the disease risk information? That's still pretty harsh, right?

It can be. Which is why I told the students that agreeing to 23andMe's terms of service is basically acknowledging that you are willing to be traumatized and that you can't blame anyone but yourself for choosing to go through with it.

Some didn't. It's completely voluntary. Students who do not want to send in their saliva for analysis can make a demo account and still participate fully and still earn an A in the course. They can also opt out of seeing disease risk results (and only see ancestry data) when the results come back to them.

Totally on board with being traumatized, many took the codes home with them to order their spit kits on-line. They did this despite the small chance that peering into their genome may show them evidence to suggest that their father did not father them.

Wait. What?

Yes. That really is possible... THE KING OF THE WORMS!
Part of what students have to do this semester is form a 'plan of action.' That's what I've called their assignment, where they predict what their SNPs will hold and where they explain what they will do if they find out they're at high risk for a disease or even, yes, that they might not be related to their father. (This discovery doesn't require paternal DNA. Since half of your genome is from your father, and since a few traits are pretty simple, the rare participant with the rare SNP can deduce that they did not get their DNA from the father who doesn't show the trait in question.)
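The underlying logic is plain Mendelian bookkeeping, simple enough to sketch in a few lines of Python -- a toy illustration, emphatically not a real paternity test, which compares many markers and allows for genotyping error and mutation:

```python
# Toy illustration of the Mendelian logic described above:
# a child inherits one allele at each SNP from each parent,
# so a child genotype sharing NO allele with the father's
# is inconsistent with paternity at that site.
# Not a real paternity test -- real tests use many markers
# and allow for mutation and genotyping error.

def consistent_with_father(child_genotype, father_genotype):
    """True if the child could have inherited one allele from this father."""
    return any(allele in father_genotype for allele in child_genotype)

print(consistent_with_father("AA", "AG"))  # → True: father can pass an A
print(consistent_with_father("AA", "GG"))  # → False: father has no A to pass
```

A single inconsistent SNP could be a genotyping error; it's a pattern of such inconsistencies across many sites that would raise the awkward question.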

But like I said, regardless of the high risk that they'll learn of risks, most students will still spit in the tube and send it off.  And I believe this will be true not just of my students but of this generation at large in just a few short years (or less).

The cost is already ridiculously low compared to just a few years ago. With about 200 of your dollars and up to eight weeks of your patience, 23andMe can do what just recently took thousands or even millions of dollars not to mention years to accomplish. And the price just keeps dropping. Once it's low enough, everyone will be having large portions of their genome analyzed, with just a click of the mouse and a spit in a tube and a stamp in the mail. That's it. And most of those people who have or who will participate in such a thing would not have the support of an entire college course, their peers, the faculty and the stellar guest experts (ethics, counseling, law, adaptation, geographic variation, disease risk variation, and the Personal Genome Project) that are coming to visit us in April.

The personal genome can of worms is already open. This curriculum may be cutting edge but it really is just about going along for the ride, but with eyes wider open than the dude in his mom's basement on his brother's old laptop with his dad's credit card.

We're pioneers of introspection and explorers of identity, much like the first folks to be photographed. We all take for granted seeing snaps of our person; soon we'll take for granted seeing SNPs of our personal genome.

First humans to be snapped: Boulevard du Temple by Daguerre, 1839 (wikipedia)

Some of the students involved in this curriculum started a terrific blog. Check it out!  

Thursday, February 16, 2012

Making you feel good, but making you feel bad!

Here's an example of where science meets society....and the result is a standoff.  A story in the BBC points out growing resistance to an anti-obesity ad campaign launched by the organization Strong4Life in Atlanta, Georgia, and designed to encourage overweight children to work on losing weight; the objection is that the ads can also make people feel bad about their weight.

Obesity is, for many at least, a cause of illness and limiting of quality of life.  Traditionally, in human history, obesity was apparently rare because a regular surfeit of food just wasn't available, especially relative to the energy required to get the food in the first place.  Genetic variation that was efficient at stowing away calories you didn't need right away, for future use, in the form of fat could perhaps have led to a survival and/or fertility (or gestation or lactation) advantage and hence have been advanced by natural selection.

In some countries today, men feel proud to have fat wives and wives want to be fat to please their husbands, because it signals wealth and that he's a good caretaker.  Nigerian 'fat farms' fatten up brides for this purpose.

But health-wise, if you're fit and trim, you'll feel better, more energetic, with more activity options open to you.   And your life and health expectancy are better.   These are statistical statements, not universals, but our media drive, and tend to standardize, quality judgments and imagery, and they generally give quite a different message.

In our rather passive lives today, where the media are the message, being overweight means not looking like pop stars, and hence not sexually desirable or not socially 'cool'.  In our delicate states of mind, in which it seems everyone is on the verge of needing a personal therapist, we don't want anything or anyone to make us feel bad.  Without rippled abs or various feminine curvatures, you are made to feel somehow inferior as a person.  Since social judgments are subjective, and we mold them through advertising and fashion, being overweight may make you feel bad, but ads against it may make you feel even worse.

The facts about obesity, its causes and genetics, are quite complex as many MT posts have tried to point out.  It's an example of a heavily studied problem that eludes simple answers, pointing out how difficult it is to understand this kind of causation, even if we are just focused on the technical issues of what you eat and what it does to you physically.  Our scientific arrogance is matched to a great extent by our scientific ignorance.

Nonetheless, overall pronouncements from science about obesity have at least substantial if not perfect evidence behind them.  But psychological feelings are also facts, also a part of health, and also as elusive as buttered eels.  So when we try to implement public health, who decides what's best, and why, and for whom?

We have no answers, but the questions are worth taking much more seriously than perhaps we do.  And how do we decide what research is costly, problematic, and off the point in any case?

Wednesday, February 15, 2012

Close the windows, please (on life)

We flew down to Houston over the weekend, for a festschrift for Jack Schull on the occasion of the 40th anniversary of the Center for Demographic and Population Genetics at the University of Texas that he founded in 1972, as well as to celebrate his upcoming 90th birthday.  I was a founding member of the CDPG faculty, and a privilege it was to be in such a distinguished group!  I stayed for 13 years, and Anne worked at the Center for 5 years before we came to Penn State.  The celebration and its associated symposium on genetics now, and back then, made for a fine occasion -- but that's more than we can say about getting there.

Personal entertainment system, Wikipedia
We got on the plane in DC, and there was, as it turned out, a warning sticker advertising DirecTV on the side of the plane by the door as we entered.  Inside, the screens on the seat backs were already turned on, and set to the worst of morning television (which, in truth, is probably all of it).  After enticing us for the duration of the time we spent at the gate, we were then offered the opportunity to pay $8 for access to DirecTV for the duration of the flight.  Wow, we didn't have to turn it off!  We could keep watching stories about surrogate sisters/mothers, fighting lovers, and all the rest of the best the networks have to offer.  And, in fact, we were surrounded by people who swiped their credit cards -- and paid to watch ads.  Willingly.  Without anyone twisting their arms. Then, we were told by a comparable blaring, flashing, relentless high-octane ad that there was still time to sign up for the wonders of 100 channels throughout the flight!  Flash! Flash! Flash! went the endless glaring computer-graphic advertising this wonderful opportunity.

This went on until we were in flight, and then returned several times.  But the most impressive fact of all was that on this beautiful sunny day, as we traversed the country, all the window shades were pulled--so that the sunlit skies wouldn't infringe on anyone's television watching.  The shades were down when we boarded, which had seemed strange, but either reflected the prior flight or the United crew setting the stage for their Flight Mart (later to be supplemented with repetitive hawking of lunches and drinks for sale).

What's wrong with all this, you ask.  Anyone could have chosen not to watch!  And no one was required to.  Well, 15, 10, even 5 years ago, advertising wasn't nearly as ubiquitous, and a lot of people would have protested the presence of ads everywhere you turn.  Now there are even ads in the bottoms of the bins you put your shoes in to go through airport security!  We as a population are even willing to pay to be bombarded by ads!  Not to mention to give up all manner of personal information to Facebook, Gmail, etc. so that they can target ads to our taste.  And it all comes in fast, jerky, Sesame Street fashion.  And, in jammed-up coach seats, if your neighbor is wedded to the television, you get the flash, flash, flash of image-switching that characterizes not just ads but movies and programs themselves these days.  It's some Madison Avenue idea of how to get your attention, manipulate your desires, or whatever.

Advertising really is brilliant when it comes to manipulating behavior.  If only health education were even a tenth as successful (if the message would just sit still long enough to be taught!).  While it's not clear (to us) whether people are willing to sit through ads to watch the talk shows, or sit through the talk shows to watch the ads, what is clear is that the acquiescence is the result of a kind of brainwashing, and it's creeping and insidious.  No wonder zombies are so popular these days.  But the joke's on us.

There's nothing like 3 hours of enforced proximity to complete strangers of all ages and from all walks of life to catch you up with current trends.  One couple traveling with a baby probably 4 months old was using all the latest techniques to keep him from crying.  The mother had a well-used book that was clearly the product of much research, and she kept moving it slowly through his field of vision as his eyes followed.  The pictures were simple and black and white, and there was a red spot on every page.  Surely some child development expert has shown that these kinds of images turn an ordinary child into a brilliant one.  All above average.  Indeed, here's where "High Contrast Colors are STILL the best".

But then there's this, from another site, which may be why geniusbabies is so insistent that babies really do need High Contrast:
It is true that objects with patterns having 100% contrast (that is, black-on-white) are the easiest for newborns and young infants to see. However, it is now known that they can distinguish much subtler shades of gray. For example, in the first month babies can distinguish two shades of gray that differ by only 5% in gray level (5% contrast). As good as that is, by 9 weeks of age, infants' contrast sensitivity becomes 10 times better, so that they can see large patterns or objects that have less than 0.5% contrast. This is nearly as good as adult contrast sensitivity (0.2%). This means that by about 2 months of age your baby is capable of perceiving almost all of the subtle shadings that make our visual world so rich, textured and interesting: shadings in clouds, shadows that are unique to your face; even a white teddy bear on a white couch!
geniusbabies has invested a lot in black, white and red all over, and now research is saying babies really can see everything.  Oops.

But ok, whatever.  Whether it's high or low contrast images, maybe we should rethink fixating babies to any image at all.  How do we know that's actually beneficial (whatever that means -- IQ scores?)?  Maybe making a baby fixate on such images so much of the time actually stimulates, say, attention deficit disorder.  Maybe a baby is better served being allowed to gaze at his or her fist, as s/he tries to figure out whose fingers those are at the end of it anyway.

We just completely made that up, but here's what one website says, in praise of exposure to high contrast images: "Stimulation appears to foster a desire for more stimulation leading to a cycle of continued engagement and subsequent development."  We could be on to something.  Maybe it leads to a cycle of demanding stimulation?

So we're stodgy and don't want to spend our lives glued to Fox 'News' or ad-heavy rapid-fire violent dramas, and the like.  We won't argue.  This is currently a part of our culture, for whatever reason.  But our point for MT readers is not (just) to gripe, but to suggest that this kind of environment may--must?--have some long term neurological, behavioral, endocrinological, immunological or other effects on people over the long term.  We would wager that such effects would be as great, or even much greater, than the genetic effects everyone is so spaced out over.  But how will we ever know?  Who will remember well enough, or measure accurately enough, or even think to ask about, this frenetic daily experience from cradle to hospital bed?

If, as is widely or even openly acknowledged, we are awful at measuring or even identifying environmental risks, that still doesn't slow down the momentum to claim everything is genetic.  Even a serious-minded epidemiologist would be hard pressed, 30 years from now, to include the Flash-Flash effect in risk-factor assessment.  Considering the other subtleties of disease risk, especially in later years, there is evidence that even the month you were born affects risk.

So, that's a problem for epidemiology.  But have we allowed the media industry, not exactly a selfless altruistic part of society, to dizzy us relentlessly, causing much more disease and disorder than we spend so much time trying to prevent as it is?

Who knows about that.  But what we do know is that we'd much rather spend our flight gazing out the window at the serenely passing parade of the clouds, the skies, and the landscape below.

Tuesday, February 14, 2012

Ptolemaic genetics: epicycles of lobbying

That was then...
Ibn al-Shatir's model for the
appearances of Mercury,
showing the multiplication of
epicycles in a Ptolemaic
enterprise. 14th century CE
(Wikimedia Commons).
Way back then, in the dark ol' days of science, the Alexandrian astronomer Claudius Ptolemy (c. 90-168 AD) tried to explain the positions of the planets in terms of divinely perfect circles of orbit around God's home (the Earth).  The idea that we were at the center of perfect celestial spheres was a standard 'scientific' explanation of the cosmos and our place in it.

But the cantankerous planets refused to play by the rules, and their paths deviated from perfect circles.  Indeed, occasionally they seemed to move backward through the skies!  Still, perfect circular orbits around Earth simply had to be true based on the fundamental belief system of the time, so astronomers invented numerous little deviations, called epicycles, to make the (we now know) elliptical orbital pegs fit the round holes of theory.

And then along came Nicolaus Copernicus (1473-1543 AD).  And the cosmos was turned inside out: the earth was not the center of things after all!

Thomas Kuhn famously described in The Structure of Scientific Revolutions how the best and the brightest scientists struggle valiantly to fit pegs into holes they don't really fit, until some bright person comes along and shows the benighted herd a better way to account for the same things.  Copernicus, Galileo, Newton, Einstein, and others were the knights in shining armor who inaugurated some of the most noteworthy of these occasional 'scientific revolutions.'  Darwin's evolutionary ideas are also a classic example.

The same kind of struggle is just what is happening now in genetics and evolutionary biology--indeed in many other fields in which statistical evidence runs headlong into causal complexity.  Whether, when, or what knightly change will occur is anyone's guess.

And this is now
Everyone remembers the hoopla the sequencing of the human genome was met with when it was announced (or rather, each time it was announced) -- we were promised that we would by now not only know why people were sick, but we'd be able to predict what we'd get sick with in the future.  It was promised that this would be a silver-bullet reality by the early 21st century by none other than Francis Collins.  Others were promising lifespans in the centuries: all of us would be Methuselahs!

So, all those illnesses would now be treatable or preventable in the first place. How?  Well, the genome would allow us to identify druggable pathways, and common diseases must be due to common genetic variants (an idea that came to be known as common disease common variant, or CDCV), and if we could just identify them, we'd be in business.  After all, didn't Darwin show us that everything about everything alive was due to genetic causation and natural selection?  If that's the case, we should be able to find it, and our wizardry at engineering would take the ball and run with it.  Big Pharma jumped on the 'druggable' genome bandwagon and people running big sequencing labs jumped on the CDCV idea, and genomewide association studies (GWAS) were born.  And then the 1000 Genomes project, and all the -omics projects....  Big is better, of course!  Not that these efforts weren't questioned at the time, based on what everyone should have known about evolution and population genetics, but the powers-that-be plowed ahead anyway.

Well, we're no longer in a minority of naysayers.  It's widely recognized that GWAS haven't been very successful, relative to the loud promises being trumpeted only a few years ago.  And even the successes they have had -- and numerous genes associated with traits have been identified, it must be said -- typically explain only a small amount of the variation in disease, or any trait, in fact.  So now researchers are working on automating the prediction of disease from gene variants based on protein structure and other DNA-based clues.  But the assumption--the belief system, really--is still that the answer is in the DNA, and disease prediction is still going to be possible.

A piece in the Feb 9 issue of Nature describes a number of state-of-the-art approaches to predicting the effects of DNA variants, in part based on what amino acid changes do to proteins.  The idea now is that diseases are going to be found to be due to rare variants, and the challenge is to figure out what these variants do.  In part, evolution will help us to do this.
"Sequencing data from an increasing number of species and larger human populations are revealing which variants can be tolerated by evolution and exist in healthy individuals."
But, are we trying to explain a current disease, or predict the diseases someone will eventually get? These are different endeavors, though it may often be inconvenient to acknowledge that.  Rare pediatric diseases that are due to single genetic mutations, or genetic diseases that cluster in families (and, again, usually rare and with young onset age), are easier to parse than the complex chronic diseases that most of us will eventually get.  But, based on the comparison of the genomes that have already been sequenced, we now know that we all seem to differ from each other at something like 3 million bases.  That is, we all have a genome that has never existed before and never will again. Assigning function to all that variation ranges from daunting to impossible -- not least because a lot of it might not even have a function.  And the idea that we'll eventually be able to make predictions from those variants is based on questionable assumptions.

It's true in one sense that every disease we get is genetic -- everything that happens in our body is affected by genes -- but in another sense, much of what happens is a response to the environment, and so is environmentally determined--that is, is not due to genetic variation in susceptibility.  Predicting a disease from genes when it's due to combined action of genes and environment, therefore, is a very challenging problem.

Here is just one example of why: Native Americans throughout the Americas are about 65 years into a widespread epidemic of obesity, type 2 diabetes and gallbladder disease, diseases that were quite rare in these people before World War II.  There are a number of reasons to suspect that their high prevalence is due to a fairly simple genetic susceptibility.  But, if gene variants (still not identified) are responsible, they have been at high frequency in the descendants of those who crossed the Bering Strait from Siberia for at least 10,000 years -- which means that variants that are now detrimental were "tolerated by evolution and exist[ed] in healthy individuals" for a very long time.

If geneticists had wanted to predict 70 years ago what diseases Native Americans were susceptible to, these variants would have been completely overlooked, because they weren't yet causing disease.  And indeed these 'risk' genes, whatever they be, were benign -- until the environment changed.  We're all walking around with variants that would kill us in some environment or other, and since we can't predict the environments we'll be living in even 20 years from now, never mind 50 or 100, the idea that we'll be able to predict which of our variants will be detrimental when we're old is just wrong. In fact, we're each walking around with substantial numbers of mutant or even 'dead' genes, with apparently no ill effect at all -- but who knows what the effect might be in a different environment.

But, ok, some of us do have single gene variants that make us sick now.  Many of these have been identified, most readily when a family of affected individuals is examined (though the benefit of knowing the gene is rarely of use therapeutically), but many more remain to be found.  The current idea is that this can be done by looking for mutations in chromosome regions that are conserved among species, and figuring out which of these change amino acids (and thus the protein coded for by the gene).  The idea is that unvarying regions are unvarying because natural selection has tested the variants that arose and found them wanting, thus eliminating them from the population.  They must, therefore, be functionally important!
A host of increasingly sophisticated algorithms predict whether a mutation is likely to change the function of a protein, or alter its expression. Sequencing data from an increasing number of species and larger human populations are revealing which variants can be tolerated by evolution and exist in healthy individuals. Huge research projects are assigning putative functions to sequences throughout the genome and allowing researchers to improve their hypotheses about variants. And for regions with known function, new techniques can use yeast and bacteria to assess the effects of hundreds of potential mammalian variants in a single experiment.
This is potentially useful, because for those with single gene mutations that cause disease -- 1 variant among 3 million other ways in which each person differs from everyone else -- homing in on the causative mutation is, again, difficult to impossible if you don't have a large family with similarly affected individuals in which to confirm the association of mutation and disease.
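The conservation logic behind this approach can be sketched simply. The following is our own toy illustration, not any published algorithm (real tools weigh amino acid chemistry, phylogenetic distance, and much else): positions where many species share the same residue are presumed functionally important, so a human variant at such a position is flagged as more likely damaging.

```python
# Toy sketch of conservation-based variant prioritization (our illustration):
# score each alignment column by how uniform it is across species, and flag
# human variants that fall in highly conserved columns.

from collections import Counter

def conservation_score(column):
    """Fraction of species sharing the most common residue at this position."""
    counts = Counter(column)
    return counts.most_common(1)[0][1] / len(column)

def prioritize(variant_positions, alignment, threshold=0.9):
    """Flag variant positions that fall in highly conserved alignment columns."""
    flagged = []
    for pos in variant_positions:
        column = [seq[pos] for seq in alignment]
        if conservation_score(column) >= threshold:
            flagged.append(pos)
    return flagged

# Five species aligned over six residues; hypothetical human variants sit
# at positions 1 (identical 'K' in all species) and 4 (already variable).
alignment = ["MKVLSA",
             "MKVLTA",
             "MKVMSA",
             "MKVLSA",
             "MKVLSA"]
print(prioritize([1, 4], alignment))  # [1] -- only the conserved position is flagged
```

Note that this sketch embodies exactly the assumption questioned in the next paragraph: that invariance implies the human variant is harmful, rather than, say, that truly vital positions can't vary at all.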

Well, if we can do with or without a protein (or other functional DNA element), depending on the variation we have across the genome, then even when the element is important its variation in a given individual may not be causal: there are many examples where that is clearly true.  Further, the same kind of evolutionary reasoning would say that centrally important -- and hence highly conserved -- parts of the genome probably cannot vary much without being lethal, largely to the embryo.  So, from that equally sound Darwinian reasoning, we would expect that disease-associated variation will be in the minor genes with only little effect!  So the 'evolutionary conservation' argument cuts both ways, and it's not at all clear which way its cut is sharpest.  It's a great idea, but in some ways the hope that searching for conservation will bail us out is just more wishful thinking to save business as usual.

Methuselah (Della Francesca ca. 1550) 
To complicate things even more, not all amino acid changes cause disease, or even do much of anything.  And again, sometimes they will only be harmful in a given environment.  And, of course, not all diseases are caused by protein changing mutations -- sometimes they are caused by disturbances to gene regulation.

In fairness, the multitude of researchers trying to make sense of the limitless genetic variation that is pouring out of DNA sequencers recognize that it's complicated.  But then, why are they still saying things like this, as quoted in the Nature piece: “The marriage of human genetics and functional genomics can deliver what the original plan of the human genome promised to medicine.”

What will come to the rescue?  Do we need another 'scientific revolution'?
We have no idea when or if our current model of living Nature will be shown to be naive, or whether our understanding is OK but we haven't cottoned on to a seriously better way to think about the problems, or indeed whether the hubris of computer and molecular scientists' love of technology will, in fact, be victorious.  If it comes, it could be.  But we are certainly in the midst of a struggle to fit the square truths about genetics and evolution into the round holes of Mendelian and Darwinian orthodoxy.

Perhaps the problem to be solved is how to back away from enumerative, probabilistic, reductionistic treatment of complex, multiple causation, and to make inferences in other ways.  We need to understand causation by numerous, small or even ephemeral statistical effects, without our current enumerative statistical methods of inference. In terms of the philosophy of science, doing that would require some replacement of the 400 year-old foundations of modern science, based on reductionistic, inductive methods that enabled science to get to the point today where we realize that we need something different.

The situation here is complicated relative to scientific revolutions in Copernicus', Newton's, Darwin's or even Einstein's time by the large, institutionalized, bureaucratized, fiscal juggernaut that science has become. This makes the rivalries for truth -- for explanations that this time will finally, really, truly solve the complexity problem -- even more frenzied, hubristic, grasping, and lobbying-driven than before.  That adds to the normal amount of ego all of us in science have, the desire to be right, to have insight, and so on.  Whether it will hasten the inspiration for a transforming better idea, or will just force momentum along incremental paths and make real insight even harder to come by, is a matter of opinion.

Sadly, the science funding system, including the role of lobbying via the media, is so entrenched in our careers, that dishonesty about what is claimed to the media or even said in grants is widespread and quietly acknowledged even by the most prominent people in the field: "It's what you have to say to get funded!", they say.  But where does dissembling end and dishonesty begin when it comes to the design and reporting of studies (and, here, we're not referring to fraud, but to misleading results and overpromising the importance of the work)?  The commitment to the ideology and the promises restrains freedom of thought, and certainly dampens innovative science.  But it's a trap for those who have to have grants and credit to make their living in research institutions and the science media.
Zip-line over rainforest canopy,
Costa Rica (Wikimedia)

But right now, scientists are like tropical trees, struggling mightily to be the one that reaches the sunlight, putting the others in their shade. What we need is a conceptual zip-line over the canopy.