Monday, September 30, 2013

The latest in science news makes your head spin

The BBC online, ever the guardian of shocking, dramatic, fundamental new science discoveries, reports that ballet dancers' brains adapt to their pirouettes, and they don't get dizzy!  They appear to suppress signals from their inner ear to their brain, normal signals that would make us non-spinners dizzy.  This doesn't come naturally and must be learned, since our ancestors' dancing was more related to stomping around a campfire muttering incantations (and, mainly, this was the men's preserve while the women watched, admiringly).  Of course, if the usual contemporary ideas of science are correct, there must be a gene for it (call it, say, Dizzy1).

Here's proof, sort of -- the dancer spins, and then keeps dancing:

Image from the BBC website
No, that photo's not really proof.  Fortunately, real science has now been brought to bear, the science some might call intra-phrenology (looking for virtual bumps inside the skull rather than on top of it).  To show that the balanced ballerina theory really is true, and not just spin, so to speak--that is, to show convincingly that ballet dancers really don't stumble around dizzily after every pirouette--scientists have brought in fMRI analysis and shown that the area of the brain responsible for dizziness shrinks in people who frequently pirouette.

The investigators offer justifications for this hi-tech approach, promising that it may lead to therapy for people with dizziness problems.  But since such justifications often are rationales for applying exotic technology, we are not in a position to judge the cogency of that argument against our current culture's apparent need to view every question through the spin of new instruments.

Regardless of one's level of skepticism about that, or belief that technology-first is the best way ahead, it is reassuring to learn that the human brain can do what we knew very well it could do--and that now we can show it on a video monitor, which the media seem to feel turns it into real science, a story that leaves us dizzy, or at least relatively speechless.  So we end this short post!

Friday, September 27, 2013

Test your color acuity!

Here's a test for color acuity (click on the image caption to get to it). Not color blindness, mind you, but acuity.  Whatever that is. This might help: one link to the test says, "Learn how you really see color!".  Another, "Discover your problem spectrum areas!"  Ok, I'll do it.  Who wouldn't want to know their problem spectrum areas?

The X-rite online color challenge

So, you drag and drop the colored tiles to arrange them in graded order between the blocks on each end of the four rows.  Then you get your score, which can range from 0 (perfect) to, well, very high (e.g., some 20-29 year old male has scored 444445389, whatever that means).  Maybe you've seen this test (ok, advertisement).  Maybe you've taken it. Maybe you've taken it twice, or three times.  It's kind of mesmerizing.  But does it tell you anything about your ability to discriminate color?

I got a 20 the first time I took this test, and then, shamed by my co-test takers, took it again and got an 8, resulting in fewer but different problem spectrum areas.  A friend, an artist, first got an 8, then a 4, and still not satisfied, took it again and finally got a perfect score.  My daughter got a perfect score the first time she took it.  As did my friend's partner, another artist who, incidentally, has been reading a lot lately about color.  Could that explain his score?  Yes, they bragged.

But what is this test actually testing? Anything about how we really see color?  Well, it must be a pretty sloppy measure of color acuity (whatever that is) if scores can bounce around as they do, and if our problem spectrum areas can change.  Or is it measuring trainability for taking tests of visual acuity? Or ... nothing?

Should we make an actual serious point or two here?  Meaningless tests like this one aren't the only kinds of tests that can claim to be measuring one thing but actually measure another ... or nothing.  The easy target is psychological testing, from IQ tests to fMRI scans, but hearing tests in noisy doctors' offices are another one, and, hell, let's even throw in background checks for gun purchases.

Or is the real problem the test's claim to precision?  We have neural wiring, and chemical (and mechano-chemical) processes that provide input to our wiring from the outside world.  Even assuming no change in the wiring (no brain cells die, are produced, or change their synapses) or the detectors (no retinal or auditory or olfactory cells die or are produced), there will be probabilistic variation in the strength of signals detected and sent to the wiring, and in the responding signals sent along the wiring.  In this sense, no one has a fixed level of visual acuity, or of other kinds of detection of the world.  Cold feels colder sometimes than others, smells are more intense at times, and so on.
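To make this concrete, here is a minimal simulation sketch in Python (our own toy model, not X-rite's actual scoring; the tile count, noise level, and displacement-based score are all assumptions) of how purely probabilistic variation in perceived hue can make the same observer's score bounce around between sittings:

import random

def take_test(true_hues, noise_sd, rng):
    # One sitting: the observer sorts tiles by noisy perceived hue.
    # Score = total displacement from the correct positions (0 = perfect),
    # loosely modeled on the test's 0-is-perfect scale (an assumption).
    perceived = [(h + rng.gauss(0, noise_sd), i) for i, h in enumerate(true_hues)]
    order = [i for _, i in sorted(perceived)]
    return sum(abs(pos - i) for pos, i in enumerate(order))

rng = random.Random(42)
tiles = [float(i) for i in range(22)]   # 22 evenly spaced hues in one row
print([take_test(tiles, noise_sd=0.8, rng=rng) for _ in range(5)])
# five sittings, one observer, identical 'acuity' -- five different scores

Nothing about the simulated observer changes between sittings; only the noise draw does, which is the point.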

We vary in our ability to do a task, and only someone very rash would suggest that we understand all the sources of that variation.  Reducing the phenomenon to individual causal elements--specific neural connections, specific genetic variants--may work for some strong-effect instances, but is illusory, as so much research shows, most of the time.  We may get approximations, but we usually don't know how approximate.

This is the challenge of complexity.  It is one of the central challenges to science today.  It's a serious challenge.  Our natural tendency to wish it away, or to claim that we understand it, often obscures how deeply challenging it is.  Maybe science doesn't even know how to frame the questions that need to be asked about it.  If every acuity test, or every case of diabetes, is complexly different, then the 'it' we want to explain isn't really, well, 'it'.

Thursday, September 26, 2013

Good health news -- progress on HIV/AIDS

It could be said that when it comes to health news we tend to accentuate the negative here on MT, so we were happy to see this story.  In 2000 the 189 member states of the United Nations, joined by 23 international organizations, established eight Millennium Development Goals (MDGs), to be met by 2015. We've pulled a succinct list of these goals from Wikipedia:
  1. Eradicating extreme poverty and hunger
  2. Achieving universal primary education
  3. Promoting gender equality and empowering women
  4. Reducing child mortality rates
  5. Improving maternal health
  6. Combating HIV/AIDS, malaria, and other diseases
  7. Ensuring environmental sustainability
  8. Developing a global partnership for development
There has been disagreement over the goals and how to attain them since they were established.  Not surprisingly, the countries furthest from the goals were the least likely to meet them, for example.  And there is currently disagreement over whether progress on some of the goals has had anything to do with the effort -- the first goal, e.g., reducing the number of people living on less than $1.25 a day by half, has been met, but economic development in China is responsible for much of that.  And extreme poverty is nowhere near being eliminated; there are still over 1 billion people living on less than $1.25 a day (one source for this figure is the Earth Institute at Columbia).

The Guardian has an excellent piece on this, including an interactive display of progress toward attainment of the MDGs as measured by 60 indicators, by nation.  They ask what progress has been made, and what setting global goals can achieve anyway.  Further, what should happen after 2015?

However you feel about the whole endeavor, whether or not the goals were challenging enough or what kind of progress has been made, the UN reported the other day on the MDGs, and it is undeniably true that the world is close to reaching targets on HIV/AIDS. The UN says that the spread of HIV/AIDS has been halted and even reversed.  The report cites 'dramatic' progress -- deaths and new infections are down and the number of people on antiretrovirals worldwide is up.
New HIV infections among adults and children were estimated at 2.3 million in 2012, a 33% reduction since 2001. New HIV infections among children have been reduced to 260 000 in 2012, a reduction of 52% since 2001. AIDS-related deaths have also dropped by 30% since the peak in 2005 as access to antiretroviral treatment expands.
By the end of 2012, some 9.7 million people in low- and middle-income countries were accessing antiretroviral therapy, an increase of nearly 20% in just one year. In 2011, UN Member States agreed to a 2015 target of reaching 15 million people with HIV treatment. However, as countries scaled up their treatment coverage and as new evidence emerged showing the HIV prevention benefits of antiretroviral therapy, the World Health Organization set new HIV treatment guidelines, expanding the total number of people estimated to be in need of treatment by more than 10 million.
Yes, 2.3 million new infections is still a significant number, just as the number of people in extreme poverty is still significant, and meeting the 2015 goals shouldn't allow the world to lose sight of that.  But a 33% reduction in HIV infections is also significant. We hope that this becomes a long-term trend.

Big goals are noble goals, and the challenge is to put resources where the rubber meets the road, and not to have too much gobbled up by bureaucrats, politicians, profiteers, and yes, professors before it ever reaches its intended end.  The MDG report suggests that, at least to some extent, this is happening.

Wednesday, September 25, 2013

Incidentally,...... (Interpreting incidental findings from DNA sequence data)

Genome sequencing yields masses of data.  It's one of the founding justifications of the currently fashionable term Big Data.  The jury is still out on how much of it is meaningful in any sort of clinical way, as opposed to other sorts of data; that's fine, it's early days yet.  But too much of current thinking seems to rest on ideas that are outdated in fundamental ways.  It would be helpful to get beyond this.

A new paper in The American Journal of Human Genetics ("Actionable, Pathogenic Incidental Findings in 1,000 Participants’ Exomes", Dorschner et al.) reports on a study of gene variants in 1000 genomes of participants in the National Heart, Lung, and Blood Institute Exome Sequencing Project.  How many variants associated with potentially undiagnosed genetic conditions does each individual carry?  This addresses the issue of how much individuals should be told about "incidental findings" in their genome or exome sequences.

The investigators looked at single nucleotide variants in 114 genes in 500 European American and 500 African American genomes.  They found 585 instances of 239 unique variants identified by the Human Gene Mutation Database as disease-causing.  Of these, 16 autosomal-dominant variants in 17 people were thought to be potentially pathogenic; one individual had 2 variants.  A smattering of other variants not listed in HGMD were found as well.  The paper reports pathogenic variant frequencies of ~3.4% and ~1.2% in individuals of European and African descent, respectively.

The 114 genes were chosen by a panel of experts, and pathogenicity determined by the same. 
“Actionable” genes in adults were defined as having deleterious mutation(s) whose penetrance would result in specific, defined medical recommendation(s) both supported by evidence and, when implemented, expected to improve an outcome(s) in terms of mortality or the avoidance of significant morbidity.
Variants were classified as pathogenic, likely pathogenic VUS (variant of uncertain significance), VUS, and likely benign VUS.  Classification criteria included the allele frequency of the variant (if low in the healthy population, it was considered more likely pathogenic than if high, relative to disease frequency), segregation evidence, the number of reports of affected individuals with the variant, and whether the mutation has been reported as a new mutation or not. The group decided not to return VUS incidental findings to the individual if the variant was in a gene unrelated to the reason they were included in the study in the first place.

Reviewers of the data followed stringent criteria to classify alleles.  For example, variants were considered suspect if they were identified by the HGMD as disease-causing.  But they were not considered disease-causing if the allele was common enough, relative to the disease frequency, that the allele alone couldn't be causal.
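The arithmetic behind that filter is worth making explicit.  Here's a back-of-the-envelope sketch in Python (our own illustration of the general logic, not Dorschner et al.'s exact procedure): if a single, fully penetrant variant were the sole cause of a disorder, its allele frequency is capped by the disease frequency itself.

import math

def max_credible_allele_freq(prevalence, mode):
    # Upper bound on allele frequency if one fully penetrant variant
    # were the sole cause of the disease (a deliberately conservative model).
    # Dominant: prevalence ~ 2p(1-p) ~ 2p  ->  p_max ~ prevalence / 2
    # Recessive: prevalence ~ p^2          ->  p_max ~ sqrt(prevalence)
    if mode == "dominant":
        return prevalence / 2
    if mode == "recessive":
        return math.sqrt(prevalence)
    raise ValueError(mode)

# e.g., a 'dominant' disease affecting 1 in 10,000 people:
print(max_credible_allele_freq(1e-4, "dominant"))   # 5e-05
# an HGMD 'disease-causing' allele at 1% frequency is 200 times too
# common to be the sole, fully penetrant cause, so it gets reclassified.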

This raises a point that we've made before, too often to a deaf audience: the variant is not 'dominant', and the 150-year-old term due to Mendel should be dropped from usage, in favor of a more accurate conception of probabilistic causation (see below).  If the allele is more common than the disease, other alleles or factors must also be involved.  Or, perhaps the allele was improperly classified as causal in the first place.
Punnett square showing results of crossing yellow and green peas; Wikipedia

And, "maximum allowable allele frequencies for each disease were calculated under a very conservative model, including the assumption that the given disorder was wholly due to that variant."  Disease frequencies were overestimated when they weren't known.  But is dominance a function of disease frequency?  Suggesting such a thing should raise big red flags about semantics and the conceptual working frameworks being used.
The eight participants with confirmed pathogenic (versus likely pathogenic) mutations included three with increased risk of breast and ovarian cancer (MIM 604370, caused by BRCA1 mutations, or MIM 612555, caused by BRCA2 mutations), one with a mutation in LDLR, associated with familial hypercholesterolemia (MIM 614337), one with a mutation in PMS2, associated with Lynch syndrome (MIM 614337), and two with mutations in MYBPC3, associated with hypertrophic cardiomyopathy (MIM 115197), as well as one person with two SERPINA1 mutations, associated with the autosomal-recessive disorder alpha-1-antitrypsin deficiency (MIM 613490).
Fewer actionable alleles were found in African Americans than European Americans, presumably because fewer studies have been done in this population and fewer causal alleles identified.  Dorschner et al. did not have access to the phenotypes of the people included in this study, so they can't know their health status with respect to these variants.  Nor, of course, can they know whether individuals might have a condition for which a causal allele was not found.

They report few pathogenic alleles, though the fact that they looked only for alleles associated with adult-onset conditions could partially explain this.  And their criteria were stringent.  And, of course, they were looking only for single gene disorders, so it's not a surprise that they identified so few potentially pathogenic alleles.  Single gene disorders are, after all, only a small subset of the conditions that might affect us.

A 2011 Cell paper we've mentioned before ("Exome sequencing of ion channel genes reveals complex variant profiles confounding personal risk assessment in epilepsy," Klassen et al.) looks at this question from a different angle.  Klassen et al. compared the exomes of 237 ion channel genes (known to be associated with epilepsy) in affected and unaffected people.  They found rare variants in Mendelian disease genes at equivalent prevalence in both groups.  That is, healthy people were as likely to have purportedly causal variants as those with sporadic, idiopathic epilepsy.  They caution that finding a variant is only a first step.

The unjustified dominance of 'dominance'
We feel compelled to comment again, and further, on the terminological gestalt involved in papers such as these, as we commented yesterday (and have done in earlier posts).

The concept of dominant single-locus causation goes back to Mendel, who carefully chose traits in peas that worked that way.  He knew other traits didn't.  He learned that not all 'dominant' traits showed 'Mendelian' inheritance and even came to doubt his own theory later in life.

There are traits that seem to 'segregate' in families in classical Mendelian fashion.  There is, for such traits, a qualitative (e.g., yes/no) relationship between genotype and phenotype (trait).  When a dominant allele is present, you always get the trait.  This means that (in the case of dominance) about 1/2 of the offspring of an affected parent are affected.   Much of the 20th century in human genetics was spent trying to fit patterns of inheritance to such single-gene models.  But it was very clear that dominance (even when there was evidence for it) wasn't dominance!  The traits were not always black and white (or should we say green and yellow?).  And the segregation proportion wasn't 50%.  What to do?

The conviction that the trait was due to the effects of a single locus seemed to have strong support, so the concept of 'penetrance' was introduced.  This is not the first time that a fudge factor has been used to force a model to fit data that it didn't really fit.  The idea in this case is that the inheritance of the allele (variant) from the parent had a 50% probability, but that the allele, once present, did not always cause the trait.  If you can add a factor of 'incomplete penetrance' that can vary from 0 to 1, then you can fit a whole lot of data that otherwise wouldn't support Mendelian causation.
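The arithmetic of the fudge is simple enough to fit in a few lines of Python (a schematic of the standard textbook model, not of any particular study):

def expected_affected_fraction(transmission_prob=0.5, penetrance=1.0):
    # Offspring of an affected Aa x aa mating, 'dominant' model:
    # P(affected) = P(inherits A) * P(trait | carries A)
    return transmission_prob * penetrance

print(expected_affected_fraction(penetrance=1.0))   # 0.5: classic Mendelian dominance
print(expected_affected_fraction(penetrance=0.6))   # 0.3: any shortfall from 50%
# can be 'explained' by choosing the penetrance that fits -- exactly the
# fudge-factor worry raised above.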

What we now know is that there are many variants at genes associated with single-gene traits, that other genes almost always also contribute (along with environmental factors as well, most of the time), and that the trait itself is quantitative: the same 'A' dominant allele doesn't always cause the same degree of severity and so on.  In other words, the trait is mainly (in a statistical sense) due to the presence of variation in a single gene, but the effect depends on the specific allele that is involved in a given case and also is affected by the rest of the genome.

In other words, there is a quantitative relationship between genotype and phenotype.  This is a general, accurate description of the pattern of causation.  The pattern is not 'Mendelian' and the causation is not dominant.  Or, to be clearer, the extreme cases are close to, or even exactly, single-allele dominant, but this is the exception that proves (tests) the quantitative-relationship rule.

We should stop being so misled by constraining legacy terminology. Mendel did great work.  But we don't have to keep working with obsolete ideas.

Tuesday, September 24, 2013

Genes for rare diseases

We blogged a while back about why people who have unexplained illnesses might want to have their genomes sequenced.  The best outcome of genome sequencing would be that a genetic explanation is found that leads to therapy, or that, if understood early enough, the effects of some such diseases can be ameliorated.  But this outcome occurs not nearly often enough -- indeed, causal genes are identified in something like 20-25% of cases these days.  To a great extent, the search is for 'Mendelian' diseases, that is, diseases caused by a variant (a mutational change in DNA) in a single gene. We'll come back to this aspect of the search, below.

In the US, a rare disease is defined as one that affects fewer than 200,000 people, and in Europe, as a disease with a prevalence of 1 in 2000.  Cystic fibrosis is the most common Mendelian disease in European populations, and even it is rare, at 1 in 2500 Caucasian births.  Sickle cell anemia is the most common single gene disorder among African Americans, with 1 in 500 newborns being affected.

Single gene effects that are serious and appear in childhood are generally rare because they so often aren't transmitted to the next generation, and so aren't maintained at any appreciable frequency in the population.  Even recessive variants would be kept to limited frequency in this way (as, for example, in CF).  With late onset, the variant could be more common since, while it may be devastatingly unpleasant, its effect arises only after reproduction is over, so its frequency isn't kept in check by natural selection.  If the disorder has complex environmental interactions and the like, it could have almost any frequency.  Ironically, one might say that if it were very common, it can't be too, too serious, and we might even consider it 'normal'.
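A standard Hardy-Weinberg calculation, using the CF incidence quoted above, shows how a serious recessive variant stays rare among affected births while hiding in plenty of unaffected carriers (a textbook approximation assuming random mating and a single causal allele -- which, as we note later, CF in fact violates):

import math

incidence = 1 / 2500                  # CF births among Europeans
q = math.sqrt(incidence)              # recessive allele frequency: q^2 = incidence
carriers = 2 * (1 - q) * q            # unaffected heterozygote frequency

print(f"allele frequency q ~ {q:.3f}")                 # ~0.020, i.e. 1 in 50 chromosomes
print(f"carrier frequency ~ 1 in {1 / carriers:.0f}")  # ~1 in 26 people
# Selection only 'sees' the rare q^2 homozygotes, so the allele persists
# far longer than a comparably serious dominant variant would.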

While each single gene disorder may be rare, there are an estimated 7000 such disorders known; the causal gene is said to have been identified for about half of these, as reported in a recent paper in Nature Reviews Genetics ("Rare-disease genetics in the era of next-generation sequencing: discovery to translation," Boycott et al.).  Most of these were found before modern very-high-throughput (e.g., 'next generation') genomic DNA sequencing methods were in use, and the work was laborious, but Boycott et al. suggest that next-generation sequencing will circumvent methodological issues that prevented gene discovery in the past, and that the genes for essentially all rare diseases will be identified by 2020.  Of course, new single gene disorders and new alleles for known disorders are always possible, as new mutations arise every generation, so the idea that all rare diseases will be explained by any given date can't be literally true.

How it's done
This post is a mix of what Boycott et al. say and our own cautions.  Nothing we have to say is new, or unknown to people in the field, but we think it's worth saying again anyway.  Whole genome and whole exome (protein coding segment) sequencing (WGS and WES) have both proven to be useful in the search for genes causing disease.  WGS includes all 6.2 billion nucleotides, while WES is protein coding sequence only, about 2% or so of the genome, so it's a lot less expensive to sequence and, in theory, easier to interpret.

Boycott et al. report that more than 180 new genes have been discovered with WES. Methods for analyzing the sequence data are standard now, and analysis has been aided by the growing number of complete genomes of healthy people that are accessible to researchers for comparative purposes.  The idea is that if the genomes of an ill and a healthy person share a variant, it's unlikely to explain the disease, though this isn't always the case.

Figure 2 from Boycott et al.: Gene identification approaches for different categories of rare disease
Mode of inheritance and family pattern of disease are both pieces of information that can, in theory, help narrow the search for causal variants.  This does generally require sharing of the variant and the symptoms in parent and child.  Each of us has a small set of protein coding variants, perhaps fewer than 500, that our parents don't share, but that arose in the egg or sperm cell, or in the fertilized egg when we first began to develop.  If a child has a single-gene so-called 'dominant' disorder that neither parent has, and the cause is in a protein coding region of a gene, this can be fairly straightforward to identify.  The child's variants are compared with 'normal' genomes, and unusual alleles considered as potentially causal.
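Schematically, that comparison is set subtraction plus a frequency filter.  A cartoon version in Python (the variant IDs, frequencies, and cutoff are all hypothetical; real pipelines add sequencing-quality filters, functional annotation, and much more):

def candidate_de_novo(child, mother, father, population_freq, max_freq=1e-4):
    # Child's variants absent from both parents and rare in reference
    # genomes: first-pass candidates for a de novo 'dominant' disorder.
    de_novo = child - mother - father
    return {v for v in de_novo if population_freq.get(v, 0.0) <= max_freq}

child  = {"chr7:117559590:G>A", "chr1:55505647:C>T", "chr2:21006288:A>G"}
mother = {"chr1:55505647:C>T"}
father = {"chr2:21006288:A>G"}
freqs  = {"chr7:117559590:G>A": 0.0}   # never seen in 'normal' genomes

print(candidate_de_novo(child, mother, father, freqs))
# {'chr7:117559590:G>A'} -- flagged for follow-up, not proof of causation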

But there may be sharing of variants but not the disease, because the variant doesn't always cause disease (isn't completely 'dominant'); or variants may be shared but not related to the disease.  This can complicate the story.  Those variants that do seem to be good causal candidates (because, say, they interfere with protein structure) will be considered first as possibly causal, and hopefully confirmed in other people or tested in the lab in various ways, such as by introducing the variant into a laboratory mouse.

Or, if multiple affected unrelated people share a variant, or have a variant in the same gene, it may be considered a good candidate.  If the causal variant arose in a specific tissue by somatic mutation, comparison of sequence from affected and unaffected tissue can be informative.  But that is currently usually impracticable for various technical reasons.  If a child has an autosomal recessive disorder, meaning that s/he must have two causal alleles, one from each parent, comparing two affected siblings can help pinpoint the alleles.  Disorders caused by clearly X-linked variants (when, generally, only males are affected) are fairly straightforward to investigate, as all variants on all other chromosomes are ignored. These are classical approaches, nearly a century old, but with new DNA technologies they are easier to pursue.  The real questions, of course, are how accurate such models are and how often they will work.

Boycott et al. note that finding genes for autosomal dominant disorders that are shared by multiple family members is challenging because they will share multiple variants across their genomes simply because they are closely related.  If no candidate gene is suspected, it will be difficult to narrow the variants down to likely disease-related candidates.

These are now standard approaches to analyzing WES data, but the glaring problem with whole exome sequencing is that it is impossible to search for variants in regions of the genome that regulate gene expression, and there are numerous ways in which gene regulation can go awry and cause disease.  In these cases, whole genome sequencing is necessary, but it is currently not possible to easily discern causal variants in the much larger sea of variants that whole genome sequencing will yield.  Indeed, Boycott et al. state that this class of variants will be the one that hinders completion of the atlas of genetic causes of rare disease.  Whole genome sequencing sounds like a savior, but so many variants will be found, and so little is currently known about what non-coding parts of DNA are doing, that this is currently an uphill battle, lots more promise than delivery.

And identifying disease-causing genes isn't necessarily much progress toward understanding the disease, or finding treatments, because, in Boycott et al.'s words,
In general, we are hampered by our incomplete understanding of the biological function of most genes and proteins; the linking of a poorly characterized gene to a human disease does not necessarily make the protein function clearer. Although in vitro analysis with cell lines from patients can considerably contribute to our understanding of protein function, often a more comprehensive investigation at the tissue, organ or whole-organism level is required. Thus, there is a need for coordinated model-organism research platforms to put disease-causing genes into a biological context.
That is, there's a lot of work yet to do.  And, as the authors also point out, progress on identifying genes causing rare diseases does not equate to progress in therapeutic approaches, and that's largely because there is little financial incentive to develop drugs for rare diseases that will bring few customers.

The Mendelian assumption: outdated thinking
The Boycott et al. paper is a review of the state of the art; that is, it reflects widespread current thinking about genetic disease and how to identify genes that may be involved in causation.  However, current thinking is too often classical thinking, still derived from Mendel and based on what is found in the easy cases -- almost what one might call the genetic reductio ad absurdum.   As we've posted and written often before, Mendel carefully chose single-gene, wholly deterministic examples in peas, to reduce the inheritance problem he was interested in (improving agricultural plants by hybridization).  He believed in, and wanted to find, single causal 'atoms' that applied to biology the way the elements chemists were finding applied to chemistry.

This led to the idea of genes 'for' disease, that is, genes named after a disease because a mutation in them is a sufficient or even necessary cause of the disease.  We still use Mendel's terms, dominant and recessive, for this kind of causation--identified traditionally by parent-to-offspring transmission. While this has gotten the field far, as we've said before we're still prisoners of Mendel in some ways, and this has stymied progress, and surely will continue to do so.

A prominent geneticist recently reported a widespread search of the human gene-coding regions (exons) to find inherited disease-causing variants in offspring (variants hence erroneously, and commonly, referred to as 'mutations'--erroneous because the parents also had them).  The study reported what to the authors was the surprising finding that many diseases were due to variation in two genes, not just one.  Indeed, this was referred to as a 'paradigm shift'.

The use of such a phrase for what is in fact not at all a surprising finding reflects the trivialization of the important concept of true revolutions in thinking for which the term 'paradigm shift' was coined by Thomas Kuhn (we've posted on this before, too).  Why trivial?  Why not new?  Isn't disease Mendelian?

The answer is that genes are transmitted in Mendelian fashion, but traits, including disease, only manifest when genes go into action.  Traits are not transmitted.  And the focus on single-variant traits, like Mendel's peas, characterized by AA, aa and aA gene transmission, showed decades ago that the story was much more complex than big-A sick, little-a well kinds of concepts.  Instead, even just in regard to a single disease-affecting gene, we know there are tens, hundreds, or even thousands of different variants, seen in different combinations in different patients, with different genotype-related quantitative effects on the nature or severity of the trait.  And rarely are all cases of a disease due to a single gene.

To document multiple-gene causation is an important task of human genetics.  But to cling to old terms and concepts out of ignorance or habit, and to manifest surprise when the concepts are shown to be wrong, is actually quite revealing of the mental prison in which, even despite decades of corrective knowledge, the history of science encages its present.

There are many cases of diseases whose individual cause is primarily the result of the genotype at a single gene.  These are, diagnostically at least if not therapeutically, very valuable to know about.  The phenomenon of Mendelian disease--properly understood--is certainly real.   But it is the easy exception that lures us into thinking everything is roughly the same, and hence into being surprised when we find it's not.

If we view every modification of our ideas as a 'paradigm shift', rather than just a modification or expansion of current working models or theory, then the term loses any real meaning.  That's happened, because it's a kind of self-flattering act to liken the findings of one's research to paradigm shifts.  Kuhn was talking about Copernicus vs Ptolemy, Einstein vs Newton, Darwin vs Genesis, and unifying discoveries like continental drift, as the paradigmatic sorts of eye-opening changes.

It is very difficult to shake old concepts, and we like to think that we're not just making 'findings' but 'discoveries', to use subjective terms.  Still, that aspect of science sociology should not, in this instance or any other, blind us to the real progress that is made, even if the ideas of paradigms and revolutions are turned into slogans.

Good needles to find in the haystack
Finding genes for 7000 rare diseases is not the same as explaining disease in the millions of people who have one of them.  The more than 2000 alleles in different parts of the CFTR gene in different people, assumed to cause cystic fibrosis because they're found in patients, comprise just one example of the complexity of 'simple' genetic diseases.  They show that terms like 'dominant' and 'recessive', which we got from Mendel, are conceptual dinosaurs and really should be dropped (CF is said to be a 'recessive' disease, but most patients do not have two copies of the same defective variant).  And many rare diseases, even if caused by a single gene in every affected person, are caused by variants in more than one gene, sometimes many genes. So we should drop the idea of a gene 'for' a disease, too. But entrenched concepts are hard to root out, and it's probably fair to say that the easy diseases and disorders have largely been done.

But still, we fully recognize that identifying genes for rare diseases is inordinately important for people who suffer from these diseases, for multiple reasons.  Indeed, the search will go on despite the many obstacles, and there will be successes.  And deep gratitude when there are.

Monday, September 23, 2013

Does Mars--or can Mars--tell us much about the origin of life?

The latest hot item from NASA is the statement (is it an admission?) that there's no methane, and thus no trace of E. coli, on Mars after all.  Well, they never hinted it would actually be E. coli, but at least some kind of bacteria (or 'microbe').  Now, after a year of exploration, apparently the rover has come up as empty of evidence as the Martian canals are of water (and maybe of old wrecks of gondolas).

Actually, for a few microseconds there was some advertised reason to think that the Red Planet was the birthplace of microbes: SARS from Mars, if you will. Some geologic formations in meteorites found in Antarctica, apparently from Mars, were touted as essentially fossilized microbe remains.  It was quickly shown by sober scientists that these were natural geological or mineral formations, with no real evidence of actual life, but some clung to the hope (or was it hype?) that the bug-like formations really were bacterial coffins.

Still, hope springs eternal in the human heart, so to convince us we should pay for missions to Mars rather than Mars bars to missions (to feed hungry people), the continual drumbeat from NASA has been the search for life.  In more tempered terms, even the scientists involved have been warning that the signatures of past life (nobody thinks Mars still has living wigglies, to our knowledge) will be very indirect: signatures of molecules related to life as we know it here on Earth.  In this case, Curiosity was seeking traces of methane (bacterial flatulence), but in vain.  Even the enthusiastic but responsible investigators noted that there are other ways for molecules such as earthly life uses to get to Mars, such as bombardment from space, for one.  We'll return to this later.

Instead of a thrilling Nature cover story, Mars is as dusty and lifeless as a pancake on a griddle:

From the BBC online story of Sept 19, 2013

What a crushing blow!  Think of the billions of dollars lost to Hollywood alone, by depriving them of countless movies about the past (or evil subterranean present) on Redsville.  In fact, as we've said in several prior posts, Hollywood could do the story a lot better than NASA, with probably a lot more drama and imagination. And Hollywood has the money, too (so do the makers of Grand Theft Auto V).  So Republicans rejoice: you don't have to pay for it with (ugh!) government funds.  Indeed, why bother to actually go to Mars?  The studio lots and video game staff could do it much better, maybe even a slight bit more cheaply.

But let's play a bit of a thought experiment game, where genes will actually tell the tale.....

What if NASA's dreams were true--and there really was life on Mars?
Now, the usual kinds of evidence for BioMars take two tacks, which we've mentioned.  First, the rod-like forms in the meteorite--the whole-'microbe' fossils; and second, the idea of the molecular life-residue that has been sought.

Electron microscope view of meteorite from Mars reveals bacteria-like structure; Wikipedia
Suppose a real, not imagined, microbe were actually found on Mars.  Then the NY Times and Science really would, for a change, have a story that deserved their typical excitement.  Suppose that, for some reason, the idea that life elsewhere would be just like it is here held true, and that BioMars evolved microbes.  Hell, if we're going to be fanciful and just extend our rather unimaginative earth-thinking to the entire universe, however implausible and egocentric that is, let's further suppose that, sheltered from decay but at Curiosity-shovel depth, this little Martian bug had DNA in it.  What a genuine thrill (though many would assume this was either a Hollywood stunt or a CIA plot)!  But what would it show?

Well, DNA sequences from Mars bugs would be used to reconstruct the time of origin, just as we do with real bugs here on Earth.  What would that origin time be?  It would have to be more or less what we see on Earth!  Why? Because we know how old the solar system is, and the sun is the energy source for Mars, so Martian life would have to have arisen roughly when it arose here.  Now, let's go further and suppose that life here is, after all, descended from some splattering of living microbes that somehow survived the explosive meteor-borne trip through near-zero-Kelvin space, from being blasted off Mars as rocky debris in some sort of volcanic eruption, to surviving the entry into Earth's atmosphere and landing here.
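The dating itself would use the standard molecular-clock logic, which is indifferent to which planet the sequences came from.  A minimal sketch in Python (toy sequences and a made-up substitution rate; real analyses use calibrated rates and correct for multiple substitutions at the same site):

def divergence_time(seq1, seq2, subs_per_site_per_year):
    # t = d / (2 * mu): each lineage accumulates mu substitutions per site
    # per year, so two lineages separated t years ago differ at ~2*mu*t sites.
    assert len(seq1) == len(seq2)
    d = sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)
    return d / (2 * subs_per_site_per_year)

# toy example: 10% divergence at an assumed rate of 1e-9 subs/site/year
print(divergence_time("ACGTACGTAC" * 10, "ACGTACGTAT" * 10, 1e-9))
# 50000000.0 -- ~50 million years under these invented numbers; the date
# scales directly with the assumed clock rate, Earthly or Martian.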

We date life as having a common origin about 4 billion years ago (the Earth being around 4.5 billion years old).  So, the common origin of life on Earth, even if it came from Mars, would be as we currently estimate it, and all biology would remain the same as it is now.  We would just make the admittedly fascinating change that the primal soup was there, not here (and that would justify the idea that there must have been liquid water on Mars at the time, etc.).

Now, the estimated common ancestral time for earthly creatures is 4 or so billion years.  If the Martian DNA sequences didn't seem related to Earthly ones, we would have to consider that life more or less the same as here arose separately there.  But that makes no sense, really, because if life here were seeded from there, and was just like Earth life (which is why everybody searches for extraterrestrial life that's much like us), then the Martian microbe's DNA would have to look like that of our estimated early life forms on Earth.  Hardly anything at all would change except, again, the immense fascination of it all (and one might have to rule out that some erupted detritus from Earth was the origin of Mars life).  We could estimate when Solar-system life originated, but perhaps not where.

But what if it's from elsewhere?
Now suppose that we say that Martian bio-like molecules rained down upon it from space, so that Mars life actually originated elsewhere.  Then the time of origin would have to be much older than 4 billion years, because there aren't any remotely nearby stars with planets that could have served as the source, so the splattering ET microbes would have come from light years away, etc.  Their origin time, reconstructed from their DNA variation (again assuming a DNA-like basis and microbe-like parallels to earthly life, which is what everybody's looking for), would date to when life started plus the duration of the space journey here, which means an origin time hugely older than 4 billion years.  And if such stuff reached Mars we'd certainly expect that the same splattering rain from some other star or galaxy would have seeded life here, too.  But our 4 billion year time is inconsistent with that. It didn't happen (despite what Francis Crick believed).

Even if that had happened, however, and even if some remnants were to be found on Earth, it would have had nothing to do with life as we know it today, and its origins. Our reconstruction of the evolution of life here would not change, because on Earth the common ancestor of all living species is about 4 billion years old, so the evolution of what has survived would be just as we already reconstruct it, and would have occurred--and originated--right here at home.

What this means is that, even considering most of the fairy stories being spun, it's hard to see how life on Mars could change anything we know about life--here or there--by very much, if at all.  Again, if anything believable like that were found, it would be very interesting, but it's more than a stretch to tantalize us with the kinds of things we routinely see in the media.

If we want a real surprise about life, not just a video game, it will almost certainly have to come from the discovery of something we'd call 'life', truly soaring through interstellar if not intergalactic space to reach us, that's organized and made of something other than our known organic chemistry, not based on DNA, not destroyed beyond recognition on its frigid, cosmic-ray-blasted way here, with no resemblance to what we see here or could plausibly see on Mars.  And no SARS or other microbes.  And how likely is that?

It seems by far the most likely reality that, whether or not we're alone in the universe, we're alone in the solar system.  And that the most lively stimulation we'll get from Mars is from the candy bar.

If there's a good scientific, or even geopolitical, reason to go to Mars with people or more robots that's worth the cost, let's discuss it--without the search-for-life red herring.

Friday, September 20, 2013

Is there truth? Does science find it?

I listened yesterday to the first installment of this season's BBC Radio 4 radio program, In Our Time. Happy it's back!  This episode discussed the life and work of the French Enlightenment-era genius and polymath, Blaise Pascal.

The Enlightenment was a period of intellectual ferment that turned from authority to empiricism, in the belief that understanding the real world comes from observation and experiment, not just consulting authority (like the Bible or Aristotle).  Main names of that era include Galileo, Descartes, Newton, Bacon, and so on.  They all were part of the push to understand theories or laws of nature based on careful experiment and observation.
Blaise Pascal, 1623-1662

These thinkers, and others, rejected the notion that perfect truth could be found in the classical or sacred texts, holding instead that we could at least better understand the world through careful scientific methods.  But how 'better'?  Is there truth about the world?  Does it follow universal regularities--laws of nature, as Newton and his intellectual descendants to the present day say--or not?  If so, the job of science is to identify and understand those laws.  But there is a further implication of that view: there must be a truth!

If truth exists, does science find it?
The popular idea about science is that it seeks the truth about physical nature and, for those not holding out for non-physical truth such as religion, even about the nature of 'mind'.  Perhaps the popular belief is that in many areas science has done its job already.  For most, however, science is seen as approaching better and better understanding of truth in various areas like physics, chemistry, geology, biology and others.  The idea is that truth exists and we approach it asymptotically, that is, gradually getting ever closer--even if we, as imperfect creatures, may never get 100% there.

Pascal calculator

Philosophers of science long held the view that science works by gradually refining its insights with very focused methods that get closer and closer.  Yes, there may still be major or even transforming insights yet to come, and yes, we can't now know what they'll be.  Only when the next Newton, Einstein, Maxwell, or Darwin comes along will some jump, rather than gradual approach, to truth occur.  Often we hear the phrase 'paradigm' to refer to the current theory, and 'paradigm shift' to such huge transformations of our view--such as when evolution replaced creationism: you can't look back, once you see the new view and realize that it is.....what?  'true'?  'better'?  more convincing or convenient?  more useful for engineering and business?

Blaise Pascal and others, apparently even Newton himself, did recognize that science wasn't necessarily approaching Truth.  Only God could know the truth, and (according to many of them) so had we, until the Fall in the Garden of Eden.  Still, they felt we could approach truth, and many believed that mathematics was some ultimately true and consistent language for doing that.

In the mid-20th century Thomas Kuhn and others looked at science not as a purely objective path of a train moving to the station of Truth, but instead as a social phenomenon, a society of individuals who accept a given explanation (that's the 'paradigm') and press towards making that explanation fit ever more facts.  This might be seen as just a sociological analysis of the way science works. But Kuhn and others said that that is not correct!  Science is a social mechanism for deciding what we accept as truth, but there is no way to say that we are approaching truth.  A paradigm can explain facts we care about today, but not things we're not even asking about.  And since we don't actually know what the truth is (or if there is such a thing!), we can't say science is 'progressing' towards understanding it.  Science leads to better explanation or more accurate prediction of aspects we currently care about, but usually omits consideration of things not convenient or interesting--or issues that haven't yet arisen.

Many of our posts on MT are about the struggle to understand the world, and the various sociological aspects of the claims, activities, and so on that characterize modern life and health sciences.   An important aspect of this is the notion of causation, and whether it can ever be purely deterministic or, perhaps more perplexingly, truly probabilistic.  We use probability and statistics routinely in science, for various reasons, but often implicitly explain this by saying that there are always measurement and other sources of error, or things too minuscule for us to identify perfectly, or that we have to collect incomplete samples and try to generalize from them to the whole in some way.

One truth?  Many?  None?
Still, do we believe that Nature has universal--truly universal, everywhere on earth and everywhere else--laws or truth that apply without exception?  Such truth would be deterministic--if you know a situation perfectly you can predict the future perfectly.  If so, we might expect that scientific methods will truly be able to reveal that truth, at least asymptotically.

Or, if the truth is that causation is probabilistic (as it seems to be in quantum mechanics as it's done today), there isn't one state of Nature but an array of states, each with some probability of being observed.  Even there, we assume a fixed or deterministic distribution, or set, of those probabilities.  If that's a correct view, then science--if we do it right--should be able to asymptotically approach the true values of those probabilities.
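The 'asymptotic' intuition in that case is just the law of large numbers.  A tiny Python sketch (with an arbitrary, invented 'true' probability) of how repeated observation closes in on a fixed underlying probability, if such a thing exists:

import random

rng = random.Random(0)
TRUE_P = 0.37    # the fixed 'truth' this scenario assumes exists

n, hits = 0, 0
for target in (10, 100, 1000, 10000, 100000):
    while n < target:
        hits += rng.random() < TRUE_P
        n += 1
    print(f"after {n:>6} observations: estimate = {hits / n:.4f}")
# the estimate wobbles toward 0.37 -- but note that the premise does all
# the work: a single fixed distribution to converge on

The harder question, taken up next, is what happens if there is no such fixed target at all.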

Could it be that we're more fundamentally wrong?  Could Nature not have just one truth in a given area, nor a universal truth?  If that's so, how would we know?  How would we know how many truths there were, or when and where they applied?  Are they discrete--that is, does one truth stand alone relative to other truths about the same thing?  If so, where are they relative to each other: do truths overlap, or is one truth more dominant than another?   If these questions had specific answers, we might expect that science could asymptotically approach this set or distribution of truths.  It's weird, but theories such as multiverse or many-worlds cosmologies entertain such ideas and treat them quite seriously.  Can we, or should we, try to build such concepts into our daily scientific life?

Today, we scientists can excuse ourselves as still being in an infantile state of understanding Nature, and plead for further support by saying that complete truth has eluded us so far but, because it exists, it's important to continue to refine our asymptotic approach to it.  That is, the more work we do, the closer we get to complete understanding.

Pascal's answer was that in frustration we'll be driven ultimately to theological religion (Christianity, for him) as the only, or at least the safest, way to go. If that's what happens to science, it will be quite a surprise....even a paradigm shift!

Thursday, September 19, 2013

Cute, cuddly....and off the mark

Well, the gag-writers on the editorial staff of the entertainment publication called Nature have done it again!  This cover, like all of their covers these days, is arrayed with cute tag-lines for the stories within.
Awww, how cute!
The cuddly picture is of some furry little mice helping each other out--cooperating.  Heavens! So anti-Darwinian of them!  Richard Dawkins must be crying in his beer!  This story is based on the very latest hot major discovery in evolutionary science: cooperation exists!  And now we know why--it's oxytocin!  (Single gene evolution; what else would you expect for a Nature story?)

This finding (and we have no way or reason to question its validity) may be viewed as a refutation of the harsher Darwinian world of relentless competition, asserted by generations of deep evolutionary thinkers, many of whom were wont to popularize pared-down ideas of what is really a much more complex nature.  Until recently, any cooperation has been viewed as just a touchy-feely way to describe what is really competition in disguise.  But now it's cooperation we have to explain.

Whether or not, or to what extent, cuddly or combative characterize the true state of Nature, we seem to be going like a pendulum from one view to another.  Each leads to its set of cover stories, each touted as a major discovery.  Each the Truth du Jour.

Of course, Nature is a business, and a very successful one at that.  But if they were more about science than sales, perhaps the pendular nature of science would itself be the story, rather than the position of the pendulum at any given time.

As we repeatedly note, a similar oscillatory Truth-O-Meter is routine in biomedical and other aspects of genetic research, where every story is the Big Story and is paradigm-shifting.  This partly reflects the careerist and other rather venal interests of our time, but it also reveals the realities of the kinds of data we have available to us today, that is, the state of the science itself.

What the real story is, and what the best science should be about
The actual story that should be getting our attention is what one might call the 'meta-story', the story about the stories being published every day.  That story is the fact that we have new Truths du Jour every day.  If science were what we scientists tend to blare so confidently that it is, then we should not be reversing our views every few days.  The pendulum should come to a near-stop.

The fact that we can shift from the view that all life is driven by Hamilton's rule (be nice, but only selfishly) to the latest Wilson (EO and David Sloan) view (that cooperation is what life's about), or that coffee prevents, or causes, disease, and so much else like it--that changeability itself is the story.  When our knowledge is so fickle, supposedly each time supported by funded (and presumably well-designed) and ever-larger technically sophisticated studies, then there is something rotten in the state of Science.

We obviously do not have an adequate understanding--an adequate theory--of the nature of life.  The main job of science, at least as we teach it to our students, is to understand the truth of nature--the laws of nature--not just to teach the current fads of scientists.  If truth is our aim, we will never get all the way there, as there's always more to learn that we don't yet know.  But we should at least have a firm grasp on seemingly simple questions, such as whether our mission in life is to kill or cuddle.  Or whether coffee, eggs, or a daily drink are good or bad for your health.  Or which genes raise, and which reduce, your risk of being a great athlete or getting colon cancer.

Instead of boasting of each new ephemeral finding and assigning cosmic import to it, we should be worried that we're throwing money and intellectual energy away on routine, if technically sophisticated, incremental but far-off-the-mark research.  The push to find what amount to superficial answers, in the huge operation that is modern science, should be changed into a strong push to bypass the superficial and instead to find and understand the deeper truths.  Many of us, perhaps especially those of us who are more senior, tend to blame the tenure and research-funding pressures as being materialistically short-sighted.  We may just be reactionary cranks, but whatever the reason, a greater stress on basic epistemology, our most general and strongest ways of understanding living nature, should be a primary objective, and what we stress to our students.

Perhaps one might think some exception would be in order for applied fields like medicine.  But the see-saw patterns in medicine show that even there, where the aim is not abstract theoretical generality but stopping a specific disease, we need a much deeper basic understanding, and less reliance on sampling and statistical analysis, than we have now.

Wednesday, September 18, 2013

Plants and their shocking immune response

Most plants can't move, so they can't escape by running away when under attack.  Which is not to say they can't defend themselves -- they do have thorns and poisons, they can emit noxious odors or taste unpleasant, they have tough bark or leaves, and so forth.  And when pathogens descend, plants can mount a quick response.  Most have evolved gene-for-gene pathogen-specific responses, involving interactions between the products of the plant's resistance (R) genes and proteins coded by the pathogen's avirulence (Avr) genes. We discussed this in our book The Mermaid's Tale, because it shows how very different species have somewhat similar ways to combat a similar threat.

Foxglove produces chemicals that can be toxic if consumed; Wikipedia

It's a fascinating dance of co-evolution, actually.  The plant's R genes code for receptors to the proteins produced by the pathogen's Avr genes: the pathogen releases the Avr product into the plant at the site where it invades, and the Avr product interacts with the product of the R genes, induced in the plant by the presence of Avr proteins.  This in turn activates a cascade of plant defenses, including the oxidative burst, which initiates the hypersensitive response, or rapid cell death at the site of infection.  This is beneficial (to the plant) because when the cells surrounding the site of attack die, the invading pathogen is deprived of nutrients and the opportunity to grow or spread.  It also induces the production of antimicrobial proteins.
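In outline, the gene-for-gene interaction is a two-input switch: defenses fire only when a plant R product meets its matching pathogen Avr product.  A toy truth-table sketch in Python (gene labels are hypothetical, and real systems involve receptor families and indirect 'guard' recognition rather than simple matching):

def infection_outcome(plant_R, pathogen_Avr):
    # Gene-for-gene model: resistance requires at least one matching
    # R/Avr pair; here genes are labeled by the pair they belong to.
    if plant_R & pathogen_Avr:
        return "resistant: R product recognizes Avr product, defenses fire"
    return "susceptible: no recognition, infection proceeds"

print(infection_outcome({"pair1", "pair3"}, {"pair1"}))  # resistant
print(infection_outcome({"pair1", "pair3"}, {"pair2"}))  # susceptible
# Co-evolution in one line: pathogens are selected to lose or alter Avr
# genes and escape recognition; plants to acquire R genes that restore it.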

Pathogen attack can also induce non-specific responses, that is, responses not associated with R or Avr genes.  The attack may induce the plant to crosslink its already rigid cell walls, chemically toughening them to better resist the invading pathogens, and the oxidative burst and the hypersensitive response it triggers may be non-specific as well.

Plants have also evolved at least one systemic response to pathogenic attack, called systemic acquired resistance (SAR), analogous to the innate immune response in animals.  In SAR, a pathogen attack induces a state of heightened resistance throughout the plant, which helps to protect it against subsequent attack by the same pathogen as well as a wide range of additional potential attackers.  This occurs even if the pathogen is only attacking part of the plant.  Once induced, this heightened state can persist through an entire season, just in case the bastards dare to try again!  SAR involves the accumulation throughout the plant of a set of proteins that confer increased nonspecific resistance, probably through antimicrobial activity.  The roots of plants are often colonized by nonpathogenic bacteria, and plants can respond with a process called induced systemic resistance, which induces the release of defense-related proteins.

How do they do that?
It has long been known that plants communicate systemically, sending long-distance signals when under attack.  In that sense the plant is a unified organism, not just a set of cells all spread out.  Now, a new paper on plant immune responses in the 22 August issue of Nature, "GLUTAMATE RECEPTOR-LIKE genes mediate leaf-to-leaf wound signalling," by Mousavi et al., reports on a new understanding of how this is done.

Plant systemic responses rely on a family of regulatory lipids called jasmonates, which are known to quickly build up in wounded tissue.  But how do they then send danger signals?  Plasma membrane depolarization -- a chemoelectric change -- has been observed in various plants when a tissue is wounded, and it's known that this can stimulate the expression of jasmonate-regulated genes.  So, Mousavi et al. monitored the electric response of Arabidopsis plants (members of the mustard family) under varying conditions.

On herbivore attack, levels of the plant hormone jasmonate increase, triggering defence responses. Mousavi et al. show that leaf injury, caused by herbivory or mechanical wounding, induces the transmission of electrical signals that are generated by the activity of glutamate-receptor-like (GLR) ion channels. These signals induce the formation of jasmonate at local and distant sites in the plant. Source: Christmann and Grill, Nature, 21 Aug 2013

And it turns out that plants don't respond when larvae crawl on the leaves, or to simple touch, but they do when the larvae begin to feed.  To determine whether the response is to the chemical elicitors the larvae inject into the leaf, rather than to the simple fact of wounding, Mousavi et al. wounded the leaves mechanically, and this, too, elicited a response, which they call "wound-activated surface potential changes (WASPs)".  And the response was rapid, traveling at a top speed of 9 cm per minute, and induced expression of JAZ10, an indicator that the jasmonate pathway has been activated.  Further, the authors tested for a direct link between the jasmonate pathway and electrical activity by wiring up a leaf, sending current through it, and monitoring the result -- surface potential changes away from the site of injection.
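To put that top speed in perspective, a little arithmetic helps.  The leaf-to-leaf distance below is our own illustrative number, not from the paper:

# Back-of-envelope timing for a 9 cm/min wound signal
speed = 9.0                      # cm per minute, from Mousavi et al.
distance = 4.5                   # cm, hypothetical leaf-to-leaf distance
print(f"{distance} cm at {speed} cm/min -> {distance / speed * 60:.0f} seconds")

# A slow unmyelinated animal nerve fiber conducts at roughly 0.5 m/s
nerve = 0.5 * 100 * 60           # converted to cm/min
print(f"nerve fiber: {nerve:.0f} cm/min, ~{nerve / speed:.0f}x faster")

So the plant's electrical warning is leisurely by nervous-system standards -- a few hundred times slower than even a slow nerve fiber -- but presumably far faster than a hormone could diffuse through centimeters of tissue, which would take hours.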

The authors determined that hundreds of genes were up-regulated upon wounding of the plant (or current injection), and genes that code for proteins involved in ion transport through the cell membrane, such as ion channels or pumps, had previously been found to affect jasmonate signalling.  The GLUTAMATE RECEPTOR-LIKE (GLR) genes code for ion channels known to be involved in various aspects of plant development and responses to stress.  Arabidopsis has 20 GLR genes; Mousavi et al. found that mutations in several of these genes reduced the surface potential changes upon wounding, and thus the expression of jasmonate regulator genes, but how GLRs are activated by these mechanical insults isn't well understood.

Mousavi et al. conclude that they've identified ion channel genes, GLRs, involved in plant defense through electrical signaling.  Further,
Our results now show that GLRs control the distal wound-stimulated expression of several key jasmonate-inducible regulators of jasmonate signalling (JAZ genes) in the adult-phase plant. Finally, GLRs are related to ionotropic glutamate receptors (iGluRs) that are important for fast excitatory synaptic transmission in the vertebrate nervous system. They and their plant relatives may control signalling mechanisms that existed before the divergence of animals and plants. If so, a deeply conserved function for these genes might be to link damage perception to distal protective responses.
The animal/plant link
The plant signal travels from the affected leaf to others, much as (and using similar chemoelectric mechanisms) neuronal and other inter-cellular signals travel in animals.  There are two classes of glutamate receptor in animals: ionotropic glutamate receptors, which form ion channels that open when the receptor binds glutamate, and metabotropic glutamate receptors, which indirectly activate ion channels when the receptor binds glutamate (glutamate is an amino acid).  Glutamate receptors in animals are found primarily in the central nervous system and are involved in neurotransmission and, it seems, response to stress.

The points here are many.  That we have these same receptors shows deep evolutionary connections between plants and animals.  These are old mechanisms involving similar, and hence very old, genes and cellular activities.  The information transferred by these sorts of means is highly varied.  Ion channels are adjustable pores in the surfaces of cells, which open or close depending on conditions to adjust the concentrations of ions (charged molecules like calcium or potassium) between the outside and inside of the cell.  The resulting changes can alter the ionic difference across the cell's membrane locally, or the signal can travel along the cell (as in muscle and nerve in animals).  In plants there are no muscles or nerves, but the relevant mechanisms are used to adjust cell behavior -- and to transmit information about the threat to other, as yet unaffected tissues in the plant, so they can anticipate an attack and defend themselves.
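The electrical side of this is just chemistry: an ion concentration difference across a membrane corresponds to a voltage.  The Nernst equation gives the equilibrium potential for a single ion species; the potassium concentrations below are generic textbook-style values chosen for illustration, not measurements from any plant:

import math

def nernst_potential(conc_out, conc_in, z=1, temp_k=298.0):
    """Equilibrium potential (volts) for one ion: E = (RT/zF) ln([out]/[in])."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return (R * temp_k) / (z * F) * math.log(conc_out / conc_in)

# Illustrative K+ gradient: 5 mM outside the cell, 140 mM inside
e_k = nernst_potential(5.0, 140.0)
print(f"E_K = {e_k * 1000:.0f} mV")   # about -86 mV at 25 C

Opening or closing channels shifts the membrane potential toward or away from such equilibrium values, and a propagating wave of those shifts is, in outline, what a surface potential change is -- in a nerve, or in a wounded Arabidopsis leaf.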

The authors speculate that "a deeply conserved function for these genes might be to link damage perception to distal protective responses."  But we are observing the mechanism countless millions of years after it arose, and there's really no way to know from today's world what the original function was.  However, the general usages of ion channels suggest, to us, that the original use was 'simple' inside-outside adjustments, and that the warning usage came later.  And plants have their separate mechanisms for immune defense, which we described at the top of this post.

The plant behaves as an integrated organism, but communication from one part of the plant to another is not the same as having a central nervous system.  We might try to imagine what it feels like to be a plant (normally or when under attack), but even if the plant is a unitary creature, its sensory awareness doesn't have to be 'conscious' or centralized for it to be an example of the integration of many parts into a whole.

No matter how you might want to interpret it, this is but one more instance of similar widespread phenomena with diverse uses.

Tuesday, September 17, 2013

The US Health Disadvantage

 Mobilization of an unprecedented kind is now necessary in the United States. It requires a campaign to remove the public veil of ignorance about the evidence.
So states the public health Policy Forum in the Aug 30 issue of Science ("Confronting the Sorry State of U.S. Health," Bayer et al.*), which raises some important questions about health and sickness in the United States.  The authors are commenting on a recent report published by the U.S. National Research Council and Institute of Medicine, "US Health in International Perspective: Shorter Lives, Poorer Health" (January 2013), which asks why the US is among the richest nations in the world and yet the health of its people is far down the list.  The report is the outcome of 18 months of work by a panel charged with exploring the problem and identifying causes and solutions.

The panel compared health outcomes of Americans with those of 16 other wealthy countries.  They found that Americans have had a shorter life expectancy than people in comparable countries for many years, and that the differential is growing, especially for women.  The health disadvantage affects everyone up to age 75; it's worse among poorer Americans but exists even among the wealthy, and it includes multiple diseases, risk factors, and injuries.

It's worth quoting the panel's findings in detail.
1. Adverse birth outcomes: For decades, the United States has experienced the highest infant mortality rate of high-income countries and also ranks poorly on other birth outcomes, such as low birth weight. American children are less likely to live to age 5 than children in other high-income countries.
2. Injuries and homicides: Deaths from motor vehicle crashes, nontransportation-related injuries, and violence occur at much higher rates in the United States than in other countries and are a leading cause of death in children, adolescents, and young adults. Since the 1950s, U.S. adolescents and young adults have died at higher rates from traffic accidents and homicide than their counterparts in other countries.
3. Adolescent pregnancy and sexually transmitted infections: Since the 1990s, among high-income countries, U.S. adolescents have had the highest rate of pregnancies and are more likely to acquire sexually transmitted infections.
4. HIV and AIDS: The United States has the second highest prevalence of HIV infection among the 17 peer countries and the highest incidence of AIDS.
5. Drug-related mortality: Americans lose more years of life to alcohol and other drugs than people in peer countries, even when deaths from drunk driving are excluded.
6. Obesity and diabetes: For decades, the United States has had the highest obesity rate among high-income countries. High prevalence rates for obesity are seen in U.S. children and in every age group thereafter. From age 20 onward, U.S. adults have among the highest prevalence rates of diabetes (and high plasma glucose levels) among peer countries.
7. Heart disease: The U.S. death rate from ischemic heart disease is the second highest among the 17 peer countries. Americans reach age 50 with a less favorable cardiovascular risk profile than their peers in Europe, and adults over age 50 are more likely to develop and die from cardiovascular disease than are older adults in other high-income countries.
8. Chronic lung disease: Lung disease is more prevalent and associated with higher mortality in the United States than in the United Kingdom and other European countries.
9. Disability: Older U.S. adults report a higher prevalence of arthritis and activity limitations than their counterparts in the United Kingdom, other European countries, and Japan.
It's not all bad.  If an American reaches 75, s/he has a higher survival rate thereafter; the US has higher cancer screening and survival rates; blood pressure and cholesterol are better controlled; we're more likely to survive a stroke; we smoke less and our average household income is higher; suicide rates aren't higher than in comparison countries (faint praise, that); and the health of recent immigrants is better than that of people born here.  Otherwise, even though health care spending per capita is much higher in the US than in the comparison countries, health outcomes here are significantly worse.  Though, of course, we're ahead of the curve in some respects -- obesity rates, e.g. -- with other countries fast catching up.

So, why the dismal picture in the US?  The panel considered this at great length (it's a 400-page document).  You'd think it might be because we have more people without access to health care than other countries, but the disadvantage holds even for those with access to care.  We smoke and drink less, but eat more.  We have more accidents and more guns.  Our educational attainment is lower than in other countries, our poverty rates and income inequality are higher, and our social mobility is lower.  And, the panel also points out, we have a less effective social safety net.  But even those of us with "healthy behaviors" are more likely to get sick, and to have accidents, than our counterparts in other wealthy countries.

So, understanding what's behind the sorry state of health in this country is not straightforward.  Indeed, the panel seemed sorely tempted to describe unhealthy social and environmental conditions in the US, and ascribe our health conditions to the whole sorry mess.
Potential explanations for the U.S. health disadvantage range from those factors that are commonly understood to influence health (e.g., such health behaviors as diet, physical inactivity, and smoking, or inadequate access to physicians and high-quality medical care) to more “upstream” social and environmental influences on health (e.g., income, education, and the conditions in which people live and work). All of these factors, in turn, may be shaped by broader national contexts and public policies that might affect health and the determinants of health, and therefore might explain why one advanced country enjoys better health than another.
That's of course not very helpful in policy terms because public health measures must be directed at something specific, like cleaning dirty water or vaccinating against disease. The situation reminds us of too many attempts to explain complex disease with simple, enumerable factors -- for example, we dream of simple genetic causes, but in fact it's multiple gene and environment interactions.  Here, the Affordable Care Act won't be the answer, nor would gun control be, nor enforcing seat belt laws, nor banning supersize drinks or increasing the availability of fresh fruits and vegetables in poor neighborhoods.  It's complicated.  And surely a combination of many factors, social and environmental.
 
The panel recommends, though, more data collection, more refined analytic methods and study design, and more research.  They recommend focusing on children and adolescents, because early life experiences and habits can affect the whole life span.  They also recommend that research address the entire life course rather than more localized cause and effect.  But the study urges that the situation is so critical that action must be taken while research is ongoing, and they provide a long list of actions they believe should be taken, from increasing the use of motorcycle helmets to increasing the availability of public transport to improving air and water quality and increasing the proportion of adolescents who don't use illegal drugs.  More generally, they recommend:
(1) intensify efforts to pursue existing national health objectives that already target the specific areas in which the United States is lagging behind other high-income countries, (2) alert the public about the problem and stimulate a national discussion about inherent tradeoffs in a range of actions to begin to match the achievements of other high-income nations, and (3) undertake analyses of policy options by studying the policies used by other high-income countries with better health outcomes and their adaptability to the United States.
But what kind of issue is this?  A public health issue?  Public policy?  Economic, educational?  Here we come to a fundamental question of causation. What, we might ask, causes AIDS? Is it HIV?  Needle sharing?  Poverty?  A confluence of factors at all levels?  Epidemiology has long struggled to take multi-level causation into account, acknowledging the role of many different kinds of factors including biological and social determinants (see Nancy Krieger's old but seminal and still good 1994 paper on this, "Epidemiology and the web of causation: has anyone seen the spider?"), but once the web extends into social causes, the field of public health is pretty much stymied when it comes to fixing things.  And throwing this into the political arena is a sure recipe for a lot of grandstanding but not much else.

Is more research really needed into why Americans are sicker than our counterparts in other wealthy countries?  No doubt it is a serious problem, and very costly in both human and monetary terms.  But of course the request will be for more mega-scale, long-duration highly technological studies--more grant money.  You'd expect us to say that.  But is the plea for more funding a reflex or is it really the answer? 

More research does not obviously seem to be the answer, except for the many small factors that would be found.  We know enough to know that the answer is going to be complicated, and the causal factors changeable.  Indeed, we surely will be found to be leading the pack in some measures, and other countries will catch up.  And whether the fix is deemed to be personal behavior or political, or a mix of many approaches, once we go beyond requiring vaccines or seat belts, we are masters of none of them.  And they're always changing.  Perhaps research money should be going into things like how to improve health education (that is, how to get people to do things they'd rather not do, like exercise or eat less fat).

If history is any guide, we're betting that when another such study is done in the future, we'll be better than we are now in some measures and worse in others.  And we won't know why.  And we'll say that 'more research is needed'.  Cardiovascular disease rates have risen and fallen over the past 60 years or so, and we still don't know why -- and that's just one disease.  A serious question is how to deal with phenomena that are so changing, and so subtly complex, that we have to keep surveying to understand them.  Could there be some better way, a different approach?
 

---------------------
*Thanks to Bob Ferrell for bringing this to our attention.

Monday, September 16, 2013

Teacher's Anonymous (a new organization to help the troubled)

Drug addiction can be a serious disease.  It disorders one's life in so many ways.  Drinking may seem to many to be just fun; it may make you friends and even gain you respect (if you can hold your liquor better than the next fellow).  But to many others it can be tragically destructive.

The severity of alcohol addiction and its effects on people's lives was the impetus behind the founding of Alcoholics Anonymous (AA). It has worked for many people, with a structured path of self-help, plus the support of other AA members whenever the urge to drink seems overwhelming. A major part of AA is self-recognition and admission.  Covering up and pretending are obstacles to recovery from the horrible addiction and its toll.  At least within the protective help of the organization, you are not anonymous.  A drinker seeking help goes to an AA meeting, stands up, and publicly announces:  "I'm [his/her name], and I'm an alcoholic!"  Confession is the first step in redeeming the soul.

It turns out that this success has stimulated other organizations to help those with problems like drug use, over-eating and so on.  It's a path to recovery for many who are plagued by an annoying compulsion.  And now there's a new group forming.

Teacher's Anonymous (TA)
One of the worst things that can afflict a university professor is the urge to actually teach.  Being a good teacher is the foremost criterion for being denied tenure.  In the same way that social drinking is encouraged, but excess is discouraged strongly and shamed, nominal teaching is encouraged, to give a patina of concern for students, but putting any real effort into it is as bad as alcoholism (or maybe even a sign of alcoholism, because why else would a sane person do it?).  It can cost you your job and be a disaster to your life.

The swath of destruction that follows those who actually play by the rules and don't pay any attention to teaching, that is, the path of relentless pandering after grants, forcing oneself to write more papers than one has content to put in them, bragging to the news media about one's every minor thought, and so on, is a path of self-damnation.  It impedes having a sane life.  But it has one redeeming feature: it puts food on the table (even if one's spouse and kids are long gone from the table, owing to lack of attention).

Now, just as with drinking, here and there one runs across someone with the guts and talents and strength of character to actually like teaching and to be good at it.  It's an art and a skill, and to do it right takes a hell of a lot of work.  That's why it is so suicidal in our reward system--that is, because it doesn't reward Deans and Chairs with lots of overhead money to spend.

But we happen to know a few truly remarkable people who are afflicted with this set of talents.  They're societal gems, contributors to the well-being of students rather than just of themselves and their administrations.  They may be rare, and they may be, well, sick in the way we describe.  Yet the very best of them also manage to do original, thoughtful, creative research and scholarship (yes, believe it or not). Though few in number, such people do actually exist.  We think they deserve to be 'outed', but we shouldn't do it ourselves.  They need to do it for themselves.  That is what the new organization, Teacher's Anonymous, is for.  Indeed, we just heard that sort of confession from our MT confrere, who epitomises the problem:

"I'm Holly Dunsworth, and I'm a teacher!"

Let's give a hand to this unusual confession, and the other TA members like her!

Friday, September 13, 2013

We are not the boss of natural selection. It is unpwnable.

We came, we saw, we conquered natural selection.
It should come as no surprise that I didn't actually speak to David Attenborough* recently. I'm just here writing about what I read about what he said about human evolution...
"We stopped natural selection as soon as we started being able to rear 90-95% of our babies that are born. We are the only species to have put a halt to natural selection, of its own free will, as it were," he tells this week's Radio Times. [source: The Guardian]
I think a lot of these conversations come about because people like to ask people like David Attenborough (and even me!) what will happen to us in the future. And instead of saying that it's impossible to predict the future of evolution (because it depends on so many probabilistic, seemingly random, and maybe truly random events, big and small), many public intellectuals please the crowd by offering some kind of speculation. Often the questions are about whether we'll grow tails or larger brains; whether our brains will shrink because of computers; whether our wisdom teeth, pinky toes or appendix will completely disappear. And often the experts play along with tongue in cheek, or sometimes seriously, building future scenarios.

The problem with Attenborough's answer--which might be aimed at avoiding this sort of speculation about the future--is that natural selection cannot possibly actually stop, not even at the hands, the hearts, the minds of humans. We're pretty amazing, but not that amazing.

Jeepers. What are we even looking at here? (source)
To start, natural selection doesn't just enable the most obvious, visible, awesome changes to our bodies over time.  To be sure, natural selection explains adaptations of all sorts--from fur to feather, prehensile tail to flagellum, macro to micro. (Ahem... as long as we're sure whether what we're covered with, dangling from or whipping around is actually an adaptation.)

But natural selection also explains how we are here, alive and working well enough to be here, alive. In that sense, in its purifying sense, natural selection's at work, if you will, on practically everything about us and on practically everything about everything else that's alive now or that's ever been alive. Natural selection is always happening. Always. Culture or no culture... and because of culture!

Furthermore, even if we could consider all environments (and our cultural abilities to adapt to them) to be equal and constant, there will always be both new combinations of old genes and new mutations in each and every person (100+ brand new base changes to each person’s code compared to mom and dad)--some of which will cause human lives to end before they've passed on those combinations and those new mutations.
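That "100+ brand new base changes" figure is easy to sanity-check with back-of-envelope arithmetic.  The per-site mutation rates below are commonly cited ballparks (roughly 1-2 x 10^-8 per base per generation), not numbers from any particular study:

# Rough count of de novo mutations per person per generation
diploid_genome_bp = 6.4e9                  # ~3.2 Gb x 2 copies
for rate in (1.0e-8, 1.5e-8, 2.0e-8):      # per-base, per-generation ballparks
    print(f"rate {rate:.1e} -> ~{rate * diploid_genome_bp:.0f} new mutations")

That lands in the dozens-to-low-hundreds range per newborn, consistent with the figure in the text -- and every one of those mutations is raw material that selection can, in principle, act on.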

That each of us is unique demonstrates that evolution is always occurring, even though scientists prefer to think of it as change over time at the population level.  Regardless, and this is very important, natural selection allows for most of this perpetual change in lineages and in populations, which is why so much life on the planet is humming and thrumming away, and has been for the last 4 billion years.

http://evolutionpsa.tumblr.com/
However, any genetic-based infertility, any genetic condition that directly or indirectly inhibits procreation, or any genetic disorder or disease that ends a person's life before they pass it on will disappear due to natural selection, along with their entire genome, including everything that had little to do with the early death or infertility.  So regardless of medicine and birth control, there will always be lineages that are more prolific than others (i.e. differential reproductive success) and there will always be lineages that disappear -- both due to constant natural selection.  The same is true of differential reproduction due to constant genetic drift -- that is, chance change in a gene or trait’s frequency over time, or chance differences in a gene or trait’s frequency between populations, due to differential reproduction and other evolutionary processes occurring differently in those populations.  Like selection, drift is always occurring, but it can escalate in intensity, for example after a tsunami.
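Drift's "chance change in frequency" is easy to see in a toy Wright-Fisher simulation -- the standard textbook model of drift, not anything specific to this argument; the population size and starting frequency are arbitrary choices:

import random

def wright_fisher(n_individuals=100, p=0.5, generations=200, seed=1):
    """Track one neutral allele as each generation is a random binomial draw."""
    rng = random.Random(seed)
    n_copies = 2 * n_individuals           # diploid: 2N gene copies
    count = int(p * n_copies)
    for gen in range(generations):
        # Each of the 2N copies in the next generation is drawn at random
        # from the current gene pool -- no selection involved at all.
        count = sum(rng.random() < count / n_copies for _ in range(n_copies))
        if count == 0 or count == n_copies:
            return gen + 1, count / n_copies   # allele lost or fixed by chance
    return generations, count / n_copies

gens, freq = wright_fisher()
print(f"after {gens} generations, frequency = {freq:.2f}")

Run it with different seeds and the same allele sometimes fixes and sometimes vanishes, purely by chance -- and faster in smaller populations, which is the "escalate in intensity" point: a tsunami that shrinks a population makes every generation's draw noisier.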

Teddy Roosevelt conquering a moosevelt. (via @HistoryInPics)
So, it may be true that, generally speaking, humans today have more egalitarian reproductive "success" compared to our ancestors who were arguably more vulnerable to nature red in tooth and claw.

And it may be true that the odds are greater for any one human to produce offspring that go on to bear their own offspring than for an individual in another, and maybe many many other, species—those that are, arguably, more vulnerable to nature red in tooth and claw.

And it may be true that because we birth relatively few offspring -- compared to, for example, octopuses that bear thousands and thousands at once -- there's not a whole lot of difference between Alex's fitness, having 3 kids and 3 grandkids, and Alice's, having 3 kids and 2 grandkids.

And furthermore, it may be true that compared to innumerable species, we have more lineage continuance and less lineage extinction due to fewer juvenile deaths per birth thanks to opposable thumbs, throwing ability, weapons, medicine, extended family, extended love, extended memory, food storage, food production, cooking, sanitation, and so much more that contributes to or falls under the “culture” umbrella.

Classic examples of recent and potentially currently occurring natural selection that people call on in discussions like this are lactose tolerance and malaria resistance. Both are usually used to argue that natural selection has not stopped. But leaning on them can give the impression that we know of only two ways humans are presently adapting. There are others, like amylase and immune system genes, which are potentially powerful too, but again, examples of human biological adaptation in present or near-present times are not exactly overwhelming us. No matter! Because there are, unfortunately, myriad mutations (individually rare, but not so rare in total), both de novo and inherited, that are selected against every day, all around us, too numerous to list. This is natural selection.
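Classical population genetics even quantifies that steady removal: under mutation-selection balance, a deleterious allele persists at a low equilibrium frequency even as selection culls it every generation.  The mutation rate and selection coefficient below are illustrative textbook-style values, not estimates for any particular condition:

import math

def equilibrium_freq(mu, s, recessive=False):
    """Approximate equilibrium frequency of a deleterious allele under
    mutation-selection balance: mu/s if dominant, sqrt(mu/s) if recessive."""
    return math.sqrt(mu / s) if recessive else mu / s

mu, s = 1e-6, 0.1   # per-generation mutation rate; 10% fitness cost
print(f"dominant:  {equilibrium_freq(mu, s):.1e}")          # ~1e-5
print(f"recessive: {equilibrium_freq(mu, s, True):.1e}")    # ~3e-3

The allele never disappears (mutation keeps resupplying it) but never spreads either (selection keeps removing it): selection operating constantly without producing any visible "adaptation," which is exactly the point of the paragraph above.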

So it's just wrong to say that natural selection has stopped unless to "stop" is a relative term. And even then, saying natural selection has relatively stopped or has effectively stopped in humans is making an assumption about what's good for humanity (that will sadly never evolve) and what's bad (that we're so busy propagating). It's also making an assumption about the relative strength of selection for forming adaptations out there in the "wild"--one that isn't grounded in firm understanding or consensus among many thoughtful scientists.


If we're going to consider natural selection to be so strong an evolutionary mechanism, then we have to consider mutation and genetic drift to have the potential to be just as strong -- depending on the snapshot in time/space/organism, and there have been many of these snapshots, to put it mildly.  Further, to ignore the potential for gene flow to spread and to create new combinations of genes with the potential to create new phenotypes that fail or flourish (i.e. under natural selection and/or genetic drift) in different circumstances is to lack an imagination -- and a guttery one at that.

And these assumptions about selection's great strength relative to other mechanisms of evolution illustrate our biases and fashions, as well as the limits of our observations, our methods, and Science.  These major issues in evolutionary biology are real, and they're certainly not respected by statements like "We stopped natural selection."  For a public that often mistakenly equates evolution with natural selection, it’s just reinforcement.  And, no, evolutionary understanding isn’t just about being academically correct; many of us strongly believe that how we understand or misunderstand evolution affects how we think about social inequality, race, other organisms, caring for the environment, etc., and, therefore, how we behave regarding those issues.

I’ve written a few posts that speak to the importance of evolutionary understanding.
And, granted, Attenborough admits he’s privileged, so this isn’t a comment aimed at him.  But isn’t it just a bit soon to be including the entire human species in such a healthy, well-nourished, low-youth-mortality, long-life-expectancy state of nature that has “stopped natural selection” on ourselves?  I think we have a long way to go before such a definition of humanity can be granted to all of us.  And even if or when we, as a whole, achieve such a comfy state, we still won’t be able to say that natural selection has stopped, for the reasons I discussed above, and for others that I didn't.

What’s more, if we did achieve the kind of wizardrous skills that would render us impervious to natural selection, that couldn’t have happened until recently.  So it’s probably a bit premature, based on our limited observation, to conclude that that’s where our species is.

I'm right there with Attenborough in sharing his astonishment at how we affect our own evolution, and that of other species, in ways that might not have unfolded if our cognition and culture had not evolved first. In fact, I think it's so fascinating that I'm writing a book about it right now. But if we claim that human cognition and culture have ended natural selection, we're denying our place in a universe that we cannot completely control. We are not immune to the effects of the world around us, no matter how masterful we are at manipulating it. Climate, weather, geological processes, disasters, infectious diseases, parasites, symbionts (we're full of 'em and covered in 'em!)… these are just some of the things that affect our evolution through natural selection (and other means), as well as the evolution of the species we depend on for food and of all the species besides us that our food depends on. Evolution that's occurring just as constantly in everything around us and on us and inside us as it is in our own genomes will directly and indirectly affect our future evolution.  There are few blanket rules in biology, but here are two: Evolution of an organism is always happening and always will be. Evolution in one organism is affected by evolution in another.

Conquering a sperm whale.
So, sure! I can speculate about our future just like anyone else. Here’s how natural selection will affect human evolution at some point in the future. It's not so much a tale of our interconnectedness with other species as a tale of our interconnectedness with the planet. Everyone knows that heat is bad for sperm production. It’s possible that the earth will eventually get so hot (thanks to our cognition and culture???) that sperm production ceases in many men living in the hottest parts of Earth and that it only persists in the men who live in cooler regions and in the men with mutations for overcoming the obstacle. As a result, we’ll lose lineages evolving in the tropics and maybe all lineages. And if the warming occurs quickly and uniformly, and if there are no mutants already alive who can make sperm in the heat, global warming will certainly cause human extinction. There. See how easy it is to speculate about future human evolution? You can’t prove me wrong.

And, like our speculation about the evolutionary future, many of our hypotheses for the evolutionary past are nearly as unpwnable as natural selection and evolution are.

***

Thanks to Barbara J. King for sparking me to think about these things. Her reflection, including some of my input, is up on her blog at NPR. 

*I don't know him, but like many of you, I have admiration and maybe even a little (or a lot of) affection for David Attenborough. So this isn't about reacting to a popularizer of science, as a person. These are just my thoughts about something he said about how evolution works. This isn't about Attenborough, it's about us.



Thanks to the folks at io9 for reposting this on their site! http://io9.com/we-are-not-the-boss-of-natural-selection-it-is-unpwnab-1325126849