Friday, June 29, 2012

Boning up in a hearty way on what to eat!

A June 25 piece in The Atlantic about a paper published in the current issue of the cardiology journal Heart (Li et al.) warns that calcium supplements may do more harm than good.  This would be but one of many times that some standard, obviously good, recommendation for better health proved later not to be true or accurate, even though all such recommendations (even the basic food pyramid!) started out as the results of supposedly good, large-enough studies.

Dietary calcium, the paper reports, is associated with lower risk of cardiovascular disease, but calcium supplements are correlated with increased risk of heart attack.  The authors surmise that this is because dietary calcium trickles into our systems throughout the day while blood levels of calcium from supplements spike once or twice a day, causing too much of the mineral to be deposited in the body at a time. 

Li et al. write that epidemiological studies of the risks and benefits of dietary calcium intake have reported inverse associations between intake and risk of high blood pressure, obesity and type 2 diabetes.  Three prospective studies they cite found that dietary calcium intake was significantly inversely correlated with risk of stroke, and an additional study showed an inverse correlation between dietary calcium and death from heart attack. 

But there are several rather obvious flags to raise: It is notoriously difficult to assess diet, and in particular to determine the effects of single dietary components independently of other nutrients, because foods are never consumed in isolation and quantities are frequently under- (or over-) reported.  And the Li et al. study used a self-administered food questionnaire, based on recall of food eaten in the 12 months before the study participants were recruited.  This means that it relied on memory of rather crude quantities in rather crude categories (presumably asking things like how many times a week you drink milk, or eat broccoli, or cheese, and whether you regularly took vitamin/mineral supplements in the last 4 weeks -- data on supplement dosage were not collected). 

For many reasons, including relying on recall and assumptions about the kinds of data that should be collected and so on, dietary studies are far too often difficult to interpret -- and difficult to replicate.  And in Li et al., food data collected at the beginning of the study were assumed to apply for its entire 11-year duration.  In addition to the difficulties of measuring how much calcium people are actually consuming, it's possible that there are systematic differences in the diet and/or lifestyle of people taking supplements vs those getting their calcium only from their diet.  For these and other reasons, dietary studies, including the Heart calcium study, should always be approached with healthy skepticism! That said, let's look at this study in more detail. 

Physicians commonly recommend calcium supplements to elderly patients, particularly to postmenopausal women, to reduce risk of osteoporosis and sometimes to improve cholesterol levels.  Hundreds of millions of people take the stuff without any thought to risk.  The authors of the Heart paper prospectively followed 24,000 people in Heidelberg, aged 35-64 years at the start of the study, for 11 years to evaluate the associations of dietary calcium intake and calcium supplements with risk of heart attack, stroke and mortality from cardiovascular disease.
Results After an average follow-up time of 11 years, 354 MI and 260 stroke cases and 267 CVD deaths were documented. Compared with the lowest quartile, the third quartile of total dietary and dairy calcium intake had a significantly reduced MI [myocardial infarction, or heart attack] risk, with a HR [hazard ratio; here, the risk of MI in the third relative to the lowest intake quartile] of 0.69 (95% CI 0.50 to 0.94) and 0.68 (95% CI 0.50 to 0.93), respectively. Associations for stroke risk and CVD mortality were overall null. In comparison with non-users of any supplements, users of calcium supplements had a statistically significantly increased MI risk (HR=1.86; 95% CI 1.17 to 2.96), which was more pronounced for calcium supplement only users (HR=2.39; 95% CI 1.12 to 5.12). 
So, a moderately higher dietary calcium intake was significantly associated with lower risk of MI -- but the authors report that the association was non-significant in men, and stronger in women as calcium intake increased.  Presumably, the authors say, because of unmeasured confounding nutrients; maybe so, but maybe this is a way to wriggle out of uncomfortable non-results.  They found no association between total, dairy, or non-dairy calcium intake and CVD mortality.  And, they found a significantly increased risk of MI among users of supplements, with a larger effect for people who used supplements only, suggesting to the researchers that the supplements themselves are the important risk factor.  Other studies, it should be said, have not found this association.

Li et al. suggest that if the effect is real, it may be because of an effect of serum calcium levels on vascular calcification.  Other studies have found associations between serum calcium and "some predictive biomarkers of CVD, such as fasting insulin and lipid measures."  So, it's possible that this is real.

But keep in mind that 354 heart attacks in 24,000 people, over 11 years, is a risk of about 1.5%.  And if we think about annual risk rather than risk for the entire length of the study, that's roughly 32 heart attacks per year in the 24,000 subjects, or a risk of about 0.13% per year.  Don't know about you, but that strikes us as very low risk.  Even those taking supplements were in fact at low absolute risk.
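For those who like to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python.  The counts are the study's headline numbers; treating the reported hazard ratio as a simple multiplier on average annual risk is our own rough approximation, not the paper's method:

```python
# Back-of-the-envelope absolute risks from the study's headline numbers.
mi_cases = 354      # heart attacks (MIs) documented in the cohort
cohort = 24_000     # participants followed
years = 11          # average follow-up time

risk_total = mi_cases / cohort      # ~0.015 -> about 1.5% over 11 years
risk_per_year = risk_total / years  # ~0.0013 -> about 0.13% per year

# Crude excess for supplement users, treating the reported HR of 1.86
# as a simple multiplier on the average annual risk (an approximation).
hr_supplements = 1.86
excess_per_year = risk_per_year * (hr_supplements - 1)

print(f"overall: {risk_total:.1%} over {years} years, {risk_per_year:.2%} per year")
print(f"rough excess for supplement users: {excess_per_year:.3%} per year")
```

Even nearly doubling a baseline of about 0.13% per year adds only about a tenth of a percent of absolute risk per year.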

On a population level, however, if the results are real -- and remember there are serious limitations to this study -- this translates into real excess heart attacks in real people.  So, what are people taking calcium supplements for bone health supposed to do?  Chuck their pills because there's a slight chance they'll cause a heart attack, based on a rather iffy study, or keep taking them because many studies have shown supplements do improve bone density and decrease risk of fractures?  How significantly isn't completely clear, nor is it clear to us whether supplements improve bone health more effectively than dietary calcium.  Perhaps it would be most sensible to chuck the pills and eat more yogurt.

However, it is possible that the benefits of calcium supplements to bone health, in both relieved morbidity and reduced mortality, far outweigh the small, rather ephemeral or tentative excess heart disease risk.  Indeed, lower physical activity because of bone brittleness (which could have been ameliorated by calcium) would be expected to lead to more obesity and greater heart disease risk.  But then there is another finding, from a huge study, that such dietary supplements raised the risk of kidney stones.

This is the kind of dilemma epidemiological data like these often create for people having to balance risk and risk factors for many possible outcomes as they make dietary, lifestyle and medicinal decisions about their lives. And when the media report these kinds of studies irresponsibly and uncritically, this does a disservice to the public who must make these decisions.

And, in the end, we have another example of hardly any bang for a great lot of bucks.  Factors strong enough to worry about can, as a rule, be found with studies small enough to be affordable, sparing the funds for addressing really important issues.  But spending more for less is becoming so widespread in the way science is done these days that perhaps it's more the rule than the exception.

Thursday, June 28, 2012

The Queen of Sheba's every desire

If you accept the Bible's account, and the very unclear archeological evidence, after King David consolidated a central Middle Eastern kingdom, he was succeeded by the intellectual, peace-loving (but enslaving) King Solomon.  A fine BBC InOurTime discussion of this is worth listening to.

This took place around 950 BC and there is no direct documentary evidence beyond the Biblical accounts that were written centuries later and are not entirely consistent.  But one thing that was recounted was that King Solomon was widely admired and respected as a wise and temperate leader.  Visitors such as royalty and scholars came to see his kingdom for themselves and to meet the great Solomon.  Among those visitors was the legendary Queen of Sheba.  Powerful and important in her own right, Ms Sheba visited Solomon, bringing gifts of course, and she had a good visit.  But there is no serious evidence that the encounter was sexual.  The only, wholly enigmatic (at most) passage, in Kings, is that when Solomon gave her "all her desire, whatsoever she asked," she left satisfied. 

This makes for good movie fodder,  helps keep Biblical scholars employed, and gives preachers things to bore their congregations with on Sundays.  Whether it is literally true or not is disputed, but most scholars seem to think these characters did exist and were important in their time.

Now along comes....Science!  The latest Big Story of hyperbolic marketing is that new DNA sequence analysis has found what seem to be some Middle Eastern genetic variants in Ethiopians.  Unbelievably, this has been hyped by BBC News, and by the scientists involved, as supporting the Solomon-Sheba tryst--something not even clearly mentioned in the only source about these people that exists.  The lead author, an otherwise respectable scientist, argues that the estimated timing of the detected gene flow, 3000 years ago, 'fits' the Biblical story:
"By analysing the genetics of Ethiopia and several other regions we can see that there was gene flow into Ethiopia, probably from the Levant, around 3,000 years ago, and this fits perfectly with the story of the Queen of Sheba."
Lead researcher Luca Pagani of the University of Cambridge and the Wellcome Trust Sanger Institute added: "The genetic evidence is in support of the legend of the Queen of Sheba."
Well, that's within 2000 years of the alleged tryst so it must be true!  A real Night to Remember.

In an incredible segue, according to the BBC story, the scientists used this supposed finding to justify looking at (you guessed it again!) more DNA sequence from these populations, probably to include whole genome sequence so that not one lascivious tidbit is missed.  And what was the segue?  It's to reconstruct the history of this part of the world....60,000 years ago!   We guess this is to scope out the really first Shebanger.

Cost justification aside, there's no reason not to be interested in using DNA to reconstruct history--anywhere in the world.  There is no special justification for doing this in the Middle East except our own Middle (navel) East gazing.  DNA data are a powerful tool for reconstructing our population ancestry, to be sure.  But the whole thing is a non sequitur, because the reported--or should we say distorted--genetic finding is absolutely irrelevant to any Biblical story. And the authors know this very well.

Any geneticist worth his or her PCR machine would have to tell you that finding some evidence of gene flow between populations long inhabiting reasonably nearby parts of the world is no surprise.   If we know anything, we know that humans gradually emerged out of northeastern Africa tens of thousands of years ago.  Given human mating patterns between nearby local populations, especially along rivers and other places where populations would interact, genetic variation 'flows' from place to place.  And once large, complex, agricultural cultures arose, with their trade networks, navigation by boat or beast, and other social and political interactions, gene flow would if anything increase. 

Even if such exchanges were only a small fraction of all matings, if you look at enough DNA from enough people, you expect to see traces of them.  And the Queen of Sheba, if there ever was a Queen of Sheba, may well have visited King Solomon, if there ever was a King Solomon.  But to make some special story about it, as if it reflects anything other than normal human geographic variation, is to misrepresent the data and, of course, its import. 
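To see how ordinary, gradual gene flow leaves exactly this kind of signature, here is a toy island-model sketch in Python.  The migration rate and allele frequencies are made-up numbers for illustration, not estimates from the paper:

```python
# Toy island model: a local population receives migrants from a
# neighboring source population at a rate m per generation.
m = 0.001                        # assumed fraction of migrants per generation
f_local, f_source = 0.10, 0.60   # assumed allele frequencies (hypothetical)
generations = 120                # ~3,000 years at ~25 years per generation

for _ in range(generations):
    # standard one-way migration recursion: f' = (1 - m)f + m * f_source
    f_local = (1 - m) * f_local + m * f_source

print(f"local frequency after {generations} generations: {f_local:.3f}")
# -> ~0.157, up from 0.10: a shift detectable across many loci, with no
#    royal visit required -- just steady, low-level intermarriage.
```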

There's a point at which science should not be confused with Hollywood.  For anyone--scientist or journalist or journal--to portray this as any kind of evidence whatsoever for a particular not-even-Biblically-alleged liaison is pure BullSheba.

Wednesday, June 27, 2012

Paradigms are like glue

Bateman; John Innes website
Uh oh.  An iconic study of sexual selection bites the dust. Well, with a bit of help. Researchers at UCLA attempted to replicate a 1948 study by English geneticist Angus John Bateman, but without success.  The original study, said to be second only to Darwin's Origin of Species in importance to the study of sexual selection, suggested an evolutionary advantage for males who are promiscuous and females who are choosy in selecting mates.  Indeed, one of the foundational paradigms of the field is named after Bateman.  "Bateman's principle" is (quoting Wikipedia) "the theory that females almost always invest more energy in producing offspring than males invest, and therefore in most species females are a limiting resource over which the other sex will compete." As the work was the basis of much subsequent research in the field, the failure to replicate is big.

A June 26 story in ScienceDaily cites the lead author of the new study, published June 11 in the online edition of PNAS:
"Bateman's 1948 study is the most-cited experimental paper in sexual selection today because of its conclusions about how the number of mates influences fitness in males and females," said Patricia Adair Gowaty, a distinguished professor of ecology and evolutionary biology at UCLA. "Yet despite its important status, the experiment has never been repeated with the methods that Bateman himself originally used, until now.
"Our team repeated Bateman's experiment and found that what some accepted as bedrock may actually be quicksand. It is possible that Bateman's paper should never have been published."
In the original experiment, Bateman isolated selected fruit flies in jars, 5 males, 5 females, or 3 of each, and allowed them to mate freely.  He then determined the number of offspring of each mating, which he was able to do because he chose adults with distinct mutations, such as narrow eyes or curly wings or thick bristles, that appeared in the offspring and were markers for parentage.  From Mendelian expectations, and assuming that the mutations did not affect survival, one-quarter of the offspring should be double-mutants, carrying a mutation from each parent, half would be single mutants, and one-quarter would be mutation-free. But the proportions of offspring in each class found by the replication done by Gowaty et al. "departed strongly from Mendelian expectations." 

Fly with curly wing mutation
Gowaty says that while this identification technique was clever, it turns out it was also responsible for the study's fatal flaw.  For his identification purposes, the offspring that Bateman counted had to be double-mutants, but these flies turn out to be less viable.  Thus, Bateman "overestimated subjects with zero mates, underestimated subjects with one or more mates, and produced systematically biased estimates of offspring number by sex."  Indeed, he assigned more offspring to fathers than to mothers, which is why he concluded that male fruit flies produce many more viable offspring when they have multiple mates, but that females produce the same number of children that survive to adulthood whether with one mate or many.
In their repetition -- and possibly in Bateman's original study -- the data failed to match a fundamental assumption of genetic parentage assignments. Specifically, the markers used to identify individual subjects were influencing the parameters being measured (the number of mates and the number of offspring). When offspring die from inherited marker mutations, the results become biased, indicating that the method is unable to reliably address the relationship between the number of mates and the number of offspring, said Gowaty. Nonetheless, Bateman's figures are featured in numerous biology textbooks, and the paper has been cited in nearly 2,000 other scientific studies.
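A minimal simulation makes the bias easy to see.  In this sketch (ours, in Python; the 40% viability penalty on double-mutants is an arbitrary illustrative number), two parents each carry a distinct dominant marker, and a fraction of double-mutant offspring die before they can be counted:

```python
import random
random.seed(1)

N = 100_000              # offspring from a single sire x dam pairing
viability_double = 0.6   # assumed survival of double-mutants (illustrative)

counts = {"both": 0, "sire only": 0, "dam only": 0, "neither": 0}
for _ in range(N):
    from_sire = random.random() < 0.5   # inherits father's dominant marker?
    from_dam = random.random() < 0.5    # inherits mother's dominant marker?
    if from_sire and from_dam and random.random() > viability_double:
        continue                        # double-mutant dies before scoring
    if from_sire and from_dam:
        counts["both"] += 1             # the only class assignable to both parents
    elif from_sire:
        counts["sire only"] += 1
    elif from_dam:
        counts["dam only"] += 1
    else:
        counts["neither"] += 1

total = sum(counts.values())
for cls, n in counts.items():
    print(f"{cls:>10}: {n / total:.1%}")   # Mendelian expectation: 25% each
```

Since only double-marker offspring can be assigned to both parents, their deficit systematically undercounts matings, just as Gowaty et al. describe.
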
This is not the first time Bateman's study has been questioned -- in fact so many species violate the principle that many evolutionary biologists have suggested that it shouldn't in fact be considered valid.  But, as Gowaty points out, "Bateman's results were believed so wholeheartedly that the paper characterized what is and isn't worth investigating in the biology of female behavior."  And, she says, "Paradigms are like glue, they constrain what you can see.  It's like being stuck in sludge -- it's hard to lift your foot out and take a step in a new direction."

The term 'paradigm shift' was first proposed by Thomas Kuhn in his influential The Structure of Scientific Revolutions, first published in 1962.  He argued that most of the time we are doing 'normal science', designing studies and interpreting results in terms of an accepted theoretical framework.  We cling to that theory the way a religious zealot clings to a sacred text, resisting any breach of the accepted truth.  Difficult results are explained or rationalized away, until a truly better idea comes along.

In the paper, Gowaty et al. conclude
The paradigmatic power of the world-view captured in Bateman’s conclusions and the phrase “Bateman’s Principles” may dazzle readers, obscuring from view methodological weaknesses and reasonable alternative hypotheses...
Here we see not a new theory but a refutation of an iconic proof of the existing one; the clinging drive is the same. This is of course true in any field, but it may be even more true of beliefs we (including scientists, whom we idealize as being objective) hold about ourselves -- our inherent nature and our behavior, and their evolutionary origins and value.  What is, or what we choose to see, must be good or 'natural' because it evolved.  It is why so many stories, including fossil accounts (see Holly's award-winning post), lead to resistance or even resentment when they are questioned by those the majority typically denigrate as nay-sayers.

We as scientists like to believe we interpret our data free from ideology.  That is rarely if ever true.

Tuesday, June 26, 2012

Inherit the wind?

The famous 1955 play about the Scopes trial, Inherit the Wind, is based on a quote from the Bible that warns that if you make trouble in your own house all you'll inherit will be the wind -- "He who brings trouble on his family will inherit only wind, and the fool will be servant to the wise."

The Scopes trial was nominally about teaching evolution in schools, but it's not well understood that a main part of the underlying issue was not just stupid biblical literalism or bigotry, but objection (yes, by the prosecuting attorney -- and biblical literalist -- William Jennings Bryan) to notions of genetic determinism being used to justify racism and inequality.  I've discussed this elsewhere.

One of the most delicate issues in genetics has for more than a century had to do with intelligence.  There are two camps, those who think IQ (whatever it is, something we need not debate here!) is largely 'genetic' and unchangeable, and those who think it is malleable and responsive to environments.  As usual, the argument is largely political (for good reasons) but also ideological rather than scientific.

The brain is a complex organ that develops rapidly as countless genes are expressed in different tissues, in a highly orchestrated dance of interactions.  So IQ, if it's any sort of brain function, must be 'genetic' in the sense that the brain is a product of genetic interactions.  More socially important is that all these countless genes and their regulatory DNA sequences are subject to variation among individuals, and that must have some effects on the resulting IQ (again, let's not quibble here about whether experts actually know how to define or measure IQ).  Family correlations show evidence of this genetic variation, whether or not the amount of heritability is being correctly estimated, and whether or not the genetic effects do or don't determine the person's achieved IQ (e.g., depending on environment, schooling, etc.).

The most fiery aspect of the debate is whether specific genes can be identified that cause or predict IQ.  It's bad enough to assume that certain people or groups of people we might not like have low IQ and nothing can be done about it, but even worse if one can identify the responsible genes, because then something can be done about it, and the eugenics era proved that that something was dreadful.  There was a lot of pain but all our society inherited from it was wind, nothing of substance.

Intelligence and its genetics are involved in all sorts of social science discussions because they affect many aspects of society including investments to address systematic inequality and so on.  Some argue that we should not waste money on those with low-IQ genes, others that at least we should help those people raise their achieved-intelligence.

These are social and political issues, and yesterday we discussed the problems with the social sciences, so self-described, in terms of whether or how they are actually 'scientific,' or have failed to produce useful results, or whether the experts in this field know more than anybody else -- or can predict the future better than a chimp (untrained at that).  We criticized the persistent funding of go-hardly-anywhere research in those fields, as we've done for genetics, our own field.

A recent paper by Douglas Wahlsten sheds light on these issues.  It shows, from the actual scientific literature, how clearly and substantially we have so far failed to identify genes that make useful contributions to normal behavior.  Nobody doubts the clear evidence for major mutations in genes that can cause serious, clinically relevant behavior problems, and many of these involve mental impairment.  But even the understanding of many serious psychiatric disorders has eluded serious advancement based on genetics, genome mapping and the like.  Schizophrenia and autism are two cases Wahlsten mentions.  These traits are clearly familial and seem 'genetic', but GWAS have basically identified a plethora of different genes that make individually minor contributions, and no blockbuster single genes.  Even MZ (genetically identical, monozygotic) twins do not show dramatically high concordance.

These are apparently polygenic traits, affected by genes to be sure, but generally not by major, common genetic variants.  Among the best indicators of this is the paucity of really replicable gene-specific findings for these traits.  That means the original finding could be false, or the effect so weak or low in frequency that it simply doesn't recur in different samples.

Wahlsten specifically goes after IQ genetic arguments.  In essence he shows from the data that even huge samples would identify genes whose effects are less, or much less, than 1 point on the IQ scale (which is scaled to a mean of 100 and a standard deviation of 15).  And these are elusive if they exist--most effects may be even smaller.  More than that, the effect seems almost inevitably not to be an inherent one but depends on the variation in genes elsewhere in each individual's genome.  Even the total of things found by some of these tests would not account for even 1% of the variation in IQ among the subjects.
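To put '1 point on the IQ scale' in perspective, here is a minimal sketch in Python of the standard quantitative-genetic formula for the trait variance explained by a single additive variant, 2p(1-p)β²; the allele frequency and effect size below are hypothetical numbers chosen for illustration:

```python
# Fraction of IQ variance explained by one additive variant.
# Under a standard additive model this is 2p(1-p) * beta^2, where p is
# the allele frequency and beta is the effect (in IQ points) per copy.
def fraction_variance_explained(p, beta, trait_sd=15.0):
    return 2 * p * (1 - p) * beta**2 / trait_sd**2

# Hypothetical variant: 20% allele frequency, a full 1-point effect per copy.
print(f"{fraction_variance_explained(0.20, 1.0):.2%}")  # -> 0.14%
```

Even a variant with a whole-point effect per copy accounts for barely a seventh of one percent of the variation, which is why the totals these studies report stay well under 1%.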

Yet the hunt for genes for intelligence goes on, in the face of these clear patterns.  Even though intelligence is essentially and clearly polygenic.  There are few if any smoking gun genes, for the normal range.  Even the genes whose variation is known to cause clinically serious impairment typically do not show up as 'hits' in genomewide mapping studies.  Predictive power based on genotype is thus very weak indeed.

Yet behavioral genetics motors on, with the implicit or explicit promise of prediction based on genotype, a potentially dangerous kind of determinism that history has shown can be badly abused by powers that be.  We keep pouring funds into this, as well.

We are not arguing that IQ, whatever its reality, is not 'genetic', but that it is generally more useful to evaluate people on their actual performance than on some genetic proxy for IQ.  And because the more that is learned about the development and workings of the brain, the more is learned about its plasticity, it's clear that intelligence can't be understood without due attention to the considerable environmental influences on the trait.

Pulling the plug on this kind of fruitless genomics, which is growing despite these kinds of facts basically across the board, would save resources for things where that kind of science can really make a difference, and be less potentially contentious.  What to do as a society about variation in intelligence is a thorny problem, and we don't have any answers.  Sadly, as we discussed yesterday, the experts don't have any answers either.  But if they stir the pot of our society, hopefully all they'll inherit will be the wind of their pronouncements.

Monday, June 25, 2012

Hats off, Holly!

The winners have been announced in the 3QD blog contest!  Please join us in congratulating Holly for taking 3rd place!!
1. Top Quark: Aatish Bhatia, The crayola-fication of the world: How we gave colors names, and it messed with our brains 
2. Strange Quark: Cosma Shalizi, In Soviet Union, Optimization Problem Solves You 
3. Charm Quark: Holly Dunsworth, Forget bipedalism. What about babyism?


Holly's winning post was just one of many she's done here, and indeed deserving.  Hats off, Holly!  We are so very glad you have been so vital to making MT a successful, thoughtful place for thoughts to be expressed about life and its evolution, and the scientific challenge to understand it.  You did yet another great job, and we hope this won't spoil you so much that you leave us for greener pastures!

And, congratulations to the first and second prize winners as well!  Brilliant posts, all!

The sky is falling! (or not)

Now, here's a question prompted by an op/ed piece in the Sunday NYTimes: would chimps throwing darts at a dartboard predict a person's risk of disease as well as any GWAS results to date?  The op/ed piece is about the recent uproar in the political science world that arose when the US House of Representatives in May passed an amendment to a bill that would eliminate National Science Foundation funding for political science research.

The writer of the op/ed piece, Jacqueline Stevens, a professor of political science at Northwestern, says that her doomsaying colleagues will disapprove of her saying so, but that for once she agrees with this Republican initiative, even if its motive is anti-intellectualism rather than any real understanding of the issue.  As she says, political science is spectacularly unable to predict major world events, chimps throwing darts do just about as well, and millions and millions of dollars have been wasted on meaningless research.
...the government — disproportionately — supports research that is amenable to statistical analyses and models even though everyone knows the clean equations mask messy realities that contrived data sets and assumptions don’t, and can’t, capture.
That is, NSF rewards simplistic views of the world, and politically motivated views at that.  Or, said in terms we often use about genetics, political science has become a reductionist field, reducing complex events to single determining variables.  With no predictive power.  Stevens writes, "Many of today’s peer-reviewed studies offer trivial confirmations of the obvious and policy documents filled with egregious, dangerous errors."

And of course this doesn't apply to political science alone, even if Washington politicians are targeting just that field. It's true of economics, psychology, sociology, any aspect of social science that is reductionist and attempts to predict future events based on simple models. Economists and social scientists -- note the label -- aspire to being scientific.  If that meant careful factual analysis and proper data collection rather than views based on ideology, one might agree that there is something scientific about these fields.

But it has become a kind of dictum that one needs big mathematical or statistical models, computer databases, and formal mathematical theories to be scientific.  Too often, if not typically, the models are either so stripped of detail as to be useless to the real world (despite the rationale that they tell us about the real world and where to look for effects), or so intricate that they snow people into believing that this is science because it's arcane.

The humanities, religions, politics, and other areas of human endeavor have similar traits, but in this case we're talking largely about the kinds of social science done in universities, which are supposed to be about seeking truth rather than smoke-screening.  We doubt anyone can seriously argue that society is better off overall, in terms of its psychological or social health, than it was before the age of big research grants -- grants that are now the life blood of universities and the way social scientists justify their status and keep their jobs (teaching some, if and when they really have to).

Of course, while we think social sciences really do deserve all of this kind of critique, which is coming not from us but from their own ranks, MT readers will know that we certainly believe that genetics and medical sciences are somewhat comparable.  So, in fact, is physics with its mega-colliders, hype about life on Alpha Centauri, and strings. Big science now is about building empires devoted to particular research strategies, exotic and impressively complex, despite knowing very well that they will not deliver what is promised. And the track record, overall, supports this. Research empires and establishments are made that work a certain technology or worldview, and they then are like oil tankers, slow to change direction, because they depend on continuity.  It's not really an evil so much as the normal way humans behave, especially when we do make at least some progress with societal benefit (perhaps much more so, or more stably and cumulatively, in genetics than in social sciences), and when we need to earn a living.

Research funding cuts, or reform that includes some type of accountability for real results rather than just CVs padded with long lists of publications, might help.  Of course, the problems we want to solve in social and physical sciences alike are difficult, so the failure to find easy answers isn't the issue. It's the failure to own up, the knowingly false promises, the need for grant continuity, and the amount of wasted public resources that's the problem.

Stevens offers this solution:
To shield research from disciplinary biases of the moment, the government should finance scholars through a lottery: anyone with a political science Ph.D. and a defensible budget could apply for grants at different financing levels. And of course government needs to finance graduate student studies and thorough demographic, political and economic data collection. I look forward to seeing what happens to my discipline and politics more generally once we stop mistaking probability studies and statistical significance for knowledge.
In any field, social or biological or physical, there are valuable kinds of data to collect. Census data, data on incomes, age and sex related aspects of well-being, and so on. And there are unsolved problems that ought to be soluble if we focus on them. What those more focused questions are in polysci, sociology, psychology, education, and economics is not for us to say. 

But in genetics, these involve traits that really are genetic in the usual, meaningful sense of the term. Huntington's disease, sickle cell anemia, muscular dystrophy and many others are examples: we know the responsible gene, even if other genes may contribute in minor ways. Genetic technologies should be able to do something about those disorders. It's where the funding should go.

Tomorrow, we'll comment on a related issue, the determination of behavioral scientists to prove that behavior is genetically determined and simple -- and that they will find the genes.

Friday, June 22, 2012

Informed consent -- who's it supposed to protect, anyway?

A piece by Erika Check Hayden in this week's Nature on current confusion with respect to informed consent  begins with a discussion of the patent 23andMe filed in late May to protect their findings from their research on Parkinson's disease.  As Hayden points out, many consumers of 23andMe, the direct-to-consumer genotyping company, were surprised to learn that the company was filing to patent a gene -- gene patenting having been one of the earliest and most controversial aspects of the commercialization of genetic research and one that several of the company's founders were known to oppose. We blogged about this when the patent announcement was made. 

Hayden's piece hits close to home because in her second paragraph she quotes MT's own Holly Dunsworth who left a comment on the 23andMe blog post announcing the patent application.  Holly said that she didn't believe she'd agreed to the possibility that one of her genes could be patented when she signed her consent for their genetic services. (She's aghast that genes can be patented, and is against it -- that makes it unanimous here at MT.)  But Hayden points out that "the language is there" in the consent forms.  Well, that may be so if you're a lawyer and it occurs to you that "If 23andMe develops intellectual property and/or commercializes products or services, directly or indirectly, based on the results of this study, you will not receive any compensation" means that if they find something they want to patent in your genome, they will and it's theirs.

The non-legal crowd isn't as likely to have read it that way, and didn't realize they were signing on to the possibility that the company might patent one of their genes, without sharing any of the profit, and without explicitly broaching this sensitive ethical issue with participants.  It's more than a bit disingenuous of 23andMe to claim to be open and free with knowledge on the one hand, engaging customers because of this open access policy and on the other hand committed to constraining that freedom when there's profit to be made -- by them! 

But beyond what this means about 23andMe, this does point out one of the problematic issues having to do with informed consent: when what can be done with the materials a subject or donor has consented to hand over can change as technologies change or as findings accumulate, "informed consent" is in fact not that.  23andMe's consent asks their customers to agree essentially to blanket use of their genetic data.  It's an open-ended consent, as many researchers and institutions prefer these days, seeking permission to do anything with the sample in perpetuity.  Some consent agreements, though, impose specific time or usage limits instead.  Given how quickly technology changes, however, these limiting agreements may well be used less and less as genetic research goes forward.  You can be pretty sure that once researchers get their hands on your samples they aren't going to want to give them up.

Another approach is George Church's no-consent-at-all model for the Personal Genome Project.  When people sign on to the project, he asks that they allow all their biomedical data to be freely accessible online, along with their whole genome sequence.  The idea is that because researchers can't predict what they'll find, or what new technologies will come along, science will best progress if participants allow anything at all to be done with their data and their biomedical information, now and in the future. 

Of course this kind of consent can't be properly 'informed' in any serious sense, relative to being what is fair to call a blank check for the researchers. They know this, of course, and any benefits that do happen to become possible will likely go to them.  Researchers don't get to set all the limits, though.  Any consent for a study using federal funding must allow research participants to be free to remove their samples and data from a study.  And,  Hayden says that several US states, including California where 23andMe is based, are considering laws to constrain what can be done with a person's DNA.

However, genetic data are different from, say, demographic data because even anonymized, no names attached, it's possible to identify individuals if reference genetic data are available, and identifying ethnicity is often trivial.  Not to mention that no genome is an island, so if you are genotyped, that tells you a lot about your parents as well as your siblings.  So, if you consent to genotyping or whole genome sequencing but your parents don't, that can be a problem.  And of course once a sequence is published or summarized in statistical data, it is no longer removable.
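A toy example shows why anonymized genotypes are so identifying.  This sketch (ours, in Python, on simulated data) builds a database of random genotype profiles and matches a supposedly anonymous profile back to it; with a few hundred independent SNPs, unrelated people essentially never share a profile:

```python
import random
random.seed(0)

n_snps, n_people = 200, 10_000
freqs = [random.uniform(0.1, 0.9) for _ in range(n_snps)]

def genotype(p):
    # 0, 1 or 2 copies of the alternate allele, Hardy-Weinberg proportions
    return (random.random() < p) + (random.random() < p)

# a reference database of 'anonymized' genotype profiles
db = [[genotype(p) for p in freqs] for _ in range(n_people)]

query = db[42]  # an 'anonymous' profile that also sits in the reference
matches = [i for i, row in enumerate(db) if row == query]
print(matches)  # -> [42]: 200 SNPs already pinpoint a single individual
```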

Who is informed consent for, though?  Who is it designed to protect?  It stems from the Nuremberg Code, drawn up in response to the Nazi experiments of World War II, and those protections were indeed for human subjects.  Human subjects protections have evolved over the years, however, and now, while still nominally for the protection of, well, human subjects, there's a whole lot of CYA going on, as universities and companies try to avoid the kinds of legal difficulties that researchers such as those working with the Havasupai in Arizona got into not long ago (a case we blogged about here).

Indeed, Hayden quotes Misha Angrist, genome policy analyst at Duke University and participant in the Personal Genome Project, saying, “Institutions use informed consent to mitigate their own liability and to tell research participants about all the things they cannot have, and all the ways they can't be involved. It borders on farcical.”

The interests of researchers and subjects don't always coincide, of course.  It is only human to expect those with their hands on the data to be tempted, and at least occasionally to yield to the temptation, to seek private gain from the samples, beyond their salaries and job status.  Also, researchers generally and naturally want to be allowed to do whatever they want with their hard-earned samples, while subjects often want constraints -- if they even understand the study.
Many researchers say that the obvious solution is a broad consent document that gives researchers free rein with the data. But many non-scientists think participants should be able to control how their data are used, says lawyer Tim Caulfield of the University of Alberta in Edmonton, Canada, who has surveyed patients about this idea. “There's an emerging consensus within the research community about the need to adopt things like broad consent, but that hasn't translated out to the legal community or to the public,” he says.
 And then there's 23andMe, which gives their customers access to all their own data.  Until they don't.

Thursday, June 21, 2012

Noise becomes music -- but is it Darwinian evolution? And is it music?

A paper just published online in PNAS, and getting lots of play (yeah, pun intended), seeks to disentangle the forces that create the kind of music people want to listen to -- is it composers, musicians, listeners or a combination of the three?  And, what forces cause music to evolve over time?  Says one of the authors, "We believe music evolves by a fundamentally Darwinian process - so we wanted to test that idea."

The system they developed to test this they call "DarwinTunes." With this system they turned the selection process over to listeners.
...we constructed a Darwinian music engine consisting of a population of short audio loops that sexually reproduce and mutate. This population evolved for 2,513 generations under the selective influence of 6,931 consumers who rated the loops’ aesthetic qualities.  We found that the loops quickly evolved into music attributable, in part, to the evolution of esthetically pleasing chords and rhythms. Later, however, evolution slowed.
Fitness was defined by appeal to public taste; the tunes the public liked best "got to have sex, got to have babies," and the tunes the public disliked died.  Here's a video of one of the authors explaining the experiment, with musical examples, and here's a link to a demonstration.
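For the curious, the engine's selection-and-reproduction loop is, in outline, a standard genetic algorithm.  Here is a minimal sketch of our own in Python: the real DarwinTunes evolves audio loops rated by thousands of human listeners, so the note-list 'genome' and the synthetic listener who prefers small melodic steps below are stand-ins we invented for illustration:

```python
import random
random.seed(2)

LOOP_LEN, POP = 16, 20   # 16-step loops, population of 20
NOTES = range(12)        # a chromatic palette standing in for audio

def random_loop():
    return [random.choice(NOTES) for _ in range(LOOP_LEN)]

def rating(loop):
    # stand-in for a human listener: this one likes small melodic steps
    return -sum(abs(a - b) for a, b in zip(loop, loop[1:]))

def breed(mum, dad, mu=0.05):
    cut = random.randrange(LOOP_LEN)      # one-point crossover
    child = mum[:cut] + dad[cut:]
    # point mutation: each step has a small chance of being rewritten
    return [random.choice(NOTES) if random.random() < mu else n for n in child]

population = [random_loop() for _ in range(POP)]
for _ in range(500):
    population.sort(key=rating, reverse=True)   # 'listeners' rate every loop
    parents = population[:POP // 2]             # top-rated loops get to breed
    population = parents + [breed(*random.sample(parents, 2))
                            for _ in range(POP - len(parents))]

print(rating(population[0]))   # the best loop's score rises over generations
```

The crucial difference, of course, is that in DarwinTunes the rating function was 6,931 human ears rather than a formula -- which is exactly where, as we argue below, the composers sneak back in.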

So, the argument is that they've demonstrated Darwinian evolution and, incidentally, that good music doesn't require composers.  But of course both arguments are patently false.  The only halfway credible way to demonstrate the latter would be to do the same experiment with people who've grown up with zero exposure to music, because exposure -- to music written by human composers -- shapes the consumer's taste, and people's tastes shaped the outcome of this experiment.  So whether or not a composer actually wrote the notes to the tune that consumer forces determined to be the fittest one here, centuries if not millennia of composers wrote the notes that shaped the taste that determined the selections made by the consumers of the music.  That's the most important loop here. 

And of course this experiment doesn't demonstrate Darwinian evolution by natural selection, because the selective force was artificial.  But more importantly it was teleological, goal-directed, the goal being music that people like, a taste that has been shaped by years of listening to music.  Western music at that.  You don't hear eastern influences in the final generation of acceptable tunes.  The evolution of this music was clearly headed in a predetermined direction from the start, with, among other requirements, whole and half steps and little or no dissonance.

And, who's to say that consumer taste has always driven musical style?  It may be a powerful force now in our Top 40 marketing age, but even in the age of marketing The Beatles really did change the kind of music people wanted to hear. 

None of this is meant to take away from the cool factor of this experiment.  But why ruin that with the inappropriate Darwinian analogy that will only serve to further confuse a public that already has a dim enough understanding of evolution?  And what does it say about PNAS that its reviewers didn't spot this egregious mistake and/or permitted the study's misrepresentation as 'Darwinian'?   Is this model so deeply ingrained in today's culture that 'evolution' is confused with Darwinian natural selection, or is it that using Darwin's name was another example of hype or marketing for the study?

And ok, it's conceivable that an internet full of musical consumers could eventually produce something like J.S. Bach's Suite for Lute in G minor (the beginning of the Prelude to which is to the left), just as it's possible that a roomful of monkeys at typewriters could eventually hammer out a Shakespearean play by random typing, but the idea that composers are not needed to create music belittles the complexity of great music, and the role of the composer in its creation.  Really, how much more Muzak do we need?

The study constitutes science of a sort, and is certainly cute and even interesting.  But it says nothing about Darwinian evolution and should not have been portrayed that way.  If our taste really did evolve in a way that makes Bach pleasant (to the educated elite, anyway), this may reflect the way we are, but it says nothing about how we got that way.

Wednesday, June 20, 2012

The 3QD finalists -- pretty good company

We are excited that Holly's March 30 post has been chosen as a finalist in the 3QD best science blog competition.  In this post she writes about what new fossil finds do or don't tell us about the evolution of the hominin foot.  It's a beautiful, thoughtful, learned post, but we do grudgingly have to say that she's in excellent company.  If you haven't read the other finalists yet, you really should give yourself the pleasure.  There's some mighty fine science blogging out there.  
  1. Boing Boing: What Fukushima can teach us about coal pollution
  2. Empirical Zeal: The crayola-fication of the world: How we gave colors names, and it messed with our brains
  3. Quantum Diaries: Helicity, Chirality, Mass, and the Higgs
  4. Scientific American Guest Blog: Trayvon Martin’s Psychological Killer: Why We See Guns That Aren’t There
  5. The Mermaid's Tale: Forget bipedalism. What about babyism?
  6. The Primate Diaries: Freedom to Riot: On the Evolution of Collective Violence
  7. The Trenches of Discovery: The War of the Immune Worlds
  8. Three-toed Sloth: In Soviet Union, Optimization Problem Solves You
  9. viXra log: Higgs Boson Live Blog: Analysis of the CERN announcement
Sean Carroll will choose the winner, to be announced on June 25. 


Tuesday, June 19, 2012

Empathy: an ex-student's path to discovery

For some years Ken and I co-taught an upper level undergraduate course called Biology, Evolution and Society.  It was a small seminar course, and almost invariably the students wanted to be there and were bright, interested and engaged.  Some have gone off to medical or veterinary school, some to law school, some on in anthropology or history, and we still hear from a number of them.

One of our ex-students is currently just finishing his first year as a Peace Corps volunteer in Paraguay.  He writes a blog called Tones of Home about his thoughts and experiences in a small Guarani village there, and it is always interesting and always thought-provoking.  His June 18 post, however, is more than that -- it is a reminder of how thoroughly his current world is not our world.  Indeed, in this post, and in the blog in general, he chronicles his path to a deeper empathy than anything he could ever have learned in school.  Our concerns about open access journals or the meaning of gossip or the appropriate uses of genetics and direct-to-consumer genotyping are irrelevant to the people he is living and working with now.  

This is not to trivialize these concerns.  They are real and can have real effects on people's lives, but in the overall scheme of things, we don't always remember how privileged we are to be worrying about these things rather than about whether we can scrape together enough money to get the antibiotics to save our 4 year old's life.  To a great extent we are 'privileged' because we assign  privilege to ourselves by not having enough empathy, and instead finding endless reasons to rationalize inequity in our favor.

Thanks, Mario.  You're an inspiration.

Monday, June 18, 2012

Finalist for the 3 Quarks Daily science prize

Thanks to your votes, we made the longlist. Then the editors at 3 Quarks Daily chose six plus three wild cards to pass along to the judge. And we're on that list! See the list of fantastic finalists here.  If you aren't yet familiar with Sean Carroll, this year's judge of the competition, he's pretty awesome. Kevin and I are fans of his Cosmic Variance and I show the influence here and here.  Awards will be announced around June 25. Though, it's honestly prize enough to be a finalist and to be read by Sean Carroll.

Idle gossip?

A story in the New York Times yesterday reports that, rather than being merely malicious, gossip can have positive benefits.  The piece quotes from a paper published in the May Journal of Personality and Social Psychology describing experiments done by a group of sociologists that demonstrate that gossip serves good purposes, such as reining in selfish behavior and alerting others to injustices done by members of the group, thereby reducing the likelihood that the gossip's recipients will be exploited.

It's interesting that this paper is getting so much notice, since anyone who has taken an anthropology class in the last 50 or 100 years will know that cultural anthropologists have long recognized gossip's function in a group.  The study of gossip is the study of how people behave, and this has been true at least since anthropologist Bronislaw Malinowski lived and worked among the Trobriand Islanders in the early 1900s, but gossip probably came into its own as a subject of significant import in the field in the 1960s.

Naturally, however, anthropologists don't necessarily agree on the function of gossip.  There are the functionalists who see talking about other people and how they misbehave as something that maintains the cohesiveness of the group, keeps people in line, and makes it clear what's expected of group members.  This could be important in primitive bands with no official leaders, or as a way quietly to resist bossy chiefs, especially by women who were not in the hierarchy.  And there are the economic anthropologists who see gossip as an economic transaction, the exchange of information for everyone's benefit.

There are the transactionalists, who don't believe that a group is cohesive or shares common goals, but that gossip instead serves the purpose of the individual as s/he aims to get ahead.  Gossip is the way to get the group to go along with your own self-interest.  And then the symbolic-interactionists see culture as constantly changing and gossip as a way for members of the group to assess those changes and their own and others' roles in that change.  And that's just for starters.  Every school interprets gossip in its own way.

And these are all very different interpretations of the same phenomenon, the sharing of information about people in the group quietly, in a social kind of context, avoiding direct challenge to dominant individuals or constraining those who would get out of line.

Of course our own culture isn't immune.  Not only do we all gossip about people in our daily world, but we gossip about people we will never meet -- People magazine, celebrity gossip columns, stories all over the web serve the purpose of fulfilling an endless demand for gossip.  Why do we do it?  We do it because _________.  You fill in the blank.

And that's another interesting thing about this story getting such play.  Not only is the paper the reinvention of a rusty old wheel, but it illustrates a problem that is true in all of the social sciences, yes, but also the humanities and even the life sciences.  It might even define philosophy.  The same observation can be interpreted in a multitude of ways that are invented and continually reinvented over time, largely with updated jargon.  We've had anthropology's many takes on gossip, and now the sociologists are jumping in, reinventing, and seemingly oblivious to the fact that these ideas have already had play and shown their likely relevance.

Will someone finally get it 'right'?  Nope.  'Right' is determined by one's ideological perspective, and functionalists and symbolic anthropologists will argue until they are blue in the face, none of them giving an inch.  And, if their arguments are logical, they'll all be equally right.

Every anthropologist who ever wrote about gossip was right, from Malinowski on up to the present day (if anthropologists are in fact still writing about it).  Well, forgive us, but perhaps excepting evolutionary anthropologists who will (do?) say that we're genetically programmed to speak ill of each other, for the same reason we cited above; _________.

Of course, this is one of countless instances where professional careerism requires doing research and that often means re-doing research, and of course claiming deep insights to a press hungry to report it (by reporters often oblivious to history).  And then the readership is gullible in thinking some new discovery has been made.

But, hell, maybe gossip is just fun!

Sunday, June 17, 2012

Thanks for voting! It worked.


Thanks to your votes, my post is a semi-finalist for this prize! Now, hopefully the editors will choose it for judging by Sean Carroll. They'll post that shortlist of finalists soon. See details and list of semi-finalists here if you're interested.

Friday, June 15, 2012

Direct-to-consumer racism?

Ken and I have often cautioned, here on MT and elsewhere, that society may be flirting with a new era of eugenics, given new genetic technologies that claim to be able to assess our ancestry and our risk of disease or other traits.  More ominous is the coupling of these new technologies with current attitudes about genetic determinism.  History shows the horrors that this kind of misuse of science has visited on millions of people.  This could be a legitimate concern even if scientists today, unlike in the past, would cringe at the thought of such uses of their work.  But are we wiser, kinder, and gentler than our benighted forebears, or is vigilance just as needed now as ever?

A story in this week's Nature, "Genome test slammed for assessing 'racial purity'", is a case in point, suggesting that this isn't just hysteria on our part. A genetic diagnosis company in Hungary, licensed to do genetic testing for health purposes, has certified a far right member of the Hungarian parliament to be free of Roma or Jewish heritage.  The company scanned 18 loci for variants that they say are at higher frequency in Roma and Jewish populations.  They produced certification of the candidate's pure Hungarian ancestry in time for the election, which this candidate went on to win.  How much the certification had to do with the win, who knows, but that's not the point. 

Nature reports that the company argues on its website that it “rejects all forms of discrimination, so it has no right to judge the purpose for which an individual will use his or her test result, and so for ethical reasons it could not have refused to carry out the test."  Even so, many in Hungary have reacted negatively: the government has condemned this misuse of genetic testing; Hungary's Medical Research Council secretary says it's "professionally wrong, ethically unacceptable -- and illegal"; and a Jewish member of the company's board resigned. 
“The council’s stand is important,” says Lydia Gall, an Eastern Europe and Balkans researcher at civil-rights group Human Rights Watch, who is based in Amsterdam. In Hungary, “there have been many violent crimes against Roma and acts of anti-Semitism in the past few years”, she says. Politicians who try to use genetic tests to prove they are ‘pure’ Hungarian fan the flames of racial hatred, she adds.
It's important to note that the testing did not precede the racial hatred, but it does serve to feed the frenzy.  This is why we have so often cautioned that scientists need to be very careful about the kinds of uses for which they use or advocate genetic testing.  Not long ago similar ancestry testing was proposed for screening immigrants to the UK--for national 'security' purposes.  Sound familiar?  It's just what the British eugenicists were up to 85 years ago. 

The problem is that when the horse is out of the barn, it's too late.  So, indeed, it's a very delicate and difficult issue, but there are arenas into which scientists should not tread -- or not without some clear and controlling societal agreement on what can and cannot be investigated.  



Thursday, June 14, 2012

Open access -- its time has come?

The end is nigh?
The rumbling about open access to scientific publications is getting louder.  If you've been out of earshot, here's a brief run-down of some of the recent major events.

A boycott of academic publisher Elsevier began 6 months ago when British mathematician Tim Gowers issued a 'call to action' urging academics to refuse to submit to or review for any Elsevier journal.  He explained in his call to action why he decided to go public with his anger over the exorbitant profits Elsevier and others are making from his free labor.  To date, 12,000 scientists have joined the boycott. You too can add your name to the list.

The fundamental issue has to do with open access to the results of scientific research, particularly research funded by public money.  Gowers's specific objections are that major academic publishers charge usurious prices for subscriptions to single journals, often $1000 or more; that libraries have to buy large bundles of journals, many of which they don't want, in order to get the major journals an academic library can't do without; and that Elsevier supports attempts to restrict the free exchange of information. So, you pay for the research with your taxes, and then libraries (and researchers) pay again, also with your taxes, for their subscriptions.  That's double-dipping!

One of many Boycott Elsevier images on the web;
this one's a poster, available here
Three major publishing houses, Elsevier, Springer and Wiley, produce the bulk of the more than 20,000 academic journals published worldwide, and over 40% of the journal articles that appear each year.  This is big business, with a profit margin of 35% or more, according to numerous sources (here or here, e.g.), worth on the order of $12 billion a year.  This profit, more and more academics are coming to feel, depends too heavily on their free labor.  Publishers pay academics nothing to write the papers published in these journals and nothing to review the work that their peers submit; nor, come to that, do academics get paid for the work they do to review and screen the grant applications in the first place.  Peer review is the process that is meant to ensure high-quality contributions to journals, though it's the highest-tier -- and most lucrative -- journals that can get the most well-known academics to do the reviews, and those academics might be asked to review hundreds of papers a year.

Journal subscriptions already can be prohibitively expensive, and even access to a single paper can cost $30 or more.  Yes, publishing costs money, and if authors and reviewers were paid it would cost even more.  But when the work has been funded by public money, the writing and reviewing come free to the publisher, and the results are then sequestered behind a paywall, the system looks more and more unsustainable, as the increasingly organized protests suggest.

Open access isn't a new issue, of course.  PLoS (Public Library of Science) has been publishing open access journals since 2003, and they quickly became well-respected -- high impact -- places to publish.  One of the journals, PLoS One, will publish essentially anything that is well-written, presents a study done with acceptable methods, and hasn't been published previously; it's the reviewers' job to ensure that a paper meets these criteria.  As a result, PLoS One does publish some lesser work, but it also publishes some very fine work -- presumably by well-funded researchers who don't want to deal with the hassle, delays and arbitrariness of the review system, or who just want to circumvent the private-gain system.  Someone's got to pay for these journals, either authors or readers: PLoS is a non-profit publisher but it does charge to publish (hmm, where does that money come from?), though to be fair it will waive the fee if necessary.  The other PLoS journals have more stringent standards for publication.

arXiv was an early adopter of e-print publishing, online since 1991.  It's a database maintained at Cornell that publishes and provides open access to physics, mathematics and statistics papers, and it hosts probably every physics paper published in the last 2 decades.  Open access is old hat to people in the hard sciences.

More recently, a petition to the White House to require open Internet access to taxpayer-funded research reached the magic number of 25,000 signatures, and so the idea that publications of publicly funded research results must be freely available online is now officially in play in the upper echelons of government.  Here's the reasoning of evolutionary biologist and PLoS co-founder Michael Eisen on this issue.  The National Institutes of Health has enacted a public access policy requiring scientists who publish results of NIH-funded research to submit final peer-reviewed journal manuscripts to a publicly accessible digital archive (PubMed Central) upon acceptance for publication.  There are currently 2.4 million articles in the archive; journals can choose to be full participants, but most do not, or deposit only selectively.  The library and Faculty Advisory Council of at least one major university, Harvard, are urging their faculty to submit their work to open access journals, primarily because journals have become prohibitively expensive and this is a way to chip away at the status quo. 

Further alternatives to the traditional publishing model are beginning to appear, as Reuters reported on June 12.  One is called PeerJ, whose founders come from PLoS and the research database Mendeley.  Their website has just come online.
PeerJ provides academics with two Open Access publication venues: PeerJ (a peer-reviewed academic journal) and PeerJ PrePrints (a 'pre-print server'). Both are focused on the Biological and Medical Sciences, and together they provide an integrated solution for your publishing needs. Submissions open late Summer.
They offer a 3-tier subscription scheme: until September, $99 buys a lifetime right to publish a single paper a year in their journals and to freely access any of the papers they publish; for $169 you can publish 2 papers a year; and for $259 your publication rights are unlimited.  The fees will go up some at the end of the summer.  The founders hope to hit the ground running, publishing significant papers by well-established researchers immediately. As at PLoS One, papers will be reviewed for scientific validity, not potential impact.
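To put rough numbers on what those tiers imply, here's a back-of-the-envelope sketch in Python.  The plan prices come from the report above, but the tier labels, the career length, and the papers-per-year we assign to the unlimited plan are our own assumptions:

    # Back-of-the-envelope cost per paper under PeerJ's lifetime plans.
    # Prices as reported; career length and the papers/year assigned to
    # the unlimited plan are our own assumptions, for illustration only.

    plans = {
        "one paper/year":  (99, 1),
        "two papers/year": (169, 2),
        "unlimited":       (259, 5),   # assume 5 papers/year actually used
    }
    career_years = 20                  # assumed length of a publishing career

    for name, (fee, papers_per_year) in plans.items():
        per_paper = fee / (papers_per_year * career_years)
        print(f"{name}: ${per_paper:.2f} per paper over {career_years} years")

Even on conservative assumptions, a one-time fee of a few hundred dollars spread over a career compares rather favorably with $30 per download on the reader's side.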

Another open access journal about to come online, eLife, is a joint collaboration of the Howard Hughes Medical Institute, the Wellcome Trust and the Max Planck Society.  They also hope to be a high-impact journal right off the bat.  Because of the well-heeled backers, publication is going to be free, at least initially.

Let's be fair...
Of course, not everyone is a fan of open access, and it's not just the journals that stand to lose who are against it.  Opponents say it's anti-free market; that it's expensive for authors rather than libraries, and thus disadvantages those without grants, as well as scientists in poorer countries; that peer review and quality will be sacrificed; or that it's just too big a change, after 350 years of publishing science this way. 

After all, it is also true that until recently there was no way to get your work known to any kind of broad audience other than publishing on paper, through individual and library subscriptions.  Even then it took a long time to get the peer review done and the paper in print.  Without centuries of such publication there would hardly be any scientific establishment, and progress would have been much slower!

Indeed, everybody has to earn a living, and that includes people who work for Elsevier.  And our pension plans are based on stocks, which means on profits.  It is not surprising that the status journals work hard to preserve their market share.  Nor can we be blamed for succumbing to this aspect of the status system, given how rewards are doled out in academia.

But not too fair!
Just because publishers are people with kids to feed, and because they were the only, and an effective, way to disseminate information that made widespread access possible, doesn't mean things can't or shouldn't change.  After all, think of the scribes in monasteries whose work on scrolls and tablets was undermined by Gutenberg and the disrespectful paper inkers.  They had families, too -- well, the Church was their family.

Directly or otherwise, publishers play on your fears: that if you don't publish in their expensive high-visibility journals, you won't get tenure or a grant.  Making us vulnerable, they make it difficult to resist the game they're playing -- a game we let them play by going along with the 'impact' system (maintained by another private, profit-making company) that determines which journals people submit papers to, and how their Deans judge their influence.

Jean Miélot, author and scribe
The peer review argument also rings rather hollow.  First, peer review can also mean insider-club exclusion, and we know that innovative ideas have a tough row to hoe.  Second, anyone in academe knows that you can always find someplace to publish your work if you try hard enough (PLoS One is one place).  Peer review may once have served as a guard at the quality gate, but in terms of serious quality control those days were gone decades ago.  We think there is no serious evidence that e-journals have lower standards -- and indeed, PLoS One, often the whipping boy for this argument, has published some very fine papers.  And almost anyone who is candid knows that the 'main' journals publish a lot that is weak, sensationalized or over-claiming, and that most of the results turn out not to be replicable (as Ioannidis argued in a paper we've referred to before, published in PLoS Medicine, in fact).  So we need not weep for the poor publishers and their role as Guardians of Quality.

So, given everything, change is hard and always has uncertainties, and we're all in a tangle that keeps costs higher than they need to be. And those not in science are paying the price for it: they pay for the research and expect to reap the benefits, but the current system is, to many of us, too indirect and inefficient, with too many in the middle skimming off the top before the research ever sees the light of day.  Still, it does look as though the push for open access is getting stronger, big money is starting to rally behind it, and a consensus is perhaps beginning to build.

Wednesday, June 13, 2012

Extremophile microbes - evidence of one of the fundamental principles of life

Life is nothing if not resourceful.  If there's an environment, it's very likely that some living thing will have adapted to living in it.  In the last decade or so, with genomic sequencing as routine as it has become, researchers have been taking samples from a variety of environments and sequencing them to see what's there.  Craig Venter, e.g., launched his Global Ocean Sampling Expedition in 2004, sailing his yacht around the world for 2 years collecting samples of ocean water and then sequencing everything he found in the hopes of identifying unknown microbes.  And he did, discovering "more life forms than anyone in history," according to the New York Times.  And recently researchers found previously unknown organisms in acid mine drainage by sequencing everything that cropped up in their samples.  We blogged about that here.

Universal phylogenetic tree of life based on comparison
of small subunit rRNA (ribosomal RNA) sequences; source
Now researchers are reporting 'extremophile bacteria' from soil samples taken from volcanoes in the Atacama region of South America.  In fact, they found bacteria, fungi and organisms from the branch of life called archaea -- the third domain, along with bacteria and eukaryotes.  Archaea, single-celled organisms with their own evolutionary history, were once thought to live only in extreme environments like deep sea vents or hot springs, but it turns out they can be found just about anywhere, including the human colon.

Not just for its own interest!
Speaking of extremes, bacteria living in underwater thermal vents (hot springs) were discovered some time ago and turned out to have extreme importance to biology.  Like any organism, Thermus aquaticus has to copy its DNA when it reproduces (by dividing).  Copying DNA requires a polymerase, a protein that moves along a DNA strand and makes a copy by grabbing appropriate matching nucleotides and stringing them together.  At high temperatures most proteins lose their 3-dimensional shape and don't function, but T. aquaticus's polymerase doesn't come apart that way.  So a few decades ago, scientists, including one Kary Mullis, developed a way to extract and use T. aquaticus's polymerase in a machine to make copy after copy of any fragment of DNA you put into a test-tube along with the polymerase, some nucleotides, and the kind of solution required for DNA to be copied.  That reaction is called PCR (Polymerase Chain Reaction), and it allows us to amplify DNA for the countless kinds of experiments, including DNA sequencing, that are at the basis of much of modern molecular and evolutionary biology.
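Because each cycle of PCR copies every template already present, the amount of DNA grows exponentially.  Here's a minimal sketch of that arithmetic in Python; the efficiency parameter and the numbers are our own illustrative assumptions, not values from any particular protocol:

    # Minimal sketch of PCR amplification arithmetic (illustrative only).
    # Each thermal cycle, every template is copied with some per-cycle
    # efficiency; at 100% efficiency the copy number doubles each cycle.

    def pcr_copies(initial_copies, cycles, efficiency=1.0):
        """Expected copy number after a given number of PCR cycles."""
        return initial_copies * (1 + efficiency) ** cycles

    print(pcr_copies(1, 30))              # perfect doubling: 2**30, about a billion
    print(round(pcr_copies(1, 30, 0.8)))  # a less efficient reaction: ~45 million

Even starting from a single molecule, 30 cycles gets you to quantities of DNA you can actually work with -- which is why the heat-stable polymerase was such a gift.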

Even more extreme
These newly discovered volcanic micro-organisms appear to have a way of converting energy that hasn't been seen before.  They are characterized as being 5% different at the DNA sequence level from their closest known relatives -- a difference in the same ballpark as the estimated genetic difference between humans and our closest relatives, chimps.  That is, not terribly different.
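For what a figure like '5% different' means in practice: it's essentially the fraction of sites that differ between aligned sequences.  A toy Python illustration, with invented sequences standing in for real data:

    # Toy illustration of percent DNA sequence divergence between two
    # aligned, equal-length sequences (the sequences here are invented).

    def divergence(seq1, seq2):
        assert len(seq1) == len(seq2), "sequences must be aligned"
        mismatches = sum(a != b for a, b in zip(seq1, seq2))
        return mismatches / len(seq1)

    a = "ACGTACGTACGTACGTACGT"
    b = "ACGTACGAACGTACGTTCGT"    # differs at 2 of 20 sites
    print(f"{divergence(a, b):.0%}")   # prints 10%

Real comparisons are of course done over thousands or millions of aligned sites, and distances are usually corrected for multiple hits at the same site, but the basic idea is just this counting.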
Atacama region; Wikimedia
Commons
Life gets little encouragement on the incredibly dry slopes of the tallest volcanoes in the Atacama region, where [Ryan Lynch, a microbiologist with the University of Colorado in Boulder, one of the finders of the organisms]'s co-author, University of Colorado microbiologist Steven Schmidt, collected soil samples. Much of the sparse snow that falls on the terrain sublimates back to the atmosphere soon after it hits the ground, and the soil is so depleted of nutrients that nitrogen levels in the scientists’ samples were below detection limits. Ultraviolet radiation in this high-altitude environment can be twice as intense as in a low-elevation desert. And, while the researchers were on site, temperatures dropped to -10 degrees Celsius (14 degrees Fahrenheit) one night, and spiked to 56° C (133° F) the next day. 
How these micro-organisms survive is still an open question. Lynch, Schmidt and colleagues looked for evidence of photosynthesis or chlorophyll, but didn't find it (it's not clear from the press piece whether this was in the fungi or all the micro-organisms; we'll have to await the paper soon to be published in the Journal of Geophysical Research-Biogeosciences). "Instead, they think the microbes might slowly convert energy by means of chemical reactions that extract energy and carbon from wisps of gases such as carbon monoxide and dimethyl sulfide that blow into the desolate mountain area." These guys are apparently not fast-moving, so don't need a whole lot of energy. But the researchers suggest that it's possible that these organisms are making energy in a way that isn't yet known -- this seems unlikely to us, because 5% genetic difference doesn't seem like enough to support a whole new metabolic process.

The diversity of life in this particular ecosystem is pretty sparse; the researchers identified only 20 or so kinds of organisms in these samples. The microbiologists, used to finding thousands of lifeforms in soil samples of the same size, say this is likely due to lack of water.

Not content to understand the extremes of life on Earth, the team of microbiologists is teaming up with astrobiologists, the idea being that conditions in the Atacama region must be as close to those on Mars as anyplace on Earth.

But that's not why we like this story -- in fact, that makes us like the story a little bit less.  Why can't these organisms be interesting in and of themselves, for what they tell us about the extremes of life on Earth, as well as the evolutionary principles that drive the ways that life has adapted to just about any condition the Earth throws at it?  Indeed, to us this finding is another piece of evidence for the beauty of evolution, that life here on Earth is eminently adaptable.  This is one of the fundamental principles of life we've written about numerous times, including here, and in our book.  To us it helps explain and predict the kinds of diversity we see wherever we look.

Prokaryotes and phungi -- two more entries in Holly's evolutionary P Hall of Fame.

Tuesday, June 12, 2012

Evolutionary genomics and interior decorating. Part II. Why is there genetic variation in anciently adaptive traits?

Animals are humans too?
Yesterday, we discussed a human-interest NYTimes story on the widespread sharing of human traits by other animals. The details of the piece were interesting but in a sense, none of what was discussed was new or surprising in principle, if one thinks in evolutionary terms.

If some trait of ours, or something we do, is also found in other animals, especially distantly related species such as between us and fish, observers tend to automatically characterize it as 'adaptive'.  In the NYT piece an example was dopamine release as a reward for evolutionarily advantageous behaviors.  We may not really know what any such trait evolved 'for' in fish, leopards....or humans. But regardless of the almost-always speculative nature of the specific scenarios suggested, the idea is that the trait must at least not have been very harmful, if not beneficial (the usual interpretation) and must in some way be due to the action of genes.

Ribs (sc=scapula); Photo © Kuratani and Nagashima / Science AAAS.
Fish have ribs; frogs, snakes, birds and mammals have ribs.  These are produced during embryogenesis by the action of various genes, some of them known, that are similar from fish to humans.  They may not be as similar as was once thought (by Ernst Haeckel in his famous argument that 'ontogeny [development] recapitulates phylogeny [evolution]'), but the essential mechanisms are conserved--fixed among diverse species (except perhaps for harmful mutations that always arise and are quickly removed by purifying natural selection).  Every member of these species has the genes whose action produces ribs.

The same will be true of disease--because related species share so much genetically and metabolically, they also share ways in which these systems can go wrong, whether due to genetic or environmental causes, or a combination.  So it is wholly expected that other species will get cancer or diabetes, and so on.  And the same will apply to any shared traits, including behavior.  The article we discussed in Part I yesterday gave a number of fascinating examples. 

However, despite the irresistible temptation to make up Just-So stories such as how mood-altering substances may have been good for the ancient animals, and favored by natural selection, that need not be true.  There can be incidental effects.  Humans do interior decorating, an example in the NYT piece of how we get our kicks, but what we like comes and goes and varies among cultures.  However our brains work, we have genes enabling esthetic taste; but those genes need not have been 'for' esthetics.  We do math, but some use Roman numerals or an abacus, and others do calculus on paper or computers.  We have these abilities even though our ancestors, even just among humans, much less red snappers or dung beetles, did not do interior decorating or calculus.  Whether the same genes underlie quantitative analysis in ants, or bower birds' nest-shape esthetics is something we'll leave to the experts.  That's because it's irrelevant to our point here, which is that a trait once here for one reason can be used for other things.  We used to throw spears for survival, now we swing bats for runs in baseball and cricket.

Fixed vs variable traits
The usual casual, default, text or pop-sci view is that a trait with adaptive fitness value should come to be fixed in a species, as ribs and other things generally are.  Such traits shouldn't show much variation, because selection has favored the trait, getting rid of genes that didn't confer it!  This might seem especially so if the trait has been essentially fixed for tens or hundreds of millions of years.  So what about the traits we're discussing here?  If we share them with mammals or even fish, how can they still vary? Weren't they fixed long ago?

The reason to ask this is that most traits, whether normal or disease-related, behavioral or physical, vary at least to some extent in any population.  This can be due to the chance aspects of cell interactions during development, or to environmental effects, and so on.  But, remarkably, most traits show evidence for genetic variation.  Relatives resemble each other.  And in those cases where you can do an artificial selection experiment, the population responds: the trait values in offspring shift in the direction you favored, compared to the trait values in the parents (see the sketch below).  Familial clustering is, after all, one of the rationales for universal GWASing of any trait one has enough intelligence to spell in a grant application.  Often the degree of clustering, or the evidence that it's genetic and not environmental or cultural, is very weak, but never mind: money is money, and it keeps epidemiologists and geneticists out of soup kitchens. In any case, there are several possible explanations for the resemblance among relatives.  And they're very instructive.
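That response to artificial selection has a standard quantitative-genetics summary, the breeder's equation: the response R equals the heritability h² times the selection differential S.  A minimal numerical sketch in Python, with invented numbers:

    # Minimal sketch of the breeder's equation, R = h^2 * S.
    # S = how far the selected parents deviate from the population mean;
    # h^2 = narrow-sense heritability; R = expected shift in the offspring
    # mean.  All numbers below are invented for illustration.

    def selection_response(h2, selected_parent_mean, population_mean):
        S = selected_parent_mean - population_mean   # selection differential
        return h2 * S                                # expected response R

    R = selection_response(0.4, 105.0, 100.0)
    print(f"expected offspring mean: {100.0 + R:.1f}")   # 102.0

The point is simply that if relatives resemble each other for genetic reasons, selecting on the trait moves the next generation; a response of zero would say the familial clustering wasn't genetic.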

Well, it's just not genetic!  You don't need genetic variation for it
One could say that behavior (like taste in draperies or interior wall colors) is something our brains make without any particular genetic basis for it.  Familial resemblance could be due to shared experience: you grow up in a house with wallpaper, you may like to have wallpaper; no genetic variation has to be involved.  We are in such haste to assume genetic variation is responsible that, without nearly adequate evidence, we make up evolutionary stories (as in those about other animals' behavior resembling ours), and far too many people swallow the stories uncritically.

We usually do that because we have Mendel's peas in mind.  They were either green or yellow, but that's because he crossed two strains, one all green and the other all yellow; there were only two states under study.  If Mendel had then selected for, say, green, he'd have quickly eliminated the 'yellow' variant from his hybrids.  By extension to evolution, it's natural to think that if a trait were of adaptive importance, and selection arguments are valid (regardless of the specific selective reasons), then new variation that made you like paisley wallpaper (or, rather, made cheetahs like paisley wallpaper), rather than not like it, would have advanced in frequency and been fixed in the species.  If wallpaper colors were like pea colors, the variation we see today in wallpaper preference simply couldn't be due to genetic variation--interior decorators have jobs because what we like on our walls isn't genetically determined.

But this brings us back to the problem, because basically every trait seems to be genetic, and yet it also varies at least slightly, in any species!  And in a sense the subtle influence of Mendel is the problem.

It's simply not simple!
If, rather than being like Mendel's peas, whose color variation was due to a single variant in a single gene, a trait is affected by many genes, we can easily understand how it can at once have been selected 'for' long ago, persisted because of its importance, and still vary today in any species we'd like to study.

Each of the contributing genes, being a gene, is subject to mutational variation.  So are the DNA sequences that regulate the use of each of the genes.  Mutations with awful effects will quickly fail and disappear from the population.  But most mutations have little (if any) effect on the trait, and thus on the adaptive fitness of its bearer--that is, on the bearer's chances of reproduction. Their fate is largely determined by chance.

Harmful genotype combinations of variants at these many genes will be removed from the population, but the same variants can stay around, because most of the time they'll occur in combinations that are neither harmful nor helpful.  The trait is maintained by natural selection, but the individual genetic variants underlying it can change over time.  Most individual variants have only trivial effects on the trait.  Each generation, new genetic variation is introduced in these genetic regions, and some existing variants pass out of existence.
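A tiny simulation makes the point concrete: put the trait under stabilizing selection, spread its control over many loci of small effect, and watch the trait stay put while the variants underneath it wander.  This is a cartoon (clonal haploids, invented parameters), not a model of any real system:

    import math, random

    L, N = 100, 500       # loci per genome, population size
    EFFECT = 0.1          # small per-allele effect on the trait
    MU = 0.001            # per-locus mutation rate per generation
    OPT = 5.0             # trait optimum favored by stabilizing selection

    def trait(genome):
        return EFFECT * sum(genome)

    def fitness(genome):
        # Stabilizing selection: fitness falls off away from the optimum
        return math.exp(-(trait(genome) - OPT) ** 2)

    # Start with variation at every locus (allele frequencies near 0.5)
    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]

    for gen in range(201):
        if gen % 50 == 0:
            mean_trait = sum(trait(g) for g in pop) / N
            freq = sum(g[0] for g in pop) / N   # frequency at one arbitrary locus
            print(f"gen {gen:3d}: mean trait {mean_trait:.2f}, locus-0 freq {freq:.2f}")
        # Selection: draw parents in proportion to fitness (clonal reproduction)
        parents = random.choices(pop, weights=[fitness(g) for g in pop], k=N)
        # Mutation: each allele flips with a small probability
        pop = [[1 - a if random.random() < MU else a for a in g] for g in parents]

Run it and the mean trait hugs the optimum generation after generation, while the frequency at any single locus drifts up or down; the trait is conserved, but the individual variants that build it are not.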

This is all wholly consistent with what we know about evolution, and with what we find in GWAS -- and we'd find it if we did GWAS for interior design preferences, too!  But there is an important implication, if you think carefully about it: the long evolutionary conservation of traits does not mean they are fixed, as would be the case if they were due solely to a single gene, or if most variants had important effects.  What it does mean is that traits are complex, just as genome mapping studies (GWAS and other types of study) are finding them to be.

The bottom line lesson: why we see what we see but don't want to see
To the extent that this reasoning is correct, it is an indirect but perhaps quite informative bit of evidence about genetic control: variation plus persistence suggests, or even implies, complexity.  In evolutionary terms, the more conserved traits are among distantly related species, the more likely they are to involve many different genes, because that is basically how they can still vary today in complex ways.  They are not Mendelian by and large, even if a few, usually very rare, variants with major effects exist at any time (as is the case with most diseases we study).

Thus, the fact that wildebeests, monarch butterflies, and even humans share a sense of esthetics suggests that the trait has been maintained, for whatever reasons, by some sort of selection, so that there is no major variation within the population.  But whether they and their relatives prefer paisley or pastel wallpaper is likely not to be 'genetic', or to be very important biologically.

Our traits are due to many interacting factors.  In a way, such redundancy and complexity protects species from being too vulnerable to harmful mutations (or unexpected environments).

If, thinking Mendelishly, we expect otherwise, or hope for simplicity, we are deluding ourselves. We may talk 'evolution' but are not thinking clearly enough about it.