Friday, October 29, 2010

This year's acorn crop cont.

More on this year's magnificent crop of acorns.  I did hear back from the forester on the question of why so many acorns this year.  He says that oaks are generally sporadic fruit producers, with really good crops every 4 to 7 years.  There are two main reasons for this: one is the weather, the other an ecological adaptation.

A late spring frost is hard on oak flowers, and will lead to a low yield, he says.  And, insects play a role.  There are on the order of 30 different species of acorn weevils "that can destroy up to 90% of any given year's production either while it is on the tree developing or after they fall in the autumn."  The cyclic nature of fruit production helps keep the insect population down.

And, he says that there are advantages to sporadic fruit production.  It keeps predator populations down, which increases the chances that some acorns from a given tree will survive and grow.  Otherwise, my informant says, the tree would constantly have to produce more and more fruit to stay ahead of the rodents.  Similarly, the fluctuation keeps weevil populations down, and thus acorn destruction down.  Good for the tree, not so good for the predators.

Both explanations sound plausible.  However, regular MT readers won't be surprised if we are a bit reluctant to accept the adaptive explanation right off the shelf.  First, an oak tree is lucky if even a few of the acorns it produces in any given year make their perilous way to treehood.  Even in a bad year, oaks vastly overproduce acorns relative to what will take root, what replacement requires, and so on.

However, sporadic fruit production in response to the vagaries of climate or other means of destruction of flowers or developing acorns is completely in keeping with the adaptability or facultativeness that is a core evolutionary principle.  Oak trees need to be able to adapt to change, and good and bad fruit production years are one way they do so.  It's easy to suggest, but a lot more difficult to conceive, how a tree 'knows' (that is, has genetically evolved) to adjust for variable predator loads in the hypothesized way, when climate itself is unpredictable.

Thursday, October 28, 2010

Size really does matter, after all!

Well, hot off the E-press is the latest Lusty Science story about findings reported in the latest British Journal of Cancer.  Apparently, being tall raises your risk of testicular cancer.  Yes, and risk is a matter of the proverbial inches. Every couple of inches raises your risk (if you're a guy--ladies, relax, this isn't about you!) by a whopping 13%.

Now, cancer is nothing to sneeze at, though testicular cancer is often, perhaps usually, completely curable.  But a 13% increase in the normal risk of about 4 in a thousand is roughly 5 in a thousand.  And the baseline height for this risk, the standard Size for Guys?  Five foot nine inches.

Now why this is, is a mystery, but it certainly merits taking a long look, so to speak.  However, maybe the long look should be at why otherwise idle epidemiologists would think this is a big story.  We don't want to demean the importance of testicular cancer in any way at all, but there are many statistical issues with studies of this kind (which was a meta-analysis of many different studies and, for example, found a lot of inter-study variation in results that the authors couldn't explain). The issue is what such a study actually can mean that is of importance.  The effect is small (so to speak) and no mechanism is suggested (because the same study couldn't find body fat or weight effects).

Now, if you happen to be 7 ft tall, that's 15 inches above average, which still only doubles your risk, if we could even assume the 'trend' were steady enough for such estimates to mean anything reliable.  And the number of 7-footers is so small that hardly any would, at the risk of 8 per thousand, experience the tumor.  Of course, there's no harm in getting regular physicals, or self-checking (just to detect tumors!), just as applies to women and breast cancer risk.  Now, if you're a Shorty, you should feel happy about that, after all.  But remember to put all those SizeMatters pill and device emails in your spam box!  Be happy with what you are.   Stop worrying!  You may not be winners in every size competition, but you come out ahead in other ways.
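For what it's worth, the arithmetic behind those numbers is easy to check.  Here's a back-of-the-envelope sketch, assuming (as the post's figures imply) that the 13% per two inches adds linearly from the 5'9" baseline rather than compounding:

```python
# Back-of-the-envelope testicular cancer risk vs. height.
# Assumption (ours, matching the figures above): risk rises by 13% of
# baseline for every 2 inches above the 5'9" (69 inch) reference height,
# added linearly rather than compounded.
BASELINE_RISK = 4 / 1000   # quoted baseline: about 4 in a thousand
STEP_INCREASE = 0.13       # relative increase per 2-inch step

def risk(height_inches):
    steps = (height_inches - 69) / 2
    return BASELINE_RISK * (1 + STEP_INCREASE * steps)

# A 7-footer (84 inches) is 15 inches, or 7.5 steps, above baseline:
print(round(risk(84) * 1000, 1))  # -> 7.9, i.e. roughly 8 per thousand
```

Compounded multiplicatively instead (1.13 per step), the same 15 inches would give about 2.5 times baseline, so even the shape of the 'trend' matters more than the headlines suggest.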

Your research dollars at work!

Wednesday, October 27, 2010

Nut cases, or The Mystery of the Acorn Bonanza

Penn State is situated in the middle of what used to be vast hardwood forest.  Some areas of the county have been more completely deforested than others, and we happen to live in an area in which the developers retained as many oaks as they could when they put up houses in the 1970s.  One consequence of this is that we spend many hours raking leaves this time of year.

Many many leaves, and this year, many many acorns.  Masses of them.  It's a good year for acorns.  I wanted to know why.

Taking Holly's message to heart, I wondered, could I apply the scientific method to this question?  Well, unfortunately no, not without having collected data on possibly relevant variables last year as trees were making their acorns -- or even two years ago, since the acorn production of some oak species is a two year affair.  And that would have required knowing the relevant variables. 

So I did the next best thing.  I Googled 'variation acorn production', thinking someone must have applied the scientific method to this question and done the requisite hypothesis testing.

Yes, they have (of course!), and it turns out that people look at this from various angles.  Some are thinking of the downstream effects, so advise hunters on how to maximize acorn production, because acorns are a staple food for deer.  Others are thinking about sustainable forests, and about how to keep deer from eating the acorns that they hope will go on to produce new trees.

Even so, this approach was not nearly as productive as I'd hoped.  In fact, I kept coming up with a similar theme:
The number of seeds produced by a population of woody plants can vary markedly from year to year. Unfortunately, knowledge of the patterns and causes of crop-size variation is limited....
However, little is known about the proximate factors that control the yearly variation in acorn production in oak species...
Most acorn production studies note wide and consistent differences in acorn productivity among individuals, but none clearly demonstrate determinants of productivity.

Hm.  Well, this site looked more promising -- a list of actual variables!
  A number of factors affect acorn production in oaks. 
  ∙ masting cycles
  ∙ acorn predators
  ∙ oak species
  ∙ tree age and diameter
  ∙ weather
  ∙ tree dominance
When combined, these factors make acorn production highly variable from year to year, between the different oak species, between trees of the same species, and from one property to another. 
And further,
Much of the variability in acorn production is the result of a natural cycle in oaks called "masting".  In this cycle, oaks produce low or moderate acorn crops most years, and an abundant acorn crop once every two to five years.  Acorn production during an abundant crop year may be 80 percent higher than in a low production year; the difference to deer can be hundreds of pounds of acorns per acre.  Although the exact mechanisms that control masting are not fully understood, biologists believe that oak species, weather, and genetics are important factors that determine how often oaks produce abundant crops.  
If we knew what masting was, this might be helpful, but probably not for answering my question -- there's that 'not fully understood' thing again.  And really, only 'weather' is a variable in this equation, as a tree's species and genes don't change season by season, so this isn't very helpful after all.

This was interesting:
We examined long-term patterns of acorn crop sizes for five species of shrubby oaks in three xeric upland vegetative associations of south-central peninsular Florida for evidence of regular fruiting cycles and in relation to winter temperature and precipitation.

And potentially rewarding -- by looking at different species in a single area they were able to control for variation in all the possibly relevant factors.  What did they find?  "[E]vidence that annual acorn production is affected by the interactions of precipitation, which is highly variable seasonally and annually in peninsular Florida, with endogenous reproductive patterns." Oh, so it's rainfall.

Except that, as it turns out, a number of people have studied variation in acorn production in local oak species in different areas.  There's a report of a study in California and one in Appalachia, and even one in Japan in which sea breeze was a factor, none definitively confirming the rainfall explanation.

In frustration, I emailed a local forestry agent.  I haven't heard back.  It's possible he's out counting acorns. 

Ok, so I accept that there's no simple answer to this simple question.  The serious upshot of this little exploration is that here, too, complexity reigns.  Despite the list I cite above, who can really say what all the relevant variables are, not to mention measure them at the right time or place?  Oak flowers are wind-pollinated -- maybe acorn production depends greatly on wind catching the pollen at just the right time.  Which would be essentially unmeasurable.  And, perhaps variation in rainfall is a significant factor, but where and when?  The roots of mature oak trees run wide and deep, and when are which roots feeding which flowers?  And so on.

And how does one construct believable evolutionary (that is, adaptive Darwinian) scenarios for this?  There's no acorn gene!  (But, of course, it has been tried.)

And think how utterly confusing this must be for any squirrel who's just trying to use his experience to get ahead, to put away a good cache of meals, and wonders if he's going nuts because he's losing his memory.

But one interesting thing caught our eye here as we ventured away from our usual comfort level, scientific literature-wise.  Ecological studies, by their very nature, are less prone to reductive thinking than what we're used to.  "When combined, these factors make acorn production highly variable from year to year." By and large, these studies accept that the cause of variation in crop production is the result of interactions among various factors.

If only this were so readily accepted in genetics and anthropology.

Tuesday, October 26, 2010

Postulating causation

A prominent geneticist was on campus the other day to give a talk about what genomewide association studies (GWAS) can tell us about biology.  While he is more favorable about this method than we are, we were pleased to hear him say that GWAS alone are not enough, and that generally this method can, at best, be expected to identify a gene or genes that act in concert with other genes to produce a phenotype.  That is, polygenes: a polygenic trait is affected by many genes, and its variation is due to the joint contribution of variation in all of them.

But we aren't here to debate the merits of GWAS today -- we've done that often enough before.  We want instead to talk about the problem of demonstrating causation.  The speaker said that GWAS can be used to zero in on candidate genes, but once candidates are identified, the researcher must then demonstrate the function of these genes and their relevance to the trait or disease of interest.  And to do that, he said, we need a set of 'molecular Koch's postulates' applicable to genetics.

Robert Koch was a German physician and bacteriologist, and winner of the Nobel Prize in Physiology or Medicine in 1906 for his work on diseases such as anthrax, cholera, tuberculosis and rinderpest.  In 1890 he published a set of criteria that he believed should be used to establish the cause of infectious diseases.   These are the 'Koch Postulates', still in use today:
  • The bacteria must be present in every case of the disease.
  • The bacteria must be isolated from the host with the disease and grown in pure culture.
  • The specific disease must be reproduced when a pure culture of the bacteria is inoculated into a healthy susceptible host.
  • The bacteria must be recoverable from the experimentally infected host.
The corollary postulates for demonstrating molecular causation are these:
  • the wildtype version of a gene must lead to the wildtype phenotype
  • the mutant version must lead to the mutant phenotype
  • mutant + wildtype genes must lead to the wildtype phenotype (that is, the wt must rescue the phenotype)
  • mutant + mutant genes must lead to mutant phenotype (that is, the human mutant allele must lead to the mutant phenotype when inserted into a mutant mouse)
And, demonstration of causation includes study of the biochemical effects of a mutation, as well as the cellular, tissue and organismal phenotypes.
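To make the logic of that list concrete, the four molecular postulates amount to a conjunction: all must hold before causation is claimed.  A toy sketch (the names and the example results are entirely hypothetical, not any real analysis pipeline):

```python
# The four 'molecular Koch postulates' as a simple all-or-nothing checklist.
# Keys and results are hypothetical, for illustration only.
POSTULATES = [
    "wildtype_allele_gives_wildtype_phenotype",
    "mutant_allele_gives_mutant_phenotype",
    "wildtype_rescues_mutant_phenotype",
    "mutant_allele_reproduces_phenotype_in_model",
]

def causation_supported(results):
    """Causation is only claimed if every postulate is satisfied."""
    return all(results.get(p, False) for p in POSTULATES)

# Suppose three postulates hold but the wildtype fails to rescue:
results = dict.fromkeys(POSTULATES, True)
results["wildtype_rescues_mutant_phenotype"] = False
print(causation_supported(results))  # -> False
```

The point of framing it this way is that a single failed check, such as a transgenic mouse showing 'no phenotype', blocks the causal claim rather than merely weakening it.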

Insistence on taking GWAS beyond the identification of candidate genes is important, and is in large part a response to the many criticisms of the method.  It doesn't mean that the method will now identify single genes with large effects, as advertised, of course, but it does mean that the onus is on the researcher to back up his or her claim of genetic causation. So we applaud the idea of molecular Koch postulates.  

There are well-known problems with the original Koch Postulates, however, and one should take care when using them as a model for demonstrating causation.  Many microbes can't be grown in the lab (the bacterium that causes leprosy is one example), and there are no animal models for a number of human infectious diseases.  And indeed, with respect to postulates for demonstrating molecular causation, many transgenic mouse experiments have shown 'no phenotype' when the mutant allele is introduced into the host, and there is often great variation in phenotype among transgenic litter mates. 

So, the call to researchers to demonstrate causation is an excellent one, but all the usual caveats about how difficult that can be still apply.  

Monday, October 25, 2010

Dog Eats Book: A domestic application of the scientific method

[July 19, 2013: Now a teacher resource at Understanding Science]

We had been on a bike ride to our favorite burger joint and as soon as we opened our apartment door, we knew something was off.
Both dogs—Elroy and Murphy—usually greet us with varying degrees of affection, depending on how sleepy they are and depending on how long we’ve been away.

They express themselves by vocalizing, swishing their tails, jumping up with their forepaws, licking our hands or our faces if we bend down, and sniffing our legs and shoes. But this time, although Elroy was as hyper as ever, Murphy held back, sat down, pressed back her ears pulling her face with them, and hung her head as she stared up at us, weakly wagging the tip of her tail.

She looked guilty.

And her mopey expression was reminiscent of the Great Goose Down Throwdown of 2005 (see photo).

Something was definitely up. So we reluctantly hunted around for evidence of destruction and quickly found it.

The book that Kevin had bought just yesterday was naked: the punctured jacket was strewn in the corner, the back cover was chewed, and the front cover had two puncture holes. When the jacket was placed back onto the book, the holes lined up.

"Who did this?" Yes, we actually asked our dogs, trying not to smile or laugh and accidentally reward anyone.

We decided to get to the bottom of this. Of course we don’t want to punish the guilty party; that’s not effective and we’re not mad. The book’s still readable and even if it wasn’t, it’s just a book. We’re just curious about who shreds magazines, sucks on dish towels, rips twenty dollar bills in half and eats half, etc. when we’re not home and we hope to maybe someday figure out how to stop this behavior. Because we've witnessed Murphy wilt in the presence of Elroy's criminal behavior, we couldn't assume that her emotions were betraying her own crime.

So who dunnit? This chewed up book gave us an opportunity to find out which dog was the perpetrator, at least for this particular caper.

And thanks to the SCIENTIFIC METHOD, we can find out who the culprit is!

Who chewed up Kevin’s book?

Theoretical Orientation - Any biases and a logical, testable, reasonable model(s).
We assume that nothing supernatural is to blame for the maiming of the book. No ghosts or werewolves attacked the book. The book did not bite itself. We exclude the possibility that an intruder broke into our house, terrorized one book on Kevin’s nightstand, touched nothing else, and left without leaving a trace of his/her presence. Since there were two live animals with sharp teeth in the house, we assume that one of those two is responsible, and not some intruding animal with sharp teeth from outside.

Hypotheses - Falsifiable predictions generated from the theory. All possible, testable answers to our question.
Hypothesis 1: We cannot say whether it was Murphy or Elroy who destroyed the book.
Hypothesis 2: Elroy did it.
Hypothesis 3: Murphy did it.

Operationalized Hypotheses - The “recipe” that describes measurement, experiment, observation and how another could replicate the process.
Materials: A measuring device (ruler), the book, Elroy's teeth, Murphy's teeth.

Methods: Measure the distance between the upper and lower canines on both Elroy and Murphy. Measure the distance between the puncture holes on the book.

Hypothesis 1: None of the dog tooth measures match the puncture hole measure. (We cannot say whether it was Murphy or Elroy who destroyed the book.)

Hypothesis 2:
The distance between Elroy's upper or lower canines is the same as the distance between the puncture holes in the book. (Elroy did it.)

Hypothesis 3:
The distance between Murphy's upper or lower canines is the same as the distance between the puncture holes in the book. (Murphy did it.)

Data Collection - In the laboratory, field, library, kitchen, …
Elroy upper canine distance - 4.5 cm
Elroy lower canine distance - 4 cm
Murphy upper canine distance - 3.8 cm
Murphy lower canine distance - 3.5 cm
Distance between puncture holes in book - 3.5 cm
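With the data in hand, the matching step is mechanical enough to sketch in a few lines (a playful illustration; the 0.1 cm tolerance is our own assumption about measurement error):

```python
# Match each dog's canine-to-canine distances against the puncture holes.
teeth_cm = {
    ("Elroy", "upper"): 4.5,
    ("Elroy", "lower"): 4.0,
    ("Murphy", "upper"): 3.8,
    ("Murphy", "lower"): 3.5,
}
hole_distance_cm = 3.5  # distance between the puncture holes in the book

TOLERANCE_CM = 0.1  # assumed measurement error

# A dog is a suspect if either jaw's spacing matches the holes.
suspects = {dog for (dog, jaw), d in teeth_cm.items()
            if abs(d - hole_distance_cm) <= TOLERANCE_CM}

print(suspects)  # -> {'Murphy'}
```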

Conclusions - These should be cautiously limited because they are only based on the above steps.
Hypothesis 3 is supported (and Hypothesis 1 is not) because Murphy's lower canine-to-canine distance matches the distance between the puncture holes. Hypothesis 2 (Elroy did it) is not supported because canines farther apart than 3.5 cm could not have made the holes, and both of Elroy's measurements are farther apart.

Based on the distance between canines, we conclude that Murphy and not Elroy attacked Kevin’s book.

Further evidence is needed to determine whether Murphy also chewed up the back cover. However, we predict that if that evidence were available it would implicate Murphy as well. But, because our study focused on canine-to-canine distance, we can only conclude that Murphy made the puncture holes on the front cover.

Repeat/Replicate or Revise Theory
To affirm our conclusions we would take multiple measures, repeated by multiple observers, perhaps including a digitized computer observer. We could also recreate the book destruction by offering books to each dog, having them chew them up, and studying the remains. That could verify and distinguish Murphy's destruction patterns from Elroy's. However, we are satisfied with our study and our results and will not continue to further humiliate Murphy. We will apply these data to evaluate any future puncture holes. We could also use this study to launch a new one into the exhibition of guilt and guilty emotions in dogs!

In no way did our study suggest a revision of our theoretical orientation (i.e. natural, domestic causes versus supernatural forces or intruders). However, we cannot claim that our study falsifies the possibility that either supernatural forces or weird intruders had it in for Kevin’s book.

On the other hand, our study and our application of the scientific method deemed those untestable notions unnecessary.

And there you have it. What would we do without the scientific method?

Elroy is thrilled to be exonerated...

Friday, October 22, 2010

Nothing's fishy here--after all

Well, if you thought the story was fishy you were right....and wrong!  Many women have been fishing for their diet during pregnancy because they were told -- or they listened to the hype -- that one ingredient in the wrigglers (DHA) will make their babies more intelligent when they hatch out into the world.  Alas!  It seems to be untrue.

The latest Professor Sez Bulletin, this time from the unchallengeable source of all that's important (the NY Times) is that all this was just an old fishmonger's tale.  Every expectant mom's little Einstein-in-waiting isn't. It'll just be a human, faulty as all get-out, but loveable and precious just the same.
...a large study published Tuesday in The Journal of the American Medical Association suggests that the DHA supplements taken by pregnant women show no clear cognitive benefit to their babies. The study also found no evidence that DHA can reduce postpartum depression, except perhaps for women already at high risk for it.
Some previous studies have suggested that DHA, an omega-3 fatty acid in fish oil, can aid in a baby’s brain development if taken during pregnancy. But many of those studies were small or observed women already taking fish oil, who might be more health-conscious. The new study, with more than 2,000 participants, was a clinical trial in which women received either fish oil with DHA or a placebo (vegetable oil).

Is this too bad?  Only if you think that science, unlike God, works miracles upon request.

Of course, we have to add that our assessment here assumes that this latest Professor Sez Bulletin is correct.  But as you know, there's no reason to believe it, because the definitiveness of science stories is, too often, another old fishmonger's tale.

Anyway, there are lots of apparently good reasons to eat fish, until we fish out the seas, so we can place our bets however we want, as long as we keep our expectations within reality. 

Thursday, October 21, 2010

A Midsemester's Read

In today's post we review a nice play about science and society.  We are co-teaching an upper-level class called "Biology, Evolution and Society" -- which is just what it sounds like.  We cover genetics, evolution, the history of the ideas, their place in society, how they've been received, and so forth.  The students are great: interested and interesting, and fun to work with.

This week we're reading a play called An Experiment with an Air Pump, by Shelagh Stephenson, published in 1998.  The action takes place in England at the turn of the 18th century and the turn of the 20th.

It's a good read.  When the play opens, a scientist, his family and a friend are re-enacting experiments done by Robert Boyle and others in the 1600s with air pumps to explore the nature of gases, air, and vacuums.  One experiment by Boyle first demonstrated that animals require air to live, as depicted in Joseph Wright's painting above, An Experiment on a Bird in the Air Pump, from 1768.  Wright paints a complex array of emotions into this one small scene -- fear on the children's part that the bird will die, scientific curiosity and remove, the absorption of two lovers, the boy in the corner pulling the drapes or getting down the bird cage, it's not clear which.  The central figure seems to be inviting us, the observers, to be curious about the bird's fate as well.

Wright, a British landscape and portrait artist, painted this picture at the height of the Enlightenment, a time when it was thought that science and reason were the path to understanding and human social progress.  He produced a series of candle-lit paintings depicting scientific experiments with the kind of reverence usually reserved for religious scenes.  The Bird in the Air Pump is one of these. 

Stephenson bases her play on this painting.  Her 18th-century scientist, Joseph Fenwick, is a radical, sure that science and reason will put an end to the monarchy, that democracy will come, and that society will be much improved.  Calm during an uprising that he can hear from his window, he says, "The best tonic in the world is the sound of institutions crumbling." His two daughters keep trying to get him to watch a play they've been rehearsing, "a hymn to progress."  Fenwick's friend, Roget, however, questions Fenwick's belief that science will solve all social ills.

One of the characters is fascinated by the skeletal anomaly of a homely hunchbacked servant girl, whom he woos so that, when she is naked, he can examine her unusual skeleton, to tragic effect.  Anatomical knowledge was central to medical science in those days, the days of the grave-robbers.....

The 20th century scenes echo those of 200 years before, with scientists, questioners, lay people with strong ideas of their own.  The modern scientist, Ellen, is a geneticist, trying to decide whether to go to work for a company that is able to noninvasively determine the genotype of embryos soon after conception.  Her husband is disturbed by the moral and ethical implications of this kind of science, but Ellen has no qualms.
I could have avoided filthy commercialism and struggled along on bits of funding from now till doomsday.  I did consider it actually.  But this is too exciting.  I can't resist it, basically.  It wasn't an intellectual decision.  It was my heart.  I felt it beat faster when I thought of all the possibilities.
 Indeed, the play opens with Ellen explaining why she loves Wright's painting so much:
I've loved this painting since I was thirteen years old.  I've loved it because it has a scientist at the heart of it, a scientist where you usually find God.  Here, centre stage, is not a saint or an archangel, but a man.  Look at his face, bathed in celestial light, here is a man beatified by his search for truth.  As a child enraptured by the possibilities of science, this painting set my heart racing, it made my blood tingle in my veins: I wanted to be this scientist; I wanted to be up there in the thick of it, all eyes drawn to me, frontiers tumbling before my merciless deconstruction.  I was thirteen.  Other girls wanted to marry Marc Bolan.  I had smaller ambitions.   I wanted to be God. 
The painting described the world to me.  The two small girls on the right are terrified he's going to kill their pet dove.  The young scientist on the left is captivated, fascinated, his watch primed, he doesn't care whether the dove dies or not.  For him what matters is the process of experiment and the intoxication of discovery.  The two young lovers next to him don't give a damn about any of it.
But the elderly man in the chair is worried about what it all means.  He's worried about the ethics of dabbling with life and death.  I think he's wondering where it's all going to end....
And the play goes on from there.  Stephenson manages to cover in 96 pages -- or 2 hours on stage -- many important issues related to the role of science in society.  Can science be value free?  Should scientists be detached from the objects of their work?  Will reason lead to progress?  Can scientists play God?

These questions are, of course, relevant to many in our society today.

Wednesday, October 20, 2010

B vitamins and dementia -- time to start taking them yet?

So, the latest story about vitamin B12 protecting against dementia is making the rounds this week. The original paper is published in PLoS One, and describes the randomized, double-blind placebo-controlled two-year trial of high-dose folic acid and vitamins B6 and B12 in 271 older individuals.  These people underwent MRI at the beginning and end of the study to assess change in brain size that could be attributable to treatment, and the study concludes that brain atrophy in older people with mild dementia can be slowed by lowering homocysteine levels with B vitamins. And thus, presumably, the development of dementia can be slowed.

The BBC story says:
Certain B vitamins - folic acid, vitamin B6 and B12 - control levels of a substance known as homocysteine in the blood. High levels of homocysteine are associated with faster brain shrinkage and Alzheimer's disease.
The study authors believe it was the B vitamins' effect on levels of homocysteine that helped slow the rate of brain shrinkage.
And the size of the effect surprised the authors of the paper. 
"It's a bigger effect than anyone could have predicted," he said, "and it's telling us something biological.
"These vitamins are doing something to the brain structure - they're protecting it, and that's very important because we need to protect the brain to prevent Alzheimer's."
It's nice to see the focus being taken off genes, and put on environmental risk factors that may lead to this devastating disease.  This was a small and fairly short-term study, and of course the authors say that more research is needed, but it does sound promising, and perhaps we should all think about taking B vitamins with our cereal.

But hold on. 

A paper that seems to have gotten a lot less notice (we wonder why!?) is one that appeared online ahead of print in Neurology on Sept 22. This is a report of another double-blind controlled study of 299 men over 75 given either folic acid and vitamins B6 and B12, or placebo, and who were assessed for signs of dementia initially, at 2 and at 8 years. This study found that treatment had no effect on cognitive function in men, even while lowering homocysteine levels by 22.5%.

They point out that high homocysteine levels (tHcy) have been associated with dementia, and that levels have been shown by other studies to be lowered by 20% or so with B vitamins, as the PLoS paper describes, but that it's not clear whether this link is causal.  They also point out that this link has been studied before, but it has not been successfully demonstrated that B vitamins can affect risk of dementia.  Why not? 
Observational studies have consistently linked high tHcy to cognitive impairment, but the results of randomized trials have thus far failed to show any obvious benefits associated with tHcy-lowering therapy. These conflicting findings may be due to bias and confounding in observational studies, inclusion of prevalent cases of cognitive impairment in some trials, lack of power to measure small but important treatment effects, insufficient treatment duration, and recruitment of excessively healthy volunteers.
Ah, our old friends bias and confounding.
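The 'lack of power' item in that list is worth making concrete.  A back-of-the-envelope two-sample calculation, with all numbers hypothetical, shows how many subjects per arm are needed to detect a small effect:

```python
from statistics import NormalDist

# Hypothetical power calculation: subjects per arm needed to detect a
# difference of `effect_sd` standard deviations between two groups,
# using the usual normal-approximation sample-size formula.
def n_per_arm(effect_sd, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z(power)            # power requirement
    return 2 * ((z_alpha + z_beta) / effect_sd) ** 2

# Detecting a small 0.1 SD effect with 80% power at alpha = 0.05:
print(round(n_per_arm(0.1)))   # -> 1570 subjects per arm
```

By this rough yardstick, trials of 271 or 299 people can only reliably detect fairly large effects, so a null result in one and a positive result in another can easily coexist.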

And, outcome measures differ between studies, which may affect conclusions.  In just these two studies alone, one used a measure of cognitive impairment and the other brain size (though the authors state that they will publish on their cognitive results at a later time).  The paper in Neurology concludes by suggesting that "elevated plasma homocysteine is not a risk factor but merely a marker that reflects underlying common processes responsible for both dementia and high tHcy, and that homocysteine-lowering treatment with B-vitamins does not affect the long-term cognitive function of people at risk."

We will certainly be hearing more on this subject.  But we wouldn't buy stock in B vitamins yet.

Tuesday, October 19, 2010

Awww, I take it all back! Retraction in science

Scientists, like other people, don't like to be wrong. We make our reputations, satisfy our egos, please our children, and make our careers by being right--by seeing farther than has been seen before.  Being wrong gets old in a hurry, and it may be hard to recover from that if it happens too often.

No one likes getting old either, so, when the idea was published that one could reverse some aspects of aging transgenically,  it was great news.

Until.  Until it turned out that maybe it wasn't so true after all.
Harvard researchers have retracted a far-reaching claim they made in January that the aging of stem cells might be reversible.
A set of papers on this has been retracted, or warnings given that the reported findings may not be all that trustworthy. This story was in the news, but there's a site called Retraction Watch that tries to catalog and report these instances, so that other investigators or interested parties will know about the retractions. These are more than the Errata notices at the back of many journal issues (and there are routinely more of those than heretofore, because of the haste-to-publish rat-race we're all in).

No matter whether it's complete or not, this is a fine service if people know about it.  But while retractions are often reported in the news as splashy scandals, and they may involve miscreant or sloppy scientists or their lab staff, retraction should be encouraged rather than discouraged.  And the quicker the better!

Care should be exercised before submitting something for publication, though such care is probably regularly undermined by haste.  But if mistakes are caught, they need to be made widely known.  We build our careers on the backs of others' published work.  Cost, time, effort, and even policy or medical advice can be based on what is published.

Haste and other pressures lead to dissembling and mistakes in science, but there is (we think) very little outright fraud.  There are cases, however, and the offenders need to be sanctioned by ostracization from the profession.  But, again, it's rare and should not pollute attitudes towards science.  These mistakes are not as problematic as science going off in poor-payoff directions, which is systematic and can be common and costly.

We're all human and saying we've been wrong is important, and should be encouraged.  More is often learned that way, or even by 'negative' results, than by incremental or trivial positive results.  We were glad to learn of Retraction Watch.  Repeat retractors' operations need to be put under scrutiny.  But retraction for honorable reasons should be praised, not turned into scandal.

Monday, October 18, 2010

"Multikulti is dead"

Sometimes we ask ourselves if we're way out in left field when we occasionally caution that the wave of new genetic determinism needs to be watched lest it be a harbinger of a new era of eugenics.  Objections to this say that the new genetics is for biomedical and other improvements, not negative discrimination--but that, of course, privacy is needed to protect against the misuse of data by, for example, insurance companies.

And then an Angela Merkel will declare, to a standing ovation, that multiculturalism has 'utterly failed' in Germany.  Germany, of all places.

At "the beginning of the 1960s our country called the foreign workers to come to Germany and now they live in our country," said Ms. Merkel at the event in Potsdam, near Berlin. "We kidded ourselves a while. We said: 'They won't stay, [after some time] they will be gone,' but this isn't reality. And of course, the approach [to build] a multicultural [society] and to live side by side and to enjoy each other ... has failed, utterly failed."
The crowd gathered in Potsdam greeted the above remark, delivered from the podium with fervor by Ms. Merkel, with a standing ovation. And her comments come just days after a study by the Friedrich Ebert Foundation think tank (which is affiliated with the center-left Social Democratic Party) found that more than 30 percent of people believed Germany was "overrun by foreigners" who had come to Germany chiefly for its social benefits.
The study also found that 13 percent of Germans would welcome a "Führer" – a German word for leader that is explicitly associated with Adolf Hitler – to run the country “with a firm hand.” Some 60 percent of Germans would “restrict the practice of Islam,” and 17 percent think Jews have “too much influence,” according to the study. 

And the far-right is gaining all over Europe from an anti-immigrant backlash -- largely, but not entirely, anti-Muslim.  Many Roma have been deported from France in the last few months.  And of course this anti-immigration fervor is not restricted to Europe.  Here in the US we've got a level of rage we haven't seen for decades, as Frank Rich writes in the Sunday NYTimes.  And we've got the Tea Party movement, which officially is calling for smaller government, lower taxes, and strict interpretation of the Constitution, but the group is apparently most attractive to those with, shall we say, less than inclusive views.  Here we've lifted a paragraph from the Wikipedia description of the Tea Party:
Various polls have also probed Tea Party supporters for their views on a variety of political and controversial issues. A University of Washington poll of 1,695 registered voters in the State of Washington reported that 73% of Tea Party supporters disapprove of Obama's policy of engaging with Muslim countries, 88% approve of the controversial immigration law recently enacted in Arizona, 82% do not believe that gay and lesbian couples should have the legal right to marry, and that about 52% believed that "lesbians and gays have too much political power."

So the background of us vs. them is building.  What about the will to translate that to genetics?  Well, here's an excerpt from an interview published in New Scientist with Slavoj Žižek, the Slovenian philosopher and commentator:

You were in China recently and got a glimpse of what’s happening in biogenetics there.
In the west, we have debates about whether we should intervene to prevent disease or use stem cells, while the Chinese just do it on a massive scale. When I was in China, some researchers showed me a document from their Academy of Sciences which says openly that the goal of their biogenetic research is to enable large-scale medical procedures which will “rectify” the physical and physiological weaknesses of the Chinese people.

Is this true?  We can't confirm it, but it wouldn't be a surprise.  Not because it's China, but because we are in an age of belief in biotech and our ability to harness it to our will.  If it isn't true yet, it will be -- somewhere.

The New York Times is reporting that there is now a new museum exhibit open in Germany that shows that the holocaust was not something Hitler and his henchmen foisted off on a benign, unaware populace.   Instead, the populace put him into power.

There are many parallels between the rhetoric of the early Darwinian age, which started out piously voicing the idea that science could now improve humankind via genetics, and the kinds of rhetoric, such as we're citing here, that we hear so often today.  It started out mainly benign or even positive in the early eugenics era (encouraging the best of us to reproduce, and urging voluntary restraint on the rest of us unwashed).  But of course it turned coercive -- first in medicine, by the way -- and then murderously hateful.

Historically, this kind of thing happens most when a society is under stress.  We're seeing a version of that stress in the current recession -- the anger is palpable.  Is democracy robust to the schemes of the demagogues who would like power and would use emotive, anti-immigrant or religious crusading arguments to start a 21st century version of the eugenics era?  Hopefully so.  But there are many general societal parallels, including much of the rhetoric and even the invocation of Darwinian concepts (or their pious, medicalized, benign-sounding equivalents), so that one can't just dismiss the possibility.

Personalized genomic medicine can become personalized genomic discrimination.  If concepts like 'racial profiling' take hold, 'personalized' can become 'personalized + race'.  And we have to realize that if we believe that genome sequence can predict risk for essentially all disease traits -- and that's basically the claim, or hope -- then there's no reason to believe that other traits, including socially sensitive traits, won't be equally predictable.  This is what happened before, except that specific genes were not known (other criteria were used, such as family patterns).

And if you believe that you can predict complex disease effectively and if you believe in 'Darwinian' medicine, then you'll also be prone to argue that normal traits -- whatever suits your personal interests to advocate -- also reflect natural selection.  And that by definition means you place value differences on different versions of the trait -- like IQ. And if you believe these were selected differently to an important extent between human populations (i.e., 'races'), then there's a short line from there to drawing value judgments about races.  And then you can worry about the inferior individuals out-reproducing the superior and being a danger to society down the road.  This is exactly the trail of reasoning that we've been through before.

We aren't saying that this is all happening now, as we write.  We're just saying that, for those with antennae for this sort of them vs. us view of the world, the antennae are picking up a signal.  The lesson from the history of eugenics and what it spawned is that, like the fog in Carl Sandburg's poem, this stuff creeps in on little cat feet.

Friday, October 15, 2010

Science ethics in the genetic age

There are many discussions about research ethics and how genetic data may be relevant to such problems.  A recent story, with an interview of the person who unearthed it, concerns the syphilis experiments done in Guatemala.  These were carried out not all that long after WWII and the Nazi abuses, which, along with the Tuskegee study, led to protocols for protecting research subjects.  All of these things were done by ordinary, otherwise respectable scientists, including medical doctors.  How, right after WWII of all times, could they even think of doing such experiments?  The answer is that people have ways to rationalize what they do, think of themselves as basically good, and can easily objectify other people.

The debate today in regard to genetics often cites those studies, but it's fair to ask whether we're past anything like these studies, which had nothing to do with genetics anyway, or whether the possibility that such things could be repeated lurks underneath genetic data.  Is the sky falling, as Chicken Little feared, or is a new day about to dawn for human benefit?

Nobody expects any developed country to put in orders for box cars to haul off the Undesired, planning to be scientific this time, testing directly for genetic inferiority rather than, as the Nazis did, by assuming it, or by finding the helpless and experimenting on them (as in Guatemala and Tuskegee).  But many subtle forms of discrimination or abuse could occur that would not generate an Edvard Munch scream.  While many scientists are working furiously to use genetic data to develop better treatments, early disease prediction, or even direct transgenetic interventions, the possibilities that concern people are of things like using genetic data to deny insurance, employment, school admission, and so on, especially if done with data supposedly private or without some policy or consent that makes such things socially acceptable.

Much worse could happen, including direct abusive use of genetic data, in warfare or in other ways that are easy to imagine.  Whether cloning from stem cells, genetically designing children, and the like will happen nobody knows.  Whether the complexity of genetic mechanisms, which is still highly under-acknowledged, will thwart such attempts, time will tell.

Evolution has worked successfully (if not equitably or kindly!) to design us, with our strengths and weaknesses.  Genetic engineering could make a less crude cut, or improvement--a different, kinder evolution.  It would almost inevitably have social consequences, and not all of them would be good.  But no change is all good, and society always adjusts in response, if never perfectly.

Harder to control is the impulse to study 'them', as in Guatemala and even recent HIV/AIDS testing in developing countries, where protections are weaker and where things can be done that can't be done here.  Anyone who thinks otherwise perfectly respectable scientists would not engage in such things (American doctors have apparently been supervising torture, just as in the bad old days) doesn't understand human beings very well--or know their history.  Who doesn't realize that genetics can be used to do ill as well as good?  How and how much such work can or should be constrained has to be thrashed out in the political arena.

The issue is where the balance among benefits, regulation, abuse, litigation, and so on lies--between laissez-faire and all's fair--and how to find that balance.  It's no surprise that people's views roughly correlate with other aspects of their social politics.

We have our own views, as most people do, but the point of public forums is to hash out the issues and try to find the best path.  There needs to be resistance, but it shouldn't be mere obstruction.  The problem is that, unlike more rabid kinds of discrimination that can be driven by emotion alone, genetics--what we can and can't know from it, and what we can and can't do with it--is highly technical.  That makes it harder for the public or politicians to make the best decisions.

Thursday, October 14, 2010

Getting the Chile chills

We write a lot about what we think is wrong in science.  It's not that we expect people to be perfect, but we think science should always be under scrutiny because we, like any other group of people, can go off the deep end and become trapped in our own perspective--and self-interest.  There is a lot that is selfish in society, including science.

But sometimes, in the most noble of ways, we human creatures can do incredibly generous things.  The latest example is, of course, the effort to rescue the Chilean miners.  How much cost, coming from how many people's pockets, has been invested in saving just 33 men's lives--and they're not even scientists!  In the scope of things, it would have been cheaper to just say 'too bad, how tragic!'  A few years from now nobody would remember.  Even not so many years from now these 33 men will have died and been forgotten.

But it's hard to imagine anyone not being deeply moved by the effort and selfless generosity of what has been achieved.  This has been a global response.  Only after we drafted this post did we learn, much to our pleasure, that a company from near where we live in Pennsylvania was responsible for some of the rock drilling that got through to the miners.  Somehow, even though the miners are not related genetically to their rescuers, no kinship or fitness criteria came into play, except the kindred spirit of humanity.

The cold chills one experienced thinking of the fate of the trapped miners a couple of months ago are now chills of elation as, one by one, they emerge from their rescue tube.

So, we wanted just to take a breather and recognize something that seems unadulterated good, and to congratulate the many people who made it possible.

(There is, by the way, despite lots of selfishness, extensive sharing and cooperation in genetics, even if it's not heroic in the same way as the mine rescue.  And the credit goes to many people, including a lot of it to Francis Collins for working to keep data in the public domain.)

Wednesday, October 13, 2010

Cognitive ability: use it or lose it?

A study just published in the Journal of Economic Perspectives tells us that if we want to retain our cognitive ability into old age we shouldn't retire.  The old 'use it or lose it' axiom.  But, as Gina Kolata points out in the NYT write up about the study, it could just be that people who are losing their cognitive skills retire earlier than those who aren't.  Does this study sort this out?

The authors analyzed the results of memory tests given to 22,000 Americans over age 50, administered by the National Institute on Aging every 2 years.  The test is a measure of how well people remember a list of 10 nouns immediately after hearing them, and then 5 or so minutes later, after having been asked other survey questions.  A perfect score would be 20.  Studies have been done in a number of countries in Europe and Asia, so that cross-cultural comparisons can be made.

As Kolata wrote,
People in the United States did best, with an average score of 11. Those in Denmark and England were close behind, with scores just above 10. In Italy, the average score was around 7, in France it was 8, and in Spain it was a little more than 6.
Examining the data from the various countries, Dr. Willis and his colleague Susann Rohwedder, associate director of the RAND Center for the Study of Aging in Santa Monica, Calif., noticed that there are large differences in the ages at which people retire.
In the United States, England and Denmark, where people retire later, 65 to 70 percent of men were still working when they were in their early 60s. In France and Italy, the figure is 10 to 20 percent, and in Spain it is 38 percent.
So, if it's true that retirement leads to cognitive decline, what is it about retirement that would do this?  It's hard to think of a single factor that all employment shares, other than a paycheck -- not everyone has a schedule, or punches a clock, or irons a shirt in the morning, or has to be at an office at 9, or socializes at the water cooler, or leaves their work behind when they go home at 5.  We can't even say that everyone must be competent at their job (see our post on the Ig Nobel prize winning paper about this very subject.)  And certainly not everyone is happy at work.  Similarly, not all retirements are equal.  So what's the common denominator?

The authors propose two possible answers.  One is the "unengaged lifestyle hypothesis", or mental retirement which accompanies actual retirement.  The other is that with the prospect of retirement, the soon-to-be-retiree slows down mentally in preparation, and stops "investing in human capital".  The "on the job" retirement effect.

Well, neither of these is a very compelling explanation to us, for a number of reasons, including that a lot of people are busier in retirement than when they were employed, and that we all know employed people who checked out long before they ever gave retirement a thought.

Indeed, a number of further questions remain unanswered, in addition to Kolata's question of which came first, retirement or decline.  What would the slope of the line look like in 10-year-olds cross-culturally, for example?  Or 40-year-olds?  That is, is the difference in ability (and, by the way, the top average score, 11 words remembered correctly out of 20, doesn't strike us as all that good!) only apparent in older people?  Does the ability to remember 10 words actually indicate cognitive abilities such as reasoning or logic?

And, of course, there's the Bear Bryant phenomenon that we have to think about here at Penn State. Bear Bryant was the legendary coach of the University of Alabama football team.  At the tender age of 69, perhaps knowing that his best years were behind him, he decided to retire.  When a reporter asked him what he would do in retirement, he joked that in a week he'd be dead.  Four weeks later, he was!

Now this is relevant because our coach here at Penn State, the even more legendary Joe Paterno, is about to turn 84, with 2 years remaining on his contract.  That may be more games than his team will win the rest of the year, but the point here is that there have been quips that he doesn't want to end up like Bear Bryant.  So if he never retires.....he'll be immortal!

So the premonition spook could be a factor in people's retirement decisions, when, or perhaps if, they have a choice.

But before we decide to work til we drop, or work so we won't, we need to see the results of a more rigorous study.

Tuesday, October 12, 2010

Direct-to-consumer genetic testing -- again

An opinion piece published in Science on Oct 8, entitled "Regulating Direct-to-Consumer Personal Genome Testing" calls for the Food and Drug Administration to enact a multi-tiered policy for regulating DTC genetic tests.  The strength of regulation would be determined by the level of risk for a given test, determined by the kinds of decisions test results might lead the consumer to make.  The determination of what kind of earwax you have based on your DNA, for example, would be considered essentially risk free while breast cancer testing might be high risk, since you might decide to have a double mastectomy if the results suggest that your chances are great.  High risk tests would require pre- and post-testing counseling by qualified professionals.

In addition,
All tests should be analytically valid (able to accurately and reliably measure what they say they are measuring), and any clinical claims made about the test must be accurate and substantiated. Safety and effectiveness data should be developed, but for many tests, this should be done through enhanced postmarket surveillance and clinical studies, rather than a more stringent premarket approval process. Premarket assessment would focus on identifying tests with potential for egregious harms (e.g., tests with uncertain validity or utility that could profoundly alter the course of medical treatment) and keeping those tests off the market until further studies show an acceptable risk-benefit ratio.
The idea of regulating these tests for the benefit of the consumer is all well and good, but as we've said before, the idea that regulation is based on ensuring the accuracy of risk prediction is predicated on the assumption that such prediction can even be accurate.

Some genetic testing is highly accurate.  That would be prenatal tests for single-gene diseases of childhood.  And the field of genetic counseling is successfully based on those kinds of risk assessment.  But that's not what the DTC genetic testing companies are selling.  Earwax genes may well be harmless recreation, but DTC companies are primarily selling predictions of the chances you'll get things like type 2 diabetes, or heart disease, or age-related macular degeneration or Parkinson's disease.  These are conditions that have late ages of onset, and are most likely due to a combination of genetic and environmental factors.

And, it turns out that risk estimates for particular genes routinely vary wildly depending on the study, the study population, environmental exposures and other factors.  In addition, risk estimates for specific genes are generally quite low.  That is, they don't raise the risk of disease by very much.

There's another very important point here, too, that nobody wants to hear.  We know from extremely well documented data that risks of most of the relevant diseases (diabetes, cancer, heart disease, etc.) have increased greatly in living memory.  That's environment, not genes!  Even breast cancer risk fluctuates, even for people with the high-risk alleles, depending on when a carrier was born.  Genetic effects depend on context.  Yet knowing as we do that the effect of context can change, we also know that we don't know the future environmental exposures -- and hence the future risks associated with the same genotypes.  Many or most DTC risk estimates are, quite literally, passé.
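To make the point concrete, here is a back-of-the-envelope sketch in Python.  All the numbers are invented for illustration, not real risk estimates for any disease or gene:

```python
# Illustrative only: made-up numbers, not real risk estimates.
baseline_risk = 0.08   # hypothetical lifetime risk in non-carriers
relative_risk = 1.15   # a typical small per-allele effect size

carrier_risk = baseline_risk * relative_risk
print(f"non-carrier risk: {baseline_risk:.1%}")   # 8.0%
print(f"carrier risk:     {carrier_risk:.1%}")    # 9.2%

# Now suppose the environment shifts so that baseline risk doubles,
# as has happened for some common diseases within living memory:
new_baseline = 2 * baseline_risk
print(f"new carrier risk: {new_baseline * relative_risk:.1%}")  # 18.4%
# The environmental change swamps the genotype's contribution,
# and the old 'personalized' estimate is already out of date.
```

The carrier's extra risk from the gene is about one percentage point; the (hypothetical) environmental shift changes risk by eight.  Any DTC estimate computed under the old environment is stale the moment the context moves.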

So, again as we've said before, regulation of what these companies are selling is a legal issue that must be thrashed out, but probably even more important is the epistemological issue of whether what they're selling means anything at all.  It's in nobody's interest -- except the consumers' -- to acknowledge, much less deal with, this serious issue.

Monday, October 11, 2010

BIG shoes to fill!

China has apparently decided to take mythological beasts seriously.  They have launched a search for the mythical Himalayan Yeti, often known as BigFoot in the US because the Wild West mountains have also been rumoured to be inhabited by the same, or at least some, large ape-like non-human primate. 
Chinese researchers have been searching since the 1970s. There have been more than 400 reported sightings of the half-man, half-ape in the Shennongjia area. In the past, explorers have found inconclusive evidence that researchers claimed to be proof of Bigfoot's existence, including hair, footprints, excrement and a sleeping nest, Xinhua reported.
Is there no storytelling that we simply will not believe?

This particular quest has a long (and checkered) history.  There are legitimate Asian (though not North American) fossils of very large hominoid primates, such as Gigantopithecus.  There are apparently ancient Chinese manuscripts with references to, and we vaguely remember hearing that they included drawings of, large apes.  There was at least one lunatic anthropologist, Grover Krantz, who had tenure and drew a salary at an otherwise legitimate university (Washington State), and who spent years tracking 'Big Foot'.  Geneticists in Anthropology departments get reports of Big Foot sightings -- and requests to do DNA testing -- from the public all the time.

There's a book called The Long Walk, about some WWII prisoners of the Soviet Union who escaped and through many trials managed to cross all the way, over the Gobi Desert, to India.  The book is the personal recounting of the adventure by one of the survivors.  At one point, in a matter of fact way, the author describes how crossing mountains, they came upon several large, reddish apes of some sort--on the ground, not in trees.  The escape party waited til they felt safe before crossing the little valley, skirting these creatures.  The validity of this story has been attacked, but there can be various reasons for that.

Now, all of the supposed physical evidence is as bogus as a $3 bill.  But at what point do we say that there might be some truth worth searching for?  Many cryptozoologists put 2 and 2 together and get 5, which is at best what's going on here.  If there really were such a creature (alive today, that is), the odds are vast that we would have found bones or carcasses, or have pictures from someone who stumbled across them.  Too many people crawl over the earth for this not to be the case--certainly in North America where there are no wilds too wild not to have been explored or settled.

But one thing that keeps these searches going is that many species are being discovered in various parts of the world, some of them unexpected.  But these are either small critters, like insects and the like, that teem in the jungles, or deep-sea species that are hard to get at.  Nothing so spectacular as a huge man-like ape.  It's always possible, of course, that there are remote refugia.  But such mythical beasts necessarily must be claimed to live in such places, because that's the only way they could have escaped discovery.  Even Loch Ness, though not remote, has its deeps.

At least, the Chinese aren't going to spend US taxpayers' money on this wild-ape chase.  Well, at least not directly--since a lot of the money in China got there because Americans wanted their cheap junk, maybe we're paying for that wasted research, too.

Friday, October 8, 2010

Leave me alone or I'm going home!: The Heisenberg uncertainty principle in evolution and epidemiology

To a non-physicist, the gist of the Heisenberg uncertainty principle, or the observer effect phenomenon associated with it, is that studying an object changes the object.  You want to know the position and movement of a subatomic particle, say, but to find that out you have to study it with energy, like a light beam, which allows you to identify the position by seeing how the collision with your measurement beam occurs. But that alters the target particle's movement so you can't know both.

Similar kinds of issues apply to modern-day epidemiology.  We referred to this yesterday in a comment about maternal drinking during pregnancy affecting the future of the offspring.  A Heisenberg analogy for epidemiological studies goes like this: if, by studying something that has many contributing risk factors, you lead the persons being studied to change their behaviors -- and thus their exposures -- because they know the results of the study, you can no longer estimate what the exposure risks will be.

Often, the change in behavior is of such a magnitude that it's a major effect relative to the signal being studied.  If you stop eating pork because a study says that eating pork gives you a somewhat increased risk of warts (it doesn't--this is just a made-up example!), then the effects of pork-eating will be changed by virtue of the exposure change and the knowledge that this is being studied.  If this happens often enough--as it does with our 24/7, many-channel news reports--then tracking or measuring risks becomes very problematic, except for the really major risk factors (like smoking and lung cancer) which are robust to small changes.  The science and the scientist become part of the phenomenon, not the external observers that they need to be to do the science.  This leads to many of the serious challenges facing modern epidemiological, behavioral, educational, political, economic, etc. studies, including those of genetic causation.  And since trivial risk factors are mis-reported in the news as big ones, the signal-to-noise ratio is even less favorable to clear-cut conclusions.
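A toy simulation can illustrate the mechanism.  Everything here is invented (the exposure, the risks, the 'careful' trait); the point is only that when the people who change their behavior after a study differ systematically from those who don't, the apparent relative risk drifts away from the true one:

```python
import random
random.seed(42)

def observed_rr(n, p_quit_if_careful, base_risk, true_rr):
    """Toy cohort. 'Careful' people have half the baseline risk for
    reasons unrelated to the exposure, and after a health scare they
    quit the exposure more often -- so the remaining exposed group is
    enriched for the less careful, inflating the apparent relative risk."""
    cases = {True: 0, False: 0}
    totals = {True: 0, False: 0}
    for _ in range(n):
        careful = random.random() < 0.5
        exposed = random.random() < 0.5
        if exposed and careful and random.random() < p_quit_if_careful:
            exposed = False  # behavior change after the news report
        p = base_risk * (0.5 if careful else 1.0) * (true_rr if exposed else 1.0)
        cases[exposed] += random.random() < p
        totals[exposed] += 1
    return (cases[True] / totals[True]) / (cases[False] / totals[False])

# Before the scare: the observed relative risk is close to the true 1.3.
print(observed_rr(200_000, 0.0, 0.04, 1.3))
# After: 80% of the careful quit, and the estimate drifts well above 1.3
# even though the exposure's true effect never changed.
print(observed_rr(200_000, 0.8, 0.04, 1.3))
```

The exposure's biological effect is identical in both runs; only who stays exposed has changed.  The study itself, by being reported, altered the thing it set out to measure.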

There is a kind of Heisenberg analog in evolutionary terms, too.  If relative fitness--reproductive chances--is affected by both genomic and ecological contexts, and the differences are small, then what happens tomorrow to a given genetic variant is highly dependent on all sorts of environmental or other genotypic changes.  A given variant won't have the same relative effect tomorrow as it did today, and since evolutionary models are about relative fitness, the evolutionary landscape changes.

This becomes Heisenberg-like not because it's about observer-interference effects in this case, but because the context changes can be as great or much greater than the net fitness advantage of a genetic variant.  This means fate-prediction is difficult, and in this case the observer analog has to do with the screening-efficacy of natural selection.  Changes in the frequency of an allele can change its net fitness effect.  When fitness (like electron positions?) is not just contextual but essentially probabilistic, something that affects position (current relative fitness) affects evolutionary trajectory.  That's one reason evolution is  essentially unpredictable, except under unusually strong conditions, and in that sense not deterministic as it is viewed in the usual Darwinian concept, especially as put forward by those not versed in evolutionary theory--and that includes many biologists and all the blathersphere that invoke Darwin or natural selection in making pronouncements about society.
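The unpredictability described above can be seen in a minimal Wright-Fisher-style sketch (haploid, with the selection coefficient redrawn each generation to stand in for changing context).  All parameter values are arbitrary, chosen only so that the fluctuations dwarf the mean advantage:

```python
import random

def trajectory(pop_size, generations, s_mean, s_sd, p0, seed):
    """Track one allele's frequency when its selective advantage is
    redrawn every generation (changing context) on top of random drift."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        s = rng.gauss(s_mean, s_sd)                # this generation's context
        w = p * (1 + s) / (p * (1 + s) + (1 - p))  # frequency after selection
        # Binomial sampling = genetic drift in a finite population.
        p = sum(rng.random() < w for _ in range(pop_size)) / pop_size
        if p in (0.0, 1.0):                        # allele lost or fixed
            break
    return p

# Same small mean advantage (1%), but generation-to-generation
# fluctuations (sd 5%) dwarf it: replicate histories scatter widely.
for seed in range(5):
    print(round(trajectory(500, 200, 0.01, 0.05, 0.5, seed), 2))
```

Replicate runs starting from the identical frequency and identical mean advantage end up all over the map, which is the point: when the context term is larger than the net fitness advantage, the trajectory is effectively unpredictable.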

Thursday, October 7, 2010

The chaff for the grain: the effects of alcohol during pregnancy

A study just published in the Journal of Epidemiology and Community Health reports that light drinking during pregnancy does no harm to the child.  That is, children of mothers who were 'light drinkers' during pregnancy had no increased risk of 'socioemotional problems and cognitive deficits at age 5'.  This is a follow-up of a study that showed no risk of light drinking to three year olds in the same sample.

The story on the BBC website says:
Drinking one or two units of alcohol a week during pregnancy does not raise the risk of developmental problems in the child, a study has suggested.
Official advice remains that women abstain completely during pregnancy.
A study of more than 11,000 five-year-olds published in the Journal of Epidemiology and Community Health found no evidence of harm.
There were more behavioural and emotional problems among the children of heavy-drinking women.
Mothers were first interviewed when their infant was 9 months of age (sweep 1).
Questions were asked about mothers' drinking during pregnancy, other health-related behaviours, socioeconomic circumstances and household composition. Sweeps two and three of the survey took place when cohort members were aged approximately 3 and 5 years. At the age 5 years home visit cognitive assessments were carried out by trained interviewers and questions were asked about the cohort members' social and emotional behaviour, socioeconomic factors and the psychosocial environment of the family. 
Now, this raises a number of alarm bells for anyone used to thinking about study design issues.  For one, sensitive subjects are hard to measure accurately, especially by recall interview.  Sexual behavior, alcohol consumption, and so on are notorious for this problem.  Doctors, for example, routinely double the number their patients give them when asked how many drinks they consume per day.  And alcohol consumption during pregnancy is particularly sensitive.  So the reliability of the data is one possible problem here, especially given that they were collected by recall -- mothers were asked to remember how many drinks they had per week during each trimester of their pregnancy when their infant was already 9 months old.

But perhaps that's not the most important issue here.  Confounding is a potential study killer any time, but this study seems particularly fraught.  The authors do recognize that confounding could be a problem -- that many other factors, not just alcohol consumption during pregnancy, could influence socioemotional behavior -- and they try to control for some of these.  But given how many different things can affect a child's behavior between pregnancy and age 5 (that is, in fact, everything), it's very hard for us to believe, no matter how well possible confounders are controlled for, that it's possible -- or even sensible -- to try to boil down the explanation for behavior differences between 5 year olds to the difference between 3 and 5 drinks a week, especially given the fragility of the data on alcohol consumption.  And the idea has been prevalent that alcohol can pose a fetal danger even before the mother realizes she's pregnant; if she quits or cuts back when she does learn, reliance on recall may lose still more accuracy.

That said, let's turn to the results.  Here's the link (if you can get access) to the table that interests us most, the prevalence of socioemotional problems according to whether the mother never drank, didn't drink during pregnancy, was a light, moderate or heavy drinker.  Note that in every problem category -- every category -- the prevalence of the problem under study (conduct problems, hyperactivity, emotional problems, etc.), and the odds ratio, are higher in children of mothers who never drank than in either all children, or all children except for those whose mothers drank heavily during pregnancy.  That's in all the models they tested, from controlling for just one variable, age of child, to controlling for many different variables. Not drinking is a stronger risk factor than almost any amount of alcohol.
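For readers not used to these measures, an odds ratio simply compares the odds of a problem in one group against the odds in a reference group.  A minimal sketch, with made-up counts that have nothing to do with the study's actual data:

```python
# Hypothetical counts, for illustration only (not the study's data).
# The odds ratio compares the odds of a problem among one group
# against the odds among a reference group.
def odds_ratio(cases_exp, noncases_exp, cases_ref, noncases_ref):
    return (cases_exp / noncases_exp) / (cases_ref / noncases_ref)

# Say 120 of 1000 children in one group show conduct problems,
# versus 80 of 1000 in the reference group:
print(round(odds_ratio(120, 880, 80, 920), 2))  # → 1.57
```

An odds ratio above 1 means the problem is more common in the group than in the reference; the study's curiosity is that the never-drinking group is the one sitting above 1.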

Interestingly, and curiously, the authors use not-drinking-in-pregnancy as their reference, rather than never drinking at all.  Clearly they could be missing some sociocultural or other environmental confounders by doing that.  Also, the data are reported in terms of 5% significance, with little if any mention of a correction for the huge number of tests they have done.  They did find some sorts of trends, which mitigates this concern, but only somewhat.  They report this as a confirmation of earlier findings, but that, too, is a bit uncalled for, since this is an extension of, and hence not independent of, the earlier phases of the same study (it includes the earlier results, essentially).
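To see why the lack of a multiple-testing correction matters, here is a purely illustrative simulation: when many outcomes are each tested at the 5% level, some 'significant' results are expected by chance alone, even when no real effects exist anywhere.

```python
import random

random.seed(1)

# Purely illustrative: under the null hypothesis (no real effects),
# p-values are uniform on [0, 1], so each of, say, 40 tests has a 5%
# chance of crossing the 0.05 threshold by luck alone.
trials, tests_per_study = 10_000, 40
total_hits = 0
for _ in range(trials):
    total_hits += sum(1 for _ in range(tests_per_study)
                      if random.random() < 0.05)

# Average number of spurious "significant" findings per simulated study:
print(total_hits / trials)  # close to 40 * 0.05 = 2
```

So a table of 40 uncorrected tests should be expected to contain a couple of false positives on average, which is why corrections like Bonferroni's exist.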

When there are huge numbers of covariates and potential confounders, and countless other factors could in principle be unmeasured confounders, one has to be circumspect about this study.  Even if the results are correct as reported and interpreted, the net impact of moderate drinking only applies after regressing out the other measured factors.  Those factors may, in aggregate, have a much greater effect on childhood behavior risks than the net effect of drinking during pregnancy, so the absolute effect of drinking (even if alcohol is itself the actual cause) is going to be small.

If we can believe the data, and that never drinkers really never drank and so on, this certainly makes one wonder about the effects of unmeasured confounding variables, and raises the question of how much of the reported effect due to heavy drinking really is due to alcohol.

And yet again, about publishing obviously inconclusive studies....  Such studies are very costly, and their conclusions are likely to change years hence as environmental exposures, confounders, and ways of measuring things change.  It is not a shock to learn that a touch of a drink now and again is not a particular problem, especially if it relaxes the mother, say.  The most important question is not that, but whether very early drinking can lead to birth defects of various sorts.  The reason is that the very early embryo is only a few cells, with most of its differentiation yet to come, so damage to any cell can have proliferative effects.

But later, during the period of organogenesis and then mainly growth, fetuses are generally much more robust to small external exposures. There are many more important problems in pregnancy and child health than the rather trivial effects that this very large study, even if all its results are true, deals with.  And the solution to many of the real problems is moderation, and the reduction of socioeconomic disparities.  But those are deeper problems that society doesn't want to deal with.  It's safer to look at a blizzard of statistical data and talk, with serious demeanor, about the minor things that might be guessed at and perhaps even changed.

Wednesday, October 6, 2010

Air pollution causes diabetes?

Here's a paper in Diabetes Care that we found because it was written up in the New York Times on Monday.  Why?  Proper scientific forewarning?  Scaremongering?

The authors of the paper find a correlation between air pollution (fine particulate matter) and type 2 diabetes.  As the NYT puts it:
A strong link exists between adult diabetes and air pollution, according to a new epidemiological study by researchers at Children’s Hospital Boston. The long-term study builds on previous laboratory studies that have tied air pollution to an increase in insulin resistance, a precursor to diabetes.
The researchers used health, economic, geographical and other data to adjust for known diabetes risk factors, such as obesity, exercise, ethnicity and population density. After controlling for these factors, a strong correlation still emerged between diabetes prevalence and particulate air pollution.
So, those of us who live in polluted cities (such as readers of the NYT) now have to worry about getting diabetes through no fault of our own, just because we live where we do, on top of everything else we have to worry about.  And this one we can't outrun.  Go get a face mask, and hurry!

However, the Times' description of the study isn't quite right.  It doesn't tell the whole story.  From the paper itself:
The relationship between PM2.5 [particulate matter 2.5] levels and diagnosed diabetes prevalence in the U.S. was assessed by multivariate regression models at the county level using data obtained from both the Centers for Disease Control and Prevention (CDC) and U.S. Environmental Protection Agency (EPA) for years 2004 and 2005. Covariates including obesity rates, population density, ethnicity, income, education, and health insurance were collected from the U.S. Census Bureau and the CDC. 
The important fact that this was a county-level study was never mentioned in the Times story.  That is, the fact that the study looked at average diabetes prevalence rates, average obesity, pollution and so on, for whole counties, not individual exposures and covariates.

This is important because of a well-known epidemiological bias called the "ecological fallacy": the problem of attributing group-level characteristics to individuals -- equating group correlations with causation at the level of the individual.  We'd all agree it would be silly to, say, assume that everyone in a voting district is Republican because the county always votes Republican.  In the same way, though it's harder to intuit, a correlation between high pollution levels and high diabetes rates tells us nothing about any single individual's exposure, or duration of exposure, to pollution -- let alone whether it caused his or her diabetes.
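A toy simulation (with invented numbers, nothing to do with the actual study) can make the fallacy concrete.  Suppose county-level pollution and diabetes both track an unmeasured county factor -- poverty, say -- so that county averages correlate strongly even though no individual's own exposure affects his or her risk at all:

```python
import random

random.seed(42)

county_data = []             # (county mean pollution, diabetes prevalence)
within_x, within_y = [], []  # individual exposure deviation vs. outcome

for _ in range(200):
    poverty = random.random()              # unmeasured county-level factor
    county_pollution = 10 + 20 * poverty   # pollution tracks poverty
    p_diabetes = 0.05 + 0.10 * poverty     # risk driven by poverty only
    cases, n = 0, 500
    for _ in range(n):
        own_exposure = county_pollution + random.gauss(0, 3)
        sick = random.random() < p_diabetes  # independent of own exposure
        cases += sick
        within_x.append(own_exposure - county_pollution)
        within_y.append(1.0 if sick else 0.0)
    county_data.append((county_pollution, cases / n))

def corr(xs, ys):
    # Pearson correlation coefficient, computed from scratch
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

px, py = zip(*county_data)
print("county-level correlation:", round(corr(px, py), 2))    # strongly positive
print("within-county correlation:", round(corr(within_x, within_y), 2))  # near 0
```

The county-level correlation comes out strongly positive while an individual's own exposure, once the county is accounted for, predicts nothing -- exactly the gap between group correlation and individual causation that the fallacy names.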

There may well be alternative explanations for the correlation.  Perhaps diabetes care is good in that county and a lot of patients moved there, after being diagnosed, to take advantage of the care.  Or any number of other possible scenarios.  And the epidemiologist on the study seems to know this:
“We didn’t have data on individual exposure, so we can’t prove causality, and we can’t know exactly the mechanism of these peoples’ diabetes,” said John Brownstein, an assistant professor of epidemiology at Children’s Hospital Boston and co-author of the study. “But pollution came across as a significant predictor in all our models.”
Now, pollution may in fact cause diabetes.  Our point here is not about causation per se (though the biological link doesn't seem obvious from everything that's known about type 2, adult-onset, insulin-resistant diabetes, we certainly can't say it's impossible).  Our point is that the authors haven't convincingly demonstrated a causative link, and it was premature to rush this to print -- and for the NYT to pick up the story -- without better evidence.

The ecological fallacy is in every first-year epidemiology textbook -- and the authors of this paper even refer to it.  The related fundamental logical error is that the study equates correlation with causation -- even at the group level.  When authors know they face these issues, the proper thing is not to publish and call their eager friends at the Times, but to take the result as an indication that the possible connection may be worth following up in a proper study.  But that's not the era we live in.

Tuesday, October 5, 2010

The arrogance of science.

We have not read Sam Harris's new book, the soon-to-be bestseller, The Moral Landscape: How Science Can Determine Human Values, but we have watched his TED lecture on the subject, and read Appiah's review in the Sunday NYT and we're pretty sure we're not likely to read the book.  But of course that isn't stopping us from having something to say about it.

Two things disturb us about Harris' argument.  (If you've read the book and can tell us that reading it would change our minds, please let us know -- we'd love to be wrong on this.)  But as we understand it, Harris's argument is both arrogantly imperialistic -- or worse -- and non-Darwinian, which is rather ironic from someone arguing that science will out the Truth. The 'logic' of the argument is to put together intelligent-sounding phrases that have little actual content....especially little scientific content.

Best known as one of the New Atheists, Harris has written previously on how he knows there is no God.  He argues in his new book, and in the lecture, that only science can answer the questions of life's "meaning, morality and life's larger purpose" (as quoted in the review).

Which prompts us to ask, Where is existentialism when we need it?  Better yet, let's call it Darwinian existentialism.  If we are truly to take the lessons of Darwinian evolution to heart, we must accept that there is no "larger purpose" to life.  The only purpose to life, which we don't ourselves construct, is to survive and reproduce.  And even that is not a purpose to life itself, which to an arch Darwinian might be not to survive, so something better can do it instead.  Or to expend solar energy in some particular way.  To argue otherwise is to position humans above Nature, which is precisely what Darwin and his contemporary supporters argued was biologically not so (though even Darwin fell into that ethnocentric trap in Descent of Man).

Further, if we accept Darwinism in the raw, there is no meaning or morality for science to find. Meaning, morality and purpose are constructed by us once we've got food and a mate. As animals with history and culture and awareness of both, we imbue our lives with values and morals and meaning, but they are products of the human mind.  This doesn't mean that they aren't important, or compelling, or even things to live or die for, but those judgments are our own.  And people with the same genome can adopt very different senses of meaning -- each equally important and compelling.

According to Harris, science can uncover not only facts, but values, and even the 'right values'.  Just as science can tell us how to have healthy babies, science can tell us how to promote human 'well-being'.  And "[j]ust as it is possible for individuals and groups to be wrong about how best to maintain their physical health," he writes, as quoted in the review, "it is possible for them to be wrong about how best to maximize their personal and social well-being."

What is this well-being of which he speaks?  Who says we or anyone should 'maximize' it, and who are 'we' in this context?  Well-paid professors?  If he meant Darwinian fitness we might pay attention because that's the only objective measure of success that counts in a Darwinian world (unless it's ecosystem expansion, even if at the expense of particular species).  But what he means is something much less empirically tangible -- ironically for someone arguing that science will find it.  He means happiness.  This would be perfectly fine in the realm of psychology or Buddhism or philosophy, but, to our minds, this argument of his is on the same playing field with religious arguments about morality and purpose -- which of course he would not accept -- and even pre-Darwinian.

And, it wasn't that long ago that Science decided that homosexuality wasn't an illness to be cured, or that phrenology wasn't in fact enlightening, or that bleeding patients wasn't a cure -- and of course there are many other such examples.  When what was once True becomes False, what does this say about Science and its ability to find the ultimate Truth? Why would anybody think we're right today....unless it's from ethnocentric arrogance?

The Enlightenment period was the age in which the belief grew that modern science could be used to create a better world, without the suffering and strife of the world as it had been.  It was a world of the Utopians.  Their egalitarian views were opposed vigorously by the elitist right ('we're just scientists telling it like it is')  in the form of Thomas Malthus, Herbert Spencer, strong Darwinians, who opposed the more idealistic thinking.  The Science Can Find the Moral Truth view grew through much of the 19th century, but its consequence, 'modernism', was rejected after science gave us machine guns, carpet bombing, eugenics, the Great Depression, dishonorably wealthy industrial barons, and other delights of the 20th century.  The reaction to that went under various names, but included things like cultural relativism and anti-scientific post-modern subjectivism.  Unfortunately, like any Newtonian reaction, the reaction was equally culpable, if less bloody, in the opposite direction, by minimizing any reality of the world.

Cultural relativism, against which Harris rails, is the view that each culture is a thing of its own, and we can't pass judgment on the value of one culture over another, except through our own culture-burdened, egotistical eyes.  That is not the same as saying that we have to like someone else's culture, nor adopt it; nor need it be a goody-goody view that we have to put up with dangers from such cultures (like, for example, the Taliban).  But there is no external criterion that provides objective or absolute value.  Racism and marauding are a way of life in many successful cultures; maybe by energy consumption or some other objective measure theirs is best for their circumstances.  Science might suggest (as it did to the Nazis and Romans, and to some groups today) that their way is the only way, the best way, Nature's chosen way.

Science may be a path to some sorts of very valuable Truth, and better lives, such as how to build a safe bridge or have painless dentistry (the greatest miracle of the 20th century!).  Regarding many aspects of our culture, we would not trade.  We ourselves would love to attain the maximum happiness that Harris describes.  But it is an arrogance to assume that in some objective sense that is 'the' truth. 

And what if the 'facts' said that to achieve the greatest good for the greatest number (not exactly an original platitude, by the way) meant that people like us (and Harris) had to cut our incomes by a factor of 100, or 1000, for resources to be equitably distributed?  After all, the USSR implemented 'scientific' ideas of maximal good for the masses (communism, Lysenkoism, to the tune of tens of millions purged, frozen to death in Siberia, or starved because of failed harvests, and more).  The Nazi policies were explicitly based on the belief that Aryans were simply better than others, based on warped Darwinian truths, and we know what happened.

So, anyone who still harbors the smug self-confidence that one can find the ultimate truth through science is either another tyrant potentially in the making, or hasn't read his history.

Whether there can be some ultimate source of morality is a serious question, and if it has an answer, nobody's found it yet.  Religion has no better record than materialistic science, nor does secular philosophy.  Nor does Darwin provide that kind of objective value system, especially in humans, where very opposed cultural values can be held by people toting around the same gene pool.

The Darlings of the Smug rise, like mushrooms, in every society.  They are glib, but so are demagogues of other sorts.  They're all potentially dangerous -- or so are those for whom they serve as intellectual justification.  Again, that is not to say we should adopt someone else's values, nor that we should hold back from defending ourselves against those who threaten us.

Still, oblivious to these points, Harris argues, as does the far right in the US, that cultural relativism is wrong and should be completely and utterly discounted.  Here are some quotes from his TED talk:
How have we convinced ourselves that every opinion has to count?  Does the Taliban have a point of view on physics that is worth considering?  No. How is their ignorance any less obvious on the subject of human well-being?  The world needs people like ourselves to admit that there are right and wrong answers to questions of human flourishing, and morality relates to that domain of facts.  It is possible for individuals and even for whole cultures to care about the wrong things.  Just admitting this will transform our discourse about morality.
Again, how is this different from, say, the Aryan line which would say we have a right to decide and purge, all in the name of science (and, by the way, it was medical science as well as Darwinism)?  Why is this not the arrogance of imperialism all over again?

When the Taliban, the religious right and the likes of Harris and the New Atheists all believe that only they are the keepers of the Truth, dominion can be attained not by science but by the wielding of power alone.