Thursday, September 18, 2014

Malaria control now may not foretell the future

There are hailstorms, landslides, droughts, malaria and...the State. These are inescapable evils: such there always have been and there always will be.
                     Carlo Levi, Christ Stopped at Eboli: The Story of a Year, 1945
Malaria was once endemic in southern Europe, the UK, and the Americas.  It was present in Greece at least 4,000 years ago, and reached the Americas with or shortly after Columbus, eventually becoming an 'inescapable evil' when the slave trade was at its height.  With mosquito control using DDT, the elimination of standing water, and other measures, the disease was essentially eliminated in the US, Canada, and parts of South America by 1950.  But the factors that maintain malaria in a population are complex, and it's possible that with climate change, the global transfer of goods, and increasing immigration from endemic areas, this and other mosquito-borne diseases could return.

A recent episode of the BBC Radio 4 program, Costing the Earth, discussed the increasing spread of a number of mosquito species in the UK and throughout Europe.  In particular, the Asian tiger mosquito (Aedes albopictus), a voracious biter, has been spreading north from the Mediterranean for several decades.  It doesn't carry malaria, but it is a vector for other significant diseases including Yellow fever, Chikungunya, and nematodes that cause filariasis.  

Asian tiger mosquito; Wikipedia
The Costing the Earth episode pointed out that the mosquito is spreading for multiple reasons, perhaps a perfect storm of causation.  Temperatures are warming; sustained drought in the UK has meant that more and more people are collecting rain in buckets to water their gardens, and these standing pots of water have turned out to be a great reservoir for mosquito breeding; sustained wetland restoration is under way in the country; and tires shipped into the UK are frequently full of stagnant water, so a number of mosquito species are entering the country via that route.  And so on.  But mosquitoes that carry and transmit malaria still live in areas where malaria is no longer endemic, including the UK and the US.  So it's not the absence of the vector that explains the absence of disease.

The dynamics of epidemics are well-understood, in mathematical, ecological and cultural detail (Aeon magazine has just published an excellent and accessible description of the mathematics of epidemics).  In the case of malaria, generally speaking, to maintain the disease in a population the population must be over a given size, there must be a reservoir of infected individuals for the mosquitoes to feed on, there must be a large enough mosquito population to transmit the disease to enough susceptible individuals, the fatality rate can't be so high that the parasite is quickly eliminated by death, environmental conditions must be favorable to the vector, and so on.  And, cultural factors must be favorable as well, so that, e.g., mosquitoes can find people to feed on or to infect at the right time of day or night.  That is, there are multiple factors required to maintain disease in a population, and often the infection can be halted by interfering with any one of them.
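For readers who like to see the arithmetic behind that list of factors, here is a minimal sketch -- ours, not from the Aeon piece -- of the classic Ross-Macdonald reproduction number for a vector-borne disease.  Because the factors multiply, knocking down any single one (vector density, mosquito survival, the infected-host reservoir, and so on) can push transmission below the self-sustaining threshold.  All parameter values are hypothetical, chosen only for illustration.

```python
# Minimal sketch of the Ross-Macdonald basic reproduction number, R0.
# Transmission is self-sustaining only if R0 > 1; because the factors
# multiply, reducing any one of them enough can break the chain.
# All numbers below are hypothetical, for illustration only.
import math

def ross_macdonald_r0(m, a, b, c, g, n, r):
    """R0 for malaria-like, mosquito-borne transmission.

    m: mosquitoes per human
    a: human bites per mosquito per day
    b: chance an infectious bite infects a human
    c: chance a mosquito picks up infection from an infectious human
    g: daily mosquito death rate
    n: days the parasite needs to mature inside the mosquito
    r: daily human recovery rate
    """
    return (m * a**2 * b * c * math.exp(-g * n)) / (g * r)

baseline = ross_macdonald_r0(m=10, a=0.3, b=0.3, c=0.5, g=0.12, n=10, r=0.01)
vector_control = ross_macdonald_r0(m=10, a=0.3, b=0.3, c=0.5, g=0.5, n=10, r=0.01)
shrunken_reservoir = ross_macdonald_r0(m=10, a=0.3, b=0.3, c=0.01, g=0.12, n=10, r=0.01)

print(f"baseline:                  R0 ~ {baseline:.1f}")           # well above 1: endemic
print(f"kill mosquitoes faster:    R0 ~ {vector_control:.1f}")     # below 1: transmission fades out
print(f"treat the human reservoir: R0 ~ {shrunken_reservoir:.1f}") # also below 1
```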

So, eliminating the vector, the mosquitoes that carry the malarial parasite, is one approach to eliminating malaria, and, like anything that breaks the chain of infection, vector control can successfully control the disease.  But it's not required.  Reducing the prevalence of the disease to a level at which the host/vector ratio is no longer sufficient to transmit the disease widely, as was done in the US and Europe in the 1940s, is another way.  So we in non-endemic areas of the world live with, and are bitten by, potential vectors with no fear of malaria infection, though West Nile virus and Chikungunya are now another story.  Indeed, this map shows the distribution of vectors or potential vectors around the globe.

            Global Distribution of Dominant or Potentially Important Malaria Vectors

From Kiszewski et al., 2004. American Journal of Tropical Medicine and Hygiene 70(5):486-498, via the CDC.
Thus, where malaria has been eliminated, continued control depends on there not being enough infected individuals to sustain the infection.  Of course, this could change.  Southern Europe may be seeing the re-introduction of malaria, for example -- the disease has been spreading in Greece for the first time in four decades.  In this case it can be blamed on the economic crisis and austerity, because the government has not been able to pay for mosquito spraying, but the spread of malaria can't be blamed on mosquitoes alone; they've got to have an increased number of infected people to feed on as well.  And it's not just malaria -- other heretofore neglected tropical diseases are unexpectedly reaching the shores of the US and Europe, Chikungunya being one example.

We live in a dynamic, changing, interconnected world.  Malaria rates seem to have been declining in Africa and Asia, research into better prevention and control is ongoing, and researchers can claim many successes.  But, like the proverbial fluttering of the butterfly wing that unpredictably causes climatic chaos far away, if innocuous acts like saving rain water in the UK can have widespread and unpredicted effects, like increasing the spread of previously absent mosquitoes and thus potentially disease, it's hard to reliably predict that malaria will remain controlled in currently non-endemic areas.  This is not to be alarmist, but simply to point out that it's a bit hubristic to believe we can predict anything so complex, particularly when it requires predicting future environments.  We've said the same many times before about predicting complex disease from genes.

Carlo Levi (1902 - 1975) was an Italian painter, writer and physician.  Because of his political activism during the fascist era, he was exiled to a small southern town in Lucania, where he spent several years painting, writing and attending to the medical needs of the inhabitants.  He wrote about this time in his book, Christ Stopped at Eboli, published in 1945.  It's a fascinating story: political, ethnographic, scientific (or quasi so).  Malaria was a fact of life in southern Italy at the time, and Levi mentions it often in the book, including in this passage:
In this region malaria is a scourge of truly alarming proportions; it spares no one and when it is not properly cared for it can last a lifetime.  Productive capacity is lowered, the race is weakened, the savings of the poor are devoured; the result is a poverty so dismal and abject that it amounts to slavery without hope of emancipation.  Malaria arises from the impoverishment of the deforested clayey land, from neglected water, and inefficient tilling of the soil; in its turn it generates in a vicious circle the poverty of the peasants.  Public works on a large scale are necessary to uproot it.  The four main rivers of Lucania ... besides a host of lesser streams, should be dammed up; trees should be planted on the mountainsides; good doctors, hospitals, visiting nurses, medicines, and preventive measures should be made available to all.  Even improvements on a limited scale would have some effect...  But a general apathy prevails and the peasants continue to sicken and die.  
Levi may not have entirely understood the cause of malaria, but he clearly understood the vicious cycle of malaria and poverty, which he witnessed every day, all around him in exile.  As Dan Parker has written here on MT numerous times, economic development itself may be one of the best preventives we know.  But it may not always be enough.  It doesn't much matter which link in the chain of infection is broken; once repaired, we may have to figure out new ways to break the chain again.

Wednesday, September 17, 2014

Antibiotic resistance: Move the money (and control it)!

The BBC Radio 4 program Discovery had a two-part series (August 18 and 25th) on the real health danger that we face and the research challenge it presents.  No, not Big Data mapping of diabetes or cancer, or athletic ability or intelligence.  Instead, the episodes were about an impending biomedical disaster, one that essentially trivializes much of what we are throwing resources at: antibiotic resistance.

A crisis of antibiotic resistance seems imminent, or at least inevitable, both in the treatment of disease in hospital patients and in the control of communicable diseases.  This doesn't seem to be too speculative.  Some strains of infectious bacteria are already resistant to multiple antibiotics, and these are often contracted by hospital patients who were admitted for some non-infectious reason, and some infections are not responding to antibiotics at all.

If we no longer have antibiotics, of course, the simplest infection can become life threatening again; surgery, chemotherapy, kidney dialysis, even an ear infection will become risky, and infectious diseases will again be the killers in the rich world that they once were.

The antibiotic Novamoxin; Wikimedia Commons

Pharmaceutical firms simply aren't investing in antibiotic development as needed.  Not surprisingly, the reasons are commercial: an antibiotic that becomes widely used may be profitable, but not nearly as much as anticancer agents, or recreational drugs like Viagra, or anti-balding cream.  And, if it's widely used the cost may be lower but resistance is sure to evolve.  If saved for the most dire cases, then sales will be low, cost too high to bear, and not enough profit for the company.

The cost of development and testing and the limited duration of patent exclusiveness present additional issues.  So, nobody is investing heavily in antibiotic development, not even governments that don't have quite the greedy commercial motive for what they do.

The Ebola epidemic is another biomedical disaster that has caught international medical care unprepared.  This is a virus, and there is basically no proven antiviral agent for it; one with some apparent effectiveness seems to be in the works, and there are some other stop-gap strategies, but nothing systematic.  Still, the problem, dangers, and challenges are analogous to those in the fight against pathogenic bacteria.  Indeed, lately there's been discussion of the possibility--or inevitability?--that Ebola will evolve the ability to be transmitted via the air rather than only by physical contact with infected persons.  This is, of course, a repeat of the story of SARS, MERS, and other emerging infectious diseases, and surely not the last.

So the question for potential investigators or those worried about the looming disasters becomes: where is the money to solve these problems going to come from?  The answer isn't wholly simple, but it isn't wholly top secret either.

Move the money!
Developed countries are spending vast sums on the many chronic, often late-onset diseases that have fed the research establishment for the past generation or so.  These are the playgrounds of genomics and other 'omics approaches, and more and more resources are being claimed by advocates of huge, long-term, exhaustive, enumerative 'Big Data' projects--projects that will be costly, hard to stop, and almost certain to have low cost-benefit ratios or diminishing returns.

We already know this basic truth from experience.  Worse, in our view, many or even most of these disorders have experienced major secular trends in recent decades, showing that they are due to environmental and behavioral patterns and exposures, not inherent genetic or related 'omic ones. They do not demand costly technical research.  Changing behavior or exposures is a real challenge but has been solved in various important instances, including iodized salt, fluoridated water, the campaign against smoking, urban pollution, seat belts/air bags, cycle helmets and much else.  It doesn't require fancy laboratories.

Unfortunately, if we continue to let the monied interests drive the health care system, we may not get antibiotic development.  The profit motive, evil enough in itself, isn't enough apparently, and some of the reasons are even reasonable.  So we have to do it as a society, for societal good rather than profit. If funds are tight and we can't have everything, then we should put the funds we have where the major risks are, and that is not in late-onset, avoidable or deferrable diseases.

Let's not forget that the reason we have those diseases is that we have enjoyed a century or so of hygiene, antibiotics, and vaccines.  The 'old man's friend', pneumonia and things like it, were kept at bay (here, at least; the developing world didn't get the kind of attention we pay to ourselves).  But if we dawdle because we're so enamored of high-tech glamour and the sales pitches of the university community (and the for-profit pharmas), and because of our perfectly natural fear of the complex degenerative diseases, we could be making a devil's bargain.

Instead, what we need to do is move the funds from those diseases to the major, globally connected problem of infectious diseases (and to the similar problem of combating evolving pests and infections that affect our food crops and animals as well).  We need a major shift in our investment.  Indeed, quite unlike the current Big Data approach, combating infectious diseases actually has a potentially quick, identifiable, major payoff.  Some actual bang for the buck.  We'll also need somehow to circumvent the profit and short-term greed side of things.  Of course, shutting down some labs will cost jobs and have other economic repercussions; but the shift will open other jobs, so the argument from job-protection is a vacuous one.

"What?!" some might say, "Move funds from my nice shiny omics Big Data lab to work on problems in poverty-stricken places where only missionaries want to go?" Well, no, not even that.  If plagues return, it won't matter who you are or where you live, or if you have or might get cancer or diabetes or dementia when you get old, or if you've got engineered designer genes for being a scientist or athlete.

The battle to control infectious diseases is one for the ages--all ages of people.  It perhaps is urgent.  It requires resources to win, if 'winning' is even possible.  But we have the resources and we know where they are.

Tuesday, September 16, 2014

Akenfield, and lessons for now-age sustainability movements?

In the 1960s I was stationed as an Air Force weather officer in eastern England (Suffolk, or East Anglia).  I had my off-base lodgings in the intellectual town of Aldeburgh, on the shingle-beach of the North Sea coast.  Aldeburgh is a North Sea fishing town, but more notably the home of the distinguished composer Benjamin Britten, and was a long-time or passing-through place of many notable artists, writers and scholars in the early 20th century.  But Aldeburgh is something of an exception: East Anglia is basically a kind of wetlands rural agricultural area--scenic if you are just passing through, but a place of farming business if you live there.

Aldeburgh village and beach (Wikipedia)

In 1969 the author Ronald Blythe published a book, Akenfield: Portrait of an English Village, of reminiscences of Suffolk folk of various ages and professions.  That was when I was living there, but I didn't learn of the book until recently.  Akenfield is a fictitious name for a village, but the book's stories, told by the locals, are real.  This book is an evocative one, capturing the mood--and change--of an English village's way of life, as seen by people of all ages and occupations.  In your mind's eye, you can hear the birds and the livestock call, and see the farmers, shepherds, smiths and so on plying their trades.

Those familiar with Wendell Berry's work about American farm life, or Aldo Leopold's work on Nature and the American landscape, largely about the Midwest a half-century or more ago, will find Akenfield to have a similar mix of nostalgia by the old-timers, commitment by forward-looking younger people, deep love and dedication for the land, yet recognition of the harsh realities of the onset of industrial farming and the leaving of the land by the young, who headed for better-paying jobs in urban trades and factories.

Suffolk farm by Edward Seago, 1930s (Wiki images)
Tractors replaced horse-drawn plows, and the many farm laborers were replaced by machines.  Posh landowners have been replaced by more business-like owners.  Produce and livestock are processed through the landscape on a rapid, no-nonsense (and generally no-sentimentality) scale, unlike the mixed, small-scale, less commercial farming that had come before.  At the time, the villagers largely located themselves in relation to the two World Wars that had so affected England: their roles in the wars, rationing and hardship, and so on.

That was then....and still is, now
By the 1960s, large-scale business-farming had taken root.  Many of the issues discussed by the Akenfielders would sound the same today:  animals being treated in what for humans or pets would be considered horridly inhumane ways, people being driven off the land by machinery, generalists or money people replacing skilled craftsmen, the new rough treatment of the land compared to the mixed-crop smaller-scale farming of the past.  Chicken and hog farms already had become jails for their inhabitants who may never see the light of day in their short, measured, lifespans.

1969 was nearly 50 years ago!  In Akenfield in the '60s there were a few who clung to the older ways, who loved the land and refused to leave it, whose needs were simple and commitment great.  This was not for political reasons, but for local traditional ones.  I can't say much about how things may have changed in East Anglia since then, except that on my last (also nostalgic) trip through there to Aldeburgh, in 2006, the hog lots one passed were large.  No rustic slow-paced life!

These musings strike me as relevant to much that is happening today.  Industrial, now genomics-driven, agriculture is dominating, and many will say devastating, not only the nature of agricultural life but also the land itself.  Soil is being lost, monocropping risks major pest devastation, and large farms have become huge impersonal businesses.  And of course livestock practices are every bit as inhumane as ever they may have been.  Of course the argument now, as then, is that more is being produced to feed more people (and there are now a lot more people to feed in the world).

At the same time, some are trying to raise the alarm about what may happen if this continues.  Under the banner of 'sustainability', people are attempting to organize resistance to the Monsantification of the land, as one might put it.  There are small farmers who sell humanely raised, local, often organic, small-scale farm products.  There are those trying to use the land in a long-term sustainable way.

Is it pushing analogy too far to liken these scattered and often struggling movements to those who held on to traditional life a half-century ago?  They passed from the scene (as did communes, 'small is beautiful', and other similar movements of the '70s protest era).  Will the current movements flourish, or are they, like the trades of old, destined to pass into history?  If they do pass, will the industrial model sustain life, or destroy it?

Friday, September 12, 2014

So...it's not genetic after all! (but who's listening?)

Is it time to predict the gradual out-cycling of a focus on genetic causation and a return of environmental causation, in our mainstream scientific dialog (and funding-pot)?  Such a recycling is bound to happen--even if, say, genetics' claims were correct.  Why?  Because the new generation has to have something to show they're smarter than their elders, and because the abuse of genetic determinism by society is a nearly inevitable consequence of the fervid love-affair we're now having with genomics and its glittering technology.  But maybe there's another reason:  maybe genetics really has been oversold!  Is it possible?

Bees and societal (in)determination
Honey bee foraging is a social phenomenon, and experiments by various authors have found that only a fraction (in some studies, 20%) of the workers actually do most of the work.  But a recent controlled study reported in the journal Animal Behaviour by Tenczar et al. (vol. 95, pp. 41-48, 2014, but paywalled) found that if those 'busy bees' are removed, others step in to fill the work gap.  The gist of the evidence seems to be that among the gatherer work force (and presumably other castes as well, though that's not reported), there is a spectrum of contribution, and it's condition- or context-dependent.  As the paper says:
These bees resembled elite workers reported in a number of other species. However, our results also show that honeybee foraging activity level is flexibly adjusted during a bee's lifetime, suggesting that in honeybees, elitism does not involve a distinct subcaste of foragers but rather stems from an extreme of a range of individual activity levels that are continuously adjusted and may be influenced by environmental cues. ... these results support the view that individual workers continuously adjust their activity level to ensure that the colony's nutritional needs are being adequately and efficiently met, and that the net activity of the whole foraging population is likely to be one of the factors that influences this decision.
The authors note that these patterns have been studied, with varying levels of rigor, in many species of social insects.  While it is not clear that genetic differences are never partly responsible, the evidence is that social roles are not rigidly pre-programmed.  This study was presented by the NYTimes with a captivating video from the authors, and while that was nice and led us to the research story itself, the Times characterized this as a system allowing upward social mobility.  That's a bit of pandering to middle-class readership, and it didn't really critique this work in the context of today's prevailing genetic-deterministic viewpoint.  However, the idea of context-dependent roles, based on the needs and opportunities in society at large, is noteworthy and of course is something that also happens in humans.

Honeybee; Wikimedia Commons

This of course raises the question of how the bees perceive the colony's needs or the different roles -- or, if the role pattern is a spectrum of activity in each bee, how each bee knows when and what to do.  This relates to the bees' brains' ability to digest quite complex information and make decisions, something very interesting to try to understand, and something we wrote about here not long ago.

Intelligence
A new paper in PNAS reports the results of a large study of the genetics of IQ.  Essentially, the authors found three genes with very small effect and unknown functional association with cognition.  Indeed, one of the genes may not even be a gene.  To sort this all out, of course, they say they would need a sample of a million people.  One of the authors, faced with this mountain of chaff, is quoted this way in the news story:
Benjamin says that he and his colleagues knew from the outset that their efforts might come up empty handed. But the discovery that traits such as intelligence are influenced by many genes, each having a very small effect, should help to guide future studies and also temper expectations of what they will deliver. “We haven’t found nothing,” he says.
Nice try!  But the truth is that that is just what they have found: nothing.  Or, at least, nothing new; that is, no thing.  We knew very well that this was the most likely sort of finding.  We have countless precedents, including the results of countless earlier searches for genes for intelligence (and, for that matter, similar findings for most psychological/behavioral traits).  As with other traits, from normal ones like stature and IQ to body weight and major diseases of all sorts, we find polygenic control--countless contributing genetic factors with individually minimal effect.  This is so even though the heritability of the trait is usually substantial, meaning that variation in genes together accounts for a non-trivial fraction of the overall variation in the trait (the environment and other factors contribute the rest, usually around 60-70%).

But heritability is a persistently subtle and misunderstood (or ignored) measure.  Even with nontrivial overall heritability, the aggregate nature of the measure means we cannot say for any given individual whether his/her IQ is based on this or that particular gene, or is some specifiable percent due to genes (a notion that is itself difficult to make sense of when referring to an individual).  And heritability is often measured after taking out, or controlling for, the major real causal factors, such as age and sex.  Arguing for a sample of a million, if allowed and funded, is a huge fool's errand and a corrupt way to spend money (because it's mainly to keep professors off the street of unemployment).
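To make the polygenic point concrete, here is a toy simulation -- entirely our own illustration, with made-up numbers, not anything from the PNAS study -- of a trait influenced by a thousand loci of tiny effect plus environment.  The trait ends up with a respectable heritability even though no single locus explains more than a sliver of the variance, which is roughly the landscape these mega-mapping studies keep finding.

```python
# Toy simulation: a trait built from many loci of tiny effect plus environment.
# Heritability can be substantial overall while each locus explains almost nothing.
# Purely illustrative numbers; no relation to any real GWAS data.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_loci = 10_000, 1_000

freqs = rng.uniform(0.1, 0.9, n_loci)                        # assumed allele frequencies
genotypes = rng.binomial(2, freqs, size=(n_people, n_loci))  # 0/1/2 allele counts per person
effects = rng.normal(0.0, 1.0, n_loci)                       # small additive effect per locus

genetic = genotypes @ effects
genetic = (genetic - genetic.mean()) / genetic.std()         # standardize genetic values
environment = rng.normal(0.0, np.sqrt(1.5), n_people)        # environment: roughly 60% of variance
trait = genetic + environment

heritability = genetic.var() / trait.var()
per_locus_r2 = np.array([np.corrcoef(genotypes[:, j], trait)[0, 1] ** 2
                         for j in range(n_loci)])

print(f"overall heritability ~ {heritability:.2f}")                            # around 0.4
print(f"largest single-locus variance explained ~ {per_locus_r2.max():.4f}")   # a fraction of a percent
```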

Yet the issues in these cases are subtle, because we also know of many different individual genes that, when seriously mutated, cause direct, major, usually congenital damage to traits like intelligence.  Yet few if any of these genes show up in these mega-mapping studies.  It is this sort of landscape of elusive complexity that we need to address, rather than just building expensive Big Data resources that will largely be obsolete before the DNA sequence is even analyzed, based on the daydream that we are not, knowingly, chasing rainbows.

The primary question one thinks to ask is whether 'intelligence' is a biologically meaningful trait.  If not, even if it can be measured and be affected by genes, it isn't really an 'it' and one can't be surprised that no strong genetic influences are found even if the measure is stable and heritable.  Asking about the genetic basis of intelligence under such circumstances is not asking a well-posed question.

Baby stories
The other day we posted about the recent Science issue on parenting and its non-genetic influences: environmental effects on traits, how long-term and subtle they can be, and how they are not Genetic in the sense of the G-rush we are currently experiencing.  The stories are many and diverse, and they tell the same tale.

Here the fascinating question is how the various environmental factors could influence a fetus in factor-specific manners that even relate to the factor itself (e.g., maternal diet affecting the future baby's obesity level, or the effect of the mother eating garlic or being exposed to odors on taste preference or specific odor-related behavior in the child).  To answer such questions we have to know more than just about a gene or two.

So, why aren't these findings grabbing headlines?
The bee story made the front-page of the NYTimes, but mainly because of the video and not because it is a counter to the strong genomic hard-wiring ethos so often promoted by scientists these days.  Likewise, the baby influences made the cover of Science, but we didn't see a Hot-News blare announcing that genetics isn't, after all, everything.  And of course the IQ story didn't make that clear either, given that the author said he wanted studies of a million to find the real genetic causes of IQ.  And, determinists say this isn't going to change their mind about the genetics of intelligence, because it's definitely genetic.  

Will we, or when will we, see people begin to back off their claims of strong genetic determinism, and begin addressing the really hard questions concerning how complex genomes interact with complex environments to produce what we are clearly observing?  In my opinion, these questions cannot be addressed from a genetic, or from an environmental, or from a simple Gene + Environment point of view.

Wednesday, September 10, 2014

The Turner Oak effect: unexpected explanations

We are just catching up on a backlog of reading after three weeks away, which means that while the immediate subject of this post may be a bit dated, the topic certainly is not. The August 15 special issue of Science, which we're just now reading, is so fascinating that we can't let it go unremarked.  The issue, called "Parenting: A legacy that transcends genes," provides example after example of the effects of environmental factors on development, taste preferences, the way the brain works, disease risk, and many other aspects of life.  We can't of course evaluate the reliability of all of these results, but the evidence does seem to be pointing strongly in the direction of a mix of genes and environment in explaining the effects of parenting on growth and development.

We don't know whether the editors intended to mount such a strong challenge to the idea that genes are predominantly what make us who we are, but the subtitle suggests as much, and in our view that's certainly what they have done.  Indeed, we can't help noting that this is an unintended but eloquent counterpoint to Nicholas Wade's view of life, in which everything including the kitchen sink is genetic (or at least we assume he'd say this, since sinks are designed by Eurasians who, thanks to natural selection, are genetically of superior inventiveness).

Cover of Science, Aug 15, 2014
Given the papers in this special issue, it's clear that more and more is being learned about how extra-genetic factors affect growth and development. What the mother eats in the days around conception, uterine conditions before conception, conditions during development, components of breast milk, ways of parenting and so forth all apparently affect the growth, development and health of a child.  In vitro fertilization may have life-long effects including risk of disease, starvation during pregnancy may affect risk of disease in offspring, what a mother eats while she's pregnant can influence her child's taste for specific foods, lack of parental care during infancy and early childhood can have lifelong effects, maternal mental illness may affect the development of the fetal brain, and so on.

Lane et al. write about "Parenting from before conception".  Infant health, they write, seems to be particularly influenced by conditions during 'fertilization and the first zygotic divisions, [when] the embryo is sensitive to signals from the mother's reproductive tract.'
The oviductal fluid surrounding the embryo varies according to maternal nutritional, metabolic, and inflammatory parameters, providing a microcosm that reflects the outside world. In responding to these environmental cues, the embryo exerts a high degree of developmental plasticity and can, within a discrete range, modulate its metabolism, gene expression, and rate of cell division. In this way, the maternal tract and the embryo collaborate to generate a developmental trajectory adapted to suit the anticipated external environment, to maximize survival and fitness of the organism. But if the resulting phenotype is a poor match for conditions after birth, or if adaptation constrains capacity to withstand later challenges, offspring are at risk.
Further,
Maternal diet at conception has a major impact on the developmental program. Reduced protein content for just the first 3 days of embryogenesis retards cell proliferation and skews the balance of cell lineage differentiation in the blastocyst.  The effect of nutritional disturbance at conception persists through implantation and influences placental development and nutrient transfer capacity, then after birth, the neonate gains weight more rapidly, developing higher systolic blood pressure and elevated anxiety.
Some of the effect is epigenetic, that is, due to chemical modifications of DNA and its packaging that affect gene expression without changing the sequence itself.  And some of the effect, Lane et al. write, is on oocyte mitochondria.  These organelles, the "powerhouses of the cell", support blastocyst formation.  Their location and activity levels are known to respond to the mother's nutritional status, and ultimately affect the health of the child, as well as affecting gene expression in the brain, among other things.  Epigenetic effects on sperm, influenced by environmental conditions, also can affect the developing embryo.  But it's the "epi" in epigenetic that tells the tale: it's not the genetic (DNA sequence) variants that cause the trait difference, but variation in the use of the same sequence.

Many of the essays in this issue use the word 'plasticity', meaning that developing embryos are able to respond to various and varying environmental conditions.  If conditions are too extreme, of course, the embryo can't survive, but in general, how an embryo responds to immediate conditions may have lifelong effects.  From the review by Rilling and Young ("The biology of mammalian parenting and its effect on offspring social development"):
Parenting... shapes the neural development of the infant social brain. Recent work suggests that many of the principles governing parental behavior and its effect on infant development are conserved from rodent to humans.
That parenting has a strong effect on the infant's physiology, and that the effects of parent/child interactions have evolved to be strong, is not a surprise, of course, given that parenting in mammals is essential for the survival of the offspring.  And plasticity, or adaptability, is a fundamental principle of life.  We have referred to this as 'facultativeness' in the past.  Organisms that are able to adapt to changing environments -- within survivable limits -- are much better equipped to survive and evolve.  Indeed, the final piece in this special section on parenting is titled "The evolution of flexible parenting."  Parenting behaviors in many species are well documented to respond to environmental changes.  Put another way, it is not genomic hard-wiring that confers this kind of adaptability.

So, with all these examples of the interdigitation of nature and nurture, can we declare the death of genetic determinism?  Well, no.  Genetic determinism is alive and well, thanks in large part to Mendel and the resulting expectation that there are genes for traits out there to be found.  But in many ways we've become prisoners of Mendel -- while many genes have been found to be associated with disease, we know very well that most traits are polygenic and/or due to gene-environment interaction, and we've known this for a century.  So the idea that the effect of parenting might transcend genes shouldn't be surprising.  And the idea that there might be factors that we haven't predicted that affect traits such as diseases or how brains work shouldn't be surprising, either.

The BBC recently aired an excellent 25-part program called "Plants: From Roots to Riches" about the history of Kew Gardens, and because the gardens have been so central to botany for so long, about the history of botany in general.  The series is still accessible online, and well worth a listen.  I bring this up because a story told on one of the episodes struck me as a very apropos lesson about causation.  A "Great Storm" hit the UK in 1987.  This storm, with hurricane-force winds, did tremendous damage, killing millions of trees, 700 at Kew alone.

Before the storm, arborists had been concerned about a 200-year-old tree at the Gardens, the Turner Oak.  It was clearly not well; its leaves were stunted and its growth was slow, but it wasn't clear what was wrong with it.  During the storm, the tree was uprooted completely and tossed into the air, but as luck would have it, it came back to earth right in the hole its exodus had created.  The arborists decided it didn't need as much attention as many other trees in the gardens after the storm, though, so they left it until they were finished tending to the others.  That was three years later, at which time they discovered that the tree was thriving, growing again, and looking healthier than it had in decades.

Quercus x turneri at Kew Gardens; Royal Botanic Gardens

Why?  The arborists eventually realized that all the foot traffic at the Gardens had compacted the soil to the extent that the roots, and thus the tree, were suffering.  It turns out that the soil around a tree must be aerated if the tree is to thrive.

I love this serendipitous discovery.  A tree was ailing, no one knew why, until an unexpected event uncovered the explanation, and it turned out to be something that no one had thought to consider.  Many of the discoveries reported in the August 15 issue of Science strike me as of the same ilk.  Scientists have been looking for genes 'for' diabetes, taste, mental illness, obesity, and so on for decades now, and the explanation for these conditions may be instead events that happen even before conception, where it never occurred to anyone to look before.

There are numerous other examples; a few years ago it was reported that age at death (for late-life deaths, not infant mortality) is affected by the month in which someone is born.  The authors, for some reason, did not follow up this potentially very important finding.  Maybe the effect is due to seasonal foods consumed by the mother during what turn out to be the riskier months of conception -- if so, there should be lifelong evidence, if we but looked for it, of accelerated disease prodromes like obesity, hypertension and the like.

Perhaps the Turner Oak effect should be a thing -- it might encourage investigators to explicitly look for the unexpected.  What causes asthma? Could it be disposable diapers?  Who knows?  Broccoli has never been blamed for anything -- maybe it's time for broccoli to be implicated in some disease.  The problem is that we don't think to look because we all 'know' that broccoli is good for us.

Some ideas are kooky, but when it turns out that some kooky ideas really do seem to explain cause and effect, it means we shouldn't always be looking in the same place for our answers (the drunk under the lamppost phenomenon).  The cause and effect relationships described in the parenting issue of Science involve some unexpected environmental effects on gene expression -- epigenetic effects of various kinds -- and plasticity, meaning that cross-talk between genes and environment creates a give-and-take that can't be called genes or environment alone.  We don't know that these are final answers, but we know that we should expand our range of expected possibilities.

Perhaps the Turner Oak effect should guide more of our thinking in science.

Tuesday, September 9, 2014

Sloppy, over-sold research: is it a new problem? Is there a solution?

In our previous posts on epistemology (e.g., here)--the question of how we know or infer things about Nature--we listed several criteria that are widely used: induction, deduction, falsifiability, and so on.  Sometimes they are invoked explicitly, other times they are just used implicitly.

A regular MT commenter pointed out a paper on which he himself is an author, showing serious flaws in an earlier paper (Fredrickson et al.) published in PNAS, a prominent journal.  Fredrickson et al. is a report of a study of the genetics of well-being.  The critique, also published in PNAS, points out fundamental flaws in the original paper. ("We show that not only is Fredrickson et al.'s article conceptually deficient, but more crucially, that their statistical analyses are fatally flawed, to the point that their claimed results are in fact essentially meaningless.")  We can't judge the issues ourselves, as the paper is out of our area, but the critique seems to be rather broad, comprehensive, and cogent.  So, how could such a flawed paper make it into such a journal?

Our answer is that journals have always had their good and less-good papers, and there have always been scientists (and those who claim to be scientists) who trumpeted their knowledge and/or wares.  When there are credit, jobs, fame and so on to be had, one cannot be surprised at this.

Science has become a market, with its industry and university welfare systems, and a way for the middle class to gain societal recognition (an important middle-class bauble).  Journals proliferate, avenues for profit blossom, and university administrators stop thinking and become bean-counters.  Solid science isn't always the first priority.

Science was never a pure quest for knowledge, but it is now, we think, more of a business than ever before, with these various forms of material and symbolic 'profit' as coins of the realm, and the faux aspect can be expected to grow.  There isn't any easy fix, because raising standards to police the field better usually leads to its becoming more elite, closed, and exclusive, and that is itself a form of opportunity-abuse.

Our commenter did add that he can no longer trust research sponsored by the US government, and here we would differ.  Much good work is done under government sponsorship, as well as industry sponsorship (which can have its own problems).  The government is a loaded, inertial bureaucracy with its armada of career-builders, and that is predictably stifling.  But the general idea is to do things right, to benefit society (not just professors, or funders, or university administrators).  The problem is how to improve the standard.

The issue is not epistemological
Actually we think the comment was misplaced in a sense, because our post was about epistemological criteria--how do we know how to design studies and make inferences?  The comment was about the way the results are reported, accepted, exaggerated, and the like.  This is certainly related to inference, but rather indirectly we'd say.  Reviewers and editors are too lax, have too many pages to fill, too many submissions to read and the like, so that judgment is not always exercised (or, often, authors bury their weak points in a dense tangle of 'supplementary information').

That is, one can do the most careful study, following the rules, but use bad judgment in its design, be too accepting of test results (such as statistical tests), or use inappropriate measures or tests.  And then, often in haste or desperation to get something published from one's work (an understandable pressure!), submit a paper that's less than even half baked.

What is needed is to tighten up the standards, the education and training, reduce the pressure for continual grant funding and publication streams to please Deans or reviewers, and give scientists time to think, make them accountable for their promises, and slow down.  In a phrase, reward and recognize quality more than quantity.

This is very hard to do.  Our commenter's points are very well taken, in that the journals (and news media) are now heavily populated by low- or sub-standard work whose importance is routinely and systematically exaggerated to feed the insatiable institutional maw that is contemporary science.

Friday, September 5, 2014

When do you believe research?

At the end of the mini-course we taught in Helsinki, after a week of discussion of many essentially philosophy-of-science issues, including how to make decisions about cause and effect, how to determine whether a trait is 'genetic', and whether it can be predicted from genes, a student asked how we decide which studies to believe.  That is, responding to our questioning nature, he wanted to know how we decide which research reports to be skeptical about and which to believe.  I've been thinking a lot about that.  I don't really have answers, because it's a fundamental question, but here are a few thoughts.

The class, called Logical Reasoning in Human Genetics, is meant to get students thinking about how they know what they think they know.  Ken gave a lecture on the first day in which he talked about epistemological issues, including the scientific method. We're all taught from childhood that knowledge advances by the scientific method. There are multiple definitions, but let's just go with what's cited in the Wikipedia entry on the subject, in turn taken from the Oxford English Dictionary.  It's "a method or procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses." But most definitions go further, to say one adjusts the hypothesis until there is no discrepancy between it and the latest results.  This is how many web pages and books portray the process.

But this is awfully vague and not terribly helpful (if it's even true: for example, when is there no discrepancy between hypothesis and actual data?).  Who decides what is systematic observation, how and what we measure, how we conduct experiments, and formulate, test and modify hypotheses?  And even if we do agree on all this, it wouldn't give any hint as to which results should be believed.  Any that follow the method?  Any that lend evidence to our hypotheses?  There was plenty of evidence for the sun revolving around the Earth, and spontaneous generation, and the miasma theory of disease, all based on systematic observation and hypotheses, after all.  Clearly, empiricism isn't enough.

In his first lecture, Ken showed this slide:

The essential tenets of the scientific method.  Most of us would include at least some of these criteria in a list of essentials, right?  Ken discussed them all, and then showed why each of them in turn may be useful but cannot in fact be a solid basis for inferring causation.  One may hypothesize that all swans are white, and it may seem to stand up to observation -- but observing a single black swan does in that theory.  Figuratively speaking, when can we ever be sure that we'll never see a non-white swan?  So induction is not a perfectly reliable criterion for forming general inferences.  Prediction is an oft-cited criterion for scientific validity, but in areas of biology it depends on knowing future environments, which is impossible in principle.  Scientists claim that theories may never be provable but can always be falsified, which leads to better theory; yet scientists rarely, if ever, actually work to falsify their own theories, and one can falsify an idea with a bad experiment even if the idea is correct.  P-values for statistical significance are subjective choices: P = 0.05 was not decreed by God.  And so on.
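As a small illustration of that last point -- ours, not from Ken's slide -- if you run enough tests in which the null hypothesis is true by construction, the chosen P-value threshold simply sets the rate of spurious 'discoveries'.  The 5% is a convention, nothing more.

```python
# Illustration: when every null hypothesis is true, the significance threshold
# just fixes the rate of spurious 'discoveries' -- there is nothing sacred about 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests, n_per_group = 1_000, 50

pvals = np.empty(n_tests)
for i in range(n_tests):
    a = rng.normal(0.0, 1.0, n_per_group)   # both samples come from the same
    b = rng.normal(0.0, 1.0, n_per_group)   # distribution: no real effect exists
    pvals[i] = stats.ttest_ind(a, b).pvalue

for threshold in (0.05, 0.01, 0.005):
    frac = np.mean(pvals < threshold)
    print(f"P < {threshold}: {frac:.1%} of purely-null tests declared 'significant'")
# Roughly 5%, 1%, and 0.5% -- the threshold is a choice, not a discovery.
```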

So, then Ken added the following criterion:


This is probably a better description of how scientists actually do science.  And I'm writing this in Austria, so I'll mention that if you've read the Austrian philosopher of science Paul Feyerabend's "Against Method", this will sound familiar.  Feyerabend believed that strict adherence to the scientific method would inhibit progress, and that a bit of anarchy is essential to good science.  Further, he argued, the usual criteria, e.g. consistency and falsification, can be antithetical to progress.  Indeed, as a philosopher who took a long, hard look at the history of scientific advances, Feyerabend concluded that the best description of good science is "anything goes," a phrase for which he is famous, and often condemned.  But he didn't mean it as a principle; rather, it was a description of how science is actually done.  It is a social and even political process.

However, even an anarchic bent doesn't help us decide which results to believe, even if it does mean that sticklers for method don't necessarily have an advantage.

How do we decide?
A few weeks ago we wrote about a paper that claimed that tick bites are causing an epidemic of red meat allergies in the US and Europe.  Curious.  Curious enough to lead me to read 3 or 4 papers on the subject, all of which suggested a pattern of exposure and symptoms consistent with the habitat of the tick, as well as a mechanism that explained how the tick bite could cause this often severe allergy.  Seemed convincing to us.

But, someone on Twitter wasn't convinced:
The link is to a Lancet article, but it restricts its discussion to the anti-science claims of those who believe that Lyme disease is not what 'evidence-based' medicine says it is.
Similar to other antiscience groups, these advocates have created a pseudoscientific and alternative selection of practitioners, research, and publications and have coordinated public protests, accused opponents of both corruption and conspiracy, and spurred legislative efforts to subvert evidence-based medicine and peer-reviewed science. The relations and actions of some activists, medical practitioners, and commercial bodies involved in Lyme disease advocacy pose a threat to public health.
But should we be skeptical about all tick-borne diseases?  The CDC still lists a number of them.  I don't know enough about this subject to comment further, but it's interesting indeed that antiscience claims can themselves be couched in a semblance of the scientific method, or at least a parallel track, with its own 'experts', publications, peer reviewers, and so on.  In fact this makes the question of how one decides what to believe almost mystical, or dare we say religious.  Surprisingly, while it is often said that science, unlike other areas of human affairs, isn't decided by a vote, in reality group consensus about what is true is a kind of vote among competing scientists; the majority, or those in the most prominent positions, do tend to set established practice and criteria.

Or what about this piece, posted last week by the New York Times, on the effects of bisphenol A on ovarian health?  Evidence seems to be mounting, but even people in the field caution that it's hard to tell cause and effect.  Or what about the causes of asthma?  Environmental epidemiology has variously found breast feeding to be a cause, but also bottle feeding, excessive hygiene, and pollution.  Same methods -- "systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses" -- contradictory results.

Or, what about climate change?  How do we decide what to believe?  Few of us are expert enough in meteorology, geology, or climate history to make a decision based on the data, so essentially we must decide based on whether we believe -- yes, believe -- that the science is being rigorously conducted.  But how would we know?  Do we count the number of peer-reviewed papers reporting that the climate is changing?  If so, that's just a belief that peer review adds weight to findings, rather than simply being evidence of a current fad in thinking about climate, or a circling of wagons, or some other sociological quirk of science.  Do we count the number of papers or op-ed pieces written by US National Academy members, or Nobel prize winners?  In that case, we're even further from actual scientific evidence.

We can list one criterion that, today, must be true: the results must be evolutionarily sound.  Evolution is probably as close as biology comes to 'theory': descent with modification from a common ancestor.  If results don't fit within that theory, they are probably wrong.  But not definitively -- we should always be testing theory.

Here's another criterion that must be true when considering causation -- the cause must precede the effect.  (This is one in a list of nine criteria sometimes relied upon in epidemiology, the rest of which aren't necessarily true, as even Bradford Hill, who devised the list, recognized.)  But this isn't terribly helpful.  Many things can precede an effect, not just one, and many things that precede the event are unrelated to it.  Which such precursor do we accept?

Several criteria that might help are replication and consistency, but for many reasons they can't be considered sufficient or necessary.  They might confirm what we think we know -- but consistent and replicated findings of disease due to bad air, prior to the germ theory of disease, confirmed miasma as a cause.  Life is about diversity, and that is how it evolves, so replication is not a necessary criterion for something about, say, genetic causation to be true under some circumstances but not all.

Science is done by scientists in (and these days supported by) society.  We need jobs and we try to seek truth.  But one proverbial truth is that science should always be based on doubt and skepticism: rarely do we know everything perfectly.  Once we stop questioning -- and the hardest person to question is oneself -- then we become dogmatists, and our science is not that different from received truth in religion.

Scientists may rarely think seriously or critically about their criteria for truth.  We believe that there is truth, but it's elusive much of the time, especially in complex areas like evolutionary biology, genetics, and biomedical causation.  A major frustration is that we have no formal criteria for inference that always work.  Inference is a kind of collective, social decision process, based on faith, yes, faith in whatever a given scientist believes or is pressured by his/her peers to believe.  The history of science shows that this year's 'facts' are next year's discards.  So which study do we believe when there are important implications for that decision?  If it's not true that you can "use whatever criteria you want", for various pragmatic reasons, then what is true about scientific inference in these areas of knowledge?