Monday, February 28, 2011

Having a wonderrrful time....wissshhshh yur wr heeeeeeeeeer

Well, a special Commentary on Science is in order, because we live in State College, where special academic events often take temporary precedence, even to the extent that crowds obstruct the streets so you can't get to where you're going, such as, say, the library (you knew that State College is a university town, didn't you?).  But it really isn't just the library to which access was blocked off this time; it's the ordinary streets---those near the many buildings temporarily reconfigured for special-event experiments.

In this arts & sciences university, for one long weekend in the summer, it's the arts for whose special events the town is reconfigured (it's called the Arts Festival).  And in the winter it's science's turn.  This event is fondly known, with what for a science event is a strangely religious-sounding name, as State Patty's Day.  It's when people (often referred to as 'students') flock here from places as far away as, yes, Alabama (a 14-hour drive) to join the hands-on, direct-experience chemistry class.

You may think it unusual for people to go to such lengths for a science experience, but Penn State here is relentlessly insistent on unprecedentedly high (no pun intended) academic standards.  As one participant put it: “I don’t know any other place that has this much fun.”

Experimenters, 2011;
photo from the Centre Daily Times
Yes, we did say 'chemistry' and, perhaps pedantically, State Patty's experiments are narrowly focused on just a single chemical, known as alcohol.  The hands-on aspect refers to moveable or upsettable objects such as mailboxes, light posts, cars, and other similar items, and the experiment is to show how one tiny molecule, in large quantities, can achieve high-energy muscle-enhancing effects.  Lower-energy hands-on experiences involve what is known in the vernacular as 'groping', and show that this very same molecule also has powerful pheromonic effects; that is, it is related to uncontrolled mating rituals.

Another part of the direct-experience aspect of State Patty's is the digestive role that the self-same molecule plays.  It is a highly reactive molecule that, past a dosage threshold, triggers what (again we apologize for using technical language) is known as the Upchuck Reflex.  The UR is activated under almost any circumstance; that is, it is not always controlled by the subject of the experiment.  So it doesn't always occur in the lab, but often out on the sidewalks, in local residents' yards, and so on.

Additionally, and also apparently under little control, is the Urinary Response, again abbreviated UR (some terminological confusion, we realize).  Both URs occur anywhere and any time during the experimental phase.

Finally, because so many 'students' come here to take part in the State Patty's experiments, a large number of temporary employees (people normally employed in their towns around the state as police) are hired as lab monitors.  This reflects the fact that there is even yet another effect of this same tiny molecule:  it interacts with the Arrest Proclivity gene.  Those with the unfortunate genotype are assessed an additional lab fee (for some reason, re-named a 'fine' or 'bail').  Sometimes, the AP reaction is so extreme that a period of enforced, protective isolation is required.  For the victims' own good, you understand.

Now, Penn State is known for its ongoing 'adult' education programs, and the same applies to chemistry.  Because, though the class sizes are smaller, the same kind of first-hand experiments go on weekly here.  For a modest tuition fee (paid to the local establishment in whose lab you do the experiment), you can replicate the experimental effect week after week after week.  In fact, you may have heard about us a year or so ago on This American Life--which slanderously and unfairly referred to Penn State and its never-ending chemistry lab as "a drinking school with a football problem."

So, from near or far, welcome to the State Patty's day chem class!

Friday, February 25, 2011

The complexity of simple genetic disease

Cystic fibrosis is an ion channel disease that interrupts the flow of salts and fluids into and out of cells, and this affects multiple organs. The most serious consequence of the disease is the production of thick mucus in the intestines and lungs, which leads to respiratory complications, the leading cause of death among people with CF.

Cystic fibrosis is an inherited disease.  The causative gene, CFTR, was identified 22 years ago.  Over 1000 mutations associated with CF have been identified since then, many seen in only one patient or a single family.  In the US the most common mutation is F508del; this designation means that the amino acid that is normally the 508th amino acid along the chain that makes the CFTR protein has been deleted. Another mutation, G551D, is found in about 4% of patients in the US -- this mutation replaces one amino acid with another at the 551st position in the protein chain.
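As a concrete illustration of this naming convention, here is a toy Python sketch of how 'F508del' and 'G551D' map onto positions in a protein chain.  The helper function and stand-in sequence are our own invention for illustration -- this is not the real CFTR sequence, just a dummy chain of the right length with the relevant residues planted at positions 508 and 551.

```python
# Illustrative sketch: how CF mutation names like 'F508del' (deletion)
# and 'G551D' (substitution) map onto 1-based positions in a protein.

def parse_and_apply(seq, mutation):
    """Apply a simple deletion (e.g. 'F508del') or single-letter
    substitution (e.g. 'G551D') to a protein sequence string."""
    if mutation.endswith("del"):
        ref, pos = mutation[0], int(mutation[1:-3])
        assert seq[pos - 1] == ref, "reference residue mismatch"
        return seq[:pos - 1] + seq[pos:]          # drop one residue
    ref, pos, alt = mutation[0], int(mutation[1:-1]), mutation[-1]
    assert seq[pos - 1] == ref, "reference residue mismatch"
    return seq[:pos - 1] + alt + seq[pos:]        # swap one residue

# Toy stand-in for the ~1,480-residue CFTR protein: mostly 'A',
# with Phe (F) at position 508 and Gly (G) at position 551.
toy = list("A" * 1480)
toy[507] = "F"
toy[550] = "G"
toy = "".join(toy)

mut1 = parse_and_apply(toy, "F508del")   # deletion: chain one residue shorter
mut2 = parse_and_apply(toy, "G551D")     # substitution: same length, G -> D
print(len(toy), len(mut1), len(mut2))    # 1480 1479 1480
print(mut2[550])                         # D
```

The two mutation classes matter clinically because, as the post goes on to describe, a deleted residue and a swapped residue break the channel in different ways (processing vs. gating), and so call for different drugs.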

Identifying the CFTR gene created quite a lot of excitement about the potential for gene therapy, but the initial enthusiasm was pretty quickly dampened by the difficulty in transporting a normal copy of the gene to the required sites in the body. A different therapeutic approach was described in a paper in PNAS in 2009.
Most CF mutations either reduce the number of CFTR channels at the cell surface (e.g., synthesis or processing mutations) or impair channel function (e.g., gating or conductance mutations) or both. There are currently no approved therapies that target CFTR. Here we describe the in vitro pharmacology of VX-770, an orally bioavailable CFTR potentiator in clinical development for the treatment of CF. In recombinant cells VX-770 increased CFTR channel open probability (Po) in both the F508del processing mutation and the G551D gating mutation. VX-770 also increased Cl secretion in cultured human CF bronchial epithelia (HBE) carrying the G551D gating mutation on one allele and the F508del processing mutation on the other allele by ≈10-fold, to ≈50% of that observed in HBE isolated from individuals without CF. Furthermore, VX-770 reduced excessive Na+ and fluid absorption to prevent dehydration of the apical surface and increased cilia beating in these epithelial cultures. These results support the hypothesis that pharmacological agents that restore or increase CFTR function can rescue epithelial cell function in human CF airway. 
The pharmaceutical company that makes VX-770 has just announced the successful completion of a 48 week clinical trial of the drug. The results are impressive. Lung function was significantly improved, and
[h]ighly statistically significant improvements in key secondary endpoints in this study were also reported through week 48. Compared to those treated with placebo, people who received VX-770 were 55 percent less likely to experience a pulmonary exacerbation (periods of worsening in signs and symptoms of the disease requiring treatment with antibiotics) and, on average, gained nearly seven pounds (3.1 kilograms) through 48 weeks. There was a significant reduction in the amount of salt in the sweat (sweat chloride) among people treated with VX-770 in this study. Increased sweat chloride is a diagnostic hallmark of CF. Sweat chloride is a marker of CFTR protein dysfunction, which is the underlying molecular mechanism responsible for CF. People who received VX-770 also reported having fewer respiratory symptoms.     
This is exciting news for the CF community, even for those who don't have the G551D mutation, because the same company is currently testing a drug to correct for the effects of the F508del mutation.
In people with the G551D mutation, CFTR proteins are present on the cell surface but do not function normally. VX-770, known as a potentiator, aims to increase the function of defective CFTR proteins by increasing the gating activity, or ability to transport ions across the cell membrane, of CFTR once it reaches the cell surface. In people with the F508del mutation, CFTR proteins do not reach the cell surface in normal amounts. VX-809, known as a CFTR corrector, aims to increase CFTR function by increasing the amount of CFTR at the cell surface. 
This all has the potential to change the future for people with CF.  And it also means that if the function of a gene and mutations in that gene are understood, the parameters are there for potentially developing therapies.  We seem to understand a lot about this ion channel.  In fact, if these results are real--general, long-lasting, and clinically or lifestyle-important as well as statistically significant -- they probably will apply to many other CF patients with other mutations that are individually rarer but have similar effects on the CFTR protein. 

But, the gene for CF has been known for two decades, and a treatment for just 4% of people with the disease is only now beginning to look promising.  The difficulty of getting to just this point is a sobering reminder that 'personalized medicine' is going to be an order of magnitude harder for polygenic diseases.  And hopefully we won't have to take back these positive feelings about these potentially life-changing results--hopefully this 48-week trial is not being reported prematurely to boost stock prices, or anything cynical like that.

If it works as the current story suggests, these results exemplify what we personally have repeatedly said about medical genetics.  There really are good ways to spend genetics research effort, not on mindless GWAS mapping, but on traits that are tractably simple and that really are genetic in a meaningful sense.  This seems like a very good example of that principle even if, as is the case with other instances, only a fraction of all CF patients will benefit directly.

Thursday, February 24, 2011

Think before you speak--or not at all!

If one advised you to think before you speak, you might take that as sage advice to the quick-tongued. But it's more sobering than that. Despite lots of controversy, the idea that cell phone usage may not be brain-safe has arisen again in a story reported by the NY Times.

The study reported increased phone-side brain activity, raising questions about whether this can, over time, damage the brain.  Whether this holds up, or is in any way associated with the fear that phoning causes cancer, we can't say. Whether this study is scientifically sound as well as newsworthy is a similar question.

No matter what the answer turns out to be, if we ever do get an answer, it again exemplifies a major scientific question of our age, one which we've blogged about before (e.g., here): how to detect and understand very small risks. This issue is core, and it underlies much of the inconclusiveness of current epidemiological and genetic methods, observational studies and genomewide association studies (GWAS) alike.

Our statistical methods rely on sample sizes large enough to detect effects that are 'unusual' enough to take as serious evidence for cause. That is, large effects.  But 'unusual' enough, otherwise known as 'statistically significant', is a totally subjective judgment.  Something can look unusual by chance, and something truly very unusual (a perfect bridge hand, or all cherries on a slot machine) can seem to be fore-ordained, if you cannot do enough tests to prove that it was just what is expected by chance.

More disturbing is that something that is very rare, but very real can go undetected by our inferential methods.  The cause may be so weak that no adequate sample could be collected for the outcome to be statistically significant by the usual kinds of 'unusual enough' criteria.  But if it happens to you, it can kill you, and that's real enough to take seriously!
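To put rough numbers on this, here is a back-of-the-envelope Python sketch using the standard two-proportion sample-size approximation.  The risk figures are invented for illustration, not taken from any study:

```python
# Required sample size per group to detect a difference between two
# risks p1 and p2, at two-sided alpha = 0.05 (z = 1.96) and 80% power
# (z = 0.8416), using the usual normal approximation.

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# A substantial effect -- risk doubling from 1% to 2%:
print(round(n_per_group(0.01, 0.02)))    # a few thousand per group

# A small effect -- risk rising from 1.0% to 1.2%:
print(round(n_per_group(0.010, 0.012)))  # tens of thousands per group
```

Shrinking the effect from a doubling to a 20% relative increase inflates the required sample nearly twenty-fold; truly tiny effects quickly outrun any sample that could realistically be collected.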

Similarly, suppose some group--say, chatty teenagers who talk on their mobiles in class--has a 1% risk of some brain disease (on top of not belonging in the class in the first place).   Does that mean that each person has a mere 1% risk, small enough to ignore compared to the thrill of chatting up your favored co-ed?  Or does it mean that 1 person in 100 has a 100% risk, and the others, in fact, can talk all they want with complete impunity?
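Those two worlds can be simulated directly (a hypothetical sketch; the numbers are ours, not from any study), and the aggregate data cannot tell them apart:

```python
import random

random.seed(1)
N = 100_000

# World A: every person carries the same 1% risk.
world_a = sum(random.random() < 0.01 for _ in range(N))

# World B: 1 person in 100 is certain to be affected; the rest are immune.
world_b = sum(1 for i in range(N) if i % 100 == 0)

# Both worlds show roughly 1% affected -- indistinguishable from counts alone.
print(world_a / N, world_b / N)
```

Only individual-level information -- a biomarker, a genotype -- could separate the two worlds, and that is exactly what aggregate significance testing does not provide.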

There is no obvious easy way to get out of this box, to know what is 'real', other than what we declare to be real after we've run our Statistica program and got a p-value out of a significance test.  That may be our currently preferred way, but it's a poor way to try to understand the real nature of Nature.

Wednesday, February 23, 2011

Runner's high: a healthy dose of skepticism

Everyone knows what causes "runner's high", right?  It's endorphins.  We know that because someone reported increased levels of endorphins in a runner's blood after a long run sometime in the 80's.  Case closed.

Or not.  A post by Gretchen Reynolds at the Well blog in the NYTimes points out that endorphins don't cross the blood brain barrier, so can't be responsible for the euphoria of a good run.  Instead, neuroscientists are talking now about the 'endocannabinoid system', a neurochemical pathway in the brain, with receptors elsewhere in the body, that is involved in the reduction of pain and anxiety.

An experiment in 2003 found increased levels of endocannabinoid molecules in the blood of student subjects after doing hard exercise. 
The endocannabinoid system was first mapped some years before that, when scientists set out to determine just how cannabis, a k a marijuana, acts upon the body. They found that a widespread group of receptors, clustered in the brain but also found elsewhere in the body, allow the active ingredient in marijuana to bind to the nervous system and set off reactions that reduce pain and anxiety and produce a floaty, free-form sense of well-being. Even more intriguing, the researchers found that with the right stimuli, the body creates its own cannabinoids (the endocannabinoids). These cannabinoids are composed of molecules known as lipids, which are small enough to cross the blood-brain barrier, so cannabinoids found in the blood after exercise could be affecting the brain.
And new research is showing that, for example, mice love to exercise, and choose to do so -- until their endocannabinoid system is knocked out.  So indications are good that this is in fact the system that gets people addicted to running.  

We were interested in the report of all this in the Times, though, because it was written in a way that hit many of the buttons we try to hit here on MT. 
Whether this accumulating new science establishes, or ever can establish, definitively, that endocannabinoids are behind runner’s high, is uncertain. As Francis Chaouloff, a researcher at the University of Bordeaux in France and lead author of the genetically modified mouse study, pointed out in an e-mail, rodents, although fine models for studying endocannabinoid action, “do not fill questionnaires to express their feelings related to running,” and runners’ high is a subjective human experience.
First, a canonical belief was challenged -- endorphins cause runner's high.  And, a new explanation was offered based on what seems to be solid experimental evidence that makes sense -- it's endocannabinoids.  But, Reynolds cautions that we still don't know for certain that endocannabinoids cause runner's high, we may never know for certain, and we certainly can't know for certain from studying mice. But it's an intriguing possibility.

We love a healthy dose of skepticism -- or reality -- mixed in with some promising research.

Tuesday, February 22, 2011

Why? Risk and decisions about risk

Breast cancer isn't even that rare, unfortunately, but there is apparently still a lot that isn't clear about best practice when it comes to diagnosis and treatment.  We have earlier posted about stories related to whether mammograms are worth the risk--whether the radiation will induce too many cancers relative to what the screens detect, or whether they detect tumors that would mainly regress on their own.  A positive mammogram leads to some sort of follow-up, and this has its own risks and morbidities.

A story about a new study reports conclusions that more extensive biopsies ('open biopsies') are being done far too often, rather than less costly and less traumatic 'needle' biopsies.  If a tumor is detected, usually surgery is required, but in the former case this means two surgeries, and the story says this is considerably more difficult than a needle biopsy and one surgery.

There was a recent related story saying that lymph node biopsies or removal (in the armpit area through which breast cancer often metastasizes, when it spreads) were not worth doing, as judged by subsequent course of the disease.  And another story claims, at least, the discovery of another breast-cancer related gene--another type of test which, depending on risk estimates, will then lead to further decisions about further tests or treatment.

We know that when an absolute risk is very rare, and must be assessed by aggregate results from very large numbers of instances, it is difficult to make, much less evaluate, policy.  In the case of radiation, we can estimate the per-dose effect of high doses, but must extrapolate the dose-response curve to make a guess at what the low-dose risk, if any, might be.  This is the case with mammograms and even more so with exposures of radiation workers, dental x-rays, CT scans, and the like.
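The extrapolation step can be sketched in a few lines of Python: fit a line through the measurable high-dose points and project it down to a mammogram-scale dose, the 'linear no-threshold' assumption.  All the numbers below are invented for illustration; they are not real risk estimates.

```python
# Hypothetical high-dose observations: dose in mSv, excess lifetime risk.
high_doses = [100, 200, 500, 1000]
excess_risk = [0.005, 0.011, 0.024, 0.051]

# Least-squares slope through the origin: slope = sum(x*y) / sum(x*x).
slope = sum(d * r for d, r in zip(high_doses, excess_risk)) \
        / sum(d * d for d in high_doses)

# Project down to a mammogram-scale dose (~0.4 mSv, order of magnitude).
mammogram_dose = 0.4
print(slope * mammogram_dose)  # a minuscule extrapolated risk
```

The projected risk comes out orders of magnitude below anything an epidemiological study could confirm or refute, which is precisely the box described above: the low-dose estimate rests entirely on the assumed shape of the curve.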

The same issues arise in GWAS or efforts to detect natural selection at the gene level.  Very small effects are difficult to detect, evaluate, or prove.  We usually do so with statistical significance criteria, but often even large samples are not adequate because too many sources of variation impair the ability to convincingly detect the effect.  Things that are real but small can go undetected, and tests for them are vulnerable to interpreting fluke positives as true positives.

These are challenging issues for science, because we're very good at picking up strong signals that behave well relative to statistical evaluation criteria (like significance testing).  That ability itself may lure us to try--and expect--to be successful with very weak effects if we can but collect huge enough samples.  At present, it's not working very well: at least, reaching consensus is not easy.

And the problems apply even to breast cancer which, in this context, is not even that rare.

Monday, February 21, 2011

Autophilia: Where morality begins?

Autophilia:  No, it's not a sex term.  But it is a term we like to use for our human love for ourselves, which makes us act as if the world were made for us to exploit and abuse.   Some, of course, think that a kindly (to humans, not pigs or cows or mice) God set it up this way. Of course, life evolved in such a way that we have to eat, and what we have to eat is or was once alive.  Pigs don't want their throats slit, nor do carrots want to be eaten alive, in order that we may have satisfyingly filled post-prandial naps on the couch.

Sometimes, we decide that for scientific as well as for gustatory objectives, inflicting death, or even disease, on other species, including mammals like rodents and even primates, is justified.  Our autophilia over-rides other aspects of ethics.  But there should be limits....

The fat monkey
There is a story in the NY Times about funded science that uses monkeys to experiment with the health risks of 'couch potato' behavior--obesity, diabetes, and the like.
The corpulent primates serve as useful models, experts say, because they resemble humans much more than laboratory rats do, not only physiologically but in some of their feeding habits. They tend to eat when bored, even when they are not really hungry. And unlike human subjects who are notorious for fudging their daily calorie or carbohydrate counts, a caged monkey’s food intake is much easier for researchers to count and control.
To allow monitoring of their food intake, some of the obese monkeys are kept in individual cages for months or years, which also limits their exercise. That is in contrast to most of the monkeys here who live in group indoor/outdoor cages with swings and things to climb on.
To us, this work is thoroughly disgusting, if not immoral. While institutions purportedly have review boards that must approve every research project proposed by their employees (such as university faculty), this is largely nowadays based on CYA (protect yourself from lawsuit) criteria rather than whether the research really is not cruel to animal or human subjects, and/or whether the experimentation is scientifically justified.

From the Times: At 45 pounds, Shiva is twice his normal weight and carries much of it in his belly. He can eat all the pellets he wants and snack on peanut butter, but gets barely any exercise.
It is perfectly legitimate to want to reduce morbidity and impaired quality of life due to our lazy, McFood-laden lifestyles.  Certainly one should treat those who suffer.  But we don't need animals to be made sick--even rodents, and certainly not primates!--to show us what we already know.  We know how to address this problem, and it has to do with education, income and neighborhood equity, and regulation.  It is not a matter of lack of knowledge.

Of course, now we have people experimenting, and doubtlessly doing extensive genomewide mapping, 'epigenetics', expressomics, nutrigenomics, and all sorts of other made-up types of work to glamorize their value to society. And of course we must acknowledge that so long as that is the reward system, one cannot blame the professoriate for going for it!

But, we know how to solve the obesity-related chronic disease problem, we just don't have the resolve to do it.  How about rationing hours spent on video games or TV, intake of fast foods, mileage driven vs mileage walked, and so on?  How about providing wholesome shopping at affordable cost for areas that now are only McServed?  Or banning tobacco.  Banning leaf-blowers by subsidizing the price of rakes.  Outlawing golf carts for those under 65.  And so on.

Or, more drastically: if even then you don't follow decent lifestyle, you are on your own when it comes to health care costs.

Fat monkeys, fat people, professor welfare: what would we learn anyway?
The Times reports that at least one weight loss drug has been shown to be effective on these monkeys, and that gastric bypass surgery can be a treatment for diabetes.  They've also found a drug that makes monkeys gain weight, though it reduced appetite in rodents.  Whether or not these results justify keeping monkeys alone in small cages, and unable to exercise for years at a time is your call.  The story also reports that monkey research has taught us that it's total calories, not type of calorie, that makes us fat.  We've known that for decades.

In the long run, fat monkeys can't make a complex trait simple.  These monkeys are not inbred, a longstanding advantage of lab mice and rats, even though they are less similar to humans than monkeys are.  In fact, 40% of the monkeys don't get fat, which means there's something different about those that do.  Will researchers do GWAS now to figure out what that is?  With as much success as they've had explaining the genetics of obesity in humans?  

Is making monkeys sick much more than yet another playground for the professor class who can't think of more cogent things to do?  Is it much more than something that will give us something to do in between lattes, organic salads, and our time at the Nautilus club, where we can watch TV, yes, but only while treadmilling?

If we had a sane, moral research establishment and health care system, we would take care of these health issues first behaviorally, through lifestyle changes that we already know actually increase quality and length of life, on a large scale.  Then, those who still have morbid levels of obesity or diabetes, etc., would be easier to identify. They would be the ones for whom genetic, or sophisticated surgical or other approaches would be appropriate, would more likely be truly transformative, or might actually make a difference that the victims can't make on their own.

But that would mean we'd have to get our autophilia under control, and not justify absolutely everything in our own self-interested terms.  But that doesn't seem to be on the horizon.  How could it be curbed if we haven't even recognized that the autophilia is a (or the) problem?  And, since it seems characteristic of our species, it may actually be a GWASable genetic problem that our professor class could opportunistically seize upon to justify the next round of Omics!

Meanwhile, it may be a strong thing to say that many will object to, but have the project reviewers, funders, and investigators no sense of shame about what they are doing to these innocent monkeys?

Friday, February 18, 2011

Indecent Exposure

We thought we were parodying what's already absurd enough the other day in our post about the atomic bomb being the first Too-Big-To-Fail mega research project.  Actually, nobody suspected that this was the hydra whose heads couldn't be cut off fast enough, but the future was latent in its name: the atOmic bomb project.

Hydra, from Wikimedia Commons
Our past cynicism notwithstanding, Nature tops all that this week, reporting something we should have foreseen but even we in our cynicism did not.  It's the 'exposome'.  Researchers are proposing to wire people up and photograph or otherwise measure every breath they take, every bite they eat, every chemical they come into contact with, collecting both 'external' and 'internal' exposomes, to "reveal the effects of diet, toxins and other exposures", in order to determine which exposures cause which diseases.  

It's a bit of a technological challenge, but if there's anything we excel at it's overcoming technological challenges, so we assume the measuring devices will be made (funds will now likely be diverted to years of R and D to do that), and study subjects will be convinced to wear, swallow, implant or otherwise port cameras, breathalyzers, air quality monitors and so on for months or years at a time, and of course to donate blood, urine and fecal samples at specified intervals for genetic and exposure analysis.  Ready your every orifice!

The challenge of creating and maintaining the huge databases this kind of research will yield will be met, and statisticians will figure out new ways to analyze the data, and it will all keep many people busy for many years to come.  A far better grant bonanza even than the old-fashioned biobank idea, which is trivially small by comparison.

Exposomics. Brilliant! 

The tiny little problem with all this is that it's already very easy to predict the results. Let's not even consider the problem that yesterday's risks, being all that we can estimate, are an unreliable predictor of tomorrow's risks. Just as with GWAS, these studies will find effects, but they will be small and explain little and will not be useful for predicting disease.  The few strong ones will be hyped to death (more material for Nature to trumpet, naturally), but most of them will or could have been identified by less exotic means.

Risk factors with major effects are not difficult to identify.  Any Joe Blow, even without a micro-array, can do it.  Single-gene diseases or major environmental risk factors like smoking or lead paint or cholera are readily revealed by current genetic and epidemiological methods.  As the late curmudgeon David Horrobin, founder of the non-conformist journal Medical Hypotheses, once wrote, if you can't detect something in small samples (we think he may have said 30), then it's not worth detecting.

That may be going too far, but we already know that when there are multiple factors at play, each with a small effect, be they genes or environmental risk factors, we move into an arena where small samples won't do: either our methods fail, or the answers are unhelpful in any clinical or public health sense, for the same reason: if effects are too small to be detectable, they are too small, too ephemeral, or too fickle to be that useful.  Even if we can detect them, which GWAS, biobanks (and, yes, Exposomics) will occasionally do, it is far from obvious that the cost is worth the game.

When risks are very small, as, for example, in dental x-rays, and we know that but can't really estimate them, by far the most cost-effective approach is simply to restrain use to situations when something important is at stake.  That may not be the 100% best-in-principle approach, but in practice it will save far more than it costs.  And the research money can go to providing important dental x-rays for those who can't otherwise afford them.

Omics-itis is bound to spread.  We expect our local deli soon to have a placard outside saying:

Here now!  PeanutButterAndJellySandwichOmics!
Exposomics?  Really, now!

Thursday, February 17, 2011

The "Me!" parade, or is there a better way to think about funding realities?

So, proposed new budgets are suggesting a $1.6 billion, or 5% or so, cut in NIH research funding.  What we've seen in regard to the NSF budget is somewhat different: it may increase modestly, but with the funds clearly targeted to investment in science interactions, infrastructure, and education. These seem reasonable and not parochial, but of course NSF budgets are generally much less than NIH grants, and less often cover faculty salaries the way NIH does.

We knew about the NIH proposal because, as soon as the proposed budget cuts were announced, we were besieged by the "Me!" parade of 'urgent' messages from professional societies urging us (with nice assistance in the form of convenient links) to write our congressmen in outrage, to protest the very nerve of suggesting that research take a hit!

But why does this instant email lobbying go on?  Millions are homeless or even jobless, or have no health care, or have disorders that don't require exotic research to alleviate, and to keep the economy from total free-fall the feds had to go into debt that they now have to figure out how to get out of, so cuts have to go across the board.  Given this, why can't we be realistic and even good citizens to boot, and realize that a 5% cut in funds is not a cataclysm for science, somewhat less serious than being homeless, and something to which we should respond constructively, and with good grace?

If everyone in our society feels it's an automatic given that we'll protest anything disadvantageous to our personal selves, we'll descend into more internecine strife than the current situation is going to cause anyway.  Science has grown fat (and complacent?) on grant largess over recent decades, but have we delivered to society in commensurate terms?  Or have we become self-satisfied and ever willing to ask for more, bigger, longer, grander funds for feathering our own nests, with universities and research institutes living on the overhead that we (as their sales force) bring in?  Are the new data and findings in NIH-funded projects--which are very interesting, to be sure--our private playground, or are they really what the public tax base should be used for?  Are measures of public health improving as a result--if they're improving at all?

Investigators will certainly be able to manage, if we must, on less, and we think this could even be good in several ways.  Rather than lobby for more funds, why not lobby for more grants, even if they're smaller, and of shorter duration than they've become?  That way new investigators, young investigators, and people who actually have clever new ideas can have a better chance!  Why not cap the amounts any given lab can have, or stop projects that have been continued too long, or have grown too large, or have reached diminishing returns--and divide the savings up among people who offer something new?  Why not do more centralizing and sharing of costly hi-tech resources, and be more stringent about funding hi-tech but low-thoughtfulness projects?

Maybe schools that have grown fat on overhead with inflated but unpaid (soft-money) faculty, driven to flood the system with relentless grant applications, will have to develop a new sense of socially responsible mission, even if this means shrinking in size, and paying more attention (heavens!) to teaching.  The reversal of a 30-year trend towards growth for its own sake would not be an entirely bad thing. With the current age distribution, phasing back could be done as people retire and simply aren't replaced.  We don't need as many graduate students in an environment that is not able to grow exponentially--even if students are the trophies we like to wave about to demonstrate our importance.

Maybe departments will have to think about importance rather than dollars, when they hire new faculty.  Maybe as we tighten our belts, it will push the blood back to our brains, and the constraint will force new, creative thinking, and a new day for science that is both innovative and socially responsible.

Wednesday, February 16, 2011

The Wars Within

Life is an ecology at all levels, from the biosphere itself (sometimes referred to as 'Gaia') to the rumbling that goes on inside you as you and the parasites that want to eat you from the inside out struggle for supremacy.  Ultimately nobody wins; these are age-old struggles.  But there are different strategies for at least winning battles.

Different stages in the life cycle require different levels of resource allocation.  Thus, according to a new paper in The American Naturalist, "Competition and the Evolution of Reproductive Restraint in Malaria Parasites," by Pollitt et al., the trade-off in resource allocation between more and less costly life stages is a "key problem for natural selection to solve."

CDC image, Wikimedia Commons.  Parasite life cycle described here.

To address this issue, Pollitt et al. look at the life stages of malarial parasites to propose a solution to the problem of how evolution decides to most efficiently divide resources between different life-history stages.  The work is described on the BBC website here.  They have a particular interest in the inter-parasite dynamics in the bloodstream of an infected organism.

For malaria parasites, in-host replication and between-host transmission are two distinct stages of the life cycle.  How, Pollitt et al. wonder, does evolution solve the problem of how to allocate resources between these two stages?
This is analogous to the trade-off between reproduction and maintenance faced by multicellular sexually reproducing organisms.  The assumption that reproduction is costly, resulting in tradeoffs between reproduction and survival and between current and future reproductive effort, is a key concept in evolutionary biology.
And, they say that, in spite of the toll malarial infection takes around the world, little is known about the investment strategies of malaria parasites, although they seem to invest "remarkably little" in transmission during infection. But many infections are by a genetically heterogeneous mix of parasites, and in-host competition for resources between the different infecting genotypes seems to lead to reproductive restraint.
Previous studies suggest that when in-host survival is threatened, parasites increase investment in between-host transmission at the expense of in-host replication, but recent evolutionary theory predicts that the opposite should occur.
The recent discovery that malaria parasites can detect and respond to the presence of unrelated competitors suggests that they could also use this information to decide how much to invest in reproduction. [By the way, see the publication for references; it's chock full.]
In their words: "First, we used a bank of genotypes to test for genetic variation in patterns of gametocyte investment throughout infections. Second, we monitored three focal genotypes in single and mixed infections with one or more competitors to test whether investment in gametocytes is facultatively reduced in competition. Third, we predicted that if reproductive restraint in mixed infections enables parasites to gain the greatest share of exploitable resources, then the investment decisions of each genotype will be influenced by the availability of these resources."
They infected mice with various parasites, and assayed the parasite load and life cycle stages.  And they calculated the 'gametocyte conversion rate'--the proportion of parasites that differentiate into sexual stages rather than asexual stages--as a function of the genetic mix of an infection, the mix being their measure of the extent of competition a parasite is facing.
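The conversion rate itself is simple arithmetic; here is a minimal Python sketch, with invented counts (the paper's actual assays and statistical estimation are of course far more involved):

```python
# Hypothetical illustration of a 'gametocyte conversion rate': the
# proportion of parasites that differentiate into sexual stages
# (gametocytes) rather than asexual stages.  All counts are invented.

def conversion_rate(gametocytes, asexuals):
    """Fraction of counted parasites committed to the sexual (transmission) stage."""
    total = gametocytes + asexuals
    if total == 0:
        raise ValueError("no parasites counted")
    return gametocytes / total

# e.g., a single-genotype infection vs. a mixed (competitive) infection
single = conversion_rate(120, 880)  # 0.12
mixed = conversion_rate(40, 960)    # 0.04: lower investment in reproduction
```

A lower rate in the mixed infection is the pattern the paper calls reproductive restraint under competition.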

They found "significant genetic variation and phenotypic plasticity in the reproductive effort of malaria parasites."  And that, in the context of mixed infections, investment in gametocytes is reduced, and diverted into asexual replication.  That is, competition reduces investment in reproduction.  Pollitt et al. suggest that the parasites do this by taking the measure of their environment and responding accordingly.  But this brings up the question of how they know that they are in the midst of a heterogeneous infection.  What alerts them to the fact that they aren't surrounded only by kin?  And if it really does matter to them, are we verging on a 'group selection' scenario?  And in that sense, isn't cooperation as important here as possible competition?

Parasites inside you are attacked on their own microscopic level.  We described some of this at length in our book The Mermaid's Tale.  Big organisms like us have immune systems that recognize invaders by their surface-molecular characteristics, and we do this by generating molecules of all sorts, of which at least one will 'recognize' the invader, bind to it, and trigger a destructive reaction.  But the parasites don't like this, and when we've gotten too good a hold on their characteristics, they can draw on gene families coding for cell-surface proteins (called var or avr genes in some species), randomly picking a different gene to express on their surfaces, which makes them invisible again until the immune system regroups and goes at it anew.

If the results reported in this paper hold up, we think they suggest more about adaptability and facultativeness than any generality about natural selection's problem solving abilities.

Tuesday, February 15, 2011

Reversibility: Dollo's 'law' in development and evolution

We recently posted on reports of the re-evolution of traits that had long been lost in evolutionary time.  This seemed to violate Dollo's "law" that evolution was a one-way train that couldn't back up.

When people in a population have different versions of a particular trait, say, eye color, and their children have yet a different version, we are not perplexed. Contingencies of gene expression, genotype, or alleles (genetic variants) in the population can make this happen.  Darwin tried to square his notion of inheritance with such observations, mainly by hand-waving.  Now, armed with concepts like multi-gene control and recessiveness of alleles, we have no problem understanding how these things happen.

But there are reasons to think that true reversibility of complex traits can't often happen over evolutionary time.  The basis of the argument is that too many genetic changes are required to construct a complex trait, and if the trait is 'erased' by mutation (and the loss is tolerated, or even favored, by selection), then over time too many other genetic changes (mutations, gene duplications, new uses of genes, etc.) will accumulate to make back-tracking impossible.

From this point of view, the 'new' version of the trait is physically similar to the old one, or to that in a widely distant species, but arises because selection for the same trait happens to recruit different genes and alleles to get the job done.  But if a pathway has been conserved because it's used for other things in the organism, it may be that simple genetic changes can reactivate that pathway in a context in which it was active long ago.

There has been a similar kind of no-going-back dogma, a kind of Dollo's "law" in developmental genetics.  Stem cells can differentiate into anything, but once that happens, the differentiated cells simply cannot go back to being stem cells.  We now know that this is not accurate. Even a small number of genetic changes in experimental systems can restore various stem-cell states.  This can happen even if the cell being manipulated is highly differentiated.  Is this as surprising or inexplicable as reversals in evolution?

The answer is that it is far less surprising, as a generality.  With some few notable exceptions, all the cells in your body have the same genome.  Each cell is of a particular type largely because it uses a specific subset, but not all, of the genes in the genome.  There are, so to speak, 'blood' genes, 'stomach' genes, and so on. Gene expression is based on the physical packaging of chromosome regions and on the presence of proteins specific to the cell type that bind to DNA in regions near the specifically used genes and cause their expression.  But since, with few exceptions, all the genes still exist in all cells, if one changed these regulatory traits (the packaging and so on of DNA, the presence of regulatory proteins), one could make the cell do something else.  There may be too many changes in expression for a stomach cell to become a lung or muscle cell on its own, but we're looking at cells from the outside, and cells can be engineered to redifferentiate or dedifferentiate by experimentally imposing the required sets of changes. In a sense this is why, in some instances, manipulating only about 4 genes can bring cells back to a very primitive, stem-cell-like state.

Evolution is different, because once a species is committed to a particular direction, its genes themselves, as well as their usage, have changed, by virtue of mutation and frequency change induced by chance or natural selection.  Thus spiders and grasshoppers no longer share genes similar enough that changing expression patterns alone would let spiders hop or grasshoppers spin webs.  That is why evolution rarely truly reverses. Sometimes, though, only a few changes would be needed, if basic pathways still exist but have been mutationally inactivated.

On the other hand, most traits have many paths and most genes have many uses, so that there can be many different paths by which some absent trait--or its likeness!--can reappear.  Natural selection and chance could activate some suitable set of genes to make this happen, and how likely it is depends on what environmental constraints are.  And since many developmental genes are highly conserved over long time periods, there can easily be similarities in the genetic basis of reappearance.

We know that some traits, such as complete vs incomplete metamorphosis in some species of amphibians (i.e., whether or not they go through a larval stage), or the pattern of ocelli (middle eyes) in insects, have re-evolved.  And we know that some genes from mammals can induce similar effects even in insects, by replacing or overactivating their corresponding insect gene.

So, reversals are of many types due to many causes.  How likely they are, and how genetically they are brought about, are statistical and context-specific questions.  But there are no real mysteries about whether or not they are possible.

Monday, February 14, 2011

Omics and the atomic bomb

As we noted last week, Nature and Science, and surely many others involved in the 'genomics revolution', are feting the 10th anniversary of the human genome sequence, Nature in its Feb 10 issue, and Science all month.  It's petty at this juncture to point out that 'the' human genome sequence is still not complete, and that the anniversary being celebrated was a date on which it was politically expedient for all those involved in the rancorous sequencing race to declare victory....and go back to work.

But we point these things out anyway, as a reminder of just how much hype has surrounded the human genome sequencing project from the start.  The hype ain't going away anytime soon. In fact, it is leading to an epidemic medical condition: geneticists needing expensive physical therapy to repair shoulders dislocated in the latest round of exuberantly patting themselves on the back.

DNA sequence,
Wikimedia Commons
This is not at all to say that nothing good (in addition to business for the physical therapists) has come of this whole endeavor.  We know a lot more about gene structure and function and so on than we used to, and indeed genetic technology as represented symbolically by the human genome sequencing has led to many new discoveries across the spectrum of the life sciences.

But when even the best believers in the project say, when confronted with more biological data than ever before amassed in the history of science, that what we need is ..... more data, or that the promised immortality due to genome-based cures will be decades in the future, one has to wonder what we're dealing with.

In fact, the completion of the human genome was quickly followed by the birth of a whole new -omics infrastructure: proteomics, biomics, nutrigenomics, cistronomics, epigenomics, microbiomics, metabolomics, connectomics, and on and on--a recognition, tacit or otherwise, that genomics just wasn't going to be enough.  Surprise, surprise.  And to some extent we owe it all to the atomic bomb.

The Manhattan project showed that mega-science with huge, long-term funding would be an employment boon unlike anything since the gold rush.  After the catastrophic end to WWII, a foundation was set up in Hiroshima to study the effects of radiation exposures to survivors.  That was about 65 years ago, and the Radiation Effects Research Foundation is still going strong.  Mega-science became the strategy du jour and that view has been growing ever since.

It's a little known fact that the term 'omics' comes from the Greek, meaning either "Too-Big-to-Kill" or "No-Need-to-Think."  That's because once you start down the 'omics' pathway--that is, using technology to document absolutely everything in everybody, rather than thinking carefully about what you're doing and justifying doing something selectively--once you become an omicist, you'll never think again.  The project will involve so much investment that your Senator will not allow NIH to cancel it--a good deal if ever there was one!  Or, if you're doing science but want to keep up with the Joneses, once you see the guy in the next department doing it, you have to have your own omics project, too.  After all, fair's fair!

We've written numerous times before about the exaggerated (some would make strong arguments for 'knowingly false') promises of the human genome project, and it's true that this is not being entirely overlooked in this celebratory time.  But feting this overwhelming mass of data, which we've just barely begun to make sense of, by calling for more data--to be collected at huge expense even as the cost of sequencing single genomes has plummeted--before we work out what we can do with the data we now have, is a cynical abuse of public trust.  The last set of promises is far from being fulfilled, and we're now supposed to trust researchers with more money to answer the same questions they couldn't answer last time?

We have, say, 10 good candidate genes for effects on some disease, be it psoriasis or diabetes, and yet we continue to map, map, map to find even more genes--hundreds of them--that make ever more trivial contributions.  Why not stop spending on these larger-scale studies, and figure out what the reliably known genes, that may really do something, are doing and how to develop therapies and the like?  Of course some investigators are doing that, but more funds could be diverted to real problems if we but had the will....and hadn't set up so many Too-Big-to-Kill omics endeavors.

At least as serious is that scaling up means more money and longer-term research comfort, which is always easier than trying to think through serious problems to find more creative ways to understand them.  That is the situation we're in now.  New ideas come from the combination of data, genius, luck--and the struggle against a conceptually challenging problem.  Megafunding and megaprojects undermine the last, and most important, of these, because they institutionalize science.  Many, including Darwin and Einstein and others of their stature, remarked that they couldn't have done what they did in the stultifying environment of universities, for example, and that was long before universities became what they are today.

The challenging problem is not a secret, and yet too many geneticists continue to dance around it: complex traits are complex.  This is the single most consistent finding to come out of the last several decades of genetic research, including, as we've also noted numerous times, from genomewide association studies (GWAS).  It's not a surprise, it's not a secret, and it's actually a positive finding, though too often ignored.  It is not that business as usual has brought no new findings; it is the miniaturization of findings, and the loss of focus on the really general, central issues, that we think is the problem.

Friday, February 11, 2011

The human genome birth month

Ten years ago this week, Nature and Science published results of the first pass at sequencing the human genome.  And now Nature and Science are feting the event with commentaries from people who were central to the initial effort, as well as assessments of where we are now. Science in fact will have 4 issues celebrating the birth month of the human genome sequence, not just a single issue. We will blog about a few of these commentaries in the next few days.

A News Focus from the Feb 4 issue of Science asks how all we've learned in the last 10 years is being translated into clinical practice.  Titled "Waiting for the Revolution", the message is that if physicians saw some practical use for genetic information, they'd be using it, but they don't and they aren't.  One consistent message from those who defend the commitment of time and money on genetic research is that it's early days yet, and we can't expect miracles overnight.  The practical use of genetics that genetics enthusiasts frequently cite is sequencing of tumors to allow an informed choice of treatment.

There are a few noted clinical decisions that can be made based on genetic analysis, but they are far fewer than had been hoped (or promised) 10 years ago.  The amount of genetic information that can be collected on any individual continues to far outweigh its usefulness for disease prediction or clinical applicability.  For example, should individuals with a history of deep-vein blood clots be tested for known genetic risk factors, factor V Leiden and prothrombin gene variants?  According to the News piece,
Both genes influence such clotting. People who have had such clots should be treated with anticoagulants anyway, regardless of genetic status, the panel concluded. And in a second group—relatives of people who have had clots but who themselves have not—the panel judged that it would be too risky to treat preemptively with anticoagulants (which can cause hemorrhaging) based on genetic status alone.
And the same is true for many other conditions.  Treat it when it happens; preventive treatment is not useful, or is even risky, so genotyping is an unnecessary added cost.  Researchers have been looking for genes for diabetes for decades, with little success, for example.  A colleague of ours recently told Ken that although he's been working in the genetics of heart disease and lipids for decades, focusing on ApoE (one of the best candidate genes, studied ad nauseam), he still cannot put any of the knowledge about ApoE into his internal medicine practice.

But let's say they find genes for whatever their favorite disease is (an outcome that isn't at all guaranteed).  In what sense will that help prevent or treat it?  Does it really matter as a rule?  If you just eat better, exercise, and don't smoke, your risk of heart attack will drop by far more than even the most enthusiastic gene-O-hyper can claim.  Enthusiasts will say that genetics will transform medicine, targeting treatment, itself based on molecular technologies, to each individual's particular risk.  The truth will be a mix, and each person has his or her own view on where the balance, and the distribution of resources, should lie.

Stepping back, we can say that it's always true that most work in a field is pedestrian, unremarkable, journeyman-like but not spectacular.  A century (or more) later we may remember the 'genius' but forget the unheard-of peons in a given field.  Our era is distinguished by its materialistic greed and immodesty, but is otherwise the same as it's always been.  There will be expensive trails of cow-droppings, but there will also be gains, some of them important.  They may or may not correlate to the bravado of those today who boast about their successes, lobby for funds, and so on.  Most of our work will be tiny, incremental, incidental, or irrelevant.

A century from now there may or may not be blogs in which reflection can be seen to look back on the early 21st century to point out where the remarkable insights were.  But we can be sure that then, as now, there will be both the remarkable minority, and the pedestrian majority.

Thursday, February 10, 2011

Wednesday, February 9, 2011

"Peeled truffles and seasoned mushrooms": how are the sins of the fathers visited on their sons?

One of the big sources of dissension among people trying to place human behavior and social organization in the context of biological evolution is understanding the nature of heredity.  It is only hereditary causes that are involved in evolution (and for the purposes of this post, that means genes) and hence play a role in the nature of organisms over the long haul.  What you are depends on your environment, but your reactions to that are constrained by the patrimony built up over 4+ billion years of heritable ancestry.

There are countless examples of casual views of what makes us what we are.  One of us (Ken) has been reading the satires of the Roman author Juvenal, written about 125 AD.  Satire 14 seemed relevant to these ideas.  Here's one translation.

Satire 14 is called "The influence of vicious parents" in at least one translation (though it's "No teaching like that of example" in another).  In any case, Juvenal notes the strong influence of parents.  Some children learn from their "wastrel father... to enjoy such things as peeled truffles and seasoned mushrooms, and warblers steeped til they drown in the mushrooms' sauce."

"So Nature ordains," writes Juvenal, that "no evil example corrupts us so soon and so rapidly as one that has been set at home, since it comes into the mind on high authority." [this quote from a different translation]   Juvenal is being cynically satirical about bad, selfish, or debauched human behavior, but his point is cogent: to a great extent we are what we pick up at home.

The point here, with respect to modern genetics, and MT, is that if one has a belief that this or that trait is inherited in the genetic sense, one can make up stories of how the trait got that way, by virtue of a long history of natural selection.  But what evidence do we need before we should be taken seriously in such arguments of inherency?

The essential evidence must include that there are family resemblances.  Factors like recessiveness can obscure patterns, but so can what are called 'incomplete penetrance' or 'phenocopies' or 'sporadic' instances of a trait.  This means that even if there is a gene 'for' a trait--that is, whose variation confers the trait on its inheritor--the trait may not always be present, or, it can arise for other reasons.  Family environment is another factor that influences presence/absence of a trait.

Rigorous genetic inference should have to show that traits are not just familial, but obey Mendel's laws of inheritance.  The problem, however, is that such patterns are almost impossible to prove, because there are so many possible causal factors ('parameters' in the statistical tests of inheritance patterns) that almost any trait, or any hypothesis, can be given support.  This is made all the more complicated by the obvious and undoubted fact that many, if not the vast majority, of genetic effects are context-dependent: they are affected by the environment.  And environment begins at home.
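Formal segregation analysis is far more elaborate than this, but the core of testing whether a trait 'obeys Mendel's laws' can be illustrated with a toy chi-square goodness-of-fit test against the classic 3:1 ratio expected for a fully penetrant recessive trait in an Aa x Aa cross.  The helper function and the counts below are hypothetical, for illustration only:

```python
# Toy illustration (not an actual segregation analysis): compare observed
# offspring counts to the Mendelian 3:1 unaffected:affected ratio expected
# for a fully penetrant recessive trait in a cross of two Aa heterozygotes.

def chi_square_3_to_1(unaffected, affected):
    """Chi-square goodness-of-fit statistic against a 3:1 ratio."""
    total = unaffected + affected
    expected = [0.75 * total, 0.25 * total]
    observed = [unaffected, affected]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat = chi_square_3_to_1(78, 22)  # hypothetical counts; statistic = 0.48
# With 1 degree of freedom, the 5% critical value is about 3.84, so these
# counts are consistent with 3:1 segregation.  Real traits, muddied by
# incomplete penetrance, phenocopies, and shared family environment, are
# rarely this tidy -- which is exactly the inferential problem at issue.
```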

The difficulty of showing that a trait that aggregates in families actually 'segregates' as Mendel's laws would have it has led to a general abandonment of rigorous segregation analysis, replaced by the assumption that traits simply must be importantly genetic, and that all we need to do is scan the whole genome to find the responsible genes.  How much greed or vice, or how many seasoned mushrooms, are needed to show that the child is the likeness and image of the parent?  See all our previous posts on GWAS if you want to know what the issues are (and what we think of them, whether you agree with us or not!).

In this very sloppy epistemological environment--that is, facing fluid and largely unconstrained criteria for inferring what is genetic and what isn't--something you assume 'must be' genetic can be explained by whatever adaptive scenario you invent.  This makes assumption into truth in a rather untestable way.  And when evidence is gathered, there are so many ways to interpret it that it is almost always possible to find one that fits your prior expectations.

Each of us is, to a great extent, what we are.  Clearly everything about us has some genetic under-pinnings, or we would not so clearly resemble our parents to an obvious extent.  But just as clearly, much of what we are comes from our surroundings.  In that sense, it is not inherent in us.

Social politics is complex but there are many reasons why it is convenient to view the sins of the sons as having been visited on them by their fathers. But if "young people readily copy most of our vices," as Juvenal wrote, selfishness and greed and other sins may reflect the nature of society as much as they do the nature of our genes.  Sorting this out is a major challenge, because there are so many social consequences when those in power use convenient assumptions to manipulate others to their own advantage.  Science is built on assumptions, but assumptions are not science.

Tuesday, February 8, 2011

Are we too clean? The hygiene hypothesis and asthma

Cleanliness is next to wheeziness?
An article in The New Yorker last week says that physicians have it all wrong when it comes to childhood allergies (also called atopy).  The recommendation of the American Academy of Pediatrics since 2000 has been that, to prevent food allergies, infants should be breastfed exclusively for 6 months, with solid foods introduced one by one over the next 6 months, to monitor the effects of each food, but avoiding eggs and peanuts, because they are the foods to which kids are most likely to develop allergies.

But peanut, egg, and dairy allergies are now skyrocketing!  So some people are starting to think that prevention by avoidance may actually be causing the disease, and are suggesting that perhaps infants should be exposed to these foods in small quantities much earlier than current recommendations allow.  And some studies are beginning to show that such exposure might in fact eliminate even an existing allergy.  Perhaps, according to the article, medicine has erred on the side of excessive cleanliness once again. Cleanliness reduces serious infectious disease exposures, perhaps, but one shouldn't overdo it, because we've evolved to be commensal with microorganisms, both viruses and bacteria.  We've coevolved with them, are absolutely covered inside and out with them, and in many ways our lives literally depend on them.

By the way, it's worth pointing out that while there is undoubtedly some genetic variation, overall genes have little if anything to do with the spike in allergy incidence--genomes haven't changed in the last 50 years! 

An inverse relationship between allergy and helminth infection has been recognized since the 1960s, despite the fact that both induce a similar immunological response.  But the reasoning that allergy may be a side effect of excessive cleanliness starts with a hypothesis first proposed in a 1989 paper in the British Medical Journal.  This paper was the first to propose that the 'post industrial revolution epidemic' of hay fever and eczema in Britain could be due to what has come to be called the 'hygiene hypothesis'.  The author looked at a handful of factors that could be associated with risk of hay fever, in a study sample of 18,000 children. "Of the 16 perinatal, social, and environmental factors studied the most striking associations with hay fever were those for family size and position in the household in childhood."
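The family-size association is the kind of finding that reduces, at its simplest, to odds-ratio arithmetic on a 2x2 table.  The counts below are invented for illustration and are not from the BMJ study (which modeled 16 factors in 18,000 children):

```python
# Hypothetical 2x2 table: hay fever vs. presence of older siblings.
# All counts are invented; this only shows the odds-ratio arithmetic.

def odds_ratio(a, b, c, d):
    """OR for the table [[a, b], [c, d]] =
    [[exposed cases, exposed controls], [unexposed cases, unexposed controls]]."""
    return (a * d) / (b * c)

# 'exposed' = no older siblings; 'case' = hay fever
or_est = odds_ratio(200, 800, 120, 1080)  # = 2.25
# OR > 1: children with no older siblings have higher odds of hay fever,
# the direction the hygiene hypothesis predicts (fewer early infections,
# more atopy).
```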

The author concludes,
These observations do not support suggestions that viral infections, particularly of the respiratory tract, are important precipitants of the expression of atopy. They could, however, be explained if allergic diseases were prevented by infection in early childhood, transmitted by unhygienic contact with older siblings, or acquired prenatally from a mother infected by contact with her older children. Later infection or reinfection by younger siblings might confer additional protection against hay fever.  
Over the past century declining family size, improvements in household amenities, and higher standards of personal cleanliness have reduced the opportunity for cross infection in young families. This may have resulted in more widespread clinical expression of atopic disease, emerging earlier in wealthier people, as seems to have occurred for hay fever. 
The idea was soon fleshed out to suggest that reduced exposure to pathogens, such as hepatitis A, Toxoplasma gondii, or Helicobacter pylori, changes the balance between two types of immune response: type 1 (TH1, triggered by bacterial and viral infections and autoimmune diseases), and type 2 (TH2, associated with helminth infections and allergic diseases).  An extension of the hygiene hypothesis to include autoimmune diseases was proposed in 2002 (cited here), and has subsequently been demonstrated in animals.  For example, in lab mice, the incidence of type 1 diabetes rises with cage cleanliness.  (Type 1 diabetes is another disease whose incidence has risen in the industrialized world, along with asthma.)

A much-cited paper in Science in 2002 is worth quoting at length.
There has been a significant increase in the prevalence of allergic diseases over the past 2 to 3 decades. Currently, more than 130 million people suffer from asthma, and the numbers are increasing; nevertheless, there is a considerably lower prevalence of allergic diseases in developing countries. There are also clear differences in the prevalence of allergies between rural and urban areas within one country. For example, in Ethiopia, asthma is more prevalent in urban areas than in rural villages, and asthma is more common in residents of urban Germany than in farmers living in rural Bavaria. To explain these observations, environmental factors associated with more industrialized and urban living have been studied intensively, but there is little consistent evidence to suggest that obvious risk factors, such as increased exposure to indoor allergens, pollution, or changes in diet and breastfeeding, could account for the rise in atopic diseases. However, another category of environmental factors, childhood infections, shows an overwhelming and consistent negative association with atopy and allergic diseases. Allergic sensitization is overrepresented among first-born but is less frequent in children from large families and those attending day care, suggesting that a frequent exchange of infections may have a protective effect.

Divergent outcome of TH2 responses in industrialized (low pathogen exposure) and developing countries (high pathogen exposure). It has been argued that improved hygiene, frequent use of antibiotics, and vaccination has led to reduced bacterial and viral infections in industrialized countries and therefore to insufficient stimulation of TH1 responses, which in turn allows the expansion of TH2 cells. TH2 responses are characterized by increased IgE to allergens, mastocytosis, and eosinophilia. Mast cell degranulation and release of inflammatory mediators leads to mucus production and smooth muscle cell contraction, precipitating allergic diseases of the airways. Helminths are prevalent in developing countries and lead to strong TH2 responses. Nevertheless, helminth-infected populations show little signs of allergic disorders. This difference may be explained by the differences in exposure to pathogens. A high prevalence of chronic infections in developing countries results in persistent immune challenge, with cycles of infection and inflammation, which is followed by the triggering of anti-inflammatory molecules to restrict immunopathology. This dynamic interaction educates the immune system to establish a robust regulatory network, possibly the key to controlling allergic diseases. Such a network would be weakly developed in industrialized countries with a low pathogen load, allowing inappropriate immunopathological reactions to develop more readily.

As the worm turns!
The helminth connection led to some interesting work in the fine tradition of self-experimentation, with researchers infecting themselves with helminths and monitoring what happened to their own allergies or asthma, with some positive effect.  Some people now propose worm therapy, in fact, but the pathogen load has to be a heavy one.

But, as usual, the story is not so simple.  An article in last week's Scientific American discusses a paper in Pediatric Allergy and Immunology reporting that Ugandan kids whose mothers were treated for helminth infection during pregnancy were more likely to have eczema than kids whose mothers weren't treated.  The paper is subscription only, so we don't have access to the full text, but it looks like the story is more complicated than Scientific American would have it.  Mothers were treated with one of two different anti-helminthic drugs (albendazole or praziquantel) or placebo, and, from what we can tell from the abstract, the effects seem to differ depending on which drug the mother received and which pathogens she was carrying.
Results:  Worms were detected in 68% of women before treatment. Doctor-diagnosed infantile eczema incidence was 10.4/100 infant years. Maternal albendazole treatment was associated with a significantly increased risk of eczema [Cox HR (95% CI), p: 1.82 (1.26–2.64), 0.002]; this effect was slightly stronger among infants whose mothers had no albendazole-susceptible worms than among infants whose mothers had such worms, although this difference was not statistically significant. Praziquantel showed no effect overall but was associated with increased risk among infants of mothers with Schistosoma mansoni [2.65 (1.16–6.08), interaction p = 0.02]. In a sample of infants, skin prick test reactivity and allergen-specific IgE were both associated with doctor-diagnosed eczema, indicating atopic aetiology. Albendazole was also strongly associated with reported recurrent wheeze [1.58 (1.13–2.22), 0.008]; praziquantel showed no effect.
It's impossible to know from the abstract whether the authors considered the alternative explanation: that any atopy and asthma was the result of the drug eliminating the infection, or in fact of the drug itself--as Holly pointed out here.
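As an aside on the statistics: the abstract reports hazard ratios from a Cox model with 95% confidence intervals, such as 1.82 (1.26–2.64) for the albendazole-eczema association. As a rough sanity check (our own back-of-envelope calculation, not anything from the paper), one can recover the underlying log-hazard coefficient and its standard error from the reported numbers, assuming a standard symmetric Wald interval on the log scale:

```python
import math

def hr_to_log_scale(hr, ci_low, ci_high, z=1.96):
    """Recover the log-hazard coefficient (beta) and its standard error
    from a reported hazard ratio and 95% CI, assuming a symmetric
    Wald interval on the log scale."""
    beta = math.log(hr)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z)
    return beta, se

# Albendazole effect on eczema, reported as HR 1.82 (95% CI 1.26-2.64)
beta, se = hr_to_log_scale(1.82, 1.26, 2.64)
print(f"log-HR = {beta:.3f}, SE = {se:.3f}")

# Round-trip: the interval implied by beta +/- 1.96*SE should
# reproduce the reported CI if the Wald assumption holds
low, high = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
print(f"implied 95% CI: ({low:.2f}, {high:.2f})")
```

Since the CI excludes 1, the reported p-value of 0.002 is at least consistent with the interval, which is about all an outsider can verify from an abstract alone.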

And, the same Science paper we've quoted at length suggests that TH1/TH2 balance may not in fact be the explanation for the inverse correlation between helminth infection and allergy, but instead the protective effect of infection may have to do with the anti-inflammatory response it triggers.  Other severe infections such as measles or tuberculosis are also inversely correlated with allergy.  The authors propose an alternative protective pathway to explain the hygiene hypothesis:

Education of the immune system by pathogens. Dendritic cells can develop into distinct subpopulations, depending on the nature of the signals they receive from the microenvironment, and then direct T cell differentiation into polarized subsets. Viruses, bacteria, and helminths carry distinct signature molecules that interact with dendritic cells to stimulate TH1-type and TH2-type immune responses. When uncontrolled, strong TH1 and TH2 responses lead to autoimmunity and allergy. High pathogen burden may either change the physiology of the microenvironment or result in the accumulation of novel signature molecules that together endow dendritic cells with the ability to induce regulatory T cells. Regulatory T cells produce suppressory cytokines and are part of an anti-inflammatory network that ensures that inflammatory T cells (both TH1 and TH2) and their downstream effectors are kept under control.

So, does the 'hygiene hypothesis' explain the significant rise in asthma, allergies and autoimmune diseases in the industrialized world?  The evidence is building, but the relationship is still more of a correlation, based on epidemiological associations, than demonstrated causation, and there are still many unanswered questions.  If nothing else, you can be sure that the complexity of the immune response, and how much is still not understood, are limiting factors when it comes to answering those questions.

But it is possible that one day the apparent protective mechanism will be demonstrated and understood, and therapeutic approaches will be designed based on this mechanism so you won't have to infect yourself with helminths to treat your asthma.

And, we'd like to point out that the mechanisms will undoubtedly involve genetic pathways, and the approaches will involve molecular technology.  But the therapies won't depend on personalized medicine, nor on the sequencing of the human genome -- whose 10th anniversary is being so heavily feted this month in Science.

Friday, February 4, 2011

Laws of nature? Can social science find them?

Nature has published a list of the 10 most pressing questions in social science, as determined by a symposium of social scientists who met last April to collect ideas and propose the list.  These questions will, they hope, drive social science research for decades to come.
The 'top ten' approach was inspired by a list of 23 major unsolved questions compiled by the mathematician David Hilbert in 1900. The Hilbert problems helped to focus the attention of mathematicians throughout the following century. "He laid out the road map for twentieth-century math," says Nick Nash, a vice-president at General Atlantic, an investment firm based in Greenwich, Connecticut. "What if we had a road map for other disciplines?"

So here's the social science road map.
Social science lines up its biggest challenges
Top ten social-science questions:
1. How can we induce people to look after their health?
2. How do societies create effective and resilient institutions, such as governments?
3. How can humanity increase its collective wisdom?
4. How do we reduce the ‘skill gap’ between black and white people in America?
5. How can we aggregate information possessed by individuals to make the best decisions?
6. How can we understand the human capacity to create and articulate knowledge?
7. Why do so many female workers still earn less than male workers?
8. How and why does the ‘social’ become ‘biological’?
9. How can we be robust against ‘black swans’ — rare events that have extreme consequences?
10. Why do social processes, in particular civil violence, either persist over time or suddenly change?

We can't say that we actually understand all of these -- number 6, for example.  What is it that we don't understand about how humans create knowledge?  The anatomy and physiology of creativity, learning and speech? The role of cultural context in inspiration?  Not sure.

But, we'd say that a lot is already known about some of the others.  Number 1, how to induce people to look after their health.  You know what they are talking about here -- not how to get rich people to be more careful when they ski to avoid breaking their legs, or how to prevent their yachts from capsizing.  Yes, effective health education is a problem.  But really, which segments of society are most likely to smoke and be overweight?  Not the rich.  The obvious solution is to give more money to the poor.  Improve their standard of living, and their health will follow. 

And number 2, how do societies create effective and resilient institutions, such as governments?  Aren't repressive regimes effective at what they do?  Wasn't Saddam's government effective in many ways?  Electricity was on more hours of the day, and a lot more oil got pumped out of the ground than the post-Saddam government has managed.  But we're pretty sure that's not what the list-makers had in mind.

And number 4, how to reduce the skill gap between whites and blacks. Better funding for schools in poor neighborhoods would be one way.  Unless the unstated assumption is that whites are genetically predisposed to be better skilled.

This list strikes us as largely about what might be called 'bourgeois' concerns of the here and now (and upper-middle class) in a way that the list of mathematics challenges can't be, or a list of, say, unanswered questions in biology wouldn't be.  That doesn't make these questions irrelevant by any means, but they are a kind of we-they list of current issues, reflecting assumptions and perspectives of the people asking them. 

These applied social engineering questions are analogous to pharma or agribusiness objectives of using genetics to make drugs or crops that work, or, by their sales, satisfy the companies, or whatever.  If social science is to actually make progress in understanding human nature, rather than simply re-inventing the questions whenever something crops up, we think more attention should be paid to the basics.  For example, does life follow tractable 'laws', and if so, how are they to be understood and applied to specific questions, across the spectrum of life, from the nature of thistle burrs to the nature of lipid metabolism?  If there are such laws, do we know them?  Can we know them?  Can they help us account for societal structures?

The social sciences should be at the core of our understanding of ourselves.  But they need to come up with a better list of questions before that can happen.

Thursday, February 3, 2011

Whence minority rule?

Well, no science post today.  We're spending too much time watching live footage from Egypt. Not only do we, as Americans and people fortunate enough to live in a democracy, sympathize with the protestors, but our daughter is now living and teaching music in a conservatory in Palestine, amidst people for whom occupation is a constant fact of life (and we sit here smugly, the occupiers of North America, who put its prior indigenous inhabitants essentially in permanent refugee camps).

But in addition to the sense of unfairness about human politics in so many places, this leads us to muse about how it is that human societies manage to establish minority rule, often cruel minority rule.  How is it that a small number of people can control the lives of a much larger number?

The evolutionary basis of inequality is perhaps interesting but not the answer.  Social hierarchies cannot be put down to the leaders' better intelligence or belligerence genes.  Even if dominance were genetic, its evolution over countless generations in small demes would largely have fixed the responsible genes, so that most positions in the dominance hierarchy would be due to chance or something other than genes. 

Social pyramids are something different. One person leverages several others by dint of personality or resources or something, and they in turn leverage a larger number.  Those at the top control information, resources, education, and so on.  They then control acculturation, beliefs (including the belief that the boss deserves to be boss because of some religious or political ideology), and so on.

But these are descriptions of what happens, even in democracies.  Yet it seems inevitable regardless of ideology, as communist countries surely and clearly showed.  Hierarchies are of all sorts, some more rigid than others, but they are found at all levels of society.  It's easy, one might say, to see why organizations from families to teams to clubs to governments function only when they have leaders and followers, but again description is not what's interesting to us from a biological point of view.

From that point of view, how do minorities manage to control majorities?  Is the answer to be found in physiology or genetics in any useful sense?  Or is it, as the founders of modern (and hence pre-postmodern) social science argued, something that must be explained in terms of social, rather than biological facts--that is, that society's structures may be manned by biological organisms, but the nature and evolution of those structures has its own properties independent of the biological details: hierarchies can exist in any population with any set of genotypes.  Are there some properties or principles by which such social facts can be explained, and if so are they like 'laws of Nature'?  Could it be otherwise, and if not, why?

These are thoughts that come to us as we watch the struggles now going on in the Middle East.