Friday, March 11, 2011

Free will in the genomics age: does it have any meaning?

In the March 10 edition of the BBC Radio 4 program In Our Time, the three guests discussed the subject of free will.  Does it exist?  Could it exist in the age of science?  Or is it just a mistaken notion, a hangover from religion, tied to the idea that people must be responsible for their own actions, and hence for where they end up in Eternity?

The discussion was by philosophers, not neuroscientists, but the neuroscience and general-science perspective is there, and it's quite interesting.  The idea of free will arises because we feel so strongly that we have it, that we make our own decisions.

The issue for us relates to the concept of determinism.  If the universe is completely Newtonian, that is, follows perfect laws at all scales of observation, then everything is related to, and in that sense predictable from, everything else.  At the time of the Big Bang, what you are going to have for dinner was, in principle, predictable.  That would have fundamental consequences for evolution, since in a purely deterministic universe there is no real competitive factor among rabbits and foxes: the slowest rabbit was foreordained to be dinner for the fast fox.  Random variation screened by unpredictable experience is not what's going on, despite the Modern Synthesis claims to the contrary!

Of course this is all nonsense, because to see that everything is predictable you would probably have to be outside the universe to observe it; but the idea of a totally deterministic universe is that it's entirely self-contained--no outside agent that could meddle with it, or even observe it.

Anyway, if this is the world we're in, then nobody is responsible for their actions in the moral sense, not even Stalin or Hitler or Ghaddafi, or Mother Theresa for that matter.  We are determined from conception by our genes and our environment.  Our neurons wire up during life in totally predictable ways (predictable in principle, that is, if one knew where every molecule and every neural cell was at every instant, etc.), and so the thoughts we think are just the result of that wiring--not in any sense freely thought by us, if thoughts really are just signals flying around among neurons.

As the discussion in the BBC program points out, even if randomness exists, we don't gain moral free will, because we're then the combination of pure physical determinism plus chance events that we don't control.  Thus the usual appeal to quantum-mechanical probability doesn't change the story of determinism vs moral responsibility.

But are even 'random' events like mutation really random?  If they are not, but are just determined in ways we can't understand, the world returns to deterministic laws-of-Nature status.

And what is a 'chance' event?  Is it one with no cause?  Or is there some kind of cause that is probabilistic--clearly something we do not understand?  If, for example, random mutations really follow some laws of probability, then determinism just takes a slightly different form.  Free will remains in the realm of the non-material, and hence a mystical, non-existent illusion.

So, if our thousands of genes controlling the behavior of billions of cells, in environments with many chance factors, are just working out the physical forces, there is no such thing as free will, no matter how it feels to us.  If there is true probability (neurons wire to some extent just by chance, truly), then there may indeed be something that would genuinely approach free will: it would not be predictable, even by probability distributions (because the latter would not be pure chance, but a different kind of cause).  In terms we understand, at least, pure chance is an effect without a cause!

More likely, what we're learning from all our omics technologies is that things appear so random, and there is so much of them, that we can never, even in principle, collect enough data to predict whether you'll have this or that flavor of ice cream today, or whether you'll have chicken or pasta on your next overseas flight--even if the apparent randomness in brains, like that in tossed coins, is really just an illusion of randomness.

It is thus hard to escape the conclusion that, no matter how it looks, what seems to be free will is illusion, not true free will.  And it's dispiriting to feel that so much of life is an illusion (a view that Darwin is supposed to have expressed, though we don't remember seeing the quote--if you know it, let us know).  But if free will is an illusion, a mistaken appearance of causeless effects, then for the very same reasons, so is natural selection.  And that is food for thought, for people as well as for the happy, not really just lucky, fox.

Wednesday, March 9, 2011

When science gets it right: a smokin' prediction

Science, as practiced by scientists, has lots of flaws and fallibilities.  Methods and inertia and vested interests sometimes drive what's done and how it's done.  When inappropriate designs or methods are used to answer a question, or when an idea (or belief) is so strong that it can hardly be falsified by scientific evidence, science deserves criticism.

But when science gets it right and for the right reasons, this should be recognized as demonstrating that causation does actually occur in this world and can be identified when the situation is clear enough, by the methods we know how to use.  Often, success comes when a single cause is strong on its own, and predictive of an effect.

There have been decades of very good evidence that smoking causes lung cancer.  One can predict that a certain amount of smoking should lead to a certain amount of cancer.  It's not precise, but it's clear, and shows, at least statistically (since not even most smokers get lung cancer), that smoking is a causative agent.  Given what we knew of male smokers and cancer rates decades ago, information gathered when most smokers were males, it was predictable that when women started thinking that a smoke was cool they'd start joining their men friends in the cancer wards.

Women began smoking in large numbers some 25-50 years ago, and a new study demonstrates that it's catching up with them.  Also reinforcing the causal connection: men quit smoking in large numbers at about the same time in the past, and their rates of lung cancer have been declining, as would have been--as was--predicted.

Lung cancer rates have more than doubled for women over 60 since the mid-1970s, figures show.
Cancer Research UK figures say the rate rose from 88 per 100,000 in 1975 to 190 per 100,000 in 2008, the latest year for which statistics are available.
Lung cancers in men fell, and CRUK say this is linked to smoking rates.
The proportion of male smokers peaked before 1960. But women had rising rates in the 1960s and 1970s, which would have an effect on those now over 60.
Overall, the number of women diagnosed with lung cancer has risen from around 7,800 cases in 1975 to more than 17,500 in 2008.
Figures for men went from 23,400 over-60s diagnosed in 1975, falling to 19,400 in 2008, with rates showing a similar large drop.


Strong evidence, to go with laboratory and molecular/biochemical evidence about the nasty ingredients in smoke and what they do to DNA to transform nice, pink, healthy lung cells into charred, ugly cancerous ones (anybody who's taken a gross anatomy class in a medical school has probably seen the coal-bag lungs of cadavers of former smokers).
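The quoted figures already contain the whole story, and a few lines of arithmetic make it explicit.  A minimal sketch in Python, using only the numbers quoted above (nothing here comes from the CRUK analysis itself):

```python
# Back-of-the-envelope arithmetic on the CRUK figures quoted above.
rate_1975, rate_2008 = 88, 190                        # per 100,000 women over 60
cases_women_1975, cases_women_2008 = 7_800, 17_500    # women diagnosed, overall
cases_men_1975, cases_men_2008 = 23_400, 19_400       # men over 60 diagnosed

print(f"rate ratio, women over 60: {rate_2008 / rate_1975:.2f}")                 # ~2.16
print(f"case ratio, women overall: {cases_women_2008 / cases_women_1975:.2f}")   # ~2.24
print(f"case ratio, men over 60:   {cases_men_2008 / cases_men_1975:.2f}")       # ~0.83
```

Rates in women over 60 a bit more than doubled, while cases in men fell by about a sixth: the mirror-image pattern that the two sexes' smoking histories predict.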

Famous people, most notoriously RA Fisher, one of the founders of modern statistics, tried to find reasons why this association was due to confounding--some true cause other than Virginia's finest, but one that was correlated with smoking.  But the evidence has piled up the other way (despite the effects of other exposures).

Other predictions
So this is prediction about the future made from past observations.  But what about the other kind of prediction?  A scientific theory can be really convincing if it can make additional predictions that would be a consequence of the hypothesis.  So, what if, instead of following smokers and non-smokers forward to compare their cancer rates, you go to lung cancer wards and ask whether the patients were smokers?  You'd expect to find that most of them were, and that is what the evidence shows (the little calculation below shows why).  Even with twists, such as in Utah, where the population is heavily Mormon.  Mormons don't believe in smoking, but the cancer wards in Utah suggest that Mormon lung cancer patients had apparently not adhered to their religion's teaching.
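The logic behind that retrospective prediction is just Bayes' rule, and a toy calculation shows why the wards fill with smokers even though most smokers never get lung cancer.  The input numbers below are purely hypothetical round figures chosen for illustration, not data from any study:

```python
# Toy Bayes calculation: P(smoker | lung cancer), with hypothetical inputs.
p_smoker = 0.40              # assumed fraction of the population that smokes
p_cancer_nonsmoker = 0.005   # assumed lifetime lung cancer risk in non-smokers
relative_risk = 15.0         # assumed risk multiplier for smokers

p_cancer_smoker = relative_risk * p_cancer_nonsmoker   # 0.075: most smokers still escape
p_cancer = p_smoker * p_cancer_smoker + (1 - p_smoker) * p_cancer_nonsmoker
p_smoker_given_cancer = p_smoker * p_cancer_smoker / p_cancer

print(f"P(cancer | smoker) = {p_cancer_smoker:.1%}")        # 7.5%
print(f"P(smoker | cancer) = {p_smoker_given_cancer:.1%}")  # ~90.9%
```

With these made-up but not crazy numbers, only 7.5% of smokers ever get lung cancer, yet about nine in ten lung cancer patients were smokers--both halves of the evidence at once.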

Understandably, attention is on the gruesome outcome of lung cancer.  But we can make another prediction, and we guess some of the data are probably already in hand.  In many studies, perhaps largely of men since they were the main smokers, a high fraction of smoking-attributed death and disease was not due to lung cancer, but involved many other systems--heart attack, emphysema, and many others.  Lung cancer is only a minority, perhaps a small minority, of these consequences.  So we can predict that these traits have diminished in men (we think they have), but should be increasing, along with lung cancer, in women.  If that turns out not to be the case, then we have to revisit much that we think we know about smoking.

Given both the prospective prediction and the retrospective assessment, our ideas about cause and effect receive strong, persuasive scientific support.  No weakling GWAS evidence here!  Yet why, given this strong and clear support for smoking as a sledgehammer kind of risk factor, do so many people--even college students who learn about these facts in a reasonably rigorous way--still smoke?  It raises questions about the efficacy of education, about the understanding of statistics and risk, and about the impact (or not) of scientific knowledge.

After all, today only the tobacco industry would still claim that smoking is just plain innocent fun.

Monday, March 7, 2011

Everything's just the same, unless it isn't

Holly's latest poem, and her typically thoughtful comments about it, raise many fundamental problems in evolutionary biology.

She pointed out that while none of us is descended from today's monkeys, humans and monkeys alike are descended from some common ancestor.  She noted that we typically refer to that ancestor as a 'monkey', and drawings of how it probably looked look like, well, monkeys!  So how is it that we've changed but they haven't?

The usual image of Darwinian evolution is of continual change driven by relentless competition, often likened to the Red Queen in Through the Looking Glass, who had to run as fast as she could to stay in the same place.  But if that is so, and the environment (which includes competing species) is always changing, then how can a species not always be changing?

Fly in amber; Wikimedia Commons
There are countless instances of fossils so remarkably similar to today's descendants that one couldn't tell the difference.  A squid, complete with ink and looking as tasty as today's calamari; flies in amber caught cleaning their legs the way flies do today.  Beetles and barnacles, ferns and fish that all look just like what we can see on a walk or a dip, but tens or hundreds of millions of years old.  And this is nothing compared to some of the very earliest fossils known, called 'stromatolites', which look exactly like bacterial biofilms today, but are 3.8 billion years old!

This is stasis on a grand scale, and its compatibility with adaptive change, which is also clearly occurring, is what Gould and Eldredge were addressing with their idea of 'punctuated equilibrium'.  Their idea was that very stable environments lead to stable ecosystems that can last a long time, but that at some point, in some local area too small to be found in the fossil record, local conditions favor major adaptive change, and the lucky descendants are competitively advantaged enough to expand into the larger area, in which we then find them in the fossil record.

(Fossilized fern, 350 million yrs old; public domain)
Take beetles, horseshoe crabs, or bacteria, in which this stasis has been seen, and compare the DNA sequences of the present-day species: what we find is that their sequences have diverged by an amount roughly corresponding to the age of their common ancestry in the fossil record.  That is, genome-wide, they have diverged as you'd expect--even if their morphology and behavior seem to have stayed unchanged.
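The expectation at work here is the neutral molecular clock: divergence between two lineages accumulates at roughly twice the neutral mutation rate times the time back to their common ancestor, since mutations pile up along both branches.  A minimal sketch, with an illustrative rate and date rather than measurements for any real pair of species:

```python
# Neutral molecular clock sketch: expected divergence d = 2 * mu * t.
mu = 1.0e-9   # assumed neutral mutation rate, substitutions per site per year
t = 200e6     # assumed time since the common ancestor, in years

d = 2 * mu * t   # factor of 2: changes accumulate along both descendant lineages
print(f"expected neutral divergence: {d:.0%} of sites")   # 40%
# At depths like this, repeat hits at the same site mean the observed
# difference saturates below d, which is why corrections such as
# Jukes-Cantor are applied in practice.
```

The clock ticks regardless of what the body looks like, which is why morphological stasis and steady genome-wide divergence are perfectly compatible.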

I was at a meeting in Brazil and discussing this with the population ecologist Doug Futuyma.  We posted on this subject last April.  Doug's idea, which he has expressed in papers, is roughly that chromosomal incompatibility prevents too much mixing among contemporary species, maintaining them as isolates even if they live in the same area. 

One explanation is that the visible traits are controlled by functional parts of the genome, which might be highly conserved over time but comprise a minority of the overall genome sequence, so that the rest of the genome is free to accumulate functionless variation in a clock-like way.

We certainly know that the more functional parts of genomes are much slower to change than the less functional parts, and sometimes go a long time without changing at all.  Presumably the same is true of traits, too.

But this is at least a bit strange, because in all these cases there are, after all, lots of diverged, clearly different descendant species alive today--many kinds of beetles, crabs, and flies.  How did they escape from the prison of stasis?  One possibility is observer bias: of all the countless ways to vary from a common starting point, given the chance aspects of genomic change, a species here and there might--just by chance--not experience much change, while other species under the same circumstances did change.  Afterward, our attention is drawn to the static exception, which we misperceive as having stayed put for some important functional reason.  That would be a perception bias on our part, and would say nothing about Nature.

Doug dismissed that idea, saying that ecologists making these observations wouldn't make such a mistake.  His idea of hybrid sterility could explain how a number of species could stay isolated, even though living in the same place, but why wouldn't there be evolution within each?

Maybe Darwin's ideas about the steady if very gradual nature of evolution were wrong, even for adaptive evolution.  Maybe it is less steady and more herky-jerky than he thought.  Maybe the environment isn't changing very fast (say, the composition of a given ocean region), and through Darwinian selection it maintains traits rather constantly, but in a somewhat different way than is usually thought.

We usually think of a gene for this and a gene for that.  But if the traits we see conserved are affected by many genes, and all selection does is trim off the extremes (too green a shell, or too pale), then the central tendency of the trait--how most individuals look--can stay the same while the underlying genes are, in fact, changing.  This is known as phenogenetic drift.
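A toy simulation makes the point concrete.  This sketch is our own illustration, not from any published model: a trait controlled by many additive loci, with stabilizing selection that keeps only the individuals nearest a fixed optimum each generation.  The trait mean stays pinned while the underlying allele frequencies wander:

```python
# Toy phenogenetic drift: stabilizing selection on a polygenic trait.
import numpy as np

rng = np.random.default_rng(1)
n_loci, n_ind, n_gen = 50, 500, 200
freqs = np.full(n_loci, 0.5)      # '+' allele frequency at each diploid locus
optimum = float(n_loci)           # trait optimum: the expected value at the start

for _ in range(n_gen):
    genotypes = rng.binomial(2, freqs, size=(n_ind, n_loci))  # 0/1/2 '+' alleles
    trait = genotypes.sum(axis=1) + rng.normal(0, 2, n_ind)   # additive + noise
    keep = np.argsort(np.abs(trait - optimum))[: int(0.6 * n_ind)]  # trim extremes
    freqs = genotypes[keep].mean(axis=0) / 2  # survivors found the next generation

print(f"final trait mean: {2 * freqs.sum():.1f} (optimum {optimum:.1f})")
print(f"loci drifted past 0.25/0.75: {np.sum((freqs < 0.25) | (freqs > 0.75))} of {n_loci}")
```

Run it and the mean sits near the optimum while a good fraction of loci end up far from their starting frequency of 0.5: the phenotype is conserved, but its genetic basis is not.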

So it is possible that our and our monkey friends' common ancestor was monkey-like in terms of its fossilized skeleton, while its genome, on average, diverged appropriately.  In some lineages, such as the one that led to apes and then to us, something changed in some local population that led to an initial species divergence.  This set up a group of animals today whose common ancestor with monkeys was a 'monkey', but whose common ancestor with each other was an 'ape'--one that may have looked like today's apes.  Each resemblance group maintains several descendant members, but there are split-off groups that diverge while themselves maintaining similarities.

In any event, Holly raised the right questions, regardless of how she feels about her ancestors!

Saturday, March 5, 2011

Spring break, and broken spring, and a thoughtful spring

Well, it's spring break for us here at Penn State, and perhaps that's the total silly season, but this 'research' story tops all that we've seen, even through the last couple of HoorayForMe!! months in the major journals celebrating the human genome project's 10th genomiversary.  We mean, if this is the research train, we want off at the next station.  We need a break at least--a bathroom break.

Yes, folks, if you have a full bladder you'll make better decisions, says this paper, second in rigor and importance only to a few recent papers in nuclear physics.  Otherwise, your thought processes are simply pee-thetic.  In fact, the thought is master to the deed, these stream-of-consciousness researchers show: you only have to think yellow to take incisive actions.  How, you ask?
Dr Mirjam Tuk, who led the study, said that the brain’s “control signals” were not task specific but result in an "unintentional increase" in control over other tasks.
"People are more able to control their impulses for short term pleasures and choose more often an option which is more beneficial in the long run,” she said.
Or perhaps somebody in the editorial offices of the 'journal', and in the media that reported this (and not on their funny pages), has sprung a spring.

Now, we are not denigrating this finding.  Its explanation is simply obvious.  You make better decisions with a full bladder not because of some stretch-receptor's effect on DRD4 dopamine-receptor expression--though it might be (or this report might be) a reflection of the dope receptor.  It's because when you're wriggling, squeezing, trying discreetly to grab so you won't drip, you don't dilly-dally about decisions: you cut right to the chase, so you can chase right to the loo.

We hope no springs will leak during the coming break week, and that all your bladders will be stretched to the limit when you have to take a stand (that is, if you're a guy).

Friday, March 4, 2011

Outrunning genetic determinism?

Tara Parker-Pope writes in the NYTimes that exercise keeps you young.  Or rather, she reports on the results of an experiment, recently published in PNAS, on mice that had been engineered to have malfunctioning mitochondrial repair enzymes.  Mitochondria are cellular power generators, and as we age, if they become damaged and are left unrepaired, the cells they power can falter or die, leading to all the effects of aging that most of us will experience sooner or later: muscles become less powerful, our cortex shrinks, hair turns grey or falls out, skin becomes wrinkled, and so forth.  As Parker-Pope reports:
The mice that Dr. Tarnopolsky and his colleagues used lacked the primary mitochondrial repair mechanism, so they developed malfunctioning mitochondria early in their lives, as early as 3 months of age, the human equivalent of age 20. By the time they reached 8 months, or their early 60s in human terms, the animals were extremely frail and decrepit, with spindly muscles, shrunken brains, enlarged hearts, shriveled gonads and patchy, graying fur. Listless, they barely moved around their cages. All were dead before reaching a year of age.
Except the mice that exercised.
Some of the mice were made to run on a treadmill--the mouse equivalent of a 10K race in 50-55 minutes--three times a week.
At 8 months, when their sedentary lab mates were bald, frail and dying, the running rats remained youthful. They had full pelts of dark fur, no salt-and-pepper shadings. They also had maintained almost all of their muscle mass and brain volume. Their gonads were normal, as were their hearts. They could balance on narrow rods, the showoffs.
But perhaps most remarkable, although they still harbored the mutation that should have affected mitochondrial repair, they had more mitochondria over all and far fewer with mutations than the sedentary mice had. At 1 year, none of the exercising mice had died of natural causes. (Some were sacrificed to compare their cellular health to that of the unexercised mice, all of whom were, by that age, dead.)
That is, genetic determinism isn't necessarily so deterministic after all!  This of course is one of our mantras here at MT.  And this is an interesting example because the kinds of single-gene mutations with the strong, clear effects seen in these mice can be among the most deleterious of all genetic mutations.  Tay-Sachs disease, cystic fibrosis, PKU, and so on are all single-gene diseases with very deleterious effects.  Nobody--not even us!--denies this kind of genetic causation.
And yet.....

Most cases of PKU are due to mutations in a gene called PAH.  These can cause severe mental retardation if left unchecked, but reducing a single amino acid, phenylalanine, in the person's diet will ameliorate or in many cases prevent the effect.  So here too, genetic determinism becomes environmental interaction, as with these mice.  And, to complicate the notion of genetic determinism even more, there are many different alleles of the PAH gene, the actual phenotype varies greatly, and only for some alleles is it highly predictable from the genotype (and a couple of other genes are known to affect severity).  Most other genetic disorders, even the 'Mendelian' disorders, show similar levels of variation, even when the major causative gene is known.

Indeed, it has been estimated that about 10% of very harmful alleles in humans are the normal allele in other animal species.  That means both environmental and genomic context are involved. And sometimes, as in PKU, one animal model (rat, monkey cell line, mouse, guinea pig) is 'better'--more apparently human-like--than others.  That implies differences elsewhere in the genome, but does not automatically imply that the model is thus more relevant to humans!  What and how we learn about even single-gene traits is not so simple.  So much for simple genetic determinism. 

If an obviously genetically determined trait like mitochondrial disrepair--a serious variant in a ubiquitously vital gene--can be altered by environmental factors, does this tell us anything about the genetic non-determinism of non-deleterious traits?  Probably, though it's dangerous to generalize.  We know that risk of heart disease, type 2 diabetes, stroke, dementia, obesity and so forth--all traits for which probably billions of dollars have been spent on searching for genetic causation--can be altered by simple things like exercise.

So, what about normal variation in a trait like intelligence?  Or musical or athletic ability?  Or criminal tendencies?  The media and professionals alike persist in salivating over stories that do seem to indicate important genetic factors, and those are advances in knowledge.  But stories like this one, which remind us that environmental factors can have a significant effect even on traits with an identified single genetic component of large effect, should be a sobering reminder that genes aren't always destiny.

Wednesday, March 2, 2011

The Tang of old times: Human Genome Sequence spin-offs

When you're given NASA's list of the Great Achievements gained by going to the Moon, if you sort through the chaff and the telegenic thrills, what you find is: Tang and Teflon.  Yes, folks, your non-stick eggs and breakfast beverage (with its vitamins!) are what you or your parents paid for with their billions of tax dollars.

It's not that going to the Moon was bad; in its time and context, as we ourselves remember, it was indeed exciting, and made us proud, and all that.  But the cogent question is in what ways it was worth the cost.  That's a tough one, because we're middle class and had no wants, and the entertainment and fascination were worth something to us.  In a time of truly ominous feeling, the Cold War, the Moon landing gave us a sense of national security, quality, and unity.  But what else might have been done with the funds, that might have had more substantial or longer-lasting effects than the vitamins we dosed on each morning?

There's no answer to this question, and the money is spent, so it's moot... unless we use it as a lesson going forward.  As an immediate example, NASA is reminding us of all of these Great Successes as it lobbies furiously for personed voyages to the Moon, to Mars, and, in a story on the news last week, even to asteroids!  Wow!!

This week's analogously self-laudatory articles in Science raise memories of those old times.  It's the ongoing celebration of the Human Genome Project.  It is unquestionable that DNA sequencing has brought many advances in technology and some advances in knowledge.  But if you look closely at what the bragging articles are about, much of it is adding millimeters to the meters we had already measured--like tweaking the Tree of Life--or yet more promises that we'll soon have things like psychiatric diseases knocked, thanks to the human genome.  Of course one major thing it has brought us, so far at least, is business for many corporations and employment for the expanded professor class--we say that satirically to an extent, but economically it hasn't been trivial.

It would be unfair to criticize science for failing to cure all known human ills in the mere post-Genome decade.  Well, it's not really unfair, given the culpably false promises on which the project was launched and has been hyped ever since.

What is unfair is for Science to publish commentary after commentary by people who have vested interests in genomic research and have been living on genome research funding and the like.  They have undeclared but profound conflicts of interest.  Few if any skeptical views have been included, as far as we've seen.

So Mary-Claire King lauds the 'Genome Project' for the ability to identify and then diagnose a mutation that leads to profound mental impairment, in order to prevent an affected child being born to a woman with three affected brothers.  First, the 'Genome Project' is only a symbol of the decade of research using whole genome sequences.  That's a trivial point, but one at least worth realizing.  Knowing 'the' human genome sequence didn't solve this particular family's problem.

More important than anecdotes, no matter how genuinely heart-warming, is to ask how much it cost to prevent one child from being born with such a profound trait.  Is it too much to say that the cost has been $1 billion if, after all, this feat is being touted as a triumph of the Genome Project?  That would actually seriously underestimate the background work required for the direct mapping and testing of the trait in this particular family (did the latter cost, say, $100,000?).  Or why not count all the cytogenetics and other work of the past 100 years, without which there'd be no Genome Project?  Now, on its own merits, it would seem quite cruel to say that $100,000 is too great a price to pay for this fantastically good result for this one family.  But that is a very unfair way to look at it, and here's a much fairer way:

How many lives could have been improved if $100,000 (or $1 billion) had been spent in other ways that are known to be beneficent, to make lives markedly better, and the like?  There are many ways to reckon this (nutrition or exercise programs, all sorts of amniocentesis tests that don't require exotic genome sequencing, and so on).  The point is not to belittle a very fine result, but to point out that it is rather wrong not also to ask the question: what else might have been done?

Of course, one can argue that genome research will continue to pay off in good ways, amortizing the per-case cost, and that is clearly going to be the case.  But it won't answer the question of whether other ways to spend the funds would have had, and/or could still have, more bang for the buck than extensive and expensive genomic approaches.  Those otherwise-spent costs could be amortized, too.  Even something as boring as childhood nutrition would, for the same cost, benefit countless kids, who would then grow up to be more productive (repaying society for its investment many times over), some of whom would do research in their lives to further improve health, and so on.
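The amortization arithmetic is worth doing explicitly, even crudely.  A sketch using the hypothetical figures from the paragraphs above ($1 billion of background Genome Project work, $100,000 of family-specific work) and a made-up range for how many families are eventually helped:

```python
# Crude cost-per-family amortization, using the text's hypothetical figures.
infrastructure = 1_000_000_000   # the 'Genome Project' background (per the text)
per_family = 100_000             # guessed direct cost of one family's mapping/testing

for families_helped in (1, 100, 10_000):   # assumed eventual beneficiaries
    cost_per_family = (infrastructure + families_helped * per_family) / families_helped
    print(f"{families_helped:>6} families -> ${cost_per_family:,.0f} each")
```

One family makes the feat look like a billion-dollar diagnosis; ten thousand bring it down to $200,000 each.  The same arithmetic, of course, applies to any alternative use of the funds--which is exactly the comparison that never gets made.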

Meanwhile, of course, the genomics interests aren't going to say that now that we've got where we are, let's stop spending so much on genomics research and turn research resources elsewhere: because one can always identify the next urgent genomics research question down the road that simply must be answered (oh, and in my lab, by the way, so please come across with the grants).

This calculus won't be done, nor these questions seriously asked, in mainstream discussions, because our society works the way it works: by lobbying and special interests.  We all know this, whether or not we like it, or whether it's a good or bad way to allocate resources.  Instead, in true NASA fashion, we're told that now we have cheap sequencing, so we can, to our great relief, start pouring even more total resources into sequencing everybody for everything.  Rather than a restrained, sobered-up recognition of where genomics is, and isn't, relevant, and a strategic discussion of where best to put resources, it's Full Steam Ahead!

Hold on to your wallets!