Showing posts with label complex trait genetics. Show all posts

Friday, March 21, 2014

The fluidity of fluid intelligence

If IQ is a measure of some aspects of intelligence, and intelligence is the product of a gene or genes, then it should follow that IQ is a stable trait during an individual's lifetime. So I was interested to hear on a recent episode of the BBC radio program, Analysis, that IQ can change even over the course of participation in a brief psychological study.

Eldar Shafir, Princeton professor of psychology and public affairs and co-author, with economist Sendhil Mullainathan, of "Scarcity: Why Having Too Little Means So Much", was interviewed on the program about how having too little time or money influences our lives.  Mullainathan and Shafir believe that experiencing scarcity changes the way we think, and makes a bad situation even worse; poverty creates a "scarcity mind-set" and causes poor people to make bad decisions, which perpetuates their poverty.

To test this, they interviewed people shopping in a mall in New Jersey, determined their financial status, presented them with various financial scenarios and then asked them to play computer games that measured their 'fluid intelligence', a component of IQ that indicates things like the ability to think logically, to reason, or to handle novel situations.

When the scenario was manageable, if for example they were asked what they would do if their car broke down but it wouldn't cost much to fix, poor and rich people performed equally well on the tests.  But if the scenario was challenging, say fixing the car cost $1500, rich people did as well on the intelligence tests as they did before, but poor people did significantly worse.

Mullainathan and Shafir attribute this to scarcity of what they call 'bandwidth', or the amount of mental capacity that is used to make decisions.  They found that IQ fell by 13 points in their poor study subjects given a challenging scenario.  This, Shafir said, can be equivalent to a drop from borderline gifted to average, or from average to borderline deficient.  Shafir contrasted this with a night without sleep, which leaves the IQ of the sleep-deprived 10 points lower than usual.

Scarcity has other effects as well, according to Mullainathan and Shafir, leading people into a cognitive 'tunnel' so that they can't think broadly about how to solve a problem.  Shafir describes it this way in an interview with the American Psychological Association:
Every psychologist understands that we have very limited cognitive space and bandwidth. When you focus heavily on one thing, there is just less mind to devote to other things. We call it tunneling — as you devote more and more to dealing with scarcity you have less and less for other things in your life, some of which are very important for dealing with scarcity. There's a lot of literature showing that poor people don't do as well in many areas of their lives. They are often less attentive parents than those who have more money, they're worse at adhering to their medication than the rich, and even poor farmers weed their fields less well than those who are less poor.
Clearly this can become politically volatile very quickly; right-wingers might interpret these results as indicating that poor people doom themselves to poverty, while left-wingers interpret them to show that poverty begets poverty.

But it's the effect on IQ that interests me, and yes, this is another subject that gets volatile very fast. How can this thing, that so many believe is genetic and therefore relatively fixed, change so readily, and in fact predictably?  This is not the first time that fluid intelligence has been shown to be, well, fluid.  A 2007 paper in PNAS showed, for example, that it is trainable and can be significantly improved, and methods for improving intelligence, something previously thought to be impossible, are now rife.

If true, this doesn't mean that genes have nothing to do with intelligence -- whatever that is -- though it does mean intelligence isn't fixed.  Perhaps intelligence can be thought of as analogous to blood lipid levels, say; we may be genetically predisposed to high or low cholesterol, but we can raise or lower our levels with diet, exercise, or medication.  That is, as with every trait, it has a genetic scaffolding, but it is also influenced by experience.  And, as with intelligence, some people have extreme cholesterol levels, generally due to a single gene or a few genes.  Generally, though, these are genes that don't influence cholesterol levels in people between the extremes.

This is of course one implication of the clear fact that the 'heritability' of intelligence is well below 1.0, meaning that environmental factors are important as well as genetic ones.  The volatility of the measure is, however, an indicator that even the trait itself may not be very stable and that 'environment' may not refer just to random non-genetic factors but ones that systematically affect the measure.  In this case, the environmental factor could suggest that people in poverty are poor because of low-IQ genotypes, but Mullainathan and Shafir believe it's more complicated than that, that poverty creates a mindset that perpetuates poverty.
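
To make that point concrete, here's a toy simulation of what a heritability below 1.0 means. All the numbers (a genetic variance of 6 and an environmental variance of 4, giving a 'true' heritability of 0.6) are invented purely for illustration:

```python
import random

random.seed(1)

# Toy model: phenotype = genetic value + environmental noise.
# Narrow-sense heritability h^2 = Va / (Va + Ve); here Va = 6 and Ve = 4,
# so the 'true' h^2 is 0.6. All numbers are invented for illustration.
VA, VE = 6.0, 4.0
N = 200_000

genetic = [random.gauss(0, VA ** 0.5) for _ in range(N)]
environ = [random.gauss(0, VE ** 0.5) for _ in range(N)]
phenotype = [g + e for g, e in zip(genetic, environ)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

h2 = variance(genetic) / variance(phenotype)
print(round(h2, 2))  # close to 0.6: the rest of the variance is 'environment'
```

Even in this best case, where the genetic contribution is known exactly, 40% of the trait variance is non-genetic; and if 'environment' systematically shifts the measure, as in the scarcity experiments, the picture gets murkier still.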

Similar kinds of issues apply to most complex traits.  Heritability can vary with age as well as with many other factors, because the impact of environmental factors can change, and perhaps for genetic reasons as well: some genetic factors may be expressed differently at different ages.  A major issue for complex traits in general would be if the genetic component doesn't just fix a certain fraction of the trait value, but is itself volatile.  Then the time and way of measurement could generate values that are taken as inherent and permanent, but are in fact widely variable.  The variation could be such that the genetic component is far less relevant than is often thought; of course, it could also be the other way round.  For each trait, if we are determined to identify how much is inborn and how much acquired, we may need to be much more knowledgeable about the determinants, and more careful in how we measure traits -- or how we 'label' individuals.

Friday, April 6, 2012

Novel mutations = novel conclusions?

As reported in the NYT, three new studies on the genetics of autism, published (here, here, and here) this week in Nature, have found novel gene mutations that might explain risk, as well as evidence that risk increases with the age of the father.  From the Sanders et al. paper:
Here we show, using whole-exome sequencing of 928 individuals, including 200 phenotypically discordant sibling pairs, that highly disruptive (nonsense and splice-site) de novo mutations in brain-expressed genes are associated with autism spectrum disorders and carry large effects. On the basis of mutation rates in unaffected individuals, we demonstrate that multiple independent de novo single nucleotide variants in the same gene among unrelated probands reliably identifies risk alleles, providing a clear path forward for gene discovery. Among a total of 279 identified de novo coding mutations, there is a single instance in probands, and none in siblings, in which two independent nonsense variants disrupt the same gene, SCN2A (sodium channel, voltage-gated, type II, α subunit), a result that is highly unlikely by chance.
It explains risk in only a very small fraction of cases, though Sanders et al. suggest that the model may well be useful for explaining many more. As the Times story says:
Experts said the new research gave scientists something they had not had: a clear strategy for building some understanding of the disease’s biological basis.
And,
An intensified search for rare mutations could turn up enough of these to account for 15 percent to 20 percent of all autism cases, some experts say, and allow researchers a chance to see patterns and some possible mechanisms to explain what goes awry. 
This would be great, of course.  Any clues to the bigger picture could be extremely helpful.  However, if autism is like every other complex disorder, a finding that's true at one extreme of the distribution of the phenotype will not necessarily apply to any other part of the distribution.  Relatives share high fractions of the genome, usually including many coding changes, so it is going to be difficult to 'prove' that an observed change really is causal. Exome sequencing assumes coding changes matter, and is in a way vulnerable to identifying a coding change and assuming it's causal, when regulatory changes are not even in the search space; is this the drunk looking for his keys under the lamplight?

Indeed, as O'Roak et al. conclude,
Although there is no one major genetic lesion responsible for ASD, it is still largely unknown whether there are subsets of individuals with a common or strongly related molecular aetiology and how large these subsets are likely to be.
O'Roak et al. identified novel mutations in sporadic, non-familial cases, and characterized them as to their severity and type, and they also identified pathways that the affected genes might share.  They conclude that there are likely to be from hundreds to over a thousand genes associated with autism: "Our analysis predicts extreme locus heterogeneity underlying the genetic aetiology of autism."

Of course there must be 'networks', and all of this kind of rhetoric sounds impressive but really is post hoc and in a sense superficial.  Genes interact with other genes, not just in terms of protein-protein interactions but also at the level of expression (not assayable by exome sequencing).  So saying that there are hundreds of genes in networks is to some extent using big words to acknowledge that we may find this or that component, but this trait is simply not simple.

Neale et al. report, "Our results support polygenic models in which spontaneous coding mutations in any of a large number of genes increases risk by 5- to 20-fold."  Again, other functional elements in DNA, which greatly outnumber the protein-coding parts, are likely to be at least as important.  Indeed, there are findings that 1% or more of autism is due to copy number variation, which may overall swamp the rare variants in importance, if the results hold up.

Each of these studies looked at a subset of the study population -- because autism is such a wide spectrum of disorders, it's important to reduce possible genetic heterogeneity by narrowing the phenotype in any study -- and found novel, or sporadic, mutations to be associated with risk.  Because the idea that new mutations, of which we all carry a substantial number, might be causative can't help predict who is at risk, the hope is that if these mutations are indeed associated with risk, they might give some clues as to which developmental pathways are affected in this disorder.  For years the hope has been that genes for autism would be identified. Now that it's looking like a polygenic disorder, if indeed genes are a primary cause, and that sporadic mutations might be significant, those who have long said that major genes able to predict the disorder will not be found look more and more likely to have been right.

The Times quotes a well-known population geneticist on this work:
“This is a great beginning, and I’m impressed with the work, but we don’t know the cause of these rare mutations, or even their levels in the general population,” said Dr. Aravinda Chakravarti of the Institute of Genetic Medicine at the Johns Hopkins University Medical School, who was not involved in the studies. “I’m not saying it’s not worth it to follow up these findings, but I am saying it’s going to be a hard slog.”
If these new results can in fact lead to understanding what goes awry in the developing brain to lead to autism, great.  Whether this will ever be clinically significant is another matter.  And one needs to remember that autism is by far mainly environmentally caused!  The report last week that its prevalence has increased by 78% in the past decade alone shows that this is about environments (that increase is unlikely to all be due to changing definitions of the disorder, or changes in diagnostic practices).  Well, a determined geneticist will argue that rapid environmental change could, in principle, have led to higher disease risk by triggering big responses in a few common genetic variants interacting with the environment.  Not so! We've had the environmental change (whatever it is), and ASD is clearly not due to one or two major genes responding to that change.

The same arguments apply to excessively exuberant claims implying simple genetic adaptation due to natural selection, and for the same reasons.

If biomedical research is about doing something about autism, rather than about forcing genetic thinking onto the problem, we're looking under the wrong lamp-post!

Wednesday, October 14, 2009

If it talks like a duck, and has a beak like a duck....is it a duck?

They say if it talks like a duck and walks like a duck, then you have to conclude it's a duck. That's a snide way of saying that you judge things by how they look, not necessarily what someone says--and is often applied to politicians' obscurantist rhetoric, for example. But it can have implications for biology, too.

If it has a beak like a duck, it's a duck....unless it's a genetically modified chicken. Experiments with two genes, Bmp4 and Cam1 (sometimes written CaM), have shown that not only are they expressed in critical areas of the embryonic jaw, but that altering their expression sites, timing, or intensity can alter jaw (and hence beak) size and shape. Famously, this is an explanation of variation in the Galapagos finches, but the involvement of the genes in many species, including mammals, is clear. An article in Annual Review of Genetics (Roles for BMP4 and CAM1 in Shaping the Jaw: Evo-Devo and Beyond, J. Parsons and R. Craig Albertson, vol. 43, Dec 2009) summarizes the extensive knowledge and experimental results related to these two genes and jaw development.

So, experimentally altering the genes has effects on face and jaw length and shape, as can natural alterations. To some extent, at least, give us the alleles at these loci and we can predict the face--especially if the alleles have major effect. Of course they're not the only contributing genes, so this is a question of penetrance. That's the probability of a jaw effect given the genotype.

But what about the inverse question, known as the detectance? If you give us the face, can we identify the contributing genes? This is a very important question because natural selection sees faces, not genes. It could be of medical importance, too, if the presence of a disease points to its specific cause in a way that may help choice of therapy.

If many genes contribute, as we know they do, their diverse effects may be difficult to assess and a given trait, like a long vs short face, may not need to involve any particular subset of these genes.

A very fine recent paper by Roseman, Kenny-Hunt, and Cheverud (Phenotypic Integration Without Modularity: Testing Hypotheses About the Distribution of Pleiotropic Quantitative Trait Loci in a Continuous Space, Evol. Biol. 36, 282-191, 2009) provides an interesting case in point (the first and last authors are people with whom we have an active research collaboration, and hence we're predisposed in their favor, but the paper is fine on its own merits!). They crossed two standard strains of inbred mouse, called Large (Lg) and Small (Sm), then intercrossed the offspring for 10 generations (producing the 'F10' offspring). Then, they landmarked 15 different positions on the jaws of 1,240 F10 mice, and computed inter-landmark distances. Each mouse was genotyped for 1,470 variable sites (SNPs), or 'markers', spaced more or less evenly across all the chromosomes.

Roseman et al. did a GWAS (genomewide association study)-like mapping experiment: asking, for each measured distance, in what parts of the genome marker variation was statistically associated with variation in that distance. They found 28 such chromosomal 'hits', each associated with one or more distances.

What's relevant to this posting is that neither Bmp4 nor Cam1 (called Calm1 in mice) was located in any one of these candidate chromosome regions. Now statistical data have many problems and limitations, and more data may lead to revisions (which will be forthcoming, since F34 generation mice are available, and will provide much more refined map locations).

But the point here is that while nobody doubts that Bmp4 and Cam1 are involved in mouse jaw development (expression studies and other work makes that manifestly unambiguously clear), the genes seem not to be materially involved in the jaw size or shape differences between these particular mouse strains.

The reason is very simple and can be characterized as phenogenetic (or genotypic) equivalence. A fundamental characteristic of complex traits is that many different genotypes can produce essentially the same phenotype. Even if a genotype were accurately to predict phenotype (which we know is only true a small fraction of the time), a phenotype does not as a rule predict the underlying genotype--that's the major lesson of GWAS experience!
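
A toy additive model makes the asymmetry between penetrance and detectance easy to see. In this sketch, the loci, effect sizes, and strictly additive assumption are all invented for illustration: the trait is simply the count of '1' alleles across eight biallelic loci, so one middle-range phenotype can be produced by over a thousand distinct genotypes:

```python
from itertools import product
from collections import Counter

# Toy additive model, invented for illustration: 8 biallelic loci, and each
# copy of the '1' allele adds one unit to the trait. The phenotype is just
# the total allele count, so distinct genotypes collapse onto shared values.
n_loci = 8
phenotypes = Counter()
for genotype in product((0, 1, 2), repeat=n_loci):  # 0, 1, or 2 copies per locus
    phenotypes[sum(genotype)] += 1

total = 3 ** n_loci
print(total)               # 6561 distinct genotypes in all
print(phenotypes[n_loci])  # 1107 of them give the same middle trait value
```

Given the genotype, the phenotype here is perfectly predictable; given the phenotype, the genotype is anything but. That is phenogenetic equivalence in miniature.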

In evolutionary biology and systematics, similar traits in related species are called homoplasies if they have different causes--if they evolved independently and, in modern terms, used different genes. They are classically called analogous rather than homologous.

In relation to today's post, this means that similar faces in different species may be homoplasies. Yet in other cases, even among the same species, they may be homologous--due to mutations in the same genes. Or, if the genes are the same but the mutations different, they are a kind of mix.

This can confound evolutionary understanding if one is not careful, because the same trait differences can have different causes -- different genes with the same trait variation, or the same genes with different alleles and the same trait variation.  The reason is that the evolution of variation and of speciation is a process, not an event.

Predictance and detectance are very different entities that can, but need not, have similar values. They are fundamentally important in the current debates about gene mapping in biomedical genetics, and in evolutionary genetics. We'll have more to say in other posts, as this is a basic and important subject!

Tuesday, July 14, 2009

The mind boggles

Schizophrenia is one of those important human traits that has eluded understanding despite heavy research investment. It is elusively variable and hence challenging to diagnose as a single entity or to decide how to split it up into causally distinct subsets. It seems highly familial in terms of its increased risk among family members, and hence seems clearly to have a genetic component. But the specific genes have been elusive--they must be there in the genome, but where are they?

A recent paper in Nature ("Common polygenic variation contributes to risk of schizophrenia and bipolar disorder", The International Schizophrenia Consortium, published online 1 July 2009) looked at large amounts of data on schizophrenia from several study populations. The authors did an extensive amount of genotyping and then various kinds of analysis: they looked, for example, at about a million variable sites (SNPs) in the genome, to identify regions where a particular variant marker was found more often in some 3322 cases than in 3587 controls -- pretty large studies for this kind of trait.

No really strong signal -- that is, one that explained a high fraction of the disorder -- was found. But through a series of analytic approaches, including computer simulations to test a range of possible genetic causal models to see which fit best, the authors (and this is one of those papers with a huge list of authors) concluded that many thousands of genes (classically they'd be known as 'polygenes') contribute to the trait. Most of the contributing variants are rare, but more importantly, they have individually very small effects.

Regardless of the details of the study, which could include all sorts of artifacts or be affected by the methods and assumptions of the authors, the study seems convincing that schizophrenia is like many other traits of a polygenic nature. The authors confirmed current ideas that bipolar disorder may involve many of the same genes, as well.

There are good evolutionary and biological reasons why this makes sense. In a nutshell, it's because so many processes are involved in brain development and function, each of them subject to mutational variation, that there are many ways to end up with the same trait. Natural selection only prunes those who can't reproduce as successfully, but the effect is distributed across these many parts of the genome, and hence acts only very weakly against any one of them. The result is an accumulation of variation that, at each individual region is essentially undetectably abnormal. The frequency of the individual variants changes over generations (and over geographic space in our species) mainly by chance (genetic drift).
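
A minimal Wright-Fisher sketch shows how weakly selection 'sees' any one such variant. The population size, selection coefficient, and starting frequency are all invented for illustration; the point is that when 2Ns is small, a mildly deleterious allele drifts up about as often as it drifts down:

```python
import random

random.seed(2)

def wright_fisher(p0, n, s, generations):
    """One Wright-Fisher trajectory for a mildly deleterious allele;
    s is the per-copy fitness cost. When 2*n*s is small, drift dominates."""
    p = p0
    for _ in range(generations):
        # deterministic selection step: a tiny downward push on frequency
        p_sel = p * (1 - s) / (1 - s * p)
        # binomial sampling of 2n allele copies: this is genetic drift
        p = sum(random.random() < p_sel for _ in range(2 * n)) / (2 * n)
        if p in (0.0, 1.0):  # absorbed: allele lost or fixed
            break
    return p

# 200 replicate populations, n = 200 diploids, s = 0.0005, 50 generations;
# 2*n*s = 0.2, far too weak for selection to be visible against drift
finals = [wright_fisher(0.05, 200, 0.0005, 50) for _ in range(200)]
drifted_up = sum(f > 0.05 for f in finals)
print(drifted_up)  # many replicates rise despite the fitness cost
```

Distributed over thousands of such loci, this is exactly the accumulation of individually near-neutral variation described above.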

The individual components have to work together--the 'cooperation' that is at the core of life as we outline in our book The Mermaid's Tale, but there is plenty of tolerance for variation, what we refer to as functional 'slippage'. It all makes sense biologically, evolutionarily, and causally.

In addition to its consistency with evolutionary expectations, this flies in the face of current predominant thinking about the prospects for what is being called 'personalized medicine', that is, medicine based on each individual person's genotype. If genotypes are poorly predictive, as in this case they seem to be, then they are of no real use to a clinician. In fact, as with so many similar studies, the total identified effect was small: based on various assumptions, the polygenic component identified by this genomewide search accounted for only 3 to 20% of the total disease risk, which itself is only 1%! Schizophrenia is an important problem (1% of the population is a lot of people), but clearly the predictive power of these gene-sets is modest, and this assumes that environmental effects will retain their current overall nature and impact (many of the genes probably have effects that vary depending on environment).

Many researchers will try to develop synthesizing methods to make individual sense of polygenotypes, so that treatment might be varied accordingly. How well they succeed only time will tell. But this is another case in which extensive study of a trait based on modern high-intensity technology has documented the nature of complex traits.

Saturday, May 23, 2009

Genetic leaf-litter

There are many ways in which everyone is a conceptual prisoner, encaged in culturally based limits. We are born to, and trained in and entrained by, our circumstances, and these in turn are a legacy of history. We can try to escape from this, but probably the most we can hope for is to keep subtle assumptions and constraints at bay. In genetics, there is a pervasive concept of the 'wild type', a concept that goes back into the history of genetic research, referring to the natural allele at a gene, the one favored by a history of selection, relative to which other alleles (mutational variants) were viewed as generally rare and harmful, waiting to be removed shortly by natural selection.

There is a tacit extension of this gene-specific concept to the whole genome (or even organism), as when 'normal' inbred laboratory mice are referred to as the 'wild type' relative to an experimental modification such as a transgenic gene-knockout mouse of the same strain.

Sometimes this is clear shorthand, but beware of conceptual shorthand! An implication of this kind of genetic thinking is that in regard to human traits, including especially disease, there is the normal human genome, as represented by 'the' human genome sequence available in genome databases, and there are the disease-causing mutants. But in fact genomes are very large sequences of DNA that serve as targets for mutation in every cell, every individual, every generation.

We know that biological traits are the result of developmental processes that include countless genes (of the classical protein-coding type as well as many other functional DNA sequence elements). Species contain large numbers of members--there are about 7 billion of us humans stalking the Earth. What this means is that there is a potentially huge amount of variation at most if not all viable spots in our genome. After a mutation occurs, it may proliferate if its bearer successfully reproduces. Over time, some of these alleles grow in frequency to become quite common.

When genomic DNA is sequenced in a number of individuals, this variation is easily detected. But whether affected by natural selection or just by the chance aspects of reproductive success or failure, most allelic variation that is present in genomes at any given time is rare. Relative to the more common variants, this genetic variation is a kind of leaf-litter of variation. Even with hundreds of thousands or, indeed, hundreds of millions of very rare variants present in our species, any small sample will pick up some of them by chance.

In a small sample, those will seem to be more common than they are; if we sequenced 5 people (10 copies of the genome), a variant whose true population frequency is only a few in a billion, but which by chance is carried among the 5 people we sample, will seem to have a frequency of at least 10% (one copy of the 10 we sampled being the variant). The tip-off that this genomic leaf-litter exists is that most of the variants are not seen in other samples, or, if common enough to be sampled more than once, are usually seen only in samples from the same geographic region (because that's where they arose as new mutations, and were transmitted to descendants who remained living on the same continent). In developed countries, variants that cause disease will show up in specialty clinics at major medical centers.
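
This arithmetic is easy to simulate. In the sketch below, the number of sites and the true allele frequency are invented for illustration; the point is that every rare variant that happens to be caught at all in a 10-genome sample necessarily looks 'common':

```python
import random

random.seed(3)

# Toy 'leaf litter': 50,000 variant sites, each with a true allele
# frequency of 1 in 10,000. We sample 5 diploid people = 10 genome copies.
n_sites, true_freq, copies = 50_000, 1e-4, 10

observed = []
for _ in range(n_sites):
    count = sum(random.random() < true_freq for _ in range(copies))
    if count:
        observed.append(count / copies)

print(len(observed))  # a few dozen rare variants get caught by chance...
print(min(observed))  # ...and every one of them looks common: frequency >= 0.1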

In trying to find variants by mapping, as in genomewide association studies (GWAS) that compare sequences between cases and controls, we may feel that we have so far detected the common, but not all the rare causal variants that exist. But we may also feel that if we can just enlarge our samples, we'll get a much better handle on the nature of the effects of these variants, or we'll detect the remaining variants that haven't yet been detected.


This is likely to be an illusion, as a growing number of us argue: very large GWAS will not bring a big payoff of the kind envisioned and promised by those who push for this kind of project. There are several reasons for this skepticism.

First, it is hard to detect rare things with statistical significance, much less to get a good idea of their effects and action. One needs huge samples to get enough instances to show that a variant is meaningfully more common in cases than controls.

But second, the leaf-litter phenomenon means that as sample sizes increase, ever rarer variants will be picked up. It will be difficult to show clearly that they are causally involved with our trait, but even if they are, they will have less and less effect on public health. They will vary from population to population, and from sample to sample within the same population. Environments may affect whether carriers of a variant manifest the disease, and most such variation will at most have minor effect on risk of disease (if the effect were stronger, the allele would have been removed by selection, or we would have been able to detect it in family studies).
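
A standard normal-approximation sample-size calculation gives a feel for just how huge 'huge' is. The allele frequencies, relative risk, and the genome-wide significance threshold below are all illustrative assumptions:

```python
from math import sqrt
from statistics import NormalDist

# Normal-approximation sample size for detecting a rare risk allele in a
# case-control comparison: frequency 0.002 in cases vs 0.001 in controls
# (relative risk ~2), at genome-wide significance alpha = 5e-8 with 80%
# power. All numbers are illustrative assumptions.
p_case, p_ctrl = 0.002, 0.001
alpha, power = 5e-8, 0.80

z = NormalDist()
z_alpha = z.inv_cdf(1 - alpha / 2)  # ~5.45
z_power = z.inv_cdf(power)          # ~0.84
p_bar = (p_case + p_ctrl) / 2

n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
     + z_power * sqrt(p_case * (1 - p_case) + p_ctrl * (1 - p_ctrl))) ** 2
n /= (p_case - p_ctrl) ** 2

print(round(n))  # on the order of 100,000 cases (and as many controls)
```

And that is for a variant whose effect doubles risk; for the weaker effects typical of the leaf litter, the required samples grow further still.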

And if it requires more than one such variant, or even many of them, to combine to produce disease, the detection and evaluation situation will be that much more challenging, if not pointless.

There will always be exceptions, as is true about the nature of life. But the leaf-litter phenomenon is real and there is plenty of evidence for it. It is predicted by population genetics theory. And it is consistent with results of the mapping studies that have been done to date. Ironically, perhaps, while the individual rare alleles have little detectable effect, their aggregate effects in the population may account for the observed heritability (familial aggregation of risk, or similarity of trait values) of most traits, including disease. That heritability, which is clearly there, is what has been considered mysterious given the failure of linkage or GWAS studies to find the genes responsible.

We are presented with a kind of epistemological paradox: the genetic variation exists, but finding most of it may present insurmountable challenges. Indeed, it is somewhat mystical even to argue that it exists as individual effects, if they cannot be found or replicated by current statistical genetic methods.

Evolution 'cares' about reproductive success, not about simplicity in genetic causation. From a population perspective, evolution occurs because mutations generate variation that selection and chance can act on from one generation to the next.

Genetic leaf-litter is thus the fuel for evolution. We may care to know the cause of each instance of a trait or disease, but Nature has only cared about viability and success, and tolerates the leaf-litter. As massive amounts of human DNA sequence are produced, we will see this. It will be an incredible playground for population and evolutionary geneticists. But what we do with it, in terms of identifying disease causation, is not clear.

Thursday, April 16, 2009

GWAS: really, should anyone be surprised?

There's a story today in the New York Times about papers just published in the New England Journal of Medicine (embargoed for 6 months, so we can't link to them here) questioning the value of GWAS--genomewide association studies. We're interested because we--largely Ken, often in collaboration with Joe Terwilliger at Columbia--have been writing for 20 years or more, in many explicit ways and for what we think are the right reasons, about why most common diseases won't have simple single-gene, or even few-gene, explanations.

"The genetic analysis of common disease is turning out to be a lot more complex than expected," the reporter writes. Further, "...the kind of genetic variation [GWAS detect] has turned out to explain surprisingly little of the genetic links to most diseases." Of course, it depends on whose expectations you're talking about, and whom you're trying to surprise. It may be a surprise for a genetics true-believer, but not for those who have been paying attention to the nature of genomic causation and how it evolved to be as it is (the critical facts and ideas have been known, basically, since the first papers in modern human genetics more than 100 years ago).

GWAS are the latest darling of the genetics community. Meant to identify common genetic factors that influence disease risk, the method scans the entire genomes of people with and without a disease to look for genetic variants associated with disease risk. Many papers have been published claiming great success with this approach, and proclaiming that finally we're about to crack disease genetics and that the age of personalized medicine is here. But upon scrutiny these successes turn out not to explain very much risk at all--often as little as 1% or 3%, even if the relative risk is, say, 1.3 (a 30% increase) or, in a few cases, 3.0 or more; there will always be exceptions. And this is for reasons that have been entirely predictable, based on what is known about evolution and genes.
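
The arithmetic behind that gap between relative and absolute risk is worth spelling out. In this back-of-envelope sketch, the baseline risk, relative risk, and carrier frequency are all assumed for illustration:

```python
# Back-of-envelope: what a relative risk of 1.3 means for a disease with a
# 1% lifetime risk. Baseline risk, relative risk, and carrier frequency are
# all assumed for illustration.
baseline = 0.01        # risk in non-carriers
rr = 1.3               # relative risk in carriers
carrier_freq = 0.3     # an assumed, typical common-variant carrier frequency

risk_carrier = baseline * rr
pop_risk = carrier_freq * risk_carrier + (1 - carrier_freq) * baseline

print(f"carrier risk: {risk_carrier:.3f}")  # 0.013 -- 98.7% of carriers still unaffected
print(f"population risk: {pop_risk:.4f}")   # 0.0109 -- barely above baseline
```

A 'significant' 30% relative increase moves a carrier's absolute risk from 1% to 1.3%: a result that can be real, replicable, and still nearly useless to a clinician.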

Briefly: genes with major detrimental effects are by and large weeded out quickly by natural selection; and most traits, including disease -- except for the frankly single-gene diseases, which can stay around because they're partly recessive -- are the result of many interacting genes and, particularly for diseases with a late age of onset, environmental effects. And there are many genetic pathways to any trait, so the assumption that everyone with a given disease gets there the same way has always been wrong. Each genome is unique, with different, and perhaps rare or very rare, genes contributing to disease risk, and these will be difficult or impossible to find with current methods. Risk alleles can vary between populations, too--different genes are likely to contribute to diabetes in, say, Finns than in the Navajo. Or even the French. Or even different members of the same family! Inconvenient truths.

Now, news stories and even journal articles rarely point out any of these caveats. Indeed, David Goldstein is cited in the Times story as saying that the answer is individual whole genome sequencing, another very expensive and still deterministic approach (one that surely will now be contorted by the proponents of big Biobanks to show that this is just the thing they've had in mind all along!). Nobody backs away from big long-term money just to satisfy what the science actually tells us. Now maybe that is the story, in the public interest, that a journalist ought to take on.

So, the push will be for ever-more complete DNA sequences to play with, but this is not in the public interest in the sense that it will not have major public health impact. Identifying new pathways is one of the major rationales offered in the face of the awkward epidemiological facts, but even that is unlikely to be a major outcome in public health terms, and we already have many ways to identify pathways, with more becoming available all the time. While whole sequences can identify unique rare haplotypes or polygenotypes that some affected people share, that is really little more than trying the same method on new data, like Cinderella's wicked step-sisters forcing their feet into the glass slipper.

If it's true, as it seems to be, that most instances of a disease are due to individually rare polygenotypes, then the foot will not fit any better than before. And it won't get around the relative vs absolute risk problem, nor the environmental one, nor the retrospective/prospective epidemiological one. These are serious issues, but we'll have to deal with them separately. And the rarer the genotype, the harder it is to show how often it is found in controls, hence the harder to estimate its effect.
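To put a rough number on that last point, here is a hedged sketch (the sample sizes, carrier frequencies, and odds ratio below are invented for illustration) using the standard Woolf approximation for the confidence interval of an odds ratio, applied to expected carrier counts:

```python
import math

# Rough sketch: width of the 95% CI for an odds ratio, using the Woolf
# approximation SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d).
# The counts are rough expected values, not real data.

def ci_width_for_or(freq, n_cases, n_controls, odds_ratio=1.3):
    a = freq * odds_ratio * n_cases   # carrier cases (approximate)
    c = freq * n_controls             # carrier controls
    b = n_cases - a                   # non-carrier cases
    d = n_controls - c                # non-carrier controls
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    log_or = math.log(odds_ratio)
    return math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)

for freq in [0.3, 0.01, 0.001]:
    lo, hi = ci_width_for_or(freq, 5000, 5000)
    print(f"carrier freq {freq}: 95% CI for a true OR of 1.3 ~ ({lo:.2f}, {hi:.2f})")
```

With a common variant the interval is tight; with a carrier frequency of 1 in 1000, even 10,000 subjects leave an interval so wide it comfortably spans 1.0--the effect simply cannot be estimated with any useful precision.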

------------------------
A few reasons why no one should be surprised:

*Update* Here are a few of the papers that have made these points in various ways and contexts over the years; they contain references to other authors who have recently made some of these points. The point (besides vanity) is that we are not opportunistically jumping on a new bandwagon now that more people are (more openly) recognizing the situation. In fact, the underlying facts and reasons have been known for far longer.

The basic facts and theory were laid out early in the 20th century by some of the leaders of genetics, including RA Fisher, TH Morgan, Sewall Wright, and others.

Kenneth Weiss. Genetic Variation and Human Disease. Cambridge University Press, 1993.

Joseph Terwilliger, Kenneth Weiss. Linkage disequilibrium mapping of complex disease: fantasy or reality? Current Opinion in Biotechnology 9(6): 578-594 (1998).

Kenneth Weiss, Joseph Terwilliger. How many diseases does it take to map a gene with SNPs? Nature Genetics 26: 151-157 (2000).

Joseph Terwilliger, Kenneth Weiss. Confounding, ascertainment bias, and the quest for a genetic "Fountain of Youth". Annals of Medicine 35: 532-544 (2003).

Kenneth Weiss, Anne Buchanan. Genetics and the Logic of Evolution. Wiley, 2004.

Joseph Terwilliger, Tero Hiekkalinna. An utter refutation of the 'Fundamental Theorem of the HapMap'. European Journal of Human Genetics 14: 426-437 (2006).

Anne Buchanan, Kenneth Weiss, Stephanie M Fullerton. Dissecting complex disease: the quest for the Philosopher's Stone? International Journal of Epidemiology 35(3): 562-571 (2006).

Kenneth Weiss. Tilting at Quixotic Trait Loci (QTL): An Evolutionary Perspective on Genetic Causation. Genetics 179: 1741-1756 (2008).

Anne Buchanan, Sam Sholtis, Joan Richtsmeier, Kenneth Weiss. What are genes for or where are traits from? What is the question? BioEssays 31: 198-208 (2009).