Wednesday, February 3, 2010

Don't throw out those Victorian-era calipers just yet!

Type 2 diabetes
Is this a trend? We've seen two recent papers showing that non-genetic factors are better predictors of a trait than are known genes. One study, published in the British Medical Journal (14 Jan 2010, Talmud et al.) and described here, has to do with prediction of type 2 diabetes. Researchers followed thousands of government workers in London for nearly 20 years, and found that two risk assessment tools based on environmental and health measures, the Cambridge and Framingham models, were significantly better at predicting risk than a model based on twenty or so risk alleles.
When the researchers assessed the so-called Cambridge and Framingham type 2 diabetes risk models, which are based on non-genetic factors such as age, sex, family history, waist circumference, body mass index, smoking behavior, cholesterol levels and so on, they found that both predicted risk of the disease better than a genetic risk model based on 20 common, independently inherited risk SNPs.

The Cambridge model had 19.7 percent sensitivity for detecting type 2 diabetes cases in the Whitehall cohort based on a five percent false positive rate, while the Framingham model had 30.6 percent sensitivity. The gene count score, meanwhile, detected 6.5 percent of cases at a five percent false positive rate and 9.9 percent of cases at a 10 percent false positive rate.
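For readers who want to see what "sensitivity at a five percent false positive rate" means in practice, here is a minimal sketch, in Python with simulated scores (not the Whitehall data): choose the risk-score threshold that wrongly flags 5% of the unaffected people, then ask what fraction of the true cases score above it.

```python
# A sketch with simulated data: "sensitivity at a 5% false positive rate"
# is one point on a risk model's ROC curve. Nothing here comes from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Simulated outcomes: 1 = developed type 2 diabetes, 0 = did not.
labels = rng.integers(0, 2, size=5000)
# Weakly informative risk scores: cases score higher on average.
scores = rng.normal(loc=0.5 * labels, scale=1.0)

def sensitivity_at_fpr(scores, labels, target_fpr):
    """Fraction of cases detected at the threshold that flags target_fpr of controls."""
    controls = scores[labels == 0]
    cases = scores[labels == 1]
    threshold = np.quantile(controls, 1.0 - target_fpr)  # flags target_fpr of controls
    return float(np.mean(cases > threshold))

print(sensitivity_at_fpr(scores, labels, 0.05))  # modest, for this weak score
print(sensitivity_at_fpr(scores, labels, 0.10))  # higher, at the cost of more false alarms
```

As in the Whitehall comparison, loosening the false positive rate buys more detected cases, which is why both rates are reported.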
Of course, sex is genetically determined, and waist circumference, body mass index, and cholesterol levels may be too, at least to some extent, so it's arguable that once the genes for these traits are known, it will be possible to predict diabetes risk with somewhat more precision. But even if we assume that genes for these traits are major, and will be found -- and regular readers of this blog will know that we would be dubious about this -- age, diet, smoking behavior, activity levels, and so on are not genetic factors, yet have major effects on risk.

To most of us, this might suggest holding off on sending our DNA to one of the companies that do genetic risk prediction. The senior director for research at one of the direct-to-consumer genetic testing companies counters that while they may not be able to predict risk very precisely for everyone, they do a good job for someone at high risk. But how important is that, really? People at high risk of genetic diseases generally belong to families in which the disease is already known, or in which one of the few well-documented major mutations is present. And anyway, type 2 diabetes can often be prevented with diet and exercise, so perhaps people in high-risk families would be better off spending their money on nutritionists or personal trainers rather than genetic testing.

Height
The other paper, published in the European Journal of Human Genetics, Feb 18, 2009, with the charming title, "Predicting human height by Victorian and genomic methods" (Aulchenko et al.), compares prediction based on 54 genetic loci that have been shown to have strong statistical association with height, and prediction based on the height of a subject's parents.
In a population-based study of 5748 people, we find that a 54-loci genomic profile explained 4–6% of the sex- and age-adjusted height variance, and had limited ability to discriminate tall/short people, as characterized by the area under the receiver-operating characteristic curve (AUC). In a family-based study of 550 people, with both parents having height measurements, we find that the Galtonian mid-parental prediction method explained 40% of the sex- and age-adjusted height variance, and showed high discriminative accuracy.
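The "Victorian" method is simple enough to write down in a few lines. Here is a sketch with assumed numbers -- Galton's rescaling of mothers' heights by about 1.08 and his roughly two-thirds regression toward the mean; the paper's actual implementation, on sex- and age-adjusted heights, will differ in detail:

```python
# A sketch of the Galtonian mid-parent method, with assumed numbers;
# not the exact implementation used by Aulchenko et al.
POP_MEAN_CM = 175.0        # assumed population mean height on the male scale
REGRESSION_COEF = 2.0 / 3  # Galton's observed regression "toward mediocrity"

def predict_height_cm(father_cm, mother_cm):
    """Predicted offspring height (male scale) from the parents' heights."""
    # Galton put mothers on the male scale by multiplying their heights by ~1.08.
    mid_parent = (father_cm + 1.08 * mother_cm) / 2.0
    # Offspring are predicted to fall partway back toward the population mean.
    return POP_MEAN_CM + REGRESSION_COEF * (mid_parent - POP_MEAN_CM)

print(predict_height_cm(188.0, 175.0))  # 184.0: tall parents, a somewhat less tall child
```

Two measurements and a pencil, and it explains ten times the variance that 54 loci do.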
Ironically, given how difficult it is to find a large enough suite of predictive genes, height has been shown by numerous studies to be one of the most heritable human traits, with heritability on the order of 80% or more.
It can be expected that once all loci involved in human height are shown, the discriminative accuracy of the genomic approach may surpass that of the Galtonian [Victorian] approach. However, it will be a tall order to find all these variants, at least using the current methodology consisting of (meta-analysis) of genome-wide association studies, tailored to capture common variants.
Partly this is because the variants described to date will turn out to be those with the greatest effect. Indeed, one estimate is that thousands of genes contribute to traits like stature or diabetes, most with very small effect. And, too, if height is like other traits, and there's no reason to expect it isn't, there will be many genes that are found only in some populations or even in single families.

Now, the story is not all negative for the predictive value of genetics; another paper reflects recent efforts to tally the many small effects that can be gleaned by creative use of GWAS data (Evans et al., Human Molecular Genetics, vol. 18, pages 3525-3531). The argument is a sophisticated statistical one, but the general conclusion is that doing this gains you some information beyond what you get from the already-known stronger-effect genotypes, but the gain is not great, and it is at present so complex as to have no obvious clinical value in most cases.
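The underlying arithmetic of any such genetic risk score is just a weighted sum of risk-allele counts; the sophistication in approaches like Evans et al.'s lies in deciding which SNPs to include and how to weight them, not in the sum itself. An illustrative sketch, with entirely made-up numbers:

```python
# Illustrative only: the core of a genetic risk score is a weighted sum of
# risk-allele counts. All effect sizes and genotypes below are invented.
import numpy as np

# Per-SNP effect sizes (e.g., log odds ratios) estimated from a GWAS.
log_odds_ratios = np.array([0.10, 0.05, 0.02, 0.15, 0.01])

# One person's genotypes: count of risk alleles (0, 1, or 2) at each SNP.
genotypes = np.array([1, 2, 0, 1, 2])

# Polygenic score: allele counts weighted by effect size.
score = float(genotypes @ log_odds_ratios)
print(score)  # 0.10*1 + 0.05*2 + 0.02*0 + 0.15*1 + 0.01*2 = 0.37
```

Going from 20 genome-wide-significant SNPs to thousands of sub-threshold ones changes the length of the list and the reliability of the weights, which is where both the statistical difficulty and the modest gain come in.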

And these analyses were case-control studies, in which environments were essentially unmeasured (matching was done for age, sex, and perhaps some other variables, but not extensively for the many variables that make the overall fraction of genetic control far less than 100%, and usually less than 50%).

The bottom line is: watch what you eat, and walk, don't ride, to work!


(Thanks to Eric Schmidt for alerting us to the diabetes story.)


2 comments:

EllenQ said...

The issue of family history is a small one in this post but it made me curious. Has anyone ever discussed the influence of shrinking family sizes and longer times between generations on our knowledge of familial risk? I know what diseases and disorders my parents and grandparents have/had, but beyond that (aunts and uncles, great-grandparents, great-aunts and uncles) I don't have much of an idea, nor do I have a very large sample of family members to consider.

Ken Weiss said...

This is a very interesting question. Part of the problem, and it applies to the kind of quantitative genetic prediction this post was about, is that you have to adjust for cohort effects (so your height, relative to other women your age, is well predicted by the mid-parent height of your parents, adjusted for sex and their cohort).

Family size has been shrinking generally, so one would have to look for differential expansion or shrinkage.

Another issue could be the dilution, in an outbred population, of each generation's gene pool by incoming genes. Mendelian traits remain 'true' generation after generation, conditional on the allele being transmitted. But quantitative traits experience regression to the mean and, as a recent paper showed, even for highly genetic but complex traits, most cases are 'sporadic' (the only case in the family).

So it must be the case that shrinking family size reduces detectability. Just as drift reduces variation, there will be allele loss in smaller families. The ultimate, of course, is China, where, with one-child families, almost all familial risk information would be lost.

We assembled a large genealogical database (including cause of death) for Laredo, Texas, and there are the Iceland and Utah Mormon databases, and perhaps some Scandinavian ones too, that might have the kind of information you're musing about.