This is the third and final post on a recent article in The Land Institute's Land Report, which describes advances in identifying and isolating desirable genetic variation in plant species, with the goal of achieving sustainable agriculture by scientific yet efficient means.
This is pure modern genetics, combined with traditional Mendelian-based breeding as it has been practiced empirically over many thousands of years, and formally since the mid-19th century.
The discussion is relevant to the nature and effects of natural selection, which, unlike breeding choice, is not molecularly specific and is generally weak. That's why it's difficult to find desirable individual plants in the sea of natural variation, and why intentional breeding is so relatively effective: with a trait in mind, we can pick the few individual plants that we happen to like, and then isolate them for many generations, under controlled circumstances, from members of their species without that trait.
By contrast, natural selection seems usually to act very slowly. Among other things, if selection were too harsh, then perhaps a few lucky genotypes would do well, but the population would be so reduced as to be vulnerable to extinction. Strong selection can also reduce variation unrelated to the selected trait, and make the organisms less responsive to other challenges of life. If environments usually change slowly, selection can act weakly and achieve adaptations (though some argue that selection has its main, more dramatic effects very locally).
With slow selection, even if consistent over many generations, variation arises at many different genes that can affect a trait in the favored direction. Over time, much of the genome may come to have variants that are helpful. But they may do this silently: even if variation at each of them still exists, there can be so many different 'good' alleles that most individuals inherit enough of them to do well enough to survive. But the individual alleles' effects may be too small to detect in any practical way.
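The point about many small-effect alleles can be made concrete with a toy simulation (my illustration, not from the article): a trait shaped by hundreds of loci, each with a 'good' allele of tiny effect. The numbers of loci, individuals, and the noise level are all assumptions chosen for illustration.

```python
# Toy polygenic trait: many loci, each of tiny effect. In aggregate the
# loci drive the trait, but no single locus explains enough variance for
# a per-locus test to pick it out reliably.
import random

random.seed(1)
N_LOCI = 500   # many genes, each with a 'good' allele (assumed number)
N_IND = 1000   # sample size, GWAS-like only in spirit
FREQ = 0.5     # allele frequency at every locus (a simplification)

# Genotype: count of 'good' alleles (0, 1, or 2) at each locus.
genotypes = [[sum(random.random() < FREQ for _ in range(2))
              for _ in range(N_LOCI)] for _ in range(N_IND)]

# Phenotype: sum of allele counts plus environmental noise.
phenotypes = [sum(g) + random.gauss(0, 10) for g in genotypes]

def r_squared(xs, ys):
    """Fraction of phenotypic variance a single locus explains."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return (cov * cov) / (vx * vy) if vx and vy else 0.0

per_locus = [r_squared([g[i] for g in genotypes], phenotypes)
             for i in range(N_LOCI)]
print(f"largest single-locus r^2: {max(per_locus):.4f}")
print(f"mean single-locus r^2:    {sum(per_locus) / N_LOCI:.4f}")
```

Every locus is causal by construction, yet each one's share of the variance is a fraction of a percent -- which is roughly the situation a mapping study faces.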
These facts explain, without any magic or mystical arguments about causation, why there is so much variation affecting many traits of interest and why their genetic basis is elusive to mapping approaches such as GWAS (genomewide association studies).
Of course, even highly sophisticated breeding doesn't automatically address variable climate, diet, and other conditions that can be relevant--indeed, critical--to a strain's qualities. Molecular breeding is much faster than traditional breeding, but still takes many generations. Think about this: even if only 10 generations were needed, at roughly 25 years per human generation that would mean 250 years (the age of the USA as a country) to achieve a result for a given set of conditions. So how could this kind of knowledge be used in humans....other than by molecular-based eugenics (selective abortion or genotype-based marriage bans)--days to which we surely don't want to return?
Breeders might eventually fix hundreds of alleles with modern, rapid molecularly informed methods. But we can't do that in humans, nor as a rule identify the individual alleles, because our replicate samples come not from winnowing down over generations in a closed, or controlled, breeding population, but from new sampling of extant variation each generation, in a natural population.
The data and molecular approaches seem similar in human biomedical and evolutionary genetics, but the problem is different. As currently advocated, both pharma and 'personalized genomic medicine' essentially aim at predictions in individuals, based on genotype, or treatment that targets a specific gene (pharma will wise up about this where it doesn't work, of course, but lifetime predictions in humans could take decades to be shown to be unreliable).
It's hard enough to evaluate 'fitness' in the present, much less the past, or to predict biomedical risk from phenotype data alone, though such data are the net result of the whole genome's contributions and should be of predictive value. So how to achieve such prediction based on specific genotypes in uncontrolled, non-experimental conditions, if that is a reasonable goal, is not an easy question.
In ag species, if a set of even weak signals can be detected reliably in strain B, they can be introduced into a stock strain A by selective breeding. It need not matter that the detected signals explain only a fraction of the desired effect in strain B, because repeated iteration of the process can achieve the desired ends. With humans, risk can be predicted to some extent from GWAS and similar approaches. But so far most of the genetic contribution has remained elusive, weakening the power of prediction.
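The iteration described above can be sketched in a few lines (again my own illustration, with made-up parameters): repeated rounds of truncation selection on a polygenic trait. No individual allele is ever identified; selecting on the aggregate phenotype still shifts the frequencies of the many 'good' alleles.

```python
# Hypothetical sketch: truncation selection on a polygenic trait.
# Each generation we breed only from the top-scoring fraction of the
# population and recompute allele frequencies from those parents.
import random

random.seed(2)
N_LOCI = 100   # loci with a 'good' allele (assumed number)
POP = 500      # population size per generation
KEEP = 0.2     # breed only from the top 20% each generation

def make_individual(freqs):
    # count of 'good' alleles (0, 1, or 2) at each locus
    return [sum(random.random() < f for _ in range(2)) for f in freqs]

def phenotype(ind):
    # trait = total allele count plus environmental noise
    return sum(ind) + random.gauss(0, 5)

freqs = [0.5] * N_LOCI
for generation in range(10):
    pop = [make_individual(freqs) for _ in range(POP)]
    pop.sort(key=phenotype, reverse=True)
    selected = pop[:int(POP * KEEP)]
    # next generation's allele frequencies come from the selected parents
    freqs = [sum(ind[i] for ind in selected) / (2 * len(selected))
             for i in range(N_LOCI)]
    print(f"gen {generation}: mean 'good'-allele frequency "
          f"{sum(freqs) / N_LOCI:.3f}")
```

In a closed, controlled population like this, the mean frequency of the favored alleles climbs steadily over the generations -- the winnowing that, as the next paragraph notes, has no ethical equivalent in humans.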
In humans, the equivalent question is perhaps how and when molecular-assisted prediction will work well enough in the biomedical context, or in the context of attempting to project phenogenetic correlations back into the deep evolutionary past accurately enough to be believable. Perhaps we need to think of other approaches. Aggregate approaches under experimental conditions work well for wheat. But humans are not wheat.