
Wednesday, February 4, 2015

Exploring genomic causal 'precision'

Regardless of whether some geneticists object to the cost or scientific cogency of the currently proposed Million Genomes project, it is going to happen.  Genomic data clearly have a role in health and medical practice.  The project is receiving kudos from the genetics community, but it's easy to forget that many questions about the actual nature of genomic causation, and about the degree to which that understanding can in practice lead to seriously 'precise' and individualized predictive or therapeutic medicine, remain at best unanswered.  The project is inevitable if for no other reason than that DNA sequencing costs are rapidly decreasing.  So let's assume the data, and think about what our current state of knowledge tells us we'll be able to predict from it all.

An important scientific (rather than political or economic) point about the recent promises is that, currently, disease prediction is actually not prediction but data-fitting retrodiction.  The data reflect what has happened in the past to bearers of identified genotypes.  Using the results for prediction is to assume that what is past is prologue, and to extrapolate retrospectively estimated risks to the future.  In fact, the individual genomewide genotypes that have been studied to estimate past risk will never recur in the future: there are simply too many contributing variants for any sampled person's genotype ever to arise again.

Secondly, if current theory underlying causation and measures like heritability is even remotely correct, the individual genomic factors that account for the bulk of risk are inherited in a Mendelian way but do not, as a rule, cause traits that way.  Instead, each factor's effects are genome context-specific, acting in a combinatorial way with the other contributing factors, including the other parts of the genome, the other cells in the individual, and more.

Thirdly, in general, most risk seems not to be due to inherited genetic factors or context, because heritability is usually far below 100%.  Risk is due in large part to lifestyle exposures and interactions.  It is very important to realize that we cannot, even in principle, know what current subjects' future environmental or lifestyle exposures will be, though we do know that they will differ in major ways from the exposures of the patients or subjects from whom current retrospective risks have been estimated.  It is troubling enough that we are not good at evaluating, or even measuring, current subjects' past exposures, whose effects we are now seeing along with their genotypes.

In a nutshell, the assumption underlying current 'personalized' medicine is one of replicability of past observations, and statistical assessments are fundamentally based on that notion in one way or another.

Furthermore, most risk estimates in use are, for practical reasons, based essentially on additive models:  add up the estimated risk from each relevant genome site (hundreds of them) to get the net risk.  This leaves little room for non-additive effects, because each site's contribution is estimated statistically from a population sample, averaged over all the genomic backgrounds in which it occurs.  These issues are well known to statisticians, perhaps less so to many geneticists, even if there are many reasons, good and bad, to keep them in the shadows.  Biologically, as extensive systems analysis clearly shows, DNA functions through its coded products interacting with each other and with everything else the cell is exposed to.  There is simply no reason to assume that within each individual those interactions are strictly additive at the mechanistic level, even if they are assessed (estimated) statistically from large population samples.
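To make the additive assumption concrete, here is a minimal sketch of what such a score amounts to; the number of variants, per-site effect sizes, and allele frequencies are invented for illustration and do not come from any real study:

```python
# Minimal sketch of an additive polygenic risk score.
# Effect sizes and allele frequencies are invented purely for illustration.
import random

n_variants = 500
effect_sizes = [random.gauss(0, 0.02) for _ in range(n_variants)]     # per-site estimated effects
allele_freqs = [random.uniform(0.05, 0.95) for _ in range(n_variants)]

def simulate_genotype():
    """Count of risk alleles (0, 1, or 2) at each site."""
    return [sum(random.random() < f for _ in range(2)) for f in allele_freqs]

def additive_score(genotype):
    """The additive assumption: total risk is just the sum of per-site contributions."""
    return sum(g * b for g, b in zip(genotype, effect_sizes))

person = simulate_genotype()
print("additive risk score:", round(additive_score(person), 3))
```

Whatever interactions exist among those hundreds of sites within a real cell, a score of this kind simply ignores them.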

For these and several other fundamental reasons, we're skeptical about the million genome project, and we've said so upfront (including in this blog post).  But supporters of the project are looking at the exact same flood of genomic data we are, and seeing evidence that the promises of precision medicine are going to be met.  They say we're foolish, we say they're foolish, but who's right?  Well, the fundamental issue is the way in which genotypes produce phenotypes, and if we can parameterize that in some way, we can anticipate the realms in which the promised land can be reached, the ways in which it is not likely to be reached, and how best to discriminate between them.

Simulation
Based on work we've done over the past few years, one avenue we think should be taken seriously, which can be done at very low cost and could potentially save a large amount of costly wheel-spinning, is computer simulation of the data and of the approaches one might take to analyze them.

Computer simulation is a well-accepted method of choice in fields dealing with complex phenomena, such as chemistry, physics, and cosmology.  Computer simulation allows one to build in (or out) various assumptions, and to see how they affect results.  Most importantly, it allows testing whether the results match empirical data.  When systems are this complex, total enumeration of the contributing factors, much less analytical solution of their interactions, is simply not possible.

Biological systems have the kind of complexity that these other physical-science fields have to deal with.  A good treatment of the nature of biological systems, and their 'hyper-astronomical' complexity, is Andreas Wagner's recent book The Arrival of the Fittest.  It illustrates the types of known genetically relevant complexity that we're facing.  If simulation of the cosmos (of merely 'astronomical' complexity) is a method of choice in astrophysics, among other fields, it should be legitimate for genomics.

A computer simulation can be deterministic or probabilistic, and with modern technology can mimic most of the sorts of things one would like to know in the promised miracle era of genomewide sequencing of everything that moves.  Simulation results are not real biology, of course, any more than simulated multiple galaxies in space are real galaxies.  But simulated results can be compared to real data.  As importantly, with simulation, there is no measurement or ascertainment error, since you know the exact 'truth', though one can introduce sampling or other sorts of errors to see how they affect what can be inferred from imperfect real data.  If simulated parameters, conditions, and results resemble the real world, then we've learned something.  If they don't, then we've also learned something, because we can adjust the simulations to try to understand why.

Many sneer at simulation as 'garbage in, garbage out'.  But that is a false defense of relying on empirical data, which we know are loaded with all sorts of errors.  An empiricist can design samples and collect empirical data in garbage-in, garbage-out ways, too.

Computer simulation can be done at a tiny fraction of the cost of collecting empirical data.  Simulation involves no loss or inadvertent switching of blood samples, no measurement errors, no imprecise definition of a phenotype, because you get exactly the data that were simulated.  It is very fast if a determined effort is made to do it.  And even if the commitment is made to collect vast amounts of data, one can use simulation to make the best use of them.

Most simulations are built to aid in some specific problem.  For example, under (say) a model of some small number of genes, with such-and-such variant frequency, how many individuals would one need to sample to get a given level of statistical power to detect the risk effects in a case-control study?
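Here is a hedged sketch of that kind of targeted simulation; the allele frequency, odds ratio, sample sizes, number of runs, and significance threshold are all invented for illustration:

```python
# Rough power estimate for detecting one risk variant in a case-control study.
# All parameters (allele frequency, odds ratio, sample sizes) are invented for illustration.
import random

def simulate_study(n_cases, n_controls, freq, odds_ratio):
    """Simulate allele counts and test cases vs. controls with a simple 2x2 chi-square."""
    case_freq = (freq * odds_ratio) / (freq * odds_ratio + (1 - freq))
    case_alleles = sum(random.random() < case_freq for _ in range(2 * n_cases))
    control_alleles = sum(random.random() < freq for _ in range(2 * n_controls))
    table = [[case_alleles, 2 * n_cases - case_alleles],
             [control_alleles, 2 * n_controls - control_alleles]]
    total = 2 * (n_cases + n_controls)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = sum(table[i]) * (table[0][j] + table[1][j]) / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2 > 3.84   # roughly p < 0.05 with 1 degree of freedom

runs = 1000
power = sum(simulate_study(2000, 2000, freq=0.2, odds_ratio=1.1) for _ in range(runs)) / runs
print("estimated power:", power)
```

Run with a range of parameter values, even a script this crude quickly shows how modest realistic effect sizes are relative to the sample sizes needed to detect them.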

In a sense, that sort of simulation is either quite specific or, in some cases, has been designed to prove some point, such as to support a proposed design in a grant application.  These can be useful, or they can be transparently self-serving.  But there is another sort of simulation, designed for research purposes.

Evolution by phenotype
Most genetic simulations treat individual genes as evolving in populations.  They are genotype-based in that sense, essentially simulating evolution by genotype.  But evolution is phenotype-based: individuals as wholes compete, reproduce, or survive, and the genetic variation they carry is or isn't transmitted as a whole.  This is evolution by phenotype, and is how life actually works.

There is a huge difference between phenotype-based and gene-based simulation, and the difference is highly pertinent to the issues presently at stake.  That is because it is when many genetic variants change frequency together, whether under drift or natural selection (almost always both, with drift having the greater effect), that we get the kind of causal complexity and elusive gene-specific causal effects that we clearly observe.  And environmental effects need to be taken into account directly as well.
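To show what 'evolution by phenotype' means in simulation terms, here is a deliberately crude toy (it is not ForSim; the population size, number of loci, effect sizes, environmental noise, and selection scheme are all arbitrary assumptions):

```python
# Toy 'evolution by phenotype': selection acts on whole individuals,
# not on individual variants.  Parameter values are arbitrary illustrations.
import random

N, LOCI, GENERATIONS = 500, 200, 100
effects = [random.gauss(0, 1) for _ in range(LOCI)]                  # per-locus contributions
population = [[random.randint(0, 1) for _ in range(LOCI)] for _ in range(N)]

def phenotype(genome):
    genetic = sum(allele * e for allele, e in zip(genome, effects))
    return genetic + random.gauss(0, 5)                               # environmental noise

for generation in range(GENERATIONS):
    # Fitness depends only on the phenotype of the whole individual.
    ranked = sorted(population, key=phenotype, reverse=True)
    parents = ranked[: N // 2]                                        # truncation selection on phenotype
    population = []
    for _ in range(N):
        mom, dad = random.choice(parents), random.choice(parents)
        child = [random.choice(pair) for pair in zip(mom, dad)]       # free recombination
        if random.random() < 0.1:                                     # occasional new mutation
            site = random.randrange(LOCI)
            child[site] = 1 - child[site]
        population.append(child)

print("mean phenotype after selection:",
      round(sum(phenotype(g) for g in population) / N, 2))
```

Track any single locus in such a run and its frequency trajectory is noisy and largely drift-like, even while the phenotype responds steadily to selection; that is the gene-specific elusiveness referred to above.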

I know this not just from observing the data that are so plentiful, but because colleagues and I have developed an evolution-by-phenotype simulation program (it's called ForSim, and is freely available and open-source, so this is not a commercial pitch--email me if you would like the package).  It is one of many simulation packages available, which can be found here: NCI link.  Our particular approach to genetic variation and its dynamics in populations has been used to address various problems, and it can address the very questions at issue today.

With simulation, you try to get an idea of some phenomenon so complex or extensive that the data in hand are inadequate, or where there is reason to think that proposed approaches will not deliver what is expected.  If a simulation gives results that don't match the structure of available empirical data reasonably closely, you modify the conditions and run it again, in many cases requiring only minutes and a desktop computer.  Larger or very much larger simulations are also easily and inexpensively within reach, without waiting for some massive, time-demanding, and expensive new technology.  Even very large-scale simulation does not require investing in high technology, because major universities already have adequate computer facilities.

Simulation of this type can include features such as:
* Multiple populations and population history, with specifiable separation depth, and with or without gene flow (admixture)
* The number of contributing genes, their length and spacing along the genome
* Recombination, mutation, and genetic drift rates, environmental effects, and natural selection of various kinds and intensities to generate variation in populations
* Additive, or function-based non-additive, single or multiple phenotype determination
* Single and multiple related or independent phenotypes
* Sequence elements that do and that don't affect phenotype(s) (e.g., mapping-marker variants)

Such simulations provide:
* Deep known (saved) pedigrees
* Ability to see the status of these factors as they evolve, saving data at any point in the past (can even mimic fossil DNA)
* Each factor can be adjusted or removed, to see what difference it makes.
* Testable sampling strategies, including 'random', phenotype-based (case-control, tail, QTL, for GWAS, families and Mendelian penetrance, admixture, population structure effects)
* Precise testing of the efficacy of, or conditions for, predicting retrofitted risks, to test the characteristics of 'precision' personalized medicine (a sketch of this kind of test follows below)
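As a hedged sketch of that last point, the following toy fits per-variant effects in one simulated cohort and then asks how well the resulting additive score predicts affection status in a second, independent cohort; every parameter, the built-in non-additive interaction, and the deliberately crude effect estimator are assumptions made only for illustration:

```python
# Sketch of testing 'retrofitted' risk prediction on simulated data: estimate per-variant
# effects in one cohort, then see how well the fitted additive score predicts a second cohort.
# The 'true' model deliberately includes a non-additive interaction and environmental noise.
import random

LOCI = 50
effects = [random.gauss(0, 1) for _ in range(LOCI)]

def genotype():
    return [random.randint(0, 2) for _ in range(LOCI)]

def liability(g):
    value = sum(a * e for a, e in zip(g, effects))           # additive genetic part
    value += 3.0 * g[0] * g[1]                               # interaction the additive model will miss
    return value + random.gauss(0, 10)                       # 'environment'

def cohort(n, frac_affected=0.2):
    people = [(g, liability(g)) for g in (genotype() for _ in range(n))]
    cut = sorted(l for _, l in people)[int(n * (1 - frac_affected))]
    return [(g, l > cut) for g, l in people]                 # top of the liability scale is 'affected'

train, test = cohort(5000), cohort(5000)

# Crude per-locus effect estimates: mean genotype difference between cases and controls.
cases = [g for g, affected in train if affected]
controls = [g for g, affected in train if not affected]
weights = [sum(g[i] for g in cases) / len(cases) - sum(g[i] for g in controls) / len(controls)
           for i in range(LOCI)]

score = lambda g: sum(w * a for w, a in zip(weights, g))
cutoff = sorted(score(g) for g, _ in train)[int(len(train) * 0.8)]   # call the top 20% 'predicted affected'

accuracy = sum((score(g) > cutoff) == affected for g, affected in test) / len(test)
print("prediction accuracy in an independent simulated cohort:", round(accuracy, 3))
```

The instructive exercise is to vary the heritability, the number of loci, and the amount of non-additivity, and watch how quickly the retrofitted score's predictive accuracy degrades.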

These are some of the things the particular ForSim system can do, listed only because I know what we included in our own program.  I don't know much about what other simulation programs can do (but if they are not phenotype-based they will likely miss critical issues).  Do it on a desktop or, for grander scale, on some more powerful platform.  Other features related to issues that the currently proposed whole genome sequencing implicitly raises could be built in by various parameter specifications or by program modification, at a minuscule fraction of the cost of launching into new sequencing.

Looking at this list of things to decide, you might respond, in exasperation, "This is too complicated!  How on earth can one specify or test so many factors?"  When you say that, without even yet pressing the 'Run' key, you have learned a major lesson from simulation!  That's because these factors are, as you know very well, involved in evolutionary and genetic processes whose present-day effects we are being told can be predicted 'precisely'.  Simulation both clearly shows what we're up against and may give ideas about how to deal with it.

Above is a schematic illustration of the kinds of things one can examine by simulation, and check with real data.  In this figure (related to work we've been involved with), mouse strains were selected for 'opposite' trait values and inbred; a founding few from each strain were then crossed, and their descendants intercrossed for many generations to let recombination break up gene blocks.  Markers identified in the sequenced parental strains are then used to map variation causally related to the strains' respective selected traits.  Many aspects and details of such a design can be studied with the help of such results, and there are surprises that can guide research design (e.g., there is more variation than the nominal idea of inbreeding and 'representative' strain-specific genome sequencing generally considers, among other issues).
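For readers who want to see the bones of such a design in simulation form, here is a toy sketch; the number of markers, recombination rate, causal positions and effect sizes, population size, and generations of intercrossing are all invented for illustration:

```python
# Toy sketch of the intercross design described above: two divergent founder strains,
# an F1 cross, many generations of intercrossing so recombination breaks up founder
# haplotype blocks, then a simple marker-trait association scan.  Parameters are invented.
import random

LOCI = 100                          # markers along one chromosome
CAUSAL = {20: 1.0, 70: 1.0}         # hypothetical causal positions and effect sizes
RECOMB = 0.02                       # per-interval recombination probability
N, GENERATIONS = 200, 20

def gamete(parent):
    """One recombinant gamete from a diploid parent (a pair of haplotypes)."""
    strand = random.randint(0, 1)
    g = []
    for i in range(LOCI):
        if random.random() < RECOMB:
            strand = 1 - strand
        g.append(parent[strand][i])
    return g

f1 = ([0] * LOCI, [1] * LOCI)       # founder strains fixed for alternative alleles at every marker
population = [(gamete(f1), gamete(f1)) for _ in range(N)]            # F2
for _ in range(GENERATIONS):        # advanced intercross generations
    population = [(gamete(random.choice(population)), gamete(random.choice(population)))
                  for _ in range(N)]

def phenotype(ind):
    return sum(eff * (ind[0][i] + ind[1][i]) for i, eff in CAUSAL.items()) + random.gauss(0, 1)

phenos = [phenotype(ind) for ind in population]

def association(i):
    """Correlation of allele dosage at marker i with the phenotype."""
    doses = [ind[0][i] + ind[1][i] for ind in population]
    mean_d, mean_p = sum(doses) / N, sum(phenos) / N
    cov = sum((d - mean_d) * (p - mean_p) for d, p in zip(doses, phenos))
    var_d = sum((d - mean_d) ** 2 for d in doses) or 1e-9
    var_p = sum((p - mean_p) ** 2 for p in phenos) or 1e-9
    return cov / (var_d * var_p) ** 0.5

best = max(range(LOCI), key=lambda i: abs(association(i)))
print("strongest marker association at position", best, "(true causal sites: 20 and 70)")
```

Even in this stripped-down version, the mapped peak wanders around the true causal positions from run to run, which is part of what such simulations are meant to reveal.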

Possibilities in, knowledge out
As noted above, it is common to dismiss simulation out of hand, because it's not real data, and indeed simulations can certainly be developed that are essentially structured to show what the author believes to be true.  But that is not the only way to approach the subject.

A good research simulation program is not designed to generate any particular answer, but just the opposite.  Simulation done properly doesn't even take much time to get to useful answers.  What it gives you is not real data but verisimilitude--when you match real data that are in hand, you can make sharper, focused decisions on what kinds of new data to obtain, or how to sample or analyze them, or, importantly, what they can actually tell you.  Just as importantly, if not more so, if you can't get a good approximation to real data, then you have to ask why.  In either case, you learn.

Because of its low relative cost, the preparatory use of serious-level simulation should be a method of choice in the face of the kinds of genomic causal complexity that we know constitute the real world.  Careful, honest use of simulation, as a way to learn about nature and as a guide, is one real answer to the regularly heard taunt that anyone who doesn't have a magic answer about what to do instead has no right to criticize business as usual.

Simulation, when not done just to cook the books in favor of what one already is determined to do, can show where one needs to look to gain an understanding. It is no more garbage in, garbage out than mindless data collection, but at least when mistakes or blind alleys are found by simulation, there isn't that much garbage to have to throw out before getting to the point.  Well-done simulation is not garbage in, garbage out, but a very fast and cost-effective 'possibilities in, knowledge out'.

Our prediction
We happen to think that life is genetically as complex as it looks from a huge diversity of studies, large and small, on various species.  One possibility is that this complexity implies there is in fact no short-cut to disease prediction for complex traits.  Another is that some clever young person, with or without major new data discoveries, will see a very different way to view this knowledge, and suggest a 'paradigm shift' in genetic and evolutionary thinking.  Probably more likely is that, if we take the complexity seriously, we can develop a more effective and sophisticated approach to understanding phenogenetic processes, the connections between genotypes and phenotypes.

Tuesday, January 27, 2015

Somatic mutation: does it cut both ways?

I've written journal articles as well as blogposts here at MT, about the known and potential importance of somatic mutation (SoMu) as a cause of disease.  I referred to this in our post on 'precision' medicine yesterday, saying I'd write about it today.  So here goes, an attempt to show why SoMu may be an important causal phenomenon, one I called 'Cryptic causation' in a paper a few years ago in Trends in Genetics.

SoMu's are DNA changes that occur in dividing cells after the egg is fertilized.  Mutations arise every time cells divide, throughout life.  Each time a cell divides thereafter, the mutations that arose when it was formed are transmitted to its daughter cells (unless a site experiences another mutation at some point during its lifelong lineage).  The distinction between somatic mutations and germ line mutations goes back to Weismann's demonstration of the separation of the 'soma' and the 'germ line', the germ line being the developmental clade of cells leading to sperm and egg cells, and the soma being all the other cells.  A change from parent to offspring that reflects mutation arising in the germ line is the usual referent of the word 'mutation'.  Wherever such mutations arose in the embryogenesis of the gonads, they are treated as if they occurred right at the time of meiosis.  That isn't a real problem, but it is fundamentally distinct from SoMu, because the latter are inherited in the somatic (body) tissue lineage in which they arose, but are not transmitted to offspring.

Normally, we would dismiss somatic mutation as just one of those trivial details that has little to do with the nature of each organism--its traits.  At any given genome location, most of the cells have 'the' genome that was initially inherited.  If a SoMu breaks something in a single cell in some tissue, making that cell not behave properly, so what?  Mostly the cell will die or just while away its life not cooperating, its diffidence swamped out by the millions of neighboring cells, performing their proper duties, in the mutant cell's organ.  It will have no effect on the organism as a whole.

But that is not always so!  In some unfortunate cell, a combination of inherited and somatic variants may lead that individual cell to be hyperviable in the sense of not following the local tissue's restrictions on its growth and behavior.  It can then grow, differentiate, grow more, again and again. We have a name for this: it's called cancer.

Somatic changes may mean that different parts of a given organ have somewhat different genotypes. Some fraction of, say, a lung or stomach, may work more or less efficiently than others.  If the composite works basically well, it won't even be noticed (unless, for example, the somatically mutant clones cause differences, like local spots, in skin or hair pigment).  But when a change in one cell is early enough in embryogenesis, or there is some other sort of phenotype amplification, by which a single mutant cell can cause major effects at the organismal level, the SoMu is very important indeed.

It isn't just cancer that may result from somatic mutation.  Epilepsy is a possible example, where mutant neurons may mis-fire, entraining nearby otherwise-normal neurons to engage in firing, and producing a local seizure.  I suggested this possibility a few years ago in the Trends in Genetics paper, though the subject is so difficult to test that although it is a plausible way to account for the locality of seizures, the idea has been conveniently ignored.

There are theories that mitochondria, of which cells contain hundreds or thousands, may mutate relatively rapidly and function badly.  They are an important way the cell obtains energy, and mitochondrial DNA, not being in the nucleus, is not patrolled by mutation-repair mechanisms the way chromosomes are.  Some have suggested that SoMu's accumulate in neurons in the brain, and since neurons don't replicate much if at all, they can gradually become damaged.  It's been suggested that this may account for some senile dementia or other aging-related traits.

Beware, million genome project!
What has this got to do with the million genome project?  An important fact is that SoMu's are in body tissues but are not part of the constitutive (inherited) genome that is routinely sampled from, say, a cheek swab or blood sample.  The idea underlying the massive attempts at genomewide mapping of complex traits, and the new culpably wasteful 'million genomes' project by which NIH is about to fleece the public and ensure that even fewer researchers get grants because the money's all been soaked up by DNA-sequencing Big Data induction labs, is that we'll be able to predict disease precisely from whole genome sequence, that is, from the constitutive genome sequence of hordes of people.  We discussed this yesterday, perhaps to excess.  Increasing sample size, one might reason, will reduce measurement error and make estimates of causation and risk 'precise'.  That is in general a bogus self-promoting ploy, among other reasons because rare variants and measurement and sampling errors may not yield a cooperating signal-to-noise ratio.

So I think that wholesale, mindless genome sequencing will yield some results, but far less than is promised, and the main really predictable result, indeed the precisely predictable result, is more waste heaped onto mega-labs to keep them in business.

Anyway, we're pretty consistent in our skepticism, nay, cynicism, about such Big Data fads as being mainly grabs, in tight times, for funding that's too long-lasting or too big to kill, regardless of whether it's generating anything really useful.

One reason for this is that SoMu cannot be detected in the kind of whole genome sequences being ground out by the machinery of this big industry.  If you have SoMu's in vulnerable tissues, say lung or stomach or muscle, you may be at quite substantially increased risk for some nasty disease, but that will be entirely unpredictable from your constitutive genome, because the mutation isn't to be found in your blood cells.  Now, thinking about that, sequencing is not so precise after all, is it?

I've tried to point these things out for many years, but except for cancer biologists the potential problem is hardly even investigated (except, in a different sort of fad, by epigeneticists looking for DNA marking that affects gene expression in body cells but that, also, cannot be detected by whole genome sequencing).

In fact, epigenetics is a similar, though perhaps in some ways tougher, problem.  DNA marking affects gene expression locally in tissues, reflecting cellularly local environmental events, and hence constitutive genomics can't evaluate it directly.  On the other hand, epigenetic marking of functional elements can easily and systematically be reversed, enzymatically and in response to specific environmental changes at the cell level.  These are somatic changes in DNA dynamics; by contrast, a SoMu, if detected, basically doesn't get reversed within the same organism and is 'permanent' in that sense, and hence easier to interpret.

But--the mistake may go in the opposite direction!
But I've myself neglected another potentially quite serious problem.  SoMu's arise in the embryonic development of the very tissues we use to get constitutive genome sequences.  The lineage leading to blood divides from other tissue lineages reasonably early in development.  The genome sequenced in blood is therefore not, in fact, your constitutive genome!  Information found there may not be present in your other tissues, and hence is not informative about your risks for traits involving gene expression in those tissues.

The push for precision based on genomewide sequencing is misguided in this sense too, the mirror image of the non-detectability of SoMu's in blood samples: what is found in the 'constitutive' genome from a blood sample may actually not be found in the rest of the body, and may not have been in your inherited genome at all!

This may not be all that easy to check.  First, comparing parent to offspring, one should see differences, that is, non-transmitted alleles in both parties.  But since neither the parent's blood nor the offspring's blood entirely represents their 'constitutive' genomes, it may be difficult to know just what was inherited.  And even if most sites don't change and follow parent-offspring patterns, it doesn't take many changes to cause disease-related traits (if it did, why would so much funding be going to 'Mendelian', that is, single-gene, usually single-mutation, traits?).

One could check sequences in individuals' tissues that are not in the same embryonic fate-map segment as blood, or compare cheek cells and blood, or other things of that nature.  In my understanding at least, lineages leading to cheek cells (ectodermal origin) and blood cells (mesodermal origin) separate quite early in development.  So comparing the two (being careful only to sample white cells and epithelial cells) could reveal the extent of the problem.
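Here is a hedged sketch of what that comparison amounts to computationally; the genotype calls below are hypothetical stand-ins for what would come from real sequencing of the two tissues:

```python
# Sketch of the tissue-comparison idea: given genotype calls from blood and from cheek
# (buccal) cells of the same person, flag sites where the two tissues disagree as
# candidate somatic (or early-embryonic) mutations.  The call sets are hypothetical.
def tissue_discordance(blood_calls, cheek_calls):
    """Both inputs map a site (chrom, pos) to a genotype string such as 'A/G'."""
    shared_sites = blood_calls.keys() & cheek_calls.keys()
    return {site: (blood_calls[site], cheek_calls[site])
            for site in shared_sites
            if blood_calls[site] != cheek_calls[site]}

blood = {("chr1", 101): "A/A", ("chr2", 555): "C/T", ("chr7", 42): "G/G"}
cheek = {("chr1", 101): "A/A", ("chr2", 555): "C/C", ("chr7", 42): "G/G"}

for site, (b, c) in tissue_discordance(blood, cheek).items():
    print(f"candidate somatic difference at {site}: blood={b}, cheek={c}")
```

In practice the hard part is not this bookkeeping but distinguishing true tissue differences from sequencing error, which is exactly why the question deserves an explicit study.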

It might comfortingly show that little is at issue, but that should be checked.  However, of course, that would be costly and would slow down the train to get that Big Funding out of Congress and to keep the Big Labs and their sequencers in their constituencies in operation.

Still, if we are being fed promises that are more than just ploys for mega-funding in tight times,  or playing out of the belief system that inherited genome sequence is simply all there is to life, or is enough to know about, then we need to become able to look where genetic variation manifests its effects:  at the local cell level.  Even for a true-believer in DNA as everything, a blood-based sequence can only tell us so much--and that may not include the variation that exists in the person's other tissues.

Well, one might wish to defend the Infinite Genomes Project by saying that at least constitutive genome sequences from blood samples capture most, or the main, signal by which genetic variation affects risk of traits like disease.  But is that even true?

First, huge genomewide mapping studies routinely, one might say notoriously relative to the genome faith, account for only a fraction, usually a small fraction, of the overall genetic contribution as estimated by measures like heritability.  Predictive power is quite limited (and here we're not even considering environments, which cloud the picture greatly).

But second, risk attributable to constitutive genome sequence, as a rule and especially for the complex or late-onset traits that are so important to our health and longevity, accounts for only a fraction of overall risk.  That is, heritability is far below 100%.  So the bulk of risk is not to be found in such sequence data.  And while 'environment' is clearly of major importance, SoMu appears as environment in genomic studies, because the variants are not in constitutive sequences and are not shared between parents and offspring in family studies.  This may be especially important for traits that really do seem to involve genes in the cellular mechanism, as so clearly shown by cancers.

Thus, it is not accurate to say that pie-in-the-sky exhaustive genome sequencing at least accounts for the bulk of genetic (meaning inherited) risk.  Yet testing for SoMu is not even on the agenda of Big Data advocates.

How much more one would get from a serious approach to SoMu--which would require some serious innovative thinking--remains untested.  It's not on the agenda, not because we know it's relatively unimportant, but because it's hard to test, and in that sense hard to use to grease the wheels of current projects for which an excuse to keep funding flowing is what the Big Data advocates are really seeking.  The current approach is safer, even if we know it has limits and don't really know what those limits are.

A real 'genomic' approach should include checking for the problems caused by SoMu--in both directions!