Thursday, January 19, 2012

Probability does not exist! Part IV. Here's to your health!

Probability and unique events
Probability and statistics are very sophisticated, technical, often very mathematical sciences.  The field is basically about the frequency of occurrence of different possible outcomes of repeatable events.

When events can in fact be repeated, a typical use of statistical theory is to estimate the properties of what's being observed and assume, or believe, that these will pertain to future sets of similar observations.  If we know how a coin behaved in repeated flips in the past, we extrapolate that to future flips of that coin--or even to flips of other 'similar' coins.  If we observe thousands of soup cans coming off an assembly line, and know what fraction were filled slightly below specified weight, we can devise tests for the efficiency of the machinery, or methods for detecting and rejecting under-weight cans.  And there are countless other situations in which repeatable events are clearly amenable to statistical decision-making.
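
To make the soup-can case concrete, here is a minimal sketch in Python, with entirely made-up numbers: we estimate the underweight fraction retrospectively from an observed batch, then use that estimate prospectively, on the assumption that tomorrow's line behaves like today's.

```python
import random

random.seed(1)

# Hypothetical assembly line: the true (unknown) probability that a can
# is filled below specified weight.  In practice this is exactly the
# quantity we are trying to estimate from past observation.
TRUE_UNDERWEIGHT_RATE = 0.02

# Observe a large batch of cans coming off the line.
n_observed = 10_000
underweight = sum(random.random() < TRUE_UNDERWEIGHT_RATE
                  for _ in range(n_observed))

# The estimated rate -- a retrospective quantity...
estimated_rate = underweight / n_observed
print(f"Estimated underweight rate: {estimated_rate:.4f}")

# ...which we then use prospectively, e.g. to predict how many rejects
# to expect in tomorrow's run of 50,000 cans.
print(f"Expected rejects tomorrow: {estimated_rate * 50_000:.0f}")
```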

When events cannot be or haven't been repeated, a common approach is to assume that they could be, and use the observed single-study data to infer the likely outcomes of possible repetitions.  As before, we extend our inference to new situations in which similar conditions apply.  The reasoning is similar for both truly repeatable and singular events, regardless of the details about which statisticians vigorously argue.

Everyone acknowledges that there is a fundamentally subjective element in making judgments, as we've described in the previous parts of this series of posts.  Significance tests, for example, require one to choose a cutoff or decision level.  But in well-controlled, relatively simple, especially repeatable situations, the theory at least provides some rigorous criteria for making the subjective choices.

The issues become much more serious and problematic when the situation we want to understand is not replicable, not simple, or not well understood, or when even our idea of the situation is that the probabilities of the different possible outcomes are very similar to each other.  Unfortunately, these are basic problems in much of biology.

As with dice, outcome probabilities are estimated from empirical data--past experience or experiments and finite (limited) samples.  Estimation is a mathematical procedure that depends on various assumptions, and the values involved, like averages of some measured trait, carry measurement error, and so on.  One might question these aspects of any study of the real world, but the issue for us here is that these estimates rest on assumptions and are retrospective, because they are based on past experience.  But what we want those estimates for is to predict, that is, to use them prospectively.

This is perhaps trivial for dice--we want to predict the probability of a 6 or 3 in the next roll, based on our observations of previous rolls.  We can be confident that the dice will 'behave' similarly.  Remarkably, we can also extrapolate this to other dice fresh from a new pack, that have never been rolled before--but only on the assumption that the new dice are just like the ones our estimates were derived from.  We can never be 100% sure, but it usually seems a safe bet--for coin-flips and dice.
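
A small sketch of that dice reasoning, again in Python with assumed numbers: estimate each face's probability from past rolls, then use those retrospective estimates prospectively--for the next roll of this die, or, with an extra leap of faith, for a 'similar' die fresh from the pack.

```python
import random
from collections import Counter

random.seed(42)

def roll(n):
    """Simulate n rolls of a fair six-sided die."""
    return [random.randint(1, 6) for _ in range(n)]

# Retrospective step: estimate each face's probability from past rolls.
past = roll(600)
counts = Counter(past)
estimates = {face: counts[face] / len(past) for face in range(1, 7)}
print("Estimated P(face) from 600 rolls:", estimates)

# Prospective step: predict the next roll's chance of a 6 or a 3 from
# those estimates.  They hover around, but rarely equal, the 'true' 1/6,
# and extending them to a brand-new die rests entirely on the assumption
# that it behaves just like this one.
print("Predicted P(6) on the next roll:", estimates[6])
print("Predicted P(3) on the next roll:", estimates[3])
```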

Predicting disease outcomes
But this is far from the case in genetics, evolution, and epidemiology.  There, we know that no two people are genetically alike, and no two have exactly the same environmental or lifestyle histories.  So people are not exactly like dice.  Further, genes change (by mutation) and environments change, and these changes are inherently unpredictable as far as we know.  Thus, unlike dice, we cannot automatically extrapolate estimates from past experience, such as associations between genes or lifestyle factors and disease outcomes, to the future--or from past observations to you.  That is, often or even typically, we simply cannot know how accurate an extrapolation will be, even if we completely believe in the estimated risks (probabilities) that we have obtained.

And, any risk estimation is inherently elusive anyway, because people respond.  If you're told your risk of heart disease is 12%, that might make you feel pretty safe, and you might stop exercising so much, or add more whipped cream to your cocoa, or take up smoking; but if you're told your risk is 30% you might do the opposite.  Plus, there's some thought that heart disease might have an infectious component--one that's never included in risk estimators, and that is inherently stochastic anyway.  And if there's a genetic component to risk, it can vary to the extent that many families carry an allele unique to them, which can't be included in the model, because models are built on prior observations that won't apply to that family.

A second issue is that, even if other things are orderly, in genetics, epidemiology, and the study of natural selection and evolution, we are trying to understand outcomes whose respective probabilities are usually small and very similar.  As we've tried to show with the very similar (or identical?) probabilities of Heads vs Tails, or of 6 vs 3 on a die, this is very difficult even in highly controlled, easily repeatable situations.  And such control is often simply not the case in biology.

Here the risks of this vs that genotype, at many different genes simultaneously, are individually very small and similar, and that's why GWAS requires large samples, often gets apparently inconsistent results from study to study, and accounts for only small fractions of heritability (the estimated overall genetic contribution).  This means that it is very difficult to identify genetic contributions that are statistically significant--that have strong enough effects to pass some subjective decision-making criterion.
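
To see why small, similar risks force large samples, here is a rough sketch using the standard two-proportion sample-size approximation; the risk figures (1.0% vs 1.2% for two genotypes) are purely hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.8):
    """Approximate sample size per group needed to detect a difference
    between two outcome probabilities (two-sided two-proportion test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical GWAS-like situation: 1.0% disease risk with one genotype
# vs 1.2% with another -- small and very similar, as described above.
print(f"{n_per_group(0.010, 0.012):,.0f} people per genotype group")
# Roughly 43,000 per group: one reason GWAS needs such large samples.
```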

This means it's very difficult to assign a statistically reliable risk probability to persons based on their genotype, and certainly makes it difficult to assign a future risk, or to know whether each person with that genotype has the same risk as the average for the group.  That is why many of us think that the current belief system (and that's what it is!) in personalized genomic medicine is going to cost a lot for relatively low payoff, compared to other things that could be done with research funds--for example, studying traits that really are genetic: those for which the risk of a given genotype is so great, relative to other genotypes, that we can reliably infer causation that is hugely important to individuals with the genotype, and for which the precision of risk estimates is not a big issue.


Probabilities and evolution
Similarly, in reconstructing evolution, if contemporary genotypes differ very little in adaptive (reproductive) success, the actual success of the bearers of the different genotypes will be very similar--and these, too, are probabilities (of reproduction or survival).  And if we want to estimate selection in the distant, unobserved past from the net results we see today, the problems are much more challenging, even if we thoroughly believe in our theories about adaptive determinism or the genetic control of traits.  Past adaptation also occurred, we usually think, very slowly over many, many generations, making it very difficult to apply simple theoretical models.  Even looking for contemporary selection, other than in clear situations such as the evolution of antibiotic or pesticide resistance, is very challenging.  Selective differences must be judged only from data we have today, and directly observing causes of reproductive differences in the wild is difficult and requires sampling conditions rarely achievable.  So naturally it is hard to detect a pattern, and hard to make causal assertions that are more than storytelling.
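
A toy simulation, with entirely assumed parameters, makes the point about similar adaptive probabilities: when the selection coefficient is tiny, allele-frequency trajectories with and without selection are hard to tell apart, because random drift dominates the outcome.

```python
import random

random.seed(7)

def wright_fisher(p0, N, s, generations):
    """Toy Wright-Fisher trajectory of an allele with selection
    coefficient s in a population of N diploid individuals."""
    p = p0
    for _ in range(generations):
        # Weak selection shifts the expected frequency only slightly...
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # ...and binomial sampling of the 2N gene copies adds drift.
        p = sum(random.random() < p_sel for _ in range(2 * N)) / (2 * N)
    return p

# Compare final frequencies with no selection (s=0) and with very weak
# selection (s=0.001): the two sets of outcomes overlap heavily.
for s in (0.0, 0.001):
    finals = [wright_fisher(0.5, N=500, s=s, generations=100)
              for _ in range(5)]
    print(f"s={s}: final frequencies {[round(f, 2) for f in finals]}")
```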


And, finally
We hope to have shown in this series of posts why we think we have to accept that 'probability' is an elusive notion, often fundamentally subjective and not different from 'belief'.  We set up criteria for believability (statistical significance cutoff values) upon which decisions--and in health, lives--depend.  The instability of evidence, the vagaries of cutoff criteria, and our frequent reluctance to accept results we don't like (treating evidence that doesn't pass our cutoff criterion but is close to it as 'suggestive' of our idea, rather than rejecting our idea) all conspire to raise very important issues for science.  The issues have to do with allocation of resources, egos, and other vested interests upon which serious decisions must be made.

In the end, causation must exist (we're not solipsists!), but randomness and probability may not exist other than in our heads.  The concept provides a tool for evaluating things that do exist, but in ways that are fundamentally subjective.  But the system of science and its use that has evolved keeps us in such a hurry that we are not nearly humble enough about what we know about what we don't know.  That is a fact that exists, whether probability does or not!

It is for these kinds of reasons that we feel research investment should concentrate on areas where the causal 'signal' is strong and basically unambiguous--traits and diseases for which a specific genetic causation is much more 'probable' than for the complex traits that are soaking up so many resources.  Even the 'simple' genetic traits, or simple cases of evolutionary signal, are hard enough to understand.

6 comments:

Anonymous said...

Great series of posts on the nature of probability and statistics! I suspect you are aware of a relatively small but very active school of statistical thinking that wholly embraces the notion that probability is subjective, entirely in our heads, and not necessarily derived from repeatable events, but from the beliefs of the statistical analyst (wherever they may derive). This is known as Bayesian statistics. Its ideas are nearly 250 years old (originating in a paper by Thomas Bayes published in 1763); since the mid-twentieth century it has maintained a steady voice against the more dominant mode of "frequentist" statistics (which derives from thinking about repeated events); in recent years it has become especially appreciated in genetics, evolution, epidemiology, and other computationally intense disciplines.

Ken Weiss said...

Absolutely. And thanks for your thoughtful comment!

We use Bayesian approaches informally and in our own research are developing more formal ways to take multiple sorts of mapping data into account to evaluate candidate regions.

Bayesian methods provide a mechanical way, so to speak, of adjusting our strength of confidence in some explanation.
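
For instance, a minimal sketch of that mechanical updating, with a hypothetical coin and made-up flip counts--a Beta prior on P(heads) updated by observed data into a Beta posterior:

```python
# Uniform Beta(1, 1) prior: every possible bias equally believable a priori.
prior_a, prior_b = 1, 1
# Made-up data: 14 heads and 6 tails in 20 flips.
heads, tails = 14, 6

# Conjugate update: the posterior is Beta(prior_a + heads, prior_b + tails).
post_a, post_b = prior_a + heads, prior_b + tails
posterior_mean = post_a / (post_a + post_b)
print(f"Posterior mean for P(heads): {posterior_mean:.3f}")
# The subjectivity hasn't vanished; it has moved into the choice of
# prior and of the model itself.
```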

Genetics and evolutionary biology use such methods all the time, as you say. I think that investigators may take their 'Bayesian' posterior probabilities more seriously than is legitimate relative to actual truth, as if it removed the subjective component. Then the conclusions are advertised to or get picked up by the gullible public media, grant system, etc. as if they were truths.

I would also argue informally at least (I'm not qualified to say whether this is formal or not), that confidence limits are sometimes applied to posteriors, which is a kind of lapse back to frequentism (I think that MCMC methods in biology do that, by examining parametric surface shapes around chosen maxima).

Likewise, as I'm sure you know, but many don't, likelihood methods are designed in a way to get the most out of existing data without assumptions of replicability. But then support intervals are constructed, which basically invoke frequentism, I think.

These are technical points, and hopefully my perception isn't too far off. In any case, there is still the element of subjectivity that (I think) needs much more serious awareness.

Anonymous said...

Yes, I agree that one of the greatest challenges is to convey the subjective nature of probability to the general public, even when there is a well established technical interpretation as such. Your efforts in this regard are appreciated.

Interestingly, influence can work backwards. For example, as Bayesian methods have become more accepted (and perhaps in order for them to become accepted), it has increasingly been the case that "prior" probabilities are set according to "objective" criteria, either by mathematical formulations, or by referring strictly to historical data (thus lessening the subjective influence of the analyst). Rather than the Bayesian viewpoint opening science to subjectivity, science seems to have constrained Bayesian ideas to become more "objective."

Ken Weiss said...

I think that's right, so long as we keep doubt and uncertainty, and our various assumptions, in mind.

James Goetz said...

Ken, I agree with your focus in this series. In my comment at the beginning of this series, I only looked at the theory of probability. For example, in some cases I will refer to a fair coin toss while I know that I could never prove that any given coin toss was absolutely fair, but I need such a theory to make any sense of science. And in your cases, you are criticizing variables in, say, GWAS that are incredibly complex and often impossible to quantify in a statistical study.

Ken Weiss said...

If religion or philosophy tries to understand a perplexing world using various kinds of approaches, science is trying to understand other aspects of the world, also perplexing. Philosophers and religious scholars recognize the challenges--even when dealing with received texts, as you of course know very well.

We should acknowledge our similar problems and conundrums, and where matters of 'faith'--agreement to accept uncertainties--are appropriate.

We can't know everything, apparently, and statistical methods are an eerily insubstantial but effective way of dealing with real-world problems. But we should acknowledge what such methods are, and aren't.