We are led to post on this now because of a recent BBC Radio 4 program, More or Less, which is always about the meaning of statistics; the Dec 30 program happened to mention the Italian statistician Bruno de Finetti and his book, Theory of Probability, which begins thus:
PROBABILITY DOES NOT EXIST
The abandonment of superstitious beliefs about the existence of the Phlogiston, the Cosmic Ether, Absolute Space and Time, . . . or Fairies and Witches was an essential step along the road to scientific thinking. Probability, too, if regarded as something endowed with some kind of objective existence, is no less a misleading misconception, an illusory attempt to exteriorize or materialize our true probabilistic beliefs.

So, God is dead, but what does this mean, exactly? A 2002 paper by Robert Nau on de Finetti's thesis explains that de Finetti meant that probability is nothing but a subjective analysis of the likelihood that something will happen, that probability does not exist outside the mind. That is, it's the rate at which a person is willing to bet on something happening. This is as opposed to the classicist's or the frequentist's view of the likelihood of a particular outcome of an event. That view depends on the assumption that the same event could be identically repeated many times over; the 'probability' of a particular outcome then has to do with the fraction of the time that outcome results from the repeated trials. This example from Nau's paper clarifies the differences in approach.
For example, in the case of a normal-looking die that is about to be tossed for the first time, a classicist would note that there are six possible outcomes which by symmetry must have equal chances of occurring, while a frequentist would point to empirical evidence showing that similar dice thrown in the past have landed on each side about equally often. A subjectivist would find such arguments to be suggestive but needlessly encumbered by references to superfluous events. What matters are her beliefs about what will happen on the single toss in question, or more concretely how she should bet, given her present information. If she feels that the symmetry argument applies to her beliefs, then that is sufficient reason to bet on each side at a rate of one-sixth. But a subjectivist can find other reasons for assigning betting rates in situations where symmetry arguments do not apply and repeated trials are not possible.

In other words, the idea is that probability is not part of the real world, only of one's belief in the nature of that world.
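To make the contrast concrete, here is a minimal sketch in Python (ours, not anything from Nau's paper): the frequentist estimates each face's probability as its relative frequency over many simulated rolls, while the subjectivist needs no repetition at all, only the symmetry of her beliefs about the single toss in question.

```python
import random

# The frequentist reading: estimate each face's probability as its
# relative frequency over many simulated rolls.
rolls = [random.randint(1, 6) for _ in range(60_000)]
for face in range(1, 7):
    freq = rolls.count(face) / len(rolls)
    print(f"face {face}: observed frequency {freq:.4f} (symmetry suggests {1/6:.4f})")

# The subjectivist reading needs no repetition: if your beliefs honor the
# symmetry argument, your fair betting rate on any one face of the single
# upcoming toss is simply 1/6.
betting_rate = 1 / 6
print(f"subjectivist betting rate on any face: {betting_rate:.4f}")
```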
What does this have to do with the risk of having a heart attack? Well, how much are you willing to bet on your chances? That is, if your chances are 20%, and you feel that's high, you might be willing to do whatever you can to reduce your risk: lower your cholesterol, go to the gym more regularly, quit smoking, or become a vegan. Someone else, though, might feel that 20% is not so high, and do nothing at all to alter what are currently considered to be their risk factors. But the physical basis of that belief is far less clear than the idea of the belief.
And physicians advising us differ in how much they are willing to bet on the probability we'll get sick. Some are very diligent about cholesterol, some less so; some advise all men of a certain age to have a PSA test for prostate cancer, others advise none. They are reading the same probabilities, but what they make of them differs. Indeed, the question of interpretation is secondary to the notion of probability itself.
Yet how do we account for the role that probabilities do play in the real world, as in the setting from which much of probability theory was developed, where events can be repeated: gambling? And 'bet' is the appropriate operational concept. The formal theory was largely developed in the literal context of gambling, but the same idea applies to health.
In rolling dice, we have six outcomes and no reason to prefer any of them (see below!). Knowing nothing about a particular roll in advance, we might decide that each possible outcome is equally likely to result. We can't know which face will come up, so we might say that in a large number of rolls each face would come up about the same number of times.
In fact, however gambling notions of probability first arose (scholars have some ideas, but we don't), by the time formal theories were being developed there was extensive and systematic experience with past sets of rolls that actually did occur (apparently not obviously so in Roman times, when gambling with bones was thought to be related to things like how the gods viewed the gambler, etc.). We don't personally know how extensive such data, experimental or otherwise, were, but the notion of equal occurrences not only seemed intuitive at the time but was backed up by experience. '6' came up about 1/6th of the time in dice games, leading naturally to a theory that every side had an equal chance of arising, 'chance' here meaning the fraction of the time it will arise.
Heart attack risks based on, say, cholesterol levels are based on past experience, too. But unlike dice, people are more than simple structures. We have more than cholesterol levels. So the fraction of people with cholesterol over some level who had heart attacks in our studies is used to estimate the fraction of people with such a level who will have a coronary in the future. Yet we know very well that each person's 'ancillary' risk factors in the data on which the probability was based were different, and worse, that we simply cannot know about those risk factors in the future. So what does the genetic-risk-purveyor's probability actually mean?
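A minimal sketch of how such a figure is typically derived, with invented numbers purely for illustration: the estimate is just the observed fraction of events among past subjects above some cholesterol threshold.

```python
# A hypothetical retrospective cohort: (cholesterol_mg_dl, had_heart_attack).
# These numbers are invented for illustration only.
cohort = [(250, True), (260, False), (245, True), (270, False),
          (190, False), (180, False), (255, False), (265, True),
          (175, False), (240, False)]

threshold = 240
high = [event for chol, event in cohort if chol > threshold]
risk = sum(high) / len(high)
print(f"fraction of high-cholesterol subjects with events: {risk:.2f}")
# That fraction is then projected onto future patients with the same
# cholesterol level, even though their other, unmeasured risk factors
# will differ from everyone in the cohort.
```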
We also talk in probabilistic terms when we say such things as that God probably exists (or doesn't). This exposes the unclarity of such wording. God either does or doesn't exist, clearly without any actual 'probability', so this is really just a statement, using a serious-sounding word, of strength of belief. If you examine it closely, the same really applies to similar statements about whether we'll have a heart attack or not, or whether 5 and 6 will come up on the next roll of a pair of dice. Even though those sound more rigorous, or suggest experiments or relevance to actual data, they too are based on the assumption that multiple replicates of unique events can take place, and they are based on some idea, or 'model', of the process involved (such as how we measure cholesterol, how we sample people and measure their cholesterol and diets and obesity, and so on), and even how dice are rolled (see next installment!).
Some statisticians and books and lectures simply assume that we know what probability is and don't attempt to define it, or define it only in practical or frequentist terms. Others try to wriggle out of this situation by saying that the frequentist terms can be disregarded and that there are other systematic ways to extract, from the available data alone, some idea of what our best assessment of the situation is. These are called 'likelihood' and 'Bayesian' approaches (there may be others we're not aware of), but even if they stay somewhat closer to the actual data, they are essentially ways to strengthen belief, and belief resides in ourselves rather than being a physical property of the object of belief. That is the subjectivist assertion.
Next time, we'll show how the meaning of 'probability', and the interpretation of repeated events in probability terms, is far from clear even in seemingly simple cases like coin-flipping or dice-rolling.
28 comments:
anne, you say de finetti's definition is the person's subjective judgement of how likely something is; then you give an example where the chance of a heart attack is given as 20%. is this their own assessment or their doctor's? either way i fail to see what it has to do with whether they will make an effort to live more healthily, unless the 20% figure is as accurate as analysis of a large dataset from a similar demographic will allow, and the individual patients assess their own risk as being much higher/lower than the more objective figure presented to them.
i can see the appeal of treating the tricky definition of probability as being subjective, but i'm not sure where that would leave quantum physics!
If I understand the issue you raise, I would say (I can't speak for Anne!) that the value 20% is an estimate based on some kind of data, which reflects assumptions that were made in its collection. But what you do is a subjective response to the value 20%. This may or may not reflect whether the 20% is statistically 'significant', since de Finetti's argument, one widely shared, is that ultimately it's a belief-based response in either case.
And as we tried to point out, perhaps not clearly enough, responding to a doc's estimate of 20% also assumes that the same applies to you and your future--yet the 20% value was estimated from retrospective data. You thus must assume the future is like the past.
Probability is almost entirely an axiom if its very definition assumes things like replicability, or its definition is circular (probability is the 'chance' that....). So aren't there two issues: What does one mean by probability in the quantum physics sense? And how do you test or respond to it?
Maybe here there's some unclarity, perhaps on our part, about the difference between probability and making inferences from probabilistic concepts or data?
And in medicine, unlike physics, one is making a big leap of faith to assume that the relevant future will be like the past.
Ok, here’s just one of many heart attack risk assessment tools on the web: https://www.heart.org/gglRisk/locale/en_US/index.html?gtype=health Evaluation of risk here is based on large numbers of observations of people with the risk factors taken into consideration here (gender, age, abdominal obesity, triglycerides, blood pressure, smoking and so on) who did or did not go on to have a heart attack. So it's population-based and objective, if you accept that the most important risk factors are known, and so on.
My point about this is that you and your doctor will do with a 12 or 20 or 30% risk what your preconceived assessment of the idea of 'risk' causes you to do. That is, your response is your subjective analysis of what this means to you: how much you'll bet (that is, change your lifestyle) based on this number. If you find the idea of a 12% risk of having a heart attack alarming, you are likely to take whatever measures you and/or your doctor think are likely to lower your risk. On the other hand, you might think that 30% is pretty good odds, and keep having your Big Mac every day for lunch.
And that only reflects your assessment of your odds, not what the odds actually even mean. What does it mean to be at 12% risk of a heart attack? It's kind of mystical.
Anne's last point: Based on some test data a group of people can be identified who, assuming the accuracy of the test, are at some risk, like 12% (whether that means lifetime risk, or if you live forever, or by age 60, and if you don't change your ways, or whatever).
But this does not tell us, and we currently rarely have the ability to test, whether 12% of these people are at 100% risk, or each person is at 12% risk (whatever that implies, causally), or there is some other distribution of true risk within the group.
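A small simulation, a sketch of this very point, shows why outcome data alone can't settle it: a world where 12% of people are doomed and the rest immune produces the same event counts as a world where everyone carries a uniform 12% risk.

```python
import random

N = 100_000

# World A: 12% of people are certain to have the event; the rest never will.
world_a = [1.0 if random.random() < 0.12 else 0.0 for _ in range(N)]
events_a = sum(random.random() < p for p in world_a)

# World B: every person carries the same 12% risk.
events_b = sum(random.random() < 0.12 for _ in range(N))

print(f"world A event rate: {events_a / N:.3f}")
print(f"world B event rate: {events_b / N:.3f}")
# Both worlds produce ~12% events, so outcome counts alone cannot tell us
# which distribution of individual risk the group-level 12% summarizes.
```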
As to quantum mechanics, I haven't nearly the knowledge required to understand the data specifically, but the idea as I understand it is that the position of a subatomic particle, such as an electron, at any time of measurement cannot be predicted specifically. All one can say is that there is a distribution of relative probabilities that the electron will be at each possible position.
I don't know what empirical evidence was used to determine this, but it must at some point have been based either strictly on theory (that is, some principles people decided to accept but no longer question) or on some data. If data, then there must have been, explicitly or implicitly, something akin to a significance test of the quantum hypothesis, that is, of the fit of imperfect data to the theory. And that would mean a subjective judgment of some sort, a decision-making point.
And interestingly, there is this parallel universe school of thought that uses quantum phenomena to argue that everything is perfectly deterministic but that each universe is independent; the distribution that we take to be of probabilities is really the relative frequency of the value in question among these perfectly deterministic universes. Thus, what looks like a probability process to us is really just an illusion that reflects which of the universes we're in.
Or something to that effect, strange and a bit hard (for me) to grasp. But it further muddies the idea of probabilism and determinism and of what kind of whacky universe we have found ourselves groping through!
The last sentence of the first paragraph of your post struck home. Fifteen years ago I spent a year wondering if I was part of the x% of the group with 100% risk or the rest with 0%. I knew that no one knew what my particular outcome would be -- only the percentage of a group of people in my situation that probably wouldn’t be around in the near future. [I’m skipping the difficulty of judging similarity among members of the group.] That number did not affect choice re what to do next, but it did affect my outlook for the next year, and actually -- ever since. Seize the day, live it up. We never know which of the two groups we’re in, nor how large it might be.
I look forward to the next installment. Any ideas on better ways for the doc to explain the situation to the patient?
Yes, exactly. It's a real challenge for doctors to translate from population level data -- which is where risk estimations come from, necessarily -- to their patients, each a unique individual with unique environmental exposures, genomes, and actual (unknown) risk. This on top of the fact that everyone interprets risk in his or her own way.
Some doctors simply treat any risk as 100% risk, and so recommend an aggressive response. Other doctors are more sanguine about risk. What _should_ they do? I don't think there's a right answer, simply because, as with your experience, actual outcome so often can't be predicted. A doctor I like isn't necessarily a doctor you'll like, because you and the same doctor I like won't see eye to eye on how to treat risk, whereas she and I happen to agree.
And I'll add that we must realize that the doctor is often too busy, or not specifically trained, to judge the research results him/herself. S/he relies on some general guidance from some source or other. And it's not just that doctors weren't trained as researchers or statisticians; whatever training they had was back in their school days anyway.
They're busy taking care of patients who need answers, often right away.
So not only is it unreasonable to expect typical doctors to have a sophisticated understanding of the literature, but nobody, including the best researchers themselves, knows the distribution of individual risk that the net risk value summarizes.
Interesting, and somewhat spooky, to think about....
Can you comment on the following phenomenon? http://www.theatlantic.com/technology/archive/2012/01/science-can-neither-explain-nor-deny-the-awesomeness-of-this-sledding-crow/251395/?google_editors_picks=true
Crows sled?!
We posted on smart corvids a few weeks ago, so this sliding crow isn't a surprise! Aside from the total awesomeness of the video (!!), this is the best part of that story:
Was there some kind of greater lesson here about the evolutionary process or how crows use play?
Kamil demurred. "It would just be storytelling."
Yes, who knows what this means, other than that a crow can adapt to its surroundings, including using a bottle lid for a sled!
Fascinating. Thanks for your take on it!
so you are saying that there is an objective probability, but people's psychology means they will not only have differing levels of acceptable risk but also be optimistic or pessimistic about their assessment of whether their personal risk is below or above the objective assessment. again, i'm not sure how this fits into de finetti's interpretation as it does include an objective probability...
the many worlds theory doesn't negate the existence of objective probabilities - at the point where the universe splits they are both the same so how can they both be deterministic and yet split an event into 2 different outcomes?
probability wavefunctions are a good example, as you give, but perhaps it's a more difficult concept than that of radioactive decay:
every fundamental particle, such as the un-ionised atoms of plutonium, is exactly the same. when a radioactive substance decays there is an extremely accurate probability for each atom to decay in a given time period. you can't know which individual atom will decay during any time period, but because of the large numbers the probability can be determined very accurately.
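a sketch of this point in python (arbitrary half-life, invented numbers, purely for illustration): no one can say which atoms decay, yet the aggregate fraction matches the theoretical probability very tightly.

```python
import math
import random

half_life = 10.0                   # arbitrary time units, for illustration
lam = math.log(2) / half_life      # decay constant
t = 5.0                            # observation window
p_decay = 1 - math.exp(-lam * t)   # per-atom probability of decaying by time t

n_atoms = 1_000_000
decayed = sum(random.random() < p_decay for _ in range(n_atoms))

print(f"theory: {p_decay:.5f}, observed fraction: {decayed / n_atoms:.5f}")
# The simulation cannot say *which* atoms decay, but with a large sample
# the aggregate fraction pins the probability down very accurately.
```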
also - deterministic systems assume cause and effect, but i'm afraid einstein got rid of that as a given!
Not exactly. I'm saying that "objective" probabilities are calculated, but their meaning is elusive. In fact, you either will or you won't have a heart attack. Your 'risk' is either 0 or 100%. You won't have 12% of a heart attack, so whatever these risk calculators are calculating, it isn't what will happen to you. What they mean is up to what you make of them.
i guess, as a physicist, i'm coming at this issue having worked with inherent probabilities and 'made my peace' with that, because you can't really argue with the experimental proof.
your risk of a heart attack isn't "either 0 or 100%" it is actually 12%. this is one of the weird things physics has given us, and is the same principle as schrodinger's cat - which people would think is either 0 or 100% dead but is actually in a superposition of states and is indeed 50% dead in some sense.
the issue of 'what is probability' isn't a purely philosophical one - there are physical experiments producing relevant evidence on the issue. i think you need a quantum physicist to do a guest post if you are doing a series on it (not me!).
We certainly might need a physicist to explain what their ideas are! But here it is not as you say, I think. If every person were identical, as every electron (supposedly?) is, then you would be right.
But every person is different, even if they share the same variant whose risk is estimated (retrospectively) as, say 12%. They are not replicates, and therein lies the problem.
Whether in truth, whatever that means, anyone is actually doomed ('risk'=deterministic 100% presence of the trait) or immune (no chance of getting it), no one can say.
Just perhaps to make the point, in a limping, Schrodinger's-cat-like way: if the supposed risk of getting trait X truly was 100%, but the data do not clearly show that this means by age 50, then someone in that category who dies of something else earlier was not at 100% risk.
The 100% figure would have been obtained from data showing that all people in my very finite sample who had some particular genetic variant got trait X. That's a retrospective 'cat' observation that doesn't really address the future cats.
If probability is merely an illusion, then all statistical studies are invalid. And if all statistical studies are invalid, then there is no basis for scientific theories based on statistical evidence.
I think that is far too strong a statement. Statistical studies make assumptions, and when they are accurate enough the results are reliable enough, though 'enough' is a matter of judgment. Is something real if it would occur only one time in 1000 without the stipulated cause? Does such an observation reflect some measurement error, so that the cause really is deterministic? These are some of the questions we need to be more careful about asking.
If one can repeat an experiment, as in rolling dice, then we can come close to the truth (assuming there is one truth) by observing repeated trials. Can we extend this observation on 12 dice to some other dice? When does it matter to determine whether the other dice are made of the same composition as the first set? Does it matter if the probability of Heads in a coin toss is 51% rather than 50%? Or 48.553%?
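As a back-of-the-envelope illustration of how much such differences matter in practice, a standard two-proportion power calculation (a sketch, using conventional 5% significance and 80% power, nothing sacred) shows how many tosses it would take to tell a 51% coin from a fair one.

```python
import math

# Roughly how many tosses to distinguish a 51% coin from a fair one?
p0, p1 = 0.50, 0.51
delta = p1 - p0
z_alpha, z_beta = 1.96, 0.84        # two-sided 5% test, 80% power
p_bar = (p0 + p1) / 2

n = ((z_alpha + z_beta) ** 2) * p_bar * (1 - p_bar) / delta ** 2
print(f"tosses needed: {math.ceil(n):,}")   # on the order of 20,000
# A 48.553% coin has a larger gap from fair (~0.0145), so it needs
# roughly half as many tosses; either way, far more trials than anyone
# casually flips.
```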
These questions can be trivial or profound depending on the situation. Disease-related issues in genetics, and inferring aspects of evolution are areas where the probabilistic landscape is not so clear, as upcoming installments in this series will (try to) argue.
Probability, related to actually or imaginably repeatable events, provides a practical guide for observation in science. For some that's enough. For me personally, I'd rather know whether we're coming to know some ultimate truth, or just finding approximations around which we can do some storytelling.
I believe that the issue is the far too lax tendency we have in science, especially in the areas we write and care about, to accept sloppy evidence as supporting storytelling.
But, of course, that's just my own view....
I would have thought that Einstein never acceded to true probabilism (his famous God not playing dice quip). But I'm no expert. We are dealing in macroscale issues with epidemiology and evolution, and I don't have a good sense of how one should relate that to quantum physics (even if I understood quantum physics--my parallel self in various multiverses does, but we have decohered and I can't communicate with them!).
In any case, whether or not there is true determinism or true probabilism at the nano level, the nature of uncertainties at our level is what we think raises all sorts of problems that most in biology don't want to acknowledge, much less think about very seriously.
Is this a better statement?:
If approximations of probability are merely an illusion, then all statistical studies are invalid. And if all statistical studies are invalid, then there is no basis for scientific theories based on statistical evidence.
he did say that, though not relating to cause and effect, but consensus is that he solidly lost that argument with bohr... with no reputable physicist siding with einstein now in the face of the mountain of evidence that probabilities are inherent in physics.
there are plenty of macroscopic quantum effects; just google 'macroscopic quantum effects'!
and again, radioactive decay is inherently a statistical process that i'm sure you will agree can have macroscopic implications.
quantum tunneling probabilities can be modeled and calculated - including for some biochemical enzyme reactions if you would like a biologically relevant example.
I'm outta my league here! I guess my only question at this point is whether some things are knowably truly probabilistic, as opposed to the idea that empirically we can't tell. And, perhaps, the idea that probabilistic in this sense invokes a funny kind of causation, and requires that we specify a distribution of possible outcomes....and doesn't showing or estimating that require assumptions about replication and statistical 'significance'?
i know it's certainly a strain on my brain to think that 2 things that are exactly the same can spontaneously behave differently for no causal reason! but that's the picture experiments and theory in modern physics are painting.
it seems that there are inherent probabilities in nature and to nail down what they are you can put faith in the accuracy of a theory and calculate it, or you can measure the frequency of the event with a large sample size - or both to try to validate the theory. large sample sizes aren't usually a problem at quantum scale - with billions of billions of atoms in a milligram of matter. not billions and billions, billions of billions.
at the planck scale virtual particle-antiparticle pairs pop in and out of existence and the fields they produce could possibly causally affect things like radioactive decay, but then you are left with the probability of the pair popping up...
We saw exactly the dice-like problem with reports from the Large Hadron Collider about Higgs bosons. A claim that the observed tracks were significant at the level of 2 or 5 standard deviations above the mean. In other words, even with billions (quadzillions?) of collisions, so few were expected that the usual kinds of tests were required.
I've heard that the same applies to trying to catch neutrinos in the act, or to ask if they travel faster than light, etc.
So even physics' billions don't always take care of the problem.
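For reference, the 'sigma' language physicists use translates directly into ordinary p-values; a minimal sketch of the standard conversion (the code is ours, the arithmetic is textbook):

```python
from math import erfc, sqrt

# Convert a z-score ("sigma level") to a one-sided p-value, as used
# in reporting particle-physics results.
def sigma_to_p(z: float) -> float:
    return 0.5 * erfc(z / sqrt(2))

for z in (2, 3, 5):
    print(f"{z} sigma -> p = {sigma_to_p(z):.2e}")
# 5 sigma corresponds to p ~ 2.9e-7: even with enormous numbers of
# collisions, so few signal events are expected that this stringent
# cutoff is still a statistical-inference judgment, not a certainty.
```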
Although there is a frequentist concept that relates relative frequencies to limiting frequencies in the long run, employing frequentist probability and probabilistic models does not assume identical repetitions at all. There are numerous consequences of a frequentist probability model for the next n trials, say, and it is these consequences that are most useful for the statistician (for testing and learning about actual phenomena).
Mayo
errorstatistics.com
You're in an area I don't follow closely and I don't get your point from your message. I have not had a chance to plumb your blog, though it looks interesting.
My only reaction is that other than the philosophers of statistics and epistemology, the vast majority of people in genetics and evolution treat frequentist approaches as being about identical repetitions (i.e., fixed probabilities of a given event per trial). Based on estimates of the probability per test made from data, the outcome distribution for the next n trials is, in almost every application in biology that I'm aware of, based on fixing that 'p'.
We know that can change (as in evolutionary biology, where the allele frequency in one generation changes stochastically in the next, and the new value becomes the parameter for the following generation). But per generation the probabilities are assumed to be fixed.
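A minimal sketch of that scheme (a generic Wright-Fisher-style simulation, with an arbitrary population size, not any specific published model): within each generation p is treated as a fixed per-draw probability, but binomial sampling makes next generation's p a stochastic outcome.

```python
import random

N = 500          # diploid population size -> 2N gene copies (arbitrary)
p = 0.5          # starting allele frequency
for gen in range(10):
    # Within a generation, p is a fixed per-copy probability...
    copies = sum(random.random() < p for _ in range(2 * N))
    # ...but the realized frequency becomes the next generation's parameter.
    p = copies / (2 * N)
    print(f"generation {gen + 1}: p = {p:.3f}")
```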
There are some evolutionary models, and perhaps others in genetic epidemiology that I'm unaware of, that allow an event to have a distribution of probabilities. Whether those have led to anything useful (since, for example, one needs to know something about the distribution), I don't know.
But even then I think the idea of repeatability is central. If there are deeper philosophical notions, then I don't know about them (tho' don't doubt their existence).
As to Bayes, I gather from your blog that you denigrate Bayesian commitment to subjectivity (or something like that) and argue that there is a revival of appropriate frequentism--these are my ways of describing the little I could glean from a cursory glance at your blog, which I hadn't known of. I am not qualified to comment on that in any deep way.
There are very sound applications of Bayesian approaches in focused problems in genetics (for example, inferring mode of inheritance in family data). And clearly frequentist approaches are useful and ubiquitous; my strongest personal criticism of the latter would be the arbitrariness and post-hoc application of significance cutoffs or power computations.
But they are all subjective in one way or another, and I think that too often is not recognized.
The inherent problem with believing in probability is right there: a die has six sides and 6 comes up 1/6th of the time.
But a die roll's outcome can be precisely controlled by the roller, or determined by anyone observing it as it rolls (one simply accounts for and uses spatial mapping/perception: velocity, strength, curvature of each surface it will hit, ambient wind, so on and so forth). A coin, for instance, can land more than 3 ways as well as not land at all, but which way it lands is entirely determined by the one flipping the coin. Same for shuffling a deck, dealing cards, etc.
Probability absolutely does not exist; every outcome has a highly specific, deliberate cause/effect or action/reaction determination. You can make a choice, but that choice can never be 'random' either (by the mere definition of choice and making one, not even regarding the fact that 'random' doesn't exist either). All these things merely refer to the inadequacy of the determinator to control the outcome or to utilize/account for data/information readily and easily available.
'Random Chance' (which means something very different scientifically) likewise is an illusory construct based on beliefs (instead of objective or subjective reality).
While something like 'luck' definitely exists, it's not exactly what it sounds like. But yes, it kind of boils down to the following situation: if it must happen to someone and only that specific someone, it will definitely happen to you, but this is entirely because of choices you've made. (The so-called 'your lot in life', when you refuse to change the core of who you are and your decision trees.)
I don't think that is necessarily right, Niji, no matter how intuitive it sounds and no matter that the examples you give do seem to be deterministic and 'probability' essentially is a reflection of the practical uncertainties. However, quantum physics is currently viewed as fundamentally probabilistic. There are various ways to interpret that, one being that there is a level of deterministic causation to which science has not yet been able to delve. The other is that true probabilism is just not something our brains or our culture makes it easy to understand, but that physicists 'understand' it in the sense of being able to write rigorously accurate mathematical models, and hence accept that they are modeling some fundamental truth. So I think the issues are pretty deep, philosophically speaking.