Below is the second installment in a short series of posts by a current Penn State graduate student in Chemical Ecology, Tristan Cofer. The thoughts are based on conversations we have been having and on reading he has been doing on these topics. The idea of the posts is to provide reflections by someone entering the next generation of scientists, looking at various issues in understanding, epistemology, and ontology as they are seen today, by philosophers and in practice:
******************
Probabilities are everywhere. They come up in our conversations when we talk about making plans. They are there in our games as “chances”, “odds”, and “risks”. We use them informally when we make decisions about our health and well-being. And, in a more formal sense, we use them in science when we make inferences from data. Indeed, probabilities are so common that they can at times seem almost too familiar to question.
But just what exactly are we talking about when we talk about “probabilities”? When I say, for instance, that the probability that a tossed coin will land heads up is 50%, am I saying something about that coin’s disposition to produce a certain outcome, or am I only expressing the degree to which I believe that the outcome will occur? Do probabilities exist out there in the real world as things that we can measure, or are they just in our minds as opinions and beliefs?
The short answer seems to be, yes, probabilities are both. They have an objective and a subjective element to them. This duality has apparently been there from the start, when formal probability concepts were first developed in the seventeenth century. According to the philosopher Ian Hacking, during the Renaissance the term “probable” was taken to mean “approved by some authority” rather than supported by evidence. It was not until the Enlightenment, when early Empiricists first began looking to Nature for “signs” to support causal associations, that “probable” came to mean “having the power to elicit a change”. Hence, “approval by testimony” from people and institutions was superseded by evidential observations. Transforming signs into evidence helped to advance what we might call frequency-based induction, which was formalized as a mathematical concept in the Port Royal Logic in 1662.
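To make the frequentist picture concrete, here is a minimal simulation sketch in Python (a toy of my own; the coin’s “true” bias of 0.5 is an assumption of the sketch, not something given to us). The relative frequency of heads settles down as tosses accumulate, which is exactly the regularity that frequency-based induction leans on:

import random

# Toy frequentist picture: the running relative frequency of heads
# stabilizes as tosses accumulate (the law of large numbers at work).
# The bias p = 0.5 is an assumption of this sketch.
def running_frequency(n_tosses=100_000, p=0.5, seed=42):
    rng = random.Random(seed)
    heads = 0
    for i in range(1, n_tosses + 1):
        heads += rng.random() < p
        if i in (10, 100, 1_000, 10_000, 100_000):
            print(f"after {i:>7} tosses: frequency of heads = {heads / i:.4f}")

running_frequency()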
Of course, subjective probabilities have hardly disappeared; in fact, it may be argued that they have seen a resurgence in the popularity of Bayesian, or conditional, statistical inference. That said, I am not sure that understanding how the term “probability” developed gets us much closer to understanding what probabilities really are.
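For contrast, here is an equally minimal sketch of the Bayesian, or conditional, view, using the standard Beta-Binomial conjugate update (the prior and the data here are invented purely for illustration). Probability in this picture is a degree of belief, revised as evidence arrives:

# Bayesian sketch: with a Beta(a, b) prior on the coin's bias and
# h heads in n tosses, the posterior is Beta(a + h, b + n - h),
# a textbook conjugate result.
def update_beta(a, b, heads, tosses):
    return a + heads, b + (tosses - heads)

a, b = 1, 1                      # uniform prior: no initial opinion
a, b = update_beta(a, b, 7, 10)  # suppose we observe 7 heads in 10 tosses
print(f"posterior mean belief in heads = {a / (a + b):.3f}")  # 8/12 = 0.667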
It seems that in order to make progress here, we must talk about cause and effect. Namely, we need to discuss whether probabilities are like physical laws that define an event, or whether they are contrivances that we use to describe things after the fact. If they are descriptions based on the past, then what rationale do we have for extending our inferences into the future? Is there any legitimate guarantee that future events will proceed at the same frequency as their predecessors? And even if they do, for how long?
On the other hand, we might ask, if probabilities are only descriptive, then what makes them so regular? Why does a tossed coin land heads up one-half of the time, almost as though it had some property that we might call its “probability”? Moreover, how are probabilities such as this determined? Could it be that we really are living in a clockwork universe, and that even our uncertainty is defined by deterministic processes? These questions are perhaps beyond what science and mathematics are able to answer. But maybe that is okay. This seems to be fertile ground for philosophical inquiry, which might provide insights where they are needed most.
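The clockwork worry can itself be made concrete. A fully deterministic recurrence can produce flips that look just like a fair coin; the sketch below (my own illustration) uses a linear congruential generator with the classic glibc constants, chosen purely for demonstration:

# Deterministic "coin": a linear congruential generator. Nothing here
# is random, yet heads come up about half the time; determinism and
# apparent 50/50 regularity are compatible, at least in this toy sense.
def deterministic_flips(n, x=1):
    heads = 0
    for _ in range(n):
        x = (1103515245 * x + 12345) % 2**31  # classic LCG step
        heads += (x >> 16) & 1                # read one middle bit as the coin
    return heads / n

print(deterministic_flips(100_000))  # close to 0.5, yet nothing was left to chance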
3 comments:
Hi Ken, I understand many highly intelligent scientists and philosophers held to causal determinism. However, multitudes of physics experiments indicate probability distributions in Galton board experiments, celestial mechanics, fluid dynamics, thermodynamics, chemical reactions, nuclear reactions, and quantum mechanics. Nothing in science indicates deterministic causation and everything in science indicates probabilistic causality. The perspective that all evidence of probabilistic causality in nature is ultimately meticulously determined has no supporting evidence. Peace, Jim
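To illustrate the Galton board point raised above, here is a small Python sketch (my own, with invented parameters): each ball makes a series of independent left/right deflections, so its final bin follows a binomial distribution, and the familiar bell-shaped pile emerges from nothing but accumulated chance events.

import random
from collections import Counter

# Galton board sketch: each ball deflects left or right at n_rows pegs;
# the number of rightward bounces picks its bin, a Binomial(n_rows, 0.5).
def galton(n_balls=10_000, n_rows=12, seed=1):
    rng = random.Random(seed)
    bins = Counter(sum(rng.random() < 0.5 for _ in range(n_rows))
                   for _ in range(n_balls))
    for k in range(n_rows + 1):
        print(f"bin {k:2d}: {'#' * (bins[k] // 40)}")

galton()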
I think one way out of the confusion may be found in Popper (1959), “The Propensity Interpretation of Probability”:

Popper, Karl R. (1959). “The Propensity Interpretation of Probability”. The British Journal for the Philosophy of Science, 10(37), 25–42. http://www.jstor.org/stable/685773
And as extended by Frick (1998):
Frick, Robert W. (1998). “Interpreting statistical testing: Process and propensity, not population and random sampling”. Behavior Research Methods, Instruments, & Computers, 30(3), 527–535. https://doi.org/10.3758/BF03200686

From the abstract: “The standard textbook treatment of conventional statistical tests assumes random sampling from a population and interprets the outcome of the statistical testing as being about a population. Problems with this interpretation include that (1) experimenters rarely make any attempt to randomly sample, (2) if random sampling occurred, conventional statistical tests would not precisely describe the population, and (3) experimenters do not use statistical testing to generalize to a population. The assumption of random sampling can be replaced with the assumption that scores were produced by a process. Rejecting the null hypothesis then leads to a conclusion about process, applying to only the subjects in the experiment (e.g., that some difference in the treatment of two groups caused the difference in average scores). This interpretation avoids the problems noted and fits how statistical testing is used in psychology.”
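To make Frick’s “process” reading concrete, here is a sketch (my own, with invented data) of a permutation test: it asks whether the observed difference between two groups could plausibly arise from relabeling alone, a conclusion about the process acting on these particular subjects, with no appeal to random sampling from a population.

import random

# Permutation test sketch: does shuffling the group labels often
# reproduce a difference as large as the one observed? A small p-value
# is read here as a claim about the process in this experiment,
# not about a sampled population (the Frick interpretation).
def permutation_test(group_a, group_b, n_perm=10_000, seed=0):
    rng = random.Random(seed)
    def mean_diff(a, b):
        return sum(a) / len(a) - sum(b) / len(b)
    observed = mean_diff(group_a, group_b)
    pooled = group_a + group_b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        hits += abs(mean_diff(pooled[:len(group_a)], pooled[len(group_a):])) >= abs(observed)
    return hits / n_perm

treated = [5.1, 4.8, 6.0, 5.5, 5.9]  # hypothetical scores
control = [4.2, 4.5, 4.0, 4.9, 4.3]
print(f"permutation p-value = {permutation_test(treated, control):.4f}")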
Thanks for this comment. I had been aware of the 'propensity' argument but haven't read that article for a long time, so I don't know if I would see it as very different from the typical use of 'probability'. I had not known of Frick, so thanks for pointing it out. The issues are important, profound, and in many ways perplexing--to me, at least.