Thursday, November 29, 2012

Where should reductionism meet reality?

The dawn of empiricism
The march of modern science began, roughly speaking, about 400 years ago, when observational science replaced more introspective, theoretical attempts to characterize the world.  The idea was that we can't just imagine how the world must be, based on ancient and respected thinkers like Aristotle (or the 'authority' of the Bible and church).  Instead, we must see what's actually out there in Nature, and try to understand it.

Designs for Leeuwenhoek's microscopes, 1756; Wikipedia
Another aspect of the empirical shift was the realization that Nature seemed to be law-like.  When we understood something, like gravity or planetary motion or geometry, there seemed clearly to be essentially rigid, universal laws that Nature followed, without any exception (other, perhaps, than miracles--whose existence in a way proved the rule by being miracles).

Laws?  Why?
Why do we call these natural patterns 'laws'?  That is a word that basically means rules of acceptable behavior specified by a given society.  In science, it means that, for whatever reason, the same conditions will generate the same results, ever and always and everywhere.  Why is the universe like that?  This is perhaps a subject for philosophers and speculation, because there is no way to prove that such regularities cannot have exceptions.  Nowadays, we just accept that at its rudiments, the material world is law-like.

What it is about existence that makes this the way things are is either trivial (how else could they be?) or so profound and wondrous that we can do no more than assert that this is how we find them to be.  However that may be, new technologies like the telescope showed that classical thinkers like Aristotle had been wrong to assume the laws were so intuitive that we could know them just by thinking about Nature; we need to find them outside rather than inside our heads.  That way was empiricism.

The idea was that by observing the real world enough times and in enough ways, we could discover the ineluctable regular patterns that 'laws' describe.  Empirical approaches led to experimental, or controlled, observation.  But what should one 'control', and how are observations or experiments to be set up to be informative enough that we could feel we knew enough to formulate the actual laws we sought?  As the history unfolded, the idea grew that the way to see the laws of Nature clearly was to reduce things to the fundamental level at which the laws take effect.  In turn, this led to the development of chemistry and our current molecular reductionism:  if absolutely everything in the material world (the only world that actually exists?) is based on matter and energy, and these are to be understood in their most basic, particulate (or wave-like) existence, then every phenomenon in the world must in some sense be predictable from the molecular level.

The alternative was, and remains, the notion that there are immaterial factors that cause things we observe in the material world.  Unless science can at least define what that might mean, we must reject it; we call it mysticism or fantasy.  Of course, there may be material things we don't yet know about, along with things we're just learning about (like 'dark' matter and energy, or parallel universes), but it is all too easy to invoke them, and almost impossible for that to be more useful than just saying 'God did it' -- useless for science.

If anything, reductionism that assumes that atoms and primal wavelike forces are all that there is could be like saying everything must be explained in terms of whole numbers, assuming that no other kinds of numbers (like, say, 1.005) exist.  But science tries, at least, to explain things as best we can in terms of what we actually know exists, and that, at present, is the best we can do.

But 'observe' at what level?
Ironically, this view of what science is and does doesn't always help.  That is the case for at least two primary reasons.

First, the classical view of things and forces is a deterministic one: with perfect measurement, we could make perfect predictions.  Instead, it is possible or even likely that some things are inherently probabilistic, so that even with perfect observation we can't make perfect predictions.  Coin flipping is not a truly probabilistic example, but it illustrates the point: even if we know which face of the coin is 'up' when we flip it, we can't predict how it will land.  All we can say is something like that half the time it will land with Heads up.
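The coin-flip point can be made concrete with a small simulation.  This is a hypothetical sketch in Python (the function name is my own, not anything from the post): the outcome of any single flip is unpredictable, yet the long-run frequency of Heads is very predictable.

```python
import random

def heads_fraction(n_flips):
    """Simulate n_flips fair coin flips; return the fraction landing Heads."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

random.seed(1)  # fixed seed so the run is repeatable
# No single flip can be called in advance, yet the aggregate frequency
# settles ever closer to 0.5 as the number of flips grows.
for n in (10, 1000, 100_000):
    print(n, heads_fraction(n))
```

This is exactly the 'probabilistic prediction' the post describes: we can say nothing useful about one flip, but quite a lot about ten thousand.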

There is lots of debate about whether things that seem inherently probabilistic, and hence individually not exactly predictable, merely reflect our ignorance (as they do in coin flipping!) or whether electron or photon behavior really is probabilistic.  At present, it doesn't matter: we can't tell, so we must do our work as if that's the way things are.  One positive thing is that the history of science includes the development of sampling and statistical theories that at least help us understand such phenomena.
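The 'ignorance versus true randomness' distinction can be illustrated with a pseudo-random number generator.  This is a minimal sketch in Python using the textbook linear congruential recipe (the names here are my own): the stream is fully determined by its seed, yet it looks statistically random, just as a deterministic world could look probabilistic to observers who don't know the underlying state.

```python
def lcg(seed):
    """Linear congruential generator: completely deterministic,
    yet its output stream looks statistically random."""
    state = seed
    while True:
        state = (1103515245 * state + 12345) % 2**31
        yield state / 2**31  # uniform-looking value in [0, 1)

gen = lcg(42)
draws = [next(gen) for _ in range(10_000)]

# The stream looks random (mean near 0.5, like a fair uniform source)...
print(round(sum(draws) / len(draws), 3))

# ...but re-running with the same seed reproduces it exactly.
gen2 = lcg(42)
print(draws == [next(gen2) for _ in range(10_000)])  # True
```

No statistical test on the output alone can settle whether such a stream is 'really' random, which is the post's point about why, in practice, the distinction doesn't matter for how we work.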

But this means that reductionism runs into problems: if individual events are not predictable, then things of interest that result from huge numbers of individually probabilistic events become inherently unpredictable except, at best, in a probabilistic sense, like calling coin flips.  With coins, though, we know, or can rather accurately estimate, the probability of the only two possible outcomes (or three, if you want to include landing on the rim).  When there are countless contributing 'flips', so that the result is, for example, the product of billions of molecules buzzing around randomly, we may know neither the range of possible outcomes nor their individual probabilities.  In that case, we can really only make very general, often uselessly vague, predictions.
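The coin case shows what is lost when outcomes and probabilities are unknown: with two known outcomes of known probability, statistical theory gives sharp aggregate predictions.  A minimal Python sketch of that known case, using exact binomial probabilities (the names are my own, for illustration):

```python
import math

def binomial_pmf(n, k, p=0.5):
    """Exact probability of k Heads in n flips with Heads-probability p."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# With two known outcomes and known probabilities, the aggregate is sharply
# predictable: most of the probability mass for 1000 flips sits near 500 Heads.
n = 1000
prob_480_to_520 = sum(binomial_pmf(n, k) for k in range(480, 521))
print(round(prob_480_to_520, 2))
```

For billions of molecules with an unknown menu of possible outcomes, no such calculation is available, which is why only vague predictions remain.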
Pallet of bricks; Wikipedia

Second, reductionism may not work because even if we could predict where each photon or electron might be, the organization of the trait we're interested in is an 'emergent' phenomenon that simply cannot be explained in terms of the components alone.  A building simply cannot be predicted from an itemization of the bricks, steel beams, wires, and glass it is made of.

Complexity in this emergent sense is a problem science is not yet good at explaining -- and this applies to most aspects of life; e.g., we blogged about the genius of Bach's music as an emergent trait last week.  It, too, is something we can't understand by picking it apart, reducing it to single notes.  In a sense, the demand or drive for reductionism is a struggle against any tendency toward mysticism.  We say that yes, things are complicated, but in some way they must be explicable in reductionist terms unless a magic wand intervenes.  The fundamental laws of Nature must apply!

Herein lies the rub.  Is this view true?  If so, then one must ask whether our current methods, which were designed basically for reductionist situations, need revision in some way, or whether some entirely new conceptual approach must rise to the challenge of accounting for emergent traits.

This seems to be an unrecognized but currently fundamental issue in the life sciences in several ways, as we'll discuss in a forthcoming post.

1 comment:

James Goetz said...

Hi Ken,

I suppose that radical reductionism supports causal determinism and says there are no genuine stochastic events in the universe, only pseudo-stochastic events, comparable to a Turing machine that merely appears to produce stochastic outcomes while the outcomes are actually deterministic: unpredictable, with a stochastic appearance.

I suppose that all natural emergence is reducible to the laws of physics, but, for example, the initial conditions of the observed universe in no way necessitated many things that emerged in it, such as DNA-based life. I see the universe as, in some way, a roll of dice that could have ended up with another outcome.

I look forward to your forthcoming post :-)