This is still debated -- that is, is randomness just a mathematical concept whose real-world appearance is only an illusion born of our lack of sufficient data about things, or could some things be really, truly, no-kidding random? Quantum physics is held by many to be the only area in which the latter may be the case; rolling dice and flipping coins, in this view, only appear random. But even if quantum effects are truly random, many argue that they are so small and so numerous that they even out and simply have no bearing on things at the monstrous scale of animals and plants, and can be ignored.
Randomness is usually viewed as just a matter of sampling and experimental measurement error. In the long run, with big enough samples, things will 'even out'. That is, this kind of randomness is just statistical 'noise' that we have to deal with in sampling the real world to understand it.
But in evolution and genetics, since the next generation emerges only from the current one, and what is dead and gone can't come back, randomness (if it exists) can have a permanent effect.
Just the other day we heard an otherwise sophisticated evolutionary geneticist say, of genome-mapping searches for evidence of adaptation, that the problems we face these days are just a matter of using the right statistics (i.e., to detect what's significant). More amazingly, we heard that at least one statistics instructor here at Penn State (where the statistics program has long been a leading one) told a student that the .05 significance cutoff is somehow fundamentally meaningful, rather than a wholly conventional, subjective, arbitrarily chosen criterion for making decisions about evidence.
Using and properly interpreting the right statistics is certainly important, and failure to do that is responsible for a lot of problems in genetics and evolutionary interpretation, as it is in many other areas of life and of science.
But in areas like GWAS and genome-scale analysis, the problem is not all in the statistics; it's in the phenomena themselves. Even when nothing causally genetic is going on, genome-scale analysis with open-ended, unconstrained data mining is almost guaranteed to find something that will appear 'significant'. And we know, theoretically and empirically, that important evolutionary things can be going on yet not be detectable. In fact, we can't actually prove that 'nothing causally genetic is going on', either!
[Figure: GWAS Manhattan plot]
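The multiple-testing point can be seen in a toy simulation (our illustration, not from the post; the marker count and cutoff are arbitrary): scan enough purely random 'markers' under the null, and a predictable crowd of them will clear the conventional .05 bar by chance alone.

```python
import random

random.seed(1)

n_tests = 100_000  # e.g., markers tested in a hypothetical genome scan
alpha = 0.05       # the conventional significance cutoff

# Under the null -- nothing causally genetic going on -- each test's
# p-value is uniform on [0, 1].
p_values = [random.random() for _ in range(n_tests)]

hits = sum(p < alpha for p in p_values)
print(hits)  # roughly alpha * n_tests = 5000 'significant' results from pure noise
```

With no real signal at all, roughly five percent of the tests come out 'significant', which is exactly why open-ended scans need correction for multiple testing rather than better single-test statistics.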
This program points out what we said yesterday about astrology: if the world is truly deterministic, then everything is so connected to everything else -- as is often said, the universe is just a clockwork phenomenon -- that everything, in a sense, can predict everything else. If that is actually true, then Darwinian evolution, premised on the idea that some genetic variation has a greater chance of success (that's how Darwin himself phrased it, around 40 times, in the Origin of Species), is a sham: because it's all predictable. The poor rabbit is simply destined to be eaten; it is not that its slowness merely reduces its chance of escaping the wolf.
Think how arrogant we so often are in making strong assertions despite the great data limitations that we have, and in some ways cannot completely overcome. This is especially important, in a subtle way, because modern science largely rests on concepts of statistical significance and formal 'hypothesis testing', applied to processes that are either probabilistic as far as we can tell, or that we are forced to study through sampling we hope is appropriately structured (random?). And much of Nature could be entirely non-random, even controlled by very simple processes, and yet appear random by every known test. For example, as the BBC discussion mentions, the sequence of digits of pi (the circumference divided by the diameter of any circle in the universe) is produced by a totally non-random process, yet is indistinguishable from randomness.
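A simple way to see a deterministic process mimicking randomness, without computing pi's digits, is the chaotic logistic map x → 4x(1−x); this is our own illustrative sketch, not an example from the BBC programme. Every step follows exactly from the last, yet coarse-graining the orbit into 'coin flips' gives heads about half the time, like a fair coin.

```python
# The logistic map x -> 4x(1-x) is completely deterministic: each value
# follows exactly from the previous one. Yet its orbit looks random.

x = 0.1234  # any seed in (0, 1) away from the fixed points
bits = []
for _ in range(100_000):
    x = 4.0 * x * (1.0 - x)
    bits.append(1 if x > 0.5 else 0)  # coarse-grain the orbit into coin flips

frequency_of_ones = sum(bits) / len(bits)
print(frequency_of_ones)  # close to 0.5, as for a fair coin
```

A frequency test alone cannot tell this sequence from genuine coin flips, even though not a single random number was used to make it.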
Indeed, our belief in deterministic laws of Nature may often lead us to assume determinism, to treat the probabilistic aspect of data as just a practical impediment, and to use 'significance tests' (with arbitrarily agreed-on cutoff values) as if they proved the point, when the process itself could be inherently probabilistic.
It is very sobering.
NOTE: Actually, Galileo did many experiments. As described in his Dialogues Concerning Two New Sciences (1638), he demonstrated laws of motion and gravity by rolling a ball down inclined planes of various steepness. He had to do that because time measurement, done with a water clock, was not precise enough for dropping things off the Leaning Tower. But he also recognized that he needed replication, and repeated these experiments 100 times: an acknowledgment of random measurement error.
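Galileo's 100 repetitions make statistical sense: averaging independent noisy readings shrinks the standard error by the square root of the sample size, so 100 replicates cut it tenfold. A minimal sketch, with made-up numbers for the 'true' descent time and the water-clock error:

```python
import random
import statistics

random.seed(2)

true_time = 3.0  # hypothetical 'true' descent time (arbitrary units)
noise_sd = 0.5   # assumed water-clock measurement error

def measure():
    # One noisy water-clock reading of the ball's descent time
    return random.gauss(true_time, noise_sd)

one_reading = measure()
mean_of_100 = statistics.mean(measure() for _ in range(100))

# The mean of 100 replicates has standard error noise_sd / sqrt(100),
# ten times smaller than a single reading's error.
print(abs(one_reading - true_time), abs(mean_of_100 - true_time))
```

The averaged estimate lands much closer to the true value than a typical single reading, which is precisely what replication buys against random measurement error.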