A regular reader, John Vokey, pointed out a very nice recent article in the British Medical Journal, by the arch skeptic Ben Goldacre and David Spiegelhalter, about how we know whether something that seems obvious is actually true. Here is a link to that piece. Goldacre is a widely known writer and commentator, as well as a practicing physician in Britain, who has written a great deal about how we use data and how this affects medicine. He writes about 'Bad Pharma' to try to correct such things (see link below his picture for more).
While it is obvious that exercise is good for your health, and we have some good physiological and physical reasons and mechanisms to back up that statement, in our post we noted that the correlation between health and exercise may not be so simple. For example, you often have to already be healthy in order to exercise, so exercise may be a result rather than a cause of better health.
Goldacre takes something seemingly obvious, that wearing a helmet when bicycling is good for your health (that is, in terms of injuries). He shows that even this is neither so obvious nor so simple. Just to illustrate the point: if you ride more often, or more often in traffic, because you feel safer when you wear a helmet, then even with the same per-mile (or, in Goldacre's UK, per kilometre) risk there will be more rather than fewer cycling-related injuries, because your exposure to risk has grown. Or drivers, seeing that you are helmeted, may cut closer to you. And so on. As John Vokey pointed out in his comment, that brief but to-the-point article is a fine lesson in statistical reasoning.
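To make that arithmetic concrete, here is a minimal sketch of the exposure effect. The per-kilometre risk and the distances ridden are made-up illustrative numbers, not figures from Goldacre and Spiegelhalter's article; the point is only that expected injuries scale with how much you ride.

```python
# Illustrative sketch only: the risk and distances below are hypothetical,
# not figures from Goldacre and Spiegelhalter's article.

risk_per_km = 0.00002        # assumed injury risk per kilometre ridden (hypothetical)
km_without_helmet = 1000     # kilometres ridden per year without a helmet (hypothetical)
km_with_helmet = 1500        # riding more because the helmet makes the cyclist feel safer

injuries_without = risk_per_km * km_without_helmet
injuries_with = risk_per_km * km_with_helmet

print(f"Expected injuries without helmet: {injuries_without:.3f}")
print(f"Expected injuries with helmet:    {injuries_with:.3f}")
# Even though the per-kilometre risk is unchanged, the expected number of
# injuries rises because the exposure (kilometres ridden) has grown.
```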
If something as apparently simple as the risk of cycling with versus without a helmet is not so simple, then how much more complex will other supposed cause-and-effect scenarios be, whether epidemiological, genetic, or evolutionary? A due respect for this complexity should routinely temper conclusions drawn from simple study designs (or, in the case of evolution, from almost pure surmises about natural selection in the distant past).
Yet pressures, and perhaps natural tendencies in our boastful current culture, seem to be doing just the opposite: leading investigators to make ever-quicker and ever more grandiose claims about their findings. This serves self-promotion in general, the pursuit of grant support, and the rush to the media. And science journalists often show little, and sometimes almost no, skepticism or even circumspection about such claims.
The issues we face in science are nowadays very complex and subtle, and we know from even simple examples, such as the one Goldacre used to illustrate the pitfalls of statistical reasoning, that our conclusions can go wrong, even in very simple ways. We try to draw conclusions in science, but we should do so by starting with respect for the complexity of the problem.