We, as people of science, believe -- er, conclude -- that we have a worldview based on weighing the evidence. That's the definition of science. Presumably most MT readers share the same worldview, or you wouldn't be here. As such, we all most likely accept evolution, consider climate change to be real and the result of human action, don't treat our ills with homeopathy, and don't attend fundamentalist churches, or perhaps any church at all. We practice some version of a 'scientific method,' whether based on extensive reading of the philosophy and history of science or a hazy recall of what we were taught in 4th grade. We do this, but we also know that neither we nor anyone else knows everything: we accept imperfect knowledge and tentative theories as such, but we try to go with what the evidence suggests is most likely.
And yet, there's the problem of exactly what constitutes that 'evidence.' Homeopathy may clearly be off our radar, but do you take vitamin C or zinc or echinacea when you have a cold? Maybe your cold went away in 8 days this time rather than the usual 10. Well, all colds go away, so how would you know whether whatever you took helped? Maybe you had a virus that caused an 8-day cold this time, while last time's virus caused a 10-day cold. Well, so maybe you just conclude that it doesn't hurt to take vitamin C, and you'll keep doing it when you have a cold, just in case. Is that an evidence-based conclusion, or is it faith? Or perhaps it's faith that, whether or not there really are any benefits, there aren't any risks either -- a subjective attitude towards evidence.
This is a trivial example, but how will you decide whether to have a mammogram or a PSA test for prostate cancer, or whether to take statins to lower your cholesterol, or which treatment is best for whatever you do get? There is evidence for and against all of these, based on peer-reviewed studies and state-of-the-art methodologies, and choosing to believe one -- based on...? -- means you've chosen to deny the science that supports the others.
How do we know that climate change is real? We didn't go out and collect the evidence ourselves, and most of us, as biologists, aren't equipped to weigh the physics, climatology, and so on ourselves. We don't make such judgments based on whether we happen to have a mild winter where we live. Instead, we have to trust that the people we choose to believe have assessed the evidence and come to sound conclusions. So, yes, we take that on faith, too, even if we know that those who generated the data were fallible humans.
What about genetics? It used to be that you could generally predict where someone would come down on genetic determinism by knowing his/her politics, and vice versa, but that's not so true anymore. To some otherwise left-leaning scientists, genetics is providing some uncomfortable truths -- there are interpopulation genetic traces of histories of geographic isolation. To some, this comes awfully close to suggesting that race is a biological category, not the social construct it has been argued to be in the social sciences for decades. So, to fit this evidence into their political worldview, a lot of geneticists are now saying that it's important to document genetic differences between populations in order to address social disparities in health, or to teach people that differences between populations are only skin deep. This requires finessing underlying structural issues, such as how a 'population' is defined and how far that definition can be pushed -- which in evolutionary terms is largely a matter of sampling to get the data we want.
Are these evidence-based conclusions? E.g., is genetics in fact going to address social disparities in health? There's scant evidence of that to date. Society may address those disparities as a political and economic decision, but doing so will generally not be based on genetics.
And then there's the question of what genetics can tell us anyway. Holly's post last week arguing against fear of knowledge was cheered by a lot of people who believe -- it's so hard to avoid that word -- that what companies like 23andMe have on offer is in fact valuable knowledge. But is that based on evidence? Yes, carrier status is valuable to know so that people can make decisions about their risk of transmitting disease to offspring, but 23andMe, at least as far as we can tell, offers only a handful of such tests. Disease prediction and ancestry are their big sellers, and, well, Ken and I have blogged about what we think of those in the past, though people who believe in these tests don't agree with us, for whatever reason. These are people with whom we'd probably share a lot in terms of our outlook on the world, on science, and most of our understanding of the foundations of genetics, and so on. But not on the quality of the evidence in this instance, and thus on whether direct-to-consumer companies are shilling, or should be selling what they are. This of course doesn't make us right and everyone else wrong. It just means we're either evaluating the same evidence differently or considering different pieces of evidence.
But, to make the point starkly: 23andMe sells an update service concerning risk. That implies that today's risk estimates are not so reliable -- that is, the evidence changes -- and that the revised bulletins you will get will be more accurate and hence worth the fee. But we know that steady improvement in accuracy is not the case: historically, risk estimates jump all over the place from study to study, and biological complexity and environmental change alter the underlying risk even if it could be accurately estimated -- changes that are often greater than the current estimate itself, which makes the question of accuracy largely moot. Is the evidence actually homing in on Truth about risk? It's hard to believe that it is.
What about IQ? That's still an easy one -- left-wingers don't think it, if there is an 'it', is genetic in a major way, and right-wingers do. In this case, there's a ton of evidence on both sides, and what we choose to believe is based on our worldview. That's often the case, but not always. Sometimes, particularly when the issue isn't laden with political overtones the way race and intelligence are, there's so much variation afoot that an issue is underdetermined by the data. In other words, many different ideas can be fitted to the same data comparably well. When there are too many contributing variables, estimates of their effects are simply too imprecise and unstable for solid prediction at present; sample sizes of enormous scale are required even to estimate many of the effects, as the sketch below illustrates.
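To make that last point concrete, here is a minimal simulation -- with purely hypothetical numbers, not real genetic data -- of a trait built from hundreds of tiny effects. With a modest sample, the per-variant effect estimates barely track the true effects; only very large samples begin to pin them down.

```python
# Toy simulation (hypothetical numbers, not real data): a trait shaped by many
# small genetic effects, re-estimated from samples of increasing size.
import numpy as np

rng = np.random.default_rng(0)
n_variants = 500                                 # many contributing variables
true_effects = rng.normal(0, 0.05, n_variants)   # each with a tiny true effect

def estimate_effects(n_people):
    # Simulate 0/1/2 allele counts, build the trait with added noise, then
    # re-estimate each variant's effect by a simple per-variant regression.
    genotypes = rng.binomial(2, 0.3, size=(n_people, n_variants))
    trait = genotypes @ true_effects + rng.normal(0, 1, n_people)
    g = genotypes - genotypes.mean(axis=0)
    t = trait - trait.mean()
    return (g.T @ t) / (g ** 2).sum(axis=0)      # per-variant slope estimates

for n in (500, 5000, 50000):
    r = np.corrcoef(true_effects, estimate_effects(n))[0, 1]
    print(f"n = {n:>6}: correlation between true and estimated effects = {r:.2f}")
```

On a typical run the correlation climbs from roughly 0.6 at n = 500 to nearly 1 at n = 50,000 -- and this is the easy case, where effects are constant and environments don't shift.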
IQ is a good example. There may or may not be population differences in average IQ as measured by some test. Whether or not we agree about the test, the distributions overlap greatly, so the average difference, which is the kind of figure typically cited, doesn't tell the main story. And test results change for many reasons, not least because social and environmental factors are demonstrably very important. We know, of course, of many genetic changes that greatly impair IQ scores (e.g., the causes of Fragile X or Down syndromes). Clearly intelligence involves genes and must therefore be affected by genetic variation. But that does not mean that in general an individual's IQ can be predicted from knowing his/her genotype (indeed, if anything, it may be far better predicted by knowing the parents' average IQ). The issue arises when people's potential is gauged socially by their test scores, or when people are treated in ways that depend on their social or biological group.
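A toy calculation -- with invented numbers, not real test data -- shows how little a difference in group means says about individuals when the distributions overlap:

```python
# Toy illustration (invented numbers): two groups whose score distributions
# overlap heavily despite a difference in means.
import numpy as np

rng = np.random.default_rng(1)
group_a = rng.normal(100, 15, 100_000)   # hypothetical scores: mean 100, SD 15
group_b = rng.normal(105, 15, 100_000)   # hypothetical scores: mean 105, SD 15

# How often does a random member of the lower-mean group outscore
# a random member of the higher-mean group?
p = (group_a > group_b).mean()
print(f"mean difference: {group_b.mean() - group_a.mean():.1f} points")
print(f"P(individual from A outscores individual from B): {p:.2f}")
```

With a 5-point gap in means and 15-point standard deviations, that probability comes out around 0.4: knowing which group someone belongs to tells you very little about the individual.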
A recent episode of the excellent BBC Radio 4 program Analysis, broadcast on Sept 23, addressed this, discussing what they called "moral tribalism" and the way politics affects which evidence we choose to accept; it's well worth any scientist's time to listen to. They point out that "we are all science deniers" when the science is uncomfortable. It is no secret that science is a human social endeavor, and while it may be more objective than others, it is still highly subjective, political, and emotional. That itself is uncomfortable, but worth examining.
...and then there's the problem of all the science that we aren't aware of. We're weighing the evidence, but only the evidence that's shown to us. Ben Goldacre's TED talk about all the negative results that never get published (or otherwise shared with us) makes exactly this point.
Yes, and of course the other hidden component is the things we have not yet learned about and hence can't yet have built into our theories or our ideas about the world. These (like negative results) are hidden from us and can seem like deviations of fact from current theory. In that sense, they provide the wiggle room for denial, speculation, Just-So storytelling, and scientific controversies.