Monday, October 11, 2010

BIG shoes to fill!

China has apparently decided to take mythological beasts seriously, launching a search for the mythical Himalayan Yeti, often conflated with Bigfoot in the US because the mountains of the American West have also been rumoured to be inhabited by the same, or at least some, large ape-like non-human primate.
Chinese researchers have been searching since the 1970s. There have been more than 400 reported sightings of the half-man, half-ape in the Shennongjia area. In the past, explorers have found inconclusive evidence that researchers claimed to be proof of Bigfoot's existence, including hair, footprints, excrement and a sleeping nest, Xinhua reported.
Is there no storytelling that we simply will not believe?

This particular quest has a long (and checkered) history.  There are legitimate Asian (though not North American) fossils of a very large hominid-like primate, Gigantopithecus, that cryptozoologists like to invoke.  There are apparently ancient Chinese manuscripts with references to, and we vaguely remember hearing that they included drawings of, large apes.  There was at least one lunatic anthropologist, Grover Krantz, who had tenure and drew a salary at an otherwise legitimate university (Washington State) and who spent years tracking 'Bigfoot'.  Geneticists in Anthropology departments get reports of Bigfoot sightings -- and requests to do DNA testing -- from the public all the time.

There's a book called The Long Walk, about some WWII prisoners of the Soviet Union who escaped and, through many trials, managed to cross all the way over the Gobi Desert to India.  The book is a personal recounting of the adventure by one of the survivors.  At one point, in a matter-of-fact way, the author describes how, while crossing mountains, the party came upon several large, reddish apes of some sort -- on the ground, not in trees.  The escape party waited until they felt safe before crossing the little valley, skirting these creatures.  The validity of this story has been attacked, but there can be various reasons for that.

Now, all of the supposed physical evidence is as bogus as a $3 bill.  But at what point do we say that there might be some truth worth searching for?  Many cryptozoologists put 2 and 2 together and get 5, which is at best what's going on here.  If there really were such a creature (alive today, that is), the odds are vast that we would have found bones or carcasses, or have pictures from someone who stumbled across them.  Too many people crawl over the earth for this not to be the case--certainly in North America where there are no wilds too wild not to have been explored or settled.

But one thing that keeps these searches going is that new species are being discovered in various parts of the world, some of them unexpected.  These are either the many small critturs, like insects and the like, that teem in the jungles, or deep-sea species that are hard to get at -- nothing so spectacular as a huge man-like ape.  It's always possible, of course, that there are remote refugia.  But such mythical beasts necessarily must be claimed to live in such places, because that's the only way they could have escaped discovery.  Even Loch Ness, though not remote, has its deeps.

At least, the Chinese aren't going to spend US taxpayers' money on this wild-ape chase.  Well, at least not directly--since a lot of the money in China got there because Americans wanted their cheap junk, maybe we're paying for that wasted research, too.

Friday, October 8, 2010

Leave me alone or I'm going home!: The Heisenberg uncertainty principle in evolution and epidemiology

To a non-physicist, the gist of the Heisenberg uncertainty principle, or the observer effect often associated with it, is that studying an object changes the object.  You want to know the position and momentum of a subatomic particle, say, but to find that out you have to probe it with energy, such as a light beam, identifying the position by how the collision with your measurement beam occurs.  That very collision alters the particle's momentum, so you can't know both precisely at once.
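For reference, the usual formal statement of the principle bounds the product of the two uncertainties -- the more precisely you pin down position, the less precisely you can know momentum:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```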

Similar kinds of issues apply to modern-day epidemiology.  We referred to this yesterday in a comment about the effects of maternal drinking during pregnancy on the future of the offspring.  A Heisenberg analogy for epidemiological studies goes like this: if the people being studied change their behaviors, and thus their exposures, because they learn the results of the study of their many contributing risk factors, then you can no longer estimate what the exposure risks will be.

Often, the change in behavior is of a magnitude comparable to, or larger than, the signal that's being studied.  If you stop eating pork because a study says that eating pork gives you a somewhat increased risk of warts (it doesn't--this is just a made-up example!), then the effects of pork-eating will be changed by virtue of the exposure change and the knowledge that this is being studied.  If this happens often enough--as it does with our 24/7 many-channel news reports--then tracking or measuring risks becomes very problematic, except for the really major risk factors (like smoking and lung cancer) which are robust to small changes.  The science and the scientist become part of the phenomenon, not the external observers that they need to be to do the science.  This leads to many of the serious challenges to modern epidemiological, behavioral, educational, political, economic, and similar studies, including those of genetic causation.  And since trivial risk factors are mis-reported in the news as big ones, the signal-to-noise ratio is even less favorable to clear-cut conclusions.

There is a kind of Heisenberg analog in evolutionary terms, too.  If relative fitness--reproductive chances--is affected by both genomic and ecological contexts, and the differences are small, then what happens tomorrow to a given genetic variant is highly dependent on all sorts of environmental or other genotypic changes.  A given variant won't have the same relative effect tomorrow as it did today, and since evolutionary models are about relative fitness, the evolutionary landscape changes.

This becomes Heisenberg-like not because of observer-interference effects in this case, but because the contextual changes can be as great as, or much greater than, the net fitness advantage of a genetic variant.  This means fate-prediction is difficult, and here the observer analog has to do with the screening efficacy of natural selection.  Changes in the frequency of an allele can change its net fitness effect.  When fitness (like electron position?) is not just contextual but essentially probabilistic, anything that affects position (current relative fitness) affects the evolutionary trajectory.  That's one reason evolution is essentially unpredictable except under unusually strong conditions, and in that sense not deterministic as it is viewed in the usual Darwinian concept--especially as put forward by those not versed in evolutionary theory, and that includes many biologists and all the blathersphere who invoke Darwin or natural selection in making pronouncements about society.
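To make the point concrete, here is a minimal sketch -- our own toy model, not anyone's published one -- of an allele whose selective advantage depends on its current frequency and on a randomly fluctuating environment.  Two runs differing only in their random histories can end up in very different places, which is the unpredictability we mean:

```python
import random

def simulate(n_gen=200, p0=0.05, n=500, seed=1):
    """Track the frequency p of allele A in a toy Wright-Fisher
    population where A's selective advantage flips sign with a
    randomly fluctuating environment and shrinks as A becomes
    common -- i.e., fitness is contextual, not a fixed constant."""
    rng = random.Random(seed)
    p = p0
    for _ in range(n_gen):
        env = rng.choice([+1, -1])            # this generation's environment
        s = 0.02 * env * (1 - 2 * p)          # frequency-dependent selection
        p = p * (1 + s) / (1 + s * p)         # deterministic selection step
        # random drift: binomial resampling of 2n gene copies
        p = sum(rng.random() < p for _ in range(2 * n)) / (2 * n)
    return p

# Two histories differing only in their random draws:
print(simulate(seed=1), simulate(seed=2))
```

The numbers here (selection coefficient, population size) are arbitrary; the qualitative lesson -- that trajectory depends on context history, not just on the variant itself -- is the point.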

Thursday, October 7, 2010

The chaff for the grain: the effects of alcohol during pregnancy

A study just published in the Journal of Epidemiology and Community Health reports that light drinking during pregnancy does no harm to the child.  That is, children of mothers who were 'light drinkers' during pregnancy had no increased risk of 'socioemotional problems and cognitive deficits at age 5'.  This is a follow-up of a study that showed no risk of light drinking to three-year-olds in the same sample.

The story on the BBC website says:
Drinking one or two units of alcohol a week during pregnancy does not raise the risk of developmental problems in the child, a study has suggested.
Official advice remains that women abstain completely during pregnancy.
A study of more than 11,000 five-year-olds published in the Journal of Epidemiology and Community Health found no evidence of harm.
There were more behavioural and emotional problems among the children of heavy-drinking women.
Mothers were first interviewed when their infant was 9 months of age (sweep 1).
Questions were asked about mothers' drinking during pregnancy, other health-related behaviours, socioeconomic circumstances and household composition. Sweeps two and three of the survey took place when cohort members were aged approximately 3 and 5 years. At the age 5 years home visit cognitive assessments were carried out by trained interviewers and questions were asked about the cohort members' social and emotional behaviour, socioeconomic factors and the psychosocial environment of the family. 
Now, this raises a number of alarm bells for anyone used to thinking about study design issues.  For one, sensitive subjects are hard to measure accurately, especially by recall interviews.  Sexual behavior, alcohol consumption, and so on are notorious for that problem.  Doctors, for example, routinely double the number their patients give them when asked how many drinks they consume per day.  And alcohol consumption during pregnancy is particularly sensitive.  So whether or not the data are reliable in this study, especially given that it was recall data -- mothers were asked to remember how many drinks they had per week during each trimester of their pregnancy when their infant was already 9 months old -- is one possible problem here.

But perhaps that's not the most important issue here.  Confounding is a potential study killer any time, but this study seems particularly fraught.  The authors do recognize that confounding variables could be a problem -- that many other factors, not just alcohol consumption during pregnancy, could influence socioemotional behavior -- and they try to control for some of these.  But, given how many different things can affect a child's behavior between pregnancy and age 5 (that is, in fact, everything), it's very hard for us to believe, no matter how well possible confounders are controlled for, that it's possible -- or even sensible -- to try to boil down the explanation for behavioral differences between 5-year-olds to the difference between 3 and 5 drinks a week.  Especially given the fragility of the data on alcohol consumption.  And the idea has been prevalent that alcohol can pose a fetal danger even so early that the mother doesn't yet realize she's pregnant; if she quits or cuts back once she does learn, reliance on recall may lose still more accuracy.

That said, let's turn to the results.  Here's the link (if you can get access) to the table that interests us most, the prevalence of socioemotional problems according to whether the mother never drank, didn't drink during pregnancy, was a light, moderate or heavy drinker.  Note that in every problem category -- every category -- the prevalence of the problem under study (conduct problems, hyperactivity, emotional problems, etc.), and the odds ratio, are higher in children of mothers who never drank than in either all children, or all children except for those whose mothers drank heavily during pregnancy.  That's in all the models they tested, from controlling for just one variable, age of child, to controlling for many different variables. Not drinking is a stronger risk factor than almost any amount of alcohol.

Interestingly, and curiously, the authors use not-drinking-in-pregnancy as their reference, rather than never drinking at all.  They could clearly be missing some sociocultural or other environmental confounders by doing that.  Also, the data are reported in terms of 5% significance, with little if any mention of correction for the huge number of tests they have done.  They did find some sorts of trends, which mitigates this concern, but only somewhat.  They report this as a confirmation of earlier findings, but that, too, is a bit of a stretch, since this is an extension of, and hence not independent from, the earlier phases of the same study (it includes the earlier results, essentially).
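To see why the multiple-testing point matters, a quick back-of-the-envelope calculation -- the figure of 40 tests is our hypothetical, not the paper's actual count:

```python
# Suppose (hypothetically) that 40 independent outcome/model combinations
# are each tested at the conventional 5% significance level.
m, alpha = 40, 0.05

expected_false_positives = m * alpha           # 'significant' results by chance
p_at_least_one = 1 - (1 - alpha) ** m          # chance of at least one false positive
bonferroni_cutoff = alpha / m                  # adjusted per-test threshold

print(expected_false_positives)   # 2.0
print(round(p_at_least_one, 2))   # 0.87
print(bonferroni_cutoff)          # 0.00125
```

With numbers like these, two 'significant' findings are expected from noise alone, and some false positives are nearly guaranteed -- hence the need for an explicit correction.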

When there are huge numbers of covariates, and potential confounders, and clearly countless other factors could, in principle, be unmeasured confounders, one has to be circumspect about this study.  Even if the results are correct as reported and interpreted, the net impact of moderate drinking only applies after regressing out the other measured factors.  They may, in aggregate, have a much greater effect on childhood behavior risks than the net effect of pregnancy-imbibing, so that the absolute effect of drinking (even if alcohol is the actual cause itself) is going to be small.

If we can believe the data, and that never drinkers really never drank and so on, this certainly makes one wonder about the effects of unmeasured confounding variables, and raises the question of how much of the reported effect due to heavy drinking really is due to alcohol.

And yet again, about publishing obviously inconclusive studies....  Such studies are very costly, and their findings are likely to change years hence as environmental exposures, confounders, and ways of measuring things change.  It is not a shock to learn that a touch of a drink now and again is not a particular problem, especially if it relaxes the mother, say.  The most important question is not that, but whether very early drinking can lead to birth defects of various sorts.  The reason is that the very early embryo is only a few cells, with most of its differentiation yet to come, so damage to any one cell can have proliferative effects.

But later, during the period of organogenesis and then mainly growth, fetuses are generally much more robust to small external exposures.  There are many more important problems with pregnancy and child health than the rather trivial effects that this very large study, even if all its results are true, deals with.  And the solution to many of the real problems is moderation, and the narrowing of socioeconomic disparities.  But those are deeper problems that society doesn't want to deal with.  It's safer to look at a blizzard of statistical data and talk, with serious demeanor, about the minor things that might be guessed at and perhaps even changed.

Wednesday, October 6, 2010

Air pollution causes diabetes?

Here's a paper in Diabetes Care that we found because it was written up in the New York Times on Monday.  Why?  Proper scientific forewarning?  Scare mongering?

The authors of the paper find a correlation between air pollution (fine particulate matter) and type 2 diabetes.  As the NYT puts it:
A strong link exists between adult diabetes and air pollution, according to a new epidemiological study by researchers at Children’s Hospital Boston. The long-term study builds on previous laboratory studies that have tied air pollution to an increase in insulin resistance, a precursor to diabetes.
The researchers used health, economic, geographical and other data to adjust for known diabetes risk factors, such as obesity, exercise, ethnicity and population density. After controlling for these factors, a strong correlation still emerged between diabetes prevalence and particulate air pollution.
So, those of us who live in polluted cities (such as readers of the NYT) now have to worry about getting diabetes through no fault of our own, just because we live where we do, on top of everything else we have to worry about.  And this one we can't outrun.  Go get a face mask, and hurry!

However, the Times' description of the study isn't quite right -- it doesn't tell the whole story.  From the paper itself:
The relationship between PM2.5 [particulate matter 2.5] levels and diagnosed diabetes prevalence in the U.S. was assessed by multivariate regression models at the county level using data obtained from both the Centers for Disease Control and Prevention (CDC) and U.S. Environmental Protection Agency (EPA) for years 2004 and 2005. Covariates including obesity rates, population density, ethnicity, income, education, and health insurance were collected from the U.S. Census Bureau and the CDC. 
The important fact that this was a county-level study was never mentioned in the Times story.  That is, the fact that the study looked at average diabetes prevalence rates, average obesity, pollution and so on, for whole counties, not individual exposures and covariates.

This is important because of a well-known epidemiological bias called the "ecological fallacy", the problem of attributing group-level characteristics to individuals -- equating group correlations to causation at the level of the individual.  We'd all agree that it was silly to, say, assume that everyone in a voting district was Republican because the county always votes Republican, but in the same way, though harder to intuit, a correlation between high pollution levels and high diabetes rates doesn't tell us anything about any single individual's exposure or duration of exposure to pollution, not to mention whether it caused his or her diabetes.
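The reversal can be startling.  Here is a deliberately contrived numerical example -- entirely made-up data, nothing to do with the actual study -- in which the county with more pollution exposure has the higher diabetes prevalence, yet pooled across counties the exposed individuals are the less diabetic ones:

```python
# Hypothetical individual-level data for two counties, constructed to show
# how a group-level correlation can reverse at the individual level.
# Each tuple: (exposed cases, exposed total, unexposed cases, unexposed total)
counties = {
    "A": (20, 80, 10, 20),   # polluted county: 80% of residents exposed
    "B": (2, 20, 18, 80),    # cleaner county: only 20% exposed
}

# County-level view: average exposure vs. diabetes prevalence.
for name, (ec, en, uc, un) in counties.items():
    exposure = en / (en + un)
    prevalence = (ec + uc) / (en + un)
    print(name, exposure, prevalence)   # A: 0.8, 0.3   B: 0.2, 0.2

# Individual-level view: pool people by their personal exposure status.
exp_cases = sum(c[0] for c in counties.values())
exp_n = sum(c[1] for c in counties.values())
unexp_cases = sum(c[2] for c in counties.values())
unexp_n = sum(c[3] for c in counties.values())
print(exp_cases / exp_n, unexp_cases / unexp_n)   # 0.22 vs 0.28
```

At the county level, more pollution goes with more diabetes; at the individual level, exposed people actually have the lower rate.  A county-level regression alone cannot distinguish these situations.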

There may well be alternative explanations for the correlation.  Perhaps diabetes care is good in that county and a lot of patients moved there, after being diagnosed, to take advantage of the care.  Or any number of other possible scenarios.  And the epidemiologist on the study seems to know this:
“We didn’t have data on individual exposure, so we can’t prove causality, and we can’t know exactly the mechanism of these peoples’ diabetes,” said John Brownstein, an assistant professor of epidemiology at Children’s Hospital Boston and co-author of the study. “But pollution came across as a significant predictor in all our models.  
Now, pollution may in fact cause diabetes.  Our point here is not about causation per se (though the biological link doesn't seem obvious from everything that's known about type 2, adult-onset, insulin-resistant diabetes, we certainly can't say it's not possible).  Our point is that the authors haven't convincingly demonstrated a causative link, and it was premature to rush this to print -- and for the NYT to pick up the story -- without better evidence.

The ecological fallacy is in every first-year epidemiology textbook -- and the authors of this paper even refer to it.  The related fundamental logical error is that this equates correlation with causation -- even at the group level.  When authors know they face these issues, the proper thing is not to publish and call their eager friends at the Times,  but to take the result as an indicator that it may be worthwhile following the possible connection up in a proper study.  But that's not the era we live in.

Tuesday, October 5, 2010

The arrogance of science.

We have not read Sam Harris's new book, the soon-to-be bestseller, The Moral Landscape: How Science Can Determine Human Values, but we have watched his TED lecture on the subject, and read Appiah's review in the Sunday NYT and we're pretty sure we're not likely to read the book.  But of course that isn't stopping us from having something to say about it.

Two things disturb us about Harris's argument.  (If you've read the book and can tell us that reading it would change our minds, please let us know -- we'd love to be wrong on this.)  As we understand it, Harris's argument is both arrogantly imperialistic -- or worse -- and non-Darwinian, which is rather ironic coming from someone arguing that science will out the Truth.  The 'logic' of the argument is to put together intelligent-sounding phrases that have little actual content....especially little scientific content.

Best known as one of the New Atheists, Harris has written previously on how he knows there is no God.  He argues in his new book, and in the lecture, that only science can answer the questions of life's "meaning, morality and life's larger purpose" (as quoted in the review).


Which prompts us to ask, where is existentialism when we need it?  Better yet, let's call it Darwinian existentialism.  If we are truly to take the lessons of Darwinian evolution to heart, we must accept that there is no "larger purpose" to life.  The only purpose of life that we don't ourselves construct is to survive and reproduce.  And even that is not a purpose of life itself; to an arch-Darwinian, a lineage's 'purpose' might just as well be to fail to survive, so that something better can take its place -- or simply to expend solar energy in some particular way.  To argue otherwise is to position humans above Nature, which is precisely what Darwin and his contemporary supporters argued was biologically not so (though even Darwin fell into that ethnocentric trap in The Descent of Man).

Further, if we accept Darwinism in the raw, there is no meaning or morality for science to find.  Meaning, morality, and purpose are constructed by us once we've got food and a mate.  As animals with history and culture and an awareness of both, we imbue our lives with values and morals and meaning, but these are products of the human mind.  That doesn't make them unimportant, or less compelling, or not things to live or die for, but the judgments are our own.  Indeed, people with the same genome can adopt very different senses of meaning -- each equally important and compelling.

According to Harris, science can uncover not only facts, but values, and even the 'right values'.  Just as science can tell us how to have healthy babies, science can tell us how to promote human 'well-being'.  And "[j]ust as it is possible for individuals and groups to be wrong about how best to maintain their physical health," he writes, as quoted in the review, "it is possible for them to be wrong about how best to maximize their personal and social well-being."

What is this well-being of which he speaks?  Who says we or anyone should 'maximize' it, and who are 'we' in this context?  Well-paid professors?  If he meant Darwinian fitness we might pay attention because that's the only objective measure of success that counts in a Darwinian world (unless it's ecosystem expansion, even if at the expense of particular species).  But what he means is something much less empirically tangible -- ironically for someone arguing that science will find it.  He means happiness.  This would be perfectly fine in the realm of psychology or Buddhism or philosophy, but, to our minds, this argument of his is on the same playing field with religious arguments about morality and purpose -- which of course he would not accept -- and even pre-Darwinian.

And, it wasn't that long ago that Science decided that homosexuality wasn't an illness to be cured, or that phrenology wasn't in fact enlightening, or that bleeding patients wasn't a cure -- and of course there are many other such examples.  When what was once True becomes False, what does this say about Science and its ability to find the ultimate Truth? Why would anybody think we're right today....unless it's from ethnocentric arrogance?


The Enlightenment was the age in which the belief grew that modern science could be used to create a better world, without the suffering and strife of the world as it had been.  It was the world of the Utopians.  Their egalitarian views were vigorously opposed by the elitist right ('we're just scientists telling it like it is') in the form of Thomas Malthus, Herbert Spencer, and other strong Darwinians, who opposed the more idealistic thinking.  The Science Can Find the Moral Truth view grew through much of the 19th century, but its consequence, 'modernism', was rejected after science gave us machine guns, carpet bombing, eugenics, the Great Depression, dishonorably wealthy industrial barons, and other delights of the 20th century.  The reaction went under various names, including cultural relativism and anti-scientific post-modern subjectivism.  Unfortunately, like a Newtonian reaction it was equal and opposite: equally culpable, if less bloody, in minimizing any reality of the world.

Cultural relativism, against which Harris rails, is the view that each culture is a thing of its own, and we can't pass judgment about the value of one culture over another, except as through our own culture-burdened egotistical eyes.  That is not the same as saying that we have to like someone else's culture, nor adopt it, nor need it be a goody-goody view that we have to put up with dangers from such culture (like, for example, the Taliban).  But there is no external criterion that provides objective or absolute value.   Racism and marauding are a way of life in many successful cultures; maybe by some energy consumption or other objective measure it's best for their circumstances.  Science might suggest (as it did to the Nazis and Romans and some groups today) that their way is the only way, the best way, Nature's chosen way.


Science may be a path to some sorts of very valuable Truth, and better lives, such as how to build a safe bridge or have painless dentistry (the greatest miracle of the 20th century!).  Regarding many aspects of our culture, we would not trade.  We ourselves would love to attain the maximum happiness that Harris describes.  But it is an arrogance to assume that in some objective sense that is 'the' truth. 

And what if the 'facts' said that to achieve the greatest good for the greatest number (not exactly an original platitude, by the way) people like us (and Harris) had to cut our incomes by a factor of 100, or 1000, for resources to be equitably distributed?  After all, the USSR implemented 'scientific' ideas of maximal good for the masses (communism, Lysenkoism -- to the tune of tens of millions purged, frozen to death in Siberia, or starved because of failed harvests, and more).  Nazi policies were explicitly based on the belief that Aryans were simply better than others, based on warped Darwinian 'truths', and we know what happened.

So, anyone who does not yet realize the danger in the smug self-confidence that one can find the ultimate truth through science is either another potential tyrant in the making, or hasn't read history.

Whether there can be some ultimate source of morality is a serious question, and if it has an answer, nobody's found it yet.  Religion has no better record than materialistic science, nor does secular philosophy.  Nor does Darwin provide that kind of objective value system, especially in humans, where very opposed cultural values can be held by people toting around the same gene pool.

The Darlings of the Smug rise, like mushrooms, in every society.  They are glib, but so are demagogues of other sorts.  They're all potentially dangerous -- as are those for whom they serve as intellectual justification.  Again, that is not to say we should adopt someone else's values, nor that we should hold back from defending ourselves against those who threaten us.

Still, oblivious to these points, Harris argues, as does the far right in the US, that cultural relativism is wrong and should be completely and utterly discounted.  Here are some quotes from his TED talk:
How have we convinced ourselves that every opinion has to count?  Does the Taliban have a point of view on physics that is worth considering?  No. How is their ignorance any less obvious on the subject of human well-being?  The world needs people like ourselves to admit that there are right and wrong answers to questions of human flourishing, and morality relates to that domain of facts.  It is possible for individuals and even for whole cultures to care about the wrong things.  Just admitting this will transform our discourse about morality.
Again, how is this different from, say, the Aryan line which would say we have a right to decide and purge, all in the name of science (and, by the way, it was medical science as well as Darwinism)?  Why is this not the arrogance of imperialism all over again?

When the Taliban, the religious right and the likes of Harris and the New Atheists all believe that only they are the keepers of the Truth, dominion can be attained not by science but by wielding of power alone.

Monday, October 4, 2010

Playing dice

Einstein wrote with respect to the randomness of quantum physics, which troubled him greatly, "I, at any rate, am convinced that He [God] does not throw dice [with the universe]."  Well, the 2010 Ig Nobel prizes were announced on Thursday (they say they'll post a webcast of the event soon).  We blogged about the Biology prize winning paper demonstrating fellatio in bats when it came out, so we'll take this opportunity to point out with pride that it's not just disappointing papers that catch our attention, we do also recognize prize-winning research when we see it.  

But the Management prize caught our attention as well.
MANAGEMENT PRIZE: Alessandro Pluchino, Andrea Rapisarda, and Cesare Garofalo of the University of Catania, Italy, for demonstrating mathematically that organizations would become more efficient if they promoted people at random. 
REFERENCE: “The Peter Principle Revisited: A Computational Study,” Alessandro Pluchino, Andrea Rapisarda, and Cesare Garofalo, Physica A, vol. 389, no. 3, February 2010, pp. 467-72. 
Those of us over a certain age remember the splash that The Peter Principle made when it first came out in 1969.  Its author, Laurence J. Peter, explained in a way that made complete sense why we were destined to be forever surrounded by maddening incompetence: as the prize-winning paper puts it, 'Every new member in a hierarchical organization climbs the hierarchy until he/she reaches his/her level of maximum incompetence' -- after which, of course, they are promoted no further.  Perplexing, but all too familiar.
 

As Pluchino et al explain,
Despite its apparent unreasonableness, such a principle would realistically act in any organization where the mechanism of promotion rewards the best members and where the mechanism at their new level in the hierarchical structure does not depend on the competence they had at the previous level, usually because the tasks of the levels are very different to each other. [In the paper] we show, by means of agent based simulations, that if the latter two features actually hold in a given model of an organization with a hierarchical structure, then not only is the Peter principle unavoidable, but also it yields in turn a significant reduction of the global efficiency of the organization. 
So, in the worst of all possible worlds (which sounded eerily familiar to many in 1969), most positions in most organizations are filled with the person least able to carry out the required responsibilities.  


Did this Ah-ha! realization change the world?  Of course not -- we need not point out that the economic fiascos of the past two years are perfect evidence of this.  No doubt Pluchino et al. had this in mind as they explored this issue further:

Within a game theory-like approach, we explore different promotion strategies and we find, counterintuitively, that in order to avoid such an effect the best ways for improving the efficiency of a given organization are either to promote each time an agent at random or to promote randomly the best and the worst members in terms of competence.
So, the Peter Principle happens because of the widespread, perhaps even universal assumption that if an employee excels at a job at one level, s/he will excel at the job on the next rung of the organizational hierarchy, even if it actually requires very different skills.  This is just common sense, right?  

But the problem is, as Pluchino et al. point out, that "common sense in many areas of our everyday life, often deceives us."  To demonstrate just this, they ran mathematical simulations of what happens to organizational efficiency when the most competent, the least competent, or a random selection of employees are promoted.  They found that random selection -- or, equivalently, promoting a mix of the most and the least competent -- yielded the highest efficiency.
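Pluchino et al.'s actual model is richer than anything we can reproduce here, but the flavor of an agent-based test can be sketched in a few lines.  Everything in this sketch -- the level sizes, the competence scale, the responsibility weights, the backfilling rule -- is our own made-up toy, not the paper's:

```python
import random

def efficiency(strategy, seed=0, steps=2000, sizes=(20, 10, 5)):
    """Toy agent-based sketch of the idea: a three-level pyramid in
    which competence at a new level is drawn afresh (the 'Peter
    hypothesis'), compared under two promotion strategies."""
    rng = random.Random(seed)
    # Initial competences, uniform on a 0-10 scale, one list per level.
    org = [[rng.uniform(0, 10) for _ in range(n)] for n in sizes]
    for _ in range(steps):
        lvl = rng.randrange(1, len(sizes))       # a vacancy opens above level 0
        slot = rng.randrange(sizes[lvl])
        below = org[lvl - 1]
        if strategy == "best":                   # the common-sense rule
            i = max(range(len(below)), key=lambda j: below[j])
        else:                                    # promote at random
            i = rng.randrange(len(below))
        org[lvl][slot] = rng.uniform(0, 10)      # Peter: new competence is fresh
        below[i] = rng.uniform(0, 10)            # a new hire backfills the slot
    weights = (1, 2, 4)                          # responsibility grows with level
    score = sum(w * sum(level) for w, level in zip(weights, org))
    return score / sum(w * len(level) for w, level in zip(weights, org))

print(efficiency("best"), efficiency("random"))
```

Under the Peter hypothesis the promoted agent's new competence is a fresh draw either way, so the 'best' strategy mainly drains the lower levels of their best workers -- which is the paper's counterintuitive mechanism in miniature.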

Fine, so they've solved the global efficiency problem, and the world will surely take notice.  But let's take this back around to science -- sidestepping any possible effects of The Peter Principle in academia, which isn't the reason we're interested in this paper.  Instead, to us this paper relates to the general problem of determining cause and effect, something we write about here a lot.

Science, another human endeavor, is just as loaded with incorrect assumptions about cause and effect as business, of course.   It isn't just that science's knowledge is always limited; it's that we cling to things we have reason to believe are not correct, because it would be inexpedient of us to acknowledge that.  This has to do with vested interests, career momentum, and so on.

We also cling to deeper beliefs: for example, that things simply must have a cause, and that if that's so, then it's only technology and the like that keeps us from identifying it.  We just do not like to accept randomness any more than we absolutely have to.  Mendelian transmission is an example where, with some exceptions, we do accept limited predictability.  But we fight it:  many if not most evolutionary biologists, and hanger-on people of all sorts who invoke 'evolution' or Darwin's name to advance some favorite point of view, simply do not want biological traits to be affected by chance. They want them to be predictable from genes.  But we know this is true only to a limited extent.

Grant reviews and funding decisions, exam scores and grades, promotion and tenure reviews, and many other aspects of academic and scientific life are in large part random.  Yet we toil to make them seem carefully and critically evaluated.

Another thing all of this shares is a reliance on 'experts' -- even though we know that experts are as often wrong as right on many of the most consequential decisions, be they whether to go to war, how to regulate economies, or how to set science funding policy.

Perhaps it's part of the human condition to deny things we know are true, to assume the world is more causally knowable than it really is, and to hope the bombs we release in the process don't fall on us.

In a serious sense, while we know this is a problem, it is not at all clear what to do about it.  Society, science, universities, companies, and so on have to act and make decisions.  How could that be done better than we do it now?   It sounds funny to suggest it could be done at random -- funny enough to be worthy of an Ig Nobel prize -- but what serious, applicable lessons can be gleaned from that?

Friday, October 1, 2010

Do science lovers know the most about science?

All right everybody. Here's an unsatisfying attempt (read: failure) to brilliantly cap off this week's buzz....

All the rage this week has been about how people without religion know the most about religion and about how religious folks know relatively little about religion.

Since this has quickly become part of the greater Religion and Science discourse, the natural next question, at least to me, is this:

Do the people who trust science the most (some called this "faith" in science), actually know the most about it?

Or, as we saw in the religion poll this week, do people who have the least "faith" in science know the most about it?

I think we'd all be very surprised if the answer to the second question were yes, but the first question is not as simple, because there are certainly people who fully support science without looking into any of it for themselves.  Everyone knows someone like this.  Sometimes they're completely reasonable.  Other times their embrace of science is so enthusiastically wide and undiscerning that they support pseudoscience as well: my friend who thinks aliens built the pyramids, yours who buys the vibration-laden sugar pills at Whole Foods to treat headaches.

To my chagrin, this poll and its results published in Scientific American this week have absolutely no bearing on our questions. Out of all the topics they listed, people trust scientists about evolution above anything else. It's interesting, but be forewarned... they polled readers of Scientific American and Nature. Guess how many scientists and science-minded people are in their sample population?

Hmmm. Wonder how much a randomly sampled population of Americans actually trusts science and scientists? I'm guessing it's a little less than these results suggest.

There is nothing in this report about whether or not the science trusters are informed or knowledgeable compared to those who don't trust science.

Anyway, I hope that if you know of any links that have answers to our question (the question in the title), you'll post them in the comments.

Or, instead, you may choose to rant about how supporting science is not a proper analogy to being religious. (My personal view.)