Wednesday, June 4, 2014

What is scientific knowledge and how hard is it to come by?

Two stories we're looking at reflect issues that we care about, and that we think everyone who thinks about how we know what we know should care about.  They have to do with the fragility of knowledge in at least some areas of science, including biomedical and genetic knowledge.  There are some lessons here, and they are perhaps surprisingly indirect.

What's in a name?  A rose by any other name (except Rosenkrantz) would be as sweet
The tweetosphere is resounding with comments on the study that looked at whether hurricane names affect the damage they do.  This is not some group of mystics studying Karma, but real scientists (perhaps with nothing else to do, since the subject would seem rather remote from what people on a payroll ought to be spending their time on).

Hurricane Katrina; Wikimedia

As reported in this PNAS paper ("Female hurricanes are deadlier than male hurricanes," Jung et al.), more people die in hurricanes with female names than in hurricanes with male names.  This sounds wholly silly, since the names are chosen before any hurricanes arise, and, anyway, were selected in different ways over the years.  But according to this paper, hurricanes with female names give people the impression that they are kinder and gentler than those with male names; the Rose that smells sweet.  At least, the authors' interpretation of their result is that when a female-named hurricane is barreling toward you, you may be tempted to stay home rather than evacuate, given that the storm is just a girl.  Or, if you expect that male-named hurricanes are more deadly, the treacherous Rosenkrantz to be suspected and feared, you either evacuate or you stay home, in the basement, with a helmet on, under a cover so you can pretend the monster isn't at your very door trying to huff and puff and blow your house down.

This paper is not being treated with much respect.  At least some critiques of the findings are collected at Ed Yong's post about this story, and they raise important issues about science communication, about the respect that prominent scientific journals deserve (or not), and about methodology.  These critiques cast serious doubt on the validity, or at least the interpretability, of the study, and on the journal's acceptance criteria (if the study's methods or conclusions were only marginally solid, it perhaps belonged in a supposedly lesser journal).

However, our point here is different, and has to do with the many subtle ways in which exposures to all sorts of things epidemiologists or geneticists might never think of, much less be able to measure, could have major effects on our behavioral and physical traits.

What if this study is not poorly done, and it has identified a real effect: at least a handful of University of Illinois undergrads associate gender with relative danger--or power, or aggressivity, or some other trait they connect with a hurricane's force?  Fear-by-name would be a cultural trait, and it might be very different in other age groups, or states, or other cultures--in matriarchal cultures, for example, the effect might be reversed.  This would be an instance of an undeniably real problem with many epidemiological studies: confounding, in which unmeasured variables influence the outcome, but their effect goes unseen because it goes unconsidered.  The identified cause could be indirect in so many ways that it may be 'true' but basically uninterpretable: it won't lead us to the actual primary causes, nor even tell us whether there is a causal link between the correlated risk factors.
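To make the confounding problem concrete, here's a minimal simulation sketch (our own illustration, not anything from the Jung et al. paper): an unmeasured variable drives both a measured "exposure" and the outcome, manufacturing a strong correlation where there is no direct causal link at all.

```python
# A minimal sketch of confounding, with made-up quantities.
# The variable names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical unmeasured confounder, e.g. underlying storm severity.
confounder = rng.normal(size=n)

# Both the "exposure" and the outcome depend on the confounder,
# but not on each other.
exposure = 0.8 * confounder + rng.normal(size=n)
outcome = 0.8 * confounder + rng.normal(size=n)

# Naive analysis: exposure and outcome look strongly associated...
print(np.corrcoef(exposure, outcome)[0, 1])  # roughly 0.4

# ...but removing the confounder's contribution (possible here only
# because we simulated it) leaves essentially no association.
resid_x = exposure - 0.8 * confounder
resid_y = outcome - 0.8 * confounder
print(np.corrcoef(resid_x, resid_y)[0, 1])  # roughly 0.0
```

In a real study, of course, the confounder isn't sitting in the script waiting to be subtracted out; the point is that the naive correlation is entirely genuine, and entirely misleading.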

Whether one accepts or questions the validity of this particular correlation study, how many other such kinds of subtle causation actually do affect our lives?  How many studies' interpretations do such problems undermine?  How can outcomes be predicted in a sea of causes?

What's up, Doc?
The second story, old but good, is a commentary by a physician who's been around the block a few times, observing that older physicians are more likely to admit that neither they, nor anyone, understands biomedical causation as well as they like to claim.
Even as a med student, I was struck by the discrepancy between how much the junior doctors (particularly the interns and second-year residents) seemed to know, and how much the more experienced doctors knew: with few exceptions, the junior doctors seemed to know a lot more.
Students and young physicians have explanations for everything, and seem very confident about the extent and precision of knowledge about cause and treatment and so on.
In contrast, the expert physicians – the doctors who had spent decades of their lives treating particular types of patients, and studying a specific disease – tended to be far less definitive, and much more likely to say, “to tell you the truth, we really don’t know.”
One would expect, if science and the experience of seeing purportedly causal symptoms and the effects of treatment are so fundamental, that senior physicians would be spot-on much more often than their greenhorn juniors.  They have had the experience by which to put two and two together, after all.  Instead, at least the thoughtful and candid seniors have far less confidence than their upstart juniors.  Why is this?

Holly Dunsworth reminded us the other day that there's a name for this phenomenon.  It's the Dunning-Kruger effect, and Wikipedia describes it this way: "Unskilled individuals suffer from illusory superiority, mistakenly rating their ability much higher than is accurate. This bias is attributed to a metacognitive inability of the unskilled to recognize their ineptitude," citing a New York Times piece about a bank robber who painted his face with lemon juice, under the misapprehension that it would make him invisible to surveillance cameras.  As David Dunning (he of the effect) read about this bank robber, who had been promptly caught from surveillance footage, he thought that if the guy "was too stupid to be a bank robber, perhaps he was also too stupid to know that he was too stupid to be a bank robber — that is, his stupidity protected him from an awareness of his own stupidity."

So, medical students don't know yet how much they don't know.  Further, young people have just been taught courses by professors who don't get paid to teach what isn't known, but instead (especially, perhaps, in medical school) teach the truth--the symptoms, their causes, and the miracles of physicians' interventions.  They have a vade mecum drummed into their heads: a code book of diagnosis and treatment.  Indeed, the growth of HMOs and computing power has to a great extent produced a hegemony in which they are trained not to use their individual judgment.

The young are also starting their careers, and they have the excited belief that they'll be implementing this crisp knowledge they've been taught.  Even if they go into academic research, careers are made by declaring what we know and don't know in confidently specific terms.  Give me a big grant and I'll answer this big question.  They are committed to the idea that questions, as they're often posed, actually have answers.  Doubt doesn't sell, so there's no incentive to doubt a simple story line.  Indeed, even some skilled individuals are trained to have the belief, if not the illusion, of their superiority as experts, with degrees and salaries to match, and journalists hanging on their every word.

Both phenomena--the indirect effects of hurricane names, and the over-confidence of those who don't take (or care not to take) seriously enough what science tells us--reflect expensive over-confidence in science and scientists, and not nearly enough humility in the face of the complexity that Nature throws at us.

Wednesday, March 6, 2013

Ignoring our ignorance

Seeing more
Two pieces interesting in their own right herein collide.  Here's a video from the NYT about an enhanced technique that allows the viewer to detect motion that is invisible to the naked eye -- or camera.

This new technology potentially has clinical usefulness, according to the developers, and if so, that's a good thing.  But it's also interesting as a stark reminder of how much we can't and don't see around us.  Granted, this technology primarily makes visible things we can detect with more familiar (and cheaper) techniques (like stethoscopes), but still, new ways of seeing -- your heartbeat right there on your face -- are not only eye-openers, they're also brain-openers.

Seeing less
Here's another reminder that we don't always see what we are looking at.  A paper by Sui Huang in the February issue of BioEssays, "When peers are not peers and don't know it: The Dunning-Kruger effect and self-fulfilling prophecy in peer-review," discusses the problem of peer review by reviewers who don't recognize that they don't understand what they are being asked to evaluate.

The Dunning-Kruger effect is when people don't recognize how much they don't know about a subject, and so rate their own abilities or knowledge higher than is merited.  This has implications in academia, as this paper points out: when peers aren't in fact intellectual peers, they can determine the fate of a paper, or a grant application, without even recognizing that they've missed the crucial points.

The Dunning-Kruger effect is happening in science more and more, according to Huang, for a number of reasons.  First, perhaps a reviewer doesn't recognize the specialized use of a word, and assumes its colloquial or older meaning instead--'chaos' and 'epigenetics' are two examples Huang cites.
Failure to consider the “other” meanings of a term prevents the recognition of one's own ignorance of concepts used in other fields. Second, because of the parceling of science into small kingdoms, authors often are the sole authority in their province with no equal. Finally, the increasingly interdisciplinary nature of research creates an asymmetry of knowledge: the reviewer as a single person faces the daunting combined knowledge of an entire team of coauthors. Thus, statistically, we can safely accept our first claim and assume that on average, reviewers nowadays are with high probability less knowledgeable about the subject matter of a manuscript than its authors.
Of course, there are many other possible reasons that papers or grants don't get adequately reviewed.  Editors don't know everything either, and don't know what potential reviewers don't know, so there's potentially a lot of ignorance determining the shape of other people's careers.  Editors are also overloaded trying to find knowledgeable and willing reviewers, and are in no position to really judge a reviewer's formal qualifications, patience, meticulousness, or, of course, understanding of a particular paper.  Online publishing (or the inclusion of essentially limitless Supplemental information) overwhelms reviewers.  Grant reviewing takes another toll.  And paper authors aren't always clear about what they did or how they did it--and there's a lot of intentional obfuscation, too.  But this is really a topic for another time.

Huang speculates as to why reviewers might not recognize the limits of their knowledge: pride (though this would suggest the limits are recognized, just not acknowledged to others), self-deception, or simple ignorance--if you don't know that 'chaos' has a specialized meaning, there's nothing to recognize.

The Dunning-Kruger effect is always true and always has been
But of course, ignorance of one's own ignorance isn't particular to reviewers.  Aren't we all subject to the Dunning-Kruger effect all the time, bumping up against the limits of our knowledge but drawing conclusions anyway?  The Earth was once flat, the sun once revolved around the Earth, the continents were stationary, evolution wasn't true.  No one ever worries that they don't know as much as people will in 50, 200, or 2000 years, and therefore decides they have no business making observations and drawing conclusions.

It's humbling, and sobering.  Or, it would be, if we weren't all busy ignoring our ignorance.