Thursday, November 27, 2014

Some Holiday Thoughts

Well, it's Thanksgiving week, and it's quiet here in town, a time for reflection.

A bit less thankful, perhaps

In a general sense, Thanksgiving is a rather odd holiday. Why can we act so 'thankfully' when what we have is a bounty our forebears took by force from the original Americans, and when what we have is immorally excessive compared to much of the world? How some can think we're thanking God for providing us with more than others (even within our own nation) is also rather curious. Why aren't we saying a few words of guilt and culpability, rather than thanks, for the inequity of which we are the beneficiaries? Probably even my question shows that I'm not thankful enough for the cosmic level of luck that has given me what it has.

Well, I'm not a total ingrate: if thanks is not the right emotion, I'm very happy for the meal, the warm comfort, company, family, friends, and good cheer. But it comes with more than a tinge of guilt for our collective lack of perspective.

In any case, we like to take holiday times to think about things other than genetics and evolution, or, perhaps about these same things differently. One of my favorite poets is William Wordsworth, because of his pastoral, contemplative rather than technical way of viewing Nature. Here are a couple poems that seem appropriate this time of year:

Wordsworth: contemplating Nature (hopefully not a headache from over-eating)

Nuns fret not at their convent’s narrow room
Nuns fret not at their convent’s narrow room;
And hermits are contented with their cells;
And students with their pensive citadels;
Maids at the wheel, the weaver at his loom,
Sit blithe and happy; bees that soar for bloom,
High as the highest Peak of Furness-fells,
Will murmur by the hour in foxglove bells:
In truth the prison, into which we doom
Ourselves, no prison is: and hence for me,
In sundry moods, ’twas pastime to be bound
Within the Sonnet’s scanty plot of ground;
Pleased if some Souls (for such there needs must be)
Who have felt the weight of too much liberty,
Should find brief solace there, as I have found.

And here's one you probably read in school, but is worth reading again, and contemplating:

The world is too much with us
The world is too much with us; late and soon,
Getting and spending, we lay waste our powers;—
Little we see in Nature that is ours;
We have given our hearts away, a sordid boon!
This Sea that bares her bosom to the moon;
The winds that will be howling at all hours,
And are up-gathered now like sleeping flowers;
For this, for everything, we are out of tune;
It moves us not. Great God! I’d rather be
A Pagan suckled in a creed outworn;
So might I, standing on this pleasant lea,
Have glimpses that would make me less forlorn;
Have sight of Proteus rising from the sea;
Or hear old Triton blow his wreathèd horn.

We hope anyone who happens to dial into Mermaid's Tale this holiday is safe, happy, and
able to reflect on that good fortune.

Wednesday, November 26, 2014

A Leash of Hemp: Do our brains trick us into thinking we're good at sizing-up strangers?

I wish it wasn't so timely, but maybe this re-post of my reflection on how easy it seems to be to size up strangers is worth a read ...


Running is a precious 30 or so minutes for me. It's a drug. If the pace and the light and the rock'n'roll in the earpods are just right, it's god. But today while I ran on the old railroad-turned-trail in our neighborhood, everything was more or less ungodly, more or less routine. As always, I passed by many walkers, cyclists, runners, dogs, cats, squirrels going the other direction. Most humans say hi or wave. I give the peace sign. I live in Peace Dale. I like words and peace and cute.

And as I'm apt to do, I meditate on the people I see, sizing them up, giving them roles, stories, assessing their general vibe, riffing on them until new thoughts hijack those neurons, which is frequent on a run. And I want to tell you about a specific people-passing incident from this morning because it illustrates, in that fleeting, mundane snapshot of a throwaway moment, something much more profound, not just about human nature but about how we perceive it, which folds back on the nature of human nature itself. (whoa)

He's about 50 yards away when I first notice him:  White, in his twenties, gray hooded sweatshirt and jeans, hood up over his hair, walking a dark-brown pitbull-looking breed. 

And I think to myself, I bet his dog's leash is made of hemp.

To me, all those traits I'm observing together scream 'leash of hemp.'

And sure enough, as the space between us narrows, I can see it's a leash of hemp. 

Naturally I'm thinking something like, See? See, politically correct world? Stereotypes can be true. Some biological and cultural traits cluster together predictably. Calm down everybody. It's just human nature! We vary predictably in many ways.

And nobody can argue against the fact that some traits do cluster and that you can predict some things about people based on things like their sex, their clothes, their age, their gait, their dog breed, ... 

Okay, cool. But whoa whoa whoa.

Did I really use all those observations to successfully predict the leash was made of hemp? Did I really just validate that stereotype about 'hemp dog leash people?' Did I really just support that theory (stereotypes are real, baby) by forming a hypothesis that ended up being correct?

Of course it could have been a coincidence, my correct guess. There are only so many kinds of dog leashes: Leather, acrylic, and hemp are the main ones I think. So, given a leash exists, the odds are pretty good that I'll guess what kind it is, regardless of what the guy or the dog look like who are tethered together by it. 
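The base-rate arithmetic bears that out. A minimal sketch, with entirely made-up leash-material frequencies: a guesser who knows nothing about the man or the dog still does respectably, just by playing the odds.

```python
# Back-of-the-envelope sketch with assumed (invented) leash-material frequencies:
# how often would a guesser be right knowing only rough base rates?
leash_priors = {"leather": 0.4, "acrylic": 0.4, "hemp": 0.2}

# Strategy 1: always guess the most common material.
best_guess_accuracy = max(leash_priors.values())  # 0.4

# Strategy 2: guess each material in proportion to its frequency
# ("probability matching"); accuracy is the sum of squared frequencies.
matching_accuracy = sum(p * p for p in leash_priors.values())  # 0.36

print(best_guess_accuracy, matching_accuracy)
```

Either way, a "correct" guess in one encounter tells you very little about the guesser's insight.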

But that's not what I meant by my doubt. I'm wondering about something other than coincidence, something quite sinister.

I'm wondering whether I actually predicted the leash was made of hemp or if my brain tricked me into thinking I did.

I mean, don't you think it's suspicious that I caught myself predicting what the leash would be made of? Doesn't that seem weird?

I could, just as easily, have tried to guess his shoe or jeans brand, whether he was wearing a watch or not, if he was going to smile at me or not, harass me or not. Many things that I could see or experience upon closer proximity were available for prediction and, instead of any of those things, I chose to predict the material of the dog leash.

Why? What was my consciousness up to?

Let's cut to the punchline: I don't think I predicted anything at all. Right after the whole thing went down, I caught my consciousness red-handed.[1] 

From as far away as I was, I could still have quite easily perceived that the leash draped and swung like hemp, and unlike leather or acrylic. I could also have very easily perceived the pale color, unlike the dark colors that acrylic and leather usually are.

So I could have known it was hemp without yet knowing it, consciously, since there's a delay between perception and consciousness.

And instead of coming to know 'hemp' from my immediate perception of it, I think my consciousness narrated my experience in such a way that I was predicting something I already knew! My consciousness made me believe that I had a clever hunch, a hunch that was consistent with a stereotype of this 'hemp dog leash person.'

Put another way, my consciousness was taking creative credit for the observations that the perceptive parts of my brain were already processing. Sounds a bit like some people you work with, doesn't it?

All this is happening in split seconds. It's not hard to imagine how my consciousness--since it's already very busy trying to constantly make sense of the world--could garble this input and the timing of it by inserting a narrative. It's not hard to imagine, given the delay between perception and consciousness, how my mind could mangle the more likely true story which was simply...

input-input-input (ad nauseam)

by editing it into a story of ...


OK. Besides how fascinating this cognitive delay is, with all its relevance for studies of ESP, pre-cognition, magic tricks, and falling for them. Besides all the implications for the existence (or not) of free will ... since, if your consciousness is delayed, are you really deciding your life or are you just experiencing life and narrating it as if you're deciding it?

Aside from all those fascinating supernatural and existential implications, this is the kicker: This illusion stemming from our slow and overbearing consciousness probably affects how we relate to other human beings who we encounter every day.

See, the leash of hemp boosted my confidence in two things that are already boosting one another to begin with:

1. I think I'm good at predicting human nature.

2. I think human nature, especially in stereotyped and categorical terms, is predictable.

These two things may be true in many regards. But my experience, my little experiment, my leash of hemp, led me to believe that 1 and 2 are stronger than they are and that they're realer than they probably are, in reality.

And a leash of hemp moment is just so exhilarating, at least for a scientist like me, when it's about traits untied to value, like leash material. It's similar to the feeling you get once you're so familiar with the chimpanzees you're observing that you can predict, given a set of circumstances or time of day or whatever, what they'll do next, where they'll walk, climb, who they'll play with or groom. It's a real high.

But how about when those links, those predictions, are about value-laden traits like beauty, intelligence, sexual orientation, religious belief, violence? I'm more likely to believe stereotypes and my own abilities to predict human nature when it comes to these much more sensitive or much more volatile issues simply because I guessed that a dog's leash was hemp while running this morning.

A leash of hemp's no big deal when it's just about a leash of hemp,[2] but taking a 'leash of hemp' about someone who's wearing a head scarf or a short skirt, about someone who speaks with a Southern accent or an educated accent, who goes to temple or to church, who has darkly or lightly pigmented skin? Maybe that's when we should humble our consciousness. Maybe that's when we should remind it that perception was there first. Maybe help it question whether it's really so smart. After all, maybe it already saw, heard, smelt what it so cleverly claims to have sensed, believed, predicted. Maybe it doesn't deserve the credit it's taking, thereby encouraging itself to apply its methods to other situations that require more nuance, more sensitivity, more observation, more time, actually getting to know a person, that whole human connection thing, you know?

It might feel like it, but we're not, objectively, human nature experts. We can be too easily tricked by our delayed and overbearing consciousness. We're too quick to be seduced by these split-second cognitive events that validate our intellect, our experience, and our beliefs all at once.[3]

Knowing this, being conscious of the lag between perception and consciousness, catching one's mind in the act, why is it still so hard to change our minds going into the future?

Maybe if leashes of hemp were more ubiquitous they'd serve as a nice gimmicky reminder about these illusions--dampening our habit of skyrocketing all the way up to "human nature" from a dog's leash. But would that really do any good? After all, skin colors, eye shapes, skirt lengths, accents... these things are ubiquitous and yet they clearly aren't gimmick enough to humble our consciousness, to help us remain skeptical of what each of us knows so well about human nature just from our itty bitty n of 1.


Related reading...

[1] I thank running for opening my brain to such a thing--a good hypothesis considering how my most satisfying thoughts usually come during the 30 minutes of the day that I am pushing my body down the running trail.

[2] Unless you've got something against hemp, hippies, dogs, pitbulls, white people, men, hoodies.

[3] Implicit here is an assumption that all humans suffer from this delayed consciousness, that it's part of "human nature." oy, is it?

Tuesday, November 25, 2014

Let's Abandon Significance Tests

We thought we'd re-run the first blog post Jim Wood wrote (or read), from May 2013, on significance testing.  It was an excellent post the first time, and it's an excellent post again, with a message that doesn't get old.

By Jim Wood

It’s time we killed off NHST.

Ronald Aylmer Fisher


NHST (also derisively called “the intro to stats method”) stands for Null Hypothesis Significance Testing, sometimes known as the Neyman-Pearson (N-P) approach after its inventors, Jerzy Neyman and Egon Pearson (son of the famous Karl). There is also an earlier, looser, slightly less vexing version called the Fisherian approach (after the even more famous R. A. Fisher), but most researchers seem to adopt the N-P form of NHST, at least implicitly – or rather some strange and logically incoherent hybrid of the two approaches. Whichever you prefer, they both have very weak philosophical credentials, and a growing number of statisticians, as well as working scientists who care about epistemology, are calling – loudly and frequently – for their abandonment. Nonetheless, whenever my colleagues and I submit a manuscript or grant proposal that says we’re not going to do significance tests – and for the following principled reasons – we always get at least one reviewer or editor telling us that we’re not doing real science. The demand by scientific journals for “significant” results has led over time to a substantial publication bias in favor of Type I errors, resulting in a literature that one statistician has called a “junkyard” of unwarranted conclusions (Longford, 2005).

Jerzy Neyman
Let me start this critique by taking the N-P framework on faith. We want to test some theoretical model. To do so, we need to translate it into a statistical hypothesis, even if the model doesn’t really lend itself to hypothesis formulation (as, I would argue, is often the case in population biology, including population genetics and demography). Typically, the hypothesis says that some predictor variable of theoretical interest (the so-called “independent” variable) has an effect on some outcome variable (the “dependent” variable) of equal interest. To test this proposition we posit a null hypothesis of no effect, to which our preferred hypothesis is an alternative – sometimes the alternative, but not necessarily. We want to test the null hypothesis against some data; more precisely, we want to compute the probability that the data (or data even less consistent with the null) could have been observed in a random sample of a given size if the null hypothesis were true. (Never mind whether anyone in his or her right mind would believe in the null hypothesis in the first place or, if pressed on the matter, would argue that it was worth testing on its own merits.)   

Egon Pearson
Now we presumably have in hand a batch of data from a simple random sample drawn from a comprehensive sample frame – i.e. from a completely-known and well-characterized population (these latter stipulations are important and I return to them below). Before we do the test, we need to make two decisions that are absolutely essential to the N-P approach. First, we need to preset a so-called α value for the largest probability of making a Type I error (rejecting the null when it’s true) that we’re willing to consider consistent with a rejection of the null. Although we can set α at any value we please, we almost inevitably choose 0.05 or 0.01 or (if we’re really cautious) 0.001 or (if we’re happy-go-lucky) maybe 0.10. Why one of these values? Because we have five fingers – or ten if we count both hands. It really doesn’t go any deeper than that. Let’s face it, we choose one of these values because other people would look at us funny if we didn’t. If we choose α = 0.10, a lot of them will look at us funny anyway.

Suppose, then, we set α = 0.05, the usual crowd-pleaser. The next decision we have to make is to set a β value for the largest probability of committing a Type II error (accepting the null when it’s not true) that we can tolerate. The quantity (1 – β) is known as the power of the test, conventionally interpreted as the likelihood of rejecting a false null given the size of our sample and our preselected value of α. (By the way, don’t worry if you neglect to preset β because, heck, almost no one else bothers to – so it must not matter, right?) Now make some assumptions about how the variables are distributed in the population, e.g. that they’re normal random variates, and you’re ready to go.

So we do our test and we get p = 0.06 for the predictor variable we’re interested in. Damn. According to the iron law of α = 0.05 as laid down by Neyman and Pearson, we must accept the null hypothesis and reject any alternative, including our beloved one – which basically means that this paper is not going to get published. Or suppose we happen to get p = 0.04. Ha! We beat α, we get to reject the null, and that allows us to claim that the data support the alternative, i.e. the hypothesis we liked in the first place. We have achieved statistical significance! Why? Because God loves 0.04 and hates 0.06, two numbers that might otherwise seem to be very nearly indistinguishable from each other. So let’s go ahead and write up a triumphant manuscript for publication.
Significance is a useful means toward personal ends in the advance of science – status and widely distributed publications, a big laboratory, a staff of research assistants, a reduction in teaching load, a better salary, the finer wines of Bordeaux…. [S]tatistical significance is the way to achieve these. Design experiment. Then calculate statistical significance. Publish articles showing “significant” results. Enjoy promotion. (Ziliak and McCloskey, 2008: 32)
This account may sound like a crude and cynical caricature, but it’s not – it’s the Neyman-Pearson version of NHST. The Fisherian version gives you a bit more leeway in how to interpret p, but it was Fisher who first suggested p ≤ 0.05 as a universal standard of truth. Either way, it is the established and largely unquestioned basis for assessing hypothesized effects. A p of 0.05 (or whatever α value you prefer) divides the universe into truth or falsehood. It is the main – usually the sole – criterion for evaluating scientific hypotheses throughout most of the biological and behavioral sciences.

What are we to make of this logic? First and most obviously, there is the strange practice of using a fixed, inflexible, and totally arbitrary α value such as 0.05 to answer any kind of interesting scientific question. To my mind, 0.051 and 0.049 (for example) are pretty much identical – at least I have no idea how to make sense of such a tiny difference in probabilities. And yet one value leads us to accept one version of reality and the other an entirely different one.
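That arbitrariness is easy to see by simulation. The sketch below (plain Python, hypothetical numbers) runs the very same experiment ten times over, with the same true effect, and computes a p value each time: the verdicts scatter on both sides of 0.05, so which side of the line a single study lands on owes as much to sampling luck as to the effect itself.

```python
import math
import random

random.seed(1)

def two_sample_p(xs, ys):
    """Approximate two-sided p value for a difference in means
    (normal approximation; adequate for the sample sizes used here)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# The *same* experiment, ten times: a true effect of 0.3 SD, n = 50 per group.
ps = []
for _ in range(10):
    treatment = [random.gauss(0.3, 1.0) for _ in range(50)]
    control = [random.gauss(0.0, 1.0) for _ in range(50)]
    ps.append(two_sample_p(treatment, control))

print([round(p, 3) for p in ps])  # the p values scatter widely from run to run
```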

To quote Kempthorne (1971: 490):
To turn to the case of using accept-reject rules for the evaluation of data, … it seems clear that it is not possible to choose an α beforehand. To do so, for example, at 0.05 leads to all the doubts that most scientists feel. One is led to the untenable position that one’s conclusion is of one nature if a statistic t, say, is 2.30 and one of a radically different nature if t equals 2.31. No scientist will buy this unless he has been brainwashed and it is unfortunate that one has to accept as fact that many scientists have been brainwashed.

Think Kempthorne’s being hyperbolic in that last sentence? Nelson et al. (1986) did a survey of active researchers in psychology to ascertain their confidence in non-null hypotheses based on reported p values and discovered a sharp cliff effect (an abrupt change in confidence) at p = 0.05, despite the fact that p values change continuously across their whole range (a smaller cliff was found at p = 0.10). In response, Gigerenzer (2004: 590) lamented, “If psychologists are so smart, why are they so confused? Why is statistics carried out like compulsive hand washing?”

But now suppose we’ve learned our lesson: and so, chastened, we abandon our arbitrary threshold α value and look instead at the exact p value associated with our predictor variable, as many writers have advocated. And let’s suppose that it is impressively low, say p = 0.00073. We conclude, correctly, that if the null hypothesis were true (which we never really believed in the first place) then the data we actually obtained in our sample would have been pretty unlikely. So, following standard practice, we conclude that the probability that the null hypothesis is true is only 0.00073. Right? Wrong. We have confused the probability of the data given the hypothesis, P(Data|H0), which is p, with its inverse probability P(H0|Data), the probability of the hypothesis given the data, which is something else entirely. Ironically, we can compute the inverse probability from the original probability – but only if we adopt a Bayesian approach that allows for “subjective” probabilities. That approach says that you begin the study with some prior belief (expressed as a probability) in a given hypothesis, and adjust that belief in light of your new data.
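A toy Bayesian calculation makes the distinction concrete. All three input numbers below are assumptions chosen purely for illustration (the prior and the likelihood of the data under the alternative are invented); the point is only the arithmetic, which shows how far P(H0|Data) can sit from p.

```python
# Toy numbers, all assumed for illustration -- only the arithmetic is the point.
p_data_given_h0 = 0.00073  # the "impressive" p value from the text
p_data_given_h1 = 0.05     # assumed likelihood of the data under the alternative
prior_h0 = 0.9             # assumed prior belief in the null

# Bayes's theorem: P(H0|Data) = P(Data|H0) P(H0) / P(Data)
posterior_h0 = (p_data_given_h0 * prior_h0) / (
    p_data_given_h0 * prior_h0 + p_data_given_h1 * (1 - prior_h0)
)
print(round(posterior_h0, 3))  # ≈ 0.116, more than 100 times larger than p
```

Under these (made-up) inputs the null retains a posterior probability of about 0.116, even though p = 0.00073: the two quantities answer different questions.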

Alas, the whole NHST framework is by definition frequentist (meaning it interprets your results as if you could do the same study countless times, your data being but one such realization) and does not permit the inversion of probabilities, which can only be done by invoking that pesky Bayes’s theorem that drives frequentists nuts. In the frequentist worldview, the null hypothesis is either true or false, period; it cannot have an intermediate probability assigned to it. Which, of course, means that 1 – P(H0|Data), the probability that the alternative hypothesis is correct, is also undefined. In other words, if we do NHST, we have no warrant to conclude that either the null or the alternative hypothesis is true or false, or even likely or unlikely for that matter. To quote Jacob Cohen (1994), “The earth is round (p < 0.05).” Think about it.

(And what if our preferred alternative hypothesis is not the one and only possible alternative hypothesis? Then even if we could disprove the null, it would tell us nothing about the support provided by the data for our particular pet hypothesis. It would only show that some alternative is the correct one.)

But all this is moot. The calculation of p values assumes that we have drawn a simple random sample (SRS) from a population whose members are known with exactitude (i.e. from a comprehensive and non-redundant sample frame). There are corrections for certain kinds of deviations from SRS such as stratified sampling and cluster sampling, but these still assume an equal-probability random sampling method. This condition is almost never met in real-world research, including, God knows, my own. It’s not even met in experimental research – especially experiments on humans, which by moral necessity involve self-selection. In addition, the conventional interpretation of p values assumes that random sampling error associated with a finite sample size is the only source of error in our analysis, thus ignoring measurement error, various kinds of selection bias, model-specification error, etc., which together may greatly outweigh pure sampling error.

And don’t even get me started on the multiple-test problem, which can lead to completely erroneous estimates of the attained “significance” level of the test we finally decide to go with. This problem can get completely out of hand if any amount of exploratory analysis has been done. (Does anyone keep careful track of the number of preliminary analyses that are run in the course of, say, model development? I don’t.) As a result, the p values dutifully cranked out by statistical software packages are, to put it bluntly, wrong.
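The arithmetic behind the multiple-test problem is stark. Assuming k independent tests of true null hypotheses, each at α = 0.05, the chance of at least one spurious "significant" result is 1 − 0.95^k:

```python
# Probability of at least one spuriously "significant" result among k
# independent tests of true null hypotheses, each at alpha = 0.05.
alpha = 0.05
chances = {k: 1 - (1 - alpha) ** k for k in (1, 5, 20, 100)}
for k, p_any in chances.items():
    print(k, round(p_any, 3))  # 1: 0.05, 5: 0.226, 20: 0.642, 100: 0.994
```

Twenty exploratory looks at the data and a false positive is more likely than not; a hundred and it is all but guaranteed.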

One final technical point: I mentioned above that almost no one sets a β value for their analysis, despite the fact that β determines how large a sample you’re going to need to meet your goal of rejecting the null hypothesis before you even go out and collect your data. Does it make any difference? Well, one survey calculated that the median (unreported) power of a large number of nonexperimental studies was about 0.48 (Maxwell, 2004). In other words, when it comes to accepting or rejecting the null hypothesis you might as well flip a coin.
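The power arithmetic is simple enough to sketch. Assuming a two-sample comparison of a standardized effect d with n subjects per group at a two-sided α of 0.05, a back-of-the-envelope normal approximation (ignoring the far rejection tail and the t correction) gives power directly:

```python
import math

def power(d, n, z_crit=1.96):
    """Approximate power of a two-sample test for a standardized effect d
    with n subjects per group (normal approximation, two-sided alpha = 0.05;
    the far rejection tail and the t correction are ignored)."""
    z_effect = d * math.sqrt(n / 2)
    # P(Z > z_crit - z_effect) for a standard normal Z
    return 0.5 * math.erfc((z_crit - z_effect) / math.sqrt(2))

# A "medium" effect of d = 0.5: how big must each group be to reach power of 0.8?
for n in (20, 32, 64, 128):
    print(n, round(power(0.5, n), 2))  # roughly 0.35, 0.52, 0.81, 0.98
```

With 20 subjects per group, power for a medium effect is in coin-flip territory, which is exactly the situation Maxwell's survey describes.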

And one final philosophical point: what do we really mean when we say that a finding is “statistically significant”? We mean an effect appears, according to some arbitrary standard such as p < 0.05, to exist. It does not tell us how big or how important the effect is. Statistical significance most emphatically is not the same as scientific or clinical significance. So why call it “significance” at all? With due deference to R. A. Fisher, who first came up with this odd and profoundly misleading bit of jargon, I suggest that the term “statistical significance” has been so corrupted by bad usage that it ought to be banished from the scientific literature.

In fact, I believe that NHST as a whole should be banished. At best, I regard an exact p value as providing nothing more than a loose indication of the uncertainty associated with a finite sample and a finite sample alone; it does not reveal God’s truth or substitute for thinking on the researcher’s part. I’m by no means alone (or original) in coming to this conclusion. More and more professional statisticians, as well as researchers in fields as diverse as demography, epidemiology, ecology, psychology, sociology, and so forth, are now making the same argument – and have been for some years (for just a few examples, see Oakes 1986; Rothman 1998; Hoem 2008; Cumming 2012; Fidler 2013; Kline 2013).

But if we abandon NHST, do we then have to stop doing anything other than purely descriptive statistics? After all, we were taught in intro stats that NHST is a virtual synonym for inferential statistics. But it’s not. This is not the place to discuss the alternatives to NHST in any detail (see the references provided a few sentences back), but it seems to me that instead of making a categorical yes/no decision about the existence of an effect (a rather metaphysical proposition), we should be more interested in estimating effect sizes and gauging their uncertainty through some form of interval estimation. We should also be fitting theoretically-interesting models and estimating their parameters, from which effect sizes can often be computed. And I have to admit, despite having been a diehard frequentist for the last several decades, I’m increasingly drawn to Bayesian analysis (for a crystal-clear introduction to which, see Kruschke, 2011). Thinking in terms of the posterior distribution – the support for a model provided by previous research, as modified by new data – seems a quite natural and intuitive way to capture how scientific knowledge actually accumulates. Anyway, the current literature is full of alternatives to NHST, and we should be exploring them.
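As a sketch of what that alternative reporting can look like, here is a minimal interval estimate on simulated data: a standardized mean difference (Cohen's d) with an approximate 95% confidence interval, instead of a yes/no verdict. The data and the large-sample standard-error formula for d are illustrative assumptions, not a prescription.

```python
import math
import random

random.seed(2)

# Simulated data standing in for a real study (the true effect here is 0.4 SD).
treatment = [random.gauss(0.4, 1.0) for _ in range(60)]
control = [random.gauss(0.0, 1.0) for _ in range(60)]

nx, ny = len(treatment), len(control)
mx, my = sum(treatment) / nx, sum(control) / ny
vx = sum((x - mx) ** 2 for x in treatment) / (nx - 1)
vy = sum((y - my) ** 2 for y in control) / (ny - 1)
sd_pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))

# Effect size (Cohen's d) and an approximate large-sample 95% CI.
d = (mx - my) / sd_pooled
se_d = math.sqrt((nx + ny) / (nx * ny) + d ** 2 / (2 * (nx + ny)))
lo, hi = d - 1.96 * se_d, d + 1.96 * se_d
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The interval communicates both how big the effect seems to be and how uncertain that estimate is, which is what a reader actually needs.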

By the way, the whole anti-NHST movement is relevant to the “Mermaid’s Tale” because most published biomedical and epidemiological “discoveries” (including what’s published in press releases) amount to nothing more than the blind acceptance of p values less than 0.05. I point to Anne Buchanan’s recent critical posting here about studies supposedly showing that sunlight significantly reduces blood pressure. At the p < 0.05 level, no doubt.


Cohen, J. (1994) The earth is round (p < 0.05). American Psychologist 49: 997-1003.

Cumming, G. (2012) Understanding the New Statistics: Effect Sizes, Confidence Intervals, and Meta-Analysis. New York: Routledge.

Fidler, F. (2013) From Statistical Significance to Effect Estimation: Statistical Reform in Psychology, Medicine and Ecology. Routledge, New York.

Gigerenzer, G. (2004) Mindless statistics. Journal of Socio-Economics 33: 587-606.

Hoem, J. M. (2008) The reporting of statistical significance in scientific journals: A reflexion. Demographic Research 18: 437-42.

Kempthorne, O. (1971) Discussion comment in Godambe, V. P., and Sprott, D. A. (eds.), Foundations of Statistical Inference. Toronto: Holt, Rinehart, and Winston.

Kline, R. B. (2013) Beyond Significance Testing: Statistics Reform in the Behavioral Sciences. Washington: American Psychological Association.

Kruschke, J. K. (2011) Doing Bayesian Data Analysis: A Tutorial with R and BUGS. Amsterdam: Elsevier.

Longford, N. T. (2005) Model selection and efficiency: Is “which model…?” the right question? Journal of the Royal Statistical Society (Series A) 168: 469-72.

Maxwell, S. E. (2004) The persistence of underpowered studies in psychological research: Causes, consequences, and remedies. Psychological Methods 9: 147-63.

Nelson, N., Rosenthal, R., and Rosnow, R. L. (1986) Interpretation of significance levels and effect sizes by psychological researchers. American Psychologist 41: 1299-1301.

Oakes, M. (1986) Statistical Inference: A Commentary for the Social and Behavioral Sciences. New York: John Wiley and Sons.

Rothman, K. J. (1998) Writing for Epidemiology. Epidemiology 9: 333-37.

Ziliak, S., and McCloskey, D. N. (2008) The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor: University of Michigan Press.

Friday, November 21, 2014

Serial science

Are you hooked on Serial yet?  The current season of this radio series from the makers of This American Life explores whether Adnan Syed, in jail for the last 15 years and sentenced to life for killing his ex-girlfriend Hae Min Lee when they were both 18, really did it.  He says he didn't.  His friend Jay says he did.  The question comes down to, what happened in those crucial 21 minutes when his whereabouts are unknown and undocumented, and Hae was strangled?

Sarah Koenig, presenter of this series, and a producer of This American Life, describes herself as having been obsessed with this case for the past year.  She probes every angle, reads every report she can find, listens to the police interview tapes, talks to anyone who will speak to her about what they saw or what they know, revisits the supposed scene of the crime, tests whether Adnan could have gotten there in the 21 minute window of opportunity, challenges Adnan with her thoughts, doubts and questions in hundreds of phone calls, and so much more.
What she realized is that the trial covered up a far more complicated story, which neither the jury nor the public got to hear. The high school scene, the shifting statements to police, the prejudices, the sketchy alibis, the scant forensic evidence - all of it leads back to the most basic questions: How can you know a person’s character? How can you tell what they’re capable of? In Season One of Serial, she looks for answers.
Some weeks Koenig is convinced Adnan is guilty, some weeks she's not so sure. We don't yet know what she concludes, or indeed whether she concludes anything other than that it's impossible to reach a conclusion, but, perhaps because it's possible to weigh the evidence from various angles, it's a gripping series.

Scales of justice; Wikipedia

A compelling story.  Not because we really care about Adnan -- unless he's innocent, in which case of course, the injustice is a tragedy.  Indeed, it's Hae one really cares about, a bright young woman whose future was violently and callously taken from her.  And, the idea that, to compound this tragedy, a young man's life may have been ruined by a system that didn't do its job is also part of the emotional hook that keeps us listening.  But more, to me the show is fascinating because of what it tells us about truth, and how we know what we know.

Starting with the Enlightenment around 400 years ago, the gold standard for science has been empirical evidence, the equivalent of fingerprints, witness stories and DNA.  Naturalists then, and scientists now, collect observational or experimental data to try to make sense of the world and to build a story from.  As Koenig does, scientists assume there's a truth, and that evidence can lead us to it.  Yes, the evidence needs to be tested and weighed, and evaluated and re-evaluated, but the assumption is that with evidence we can know the truth.

But that's science. Koenig is talking about a legal question -- this crime happened, who did it? Lawyers aren't necessarily looking for the truth, even though they know there is one, because they know they can't necessarily know what it is.  The evidence often can be interpreted in numerous ways, used by a canny defense attorney as well as a canny prosecutor in support of either guilt or innocence.  Or, as in Adnan Syed's case, it seems that what would be crucial evidence just doesn't exist, so he was convicted on circumstantial evidence instead.

And of course sometimes pieces of evidence are omitted in the pursuit of a consistent story, which is often what lawyers are really after.  Many times, defense lawyers don't know whether their client is guilty or innocent, but they simply (or not so simply) build whatever exculpatory story the evidence (or some of the evidence) will support.  So, Koenig is being an ace reporter in this series, and a great storyteller as she unpeels layer after layer after layer of evidence in her search for the truth, but she's not necessarily being a winning lawyer.  But that's ok; she is a reporter.  By contrast, in our system a lawyer's job is to take sides, not to seek truth.  That's different from science, and it might seem strange unless you realize that there may not be a better way to determine guilt or innocence when truth can't be tested directly.

Chemical balance; Indica Scientific

Scientists are telling stories from evidence, too, with the hope that there's a knowable truth to be found.  As much evidence as they can get.  Should they be impartial?  Yes and no.  Gathering evidence is what the whole push for Big Data and meta-analyses is about: building stories from enough data that we can assume we're approaching the truth.  Evidence that should be weighed impartially.  This seems like the new path to truth but, to be fair, Darwin collected Big Data in his way, too, observing more pigeons, barnacles, and orchids than most of us would have had the stomach for.  He was a diligent, patient observer, who also built a consistent story from the evidence -- but with various theories in mind.  Not impartially.  So, scientists are reporters of the natural world, but the synthesizer scientists are lawyers, too, piecing together the evidence to make a good, consistent story, taking a side.

But, it's never clear how close we are to the truth, even if or when we think we can assume there is a single truth.  Understanding what genes are and what they do, for example, took a lot of sleuthing, building a story, from the circumstantial evidence that Mendel so diligently provided, to the discovery that chromosomes were an important actor, and the discovery of DNA, and so on.

But geneticists understood what genes were a lot more definitively in the beginning than they do now. Ironically that was because it was before there was so much evidence.  Ask 5 of them now what a gene is, and if any of them actually give you an answer it will be vague, and it's likely that it won't agree with any of the others you get.  Ken always told his Human Genetics students each semester that much of what he was going to tell them wasn't going to be true the next year.  They seemed surprised or nonplussed, but he then explained about our growing knowledge and understanding.  Presumably there's a truth, and presumably we're heading toward it, but often it seems we have no idea how close we are getting to it.  Of course, the goal keeps changing as discoveries keep happening, and that doesn't help.

We don't yet know how Sarah Koenig is going to conclude her story.  She will have plowed through masses of evidence, but if the truth was in there to be found, I think it would have been found 15 years ago.  She's talking to a lot of people who knew Hae and know Adnan, and perhaps she'll dig up something new and significant that might change the story.  But so far, the crucial evidence is missing; only Adnan, Jay and Hae know what happened, but Hae can no longer speak, there were no witnesses, and Adnan and Jay are telling different stories.  Or, maybe someone else killed Hae.

This is an imperfect metaphor for science, of course, because no one person is holding out on us on whether there are multiverses, or dark matter, or something crucial missing from our understanding of biology, or even how antimalarials work.  But the elusiveness of these kinds of truths, the difficulty of interpreting the evidence and the idea that we might need to re-evaluate the data from time to time certainly pertain to science.

Thursday, November 20, 2014

K13 and the spread (or simultaneous emergence) of drug resistance in malaria parasites

We’ve mentioned this before, but the malaria and evolution story is complicated by multiple evolutionary tales:

  • Humans adapt to parasites
  • Parasites adapt to humans
  • Mosquitoes adapt to both

Parasites may adapt to mosquitoes too – and humans have adaptations to mosquitoes…

Malaria parasites bursting from red blood cells.  From National Geographic, June 1986; scanned and shared online by Centuron
This post is a story about parasites developing responses to some of the things we do to get rid of them.  Malaria parasites appear to have a real knack for survival, or at least the ones that survive and spread do.  Time and time again they have evolved resistance to our antimalarials.  Sometimes it happens quickly, sometimes it seems to take decades, but each time a new antimalarial is used, parasite strains emerge that are resistant to that drug.

Southeast Asia appears to be a “special” place with regard to the evolution of antimalarial resistance.  For whatever reason, parasites that are resistant to new antimalarials always seem to be first documented here and then sometimes appear to subsequently spread globally.  (See Klein 2013 for a nice review of some theory around this problem (2)).  For example, chloroquine resistance in falciparum malaria seems to have independently emerged in both South America and Southeast Asia, but then seems to have spread globally from Southeast Asia (3).

Plasmodium falciparum parasites are now resistant, almost globally, to all antimalarials except the artemisinins.  In an attempt to keep these drugs effective, there has been a huge push to use them only in combination with other antimalarials.  The mechanisms of action of most antimalarials aren't well understood, but the hope has been that by using different drugs, with different half-lives and probably different modes of action, it will be much more difficult for parasites to develop resistance than under monotherapy (treatment with a single drug).

However, despite these efforts, artemisinin resistance has emerged in Southeast Asia (4).  It is not normally complete treatment failure at this point, but rather increased clearance times.  For example, while it would once take at most two days for parasites to be cleared from a patient’s blood stream after taking a dose of artemisinin, it now can take five.  Occasionally the treatment doesn’t work at all.   This is even occurring with artemisinin combination therapy.  Strains of parasites with “reduced sensitivity” have been found in Cambodia, in part of Vietnam, and along both sides of the Thailand-Myanmar border.
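The "clearance time" being measured here is usually summarized as a parasite clearance half-life, estimated from the roughly log-linear decline in parasite density in a patient's blood after treatment.  A minimal sketch of that calculation (the numbers are invented for illustration, not real patient data):

```python
import math

def clearance_half_life(times_h, densities):
    """Estimate the parasite clearance half-life (hours) by least-squares
    regression of log(density) on time, assuming a log-linear decline."""
    logs = [math.log(d) for d in densities]
    n = len(times_h)
    t_bar = sum(times_h) / n
    y_bar = sum(logs) / n
    slope = sum((t - t_bar) * (y - y_bar) for t, y in zip(times_h, logs)) \
            / sum((t - t_bar) ** 2 for t in times_h)
    return math.log(2) / -slope  # hours for the density to halve

# Hypothetical sensitive infection: density drops 10-fold every 12 hours
print(round(clearance_half_life([0, 12, 24, 36], [1e5, 1e4, 1e3, 1e2]), 1))       # 3.6
# Hypothetical resistant infection: density only halves every 12 hours
print(round(clearance_half_life([0, 12, 24, 36], [1e5, 5e4, 2.5e4, 1.25e4]), 1))  # 12.0
```

In the field studies, a clearance half-life above roughly five hours has been used as a working marker of reduced artemisinin sensitivity; the function above is just the core of that idea, stripped of the lag and tail phases that real analyses have to handle.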

Some work has attempted to understand the genetics behind artemisinin resistance, but many results, including a few I've been a part of, have contradicted each other.  However, one region on the parasite's chromosome 13 keeps popping up in analyses.  Earlier this year, mutations in a particular gene (Kelch 13 (K13)-propeller) were identified as being potentially important in artemisinin resistance.  The function of this gene in the parasite isn't well understood, but it is related to protein interactions.  And it isn't a single point mutation that seems to confer resistance: a wide variety of mutations in this gene appear to lead to parasites that are less sensitive to artemisinins -- and this has now been confirmed both in vitro and in vivo.

The in vitro portion of this work began with a lab strain of falciparum malaria (3d7) which was intermittently exposed to artemisinins over a period of five years (5).  Doses of the drug were applied, then removed, then applied again at higher concentrations over this period.  Parasites from each dose cycle were sequenced so that the origin of mutations could be documented and so that mutations could be compared between case and control strains.  Ultimately the researchers narrowed their search down to a mutation in a single gene that corresponded to the point in time when some of the lab parasites seemed to no longer have strong, negative reactions to the antimalarial.
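The case/control comparison at the heart of such a selection experiment boils down to a filtering step: which mutations appear in every surviving, drug-pressured line but in none of the untreated controls?  A toy sketch of that logic (the variant labels here are invented for illustration, not taken from the study's data):

```python
# Variants observed in each sequenced line, keyed by (gene, mutation).
# All labels below are hypothetical.
resistant_lines = [
    {("K13", "M476I"), ("geneX", "A12T")},
    {("K13", "M476I"), ("geneY", "G55R")},
]
control_lines = [
    {("geneX", "A12T")},
    {("geneY", "G55R"), ("geneZ", "P9L")},
]

# Candidate resistance mutations: present in every resistant line,
# never seen in any control line.
in_all_resistant = set.intersection(*resistant_lines)
in_any_control = set.union(*control_lines)
candidates = in_all_resistant - in_any_control
print(candidates)  # {('K13', 'M476I')}
```

Real pipelines layer quality filters, allele frequencies, and time-series information on top of this, but the basic set logic is the same.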

[It is important, I think, to remember that drug resistance isn’t usually an all or nothing type of trait, it is much more a trait of degree.  Even in situations where an antimalarial no longer works, it is likely that by increasing the dose of that antimalarial, there will be a point at which the parasites are still sensitive.  The problem is that it also becomes toxic to the human at some point.]
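This "trait of degree" is what a dose-response curve captures, usually summarized by an IC50: the drug concentration that inhibits parasite growth by half.  Resistance typically shifts the curve to the right rather than flattening it.  A hedged sketch, using a standard Hill-type model with made-up parameters:

```python
def survival(conc, ic50, hill=2.0):
    """Fraction of parasites surviving at drug concentration `conc`,
    under a simple Hill dose-response model (illustrative only)."""
    return 1.0 / (1.0 + (conc / ic50) ** hill)

# Hypothetical IC50s (arbitrary units): the "resistant" strain is 10x less sensitive.
for conc in [1, 10, 100]:
    s = survival(conc, ic50=5.0)    # sensitive strain
    r = survival(conc, ic50=50.0)   # resistant strain
    print(conc, round(s, 3), round(r, 3))
```

At a high enough dose even the "resistant" strain is suppressed (survival is 0.2 at conc=100 here), which is exactly the bind described above: the dose that would still work may also be the dose that poisons the patient.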

Next the researchers began looking at field isolates, across space and time, in Southeast Asia.   While they didn’t always find the same point mutations, they did find mutations in the same gene, in geographic areas where parasites are known to be less sensitive to artemisinins.  In areas where parasites still appear to be sensitive to the drug, they did not find mutations in this gene.  Furthermore, the prevalence of these mutations appears to have increased in certain regions (the ones that now have artemisinin resistance) over time.  

These findings are interesting, I think, for several reasons.

Here we have a gene in which mutations are somehow related to artemisinin resistance in malaria parasites.  But there isn't a single mutation that leads to this resistance phenotype -- rather, it seems that just about any mutation(s) in this "gene" leads to resistance.  Does that make this a gene for resistance?

Another major finding, this time from a paper that came out in September 2014 (6), is that these mutations may not be spreading in the same way that other resistant strains (like chloroquine resistant falciparum malaria, for example) seem to have.  By analyzing patterns of linkage disequilibrium in the regions flanking the K13 gene, the authors noted that several K13 mutations appear to have emerged independently and almost simultaneously, both in Cambodia and along the Thailand-Myanmar border.
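The intuition behind the flanking-region analysis: a resistance mutation that arose once and spread carries the same surrounding haplotype wherever it goes, giving strong linkage disequilibrium with flanking markers, whereas independent origins sit on different genetic backgrounds.  A common LD summary for two biallelic loci is r², sketched here from haplotype and allele frequencies (toy numbers, not the paper's data):

```python
def r_squared(p_ab, p_a, p_b):
    """Linkage disequilibrium r^2 between two biallelic loci, from the
    frequency of the A-B haplotype and the allele frequencies of A and B."""
    d = p_ab - p_a * p_b  # the classic D statistic
    return d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Single origin: the resistance allele (freq 0.3) always travels with
# flanking allele B (freq 0.3) -> perfect association, r^2 = 1.
print(r_squared(p_ab=0.3, p_a=0.3, p_b=0.3))

# Independent origins: the resistance allele sits on mixed backgrounds,
# so the haplotype frequency is close to p_a * p_b -> r^2 near zero.
print(r_squared(p_ab=0.1, p_a=0.3, p_b=0.3))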

Once again the implications are quite interesting, if also scary.
One is that the evolutionary response seems less rare and unique if it can happen independently and simultaneously in different regions.  Does this mean that combination therapy is not working the way we hoped it would?

Another is more directly related to public health.  Right now there are several small scale elimination attempts occurring throughout Southeast Asia.  In fact, I’m working with one of the teams doing this (briefly discussed here).  Our hope is that we can wipe out resistant strains before they spread (via mosquitoes or humans) to other regions – perhaps especially Africa.  If resistance is likely to evolve anywhere that artemisinins are being used, we may not be able to halt this spread.  I would argue that our intentions to eliminate malaria in targeted subregions are worthwhile regardless.  But, it is a bit scary nevertheless.  

*** My opinions are my own!  This post and my opinions do not necessarily reflect those of Shoklo Malaria Research Unit, Mahidol Oxford Tropical Medicine Research Unit, or the Wellcome Trust.  

1. Network MGE. Reappraisal of known malaria resistance loci in a large multicenter study. Nat Genet. 2014;46(11):1197–205.

2. Klein EY. Antimalarial drug resistance: a review of the biology and strategies to delay emergence and spread. Int J Antimicrob Agents [Internet]. Elsevier B.V.; 2013 Feb 7 [cited 2013 Mar 8];41(4):311–7. Available from:

3. Payne D. Spread of chloroquine resistance in Plasmodium falciparum. Parasitol Today [Internet]. 1987 Aug;3(8):241–6. Available from:

4. Dondorp A, Nosten F, Yi P. Artemisinin resistance in Plasmodium falciparum malaria. New Engl J Med J … [Internet]. 2009 [cited 2013 Nov 17];455–67. Available from:

5. Ariey F, Witkowski B, Amaratunga C, Beghain J, Langlois A-C, Khim N, et al. A molecular marker of artemisinin- resistant Plasmodium falciparum malaria. Nature. 2014;505(7481):50–5.

6. Takala-harrison S, Jacob CG, Arze C, Cummings MP, Silva JC, Khanthavong M, et al. Independent Emergence of Artemisinin Resistance Mutations Among Plasmodium falciparum in Southeast Asia. J Infect Dis. 2014;491:1–10.

Wednesday, November 19, 2014

"Save the planet": a meaningless slogan?

What is all this talk about sustainability and so on?  What does the term mean and do we have historical precedents to turn to for an answer?  These days, in relation to the concept of sustainability we also hear a debate about how to 'save the planet' in the face of climate change, global warming, CO2 emissions, fossil fuels, overpopulation, industrial agriculture and erosion, antibiotic overuse, loss of clean water, and so on.  We're on various email lists that almost every day send us stories bemoaning the course of things that are not 'sustainable'.

Nothing specific triggers this post other than musing about a set of issues that may be of critical importance to 'us'--whoever 'us' is.  These various terms and slogans make sense to many people and indeed even hearing them puts others' hair on fire, because they oppose what those who want to save the planet are assumed to be advocating.  But these terms are in themselves almost without clear meaning, if any meaning at all, and that can be a problem, given how polarized society is on the issues.

Generally speaking, those on the political left express urgent fear about the current problems that are being discussed, recognized, or claimed.  The left wants to save the planet by cutting back on the use of fossil fuels and big-scale agriculture, human overpopulation, the destruction of natural habitats, and so on.

In reaction, generally those on the political right say that if there really is a problem (they tend to doubt sensory reality and science as its formalizer), then industrial innovation and the capitalist seizing on opportunity will fix it, so not to worry.  Even more, they often argue that the crises are being over-stated or data being misunderstood (or fabricated) so that, in truth, the planet doesn't need 'saving' anyway.  Global warming is either being misinterpreted or it's just part of the normal cycling on Earth and the pendulum will swing back in time.  Industrial capitalism will feed and warm us all, if we but give it the time and enough rein to do its job. The planet, in a sense, will 'save' itself.

Earth; What exactly does one want to 'save'?  Source: Wikipedia

But if you think about it, none of this has much meaning at all, no matter that it sounds like it does.
To see this, we ask in particular what, exactly, save the planet means?

1.  Does it mean save our current way of life?  Do we want to cut back on fossil fuels enough to stop or reverse global warming and resource exhaustion, but not so much that even liberals would complain?  I don't hear them saying we need to outlaw18-wheelers, or trains, tractors, air conditioners, golf carts, power leaf-blowers, backyard pools, or personal cars.  'Cutting back' generally seems to mean to people that we can have a bit less, though not a whole lot less, and still keep our lifestyle and reverse climate change, loss of topsoil, and so on.  Of course, if we really wanted to 'save' the planet in these terms, perhaps we should be advocating global equity in resources and living conditions, but nobody is actually serious about that because if we evened out the income distribution we'd all be in the soup.  Indeed, of course, we're concerned that the Chinese, Indians, Brazilians et al. want to wait til they have cars and A/Cs before they start to save the planet.

I mean, if we polluted ourselves out of supermarkets and personal cars, we would still survive, though there would be a lot fewer of us, and perhaps no global transport and even, heaven forbid, no electronic entertainment.  Some would survive all the fracking anyone could possibly imagine.  So, here, 'saving' really means something akin to delaying the demise of our way of life.

2.  Save human life on Earth?  That is an understandable if selfish thing to advocate.  The Earth would not miss us were we to go extinct.  And we will eventually be gone, of course.  So as far as that goes, again what is being advocated is not save the planet, but delaying our specific species' demise.  Likewise, saving species from extinction is, as any ecologist or evolutionary biologist (or cosmologist) knows, quite illusory.  All species become extinct and only a fraction of lineages do that by evolving into new species.  In the long term, of course, the Earth will be swallowed up by the exploding dying Sun.  So 'saving' again means 'delaying' something we, personally, in our very short-sighted, egocentric lives value.

Even more than that, we want our own lives to have some sort of long-term meaning. That's of course also an illusion, unless perhaps you have expectations of an infinite afterlife.  It's just that when we are returned to ashes, some new future ashes will harbor similar thoughts (about their own lives).

3.  Save the 'planet' as a whole?  It means little to talk about 'saving' the planet.  First, evolution has always adapted life to our planet's conditions, and there is absolutely no reason to think it won't adapt to whatever humans do to the place, including nuclear holocaust.  But since Earth is doomed to destruction eventually, what, exactly, does 'saving the planet' mean?  I think it probably means some cuddly short-term view of things, rather than a carefully considered view, unless it means preserving for the moment things we happen to like, like pandas, our kids and grandchildren or fellow countrymen and the like.  There's of course absolutely nothing wrong with that, but one should be clear, because not saving 'the planet' doesn't mean there'd be nothing; it just means it'd be different.

Our planet is not in danger and doesn't need 'saving'.  The Earth is one huge biochemical reaction and its 'Gaia', its physico-chemical unity is based on its components, energy, and so on.  When or whether or for how long people, or any given lifestyle, or any lifestyle exists is part of that.  We may try to preserve what we like, or enough of what we like, in a state that for our limited lifespan and egocentric purposes seems permanent, if that's how we wish to define 'save'.

4.   Go back to swidden hand-hoed agriculture?
There are many arguments, apparently quite valid, that we are rapidly exhausting our soil in various ways having to do with large-scale industrialized commercially capitalized agriculture.  Does save the planet mean to find and implement ways, that apparently do exist, to grow enough to feed the human population without this being forced upon us when naturally developed soils are drained away?  This might be a good objective, but it is a political one, obviously, because from the 'planet's point of view, maybe its overall sustainability would be better off if we did exhaust the agricultural soil and starved ourselves out of existence or at least back to less resource-demanding numbers. But save the planet as a slogan probably isn't advocating this.  Does it just mean we don't like the way Kansas is being farmed, that Big Ag's like Monsanto are very rich, and that we (most of us who've never really seen a farm first-hand) are venting some nostalgia about things we don't really personally even know much about?

5.  Develop 'sustainability'?
This word is as vague and in a way naive as save the planet.  Nothing in human (much less evolutionary) history is eternally sustainable.  Change is part of Nature and its geological-historical processes.  Human agriculture has more or less from its beginning gone through periods of growth, resource over-use, and decline.  That ours will do the same should come as no surprise.  Is that bad? From the point of view of our own personal nostalgia and sentimentality, perhaps.  From the point of view of 'the planet', there is no reason to impose such a purely human judgment.

In a NY Times editorial on the Canadian tar sands XL pipeline debate, Andrew Nikiforuk concludes:
"The American social critic Lewis Mumford described mining as barbaric to land and soul. By any definition, Keystone XL grants license to an earth-destroying economy."
The editorial is in itself a good discussion of the tar sands and how they are extracted and what that will do to large tracts of forest.  But the final description is just naive, in the sense I am discussing.  No matter what the ramifications in the short term, mining won't actually destroy the earth.

Yet, of course, there is something here--but what is it?  Esthetics about primeval forest?  Global warming and forest destruction?  Dislike of greedy BigOil?  Failure of society to come to grips with the potential traumas whose seeds short-term convenience and greed will lead to?

Do as I say, not as I do?
A new book by Naomi Klein called This Changes Everything, is getting rave reviews these days.  She makes a case that current global capitalism is responsible for climate change that is soon to be disastrous (not good for saving the planet!).  We must cut back, way back, on our energy consumption in the developed world.  Citing conservation advocates' estimates, for global sustainability and to curb or reverse global warming, we must, if we are humanitarian, share the wealth, that is, share the per-capita Wattage expended.  We here in the developed world need to cut back by something like 80%, and let those in the developed world grow, for humanitarian reasons, a few-fold.

Sounds nice.  I haven't read the book, and I am writing based on reviews, including an excellent recent one (OA) by Elizabeth Kolbert in the December 4th issue of The New York Review of Books.  From the reviews, I probably will agree with the book's argument, but that's not the point here.  The review notes in passing that Klein has traveled the globe so much that she's an 'elite' frequent flyer club member, that she has been flying over the world to visit places where relevant activities to curb resource extraction are taking place, that she got ideas when dining in Geneva, and that she and her husband are making a film to go with her book.  The book is printed on paper and distributed around the world.  It's not freely downloadable.  Will the film involve air travel, or a lot of resource consumption, or be free on line?

It's an ethical dilemma and by no means new.  Authors and film-makers, like missionaries for causes back through history, must make a living.  The justification for high consumption on their part is a kind of executive privilege: it's a nasty job being holier than thou, but somebody's got to to it.  This is similar to what has always been heard from powerful, pious, or privileged as they squeeze the rest of the population, exhorting them to bite this or that bullet.

Yet, obviously, authors do have to make a living, just as preachers, earls and kings do, and maybe it's simply true that someone abusing the resource issue is needed if masses are to be informed so they won't abuse the resources.  Masses need leaders.  You can make your own assessment as to what's fair, whether there are other ways than experts and spokespersons to carry a banner, and so on.  There are no easy answers (and, certainly, yours truly drives, has a warm house, travels to Europe to lecture or visit family, eats fresh food in winter shipped from the tropics, and so on).  It's very hard not to be hypocritical.  The human track record isn't very good in this respect.  Indeed, even many of today's arguments were au courant only a few decades ago--if you're old enough, you'll remember 'the population bomb' and 'small is beautiful' and daisy-painted VW buses parked in drop-out, live- naturally communes.

Is change as we are experiencing it these days any more threatening than it has been to prior human generations, albeit each in its own particular way, with likely negative as well as positive consequences?  If we individually or collectively want to alter things to satisfy some goal, and want to rally others behind that view, is the simple, catchy save the planet banner the kind of rallying point that works?

OMG, this is what really matters!
While finishing this post, nibbling a chocolate covered hazelnut, we came across this horrifying story that in an incredibly timely way showed in stark relief just the very disastrous things we face, to make our entire point.


A new story apparently reports iron-clad proof that what we are doing to the earth is going to make its most precious resource disappear completely: chocolate!  Horror of horrors, now here is some thing that really does threaten anybody and everybody in every way and that even the most rabid Republican capitalist can agree on!  This shows why we really do need to save the planet and it clearly shows that that slogan has unambiguous, and unquestionable meaning.

Well, take a breath (if you can!).  Should we be clearer and more precise about what, exactly, that might be, and why, that a consensus could agree on?  Can we be clearer?  Maybe nostalgia is enough, but otherwise, save the planet is more an advertising slogan for a vague, essentially ideological point of view than a clear statement of some objective goal.  There are serious issues for us, or at least our living descendants. Is it possible to have agreement an agenda or are people just too different in what they think about themselves and the world, whether you want to call that selfishness or whatever?