Wednesday, January 9, 2013

#overlyhonestmethods or, Telling the Truth in Science

Y'all have seen the #overlyhonestmethods thread on Twitter, no?  The hashtag tells the story -- scientists, and surely some ex-scientists, revealing how and why they do what they do.  E.g.:

The instrument was fully calibrated prior to use in this study. Several decades prior to use in this study. 

THIS. RT  Blood samples were spun at 1500rpm because the centrifuge made a scary noise at higher speeds.


If you haven't seen it yet, and you have any interest in science -- which you do because you're here -- you really should go check it out.  The tweets just keep on rolling, and they are hilarious.  In a very edgy way.  We're tempted to say they are getting even more edgy, but that's unfair because they were edgy to start with.  (If you don't want to hop all the way over to Twitter, here's a sample.)

We've read enough of these now to be able to detect some recurring themes.  Of course, the major theme is that science is rarely done the way it oughta be (see above re. the centrifuge speed, and others like "we used an n of 19 because one Eppendorf tube rolled under the freezer," "incubation time was 3 hours because that's how long I took for lunch," etc.).

But additionally:
  • a lot of students and post-docs really don't like their supervisors, or at least distrust their knowledge of lab work, since the supervisors have spent years or decades writing grants instead of working at the bench.  
  • a lot of experiments are very sloppily done.  
    • and/or a lot of people have no clue why they use the methods they do,  
    • or, maybe it's that a lot of methods don't need to be strictly adhered to,
    • or that we often have no clue whether they need to be or not.
  • a lot of papers are cynically written.
  • much (most?) of science is done with both eyes on the next grant.
  • people cite a lot of papers they haven't read, not only when they are behind a paywall.
  • a lot of kissing ass is going on in science. 
    • very cynically. 
  • undergrads get blamed for everything.  Probably correctly.  When it's not the post-doc's fault.
  • no one likes reviewer #2.  
  • a lot of scientists are very funny.  
But beyond all this, the thread is a beautiful example of the anthropological concepts of emics and etics, emics being the way 'natives' explain a behavior or idea and etics being the way an outside observer explains the same thing.  Except that here the scientists are playing both roles.  They can readily say the right thing -- those tens of thousands of papers that get published every year -- but they can also step outside the culture of science and deconstruct why they said or did what they did.

What is truth?  What can be reasonably expected?
The truth is often unclear, and no complex operation, like any science experiment or study worthy of the name, is entirely straightforward.  Generally, if it were that easy, we'd already know the answer and wouldn't need to do the study.

Yes, and we're all fallible.  We can forget steps of a procedure, misremember what we did, make errors in our notes, and inadvertently or even subconsciously color what we say or how we interpret data.  We can even yield to the temptation to take short-cuts and make alterations to the data, like throwing out 'outlier' observations that just don't seem right (relative to our preconceptions), or reporting only the segment of our results (the one transgenic mouse out of many) that 'best' represents the results as we present them (that is, that best fits our theoretical expectations) -- and #overlyhonestmethods has a lot of that.

These are sins, because brutal, objective honesty with precision is the goal.  But given human failings, if we're really right about the problem at hand, these kinds of distortions from purity won't change that -- the truth will out, so to speak.

But what the twitterati here are discussing is something very different indeed!

"Oh, what a tangled web we weave when first we practise to deceive!"  (Walter Scott)
What this set of tweets so accurately reflects is the current culture in science, one that is explicitly acknowledged (though often not said publicly, except between colleagues or between faculty and their graduate students).  We're aware of it, it is systematic if not systemic, and it is driven, clearly, by the rush to publish, the strongly felt need to hype our work to build our careers, and the desperate and relentless struggle for grants, jobs, and the like.

Grantsmanship, or how to strategize your ideas to get funded, is often at its core, and sometimes openly, about deceiving reviewers and funders.  A very common, if not typical, example is the smug grin with which investigators admit that they propose funding for work that the proposal describes as yet to be done but that has in fact already been done.  Or they present work selectively, or cite reviewers' work to butter them up, and so on.  This is all driven by the 'business model', and the rush, rush, rush to publish and spend.

Well, you might say, this is just the game we play.  We all know we're doing it.  Human society is always laden with this sort of organized dishonesty.  It's what you have to do to compete for resources, and after all, what is life about if not that?  If we all do it, the system internally adjusts accordingly.

This may be the hard truth, but a worse offense is to acquiesce to it.  Because then we end up in Scottsville: we weave tangled webs of dissembling and disinformation, of untruths known and unknown, of habits of deception that become essentially routine.  And here the idea of science as self-correcting, and hence immune to such tangled webs, becomes a myth.

That's because untangling this kind of web is often impossible.  Studies are too large and too expensive to be replicated.  Or the sample or working material is unique: if I mis-report a study of a particular disease in, say, Bongabonga, you can't go to Bongabonga and repeat it.  Samples from elsewhere are only partly relevant.  Likewise with intricate experimental systems involving high technology and very particular setups.  The cost of the huge studies we knowingly undertake these days prohibits such checking up, even if it were technically possible.  And the many intricate details, often only sloppily reported in reams of 'Supplemental Information' on the journal's web page, are often challenging to understand if not intentionally obscure.

And more than just checking up, we have to build on others' work because we simply can't do everything over.  So what we build on needs to be reliable.  That's why human fallibility and vanity are no joking matter, despite #overlyhonestmethods, given how true it all actually is.  Huge investments are made, and we have a very unclear understanding of the degree to which this sort of cheating is meaningful and felonious, rather than simply the misdemeanors of imperfect beings.  Cheating is, in a sense, rewarded if we just smile and look the other way.  Even if we might say it's not worth worrying about because most science, like most anything else, is rather pedestrian and headed for obscurity even if it were as pure as Ivory soap.

But when this is how we train our students, we sow the seeds of a problematic future.  Especially in an era when funds are limited, any dishonor in the system that can be avoided should be, and we have to hope that the systematic misdemeanors we smile about will not become so accepted that we cannot untangle our weaving.  That could lead to science becoming just another shaky belief system, on which so much of our lives relies.  Hasn't human history had enough of those already?
