Tuesday, January 7, 2020

Crossing the academic integrity line

Every so often, and perhaps now more often than ever, we see stories of university faculty members, or their research groups, fabricating data or publishing distorted or culpably misleading presentations of their work in science journals.  Presumably the same is happening in grant applications.  (I refer here to the life sciences, since I know little about other areas of academic publishing.)

I think that far more often, indeed perhaps almost routinely, the science community publishes papers that are configured to put the authors' best feet forward.  Research results can easily be manipulated within the confines of strict truthfulness.  For example, many transgenic mice are produced with some dysfunction introduced in order to understand a human disease by elucidating how, say, a gene misfunctions when mutated.  Mice can't object and humans can't be experimented on, so we need 'model' systems, and we naturally want both to discover important things through their use and to get credit for our work.

But we are only human.  Too often, what is published in a main paper are the best-foot-forward results, with caveats clearly stated...in the 'supplemental information'.  Unfortunately, the latter can be as long as the New York phone book, while the actual paper is only a few pages.  And burying uncomfortable aspects of results in length-unlimited supplemental information is a well-honed skill.  Some supplements are quite extensive, and of course inconvenient truths can be hidden there; indeed, probably only a small number of readers ever bother to check the supplements beyond looking for some particular point.

Anyone who thinks that this is not a way for best feet to be put forward is naive.  Indeed, in most cases it's not cheating, since the article itself simply cannot convey the entire experimental system.  BUT...

If supplemental information can function to allow investigators to hide their worse feet, that can verge on outright dishonesty.  For example, if I make, say, ten transgenic mice with mutant Gene X replacing the normal gene, the common result is that the animals all differ, for many reasons, including reasons unknown.  Some victims (er, mice) may show no effect at all; others may die as a side effect of the experiments (poor mice!).  No one can be blamed for this sort of thing (except, perhaps, for doing what they do to innocent mice in the first place).  But often, and I think unarguably typically, the mouse showing the clearest effects, assuming they are effects, is the one published, and it is often denoted as 'representative'.  Is this fraud?

It is strategic, but one can explain it away by saying, hopefully, that the mice with lesser effects didn't entirely receive the transgene properly, and so on: the 'representative' mouse must be the one with complete incorporation of the gene.  But if you want to see all the mice that received the transgene, even the supplemental tome may not (and I think typically does not) show the others, the ones with lesser, or no, effect.

There can be legitimate reasons for this variation, of course: the transgene may in fact not be properly inserted, or not in all cells, or the receiving mouse may have somehow detected or compensated for it.  So--shouldn't the paper reporting the experiment show all the recipient mice?  Obviously it should--but if Nature or Science gives you only 5 pages, and you want to show that you've found what the transgene does, you have every motivation to show only the 'representative' mouse in your paper.

Is this a form of scientific misinformation, or even 'fraud'?  You have to make your own judgment.  Even given legitimate reasons why some recipient mice don't show the result, there can be lots of reasons why you don't want to present them all in your paper--not least that if only one of your transgene victims shows the result, maybe your interpretation or claims are simply wrong!  Gulp--then no Nature paper!

How often do you think that, perhaps uncomfortably like advertising agencies, papers even in prestigious journals are doing something similar?  How can one tell?  If the problem is more than trivial, is there anything that can be done to stop, or at least minimize, it?

After all, the point of scientific reports is to lead others to build on them.  The building is only as strong as its foundation.
