Tuesday, September 9, 2014

Sloppy, over-sold research: is it a new problem? Is there a solution?

In our previous posts on epistemology (e.g., here)--the question of how we know or infer things about Nature--we listed several criteria that are widely used: induction, deduction, falsifiability, and so on.  Sometimes they are invoked explicitly; other times they are used only implicitly.

A regular MT commenter pointed out a paper on which he himself is an author, showing serious flaws in an earlier paper (Fredrickson et al.) published in PNAS, a prominent journal.  Fredrickson et al. reported a study of the genetics of well-being.  The critique, also published in PNAS, points out fundamental flaws in the original paper. ("We show that not only is Fredrickson et al.’s article conceptually deficient, but more crucially, that their statistical analyses are fatally flawed, to the point that their claimed results are in fact essentially meaningless.")  We can't judge the issues ourselves, as the paper is outside our area, but the critique seems broad, comprehensive, and cogent.  So how could such a flawed paper make it into such a journal?

Our answer is that journals have always had their good and less-good papers, and there have always been scientists (and those who claim to be scientists) who have trumpeted their knowledge and/or wares.  When there are credit, jobs, fame, and so on to be had, one cannot be surprised at this.

Science has become a market, with its industry and university welfare systems, and a way for the middle class to gain societal recognition (an important middle-class bauble).  Journals proliferate, many avenues for profit blossom, and university administrators stop thinking and become bean-counters.  Solid science isn't always the first priority.

Science was never a pure quest for knowledge, but we think it is now, to a considerably greater extent than before, a business, with these various forms of material and symbolic 'profit' as the coins of the realm, and the faux aspect can be expected to grow.  There is no easy fix, because raising standards through tighter policing usually makes a field more elite, closed, and exclusive, and that is itself a form of abuse of opportunity.

Our commenter did add that he can no longer trust research sponsored by the US government, and here we would differ.  Much good work is done under government sponsorship, as well as under industry sponsorship (which can have problems of its own).  The government is a bloated, inertial bureaucracy with its armada of career-builders, and that is predictably stifling.  But the general idea is to do things right, to benefit society (not just professors, funders, or university administrators).  The problem is how to raise the standard.

The issue is not epistemological
Actually, we think the comment was misplaced in a sense, because our post was about epistemological criteria--how do we know how to design studies and make inferences?  The comment was about the way results are reported, accepted, exaggerated, and the like.  This is certainly related to inference, but rather indirectly, we'd say.  Reviewers and editors are too lax, have too many pages to fill and too many submissions to read, so that judgment is not always exercised (or, often, authors bury their weak points in a dense tangle of 'supplementary information').

That is, one can do the most careful study, following the rules, yet use bad judgment in its design, be too accepting of test results (such as statistical tests), or use inappropriate measures or tests.  And then, often in haste or desperation to get something published from one's work (an understandable pressure!), submit a paper that is not even half-baked.

What is needed is to tighten up standards, education, and training; reduce the pressure for continual streams of grant funding and publications to please Deans or reviewers; give scientists time to think; hold them accountable for their promises; and slow down.  In a phrase, reward and recognize quality more than quantity.

This is very hard to do.  Our commenter's points are very well taken, in that the journals (and news media) are now heavily populated by low- or sub-standard work whose importance is routinely and systematically exaggerated to feed the insatiable institutional maw that is contemporary science.

2 comments:

  1. Agreed. We need to find ways to evolve a "slow-science" culture. For example, getting agreement that publishing more than, say, one paper per year is overkill.

  2. Reply to John V
    Nice to hear from you, John. There have been meager attempts to tamp down the vita-stuffing pattern, such as allowing only 5 or 10 papers to be listed on CVs for promotion or NIH/NSF grant applications. I think they have not worked, for several reasons.

    We're a middle-class phenomenon, and status is part of what that means. We allow many to be part of academe (including me!) who would have had scant chance in the past... and this sort of thing is part of the price we pay.

    I see no real chance of a voluntary 'slow-research' movement unless it comes either from the top, through huge budget cuts, or from grass-roots organizing, or from tumbling numbers of potential students, who would have to be lured by proper attention being paid to their education.

    But the research industry, or the academy/industry/government complex, to paraphrase Eisenhower, works against that. It's a part of our society in the same way that official religion is part of various societies.

    Anyway, that is my feeling at present.
