Thursday, July 3, 2014

IRBs: Insider control can't do what's expected. Part I: some history

We are supposedly able to sleep peacefully in the security of our homes because Institutional Review Boards (IRBs) are on guard to protect us from harm at the hands of universities' Dr Frankensteins.  But the system was built by the potential Frankensteins themselves, so any such comfort goes the way of any belief that people can police their own ethics, especially when money is involved.  This is shown by a recent revelation in the news (the short version: a scientist creates a flu strain that the human immune system can't fight, with IRB approval), which we'll be seeing more about in the near future.  So get your face mask on and head under the covers if you want to sleep in peaceful bliss.

First, however, a brief history of IRBs

What protects us from mad scientists?
The idea of IRBs arose not so much from Frankenstein as from real abuses, especially those committed courtesy of the Nazis. Crimes that were absolutely horrid by almost any standard were committed in the name of research, and it wasn't just the Germans.  The Nuremberg Code for research, which stipulates essentially that research must involve voluntary consent, do no harm, have some benefit, and so on, was one result.

But abuses weren't patented by the Nazis.  Anatomists at least as far back as Galen practiced vivisection, on mammals and perhaps on humans.  People still object to vivisection--animal research--and if you knew what is allowed you might join them, even though the rationale is, as it has been since ancient times, that we make the animals suffer ultimately to relieve human disease.  Of course, we claim the animals aren't suffering, on various rationales (they aren't sentient, aren't conscious, aren't really suffering, ...).

The abuses before WWII didn't stop what happened afterwards. The well-documented Tuskegee study of southern black men affected by syphilis was, once its cruelty was revealed, another motivation for current IRBs.  A similar study in Guatemala, and shady research done in Africa because it can't be done here, all show the pressure that is needed to keep scientists under control.  Formal requirements for each research institution to form an IRB to review and approve any research done there have led to very widely applied general standards, in principle consistent with the Nuremberg Code.  More recently, up-to-date issues, like confidentiality in the computer-data era, have been added.

The idea is that the IRB will prevent research that violates a stated set of principles from being done in their facilities or by their employees.  Over the past few decades, everyone entering the research system has become aware (indeed, via formal training, has been made to become aware) of these rules and standards.  Every proposal must show how it adheres to them.

So, the rationale behind IRBs is unquestionably good, and much that is positive has resulted.  In broadest terms, we each know that we must pay attention to the ethical criteria for conducting research.  Of course, we are humans and the reality may not match the ideal.

From ideal to institutionalization
IRBs are committees composed of investigators from the institution (though even there one can't review one's own proposals), plus administrators working for the institution, and at least one 'community' member.  The latter may be a minister, nurse, or some other outsider.

The idea is that each institution knows its own circumstances best, and that having its own independent IRB is better than decisions imposed from outside by some meddling government behemoth like, say, NIH (which, as the funder deciding what gets funded, would have an obvious conflict of interest).  So those in, say, Wisconsin know what's ethical for cheesedom, while Alabamians and San Franciscans have their own high ethical sense of self-restraint.

But this is a system run by humans, and over the decades it has become something of a System.  For example, perhaps you can imagine how a non-academic member from the community, even a minister, might be cajoled or cowed by the huge majority of insiders, the often knowingly obfuscating technical thicket of proposals, and so on.  As with any peer review system, IRB members may or may not be anonymous, but within an institution, even if they are, their identity can certainly be discovered.  They know, even if it's never said out loud, that if they scotch a proposal from someone on their campus, that person may be on the IRB in the future and could return the favor.  This can obviously be corrupting in itself, even if the IRB members take the care required to read each proposal carefully, and even if everything proposed is clearly stated.  Sometimes they do take that care, but being on the board is a largely thankless task, and how often do they not?

It is not hard to see how IRBs will pay close attention to the details and insist on this or that tweak of a proposed protocol--what I call 'safe ethics'.  They certainly do impose this sort of ethics: ethics that don't really stand in the way of what their faculty want to do.  But they may be reluctant to simply follow Nancy Reagan and just say 'no' to a major proposal.

IRB members from the administration are bureaucrats whose first instinct is to protect their institution (and, perhaps, their own jobs?).  They want to avoid public scandal and obvious abuse, but every proposal that is rejected is a proposal that can't be funded, won't bring in overhead money, and won't generate publications for the institution to boast about.  I have personally known of a case at a major university medical school where the administrator-member unashamedly (though privately) acknowledged discouraging the IRB from rejecting proposals because the institution wanted the overhead. You can guess whether research that ordinary people, people without a vested interest, might consider objectionable--such as unnecessarily harsh experiments on hapless mice or other animals, or studies that could jeopardize human confidentiality while having realistically scant likelihood of discovering anything really important--is going to get a pass.  Maybe the investigator will be asked for some minor revisions.  But a lot of dicey research gets approved.

There are professional bioethicists in most large research-based universities, including medical schools. They may have PhDs in ethics per se, and can be very good and perceptive people (I've trained some myself).  They write compelling, widely seen papers on their subject.  But in most cases they live directly or indirectly on grant funds.  They may get 5% or so of their salary from a grant as the project's ethicist.  Their careers, especially in medical schools, depend on bringing in external funds.  This is almost automatically corrupting.  Do you think it affords any actual protection of research subjects beyond rather formal issues, like guaranteeing anonymity, that few would object to anyway?  How likely is it that a project's pet ethicist can simply say, "No, this is wrong and you can't do it!"?  Surely it does sometimes happen, but since ethicists must make their own careers by being part of research projects, this really is an obvious case of foxes guarding hen-houses.

The National Human Genome Research Institute (NHGRI) at NIH has had some fraction, we think 3%, of its research budget mandated to cover ethics related to genomic studies.  Decades of experience show that this should be re-named 'safe ethics'.  NIH does protect (where possible) against plagiarism, unethical revealing of subject identities, and that sort of thing.  But not against whole enterprises it wants to fund that might be very wasteful (e.g., when the funds could buy much more actual health--the 'H' in NIH--than, say, another mega-genomics study).  This is a truly and deeply ethical issue that cuts to the bone of vested interests, even in this case of the NHGRI.  If such things have ever been prohibited, we don't know of them, and they surely are the exception rather than the rule.  Even harmless research in the human rights sense, that is very costly, is an ethical affront to the competing interests of even more important things society can do with its funds.  But reports from the NIH ELSI (ethics) meetings have always been entirely consistent with the view I'm laying out here.

The truth is that in science, as in other areas of human affairs, money talks, and mutual or reciprocal interests lead to a system dominated by insider trading.  The untold millions being spent on countless studies of humans or other animals, whose serious payoff to the society supporting them, if any, is no closer than light-years away, are, in my opinion, offensive.  Peer review is not totally useless by any means, and doesn't always fund the insiders, but there are certainly major interlocking conflicts of interest in science.

Scientists are experts at hiding behind complex technical details and rhetoric, and we are as self-interested as any other group of humans.  We have our Frankensteins, amorally driven to study whatever interests them, rationalizing all the way that they're innocent babes just following Nature's trail, and that if what they do might be harmful (to humans; forget what it might do to mice, who can't vote), it's up to the political system, not scientists, to prevent it.  It's an age-old argument.

One must admit that having bureaucrats and real outsiders make decisions about what sort of research should be allowed has its own problems.  Bureaucrats have careers, and often live by protecting their bailiwicks and the thicket of rules by which they wield power.  There aren't any easy answers.  And not all scientists are Frankensteins by any means; most truly hope to do good.  But the motivation to do whatever one wants, even if, or perhaps especially if, it is edgy and has shock value, is often the coin of the realm today.

Tomorrow, in Part II, we'll take a look at a very recent example, the influenza research mentioned at the beginning: at worst an abject failure, and at best a questionable lack of institutional oversight of its own IRB.

4 comments:

  1. As a clarification, 5% of the annual NHGRI budget is devoted to ELSI research (See http://www.genome.gov/11006943).

    Could you please clarify for me what you mean by this: "Even harmless research in the human rights sense, that is very costly, is an ethical affront to the competing interests of even more important things society can do with its funds." I'm not following your argument... Perhaps I am too impatient and this is made clearer in Part II.

    1. Thanks. I couldn't remember if it was 3 or 5%.

      We can try to address this more clearly in Part II. The point is that, to me, a deep ethical issue has to do with priorities, not just abuse of subjects or animals. There are traits and problems (speaking of genetics) that are clear, whose basis we know, and that devastate people's lives. Yet we continue to pour massive funds into what I would say is low- to very-low-payoff work, like the plethora of 'omics' fads that are now in play. It isn't that there is nothing to learn; it is that we know enough to see that most of this will go hardly anywhere.

      This is, of course, a personal judgment about priorities and their ethics for publicly funded science, which is different from issues about human subjects' rights and so on. Maybe it's too separate from the usual IRB issues to have been included in the same posts, even though I think it is just as deeply important in an ethical sense.

  2. Thanks Ken. You know I'm certainly in agreement that budget priorities are out of whack (and you know I challenge the bioethical hegemony regularly). I was getting thrown off by your use of "human rights" (probably because I've been focused on that in the narrow legal sense lately). Do you recall the figure that Erich Schienke made a few years back on intrinsic, extrinsic, and procedural ethics (the ethical dimensions of scientific research or EDSR model)? I don't have the specific citation off the top of my head, but I think it helps tease out some of the issues you're trying to highlight.

    1. I never saw Erich's figure, so if you can find it I'd like to see it. On Monday I'll try to clarify things (at least as I see them), and will say that I think I mixed the more focused mission of IRBs with the broader bioethical issues and how they are (or aren't) under control.
