
Tuesday, May 26, 2015

Medical research ethics

In today's NYTimes, there is an op-ed column by bioethicist Carl Elliott about biomedical ethics (or its lack) at the University of Minnesota.  It outlines what sound like many very serious ethical violations and a lack of ethics-approval (Institutional Review Board, or IRB) scrutiny for research.  IRBs don't oversee the actual research; they just review proposals.  So their job is to identify unethical aspects, such as lack of adequate informed consent, unnecessary pain or stress to animals, inadequate control of confidential information, and so on, so that the proposal can be adjusted before it goes forward.

As Elliott writes, the current iteration of IRBs, in which each institution sets up an in-house review system to approve or disapprove any research proposal that a faculty or staff member wishes to pursue, was established in the 1970s.  The problem, he writes, is that this is basically just a self-monitored, institution-specific honor system, and honor systems are voluntary, subjective, and can be subverted.  More ongoing monitoring, with teeth, is called for if abuses are to be spotted and prevented.  The commentary names many cases in the psychiatry department at Minnesota alone that seem to have been rather horrific.

But there are generalizable problems.  Over the years we have seen all sorts of projects approved, especially those involving animals (usually lab mice).  We're not in our medical school, which is on a distant campus, so we can't say anything about human-subjects research there or generally, beyond that one occasionally gets the impression that approval is pretty lax.  We were once told by a high-placed university administrator at a major medical campus (not ours), an IRB committee member there, that s/he regularly tried to persuade the IRB to approve things it was hesitant about, because the university wanted the overhead funds from the grant, which it would not get if the project were not approved.

There are community members on these boards, not just the institution's insiders, but how often, or how effectively, they stop questionable projects (given that they are not specialists, and for the other usual social-pressure reasons) is something we cannot comment on--but it should be studied carefully (perhaps it has been).

What's right to do to them?  From Wikimedia Commons

The things that are permitted to be done to animals are often of a kind that animal-rights advocates have every reason to object to.  Not only is much done that causes serious distress (e.g., genetically engineering animals to develop abnormally or to get disease, surgeries of all sorts, or intrusive monitoring of function in live animals), but much is done that is essentially trivial relative to the life and death of a sentient organism.  Should we personally have been allowed to study countless embryos to see how genes were used in patterning their teeth and tooth-cusps?  Our work was aimed at understanding basic genetic processes that lead to complex, nested patterning of many traits, of which teeth were an accessible example.  Should students be allowed to practice procedures such as euthanizing mice that would not otherwise be killed?

The issues are daunting, because at present many things we want to know (generally for selfish, human-oriented reasons) can't really be studied except in lab animals.  Human subjects may be irrelevant if the work is not about disease, and even for disease-related problems cell culture is, so far, only a partial substitute.  So how do you draw the line?  Don't we have good reason to want to 'practice' on animals before, say, costly and rare transgenic animals are used for some procedure that takes skill and experience (even if just to minimize the animal's distress)?  With faculty careers depending on research productivity, which, one must be frank, means universities' interest in getting grants with their overhead and in the consequent publications their offices can spin, how much or how often is research on humans or animals done in ways that are, really, almost wholly about our careers, not theirs?

We raise animals, often under miserable conditions, to slaughter and eat them.  Lab animals often have protected, safe conditions until we decide to end their lives, and then we do that mostly without pain or terror to them.  They would have no life at all, no experience of awareness, without our breeding them.  Where is the line to be drawn?

Similar issues apply to human subjects, even those involved in social or psychological surveys that really pose no risk except, perhaps, a possible breach of confidentiality about sensitive issues.  And medical procedures really do need to be tested to see if they work, and working on animals can take this only so far.  We may have to 'experiment' on humans in disease-related settings by exploring things we really can't promise will work, or that won't leave the test subjects worse off.

More disturbing to us is that the idea of subjects being truly 'informed' when they sign informed consent is inevitably far off the mark.  Subjects may be desperate, dependent on the investigator, or volunteering because they are good-willed and socially responsible, but they rarely understand the small print of their 'informedness', no matter how educated they are or how sincere the investigators are.  More profoundly, if the investigators actually knew all the benefits and risks, they wouldn't need to do the research.  So even they themselves aren't fully 'informed'.  That's not the same as serious or draconian malpractice, and the situation is far from clear-cut, which is in a sense why some sort of review board is needed.  But how do we make sure that it works effectively, if honor is not sufficient?

What are this chimp's proper civil 'rights'?  From the linked BBC story.

Then there are questions about the more human-like animals.  Chimps have received some protections.  They are so human-like that they have been preferred, or even required, model systems for human problems.  We personally don't know what restrictions may apply to other great apes.  But monkeys are now being brought into the where-are-the-limits question.  A good journalistic treatment of the issue of animal 'human' rights is on today's BBC website.  In some ways this seems silly, but in many ways it is absolutely something serious to think about.  And what about cloning Neanderthals (or even mammoths)?  Where is the ethical line to be drawn?

These are serious moral issues, but morals have a tendency to be rationalized, and cruelty to be euphemized.  When and where are we being too loose, and how can we decide what is right, or at least acceptable, to do as we work through our careers, hoping to leave the world, or at least humankind, better off as a result?

Tuesday, July 15, 2014

More on IRBs and restraint on science

Stories over the last couple of days lead us to interrupt our series on natural selection with this brief post on research and ethical conundrums. It's a continuation of a couple of posts from last week (here and here) about IRBs (ethical review committees), bioethics and the idea of societal restraints on what scientists do or are permitted to be funded to do.

Nobody likes restraint, but science is supported by the public, and science also has important public implications.  Nothing human is perfect or without potential downsides, including risks.  If society expects to benefit from new knowledge, it will have to pay for things that go nowhere and will also have to assume some risk.  The problem is how to identify work that shouldn't really be done, or paid for, and how to assess the risk.
Our posts last week were about the indomitable scientist whose controversial work on engineering viruses--gain-of-function experiments--to make them dangerous went on despite disagreement about the public health consequences of the work.  Is it more important to protect public health by exploring the viruses in greater detail, and thus enable the manufacture of better vaccines, or to do absolutely everything possible to prevent the public health disaster that could ensue if the viruses were to escape the lab?  That is, is it better not to make the organisms in the first place?

Again, scientists are basically no more or less honorable than others in our society, and our society isn't exactly famous for its collective unity.  To the contrary, in a selfish society like ours, when scientists have an idea and there may be money to be made (commercially or in grant funds), or potentially great public benefit, they are going to do whatever they can think up to get around rules that might stymie the objects of their desires.  That may include shading the truth, not being as clear or forthcoming as one might, or obfuscating.  Whatever works.  We're great at that--as can be seen in research papers (often buried in the massive 'Supplemental' material!).  If you think that's not how things work, what planet do you live on?

But by coincidence, just since our posts about IRBs, infectious disease, and science ethics, several significant and relevant events have come to light.  Six vials of smallpox virus were found buried in a lab freezer on the NIH campus in Bethesda, having been there since 1954 (as reported by infectious disease writer Maryn McKenna in one of her fine series on this issue), when research into vaccines against the disease was underway, and before smallpox was eradicated; the last case on Earth was seen in 1978.  The vials were sent to the Centers for Disease Control and Prevention (CDC) in Atlanta, where it was discovered that two of them contained viable virus; McKenna reports that the samples will be destroyed after they've been thoroughly analyzed.

Officially, only two labs in the world still have smallpox samples: the CDC, and a lab in Siberia.  The rationale for maintaining these stockpiles is that they would quickly enable whatever research became necessary if the disease were to reappear -- presumably through biological warfare or terrorism rather than accidental release, though the NIH discovery does now put the latter possibility on the table.

But then, in a widely reported story, the CDC found a lapsed safety culture in its own infectious disease laboratories: lax controls had potentially exposed workers to anthrax, and dangerous flu viruses had been shipped improperly.  The labs have been closed, at least temporarily, and external review has been requested.  No one was hurt--this time.

Researchers do need to send potentially dangerous samples to collaborators, and to work on them in their own labs (where, of course, employees do the actual work, not the investigators).  The problem is that if or when an accident does occur, it could be of massively awful proportions.  The problem isn't new -- indeed, McKenna links to a 2007 piece in USA Today reporting a long list of accidents in US labs handling "deadly germs".  Where is the line, and how do we draw it, to balance the self-interest of scientists, the proper concern of government for public health, the potential for personal or corporate gain, the potentially major benefit to society, and the risks due to culpable avoidance of ethical standards (such as getting around the IRBs) or to ordinary human fallibility?

Life involves risks as well as gains.  But unfortunately, risky research requires regulatory decisions by insiders, those with the knowledge but also a conflict of interest, because it involves regulating themselves.  This is an area in which the public doesn't seem to have adequate means to guard its own interests.  No obvious solution comes to mind.

Monday, July 7, 2014

IRBs: Insider control can't do what's expected. Part II: Loss of control going viral

The virus that might roar
A story published in The Independent last week reported on controversial work by virologist Dr Yoshihiro Kawaoka at the University of Wisconsin-Madison: manipulating the H1N1 strain of flu virus that was pandemic in 2009, killing as many as 500,000 people, so that it can evade the immune defenses that much of the world developed during that pandemic.  Kawaoka was also in the news several years ago, for engineering the H5N1 bird flu virus to be transmissible between mammals (that story was covered at the time by ScienceInsider).  That work was the subject of intense debate and scrutiny, and a moratorium was imposed while it underwent review.  The moratorium was lifted last year, and the work eventually cleared for publication.

According to a recent piece in the Wisconsin State Journal, during the moratorium Kawaoka began the same kind of work with the virus that killed so many people globally in 1918.  The results of that project were recently published in Cell Host & Microbe.  Kawaoka's goal is to understand the kinds of genetic changes that would let these viruses circumvent human immunity or become even more infectious or lethal.  The rationale, according to The Independent, is that this will help in the development of vaccines, should such genetic changes occur in the wild.

The problem, as many see it, is that there is no guarantee that these virulent strains won't escape from the lab and do great harm.  Kawaoka says this won't happen, but other dangerous experimental organisms have escaped before, and for new technologies like this such risk is always a concern.  Indeed, it is a concern for old technologies too: the debate about whether to keep smallpox virus in labs at all has been going on for decades.  Kawaoka's work was approved by the university's institutional review board, although, according to The Independent, at least one member of the board was not willing to approve his current project.

Does the fact that Kawaoka is a star on the faculty of the University of Wisconsin, where he has been treated extremely well, influence the IRB?  He is well known in the field of influenza research and has been involved in much recent work on emerging viruses, and no doubt his track record should count when his work is evaluated.  But it's also possible, as always when power may be an issue, that Kawaoka's proposals have an easier time passing review than, say, a new researcher's would.

But what is the IRB's role here?  Is it the board's job to decide what kind of risk society should be subjected to when academics do their work?  Or is that the job of an inter-institutional, or governmental agency, such as the U.S. National Science Advisory Board for Biosecurity (NSABB), which reviewed, and approved, Kawaoka's earlier work?

The risk of an inadvertent epidemic or even pandemic from this research may be small to very slight, but the consequences of such an event would be so huge as to raise questions about the risk-benefit balance.  The importance of the discovery, should the research be successful, could be very great as well.  So there is no easy answer.

But that the University of Wisconsin allowed one of its very well-heeled faculty members to develop a modified pathogenic virus to which humans would no longer have resistance sounds like something out of Dr Strangelove.  How often is this sort of thing being done in a university near you--with or without its noble IRB being aware of it?

As we noted above, the previous work that got Kawaoka and Dutch investigators into hot water involved tinkering with the H5N1 flu virus to see what it would take for the virus to spread between mammals.  Their idea was essentially to test genomic modifications of the virus in ferrets, which in many ways are immunologically similar to humans.  The work was allowed to proceed after review, but in fact how can anyone guarantee that an accident won't happen?  We don't happen to know the conditions under which the moratorium was lifted, but no matter how extensive the review, or how cautious the scientists promised to be, no one can be absolutely certain that an accidental release of these viruses won't occur.  It reminds me of the time a little boy was getting on his bike to ride down the hill in front of our house.  His father reminded him to put on his helmet before he went, in case he fell off the bike.  "But Dad," he protested, "I'm not going to fall off!"

Similar concerns about recombinant DNA were raised a generation ago, and over time adequate protections were worked out; no disaster has occurred that we know of.  But recombinant DNA doesn't pose the kinds of dangers that virulent viruses do.  And we have seen with other things, like stem cells, that scientists will do their best to find ways to do what they want to do.  Scientists and the private sector are both anxious to find new cures and also, one must acknowledge, looking for the major profits to be made.  The stem cell issue is more complicated because the objections were largely religious.  Scientists may sneer at such things as ignorance standing in the way of progress, but religious people are citizens and taxpayers, and if they are in the majority and aren't in favor of such a project, then in a democracy perhaps that should rule, whether frustrated scientists like it or not.

And there are other issues.  If a rock-star scientist threatens to leave the institution and go work elsewhere, this can be an incentive for an institution that treats faculty members like celebrities--particularly if they bring in big grant money--to compromise standards. 

And should a properly independent system, with zero vested interests, be allowed or instructed to impose research bans, for some number of years appropriate to the offense, on investigators shown to have misled the IRB, or to have done things that were never approved or were explicitly disapproved?

It is, as in most similar kinds of situations, difficult to see how policy should be formed and implemented. After all, even amoral scientists are still scientists and citizens, and if they think something should be done, they have their votes, too.  And major public good might often also entail risks. 

The IRBs were started in the wake of abuses by Nazi and other scientists, including some of the most respected pillars of their societies, and including scientists in our own country, as we mentioned last week.  That history showed that scientists can't automatically be trusted not to do harm, intentionally or even inadvertently.  But many of us feel that the tenor of the committees has drifted from that proper gate-keeping job toward a primary function of protecting the institution against lawsuits, part of a general trend in universities that is stifling in many ways, as well as costly in time and resources.

Our mistaken mixing of messages
Making decisions is not easy, but there should be a balance of power.  However, in Thursday's post on IRBs, we mixed two aspects of bioethics.  One was the treatment of research subjects, human or otherwise.  The other was priorities for spending society's resources (both are involved in our discussion here as well).  The issues overlap somewhat, but we probably should have kept them separate.  IRBs are not mandated to deal with research priorities or broad societal concerns, though they do have to judge whether a project violates ethical standards, and whether doing some procedure on mice or other animals is warranted for the stated purpose of a project.

Peer review and the policies of funders are what deal with research priorities.  My view, as stated in Part I and elsewhere, is that our priorities too often depend on vested interests.  That is because agencies like NIH ask scientists what the next research priority should be.  Indeed, as I have seen directly several times, a body like the National Academy of Sciences, entrusted with advising the government, can be paid by an NIH agency to hold a meeting about priorities, which the agency's funded clients and agency administrators attend.  This is, essentially, insider trading, and the NAS should not accept such contracts.  However, how to set priorities is not an easy thing to decide: asking scientists their view is begging for self-interest to be at play, yet scientists know better than the public what the issues are.

In this sense, humans or animals are involved in projects that subject them to conditions that are allowed because of the social politics of the funding and academic career apparatus.  Are we out of proper alignment with what most would agree are appropriate societal priorities?  The payoff in actual public or scientific good is often, I think, far below what is promised.  This is of course a value judgment, but so are all IRB decisions and policies.

In any case, the Wisconsin issue that triggered these comments is more closely related to IRBs and their degree of real control over research ethics than to whether funds should be spent on this type of project rather than some other.  Here, in fact, the story as written suggests serious abuse of what IRBs should rightly be policing.  One can argue that the knowledge being sought would properly have very high societal priority (because it deals with dangerous infectious disease), but that's a separate question.

More generally, the funding-priority issue may often be even more important than the safer, local IRB protections.  Billions of dollars go to feed the established research system, making it very self-aggrandizing and far less innovative than it might be if funding commitments, mega-longterm projects, and the like were not so entrenched.  Instead of spending mega-bucks on more Big Data surveys, we might focus funding on problems well-posed enough to be soluble.  This is again a societal issue about how resources are used, or captured, which does, of course, go beyond the local IRB concerns we were mainly intending to comment on.

So, while the ethical issues are not entirely separate, it confuses things to mix them as I did in our previous post.

Thursday, July 3, 2014

IRBs: Insider control can't do what's expected. Part I: some history

We are supposedly able to sleep peacefully in the security of our homes because Institutional Review Boards (IRBs) are on guard to protect us from harm at the hands of universities' Dr Frankensteins.  But the system was built by the potential Frankensteins themselves, so any such comfort goes the way of any belief that people can police their own ethics, especially when money is involved.  This is shown by a recent revelation in the news (the short version: a scientist creates a flu strain that the human immune system can't fight, with IRB approval), which we'll be seeing more about in the near future.  So get your face mask on and head under the covers if you want to sleep in peaceful bliss.

First, however, a brief history of IRBs

What protects us from mad scientists?
The idea of IRBs arose largely not from Frankenstein but from abuses, especially courtesy of the Nazis. Absolutely horrid crimes by almost any standard were committed in the name of research.  It wasn't just the Germans.  The Nuremberg Code for research, which stipulates essentially that it must involve voluntary consent, do no harm, have some benefit, and so on, was one result.

But abuses weren't patented by the Nazis.  Anatomists at least as far back as Galen did vivisection on mammals, and perhaps on humans.  People still object to vivisection--animal research--and if you knew what is allowed, you might join them, even though the rationale is, as it has been since ancient times, that we make the animals suffer ultimately to relieve human disease.  Of course, we claim the animals aren't suffering, on various rationales (they aren't sentient, aren't conscious, aren't really suffering, .....).

The abuses before WWII didn't stop what happened afterwards.  The well-documented Tuskegee study of southern black men affected by syphilis was, once its cruelty was revealed, another motivation for current IRBs.  A similar study in Guatemala, and some shady research done in Africa precisely because it couldn't be done here, all show the pressure needed to keep scientists under control.  Formal requirements for each research institution to form an IRB to review and approve any research done there have led to very widely applied general standards, in principle consistent with the Nuremberg Code.  More recently, up-to-date issues, like confidentiality in the computer-data era, have been added.

The idea is that the IRB will prevent research that violates a stated set of principles from being done in their facilities or by their employees.  Over the past few decades, everyone entering the research system has become aware (indeed, via formal training, has been made to become aware) of these rules and standards.  Every proposal must show how it adheres to them.

So, the rationale behind IRBs is unquestionably good, and much that is positive has resulted.  In broadest terms, we each know that we must pay attention to the ethical criteria for conducting research.  Of course, we are humans and the reality may not match the ideal.

From ideal to institutionalization
IRBs are committees composed of a panel of investigators from the institution (though even there, one can't review one's own proposals), plus administrators working for the institution, and at least one 'community' member.  The latter may be a minister, a nurse, or some other outsider.

The idea is that each institution knows its own circumstances best, and that having its own independent IRB is better than some meddling government behemoth like, say, NIH, making decisions from the outside (when NIH is, for example, the funder who will decide what gets funded--an obvious conflict of interest).  So those in, say, Wisconsin know what's ethical for cheesedom, while Alabamians and San Franciscans have their own high ethical sense of self-restraint.

But this is a system run by humans, and over the decades it has become something of a System.  For example, perhaps you can imagine how a non-academic member from the community, even a minister, might be cajoled or cowed by the huge majority of insiders, the often knowingly obfuscating technological thicket of proposals, and so on.  As with any peer review system, IRB members may or may not be anonymous, but within an institution, even if they are, their identities can certainly be discovered.  They know, even if it's never said out loud, that if they scotch a proposal from someone on their campus, that person may sit on the IRB in the future and could return the favor.  This can obviously be corrupting in itself, even if the IRB members take the care required to read each proposal carefully, and even if everything proposed is clearly stated.  Sometimes they do, but being on the board is a largely thankless task, and how often do they not take that care?

It is not hard to see how IRBs will pay close attention to the details and insist on this or that tweak of a proposed protocol, what I call safe ethics.  They certainly do impose this sort of ethics--ethics that don't really stand in the way of what their faculty want to do.  But they may be reluctant to simply follow Nancy Reagan and just say 'no' to a major proposal.

IRB members from the administration are bureaucrats whose first instinct is to protect their institution (and, perhaps, their own jobs?).  They want to avoid public scandal and obvious abuse, but every proposal that is rejected is a proposal that can't be funded, won't bring in overhead money, and won't generate publications for the institution to boast about.  I have personally known of a case in a major university medical school whose administrator-member unashamedly (though privately) acknowledged discouraging the IRB from rejecting proposals because the institution wanted the overhead.  You can guess whether research that ordinary people, people without a vested interest, might consider objectionable--such as unnecessarily harsh experiments on hapless mice or other animals, or studies that could jeopardize human confidentiality with realistically scant likelihood of discovering anything really important--is going to get a pass.  Maybe the investigator will be asked for some minor revisions.  But a lot of dicey research gets approved.

There are professional bioethicists in most large research-based universities including medical schools. They may have PhDs in ethics per se, and can be very good and perceptive people (I've trained some myself).  They write compelling, widely seen papers on their subject.  But in most cases they live directly or indirectly on grant funds.  They may get 5% or so of their salary on a grant as the project's ethicist.  Their careers, especially in medical schools, depend on bringing in external funds.  This is almost automatically corrupting.  Do you think it affords any sort of actual protection of research subjects for more than some rather formal issues like guaranteeing anonymity that usually few would object to?  How likely is it that a project's pet ethicist can say simply "No, this is wrong and you can't do it!"?  Surely it does sometimes happen, but since ethicists must make their own careers by being part of research projects, this really is an obvious case of foxes guarding hen-houses.

The National Human Genome Research Institute (NHGRI) at NIH has had some fraction, we think 3%, of its research budget mandated to cover ethics related to genomic studies.  Decades of experience show that this should be renamed 'safe ethics'.  NIH does protect (where possible) against plagiarism, unethical revealing of subject identities, and that sort of thing.  But not against whole enterprises it wants to fund that might be very wasteful (e.g., where the funds would buy much more actual health--the 'H' in NIH--if spent on something other than, say, another mega-genomics study).  This is a truly and deeply ethical issue that cuts to the bone of vested interests, even in the case of the NHGRI.  If such enterprises have ever been prohibited, we don't know of it, and they surely are the exception rather than the rule.  Even research that is harmless in the human-rights sense, if very costly, is an ethical affront to the competing claims of more important things society could do with its funds.  But reports from the NIH ELSI (ethics) meetings have always been entirely consistent with the view I'm laying out here.

The truth is that in science, as in other areas of human affairs, money talks, and mutual or reciprocal interests lead to a system predominated by insider trading.  The spending of untold millions on countless studies of humans or other animals, whose serious payoff to the society supporting them, if any, is no closer than light-years away, is, in my opinion, offensive.  Peer review is not totally useless by any means, and doesn't always fund the insiders, but there are certainly major interlocking conflicts of interest in science.

Scientists are experts at hiding behind complex technical details and rhetoric, and we are as self-interested as any other group of humans.  We have our Frankensteins, who are amorally driven to study whatever interests them, rationalizing all the way that they're just innocent babes following Nature's trail, and that if what they do might be harmful (to humans; forget what it might do to mice, who can't vote), it's up to the political system, not scientists, to prevent that.  It's an age-old argument.

One must admit that having bureaucrats and real outsiders make decisions about what sort of research should be allowed has its own problems.  Bureaucrats have careers, and often live by protecting their bailiwicks and the thicket of rules by which they wield power.  There aren't any easy answers.  And not all scientists are Frankensteins by any means; most truly hope to do good.  But the motivation to do whatever one wants, even if, or perhaps especially if, it is edgy and has shock value, is often coin of the realm today.

Tomorrow, in Part II, we'll take a look at a most recent example, the influenza research mentioned at the beginning, of what is at worst an abject failure, and at best a questionable lapse, of institutional oversight of its own IRB.

Monday, January 3, 2011

Does the hook hurt? What about the experiment?

Do fish feel pain?  Unequivocally yes, according to Victoria Braithwaite, a fish biologist here at Penn State.  She described how she knows this on The Forum, a radio program on the BBC World Service, on 12/18.  It's very odd that this wasn't understood until her work -- a much-noticed paper by Braithwaite and co-authors was published in 2003 (but has been selectively forgotten or ignored by many) -- but very nice that she's cleared it up.  Amazingly, people have long labored under the belief that fish are insensate creatures.

The reasoning behind such views is difficult to understand.  Fish certainly avoid danger, but they don't have facial expressions we can read, and they are cold, slimy, primitive creatures (well, relative to us humans, we like to think!).  They reproduce like, well, like fish, so with such numbers what's the advantage of pain receptors?  They have primitive brains (relative to ours).


Or is it more than just the arrogance that goes with the angler's realization that we're in charge?  Not so long ago, life scientists lobbied one another to write to Congress to oppose legislation that might regulate the kinds of experiments that could be done on fish.  Our noble peers in biology wanted to be left alone, not constrained, after all!  It was even said that pain in lab animals was good for them--yes, it was said!--because it made them resilient to the conditions under which they lived.  We're not enamored of bureaucrats who feel they can meddle in our daily research life, but this wholly self-interested campaign-of-convenience by scientists was gross.

Anyway, the idea of piscine pain experience would be more convincing if fish had what we would at least recognize as neural pain receptors.  In fact, pain wiring was long ago worked out in birds and mammals, perhaps, according to Braithwaite, because it's easier to feel empathy with these creatures.  But even the question of whether fish have the neural wiring that transmits the stimulus to the brain wasn't answered until Braithwaite et al.'s work.

And yes, fish do have them!  (Sorry, anglers and zebrafish torturers!)  Pain information is transmitted in two stages, by two different types of nerve fibers, the A-delta and C fibers.  A-delta fibers transmit information about damaging or noxious stimuli nearly instantaneously, while C fibers transmit it more slowly.  Fish have both of these nerve types, though in a different ratio from birds and mammals (fish have more of the C type, relative to other vertebrates).

But, says Braithwaite, "finding the fibers themselves doesn't necessarily tell us that the second stage of pain is going on"; that is, whether, when the signal passes up the spinal cord to the brain, the fish becomes aware that it has been damaged.  To test this, Braithwaite et al. gave fish a painful stimulus, injecting small amounts of either vinegar or bee venom into the snouts of rainbow trout.  They injected saline solution into the snouts of a control group of fish, and found that the two groups had very different responses.  The respiration rate of those injected with the painful stimulus quickly accelerated and stayed high, and these fish went off their feed; the control group responded to being handled and injected with increased respiration and by not eating, but their respiration quickly came back down and their feeding behavior returned to normal.  Braithwaite et al. interpreted this as clear evidence that fish feel pain.

But does it matter?  Is it enough to modify their behavior?  That is, do the fish respond to pain?  Fish have a low tolerance for novelty, so the researchers put Lego objects into the tanks, gave the fish the painful injections, and observed their behavior.  Would they avoid the Legos, as normal?  In fact, when treated with the painful stimulus, they approached the novel object, which the researchers interpreted as showing that the fish were distracted from their normal behavior by pain.  And they were able to reverse this with pain relief: they gave the fish some morphine and observed that they again avoided the Legos, as normal.  (Are you nauseated yet, given that they knew by now that fish feel pain?)

Braithwaite says that responding to something that is damaging is important evolutionarily, and only vertebrates can have the experience of learning from pain, and learning to avoid it.  But, do fish suffer when they feel pain?  Is that an odd question?  Isn't perceiving pain by definition suffering? Is it more than self-centered for us to couch this in terms that relate to our, human, kind of experience?  Braithwaite explained that they've found an area in the fish brain that's devoted to processing emotional information, as in other vertebrate brains.  It's more rudimentary in fish, but if it's damaged or lesioned, the fish's ability to respond to emotional information is impaired, she said.

And further, do fish have consciousness?  Ascribing consciousness to a non-human animal is a tricky area, Braithwaite said.  She takes her model of consciousness from Gerald Edelman, who says that consciousness is modular.  She didn't expect to find all the modules we have in fish, but says there is evidence for two of them: 'primary consciousness' and 'phenomenal consciousness'.

Primary consciousness is the ability to create a mental representation of something, and Braithwaite says that fish can do that: they can do spatial mapping.  Phenomenal consciousness is how we experience and understand the world.  Braithwaite's view is that the fact that fish can learn from experience is evidence that they have phenomenal consciousness, too.  And the evidence of fish consciousness is, to Braithwaite, both evidence that fish can suffer and evidence that they are deserving of welfare rights equivalent to those of birds and mammals.  Sport fishermen, especially those who enjoy catch-and-release fishing, need to consider that fish feel pain; so do fish farmers.

This is all well and good, and it's good to see the old conceit that fish don't feel pain put to rest, but we think there are also lessons here about academic hubris that should, but probably won't, be learned.  The anthropocentric conceit isn't new, but we're supposed to be scientists, dealing with the real world, not the one we wish were out there!  We have bestialized the world, except for ourselves.  In the West, at least, this might be a consequence of long-standing (convenient) biblical views that God, in His wisdom and compassion, gave us dominion over the fish of the sea, and over the fowl of the air, and over every living thing that moveth upon the earth (does that include bacteria and hookworms?).

Whatever the source, we want to be able to have our will with these beasts, and will not welcome the latest piscine bulletin.  It's bad enough that after having our experimental will with chimps we have to let them live out their natural lives (but not monkeys), and that we are not supposed to subject lab mice to torture (as we and our IRBs define it, which turns out to be pretty lenient).  But fish?!