In the sad way that science has become ever more bourgeois, Nature, itself now largely a checkout-counter mag, has a feature editorial on plagiarism (p 435, 28 March 2019). The author, Debora Weber-Wulff, seems to specialize in sleuthing academic verbal cheaters, as if it were a new profession in itself. She goes over the various software developed to detect plagiarism in professional and student papers, and evaluates them and the detection problem itself. Commercial, profiteering, competing software--more than one--to detect academic cheaters!
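For what it's worth, the core trick behind most of these detectors is not mysterious: they measure how many overlapping word sequences two documents share. Here is a toy sketch of that idea (word n-gram Jaccard similarity) in Python; it illustrates the principle only, not any commercial tool's actual method, and the sample sentences are invented for the example:

```python
# Toy sketch of the basic idea behind text-similarity detectors:
# compare the overlap of word n-grams between two documents.
def ngrams(text, n=3):
    """Return the set of word n-grams (as tuples) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of the two texts' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

original = "the quick brown fox jumps over the lazy dog near the river"
copied   = "the quick brown fox jumps over the lazy dog near the bank"
fresh    = "entirely different sentences share no phrasing at all here"

print(similarity(original, copied))  # high overlap: near-copy
print(similarity(original, fresh))   # no shared trigrams
```

Real tools add normalization, stemming, paraphrase handling, and huge reference corpora, but the cat-and-mouse game is largely fought over variations of this comparison.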
The commentary mentions strategies that authors use to get multiple pubs on the same subject, and even seems to suggest that publishing an article from or part of your doctoral dissertation is a kind of plagiarism (who in recent memory has searched for or found, much less read, doctoral theses after the defense?).
And now, the most bourgeois thing of all, in my opinion, is that there are conferences on academic integrity, and even they have their own plagiarism, as the author relates! And as part of the new class system even in esoteric academia, she notes that those who were detected were "demoted" to mere posters. Surely I've misread this commentary. Surely!
Somehow, this seems just another routine story about academic life. Since it's basically gossipy, it takes a place of honor in Nature. It's a kind of 'how-it's-done' review, as if cheating were as common as, say, making espresso. Can you imagine that such a thing would largely have been unthought of not many decades ago? It's true. I was there (and I didn't plagiarize!).
There must always have been some plagiarism, since there are always rogues. There have long been rewards for publishing much the same paper in several different places, to reach different audiences in the days before web-searching. But it was likely much easier to detect real plagiarism, which was doubtless far less prevalent in the old days. At least that was my experience in my particular old days. There was no need for competing companies to profiteer by selling plagiarism-detecting programs! That almost institutionalizes cheating as a cat-and-mouse part of modern careerism, and a commentary like Weber-Wulff's that describes plagiaristic ploys almost helps one do it!
One, if not the main, reason for this situation is that far less was being published, less often, in fewer journals, and by far fewer players in a given academic arena. The players were much better known to each other, far fewer published more than an article now and then, and most readers knew the relevant literature (and each other). The pace was slower. The Malthusian academic overpopulation didn't exist, so the competition was less intense (even if there were of course Big Egos competing). Publishers were mainly non-profit, careers less intensely grant-dependent (and grants were easier to get). The competition was more about ideas and actual substantive impact, and far less about academic score-counting (citations, publication counts, impact factors). Less pressure to survive, and less pressure to cheat. No first-semester graduate seminars on 'grantsmanship'.
....and no need for Nature to have a feature commentary on how to catch academic cheaters.
UPDATE:
Just after this was posted, a commentary appeared in Nature on a major academic fraud case:
https://www.nature.com/articles/d41586-019-01032-w?utm_source=twt_nnc&utm_medium=social&utm_campaign=naturenews&sf210300477=1
If we really want to encourage honor and honesty in science, we need to look not at the science but at the science culture: the money-driven, competitive, frenetic arena--and Nature and its proliferating for-profit satellite publications are a culpable part of the problem. We need to cool down the temperature of the research industry. But to me that requires reducing the amount of selfish gain available--to investigators, journals, universities, equipment suppliers--the academic-industrial complex, to pick up on Dwight Eisenhower's long-ago warning about the military's similar excesses.
But where is the will to do this?
Wednesday, April 3, 2019
Tuesday, November 21, 2017
The Knowledge Factory Crisis: A different, anthropological way to view universities
By
Ken Weiss
Nothing we humans do lives up to its own mythology. We are fallible, social, competitive, acquisitive, our understanding is incomplete, and we have competing interests to address, in our lives and as a society. I posted yesterday about universities as 'knowledge factories', reacting to a BBC radio program that discussed what is happening in universities when research findings seem unrepeatable.
That program, and my discussion of what is going on at universities, took the generally expressed view of what universities are supposed to be, and examined how that is working. The discussion concerned technical aspects that related to the nature of scientific information universities address or develop. That is, in this context, their 'purpose' for being. How well do they live up to what they are 'supposed' to be?
Many of my points in the post were about the nature of faculty jobs these days, and the way in which pressures lead to the over-claiming of findings, and so on. I made some suggestions that, in principle, could help science live up to its ideal.
Here in this post, however, I want to challenge what I have said about this. Instead, I want to take a somewhat distanced viewpoint, looking at universities from the outside, in a standard kind of viewpoint that anthropologists take, rather than simply accepting universities' own assessments of what they are about.
Doing poorly by their ideal standard
My post noted ways in which universities have become not just a 'knowledge factory' but more crass business factories, as making money increasingly and blatantly over-rides their legitimate--or at least, stated--role as idea and talent engines for society. Here's a story from a few years ago about that, which is still cogent. The fiscal pursuit discussed in this post is part of the phenomenon. As universities are run more and more as businesses, which happens even in state universities, they become more exclusive, belying their original objective, which (as in the land-grant public universities) was to make higher education available to everyone. In addition to becoming money makers themselves, academia has become a boon for student-loan bankers, too.
But this is a criticism of university-based science, expressed as it relates to how universities are structured. That structure, even in science, leads to problems for science. One might think that something so fundamentally wrong would be easy to see and to correct. But perhaps not, because universities are not isolated from society--they are of society, and therein lies some deep truth.
Excelling hugely as viewed anthropologically
If you stop examining how universities compare to their ideals, or to what most people would tell you universities were for, and instead look at them as parts of society, a rather different picture emerges.
Universities are a huge economic engine of society. They garner their very large incomes from various sources: visitors to their football and basketball stadiums, students whose borrowed money pays tuition, and agencies private and public that pour in money for research. Whether or not they are living up to some ideal function or nature, they are a major and rather independent part of our economy.
Their employees, from their lavishly paid presidents down to the building custodians, span every segment of society. The money universities garner pays their salaries, and buys all sorts of things on the open commercial economy, thereby keeping many other people gainfully employed. Their activities (such as the major breakthrough discoveries they announce almost daily) generate material and hence income for the media industries, print and electronic, which in turn helps feed those industries and their relevant commercial influences (such as customers, television sales, and more).
Human society is a collective way for us human organisms to extract our living from Nature. We compete as individuals in doing this, and that leads to hierarchies. Overall, over time, societies have evolved such that these structures extract ever more resources and energy. Via various cultural ideologies we are able to keep things going smoothly enough, at least internally, so as not to disrupt this extractive activity.
Religion, ownership hierarchies, imperialism, military, and other groups have self-justifications that make people feel they belong. This contributes to building pyramids--whether they be literal, or figurative such as religions, universities, armies, political entities, social classes, or companies. Often the justification is religious--nobility by divine right, conquest as manifest destiny, and so on. That not one of these resulting societal structures lives up to its own ideology has long been noted. Why should we expect universities to be any different? These are the cultural ways people organize themselves to extract resources for themselves.
Universities are parasites on society, very hierarchical with obscenely overpaid nobles at the top? They show no limits on the trephining they do on those who depend on them, such as graduating students with life-burdening debt? They churn through those who come to them, claiming to 'provide' the good things in life? Of course! Like it or not, by promising membership and a better life, they are just like religions or political classes or corporations!
Institutions may be so caught up in their belief systems that they don't adapt to the times or competitors, or they may change their actions (if not always their self-description). If they don't adapt they eventually crumble and are replaced by new entities with new justifications to gain popular appeal or acceptance. However, fear not, because relative to their actual (as opposed to symbolic) role in societies, universities are doing very well: at present, they very clearly show their adaptability.
In this anthropological sense, universities are doing exceedingly well, far better than ever before, churning resources and money over far faster than ever before. Grumps (like us) may point out the failure to live up to purported principles--but how is that different from any other engine of society?
In that anthropological sense, whether educating people 'properly' or not, whether claiming more discoveries than stand up to scrutiny, universities are doing very, very, very well. And that, not the purported reason that an institution exists, is the measure of how and why societal institutions persist or expand. Hypocrisy and self-justification, or even self-mythology, are always part of social organization. A long-standing anthropological technique for understanding distinguishes what are called emics from etics: what people say they do, from what they actually do.
Yes, there will have to be some shrinkage with demographic changes, and fewer students attending college, but that doesn't change the fact that, by material measures, universities are incredibly successful parts of society.
What about the intended material aspect of the knowledge factory--knowledge?
But there is another important side to all of this, which takes us back to science itself, which I think is actually important, even if it is naive or pointless to crab at the hypocrisies of science that are explicable in deep societal terms.
This has to do with knowledge itself, and with science on its own terms and goals. It relates to what could, at least in principle, advance the science itself (assuming such changes could happen without first threatening science's and scientists' and universities' assets). That will be the subject of our next post.
Monday, November 20, 2017
The 'knowledge factory'
By
Ken Weiss
This post reflects much that is in the science news, in particular our current culture's romance with data (or, to be more market-savvy about it, Big Data). I was led to write this after listening to a BBC Radio program, The Inquiry, an ongoing series of discussions of current topics. This particular episode is titled Is The Knowledge Factory Broken?
Replicability: a problem and a symptom
The answer is pretty clearly yes. One of the clearest bits of evidence is the now widespread recognition that too many scientific results, even those published in 'major' journals, are not replicable. When even the same lab tries to reproduce previous results, they often fail. The biggest recent noise on this has been in the social, psychological, and biomedical sciences, but The Inquiry suggests that chemistry and physics also have this problem. If this is true, the bottom line is that we really do have a general problem!
But what is the nature of the problem? If the world out there actually exists and is the result of physical properties of Nature, then properly done studies that aim to describe that world should mostly be replicable. I say 'mostly' because measurement and other wholly innocent errors may lead to some false conclusion. Surprise findings that are the luck of the draw, just innocent flukes, draw headlines and are selectively accepted by the top journals. Properly applied, statistical methods are designed to account for these sorts of things. Even then, in what is very well known as the 'winner's curse', there will always be flukes that survive the test, are touted by the major journals, but pass into history unrepeated (and often unrepentant).
This, however, is just the tip of the bad-luck iceberg. Non-reproducibility is so much more widespread that what we face is more a symptom of underlying issues in the nature of the scientific enterprise itself today than an easily fixable problem. The best fix is to own up to the underlying problem, and address it.
Is it rats, or scientists who are in the treadmill?
Scientists today are in a rat-race, self-developed and self-driven, out of insatiability for resources, ever-newer technology, faculty salaries, hungry universities....and this system arguably inhibits better ideas. One can liken the problem to the famous candy-factory skit on the old TV show I Love Lucy. That is how it feels to many of those in academic science today.
This Inquiry episode about the broken knowledge factory tells it like it is....almost. Despite concluding that science is "sending careers down research dead-ends, wasting talent and massive resources, misleading all of us", in my view, this is not critical enough. The program suggests what I think are plain-vanilla, clearly manipulable 'solutions'. They suggest that researchers should post their actual data and computer program code in public view so their claims can be scrutinized, that researchers should have better statistical training, and that we should stop publishing just flashy findings. In my view, this doesn't stress the root-and-branch reform of the research system that is really necessary.
Indeed, some of this is being done already. But the deeper practical realities are that scientific reports are typically very densely detailed, investigators can make weaknesses hard to spot (this can be done inadvertently, or sometimes intentionally as authors try to make their findings dramatically worthy of a major journal--and here I'm not referring to the relatively rare actual fraud).
A deeper reality is that everyone is far too busy on what amounts to a research treadmill. The tsunami of papers and their online supporting documentation is far too overwhelming, and other investigators, including readers, reviewers and even co-authors are far too busy with their own research to give adequate scrutiny to work they review. The reality is that open-publishing of raw data and computer code etc. will not generally be very useful, given the extent of the problem.
Science, like any system, will always be imperfect because it's run by us fallible humans. But things can be reformed, at least, by clearing the money and job-security incentives out of the system--really digging out what the problem is. How we can support research better, to get better research, when it certainly requires resources, is not so simple, but is what should be addressed, and seriously.
We've made some of these points before, but with apology, they really do bear stressing and repeating. Appropriate measures should include:
(1) Stop paying faculty salaries on grants (have the universities that employ them pay them);
(2) Stop using manipulable score- or impact-factor counting of papers or other counting-based items to evaluate faculty performance, and try instead to evaluate work in terms of better measures of quality rather than quantity;
(3) Stop evaluators considering grants secured when evaluating faculty members;
(4) Place limits on money, numbers of projects, students or post-docs, and even a seniority cap, for any individual investigator;
(5) Reduce university overhead costs, including the bevy of administrators, to reduce the incentive for securing grants by any means;
(6) Hold researchers seriously accountable, in some way, for their published work in terms of its reproducibility or claims made for its 'transformative' nature.
(7) Grants should be smaller in amount, but more numerous (helping more investigators) and for longer terms, so one doesn't have to start scrambling for the next grant just after having received the current one.
(8) Every faculty position whose responsibilities include research should come with at least adequate baseline working funds, not limited to start-up funds.
(9) Faculty should be rewarded for doing good research that does not require external funding but does address an important problem.
(10) Reduce the number of graduate students, at least until the overpopulation ebbs as people retire, or, at least, remove such number-counts from faculty performance evaluation.
Well, these are perhaps snarky and repetitive bleats. But real reform, beyond symbolic band-aids, is never easy, because so many people's lives depend on the system, one we've been building for more than a half-century to what it is today (some authors saw this coming decades ago and wrote warnings). It can't be changed overnight, but it can be changed, and it can be done humanely.
The Inquiry program reflects things now more often being openly acknowledged. Collectively, we can work to form a more cooperative, substantial world of science. I think we all know what the problems are. The public deserves better. We deserve better!
P.S.: In the next post, I'll consider a more 'anthropological' way of viewing what is happening to our purported 'knowledge factory'.
Even deeper, in regard to the science itself, and underlying many of these issues, are aspects of the modes of thought and the tools of inference in science. These have to do with fundamental epistemological issues and the very basic assumptions of scientific reasoning. They involve ideas about whether the universe is actually universal or parametric, and whether its phenomena are replicable. We've discussed aspects of these many times, but will add some relevant thoughts in the near future.
Replicability: a problem and a symptom
The answer is pretty clearly yes. One of the clearest bits of evidence is the now widespread recognition that too many scientific results, even those published in 'major' journals, are not replicable. When even the same lab tries to reproduce previous results, they often fail. The biggest recent noise on this has been in the social, psychological, and biomedical sciences, but The Inquiry suggests that chemistry and physics also have this problem. If this is true, the bottom line is that we really do have a general problem!
But what is the nature of the problem? If the world out there actually exists and is the result of physical properties of Nature, then properly done studies that aim to describe that world should mostly be replicable. I say 'mostly' because measurement and other wholly innocent errors may lead to some false conclusion. Surprise findings that are the luck of the draw, just innocent flukes, draw headlines and are selectively accepted by the top journals. Properly applied, statistical methods are designed to account for these sorts of things. Even then, in what is very well known as the 'winner's curse', there will always be flukes that survive the test, are touted by the major journals, but pass into history unrepeated (and often unrepentant).
This, however, is just the tip of the bad-luck iceberg. Non-reproducibility is so much more widespread that what we face is more a symptom of underlying issues in the nature of the scientific enterprise itself today than an easily fixable problem. The best fix is to own up to the underlying problem, and address it.
Is it rats, or scientists who are in the treadmill?
Scientists today are in a rat-race, self-developed and self-driven, out of insatiability for resources, ever-newer technology, faculty salaries, hungry universities....and this system can be arguably said to inhibit better ideas. One can liken the problem to the famous skit in a candy factory, on the old TV show I Love Lucy. That is how it feels to many of those in academic science today.
This Inquiry episode about the broken knowledge factory tells it like it is....almost. Despite concluding that science is "sending careers down research dead-ends, wasting talent and massive resources, misleading all of us", in my view, this is not critical enough. The program suggests what I think are plain-vanilla, clearly manipulable 'solutions. They suggest researchers should post their actual data and computer program code in public view so their claims could be scrutinized, that researchers should have better statistical training, and that we should stop publishing just flashy findings. In my view, this doesn't stress the root and branch reform of the research system that is really necessary.
Indeed, some of this is being done already. But the deeper practical realities are that scientific reports are typically very densely detailed, investigators can make weaknesses hard to spot (this can be done inadvertently, or sometimes intentionally as authors try to make their findings dramatically worthy of a major journal--and here I'm not referring to the relatively rare actual fraud).
A deeper reality is that everyone is far too busy on what amounts to a research treadmill. The tsunami of papers and their online supporting documentation is far too overwhelming, and other investigators, including readers, reviewers and even co-authors are far too busy with their own research to give adequate scrutiny to work they review. The reality is that open-publishing of raw data and computer code etc. will not generally be very useful, given the extent of the problem.
Science, like any system, will always be imperfect because it's run by us fallible humans. But things can be reformed, at least, by clearing the money and job-security incentives out of the system--really digging out what the problem is. How we can support research better, to get better research, when it certainly requires resources, is not so simple, but is what should be addressed, and seriously.
We've made some of these points before, but with apology, they really do bear stressing and repeating. Appropriate measures should include:
(1) Stop paying faculty salaries from grants (let the universities that employ faculty pay them);
(2) Stop using manipulable paper counts, impact factors, or other counting-based metrics to evaluate faculty performance, and instead evaluate work with better measures of quality rather than quantity;
(3) Stop considering grants secured when evaluating faculty members;
(4) Place limits on money, number of projects, students, or post-docs, and even a seniority cap, for any individual investigator;
(5) Reduce university overhead costs, including the bevy of administrators, to reduce the incentive for securing grants by any means;
(6) Hold researchers seriously accountable, in some way, for their published work in terms of its reproducibility or the claims made for its 'transformative' nature;
(7) Make grants smaller in amount but more numerous (helping more investigators) and for longer terms, so one doesn't have to start scrambling for the next grant just after receiving the current one;
(8) Give every faculty position whose responsibilities include research at least adequate baseline working funds, not limited to start-up funds;
(9) Reward faculty for doing good research that addresses an important problem without requiring external funding;
(10) Reduce the number of graduate students, at least until the overpopulation ebbs as people retire, or at least remove such head-counts from faculty performance evaluation.
Well, these are perhaps snarky and repetitive bleats. But real reform, beyond symbolic band-aids, is never easy, because so many people's lives depend on the system, one we've been building for more than a half-century into what it is today (some authors saw this coming decades ago and wrote warnings). It can't be changed overnight, but it can be changed, and it can be done humanely.
The Inquiry program reflects issues that are now more often openly acknowledged. Collectively, we can work to form a more cooperative, substantial world of science. I think we all know what the problems are. The public deserves better. We deserve better!
P.S.: In my next post, I'll consider a more 'anthropological' way of viewing what is happening to our purported 'knowledge factory'.
Even deeper, in regard to the science itself, and underlying many of these issues, are aspects of the modes of thought and the tools of inference in science. These have to do with fundamental epistemological issues and the very basic assumptions of scientific reasoning. They involve ideas about whether the universe is actually universal, or parametric, or whether its phenomena are replicable. We've discussed aspects of these many times, but will add some relevant thoughts in the near future.
Monday, August 24, 2015
The solution to all professors' wardrobe dilemmas
I swear. The regularly scheduled Mermaid's Tale programming that you've come to expect (and to love?) is gearing up to return in full force.
But because so many of us are also gearing up to return to campus...
And while doing so, we're coming across articles like "Female academics: don't power dress, forget heels – and no flowing hair allowed" ...
I need to share something about what I'll be wearing my first semester as a tenured professor.
But to get us there, I'll need to pose a string of rhetorical questions:
- Tired of students rating your course according to what you wear?
- Can't find a way to make the professional looks you prefer pair with flats or sneakers or anything other than torturous high heels or other dressy shoes?
- Tired of spending precious time and money on work clothes that you change out of the second you get home?
- Tired of choosing between this garment made in a sweatshop and that garment made by children?
- Hate suits?
- Do work clothes feel like a costume? Especially out-of-style ones that are too expensive to replace as trends change?
- Tired of spending money on dry cleaning and all those chemicals?
- Hate the unfair fact that some faculty (like those with white hair, white privilege, or beards) can get away with comfortable and often inexpensive t-shirts, jeans, and flip-flops, but others cannot, or can't risk finding out whether they can?
If you answered yes to even one of those questions (or to related questions that didn't dawn on me to ask), then may I suggest you try wearing an academic gown to teach?
If your profession comes with its very own costume, why not take advantage of it? It's what I'm going to do starting this semester. I bought a cheap academic gown on-line and I've even started decorating it:
| Kind of makes my chair look professorial, doesn't it? |
I know this is tradition at a few American schools, but do any of you do this where it isn't? Anyone want to start?
Wednesday, August 12, 2015
One mermaid's path to tenure
I’m not writing this in hopes that you’ll congratulate me. This is just meant to give readers a peek behind the curtain.
I’m writing because maybe you, or maybe someone who has sway over your career, has read an academic's blog and wondered how it would affect their chances for tenure. Or, maybe you or a colleague, your chair or department head, your dean, or your provost has wondered why anyone would bother going to the trouble to write on a blog when there are more important things to accomplish. Well, in my case, there clearly weren’t, because I managed to write on this blog and still be awarded tenure. What's more, I know, without a doubt, that my writing on this blog was integral to it.
I’ve written about why I blog before. And, looking back, there is so much more I could add to that post because of what's occurred over time since. However, the main reason still stands: Writing on The Mermaid’s Tale has been immensely important for my academic life. The reading and writing I do here enhances my teaching and research and the enlightening discussions I participate in here and elsewhere (facilitated by my writing here) boost my teaching and research even more. And the people who make the decisions about my tenure definitely noticed. My departmental colleagues, my dean, and the provost all readily acknowledged the value of my blogging in their letters recommending me for tenure. And the good people who served as my external reviewers didn't see my blogging as damning enough to withhold their support. And the good... no, great people who collaborate with me certainly never turned up their noses either!
I've known since receiving the provost's letter that I owe it to readers and other bloggers to post something about getting tenure. I also thought a tenure-related post could help out younger academics, in general, by exposing how someone who's never taught in a graduate program--with all the intellectual buzz and the worker bees to help, you know, the academic model we Ph.D.s are most familiar with--can still get awarded tenure.
But I've been dreading such a post because I really don't want to write a biography right now. It feels quite narcissistic to get tenure and then to post your life story as if tenure somehow validated that, as if anyone could possibly emulate another person's detailed path to tenure, as if anyone would want to!
Plus, where to begin? So much so deep in my past has set me up for getting awarded tenure, so many people have been crucial to this outcome, that it's impossible to know where or with whom to start except, obviously, at conception.
But what I really don't feel like writing about publicly, and in association with hooray-for-tenure, is why I haven't done much work on early Homo despite studying with a terrifically wonderful advisor, Alan Walker, to do just that. And that's because part of the reason is a statistic here. That, as well as other parts of the explanation, takes away from the fact that I'm very happy with the way my career has panned out, and I continue to look very much forward to every single day I'm an anthropologist.
So instead of a tour through my influenced and circumstantial history leading up to tenure, maybe I'll post the narratives I included in my portfolio. It's going to be uncomfortable. I'm going to have to look away while I paste the text, like I do when the phlebotomist pricks my vein. But here you go, minus the files of evidence that go along with each narrative.* This is a successful tenure portfolio at a small state school, in an undergraduate-only program. Hope it's useful because ouch it feels quite personal:
Tenure Portfolio Narratives
Statement of Teaching and Learning
I’ve taught four different courses so far at URI and they all focus on human origins, evolution, and variation. The introductory course, APG 201: Human Origins, counts as a general education requirement for the natural sciences and is also a requirement for majors and minors in anthropology. The upper-level courses attract not just anthropology majors and minors but students from diverse scholarly backgrounds who are interested in the in-depth examination of issues in biological anthropology. These upper-level courses include: APG 300: The Human Fossil Record (a hands-on course, which is why I dedicated a large portion of my start-up funds to the purchase of new fossil casts to augment the existing collection); APG 310: Sex and Reproduction in Our Species (a course I created because of my new research interests in the evolution of human reproduction, as well as in procreative beliefs and how they have influenced human evolution); and APG 350: Human Variation (in which I will continue to use personal genomics to engage students). In all of these courses, my two most important teaching goals are:
(1) Students should get as strong a handle on evolution as possible, shedding as many misconceptions as possible, so that they can best comprehend the biological, ecological, and cultural significance of human variation and evolution. (That, in a nutshell, is why human evolution is taught and studied within an anthropological context!)
(2) Students should achieve as much of this evolutionary and anthropological understanding on their own as possible, by thinking creatively, synthetically, and critically about the evidence.
Number one means that I probably take more time with evolutionary theory than most of my colleagues at other institutions. But because biological anthropology is the only college-level exposure to evolution (let alone biology) that many undergraduates have, it’s important that it's strong. Once they graduate, they’re consuming, producing, and voting based in no small part on their understanding of their place in nature and their (and others') place in the human species. This one chance that we get to represent evolutionary theory and human ecology and biology is crucial make-or-break time for us anthropology professors. Number two means that I have to deviate far from the conventional format for the introductory course.
In January 2012, I was awarded $21,842.50 from the Provost’s Office under its initiative called “Innovative Approaches Using Technology to Enhance the Student Experience at URI”. The title of my proposal, “145 URI undergraduates peer into their genomes to trace their ancestries, discover their individualities, ponder their futures, and celebrate their unified humanity,” sums up nicely what students were able to do. It has been a transformative new curriculum on many planes: from my perspective as a teacher, from student perspectives as learners, and also for the impact it is making on how my colleagues in my field and beyond will use personal genomics to teach anthropology. That is why I will continue to use personal genomics in APG 350.
I'm always updating APG 201, every semester, with new findings in human evolutionary biology and physical/biological anthropology. I'm also always modifying pedagogy and adapting activities with the goals of improving and broadening student learning. For example, I use colored index cards (in lieu of clickers) for regular practice questions with immediate feedback, which seem to engage and motivate students in new ways. I also used personal genomics in this course when I was awarded the Provost’s grant; however, I plan to use it only in the upper-level course (APG 350) in the future, not because it wasn’t a success, but because it takes too much time away from the fundamentals that need to be covered in this introductory course.
Since arriving at URI, I have dramatically rearranged the traditional presentation of APG 201 course materials (as they are presented in every major textbook for this popular Gen Ed course in North America) and have taught it this new way for three years with great success. The major difference is that I start with active observations and then work on explaining them with evolutionary theory, rather than beginning with evolutionary theory and then asking students to apply it thoughtlessly to spoon-fed information. Starting in Spring 2015, I will begin teaching it without a textbook, using two excellent popular science books and many high-quality on-line readings instead. The syllabus for this new curriculum is included in my portfolio. I will provide essential, fundamental material in handouts when it is not covered explicitly in the readings—something I’ve been poised to do since I wrote a reference/textbook, Human Origins 101, in 2007. I plan to eventually publish a paper describing this new strategy of guiding whom I call “naturalists in a molecular age.” Because word has gotten out to my colleagues about both the personal genomics and this new curriculum, I’ve been invited to participate in an education symposium (now accepted) at this year’s physical anthropology meetings in March 2015.
One of the most positive outcomes of the first run of APG 310: Sex and Reproduction in Our Species was the recruitment of a student (name withheld, Anthropology and Chemistry major, class of 2014) to take on a project in APG 470: Directed Research with me, guided by constructive input from the whole class. He updated a survey from a 1960s Master’s thesis at URI that looked into “premarital sexual behavior” of undergraduates here at URI. After earning IRB approval to administer the survey to volunteer participants, he presented his research at the end of the Spring 2014 semester to a group of students and faculty. Most interesting was the result showing no significant difference in the amount of premarital sexual behavior that male and female students reported, as opposed to the significant difference between the sexes that the first survey found, decades ago. This sort of work piques student interest and two anthropology majors from my second-run of APG 310, name withheld and name withheld, have worked with original student name withheld to rewrite the survey to bring it up to date, to be more health-focused, and in hopes of making the results more instructive to the URI community. Name withheld and name withheld will be submitting their proposal to the IRB committee in October 2015 and if approved they’ll be collecting the data and analyzing it over the course of the academic year.
I have been very lucky that some of the most enthusiastic researchers and clever minds have opted to work on projects with me for credits in APG 470. I asked name withheld (Anthropology & Biology) to co-author an article on the evolution of childbirth with me for the Annual Reviews of Anthropology because of her research skills and also her relevant interests as demonstrated in prior anthropology courses with me. And then name withheld (Anthropology & Biology) has gotten a head start on her Honors project with me already. She’s taken up a project that I’ve been wanting to get started since 2006. She’s figuring out how apes lost their tails and, thus, why we ended up tailless. In the coming academic year, she’ll be applying for funding to travel to regional museums to collect data on primate skeletons.
In the next few years, I will be devising short courses and field trips through the study abroad office to sites of anthropological interest—not just to my fossil field sites in Kenya, but to other sites in Africa, Europe, and Latin America where students can chase primates in the wild or crawl into painted Paleolithic caves.
Statement of Research
How did humans become human? How did apes? How does evolution work? And does it work differently in humans, or because of humans? These are the questions that drive my research and educational endeavors. Since my arrival in Fall 2011, URI has encouraged and supported my scientific and scholarly activity as I have pursued two main areas of research (below). I have also begun a book project that brings all of these things together. These three research areas should continue to challenge me and create opportunities for students for many years to come.
1. Augmenting and making sense of the fossil record for ape and human evolution.
As part of an international and interdisciplinary team, funded by the Leakey Foundation and the NSF, I perform paleoanthropological fieldwork on Rusinga and Mfangano Islands in Western Kenya. Fossils from these sites represent plants and animals that lived in the early Miocene epoch (about 20-18 million years ago), some of which, like the primate Proconsul, are good candidates for some of the earliest apes. Without the origin of apes, chimpanzees and humans would never have evolved. This work is not only geared toward finding more specimens of Proconsul and other primates; we are also reconstructing the paleoenvironments in which these primates lived and evolved. Our latest paper to come of this project was published in Nature Communications this year.
Here at home, I continue to work on the functional anatomy and growth and development (ontogenetic) patterns of fossil apes like Proconsul, particularly in their feet and hindlimbs, as those traits relate to locomotion and to the ability to cling to mother during development, and over evolutionary time. It’s important to reconstruct how this fossil ape was moving about if we’re to understand how modern ape and human behavior came about. Since coming to URI I have taken advantage of our proximity to the primate skeletal collections at the American Museum of Natural History where I have gathered data on extant primates to compare against the fossils. Up until recently this work has been a continuation of my doctoral dissertation on anthropoid feet and hindlimbs, but recently I have begun a similar project with undergraduate anthropology/biology major and honors student name withheld on tails. By looking to primates in the fossil record (like the tailless Proconsul), to variation in extant primate tails, and to known genes for tail development, we are answering the question, “Why don’t humans have tails?”
Although there’s much to keep us busy in addressing these matters here in the U.S., I would still like to return semi-regularly to Kenya to continue the lifetime of work that needs to be done at Rusinga and Mfangano Islands, in both fossil collection and analysis. I hope to create a short course with International Programs to give URI students a marvelous experience doing paleoanthropology.
2. Reconstructing the evolution of human pregnancy, childbirth, and infant development
Living apes, not just fossils, also offer a glimpse of evolution. So along with another team of collaborators, I study energy use in apes and other mammals. Mammals process energy differently from one another and these differences may reflect different evolutionary selection pressures both internally within the organism and externally from the environment. Energetic use in humans is fairly well understood but it's only through comparison with other species that we can understand human energetics from an evolutionary perspective. Likewise, human data are necessary for understanding the energetic use of other primates. To this end, I’ve collected energetic and behavioral data from the chimpanzees and gorillas at Lincoln Park Zoo. The first paper to come of this work was published this year in PNAS.
I am particularly interested in the energetics and metabolic parameters of pregnancy, fetal growth, infant growth, and lactation and how those determine the timing of birth in humans and other mammals. This is a significant area of anthropological research, given how it has long been assumed that the unique human skeleton, particularly the pelvis and how it’s metamorphosed for upright walking, has limited gestation and fetal growth—that the skeleton explains why our babies are difficult to birth and are quite helpless when they arrive. My research has shown that this traditional pelvic explanation (the “obstetrical dilemma”) is much weaker than its popularity indicates and that maternal metabolism and how mothers process energy are likely to be the primary determinants of gestation length and fetal growth, not just in humans but across primates and placental mammals. Although it is my primary goal to reconstruct human evolutionary history as accurately as possible (or at least as plausibly as possible), there are also potential applications of this research toward better understanding the causes of pregnancy disorders like preeclampsia.
This research has attracted attention and I have been invited to give talks at numerous college campuses as a result. The highlight so far has been the invitation from organizers (and established human reproduction researchers) Karen Rosenberg and Wenda Trevathan to participate in a scholarly seminar at Santa Fe’s School for Advanced Research (SAR) titled “Costly but cute: How helpless babies made us human.” The collection of our papers is currently under peer-review and the volume should be published next year. I’ve also been invited to write on the evolution of childbirth for the Annual Reviews of Anthropology. That manuscript is due in January 2015 and I’ve enlisted a keen undergraduate anthropology/biology major, name withheld, to co-author the piece with me.
I am currently scheming up my next research steps. (The rest of this paragraph is redacted because it's a big fun secret for now.)
The Baby Makers: Scholarly/Popular Trade Book Project
For the last two years I’ve been working with a literary agent on a proposal for a book that I’m very excited about. It’s requiring me to scratch at the overlap between evolutionary biology and cultural anthropology—disciplines that are diametrically opposed in the eyes of many scholars. Reconciling these two schools of thought as well as discovering what, perhaps, evolutionary biology cannot explain is challenging but feels necessary in order for me to go on as both an anthropologist and an educator. The book assumes, as its premise, that ... (The rest is redacted because it's a big fun secret for now. It's a project that I've since partnered-up with Anne in, and we'll gladly talk about it but not post much about yet. We're very excited and having a ball.)
Statement of Service and Professional Outreach
I have participated in service projects at many levels at URI and within my field, while I have also prioritized outreach, locally and beyond. I will continue to perform these duties and hope to increase my contributions and impact, but here is what I have done so far:
Our department had a successful search for a new colleague, and I’m proud to have been on the committee that helped to accomplish it. In addition to our regular advising of majors and minors, I served as the anthropology advisor at University College for the 2013-14 academic year. The same year, I joined the Faculty Senate and served on name withheld’s Master’s committee in CELS, where he defended a stellar thesis on shark feeding morphology. This year I served on the search committee for a multicultural postdoctoral fellow in BES/CELS chaired by name withheld.
Beyond URI, I have reviewed manuscripts for several scientific and scholarly journals, as well as grant proposals for the NSF and the Leakey Foundation. In 2013, Nature Education launched the room of open-access, peer-reviewed articles on The Human Fossil Record, which I edited as part of their Biological Anthropology series. There are even more articles in press that will be posted soon. In addition, I was invited to give a talk at the California Academy of Sciences in November 2012 about my experience with personal genomics (23andMe) as an educator, as an anthropologist, and as a human being. While I was in San Francisco I visited a school assembly of 3rd-8th graders as well as a high school genetics class and talked with them about science, genetics, paleoanthropology, and evolution. There, I also gave a presentation to the Leakey Foundation’s Scientific and Executive Boards about the research I’ve done that they’ve funded and will hopefully continue to support. It was well received, and I was encouraged to keep applying for funding. Here in Rhode Island I have presented on evolution at a public library, a Masonic lodge, a Catholic elementary school, and three times at assisted living/retirement homes.
For the past three years I have been a core team member of the Smithsonian Institution National Museum of Natural History’s Human Origins Program Educator’s Network (HopEdNet). My duties include fielding questions about human evolution, via email, that visitors to the exhibit hall in Washington DC type into the computer. I’m also involved with the Smithsonian in a magnificent project called “Teaching Evolution Through Human Examples” (or “TetHE”), which is led by PIs name withheld and name withheld of the Smithsonian’s Human Origins Program. I have helped to create new resource activities and teaching strategies focused on human evolution for AP Biology. My primary role is as scientific content consultant, but I am part of a larger group of people, including the leaders of the AP Biology standards as well as pedagogy experts, all working together on this project. These curricular packets will be complete in date withheld and I am very much looking forward to using them in APG 350: Human Variation, both to teach biological anthropology and to expose our students to this pedagogy should they become educators themselves. It’s through my TetHE colleagues that I got my scientific process lesson plan published at Berkeley’s Understanding Science site. It’s currently one of the top three teacher resources there.
I try very hard, where and when I can, to engage the greater public in anthropology, evolution, and science, and so I continue to write on the blog The Mermaid’s Tale. I write about new discoveries in biological anthropology, including my own, as well as educational issues (mainly for my colleagues) and larger “how do we know what we know” questions. This is most definitely an outreach endeavor; however, the boost to my own research and teaching that comes from writing here, and from engaging with my blog’s co-authors and colleagues who read the blog, is significant. A list of my best, most popular posts is here. In total, my posts have received 110,000 hits since I began writing in 2009. Most of my posts are read by hundreds, but a few have been seen by as many as 13,000+ because some colleagues assign them to students (as do I) and others have cited or re-published them on their own websites. My most recent post on natural selection was republished by the on-line science magazine io9. Another of my posts was re-published on Scientific American’s site. It’s due in part to my activity on my blog that my anthropological research got noticed by the BBC and I was asked to be part of an episode of their science program Horizon (equivalent to our Nova). Here’s the piece in the Guardian that discusses my research and that was published to announce this television program. I also filmed all about ape and human tail loss for PBS’s “Your Inner Fish” program: “How do we know when our ancestors lost their tails?”
***
Do you have questions? Anonymous or otherwise, feel free to post here or on Facebook or Twitter and I'll do my best to answer them. Cheers.
*Here's my CV and here's my scholar.google profile. In the above narratives, you may see some new typos because I've just copied and pasted them from a pdf and also because I make typos. Hyperlinks are gone because they're not the point, and I replaced all names with "name withheld" because Google searches for those people shouldn't land on this.
I’m writing because maybe you, or maybe someone who has sway over your career, has read an academic's blog and wondered how it would affect their chances for tenure. Or, maybe you or a colleague, your chair or department head, your dean, or your provost has wondered why anyone would bother going to the trouble to write on a blog when there are more important things to accomplish. Well, in my case, there clearly weren’t, because I managed to write on this blog and still be awarded tenure. What's more, I know, without a doubt, that my writing on this blog was integral to it.
![]() |
| Not me and not Proconsul. This is the seriously awesome result of googling for "Mermaid Professor." (source) |
I’ve written about why I blog before. And, looking back, there is so much more I could add to that post because of what's occurred over time since. However, the main reason still stands: Writing on The Mermaid’s Tale has been immensely important for my academic life. The reading and writing I do here enhances my teaching and research and the enlightening discussions I participate in here and elsewhere (facilitated by my writing here) boost my teaching and research even more. And the people who make the decisions about my tenure definitely noticed. My departmental colleagues, my dean, and the provost all readily acknowledged the value of my blogging in their letters recommending me for tenure. And the good people who served as my external reviewers didn't see my blogging as damning enough to withhold their support. And the good... no, great people who collaborate with me certainly never turned up their noses either!
I've known since receiving the provost's letter that I owe it to readers and other bloggers to post something about getting tenure. I also thought a tenure-related post could help out younger academics, in general, by exposing how someone who's never taught in a graduate program--with all the intellectual buzz and the worker bees to help, you know, the academic model us Ph.D.s are most familiar with--can still get awarded tenure.
But I've been dreading such a post because I really don't want to write a biography right now. It feels quite narcissistic to get tenure and then to post your life story as if tenure somehow validated that, as if anyone could possibly emulate another person's detailed path to tenure, as if anyone would want to!
Plus, where to begin? So much so deep in my past has set me up for getting awarded tenure, so many people have been crucial to this outcome, that it's impossible to know where or with whom to start except, obviously, at conception.
But what I really don't feel like writing about publicly, and in association with hooray-for-tenure, is about why I haven't done much work on early Homo despite studying with a terrifically wonderful advisor, Alan Walker, to do just that. And that's because part of the reason is a statistic here. That, as well as other parts of the explanation take away from the fact that I'm very happy with the way my career has panned out and I continue to look very much forward to every single day I'm an anthropologist.
So instead of a tour through my influenced and circumstantial history leading up to tenure, maybe I'll post the narratives I included in my portfolio. It's going to be uncomfortable. I'm going to have to look away while I paste the text, like I do when the phlebotomist pricks my vein. But here you go, minus the files of evidence that go along with each narrative.* This is a successful tenure portfolio at a small state school, in an undergraduate-only program. Hope it's useful because ouch it feels quite personal:
Tenure Portfolio Narratives
Statement of Teaching and LearningI’ve taught four different courses so far at URI and they all focus on human origins, evolution and variation. The introductory course, APG 201: Human Origins, counts as a general education requirement for the natural sciences and also is a requirement for majors and minors in anthropology. The upper level courses attract not just anthropology majors and minors but students from diverse scholarly backgrounds who are interested in the in-depth examination of issues in biological anthropology. These upper level courses include: APG 300: The Human Fossil Record (a hands-on course which is why I dedicated a large portion of my start-up funds to the purchase of new fossil casts which augmented the existing collection); APG 310: Sex and Reproduction in Our Species (a course I created because of my new research interests in the evolution of human reproduction, as well as in procreative beliefs and how they have influenced human evolution.); APG 350: Human Variation (in which I will continue to use personal genomics to engage students.). In all of these courses, my two most important teaching goals are:
(1) Students should get as strong a handle on evolution as possible, shedding as many misconceptions as possible, so that they can best comprehend the biological, ecological, and cultural significance of human variation and evolution. (That, in a nutshell, is why human evolution is taught and studied within an anthropological context!)
(2) Students should achieve as much of this evolutionary and anthropological understanding on their own as possible, by thinking creatively, synthetically, and critically about the evidence.
Number one means that I probably take more time with evolutionary theory than most of my colleagues at other institutions. But because biological anthropology is the only college-level exposure to evolution (let alone biology) that many undergraduates have, it’s important that it's strong. Once they graduate, they’re consuming, producing, and voting based in no small part on their understanding of their place in nature and their (and others') place in the human species. This one chance that we get to represent evolutionary theory and human ecology and biology is crucial make-or-break time for us anthropology professors. Number two means that I have to deviate far from the conventional format for the introductory course.
In January 2012, I was awarded $21,842.50 from the Provost’s Office under their initiative called “Innovative Approaches Using Technology to Enhance the Student Experience at URI”. The title of my proposal, “145 URI undergraduates peer into their genomes to trace their ancestries, discover their individualities, ponder their futures, and celebrate their unified humanity,” sums up nicely what students were able to do. It has been a transformative new curriculum on many planes, from my perspective as a teacher, from student perspectives as learners, and also for the impact it is making on how my colleagues in my field and beyond will use personal genomics to teach anthropology. That is why I will continue to use personal genomics in APG 350.
I'm always updating APG 201, every semester, with new findings in human evolutionary biology and physical/biological anthropology. I'm also always modifying pedagogy and adapting activities with the goals of improving and broadening student learning. For example, I use colored index cards (in lieu of clickers) for regularly practicing questions with immediate feedback, which seem to engage and motivate students in new ways. I also used personal genomics in this course when I was awarded the Provost’s grant; however, I plan to use it only in the upper level course (APG 350) in the future, not because it wasn’t a success, but because it takes too much time away from the fundamentals that need to be covered in this introductory course.
Since arriving at URI, I have dramatically rearranged the traditional presentation of APG 201 course materials (as they are presented in every major textbook for this popular Gen Ed course in North America) and have taught it for three years in this new way with great success. The major difference is that I start with active observations and then work on explaining them with evolutionary theory, rather than beginning with evolutionary theory and then asking students to apply it thoughtlessly to spoon-fed information. Starting in Spring 2015, I will begin teaching it without a textbook, using two excellent popular science books and many high-quality on-line readings instead. The syllabus for this new curriculum is included in my portfolio. I will provide essential, fundamental material in handouts when it is not covered explicitly in the readings—something I’ve been poised to do since I wrote a reference/textbook Human Origins 101 in 2007. I plan to eventually publish a paper describing this new strategy of guiding students I call “naturalists in a molecular age.” Because word has gotten out to my colleagues about both the personal genomics and this new curriculum, I’ve been invited to participate in an education symposium (that’s been accepted) at this year’s physical anthropology meetings in March 2015.
One of the most positive outcomes of the first run of APG 310: Sex and Reproduction in Our Species was the recruitment of a student (name withheld, Anthropology and Chemistry major, class of 2014) to take on a project in APG 470: Directed Research with me, guided by constructive input from the whole class. He updated a survey from a 1960s Master’s thesis at URI that looked into “premarital sexual behavior” of undergraduates here at URI. After earning IRB approval to administer the survey to volunteer participants, he presented his research at the end of the Spring 2014 semester to a group of students and faculty. Most interesting was the result showing no significant difference in the amount of premarital sexual behavior that male and female students reported, as opposed to the significant difference between the sexes that the first survey found, decades ago. This sort of work piques student interest and two anthropology majors from my second-run of APG 310, name withheld and name withheld, have worked with original student name withheld to rewrite the survey to bring it up to date, to be more health-focused, and in hopes of making the results more instructive to the URI community. Name withheld and name withheld will be submitting their proposal to the IRB committee in October 2015 and if approved they’ll be collecting the data and analyzing it over the course of the academic year.
I have been very lucky that some of the most enthusiastic researchers and clever minds have opted to work on projects with me for credits in APG 470. I asked name withheld (Anthropology & Biology) to co-author an article on the evolution of childbirth with me for the Annual Review of Anthropology because of her research skills and also her relevant interests as demonstrated in prior anthropology courses with me. And name withheld (Anthropology & Biology) has already gotten a head start on her Honors project with me. She’s taken up a project that I’ve been wanting to get started since 2006: she’s figuring out how apes lost their tails and, thus, why we ended up tailless. In the coming academic year, she’ll be applying for funding to travel to regional museums to collect data on primate skeletons.
In the next few years, I will be devising short courses and field trips through the study abroad office to sites of anthropological interest—not just to my fossil field sites in Kenya, but to other sites in Africa, Europe, and Latin America where students can chase primates in the wild or crawl into painted Paleolithic caves.
Statement of Research
How did humans become human? How did apes become apes? How does evolution work? And does it work differently in humans, or because of humans? These are the questions that drive my research and educational endeavors. Since my arrival in Fall 2011, URI has encouraged and supported my scientific and scholarly activity as I have pursued two main areas of research (below). I have also begun a book project that brings all of these things together. These three research areas should continue to challenge me and create opportunities for students for many years to come.
1. Augmenting and making sense of the fossil record for ape and human evolution.
As part of an international and interdisciplinary team, funded by the Leakey Foundation and the NSF, I perform paleoanthropological fieldwork on Rusinga and Mfangano Islands in Western Kenya. Fossils from these sites represent plants and animals that lived in the early Miocene epoch (dating to about 20-18 million years ago), some of which, like the primate Proconsul, are good candidates for some of the earliest apes. Without the origin of apes, there would be no chimpanzees and no humans. This work is not only geared toward finding more specimens of Proconsul and other primates; we are also reconstructing the paleoenvironments in which these primates lived and evolved. Our latest paper to come of this project was published in Nature Communications this year.
Here at home, I continue to work on the functional anatomy and growth and development (ontogenetic) patterns of fossil apes like Proconsul, particularly in their feet and hindlimbs, as those traits relate to locomotion and to the ability to cling to mother during development, and over evolutionary time. It’s important to reconstruct how this fossil ape was moving about if we’re to understand how modern ape and human behavior came about. Since coming to URI I have taken advantage of our proximity to the primate skeletal collections at the American Museum of Natural History where I have gathered data on extant primates to compare against the fossils. Up until recently this work has been a continuation of my doctoral dissertation on anthropoid feet and hindlimbs, but recently I have begun a similar project with undergraduate anthropology/biology major and honors student name withheld on tails. By looking to primates in the fossil record (like the tailless Proconsul), to variation in extant primate tails, and to known genes for tail development, we are answering the question, “Why don’t humans have tails?”
Although there’s much to keep us busy in addressing these matters here in the U.S., I would still like to return semi-regularly to Kenya to continue the lifetime of work that needs to be done at Rusinga and Mfangano Islands, in both fossil collection and analysis. I hope to create a short course with International Programs to give URI students a marvelous experience doing paleoanthropology.
2. Reconstructing the evolution of human pregnancy, childbirth, and infant development
Living apes, not just fossils, also offer a glimpse of evolution. So along with another team of collaborators, I study energy use in apes and other mammals. Mammals process energy differently from one another and these differences may reflect different evolutionary selection pressures both internally within the organism and externally from the environment. Energetic use in humans is fairly well understood but it's only through comparison with other species that we can understand human energetics from an evolutionary perspective. Likewise, human data are necessary for understanding the energetic use of other primates. To this end, I’ve collected energetic and behavioral data from the chimpanzees and gorillas at Lincoln Park Zoo. The first paper to come of this work was published this year in PNAS.
I am particularly interested in the energetics and metabolic parameters of pregnancy, fetal growth, infant growth, and lactation and how those determine the timing of birth in humans and other mammals. This is a significant area of anthropological research, given how it has long been assumed that the unique human skeleton, particularly the pelvis and how it’s metamorphosed for upright walking, has limited gestation and fetal growth—that the skeleton explains why our babies are difficult to birth and are quite helpless when they arrive. My research has shown that this traditional pelvic explanation (the “obstetrical dilemma”) is much weaker than its popularity indicates and that maternal metabolism and how mothers process energy are likely to be the primary determinants of gestation length and fetal growth, not just in humans but across primates and placental mammals. Although it is my primary goal to reconstruct human evolutionary history as accurately as possible (or at least as plausibly as possible), there are also potential applications of this research toward better understanding the causes of pregnancy disorders like preeclampsia.
This research has attracted attention, and I have been invited to give talks at numerous college campuses as a result. The highlight so far has been the invitation from organizers (and established human reproduction researchers) Karen Rosenberg and Wenda Trevathan to participate in a scholarly seminar at Santa Fe’s School for Advanced Research (SAR) titled “Costly but cute: How helpless babies made us human.” The collection of our papers is currently under peer review and the volume should be published next year. I’ve also been invited to write on the evolution of childbirth for the Annual Review of Anthropology. That manuscript is due in January 2015 and I’ve enlisted a keen undergraduate anthropology/biology major, name withheld, to co-author the piece with me.
I am currently scheming up my next research steps. (The rest of this paragraph is redacted because it's a big fun secret for now.)
The Baby Makers: Scholarly/Popular Trade Book Project
For the last two years I’ve been working with a literary agent on a proposal for a book that I’m very excited about. It’s requiring me to scratch at the overlap between evolutionary biology and cultural anthropology—disciplines that are diametrically opposed in the eyes of many scholars. Reconciling these two schools of thought as well as discovering what, perhaps, evolutionary biology cannot explain is challenging but feels necessary in order for me to go on as both an anthropologist and an educator. The book assumes, as its premise, that ... (The rest is redacted because it's a big fun secret for now. It's a project that I've since partnered-up with Anne in, and we'll gladly talk about it but not post much about yet. We're very excited and having a ball.)
Statement of Service and Professional Outreach
I have participated in service projects at many levels at URI and within my field, while I have also prioritized outreach, locally and beyond. I will continue to perform these duties and hope to increase my contributions and impact, but here is what I have done so far:
Our department had a successful search for a new colleague, and I’m proud to have been on the committee that helped to accomplish it. In addition to our regular advising of majors and minors, I served as the anthropology advisor at University College for the 2013-14 academic year. The same year I joined the Faculty Senate and served on name withheld’s Master’s committee in CELS, where he defended a stellar thesis on shark feeding morphology. This year I served on the search committee for a multicultural postdoctoral fellow in BES/CELS chaired by name withheld.
Beyond URI, I have reviewed manuscripts for several scientific and scholarly journals, as well as grant proposals for NSF and the Leakey Foundation. In 2013, Nature Education launched The Human Fossil Record, a room of open-access, peer-reviewed articles that I edited as part of their Biological Anthropology series. There are even more articles in press that will be posted soon. In addition, I was invited to give a talk at the California Academy of Sciences in November 2012 about my experience with personal genomics (23andMe) as an educator, as an anthropologist, and as a human being. While I was in San Francisco I visited an assembly of 3rd-8th graders as well as a high school genetics class and talked with them about science, genetics, paleoanthropology, and evolution. There, I also gave a presentation to the Leakey Foundation’s Scientific and Executive Boards about the research I’ve done that they’ve funded and will hopefully continue to support. It was well received and I was encouraged to keep applying for funding. Here in Rhode Island I have presented on evolution at a public library, a Masonic lodge, a Catholic elementary school, and three times at assisted living/retirement homes.
For the past three years I have been a core team member of the Smithsonian Institution National Museum of Natural History’s Human Origins Program Educator’s Network (HopEdNet). My duties include fielding questions about human evolution, via email, that visitors to the exhibit hall in Washington DC type into the computer. I’m also involved with the Smithsonian in a magnificent project called “Teaching Evolution Through Human Examples” (or “TetHE”) which is led by PIs name withheld and name withheld of the Smithsonian’s Human Origins Program. I have helped to create new resource activities and teaching strategies focused on human evolution for AP Biology. My primary role is as scientific content consultant but I am part of a larger group of people, including the leaders of the AP Biology standards as well as pedagogy experts, all working together on this project. These curricular packets will be complete in date withheld and I am very much looking forward to using them in APG 350: Human Variation, both to teach biological anthropology but also to expose our students to this pedagogy should they become educators themselves. It’s through my TetHE colleagues that I got my scientific process lesson plan published at Berkeley’s Understanding Science site. It’s currently one of the top three teacher resources there.
I try very hard, where and when I can, to engage the greater public in anthropology, evolution, and science, and so I continue to write on the blog The Mermaid’s Tale. I write about new discoveries in biological anthropology, including my own, as well as educational issues (mainly for my colleagues), and larger “how do we know what we know” questions. This is most definitely an outreach endeavor; however, the boost to my own research and teaching that comes from writing here, and engaging with my blog’s co-authors and colleagues who read the blog, is significant. A list of my best, most popular posts is here. In total, my posts have received 110,000 hits since I began writing in 2009. Most of my posts are read by hundreds, but a few have been seen by as many as 13,000+ because some colleagues assign them to students (as do I) and others have cited or re-published them on their own websites. My most recent post on natural selection was republished by the on-line science magazine io9. Another of my posts was re-published on Scientific American’s site. It’s due in part to my activity on my blog that my anthropological research got noticed by the BBC and I was asked to be part of an episode of their science program Horizon (equivalent to our Nova). Here’s the piece in the Guardian that discusses my research and that was published to announce this television program. I also filmed all about ape and human tail loss for PBS’s “Your Inner Fish” program: “How do we know when our ancestors lost their tails?”
Do you have questions? Anonymous or otherwise, feel free to post here or on Facebook or Twitter and I'll do my best to answer them. Cheers.
*Here's my CV and here's my scholar.google profile. In the above narratives, you may see some new typos because I've just copied and pasted them from a pdf and also because I make typos. Hyperlinks are gone because they're not the point, and I replaced all names with "name withheld" because Google searches for those people shouldn't land on this.
Tuesday, April 7, 2015
IF: Impact Factor....or inflation factor?
By Ken Weiss
As in many departments, our graduate students and post-docs here in the Penn State Anthropology Department hold weekly 'journal clubs' where recent interesting papers are discussed. Last week, the students discussed the nature, value and importance of tabulations of journal impact factors (IF), basically the citation rate per published paper. There have been many papers and commentaries on this subject in recent years, but this session focused on a paper by Brembs, Button and Munafo entitled "Deep impact: unintended consequences of journal rank," published in 2013 in Frontiers in Human Neuroscience.
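For readers who haven't met the metric itself: the standard two-year impact factor is just a ratio, namely citations received in one year to the items a journal published in the previous two years, divided by the number of 'citable items' it published in those two years. A minimal sketch of the arithmetic (the journal and its numbers are invented for illustration):

```python
def impact_factor(citations_this_year: int, citable_items_prior_two_years: int) -> float:
    """Two-year journal impact factor: citations received this year to
    papers from the previous two years, divided by the count of citable
    items published in those two years."""
    if citable_items_prior_two_years == 0:
        raise ValueError("journal published no citable items in the window")
    return citations_this_year / citable_items_prior_two_years

# Invented example: 2,450 citations in 2014 to papers the journal
# published in 2012-2013, during which it published 350 citable items.
print(round(impact_factor(2450, 350), 2))  # prints 7.0
```

Note how much room the denominator leaves for the gaming discussed below: what counts as a 'citable item' (editorials and news pieces often don't, though citations to them do count in the numerator) is negotiated, not fixed.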
The IF scandal
This article assesses the assigned IF of journals relative to their retraction, error or fraud, and reliability or replicability rates. The objective picture of the IFs is not encouraging. Statistical analysis shows that the 'major' journals (the expensive, exclusive, snobbish high-status ones) are, in terms of the quality and accuracy of their content, no better, and arguably worse, than the less prestigious journals. We won't go into the details, but basically there is a rush to publish dramatic results in those status journals; the journals are in business to attract and generate attention, and that leads them to receive, and publish, splashier claims. In this self-reinforcing pattern they garner the honors, so it is worrisome that they appear to do this without actually publishing the best research or, worse, while systematically publishing unreliable work.

Our student seminar discussed this problem and the effect it may have on their careers, if their work is going to be judged by a somewhat rigged, and inaccurate, scoring system. If they can't get into the high-IF elite publishing club, which is somewhat self-reinforcing, how can they compete for jobs and grants and get their work known?
We have several reactions to this. For students and others who may have a stake in (if not in the heart of!) the system, here are a few thoughts on alternatives to the high IF journals. The picture is grim, but in some surprising ways, not at all hopeless.
Some thoughts for students:
First, TheWinnower, founded by Josh Nicholson, a graduate student at Virginia Tech, is a new online site where one can send papers but also where blogs and other such new-media communications can be published, and these publications given a DOI (digital object identifier) and hence be more regularly citable and permanently archived. It's but one of many new communication venues. A lot of what is on these media is of course superficial fluff, but (Sssh!! don't tell anyone!) so is a lot of any sort of publication, even (believe it or not!) in the 'major' journals, and so has it always been, even in the old-time printed journals of yore.
Secondly, there are allies in any movement towards change, not just from the grass roots where pressure for social change usually arises. There are thoughtful and progressive administrators, and serious scholars and scientists, who are resisting the pressure to use IF score-counting, in career evaluations, purportedly to make them more 'objective'.
And, there are many people making their way largely and in various ways on blogs, open-access publishing, online teaching, communicating with people via Twitter and other outlets (most of which we, being quite senior, probably don't even know of!). Writing for public media of all sorts has always been a mainline, legitimate way to build careers in anthropology, especially its sociocultural sides. But generally, critiques of the system at all levels, such as repeated revelations about score-counting bureaucracies and IF biases, as well as objections to closed access publishing, will have their impact if they are repeated often and loudly enough.
Thirdly, ironically and reassuringly perhaps, the tightening of grant prospects and the well-documented concentration of funding in the hands of senior investigators mean that more people will have to rely less on grants, and their university employers will simply have to recognize that. Teaching and other forms of service, scholarship, and outreach will simply have to be reinvigorated. Universities aren't just going to close shop because their grant funds shrink. They're not even going to be able to keep shifting towards hiring poorly paid Instructors. So the field is open for innovation and creativity.
Fourthly, also ironically, the greater the rush to the Big Journals, the better it may be for the job prospects of current grad students. Why? Well, fewer people in the running for each job will have such publications on their CVs than perhaps was the case in the past. As long as applicants realize that others will want the same jobs they do, and they develop their skills and depth of thought accordingly, they'll compete well. After all, colleges and universities will simply not be able to hold out for those few with BigName publications, even if they wanted to. They'll be 'stuck' having to evaluate people on their actual merits. And, not so trivial as you might think, most of their faculty haven't got BigName papers either, and might not want to be outshone by adding a junior hyper-achiever to their midst. Indeed, many less research-intensive but wholly academically serious places feel, correctly, that applicants for faculty positions who have BigName publications don't really want to work there and will move on as soon as they can get a 'better' job, and/or in the meantime won't be dedicated to teaching, students and the local institution. So things aren't always as dire or as one-sided as they seem, even if times are relatively difficult right now.
Fifth, if the intense rigors of the research-intensive Fast Lane appeal to you, well, you know the gig and its competitive nature; getting your advanced degree from a fine and well-regarded program will give you a chance at the brass ring. Those avenues are of course open, even if highly competitive.
"Painted Pony Bean" by Liveon001 © Travis K. Witt (own work), licensed under CC BY-SA 3.0 via Wikimedia Commons.
But why does anyone even tally such things as impact factors?
An obvious question is why anybody would tally impact factors in the first place. Who has what to gain? The answer has to be that it is in someone's interest and someone will gain by it. After all, when some of us started our careers, there was no such thing (or rather, its earlier version, the Science Citation Index, was remote, in the library, laborious to look through, and then usually used only for legitimate resource searching). Scholarship itself was on average at least as good as now, careers were made without bean-counting but more on merit and substance, bean-counting expectations were lower (and respect for teaching higher), and the grant game was much, much less intense.
IF scores are computed by a commercial company, Thomson Reuters, as what, exactly? As a favor to the publishing industry, and for what we would call a kind of academic bourgeois market for baubles and vanity. Journals self-promote by gaming their IFs, and universities self-promote by gaming their faculty's IF ratings. They have money to make by promoting and, yes, manipulating their IFs (see the above article for just some of the ways). One can ask whether there is even a single reason for such score-keeping other than the construction of artificial status hierarchies.
One motivation for this bean-counting is the heavy proliferation of online journals. Some of these are very highly respected, and deservedly, while others are chaff or, worse, scams for making money playing on fears and insecurities of faculty members needing advancement. IFs will at least be some assistance to an administrator or grant reviewer who wants to have an idea of a faculty candidate's record. But if the IFs are systematically unreliable, or manipulated, or even reverse indicators of actual work quality as some articles like the one above have suggested, that is a rather lame rationale for using IFs. Administrators evaluating their faculty members' careers should look at the actual work, not just some computer-tallied score about it. That may not be easy, but administrators are well-paid and accepted their jobs, after all.
There was in the past an insider Old Boy network in academe that discriminated more arbitrarily in terms of funding, with over-powerful Editors who controlled who published and what they published, and less opportunity for women and cultural minorities (based on ethnic as well as university status hierarchies). To increase fairness, but also to avoid discrimination lawsuits, and to play the self-promotion PR spin game, universities and their administrative officials learned the value of being 'objective' and hiding behind Excel spreadsheets rather than judgment. More objectivity did in many ways dislodge the older elite insider networks, but a system of elites has clearly re-established itself, and manipulable IFs and their associated commercial incentives have helped reestablish some dominance in the academic system. The system may still be wide open in many ways, but it is heavily burdened by the game because the corporate university has become so money-oriented. This is very well documented. Things like IFs serve those interests.
The academic world will experience change, and is changing, and the new ways of communication are better and faster and more open-structured than ever. They make life more frenetic, but that will probably calm down because it's exhausting everyone. There will of course always be an elite, and for some that's a happy community to be part of. But it's not to everyone's taste. How long it will take coup-counting administrators to accept these other venues such as online communications, is unclear, but it's happening.
Social change requires resistance to the status quo, usually organized resistance (or else money-based leverage). Bureaucracies do need to be pressured, by faculty, graduate students and post-docs, and people like Department Heads and Chairs. But, it has to happen, and it will.
An obvious question one should be why anybody would tally impact factors in the first place? Who has what to gain? The answer has to be that it is in someone's interest and someone will gain by it. After all, when some of us started our careers, there was no such thing (or, the earlier version Science Citation Index, was remote, in the library, laborious to look through and then usually only for legitimate resource searching). Scholarship itself was on average at least as good as now, careers were made without bean-counting but more on merit and substance, and bean-counting expectations were lower (and respect for teaching higher), the grant game much, much less intense.
IF scores are computed by a commercial company, Thompson-Reuters, as---what? As a favor to the publishing industry, and for what we would call a kind of academic bourgeois market for baubles and vanity. Journals self-promote by gaming their IFs, universities self-promote by gaming their faculty's IF ratings. They have money to make by promoting and, yes, manipulating their IFs (see the above article for just some of the ways). One can ask whether there is even a single reason for such score-keeping to be done other than for reasons of artificially constructed status hierarchies.
One motivation for this bean-counting is the heavy proliferation of online journals. Some of these are very highly respected, and deservedly so, while others are chaff or, worse, scams for making money by playing on the fears and insecurities of faculty members needing advancement. IFs may at least be of some assistance to an administrator or grant reviewer who wants an idea of a faculty candidate's record. But if IFs are systematically unreliable, or manipulated, or even reverse indicators of actual work quality, as articles like the one above have suggested, that is a rather lame rationale for using them. Administrators evaluating their faculty members' careers should look at the actual work, not just some computer-tallied score about it. That may not be easy, but administrators are well paid and accepted their jobs, after all.
There was in the past an insider Old Boy network in academe that discriminated rather arbitrarily in funding, gave over-powerful Editors control over who published and what they published, and offered less opportunity to women and cultural minorities (based on ethnic as well as university status hierarchies). To increase fairness, but also to avoid discrimination lawsuits and to play the self-promotional PR spin game, universities and their administrative officials learned the value of being 'objective' and hiding behind Excel spreadsheets rather than judgment. More objectivity did in many ways dislodge the older elite insider networks, but a system of elites has clearly re-established itself, and manipulable IFs and their associated commercial incentives have helped it regain dominance in the academic system. Academia may still be wide open in many ways, but it is heavily burdened by the game because the corporate university has become so money-oriented. This is very well documented. Things like IFs serve those interests.
The academic world will experience change, and is changing, and the new ways of communication are better and faster and more open-structured than ever. They make life more frenetic, but that will probably calm down because it's exhausting everyone. There will of course always be an elite, and for some that's a happy community to be part of. But it's not to everyone's taste. How long it will take coup-counting administrators to accept these other venues, such as online communications, is unclear, but it's happening.
Social change requires resistance to the status quo, usually organized resistance (or else money-based leverage). Bureaucracies do need to be pressured, by faculty, graduate students and post-docs, and people like Department Heads and Chairs. But, it has to happen, and it will.
Tuesday, January 13, 2015
The Genome Institute and its role
By Ken Weiss
The NIH's National Human Genome Research Institute (NHGRI) has for a long time been funding the Big Data kinds of science that are growing like mushrooms on the funding landscape. Even if overall funding is constrained, and even if this also applies to the NHGRI (I don't happen to know), the sequestration of funds in too-big-to-stop projects is clear. Even Francis Collins and some NIH efforts to reinvigorate individual-investigator R01 awards don't really seem to have stopped the grab for Big Data funds.
That's quite natural. If your career, status, or lab depends on how much money you bring into your institution, or how many papers you publish, or how many post-docs you have in your stable, or your salary and space depend on that, you will have to respond in ways that generate those score-counting coups. You'll naturally exaggerate the importance of your findings, run quickly to the public news media, and do whatever other manipulations you can to further your career. If you have a big lab and the prestige and local or even broader influence that goes with that, you won't give that up easily so that others, your juniors or even competitors can have smaller projects instead. In our culture, who could blame you?
But some bloggers, Tweeters, and Commenters have been asking if there is a solution to this kind of fund sequestration, largely reserved (even if informally) for the big usually private universities. The arguments have ranged from asking if the NHGRI should be shut down (e.g., here) to just groping for suggestions. Since many of these questions have been addressed to me, I thought I would chime in briefly.
First, a bit of history or perspective, as informally seen over the years from my own perspective (that is, not documented or intended to be precise, but a broad view as I saw things):
The NHGRI was located administratively where it was for reasons I don't know. Several federal agencies were supporting scientific research. NIH was about health, and health 'sells', and understandably a lot of funding is committed to health research. It was natural to think that genome sequences and sciences would have major health implications, if the theory that genes are the fundamental causal elements of life was in fact true. Initially James Watson, co-discoverer of DNA's structure, and perhaps others advocated the effort. He was succeeded by Francis Collins, a physician and a clever politician.
However, there was competition for the genome 'territory', at least with the Department of Energy (heir to the old Atomic Energy Commission's interest in mutation). I don't know if NSF was ever in the 'race' to fund genomic research, but one driving force at the time was the fear of the mutations that atomic radiation (therapeutic, diagnostic, and from wars and weapons fallout) generated. There was also a race with the private sector, notably Celera, a commercial competitor that would have privatized the genome sequence. Dr Collins prominently, successfully, and fortunately defended the idea of open and free public access. The effort was seen as important for many reasons, including commercial ones, and there were international claimants in Japan, the UK, and perhaps elsewhere that wanted in on the act. So politics was rife as well as science, understandably.
It is possible that only with the health-related promises was enough funding going to be available, although nuclear fears about mutations and the Cold War probably contributed, along with the usual, less savory, self-interest, to that agency's interests.
Once a basic human genome sequence was available, there was no slowing the train. Technology, including public and private innovation, promised much quicker sequencing that would soon become available even to ordinary labs (like mine, at the time!). And once the Genome Institute (and other centers such as the Sanger Centre in Britain and centers in Japan, China, and elsewhere) were established, they weren't going to close down! So other sequences entered the picture--microbes, other species, and so on.
It became a fad and an internecine competition within NIH. I know from personal experience at the time that program managers felt the need to do 'genomics' so they would be in on the act and keep their budgets. They had to contribute funds, in some way I don't recall, to the NHGRI's projects, or in other ways keep genomics in their portfolios. -Omics fields sprang up like weeds: nutrigenomics, cancer genomics, microbiomics, and many more began to pull in funding, and institutes (and investigators across the country) hopped aboard. Imitation, especially when funds and current fashion are involved, is not at all a surprise, and efficiency or relative payoff in results took the inevitable back seat: promises rather than deliveries naturally triumphed.
In many ways this has led to the current era of exhaustively enumerative Big Data: a return to 17th-century induction. This has to do not just with competition for resources, but with a changed belief system, also spurred by computing power: just sample everything and pattern will emerge!
Over the decades the biomedical (and to a lesser extent the biological) university establishment grew on the back of external funding that was so generous for so long. But it has led to a dependency. Along with exponential growth in the number of competitors, hierarchies of elite research groups developed--another natural human tendency. We all know the career limitations that result from this. And competition has meant that deans and chairs expect investigators always to be funded, in part because there aren't internal funds to keep labs running in the absence of grants. It's been a vicious, self-reinforcing circle over the past 50 years.
As hierarchies built, private donors were convinced (conned?) into believing that their largesse would lead to the elimination of target diseases ('target' often meaning those in the rich donors' families). Big Data today is the grandchild of the major projects, like the Manhattan Project in WWII, that showed that some kinds of science could be done on a large scale. Many, many projects during past decades showed something else: Fund a big project, and you can't pull the plug on it! It becomes too entrenched politically.
The precedents were not lost on investigators! Plead for bigger, longer studies, with very large investments, and you have a safe bet for decades, perhaps your whole career. Once started, cost-benefit analysis has a hard time paring back, much less stopping, such projects. There are many examples, and I won't single any of them out. But after some early splash, they have by and large reached diminishing returns without reaching any real sense of termination: too big to kill.
This is to some extent the same story with the NHGRI. The NIH became too enamored of Big Data to keep the NHGRI as limited or focused as perhaps it should have been (or should be). In a sense it became an openly anti-focused-research sugar daddy (Dr Collins said, perhaps officially, that NHGRI didn't fund 'hypothesis-based research') based on pure inductionism and reductionism, so it did not have to have well-posed questions. It basically bragged about not being focused.
This could be a change in the nature of science, driven by technology, that is obsolescing the nature of science that was set in motion in the Enlightenment era, by the likes of Galileo, Newton, Bacon, Descartes and others. We'll see. But the socioeconomic, political sides of things are part of the process, and that may not be a good thing.
Will focused, hypothesis-based research make a comeback? Not if Big Data yields great results, but decades of it, no matter how fancy, have not shown the major payoff that has been promised. Indeed, historians of science often write that the rationale, that if you collect enough data its patterns (that is, a theory) will emerge, has rarely been realized. Selective retrospective examples don't carry the weight often given them.
There is also our cultural love affair with science. We know very clearly that many things we might do at very low cost would yield health benefits far exceeding even the rosy promises of the genomic lobby. Most are lifestyle changes. For example, even geneticists would (privately, at least) acknowledge that if every 'diabetes' gene variant were fixed, only a small fraction of diabetes cases would be eliminated. The recent claim that much of cancer is due just to bad mutational luck has raised lots of objections--in large part because Big Data researchers' business would be curtailed. Everyone knows these things.
What would it take to kill the Big Data era, given the huge array of commercial, technological, and professional commitments we have built, if it doesn't actually pay off on its promises? Is focused science a nostalgic illusion? No matter what, we have a major vested interest on a huge scale in the NHGRI and other similar institutes elsewhere, and grantees in medical schools are a privileged, very well-heeled lot, regardless of whether their research is yielding what it promises.
Or, put another way, where are the areas in which Big Data of the genomic sort might actually pay, and where is this just funding-related institutional and cultural momentum? How would we decide?
So what to do? It won't happen, but in my view the NHGRI does not, and never did, belong properly in NIH. It should have been in NSF, where basic science is done. Only when clearly relevant to disease should genomics be funded for that purpose (and by NIH, not NSF). It should be focused on soluble problems in that context.
NIH funds the greedy maw of medical schools. The faculty don't work for the university, but for NIH. Their idea of 'teaching' often means giving 5-10 lectures a year that mainly consist of self-promoting reports about their labs, perhaps the talks they've just given at some meeting somewhere. Salaries are much higher than at non-medical universities--but in my view grants simply should not pay faculty salaries. Universities should. If research is part of your job's requirements, it's their job to pay you. Grants should cover research staff, supplies, and so on.
Much of this could happen (in principle) if the NHGRI were transferred to NSF and had to fund on an NSF-level budget policy: smaller amounts, to more people, for focused basic research. The same total budget would go a lot farther, and if it were restricted to non-medical-school investigators there would be the additional payoff that most of them actually teach, so that they disseminate the knowledge to large numbers of students who can then go out into the private sector and apply what they've learned. That's an old-fashioned, perhaps nostalgic(?), view of what being a 'professor' should mean.
Major pare-backs of grant size and duration could be quite salubrious for science, making it more focused and in that sense accountable. The employment problem for scientists could also be ameliorated. Of course, in a transition phase, universities would have to learn how to actually pay their employees.
Of course, it won't happen, even if it would work, because it's so against the current power structure of science. And although Dr Collins has threatened to fund more small R01 grants, it isn't clear how or whether that will really happen. That's because there doesn't seem to be any real will to change among enough people with the leverage to make it happen, and the newcomers who would benefit are, like all such grass-roots elements, not unified enough.
These are just some thoughts, or assertions, or day-dreams about the evolution of science in the developed world over the last 50 years or so. Clearly there is widespread discontent, and clearly there is large funding with proportionately few results. Major results in biomedical areas can't be expected overnight. But we might expect research to have more accountability.
Friday, October 10, 2014
The post-doc glut: who's responsible? We all are!
By Ken Weiss
A 1952 French movie called Nous sommes tous des assassins! ("We Are All Assassins") had a strong anti-capital-punishment message: when it comes to an unfair penal system, we are all assassins. We have set up a society that generates criminals for many reasons based on inequity, and we are not all equal in the face of the law. The societal culpability for avoidable inequities extends to many other areas.
This week, the Boston Globe ran a story on the glut of post-docs at the prestigious universities in Beantown. It bemoaned the long-term holding pattern that has been established: with the shrinking funding base in our current economy, many new PhDs cannot get regular full-time faculty jobs and must take post-doctoral positions instead. These positions are useful and traditional, but they were once a short year or two in which new PhDs could learn new skills, publish their dissertation research, and establish themselves. Then, there were faculty jobs awaiting.
But no longer. The reason is that we have trained too many PhDs. But why is that? Some might suggest that we've been doing our job and that the country's failure to keep expanding the grant pool is what has let us down. That's a convenient way to look at things. But the truth is more sobering: the finger of guilt needs to point not at the government, but at ourselves. This bottleneck to academic jobs is not restricted to the snooty academic world of Boston. We are all the assassins of the hoped-for career path!
In science, everyone in a faculty job, especially at professional schools where salaries must come all or mainly from grants, is naturally pressured to do whatever we can to get grants. Since that means spending most of our time writing them, we need staff to do the actual work (the research). That means post-docs, who are better than grad students because they can devote all their time to our projects. And that leads to more grants, and the more grants we get, the more promotions we get and the higher our salaries, because we have to please our Chairs and Deans.
With everyone in a faculty job being pressed to see his/her status in terms of the number of publications, we need to spend our time writing papers and that also means having staff to help write them and to do the actual work we are writing about (the research). That means post-docs! The more papers we write, the more promotions we get and the higher our salaries, because we have to please our Chairs and Deans.
With everyone in a faculty job being judged by how many graduate students s/he trains, we still need to employ, or even require, graduate students to help the post-docs do the actual work (the research). The more students we train, the more promotions we get and the higher our salaries, because we have to please our Chairs and Deans.
With everyone in the grant agencies being judged by the size of their portfolios, they will want to fund those who churn out results the administrator can use to brag about what they are doing. That, too, means more, more, more! The more churned out, the more promotions and raises the administrators get to advance their careers.
It's the system itself that needs changing. We're all smart enough to know that if each of us trains more than one new PhD we generate exponential growth in the science population. We are smart enough to know that exponential growth reaches a plateau. We are smart enough to know we are exploiting other young, innocent people by generating an unsustainable job market. And we are thus selfish enough to be doing what we are doing knowingly.
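The arithmetic behind that exponential claim is easy to sketch. Assuming, purely for illustration, that each mentor trains several PhDs over a career and that, in the worst case, every trainee goes on to mentor the same number, the cumulative pool of PhDs grows geometrically while faculty openings stay roughly flat:

```python
# Illustrative sketch (not real data): geometric growth of the PhD
# pool when each faculty member trains more than one successor.

def trainee_generations(seed_faculty, phds_per_mentor, generations):
    """Return the cumulative number of PhDs produced after each
    academic generation, assuming every trainee becomes a mentor
    in the next generation (the worst case)."""
    mentors = seed_faculty
    produced = 0
    totals = []
    for _ in range(generations):
        new_phds = mentors * phds_per_mentor
        produced += new_phds
        totals.append(produced)
        mentors = new_phds  # next generation of mentors
    return totals

# 100 faculty members, 3 PhDs each, over 4 academic generations:
print(trainee_generations(100, 3, 4))  # [300, 1200, 3900, 12000]
```

Even with modest numbers per mentor, the pool quickly dwarfs any plausible number of faculty openings; at exactly one successor per mentor the pool merely replaces itself, which is the unsustainability the paragraph above points to.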
We are all assassins!
Monday, July 7, 2014
IRBs: Insider control can't do what's expected. Part II: Loss of control going viral
By Ken Weiss
The virus that might roar
A story published in The Independent last week reported the controversial work of virologist Dr Yoshihiro Kawaoka at the University of Wisconsin-Madison. Kawaoka was in the news several years ago for manipulating the H5N1 strain of flu virus so that it could evade human immune defenses and spread among mammals; he has also worked with the H1N1 strain that was pandemic in 2009, by some estimates killing over 500,000 people (the story was covered at the time by ScienceInsider). That work was the subject of intense debate and scrutiny, and a moratorium was imposed while it underwent review. The moratorium was lifted last year, and the work eventually cleared for publication.
According to a recent piece in the Wisconsin State Journal, during the moratorium Kawaoka began to do the same kind of work with the virus that killed so many people globally in 1918. The results of that project were recently published in Cell Host and Microbe. Kawaoka's goal is to understand the kinds of genetic changes that would make these viruses circumvent human immunity to become even more infectious or more lethal. The rationale, according to The Independent, is that it will help in the development of vaccines if such genetic changes were to occur in the wild.
The problem, as many see it, is that there is no guarantee that these virulent strains won't escape from the lab and do great harm. While Kawaoka says this won't happen, other lethal experimental organisms have escaped before, and for new technologies like this such risk is always a concern. Indeed, the same holds for old technologies: the debate about whether to keep smallpox virus in labs has been going on for decades. Kawaoka's work was approved by the university's institutional review board although, according to The Independent, at least one member of the board was not willing to approve his current project.
Does the fact that Kawaoka is a star on the faculty of the University of Wisconsin, where he has been treated extremely well, influence the IRB? He is well known in the field of influenza research and has been involved in much recent work on emerging viruses, and no doubt his track record should count when his work is evaluated. But it's also possible, as always when power may be an issue, that his proposals have an easier time passing review than, say, a new researcher's would.
But what is the IRB's role here? Is it the board's job to decide what kind of risk society should be subjected to when academics do their work? Or is that the job of an inter-institutional, or governmental agency, such as the U.S. National Science Advisory Board for Biosecurity (NSABB), which reviewed, and approved, Kawaoka's earlier work?
The risk of an inadvertent epidemic or even pandemic from this research may be small to very slight, but the consequences would be so huge as to raise questions about the risk-benefit balance. The importance of the discovery, should the research be successful, could be very great as well. So there is no easy answer.
But that the University of Wisconsin allowed one of its very well-heeled faculty members to develop a modified pathogenic virus to which humans would no longer have resistance sounds like something out of Dr Strangelove. How often is this sort of thing being done in a university near you--with or without its noble IRB being aware of it?
As we noted above, the previous work that got Kawaoka and Dutch investigators into hot water involved tinkering with the H5N1 flu virus to see what it would take to escape our immune system. Their idea was essentially to test virus genomic modification on ferrets, who in many ways are similar to humans immunologically. The work was allowed to proceed after review, but in fact how can anyone guarantee that an accident won't happen? We don't happen to know the conditions of the lifting of the moratorium, but no matter how extensive the review, or how cautious the scientists promised to be, no one can be absolutely certain that an accidental release of these viruses won't occur. It reminds me of the time a little boy was getting on his bike to ride down the hill in front of our house. His father reminded him to put on his helmet before he went, in case he fell off the bike. "But Dad," he protested, "I'm not going to fall off!"
Similar concerns about recombinant DNA were raised a generation ago, and over time adequate protections were worked out and no disaster occurred that we know of. But recombinant DNA doesn't pose the kinds of dangers that virulent viruses do. And we have seen with other things, like stem cells, that scientists will do their best to find ways to do what they want to do. Scientists, and the private sector, are anxious to find new cures but are also, one must acknowledge, looking for the major profits to be made. The stem cell issue is more complicated because the objections were largely religious. Scientists may sneer at such things as ignorance standing in the way of progress, but religious people are citizens and taxpayers, and if they are the majority and aren't in favor of such a project, in a democracy perhaps that should rule, whether frustrated scientists like it or not.
And there are other issues. If a rock-star scientist threatens to leave the institution and go work elsewhere, this can be an incentive for an institution that treats faculty members like celebrities--particularly if they bring in big grant money--to compromise standards.
And, should a properly independent system, with zero vested interests, be allowed or instructed to impose research bans for some number of years, appropriate to the offense, for investigators about whom there is evidence of misleading the IRB, or doing things not approved or even disapproved?
It is, as in most similar kinds of situations, difficult to see how policy should be formed and implemented. After all, even amoral scientists are still scientists and citizens, and if they think something should be done, they have their votes, too. And major public good might often also entail risks.
The IRBs were started in the wake of abuses by Nazi and other scientists, including the most respected pillars of their societies, and including in our own country, as we mentioned last week. That history showed that scientists can't automatically be trusted not to do harm, intentionally or even inadvertently. But many of us feel that the tenor of the committees has drifted from that proper gate-keeping job to a primary function of protecting the institution against lawsuits, part of a general trend in universities that is stifling in many ways, as well as costly in time and resources.
Our mistaken mixing of messages
Making decisions is not easy, but there should be a balance of power. In Thursday's post on IRBs, however, we mixed two aspects of bioethics. One was the treatment of research subjects, human or otherwise. The other was priorities for spending society's resources (both are involved in our discussion here as well). The issues overlap somewhat, but we probably should have kept them separate. IRBs are not mandated to deal with research priorities or societal concerns, though they do have to judge whether a project violates those concerns, and whether doing some procedure on mice or other animals is warranted for the stated purpose of a project.
The peer review and policies of funders are the bodies that deal with research priorities. My view is, as stated in Part I and elsewhere is that our priorities often too much depend on vested interests. That is because agencies like NIH ask scientists what should be the next research priority. Indeed, as I have seen directly several times, an agency like the National Academy of Sciences, entrusted with advising the government, can be paid by an NIH agency to hold a meeting about priorities, at which the agency's funded clients, and agency administrators, attend. This is, essentially, insider trading and the NAS should not accept such contracts. However, how to set priorities is not an easy thing to decide, since asking scientists their view is begging for self-interest to be at play, yet scientists know better than the public what the issues are.
In this sense, humans or animals are involved in projects that subject them to conditions that are allowed because of the social politics of the funding and academic career apparatus. Are we out of proper alignment with what most would agree are appropriate societal priorities? The payoff in actual public or scientific good is often, I think, far below what is promised. This is of course a value judgment, but so are all IRB decisions and policies.
In any case, the Wisconsin issue that triggered these comments is more closely related to IRBs and its degree of real control of research ethics than about whether funds should be spent on this type of project rather than some other. Here, in fact, the story as written suggests serious abuse of what IRBs should rightly be policing. One can argue that the knowledge being sought would properly have very high societal priority (because it deals with dangerous infectious disease), but that's a separate question.
More generally, the funding priority issue may often even more important than the safer, local IRB protections. Billions of dollars go to feed the established research system, making it very self-aggrandizing and far less innovative than it might be if funding commitments, mega-longterm projects and the like were not so entrenched. Instead of spending mega-bucks on more Big Data surveys we might focus funding on problems that were well-posed enough to be soluble. This is again a societal issue about how resources are used, or captured, which does, of course, go beyond local IRB concerns that we were mainly intending to comment on.
So, while the ethical issues are not entirely separate, it confuses things to mix them as I did in our previous post.
A story published in The Independent last week reported the controversial work of virologist Dr Yoshihiro Kawaoka at the University of Wisconsin-Madison. Kawaoka was in the news several years ago for manipulating the H5N1 strain of flu virus so that it could evade the immune defenses that much of the world developed when the virus was pandemic in 2009, killing over 500,000 people (the story was covered at the time by ScienceInsider). That work was the subject of intense debate and scrutiny, and a moratorium was imposed while it underwent review. The moratorium was lifted last year, and the work was eventually cleared for publication.
According to a recent piece in the Wisconsin State Journal, during the moratorium Kawaoka began the same kind of work with the virus that killed so many people globally in 1918. The results of that project were recently published in Cell Host & Microbe. Kawaoka's goal is to understand the kinds of genetic changes that would let these viruses circumvent human immunity and become even more infectious or more lethal. The rationale, according to The Independent, is that this will help in the development of vaccines should such genetic changes occur in the wild.
The problem, as many see it, is that there is no guarantee that these virulent strains won't escape from the lab and do much harm. While Kawaoka says this won't happen, other lethal experimental organisms have escaped before, and for new technologies like this such risk is always a concern. Indeed, the risk is a concern even for old technologies: the debate about whether to keep smallpox virus in labs has been going on for decades. Kawaoka's work was approved by the university institutional review board although, according to The Independent, at least one member of the board was not willing to approve his current project.
Does the fact that Kawaoka is a star on the faculty of the University of Wisconsin, where he has been treated extremely well, influence the IRB? He is well known in the field of influenza research and has been involved in much recent work on emerging viruses, and no doubt his track record should count when his work is evaluated. But it's also possible, as always when power may be an issue, that Kawaoka's proposals have an easier time passing review than, say, a new researcher's would.
But what is the IRB's role here? Is it the board's job to decide what kind of risk society should be subjected to when academics do their work? Or is that the job of an inter-institutional, or governmental agency, such as the U.S. National Science Advisory Board for Biosecurity (NSABB), which reviewed, and approved, Kawaoka's earlier work?
The risk of an inadvertent epidemic or even pandemic from this research may be small to very slight, but the consequences would be so huge as to call the risk-benefit balance into question. The importance of the discovery, should the research be successful, could be very great as well. So there is no easy answer.
But that the University of Wisconsin allowed one of its very well-heeled faculty members to develop a modified pathogenic virus to which humans would no longer have resistance sounds like something out of Dr Strangelove. How often is this sort of thing being done in a university near you--with or without its noble IRB being aware of it?
As we noted above, the previous work that got Kawaoka and Dutch investigators into hot water involved tinkering with the H5N1 flu virus to see what it would take to escape our immune system. Their idea was essentially to test virus genomic modification on ferrets, which are in many ways immunologically similar to humans. The work was allowed to proceed after review, but in fact how can anyone guarantee that an accident won't happen? We don't happen to know the conditions of the lifting of the moratorium, but no matter how extensive the review, or how cautious the scientists promised to be, no one can be absolutely certain that an accidental release of these viruses won't occur. It reminds me of the time a little boy was getting on his bike to ride down the hill in front of our house. His father reminded him to put on his helmet before he went, in case he fell off the bike. "But Dad," he protested, "I'm not going to fall off!"
Similar concerns about recombinant DNA were raised a generation ago, and over time adequate protections were worked out and no disaster occurred that we know of. But recombinant DNA doesn't pose the kinds of dangers that virulent viruses do. And we have seen with other things, like stem cells, that scientists will do their best to find ways to do what they want to do. Scientists and the private sector are both eager to find new cures and also, one must acknowledge, looking for the major profits to be made. The stem cell issue is more complicated because the objections were largely religious. Scientists may sneer at such things as ignorance standing in the way of progress, but religious people are citizens and taxpayers, and if they are the majority and aren't in favor of such a project, in a democracy perhaps that should rule, whether frustrated scientists like it or not.
And there are other issues. If a rock-star scientist threatens to leave the institution and go work elsewhere, this can be an incentive for an institution that treats faculty members like celebrities--particularly if they bring in big grant money--to compromise standards.
And should a properly independent system, with zero vested interests, be allowed or instructed to impose research bans of some number of years, appropriate to the offense, on investigators for whom there is evidence of misleading the IRB, or of doing things that were not approved or even explicitly disapproved?
It is, as in most similar kinds of situations, difficult to see how policy should be formed and implemented. After all, even amoral scientists are still scientists and citizens, and if they think something should be done, they have their votes, too. And major public good might often also entail risks.
The IRBs were started in the wake of abuses by Nazi and other scientists, including the most respected pillars of their society, and including in our country, as we mentioned last week. That showed that scientists can't automatically be trusted not to do harm, intentionally or even inadvertently. But many of us feel that the tenor of the committees has drifted from that proper gate-keeping job to a primary function of protecting the institution against lawsuits, part of a general trend in universities that is stifling in many ways, as well as costly in time and resources.
Our mistaken mixing of messages
Making decisions is not easy, but there should be a balance of power. However, in Thursday's post on IRBs, we mixed two aspects of bioethics. One was about treatment of research subjects, human or otherwise. The other was about priorities for spending society's resources (both are involved in our discussion here as well). The issues overlap somewhat but we probably should have kept them separate. IRBs are not mandated to deal with research priorities or societal concerns, though they do have to judge whether a project violates those concerns, and whether doing some procedure on mice or other animals is warranted for the stated purpose of a project.
Funders' peer-review panels and policies are the bodies that deal with research priorities. My view, as stated in Part I and elsewhere, is that our priorities too often depend on vested interests. That is because agencies like NIH ask scientists what the next research priority should be. Indeed, as I have seen directly several times, an agency like the National Academy of Sciences, entrusted with advising the government, can be paid by an NIH agency to hold a meeting about priorities, which the agency's funded clients and agency administrators attend. This is, essentially, insider trading, and the NAS should not accept such contracts. However, how to set priorities is not an easy thing to decide, since asking scientists for their views is begging for self-interest to be at play, yet scientists know better than the public what the issues are.
In this sense, humans or animals are involved in projects that subject them to conditions that are allowed because of the social politics of the funding and academic career apparatus. Are we out of proper alignment with what most would agree are appropriate societal priorities? The payoff in actual public or scientific good is often, I think, far below what is promised. This is of course a value judgment, but so are all IRB decisions and policies.
In any case, the Wisconsin issue that triggered these comments is more closely related to IRBs and their degree of real control over research ethics than to whether funds should be spent on this type of project rather than some other. Here, in fact, the story as written suggests serious abuse of what IRBs should rightly be policing. One can argue that the knowledge being sought would properly have very high societal priority (because it deals with dangerous infectious disease), but that's a separate question.
More generally, the funding priority issue may often be even more important than the safer, local IRB protections. Billions of dollars go to feed the established research system, making it very self-aggrandizing and far less innovative than it might be if funding commitments, mega long-term projects and the like were not so entrenched. Instead of spending mega-bucks on more Big Data surveys we might focus funding on problems that are well-posed enough to be soluble. This is again a societal issue about how resources are used, or captured, which does, of course, go beyond the local IRB concerns we mainly intended to comment on.
So, while the ethical issues are not entirely separate, it confuses things to mix them as I did in our previous post.
Thursday, July 3, 2014
IRBs: Insider control can't do what's expected. Part I: some history
By Ken Weiss
We are supposedly able to sleep peacefully in the security of our homes because Institutional Review Boards (IRBs) are on guard to protect us from harm at the hands of universities' Dr Frankensteins. But the system was built by the potential Frankensteins, so any such comfort goes the way of any belief that people can police their own ethics, especially when money is involved. This is shown by a recent revelation in the news (the short version: scientist creates flu strain that human immune system can't fight, with IRB approval), that we'll be seeing more about in the near future. So, get your face mask on and head under the covers if you want to sleep in peaceful bliss.
First, however, a brief history of IRBs
What protects us from mad scientists?
The idea of IRBs arose largely not from Frankenstein but from abuses, especially courtesy of the Nazis. Absolutely horrid crimes by almost any standard were committed in the name of research. It wasn't just the Germans. The Nuremberg Code for research, which stipulates essentially that it must involve voluntary consent, do no harm, have some benefit, and so on, was one result.
But abuses weren't patented by the Nazis. Anatomists at least as far back as Galen did vivisection, at least on mammals and perhaps on humans. People still object to vivisection--animal research--and if you knew what is allowed you might join them, even though the rationale is, as it has been since ancient times, that we make the animals suffer ultimately to relieve human disease. Of course, we claim the animals aren't suffering, on various rationales (they aren't sentient, aren't conscious, aren't really suffering, .....).
The abuses before WWII didn't stop what happened afterwards. The well-documented Tuskegee study of southern black men affected by syphilis was, once revealed for its cruelty, another motivation for current IRBs. A similar study in Guatemala, and shady research done in Africa because it can't be done here, all show the pressure needed to keep scientists under control. The formal requirement for each research institution to form an IRB to review and approve any research done there has led to very widely applied general standards, in principle consistent with the Nuremberg Code. More recently, up-to-date issues, like confidentiality in the computer-data era, have been added.
The idea is that the IRB will prevent research that violates a stated set of principles from being done in their facilities or by their employees. Over the past few decades, everyone entering the research system has become aware (indeed, via formal training, has been made to become aware) of these rules and standards. Every proposal must show how it adheres to them.
So, the rationale behind IRBs is unquestionably good, and much that is positive has resulted. In broadest terms, we each know that we must pay attention to the ethical criteria for conducting research. Of course, we are humans and the reality may not match the ideal.
From ideal to institutionalization
IRBs are committees composed of a panel of investigators from the institution (though even there one can't review one's own proposals), plus administrators working for the institution, and at least one 'community' member. The latter may be a minister, nurse, or some other outsider.
The idea is that each institution knows its own circumstances best, and that having its own independent IRB is better than some meddling government behemoth like, say, NIH, making decisions from the outside (when NIH is, for example, the funder that will decide what gets funded--an obvious conflict of interest). So those in, say, Wisconsin, know what's ethical for cheesedom, while Alabamians and San Franciscans have their own high ethical sense of self-restraint.
But this is a system run by humans, and over the decades it has become something of a System. For example, perhaps you can imagine how a non-academic member from the community, even a minister, might be cajoled or cowed by the huge majority of insiders, the often knowingly obfuscating technological thicket of proposals, and so on. As is also a problem for any peer review system, IRB members may or may not be anonymous, but within an institution even if they are, their identity can certainly be discovered. They know, even if it's never said out loud, that if they scotch a proposal from someone on their campus, that person will be on the IRB in the future and could return the favor. This can obviously be corrupting in itself, even if the IRB members take the care required to read each proposal carefully, and even if everything proposed is clearly stated. Sometimes they do, but being on the board is a largely thankless task and how often do they not take that care?
It is not hard to see how IRBs will pay close attention to the details and insist on this or that tweak of a proposed protocol, what I call safe ethics. They certainly do impose this sort of ethics--ethics that don't really stand in the way of what their faculty want to do. But they may be reluctant to simply follow Nancy Reagan and just say 'no' to a major proposal.
IRB members from the administration are bureaucrats whose first instinct is to protect their institution (and, perhaps, their own jobs?). They want to avoid public scandal and obvious abuse, but every proposal that is rejected is a proposal that can't be funded, and won't bring in overhead money and generate publications for the institution to boast about. I have personally known of a case in a major university medical school whose administrator-member unashamedly (though privately) acknowledged discouraging their IRB from rejecting proposals because the institution wanted the overhead. You can guess whether research that ordinary people, people without a vested interest, might consider objectionable--such as unnecessary harsh experiments on hapless mice or other animals or studies that could jeopardize human confidentiality but with realistically scant likelihood to discover anything really important--is going to get a pass. Maybe the investigator will be asked for some minor revisions. But a lot of dicey research gets approved.
There are professional bioethicists in most large research-based universities including medical schools. They may have PhDs in ethics per se, and can be very good and perceptive people (I've trained some myself). They write compelling, widely seen papers on their subject. But in most cases they live directly or indirectly on grant funds. They may get 5% or so of their salary on a grant as the project's ethicist. Their careers, especially in medical schools, depend on bringing in external funds. This is almost automatically corrupting. Do you think it affords any sort of actual protection of research subjects for more than some rather formal issues like guaranteeing anonymity that usually few would object to? How likely is it that a project's pet ethicist can say simply "No, this is wrong and you can't do it!"? Surely it does sometimes happen, but since ethicists must make their own careers by being part of research projects, this really is an obvious case of foxes guarding hen-houses.
The National Human Genome Research Institute (NHGRI) at NIH has had some fraction, we think 3%, of its research budget mandated to cover ethics related to genomic studies. Decades of experience show that this should be re-named 'safe ethics'. NIH does protect (where possible) against plagiarism, unethical revealing of subject identities, and that sort of thing. But not against whole enterprises they want to fund that might be very wasteful (e.g., the funds would buy much more actual health--the 'H' in NIH--than, say, another mega-genomics study). This is a truly and deeply ethical issue that cuts to the bone of vested interests, even in this case of the NHGRI. If such things have ever been prohibited, we don't know of them, and they surely are the exception rather than the rule. Even research that is harmless in the human-rights sense, if very costly, is an ethical affront to the competing claims of even more important things society could do with its funds. But reports from the NIH ELSI (ethics) meetings have always been entirely consistent with the view I'm laying out here.
The truth is that in science, as in other areas of human affairs, money talks, and mutual or reciprocal interests lead to a system predominated by insider-trading. The untold millions being spent on countless studies of humans or other animals, whose serious payoff to the society supporting them, if any, is no closer than light years away, is, in my opinion, offensive. Peer review is not totally useless by any means, and doesn't always fund the insiders, but there are certainly major aspects of interlocking conflicts of interest in science.
Scientists are experts at hiding behind complex technical details and rhetoric, and we are as self-interested as any other group of humans. We have our Frankensteins, who are amorally driven to study whatever interests them, rationalizing all the way that they're just innocent babes just following Nature's trail, and if what they do might be harmful (to humans, forget what it might do to mice, who can't vote) it's up to the political system, not scientists, to prevent that. It's an age-old argument.
One must admit that having bureaucrats and real outsiders make decisions about what sort of research should be allowed has its own problems. Bureaucrats have careers, and often live by protecting their bailiwicks and the thicket of rules by which they wield power. There aren't any easy answers. And not all scientists are Frankensteins by any means; most are truly hoping to do good. But the motivation to do whatever one wants even if, or perhaps especially if, it is edgy and has shock value is often coin of the realm today.
Tomorrow, in Part II, we'll take a look at a recent example, the influenza research mentioned at the beginning, of what is at worst an abject failure, and at best a questionable lack of institutional oversight of an institution's own IRB.