As in many departments, our graduate students and post-docs here in the Penn State Anthropology Department hold weekly 'journal clubs' where recent interesting papers are discussed. Last week, the students discussed the nature, value and importance of tabulations of journal impact factors (IF), basically the citation rate per published paper. There have been many papers and commentaries on this subject in recent years, but this session focused on a paper by Brembs, Button and Munafo entitled "Deep impact: unintended consequences of journal rank," published in 2013 in Frontiers in Human Neuroscience.
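For readers who haven't met the metric before, the standard two-year IF is just a ratio: citations received in one year to what the journal published in the previous two years, divided by the number of 'citable items' it published in those years. Here is a minimal sketch of that arithmetic, in Python, with invented numbers purely for illustration:

```python
# Sketch of the standard two-year journal impact factor.
# All figures below are hypothetical, chosen only to show the arithmetic.

citations_in_2013 = 840        # citations counted in 2013 to items from 2011-2012
citable_items_2011 = 150       # research articles and reviews published in 2011
citable_items_2012 = 170       # research articles and reviews published in 2012

impact_factor_2013 = citations_in_2013 / (citable_items_2011 + citable_items_2012)
print(round(impact_factor_2013, 3))  # 840 / 320 = 2.625
```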
The IF scandal
This article assesses the assigned IF of journals relative to their rates of retraction, error or fraud, and the reliability or replicability of their findings. The objective picture of the IFs is not encouraging. Statistical analysis shows that the 'major' journals--the expensive, exclusive, snobbish high-status ones--are, in terms of the quality and accuracy of their content, no better, and arguably worse, than the less prestigious journals. We won't go into the details, but basically there is a rush to publish dramatic results in those status journals; the journals are in business to attract and generate attention, and that leads them to receive, and publish, splashier claims. In this self-reinforcing pattern they garner the honors, so it is worrisome that they appear to do this without actually publishing the best research or, worse, while systematically publishing unreliable work.
Our student seminar discussed this problem and the effect it may have on their careers, if their work is going to be judged by a somewhat rigged, and inaccurate, scoring system. If they can't get into the high-IF elite publishing club, which is somewhat self-reinforcing, how can they compete for jobs and grants and get their work known?
We have several reactions to this. For students and others who may have a stake in (if not in the heart of!) the system, here are a few thoughts on alternatives to the high IF journals. The picture is grim, but in some surprising ways, not at all hopeless.
Some thoughts for students:
First, TheWinnower, founded by Josh Nicholson, a graduate student at Virginia Tech, is a new online site where one can submit papers but also where blogs and other such new-media communications can be published, and these publications are given a DOI (digital object identifier) and hence are more regularly citable and permanently archived. It's but one of many new communication venues. A lot of what is on these media is of course superficial fluff, but (Sssh!! don't tell anyone!) so is a lot of any sort of publication, even (believe it or not!) in the 'major' journals, and so has it always been, even in the old-time printed journals of yore.
Secondly, there are allies in any movement towards change, not just from the grass roots where pressure for social change usually arises. There are thoughtful and progressive administrators, and serious scholars and scientists, who are resisting the pressure to use IF score-counting in career evaluations, purportedly to make them more 'objective'.
And there are many people making their way, in various ways and to a considerable degree, through blogs, open-access publishing, online teaching, and communicating with people via Twitter and other outlets (most of which we, being quite senior, probably don't even know of!). Writing for public media of all sorts has always been a mainline, legitimate way to build careers in anthropology, especially on its sociocultural side. But generally, critiques of the system at all levels, such as repeated revelations about score-counting bureaucracies and IF biases, as well as objections to closed-access publishing, will have their impact if they are repeated often and loudly enough.
Thirdly, ironically and reassuringly perhaps, the tightening of grant prospects and the well-documented concentration of funding in the hands of senior investigators mean that more people will have to rely less on grants, and their university employers will simply have to recognize that. Teaching and other forms of service, scholarship, and outreach will simply have to be reinvigorated. Universities aren't just going to close shop because their grant funds shrink. They're not even going to be able to keep shifting towards hiring poorly paid Instructors. So the field is open for innovation and creativity.
Fourthly, also ironically, the greater the rush to the Big Journals, the better it may be for the job prospects of current grad students. Why? Well, fewer people in the running for each job will have such publications on their CVs than perhaps was the case in the past. As long as applicants realize that others will want the same jobs they do, and they develop their skills and depth of thought accordingly, they'll compete well. After all, colleges and universities will simply not be able to hold out for those few with BigName publications, even if they wanted to. They'll be 'stuck' having to evaluate people on their actual merits. And, not so trivial as you might think, most of their faculty haven't got BigName papers either, and might not want to be outshone by adding a junior hyper-achiever to their midst. Indeed, many less research-intensive but wholly academically serious places feel, correctly, that applicants for faculty positions who have BigName publications don't really want to work there and will move on as soon as they can get a 'better' job, and/or in the meantime won't be dedicated to teaching, students and the local institution. So things aren't always as dire or as one-sided as they seem--even if times are relatively difficult right now.
Fifth, if the intense rigors of the research-intensive Fast Lane appeal to you, well, you know the gig and its competitive nature, and if you get your advanced degree from a fine and well-regarded program, that will give you a chance at grabbing the brass ring. Those avenues are of course open, even if highly competitive.
"Painted Pony Bean" by Liveon001 © Travis K. Witt - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons - |
But why does anyone even tally such things as impact factors?
An obvious question should be why anybody would tally impact factors in the first place. Who has what to gain? The answer has to be that it is in someone's interest and someone will gain by it. After all, when some of us started our careers, there was no such thing (or rather, its earlier incarnation, the Science Citation Index, sat remotely in the library, was laborious to search, and was usually consulted only for legitimate literature searching). Scholarship itself was on average at least as good as now, careers were made without bean-counting but more on merit and substance, bean-counting expectations were lower (and respect for teaching higher), and the grant game was much, much less intense.
IF scores are computed by a commercial company, Thomson Reuters, as what, exactly? As a favor to the publishing industry, and for what we would call a kind of academic bourgeois market for baubles and vanity. Journals self-promote by gaming their IFs; universities self-promote by gaming their faculty's IF ratings. They have money to make by promoting and, yes, manipulating their IFs (see the article above for just some of the ways). One can ask whether there is even a single reason for such score-keeping other than the construction of artificial status hierarchies.
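To make the 'manipulation' concrete: one tactic often described (including in the Brembs et al. paper and elsewhere) is negotiating which items count as 'citable' in the denominator, while citations to everything the journal publishes still count in the numerator. A purely hypothetical sketch of that arithmetic, with invented numbers:

```python
# Hypothetical illustration of denominator gaming; all numbers are invented.

citations = 1000          # citations to everything in the two-year window
research_articles = 300   # items counted as 'citable'
front_matter = 200        # editorials, news and commentary

# If every published item is counted in the denominator:
unadjusted_if = citations / (research_articles + front_matter)   # 2.0

# If front matter is reclassified as 'non-citable' (its citations still count):
negotiated_if = citations / research_articles                    # ~3.33

print(round(unadjusted_if, 2), round(negotiated_if, 2))
```

The underlying work hasn't changed at all, yet the score jumps by two thirds, which is the sense in which such a metric can be promoted and gamed.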
One motivation for this bean-counting is the heavy proliferation of online journals. Some of these are very highly respected, and deservedly so, while others are chaff or, worse, scams for making money by playing on the fears and insecurities of faculty members needing advancement. IFs will at least be of some assistance to an administrator or grant reviewer who wants to have an idea of a faculty candidate's record. But if the IFs are systematically unreliable, or manipulated, or even reverse indicators of actual work quality, as articles like the one above have suggested, that is a rather lame rationale for using them. Administrators evaluating their faculty members' careers should look at the actual work, not just some computer-tallied score about it. That may not be easy, but administrators are well paid and accepted their jobs, after all.
There was in the past an insider Old Boy network in academe that discriminated more arbitrarily in funding decisions, with over-powerful Editors who controlled who published and what they published, and less opportunity for women and cultural minorities (based on ethnic as well as university status hierarchies). To increase fairness, but also to avoid discrimination lawsuits and to play the self-promotion PR spin game, universities and their administrative officials learned the value of being 'objective' and hiding behind Excel spreadsheets rather than judgment. More objectivity did in many ways dislodge the older elite insider networks, but a system of elites has clearly re-established itself, and manipulable IFs and their associated commercial incentives have helped some players reestablish dominance in the academic system. The system may still be wide open in many ways, but it is heavily burdened by the game because the corporate university has become so money-oriented. This is very well documented. Things like IFs serve those interests.
The academic world will experience change, and is changing, and the new ways of communication are better and faster and more open-structured than ever. They make life more frenetic, but that will probably calm down because it's exhausting everyone. There will of course always be an elite, and for some that's a happy community to be part of. But it's not to everyone's taste. How long it will take coup-counting administrators to accept these other venues, such as online communications, is unclear, but it's happening.
Social change requires resistance to the status quo, usually organized resistance (or else money-based leverage). Bureaucracies do need to be pressured, by faculty, graduate students and post-docs, and by people like Department Heads and Chairs. But it has to happen, and it will.
5 comments:
One reader sent us an email to point out some other very well-regarded online sites: arXiv and bioRxiv. They are free preprint servers that work well for physicists and mathematicians, and will work well for biologists after they run out of government money.
Open review sites put one's ideas on the line, properly, and there are many quite substantial papers that appear on them which also are noted by the science news media.
And there are Aeon and Nautilus, high-level online magazines comparable in quality to The Atlantic, Harper's and others of that sort.
There are many ways to make waves....
Authenticity is a big issue in scientific publishing, because how else does one establish that one is a genuine scientist?
In the past, it was done through the career reputation of scientists, and any co-author of a reputed scientist got a boost in authenticity. I remember that model from my days in physics, and it was the reason for many young students to join reputed labs.
That model broke down once professors stopped taking responsibility for failures in jointly published papers. Francis Collins was one of the earliest to pull that trick. So Nature and Science came in to fill the vacuum and claimed that they were more authentic than others.
If scientists are forced to face consequences for being joint authors on bad papers, we may go back to the old model. Otherwise, the free websites are irrelevant. Peer review, etc., can all be stacked.
Much has changed. In the past, authors really were authors, and each was expected to be able to explain the paper on short notice--that is, to really know what was done. Collaborators or technicians and so on were listed in the Acknowledgments, not as authors. Careerism and Excel bean counting have changed that.
There were always issues about quality (though rarely about honesty), and often the Big Boys didn't include their students as co-authors even if the student did the work (I think that may still happen, or students' roles are minimized in the author-list order).
I think there is no magic solution; reputation will always be a factor, elites will always gain by their university affiliation, etc. But perhaps more open forums give people a better chance to be recognized and, in hyper-competitive grant and Big Journal markets, more ways to communicate in science.
The online open review sites like arxiv and biorxiv are part of an improving system. But far more improvement is called for.
There is another reason TR publishes the IF: libraries pay upward of 30k per year in subscriptions to the various TR databases. These bundles most often include the IF database.
TR acquired the ISI (which generates the IF) about ten years ago, IIRC, and it's been quite a lucrative business ever since.
Reply to Bjoern,
I remember the old ISI in-print Science Citation Index, which, years ago as a Department Head, I used as a guide in evaluating faculty members. It was then but a pale shadow of its ominous self today. Of course, it's a business, and bundling has coerced libraries into paying for many things they shouldn't (all the academic publishers use such intimidation). And the publishers are spawning new journals regularly, it seems.
As long as there is no real resistance the system will stay as it is, or keep growing until the bursting point. Online publishing may help, one would think, except that the libraries are, naturally, being squeezed by publishers for their bundled subscriptions.
Universities are now viewing themselves as businesses so can't really object to being treated as such. But not enough people are mad enough at being squeezed and Excel-evaluated to change things....not yet anyway.