Thursday, May 2, 2013

You keep citing our paper. I don't think it means what you think it means.

I haven't published much. Activity in my profile isn't so hectic and exponential that I can't dig into things a little more deeply than your more prolific scholar can.

So as I continue the line of research kicked off by our paper (Metabolic hypothesis for human altriciality), I naturally want to read the articles that cite it. Google Scholar is great for showing me those. And while I dig around, I also have the opportunity to see why these authors cited our paper.

For background, here's a short synopsis of our paper that I'm Google-stalking:
Humans are thought to be born when we're born to escape the bipedally-adapted, gestation-constraining birth canal ("obstetrical dilemma"), but it's more likely that we're born when we're born because that's all the fetal tissue our mother can grow ("EGG: energetics of gestation and growth"). 

That's it. There's a lot more in the paper but that's the gist. Now, how's it being cited?

Let's go paper by paper. Don't worry. There are only five.
Increased morphological asymmetry, evolvability and plasticity in human brain evolution
Our paper is #69 in this one. Here's where it's cited:
"Biomechanical [68] or metabolic [69] constraints causing human altriciality may have provided another key preadaptation for the evolution of modern human cognition by allowing an increased period of postnatal modelling of the developing brain via the interaction with complex social and cultural environments [70]."
Considering the results of energetic limits and metabolic constraints to be adaptive ranks pretty far up there on the Panglossian scale. Regardless, this is a perfectly good citation of our paper. Thanks to the authors for not citing us later in the sentence.

Good? Yes. 

Global Geometric Morphometric Analyses of the Human Pelvis Reveal Substantial Neutral Population History Effects, Even across Sexes
Our paper is #90 in this one. (OoA = Out of Africa) Here's where it's cited:
"Our analyses testing for obstetrical constraints in shape variation indicated no difference in the neutral OoA pattern between males and females, a difference that would be expected if constraints were stronger in females than in males. This result is consistent with Tague’s [51] finding that males are not necessarily more variable in pelvic morphology than females. It is also consistent with recent suggestions that the obstetric dilemma may be influenced more by maternal energetics than pelvic morphology per se [90,91]."
Good? Yes.
Many ways to die, one way to arrive: How selection acts through pregnancy
Our paper is #23 in this one. (GDM = gestational diabetes mellitus.) Here's where it's cited:
"GDM and preeclampsia are common diseases, with grave consequences in pregnancy, and thus may strongly impact upon reproductive fitness. GDM affects 4–20% of pregnancies in different populations worldwide [19]. It can cause macrosomia, in which the fetus grows too large to fit through the maternal pelvis [20–23]."
Here is the only part of our paper that mentions diabetes:
"uncontrolled gestational diabetes is commonly associated with postterm parturition (Langer O, Kozlowski S, Brustman L. 1991. Abnormal growth patterns in diabetes in pregnancy: A longitudinal study. Isr J Med Sci 27:516–523.)"

Notice how our paper talks about postterm parturition, not fetal size or fetal size relative to the maternal pelvis. Maybe the paper we cite does.

Good? No.
Bony pelvic canal size and shape in relation to body proportionality in humans
Here's where it's cited:
"This “obstetric dilemma” (Washburn, 1960) has recently been questioned on biomechanical (Warrener, 2011; but see Whitcome et al., 2012) and energetic grounds (Dunsworth et al., 2012)."
Good? Yes. Although it could also go under the biomechanical part of that sentence.
Teaching the principle of biological optimization
Our paper is #4 in this one. Here's where it's cited:

"It has been hypothesized that the timing of human birth optimizes the ability of cognitive and motor neuronal development in the child by allowing the child to maximize the absorption of important cultural information (memes) in its environment [4]." 
Yes it has, but not by us. Here's what our paper says (by the way, we were forced to address this by reviewers):
"Finally, a fourth possibility, originally proposed by Portmann (5), is that the timing of human birth and degree of neonatal brain development optimizes cognitive and motor neuronal development (50)."
Our paper points to others that should be dug up when possible; if that's not possible, then the author should make it explicit that readers are being pointed to references within our paper, not to our paper itself.

Good? No. 
Sure, it's a small sample size, but people aren't as good as they should be at citing our paper. I know I've made these same mistakes, and I know there's no way coauthors and reviewers can catch all of them, especially in interdisciplinary work, but I just thought I'd pipe up and go through this exercise here to remind myself and others to do our homework better.

But beyond a nice little reality check, there's a larger issue... Can I really consider all these citations meaningful indicators of my research's value? Are citations truly reflecting the impact of my work if they misrepresent it? Should I include this little exposé in my portfolio come promotion and tenure time to demonstrate how I should, in reality, have a lower citation count... that I should have fewer points in this game?


Here's why I won't be revealing any of this to my evaluators: I'm being compared against others whose citation totals are also padded with misrepresentation. It's only fair that I assume the same advantages that everyone else does. Who cares about all the larger implications for Scholarship, Evidence, and Knowledge when my job is on the line? I need those points.

So I change my tune. I don't care how you cite our paper. Thank you for citing! Cite as you wish.

Note: Earlier in 2013 I read The Princess Bride for the first time and YOU ARE MISSING OUT if you haven't.


Holly Dunsworth said...

There are a few ways to go with this post. One is to consider that, according to google, the most citations I have for a single peer-reviewed paper is 24. Compare that to my blog post "Deep Time the Movie" which (according to google) has 5,932 hits. And to bring it 'round... that's a blog post about a peer-reviewed paper in PaleoAnthropology that (according to google) has NEVER been cited.

Holly Dunsworth said...


"Forget bipedalism, what about babyism" (a blog post about a new discovery in paleoanthro put into context thanks to my dissertation) has over 5,200 hits. My dissertation has 3 citations.

Ken Weiss said...

This is a very fine example of the contemporary problem, and the need for a solution. If we are counting impact and not just some arbitrary score, then impact has to be updated.

For someone who 'hasn't published much', Holly, I think it is fair to say that you are quite prominent, well-known, and highly respected in your profession. That is clear from the annual meetings. And how many have the talent to do the kind of class-A blogging and social-media work that you do?

PNAS is a fine journal with high reputation and impact. But impact isn't just how many libraries subscribe, or how many may look at the articles, or cite them in the usual (now rather stodgy) way, not to mention citing-without-actually-reading-much-less-understanding.

Impact should be what's real, not what the middle-class workers guild (us) want to use as our scorecards. And, I can say from personal experience, this has changed. All the automated citation counting and impact factoring and Winner-take-all ranking grew gradually. It is the once useful but now lascivious product of computerization (Big Data, that so many self-interested people are so highly touting now).

The idea that one could see what is being cited by looking at a published (that is, printed on real paper at the time) Science Citation Index, was helpful. It also allowed you to look up papers by author names, and as I recall by topic. This was terrific when the alternative was to literally browse every issue of every journal one thought might be relevant.

But it turned into a bourgeois ranking system over the years. People cottoned to the idea that self-citation was rewarded (then, poor Dept Heads evaluating their faculty had to wade through citations to remove self-citations, which was difficult with the on-paper editions).

Then, people who had talked to an author while s/he was typing the manuscript got the idea they should become authors of the paper, too, and those who formerly were acknowledged became authors, padding CV's with meaninglessly inflated publications and the journals with 100-author papers.

But the ISI and other citation 'services' knew where their middle-class bread was buttered, and the vanity component grew, to their great profit. It is easier to count citations than to evaluate actual impact, substance, etc.

I could go on, but it's not necessary. What should be necessary, are better ways to judge actual impact. No easy challenge.

Jennifer said...

mmm.... so if I should write and publish a paper about goat gestation, labor, delivery and maturing, I should cite your paper a few times? I'll keep it in mind!

Holly Dunsworth said...

If only your goats birthed singletons I'd have been knocking on your door for such a study :)

Jason Hodgson said...

I'm curious to know whether your 60% good citation rate for this paper is typical. In my limited experience I find that my papers are seldom cited for the reasons I imagine they will be cited when writing them. My coauthors and I have a paper, really just a comment on another paper, that has a 100% rate of being cited to say the exact opposite of what we actually say, for example.

I wonder how this happens, but then just yesterday I was trying to dig up a citation for some idea that I knew I had cited in an old paper I had written. I found the citation and then reread the paper to be clear about what I was writing. For the life of me I cannot figure out why I cited that particular paper for that particular idea. There was absolutely nothing in the paper to support the citation as far as I could find. I'm guilty too.

Holly Dunsworth said...

I have no idea what's typical, and I'd imagine it depends on the kind of paper it is. But I know for a fact that numbering citations (especially for those of us who waive our middle fingers at the "helpful" macros that do such a thing) can lead to mistakes in linking the text to the proper reference in the list at the end. Beyond that, I think there's also a real difference between scholars in what they think defines a good citation. I think just citing a paper instead of the citations inside it is totally acceptable to some, but all it shows is that they didn't go to the primary sources themselves, and that's not scientific. If citations are meant to conform to the scientific method, if they're to demonstrate evidence, then we have to use them as such, and when they're not explicitly that, we should write things like "see refs in" citation or "contrary to" citation.

Holly Dunsworth said...

waive was such a Freudian slip. I waive the use of reference managing software!

Ken Weiss said...

Once a number of years ago when I was on sabbatical learning some lab procedures, I was told by someone that a particular stage wasn't necessary. I asked why that stage was in all the protocol books, and they said they didn't know but skipping the stage made no difference and nobody in the lab I was visiting bothered to use that stage.

So, out of curiosity, I chased down the original citation that had proliferated through all the lab handbooks and Methods sections of papers. In fact, it was something some paper used, it was mentioned in passing, it was not justified in terms of why it was done, etc.

This is a common phenomenon and occurs especially in widely used textbooks. Stephen Jay Gould once used the example of the characterization of an early horse ancestor (Eohippus, if I recall correctly) as being the size of a terrier, a characterization repeated over generations of textbooks, when there was no basis for that.

Holly Dunsworth said...

"as noted by" citation. "as mentioned briefly and nearly off topic in" citation.

Holly Dunsworth said...

So far it seems like people like our mention of the little adaptation-heavy hypothesis (Portmann's "extrauterine spring") that we were forced to acknowledge by reviewers (who did not provide a name or reference) even though we did not address it in our research.

Ken Weiss said...

This discussion shows the potential value of a future post (and, hence, discussion) on the way science works vis-a-vis how it claims it works in science textbooks and the way people describe what they think they are doing and their rationales.

In this case, despite a fog of mis-citation, do we progress reasonably well, or are we often badly misdirected until some form of corrective occurs?

Anne Buchanan said...

There's another issue -- well, many issues, but I'll just mention one that Holly's last comment brings to mind. Scientists, or perhaps I should say even scientists, being people, tend to see what they want to see. And since they're usually looking for support for their theory, not a way to destroy it (though in fact Darwin was ever vigilant for the piece of information that was going to be the death knell for his theory), that's what they find. Even if they have to force the fit.

Science lumbers on, encumbered by the humanness of its practitioners who bring to it things like citation counts and impact factors. Fortunately for science, I think it's fair to say that eventually the truth breaks free. Just not always fast enough for promotion and tenure committees.

Ken Weiss said...

Eventually revisions break free, and false ideas fade, may be more accurate. Your comment is relevant, too, to the silliness of the Popperian idea that science works via falsification, and that our daily lives are spent intensely trying to falsify our ideas. What a laugh that idea is, no matter how often it's uttered!

Holly Dunsworth said...

Which brings to mind this funny conversation (which is funny for its Popperian naïveté and also for our varied notions of what love is): Is loving your hypothesis good for science?

Ken Weiss said...

People have always loved their own ideas, which of course is only natural, even if those who are reflective know that almost all ideas are wrong to at least some extent.

But the idea that we fervently, untiringly try to falsify our ideas is a serious fiction.

On the other hand, we are almost forced by societal factors to advocate ideas and try to prove them rather than disprove them. So in that sense, we 'should' love our ideas.

Unfortunately, both history and the history of the philosophy of science have shown the problems. Not least is that the old criterion of 'verification', a mainstay of early 20th century philosophy of science, was shown to be wanting, and that is partly what led to Popper's falsificationism.

So we're stuck with science as a groping game embedded in all our human foibles. Of course, that doesn't mean we shouldn't try to do better or to critique what we're doing (including our love affairs).

Holly Dunsworth said...

I think it's fine to love our ideas as long as we know they belong to everyone, not just *me.* But that's not the perspective you find running amok out there in scienceland. Too many ideas are *mine* or LastName's Hypothesis Such and Such.

Holly Dunsworth said...

Once my name's on it, then it *has* to be kept up. Falsifying it will be falsifying me and my intellect rather than bringing the collective one step toward the truth. We've made the quest for shared truth about me.

Anne Buchanan said...

Yes, and the reward system, from tenure to Nobel Prizes, perpetuates that.

Ken Weiss said...

Right. This started perhaps with Euclid, Ptolemy, Aristotle and Plato, so there is a long tradition personalizing ideas (and then there's Christ, Mohammed, Buddha, etc.).

Then, I think, the Enlightenment period, starting around 400 years ago, entrenched this idea with Galileo, Newton, Darwin, and so on. Then, in the 20th century, the Media Age, we celebrated the genius with deep insight, personified by Einstein etc.

And now, the more middle-class this enterprise becomes, and the more therefore part of mass culture, the more we seek the recognition.

Actually, we're lucky in the 'hard' sciences, since in other areas of academe, esp. the humanities and social sciences, the problem is often grotesquely worse: citing someone's ideas by name as if that makes them true.