Tuesday, July 31, 2012

Hands rough as sandpaper

Is climate change real?  If so, have humans caused it?  Too often the debate is just words, unrelated to real world consequences, except when politicians use it for political advantage, or maybe when we've got to use more air conditioning or do more shoveling than we used to. 

Mario Machado, a student in a class Ken and I co-taught a few years ago, is now a Peace Corps volunteer in Paraguay.  And a blogger; he blogs here and here.  His thoughts and experiences are always interesting and well-worth reading, but his July 29th post on the Tones of Home blog is, we thought, worth mentioning on MT for what it tells us about the frequent disconnect between science and people just living their lives.

Mario calls his post "The Other Side of the Climate Debate" in part, we're guessing, because he's on the other side of the world now, and in part because he's looking at climate change through the eyes of rural farmers in a poor part of Paraguay where people frequently are living hand-to-mouth.  He's seeing what they see, hearing their thoughts about the very real effect a changing climate is having on their lives and their prospects for staying on the land -- while scientists, politicians, business people and the rest of us in the rich world debate whether or not it's real.

The people he's working with have no doubt.  They can work the fields in sandals and t-shirts this winter; the rains come unpredictably or not at all.  The effects are manageable so far, but they anticipate they won't stay that way.  As Mario says, "these are the very people who will shoulder the brunt of the burden that a gradually warming planet will bring."
The science, even at its best, cannot adequately convey in humanistic terms the social impacts of rising global temperatures. To grasp this properly, sit in a field with a farmer somewhere in the third world, ask him about his family and how his crops have been in the past few seasons. Ask his wife how far she must walk for water or what she does to care for her sickly child. Over dinner, look at their calluses as they handle their fork and pray that their hands will always feel as rough as sandpaper, for the day these scars of the trade have faded is the day that the land has nothing left to give.
Thanks again, Mario, for a reminder that there are people at the other end of this debate, and that science has consequences.  The issue is not just to provide jobs, journals, and media venues to argue in, for the advancement of well-paid, secure careers.  It's real.

Monday, July 30, 2012

That [obstetrical dilemma] really tied the [human evolution] together. Part 1.

Some impressive colleagues and I* are about to have a paper published that pulls the rug out from under a classic paleoanthropological hypothesis/theory.

Scratch that. If we're gonna do this Lebowski theme right, I just have to say that our paper will have “micturated upon” an old beloved rug, I mean story.

Trust me. Our research did not come from a “shut the [bleep] up, Donny” dismissive sort of place nor did it come from a “the bums will always lose” holier-than-thou sort of place. Nothing like that.

It came from honest to goodness seeking to understand, man.

Here's the old beloved story:

The obstetrical dilemma (OD) hypothesis = Simultaneous selection for big-brained (or simply big) babies and bipedal locomotion caused a dilemma because while babies must be large, birth canals must remain small. The consequences of this dilemma, which are often called “solutions” and “tradeoffs,” include (1) difficult and dangerous childbirth with universal assistance due to the tight fit, (2) relatively underdeveloped, helpless, often termed “secondarily altricial” neonates compared to all other primates which are precocial, and (3) compromised or sub-optimal female locomotion, since (4) selection has favored sexual dimorphism in the human pelvis with females having not just relatively wider but absolutely wider dimensions of the birth canal.

Notice how--like the way that a nice oriental rug spiffs up a dumpy Los Angeles living room--the OD skillfully ties together many unique or fascinating phenomena in human evolution, such as human bipedalism, human encephalization, hellish human childbirth, helpless (i.e. hellish?) human babies, male-biased human athletic ability, and broad ladies' hips.

And we haven't proven this story wrong. But we have thrown some serious doubt on it, demonstrating how little of it holds up to current evidence.

“This is our concern, Dude.”

Way back in grad school, my wise advisor Alan Walker gave me a copy of Adolf Portmann's A Zoologist Looks at Humankind, in which Portmann argues against a pelvic constraint on human gestation and fetal growth (the OD). So that primed me to carry some doubt in this OD world.

Then later, in 2007, I was post-doc-ing with Nina Jablonski and immersed in the mammalian life history, energetic, and encephalization literature. It occurred to me that, Oh goodie! I’ll find out how selection could have shortened our gestation as we became encephalized but as selection also maintained our small birth canals for bipedalism.

And not only did I strike out all around, but the mammalian life history literature looks as if there's absolutely nothing constrained about human gestation length or the timing of birth. If anything, it looks like we're weird in the other extreme… having slightly longer gestations than other primates and having relatively big babies. Leading up to birth we're actually souped-up primates, not limited ones. That we're not particularly different in these terms, and definitely not limited, has all been known for decades. I was late to the party.

When you look at Bob Martin's work, and others' like it (below), you see that the size of the mammalian mother predicts the length of gestation and the size of the offspring.



Manger, PR. 2006. An examination of cetacean brain structure with a novel hypothesis correlating thermogenesis to the evolution of a big brain. Biol. Rev. 81: 293–338. "Fig. 17. Allometric plot of the relationship between neonatal (Mbirth) and adult body mass (Mb) in three orders of eutherian mammals. The data used in this plot are derived from that given in Nowack (1999)."


These predictions hold even when you look across mammals that have single births or litters and note how this includes encephalized mammals, like whales, that don't even have bony birth canals!




Sacher GA, Staffeldt EF (1974) Relation of gestation time to brain weight for placental mammals: implications for the theory of vertebrate growth. Am Nat 108(963): 593-615....Using an equation that takes into account neonatal brain weight, litter size, and “brain size advancement” (neonatal brain weight / adult brain weight), they predicted the gestation length for species of animals with known gestation length. These few variables, which exclude any pelvic dimensions, were successful at predicting gestation length in the vast majority of species in their study, including humans and the cetaceans, which lack constricting bony birth canals.
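Their approach can be sketched as a simple log-linear allometric model. Here's a minimal illustration with made-up coefficients -- these are placeholders, not Sacher and Staffeldt's fitted values -- showing the structure: gestation length predicted from neonatal brain weight, litter size, and brain advancement, with no pelvic dimension anywhere in the equation.

```python
import math

def predict_gestation_days(neonatal_brain_g, litter_size, brain_advancement,
                           a=2.1, b=0.25, c=-0.3, d=0.5):
    """Log-linear allometric model in the spirit of Sacher & Staffeldt (1974):
    log10(gestation in days) as a linear function of log10(neonatal brain
    weight), log10(litter size), and log10(brain advancement, i.e. neonatal
    brain weight / adult brain weight).  Coefficients a-d are HYPOTHETICAL
    placeholders for illustration, not the fitted values from their paper."""
    log_t = (a
             + b * math.log10(neonatal_brain_g)
             + c * math.log10(litter_size)
             + d * math.log10(brain_advancement))
    return 10 ** log_t
```

The structure makes the point in the text: a handful of maternal and neonatal life-history variables do all the predictive work, and a pelvic term never enters.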


It seems so obvious that there's an energetic limit to what a pregnant mammal mother can do. And it seems so not obvious that humans are exceptional as the OD would have it. And this perspective was strengthened after I read Peter Ellison's book On Fertile Ground: A Natural History of Human Reproduction in my spare time in the field one summer, fitting in a few pages after each hot day of Proconsul hunting, after retiring each night in my canvas tent on Rusinga Island.


“This [hypothesis] will not stand, man.”

So with all this research out there showing how birth seems to be limited primarily by maternal metabolism, why this notion that we’re compromised by our pelves? Why this notion that we could or should keep babies in our wombs longer if it wasn't for bipedalism keeping our birth canals too small for gestating any longer, for growing bigger babies?

After all, there was Anna Warrener (now at Harvard) presenting her dissertation research at our annual conference showing how wide hips aren't so bad for locomotion. And she cited other papers with similar results!

So why hadn't the OD been reevaluated yet, given all this stuff? I wasn't sure. And to be honest, this disconnect, this weakness of the obstetrical dilemma hypothesis, seemed so obvious to me that after I completed my first draft of the manuscript in 2008, I decided not to do anything with it. It seemed so obvious that writing it up felt absolutely, ridiculously pointless. But once I mustered up the gumption to send the manuscript out to several close friends (like Ken and Anne!) for a read, and none of those clever folks said it was ridiculous, that's when I felt some encouragement. It wasn't ridiculous; it made sense. I just happened to be the first person my friends had known to put it together.

Or so I thought.

Here I was ready to put this idea out there and then one of my readers, Jeff Kurland, a brilliant and beloved professor from my grad school days at Penn State mentioned, “I'm pretty sure Terry Deacon presented this same thing at the AAPAs back in the 1980s or ‘90s.”

This is when your heart sinks because you just went to all that trouble only to find out that someone already beat you to it. Again, I went back to thinking that my ideas must be so obvious to everyone in the field now. But no searching came up with any Deacon pubs on the topic so I wrote to him directly and he confirmed that he hadn't published it, but he shared the manuscript that he'd stopped working on long ago. It fit with mine in many ways--this doubt of the certainty that the bipedal pelvis is limiting further gestation length and fetal growth--and since he'd presented it publicly, I asked if he'd like to be on the paper and he did. This is when your heart soars because someone so clever shares your thinking. This idea is not ridiculous! (Plus, even if he had published it already, my paper could have been a much updated contribution and still not pointless. Hopefully I would have understood that, or at least someone would have shaken some sense into me.)

“You’re out of your element, [Holly].”

Right around the same time as I was getting this nice feedback from colleagues and friends, I spoke to another close colleague, Herman Pontzer, about it. He's known as the "energetics guy" among other things, so I figured he was the perfect litmus test. And it seemed fairly straightforward to him. Again, it's not ridiculous. Hooray! I wasn't crazy! Plus now I had this fully capable human being on board, ready to replace my stolen figures from other pubs, to make similar points but with updated data. And, even better, he could test this hypothesis about maternal energetics by plotting out the data from various data sources. All was falling into place.

"This is what happens when you [birth] a [baby] in the [pelvis]!" ... “A world of pain.”



As we were putting our story together, I joked to a brand new mother, "You know, there's no obstetrical dilemma." And got a sharp-tongued, Oh yes there is, honey. I just went through labor. Hell yes there is!

In future such discussions, I was always sure to add, "...it's all energetics. A mom gives birth when she does because she can't possibly put any more energy into growing that fetus."  And some moms who hear that are like, Duh! I could have told you that!

You cannot win. At least I could not. But it was still encouraging.

By this point, we (Anna, Herman, Terry and I) had joined forces with Peter Ellison and we submitted our paper to a major journal for review. And I'll be back with a little digest of that paper and some thoughts on it a bit later...

Update, Sept. 1, 2012: Here's the next post. 

****

*Feynman's "half-advanced and half-retarded potentials" describes us nicely, with me as the latter.

Friday, July 27, 2012

Genomic scientists wanted: Healthy skepticism required

Everyone makes mistakes
...but geneticists make them more often.  A Comment in this week's Nature, "Methods: Face up to false positives" by Daniel MacArthur, and an accompanying editorial are getting a lot of notice around the web.  MacArthur's point is that biologists are too often too quick to submit surprising results for publication, and scientific journals too eager to get them into print -- much more eager than they are for studies reporting results that everyone expected.

This is all encouraged by a lay press that trumpets these kinds of results often without understanding them and certainly without vetting them.  Often results are simply wrong, either for technical reasons or because statistical tests were inappropriate, wrongly done, incorrectly interpreted or poorly understood.  The evidence of this is that journals are now issuing many more retractions than ever before.

Peer review catches some of this before it's published, but not nearly enough; reviewers are often overwhelmed with requests and don't give a manuscript enough attention or sometimes aren't in fact qualified to do so adequately.  And journal editors are clearly not doing a good enough job.

But, as MacArthur says, "Few principles are more depressingly familiar to the veteran scientist: the more surprising a result seems to be, the less likely it is to be true."  And, he says, "it has never been easier to generate high-impact false positives than in the genomic era." And this is a problem because
Flawed papers cause harm beyond their authors: they trigger futile projects, stalling the careers of graduate students and postdocs, and they degrade the reputation of genomic research. To minimize the damage, researchers, reviewers and editors need to raise the standard of evidence required to establish a finding as fact.
It's, as the saying goes, a perfect storm.  The unrelenting pressure to get results that will be published in high-impact journals, and then The New York Times, which can make a career -- i.e., get a post-doc a job or any researcher more grants, tenure, and further rewards -- combined with journals' drive to be 'high-impact' and newspapers' need to sell newspapers, all discourage time-consuming attention to detail.  And, as a commenter on the Nature piece said, in this atmosphere "any researcher who [is] more self-critical than average would be at a major competitive disadvantage."

That time-consuming attention to detail would include checking and rechecking data coming off the sequencer, questioning surprising results and redoing them, driven by the recognition that even the sophisticated technology biologists now rely on for the masses of data they are analyzing can and does make mistakes.  Which is why each base is often sequenced 30 or more times (30-fold coverage or higher) before the result is deemed good enough to believe.  But doing it right takes money as well as time.

Skepticism required
And a healthy skepticism (which we blogged about here), or, as the commenter said, some self-criticism.  You don't have to work with online genomic databases very long before it becomes obvious -- at least to the healthy skeptic -- that you have to check and recheck the data.  Long experience in our lab with these databases has taught us that they are full of sequence errors that aren't retracted, annotation errors, incorrect sequence assemblies and so on.  And results based on incorrect data are published and not retracted, but they are very obvious to, again, the healthy skeptic who checks the data. MacArthur cautions researchers to be stringent with quality control in their own labs, which is essential, but they also need to be aware that publicly available data are not error-free, so results from comparative genomics must be approached with caution as well.

We've blogged before about a gene mapping study we're involved in.  We've approached it as skeptics, and, we hope, avoided many common errors that way.  This of course doesn't mean that we've avoided all errors, or that we'll reach important conclusions, but at least our eyes are open.

But just yesterday we ran into another instance of why that's important, and how insidious database errors can be.  We are currently characterizing the SNPs (variants) in genes that differ between the strains of mice we're looking at to try to identify which are responsible for morphological differences between them.

The UCSC genome browser, an invaluable tool for bioinformatics, can show in one screen the structure of a gene of choice for numerous mammals.  One of the ways a gene is identified is by someone having found a messenger RNA 'transcript' (copy) of the DNA sequence.  That shows that the stretch of DNA that looks as if it might be a gene actually is one.  We were looking at a gene that our mapping has identified as a possible candidate of interest, and noticed that it was much, much shorter in mice than in any of the other mammals shown.  If we had just asked the database for mouse genes in this chromosome region, we'd have retrieved just this short transcript.  We might have accepted this without thinking and moved on, but this is a very unlikely result given how closely related the listed organisms are, so we knew enough to question the data.

But we checked the mouse DNA sequence and other data and, sure enough, while no transcripts as long as those reported in other mammals have been reported in mice, the additional parts of the possible gene, corresponding to what is known to be in other mammal transcripts, do exist in the mouse DNA.  This strongly suggests that nobody has reported the longer transcript, but that it most likely exists and is used by mice.  Thus, variation in the unreported parts of the mouse genome might be contributing to the evidence we found for an effect on head shape. But it took knowledge of comparative genomics and a healthy skepticism to figure out that there was something wrong with the original data as presented.
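The kind of cross-species sanity check that caught this short mouse annotation is easy to automate. Here's a minimal sketch (the species set and transcript lengths are hypothetical, not our actual data) that flags any species whose annotated transcript is suspiciously short relative to the median across species:

```python
def flag_short_transcripts(transcript_lengths, min_fraction=0.5):
    """Given {species: annotated transcript length in bp}, return the
    species whose annotation is much shorter than the median across
    species -- a hint that the annotation, not the genome, is incomplete."""
    lengths = sorted(transcript_lengths.values())
    n = len(lengths)
    median = (lengths[n // 2] if n % 2
              else (lengths[n // 2 - 1] + lengths[n // 2]) / 2)
    return [species for species, bp in transcript_lengths.items()
            if bp < min_fraction * median]

# Hypothetical annotated lengths (bp) of one gene across five mammals:
annotations = {"human": 4200, "chimp": 4150, "dog": 4000,
               "cow": 3900, "mouse": 900}
print(flag_short_transcripts(annotations))  # → ['mouse']
```

A flag like this doesn't prove the annotation is wrong, of course; it just tells the healthy skeptic where to go look at the underlying DNA, as we did.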

Not a new realization
There is a wealth of literature showing many reasons why first reports of a new finding are likely to be misleading -- either wrong or exaggerated. This is not a matter of dishonest investigators! But it is a matter of too-hasty ones. The reason is that if you search for many things, those that by pure statistical fluke pop out are the ones that get noticed. If you're not sufficiently critical of the possibility that they are artifacts of your study design, and you take the results seriously, you will report them to the major journals. And your career takes off!
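The "pure statistical fluke" mechanism is easy to demonstrate with a simulation. In this minimal sketch (all the numbers are arbitrary), every comparison is between two samples drawn from the same distribution -- there are no real effects anywhere -- yet a predictable handful of tests come out "significant," and those are exactly the ones that would get noticed and written up:

```python
import random

def count_false_positives(n_tests=1000, n_per_group=20, z_crit=1.96, seed=1):
    """Run n_tests two-group comparisons where both groups are drawn from
    the SAME standard normal distribution, and count how many look
    'significant' by a z-test on the difference of means.  With no real
    effects at all, roughly 5% of tests still pop out at |z| > 1.96."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_tests):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        mean_diff = sum(a) / n_per_group - sum(b) / n_per_group
        z = mean_diff / (2 / n_per_group) ** 0.5  # known unit variance
        if abs(z) > z_crit:
            hits += 1
    return hits
```

Run a thousand such null comparisons and you can expect around fifty "discoveries" -- none of them real, all of them publishable-looking.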

A traditional football coach once said of forward passes that there are three things that can happen (incomplete, complete, intercepted) and only one of them is good... so he didn't like to pass. Something similar applies here. If you are circumspect, you may:

1. Later have the let-down experience of realizing that there was some error -- not carelessness, just aspects of luck, or things like problems with the DNA sequencer's ability to find variants in a sample, and so on.

2. Not get your first Big Story paper, much less the later ones that refine the finding (that is, acknowledge it was wrong without actually saying so).

3. Worse, if it's actually right but you wait until you've appropriately dotted your i's and crossed your t's, somebody else might find the same thing and report it, and they get all the credit! Whereas if you rush: you may be wrong and later data may dampen your results, but nobody remembers the exaggeration, Nature and the NY Times don't retract the story, your paper still gets all the citations (nobody 'vacates' them the way Penn State's football victories were vacated by the NCAA), and you already got your merit raise based on the paper..... you win even when you lose!

So the pressures are on everyone to rush to judgment, and the penalties are mild (here, of course, we're not talking about any sort of fraud or dishonesty). Again, many papers and examples exist pointing the issues out, and the subject has been written about time and again. But in whose interest is it to change operating procedures?

Even so, it's refreshing to see this cautionary piece in a major journal. Will it make a difference? Not unless students are taught to be skeptical about results from the very start. And the journals' confessions aren't sincere: Tomorrow, you can safely bet that the same journals will be back to business as usual.

Thursday, July 26, 2012

Salt and stomach cancer -- the data aren't terribly convincing

Warning!
A new cancer scare has been all over the British press -- the World Cancer Research Fund (WCRF) is recommending that because salt has been linked with stomach cancer, "traffic light" color-coded food labeling should be required of all processed foods, the most significant source of dietary salt.  The WCRF says that people should eat 6 grams of salt per day, or about a teaspoon; 75% of salt consumption is from processed food, only 25% added at the table.

Source: BBC
How does salt cause cancer?  The explanation that comes with these new warnings is that salt damages the stomach lining, which leads to disease -- as far as we can tell, this is based on experimental studies feeding rodents high salt diets.  Of course, this doesn't really explain it since cancers involve genetic mutations, whether inherited or somatic. 

Some papers, including some rodent studies, suggest it's an association with Helicobacter pylori, the very widespread bacterium that causes ulcers. Or, it's not added salt but salt-processed foods like meats and pickles. Whatever the mechanism, The Guardian reports that an estimated 14% of stomach cancers could be prevented if salt consumption were reduced (this estimate comes from the WCRF).  We blogged about salt and cancer a while ago (here; sorry, ugly table in that post), but now that the story is back, we thought we'd take a look at how definitive the data are.

But...
First, as we point out every time we blog about dietary factors and epidemiology, it's extremely difficult to get reliable data on any food consumption, and salt is no exception.  Data are almost all retrospective (collected after the fact).  They are either population-based, with the correlation between salt consumption and stomach cancer rates made from population-level data, or based on individual data: dietary recall (sometimes asking people to remember their diet a year ago or more), food diaries, which are notoriously unreliable, or household food intake, which assumes everyone in the household eats the same diet, and so on.  There's no good way to figure out how much salt people eat in the long run, nor is it clear how long before a tumor appears excess salt intake would be carcinogenic, nor for how long.  And the few prospective studies of salt and cancer (following healthy people for years and assessing their diet along the way) have had conflicting results.  And so on.

So, at the very best, the data are rough.  But don't just take our word for it. A 2011 paper in the British Journal of Cancer on lifestyle factors and cancer says this:  "Although it is currently not possible to pinpoint exactly what constituents of diet are protective against cancer, there is a consensus that diet is an important component of cancer risk."  Not terribly helpful.

The same paper states: 
The difficulties in estimating salt consumption in epidemiological studies probably contribute to the very heterogeneous findings; nevertheless, the consensus view, most recently expressed in the WCRF report (2007), is that salt intake (as well as sodium intake and salty and salted foods) is a probable cause of gastric cancer.  
The calculation of excess risk assumes a simple log-linear increase in the risk of gastric cancer with increasing salt intake. The evidence for this is somewhat equivocal: it is apparent for total salt use in cohort but not case–control studies, whereas for sodium intake it was also apparent in case–control studies; for salted and salty foods, the reverse was observed (dose–response relationship in case–control but not cohort studies; WCRF, 2007).
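To make the quoted "log-linear increase" assumption concrete: under that model, each extra gram of salt above a baseline multiplies gastric cancer risk by a constant factor, and a preventable fraction like the WCRF's 14% figure falls out of the arithmetic. A minimal sketch, with a hypothetical slope beta that is not the WCRF's estimate:

```python
import math

def relative_risk(intake_g, baseline_g=6.0, beta=0.05):
    """Log-linear dose-response: each gram of salt above the baseline
    multiplies risk by exp(beta).  beta is a hypothetical slope chosen
    for illustration only."""
    return math.exp(beta * (intake_g - baseline_g))

def preventable_fraction(current_g, target_g=6.0, beta=0.05):
    """Fraction of cases avoided if average intake fell from current_g to
    target_g -- valid only if the log-linear model actually holds."""
    rr = relative_risk(current_g, target_g, beta)
    return 1.0 - 1.0 / rr
```

The fragility the quote points to lives entirely in beta and in the log-linear form itself; if the true dose-response isn't log-linear, any preventable-fraction estimate built on it inherits the error.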
A book chapter on stomach cancer ("The Epidemiology of Stomach Cancer," in a 2009 book called Cancer Epidemiology) states
The best established risk factors for stomach cancer are Helicobacter pylori infection, the by far strongest established risk factor for distal stomach cancer, and male sex, a family history of stomach cancer, and smoking. While some factors related to diet and food preservation, such as high intake of salt-preserved foods and dietary nitrite or low intake of fruit and vegetables, are likely to increase the risk of stomach cancer, the quantitative impact of many dietary factors remains uncertain, partly due to limitations of exposure assessment and control for confounding factors [italics ours].
And, confounders again...
And, there's the problem of confounders, variables that may be relevant but aren't measured or are difficult to control for.  Every story we've seen about the salt and cancer link this week includes this quote:
Kate Mendoza, head of information at the [WCRF], said: "Stomach cancer is difficult to treat successfully because most cases are not caught until the disease is well-established.
"This places even greater emphasis on making lifestyle choices to prevent the disease occurring in the first place – such as cutting down on salt intake and eating more fruit and vegetables.
Cutting down on salt and eating more non-processed foods.  That's changing two factors, but it's hard to measure the effect.  Are fruits and vegetables actually protective?  Or is it that replacing the bad processed foods with neutral fresh foods decreases salt exposure, and thus cancer risk?  How does that get sorted out?

No one eats their daily 8.6 grams of salt alone.  High salt consumption generally implies high processed food consumption, and/or high nitrate consumption.  It's very hard to control for that -- that is, to determine that it's the salt and not any other component of the diet that might be associated with stomach cancer.  This is true of most studies of dietary components.  And, if the researcher just loads up rats with salt and measures its effects, that's an unnatural test of salt consumption and may have no correspondence with what's actually happening in human stomachs dealing with a normal diet.

We're not convinced
So we, at least, are not entirely convinced that salt alone is a significant cancer risk.  This, from the same British Journal of Cancer paper we cite above, doesn't help make the case:
The likely adverse effect on cancer risk in the UK is small, as the incidence of gastric cancer is low (gastric cancer ranks only 13th in terms of incidence in the UK, with incidence rates well below the European average (CRUK, 2011)). Average [salt] consumption in the UK is around 10g per day, and had shown little change between 1986–7 and 2001 (Food Standards Agency, 2004)...There is no direct evidence from intervention studies of the benefit of reduced salt intake with respect to gastric cancer. In Japan, the national dietary policy has resulted in declines in dietary salt intake, and there has been an equivalent reduction in the incidence of gastric cancer (Tominaga and Kuroishi, 1997); however, there have been other changes in prevalence of gastric cancer risk factors – notably in prevalence of infection with Helicobacter pylori (Kobayashi et al, 2004) – and thus the part played by salt reduction is far from clear.
Note that the WCRF is recommending people consume 6g of salt daily, but consumption in the UK is now around 10g per day -- and gastric cancer rates are low.  In fact, until the 1990s stomach cancer was the leading cause of cancer death worldwide, but mortality had been falling for decades, and currently stomach cancer is "relatively rare" in North America and Northern and Western Europe, although it is still common in Eastern Europe, Russia, and parts of Central and South America and East Asia (data from a 2009 paper in the International Journal of Cancer).

There is some thought that the year-round availability of fresh fruits and vegetables is responsible for this, and improvements in cancer detection and treatment may account for some of the reduction in mortality as well. But if salt is the strong risk factor the WCRF is suggesting it is, given that average salt consumption in the UK is almost twice the recommended amount, stomach cancer rates should not have been falling.

So, a quick morning's review of the data on salt and stomach cancer leaves us unconvinced that the data are really solid on this.

Wednesday, July 25, 2012

All of it. Wanting it and having it, for each of us is our own. And whether or not we got all we imagined, that our dreams and our realities were ours was good. Because that's all there was.


Sally Ride
wired.com

I’m sensitive about the notion of “having it all.”  Who isn’t?

Why Women Still Can't Have It All by Anne-Marie Slaughter is a great read. And I'm grateful that she wrote it. It illuminates how misleading and anachronistic “having it all” now sounds to so many of us. And it reveals how potentially damaging it is if we continue to cling to this ideal, unexamined.

Here I've tried to compose my thoughts since reading Slaughter's piece, in a reasonably small space and in a reasonably coherent manner.


A case for replacing “HAVING IT ALL” with “HAVING ALL THE CHOICES, WITHIN REASON AND MEANS”

If we’re going to take down “having it all” we should define it, first.

Having it all - Describes the lifestyle in which a human being is a parent while having a successful career.

There are two major assumptions implicit in “having it all”:

1. A much higher frequency of men “have it all” compared to women.

2. The gender gap in “having it all” is due to the combination of gender (culture) and sex (biology) inequality.

Sounds simple enough. But it’s actually not that simple. Nobody said that you thought it was, but I wanted to put it out there before I articulate my main problem with it, which is:

On the one hand we like to define “having it all” as the same for both men and women. We've put it on a pedestal as something that men can have far more often than women.

Yet on the other hand, as Slaughter points out, women's conception of “having it all” is actually more intense than what men who "have it all" have. And this different, souped-up model of "having it all” as a woman is why “having it all” is harder for women to achieve than for men.

Criteria for "successful career," however defined, can be equal for both men and women, but on the parenting ticket, things can be far from equal. Being a good mother takes more, physically and immediately, out of a woman than being a good father takes out of a man.

Don’t point your hackles at me! That’s what the article says and that’s what we all understand, men and women alike, no matter how much we squirm when we read it or hear it. No matter how much or how little we've been professionally trained in biology and ecology, we know this because we live it. Mothers must be more devoted to their children to be good mothers than fathers must be to be good fathers. That’s accepted as fact. And our cultural expectations reflect that. I have no idea if this "fact" is quantitatively supported anywhere, but that doesn’t actually matter. Showing, scientifically, that good mothers invest more of their physical selves than bad mothers, and that good mothers invest more of their physical selves than good fathers, would be a redundant exercise, wouldn’t it?

“Having it all” for a woman involves more on the parenting end of the equation, so it means going beyond most of the men who “have it all.” This, therefore, means that women don’t actually want to “have it all” in the traditional sense, in the masculine sense, anymore. Many women may go for that now and they may have gone for that in the past because they wanted to then too, or because they had to in order to have a shot at career success. And all of that brought us, thankfully, to here. But here is different now as a result. Now women like Slaughter and those she describes want to “have it all” woman-style. And I think that’s one of Slaughter’s main points. “Having it all” for a woman is harder to achieve for inescapable fundamental parenting reasons. "Having it all" for women or for the men who parent like women, is only, as she points out, for the super-lucky, the super-rich and the super-human.

So now that we've established that the "having it all" women want isn't about equality but something beyond it, we can be honest about what "having it all" means. It means dreaming big. Perhaps the biggest.

And if you can achieve your career goals and also be a good parent, and these two things are the two biggest things you want out of life, and you feel like you're "having it all," then let's celebrate! And more power to you! Men and women, both! But while some of us continue to shoot for “having it all” because that's a fine dream, and while we can support those who have this fine dream, others of us can move on to something else that may be just as challenging to make a commonplace reality but that's a bit more flexible of a dream at the same time. It's a dream of choice. A one-size-fits-all dream that, when dreamt by one, is easier dreamt by others. We can dream of “having all the choices, within reason and means.” This perspective can certainly be a path to “having it all” but it's also a path to building your life outside of being a [insert job title here] or outside of being a parent. Or outside of being either, or both, or neither.

“Having it all” is an unrealistic goal for the majority of human beings on earth. Even those who appear to “have it all” keep surging ahead. People are relentless star-shooters. People do amazing things! But what's important to talk about now, at this moment in human history, is how “having it all” is not the only dream, and it's certainly not a dream shared by many of us anymore. Not only do many of us see "having it all" as close to impossible to achieve, but it's not necessary for everyone to achieve it either: Promotion alone is great. Procreation alone is great. Plus, there are other equally valid stars to shoot for besides promotion and procreation. There are equally valid answers to "what do you do?" that don't involve your job or your family.

Some of those alternate answers sound remarkably like the stars (big and small) that we encourage kids to shoot for. And we can keep encouraging them to shoot for the stars, while also helping them see how seemingly infinite yet actually precious and few those stars for shooting at really are.

I haven't read one yet, but I hope there’s an inspiring essay out there that explains to young women or girls what I'm betting their parents and teachers rarely do: That they’ll have to choose. It's either having babies or landing the first manned and womanned spacecraft on Mars. It's probably not going to be both. (Note: Human reproduction and interplanetary travel do not mix. At least not that I know of, yet.) However, revealing this simple fact of life--that women cannot easily grow up to be both astronauts and mothers and that women who want both will probably have to choose one or the other--has to come with the acknowledgment of how mysterious "choice" really is. Like how most of the time the choices aren’t really presented to us. Often they're made by others. And most of the time life just unfolds, step by step. And then the choices are gone and there are new, unpredicted ones, far removed from babies and space travel. But, existential issues aside, maybe the message earlier in life needs to be Girls: It's highly unlikely that you'll, both, have babies and fly to Mars. So plan accordingly.

Those are the cold hard facts for the vast majority of us non-supers. And it's up to us to choose: to shoot for “having it all,” or to have a family, or to fly to Mars, or to run a business, or to run twenty-six mega-marathons, because you can't do them all. If we face those facts, it's clearer than ever how much we deserve the opportunities to choose. If we're limited in how many stars we can shoot for in this one life, we should be allowed to choose which stars those will be. So it's our job to make sure we all, all human beings, have those choices, if only to protect and enable the chances to make our own.

And maybe, as I said, the choices are an illusion, that life unfolds and we get what we get because of each step of the way, adding up. I can certainly feel that I’m where I am due to choices, that I’m driving my life, but at the same time, life's always driving me.

I can hardly imagine a scenario in which I'm not striving to be a tenured professor. I got this Ph.D. after all. I got this tenure-track job. This is what I have to do. I have to be successful, at least try my hardest. And after that I have to try to get full professor. And I have to try to get fancy awards and write fancy books and make a fancy name for myself. It's expected that I at least try, but mostly it's expected that I just do. I'm very lucky to have these expectations on me even if they do take choices away from me, because they greatly help me in making my choices, in my star-shooting.

And as for the other half of "having it all"... I have a hard time seeing how to have a baby while being so far from my own mother (a curse of the incredible luck of getting a tenure-track job), having no close girlfriends in my new hometown, facing huge expectations from my university to do research in Kenya (or to do something else monumental) in order to get tenure, and having to wait to go up for tenure until I'm 38, which is past my reproductive prime; if I wait that long to try to procreate I might miss my glorious chance at being part of the unbroken thread of life. And then, seventeen years later, I will have missed my glorious chance to relearn calculus.

Do I regret any of my life that brought me to this? Not a single thing. (Well, except one: That hundreds of sterile, disease-free men didn't seduce me before I met Kevin.)

I love my life. I love Kevin. I love being a professor. Will I "have it all" someday? Depends. Is that the point? Not even close. But it's certainly there. Pressuring me. And it's not all culture's fault. It's not all feminism's fault. It's the plain and simple fact that there's only so much wonderful stuff I can try and that I can do before I die. And I'd like to try and do it all. Who doesn't?

It feels like so many people are here with me. We're somewhat, or even greatly, relieved of the pressure to “have it all” and are less burdened by the baggage that comes with that pressure. Instead, it feels like more of us value “having all the choices, within reason and means” than this old-fashioned pipe dream to be both a loving attentive mother and a CEO, as if every woman wants to be at least one of those things! And maybe that's just us secular folks who don't believe in the afterlife, and maybe that's just us pragmatic folks who've had the American Dream discredited and even killed before our very eyes, and maybe that's just the cold hard reality of the economy shifting our goalposts for us, but it's also thanks to the struggles and victories of those women and men who came before us. Those who wanted to "have it all" and wanted us to too. They probably didn't know it, but they tried to "have it all" so we don't have to.

Choice, and valuing it, and supporting changes that enable it, seem to be on the rise, along with all the magnificent spoils. The only problem is, there are so many people making those choices, valuing a much larger definition of "having it all," carving out very personal definitions of "having it all," that there are so many people to be jealous of! Especially for those of us too far along in our lives to jump ship and even attempt to emulate them. There are just so many amazing people to celebrate and be sickeningly envious of. For starters, those assholes who will go to Mars one day.


Thinking of my mom, who didn't "have it all" because she had me, and also of Sally Ride, who didn't "have it all" because she didn't reproduce; yet both are remarkable human beings.



Thanks

... to the inspiring dialogue with my friends, especially Ellen Quillen, on Facebook about Slaughter's piece.

Tuesday, July 24, 2012

The hammer falls; now let's do some academic reform to show we've learned

The Joe Paterno statue is down, and the NCAA has hammered Penn State because of Joe's and the University's inaction toward a repeat child abuser who liked the cozy showers of our Athletic Department.  It's not fair in lots of ways, but life is that way. The University didn't see to business properly, and a number of boys were seriously harmed--even if this had nothing to do with the athletes or the rest of the university.

But the memory of Joe Paterno, the rightfully legendary coach, will now be in disgrace for a fault that will overshadow his accomplishments.  Taking away his victory total is a ridiculous part of the punishment because it's wholly unrelated to the failings. But there were certainly real faults--there was clearly too much power given to athletics and too much idolization of the coach, too much brand-consciousness and risk aversion.  Keep anything negative quiet!

So Paterno's statue, which really should never have been put up while he was still active, is gone.  In fact, as most of us here knew, he had outlived his glory days before the main events that caused all this.  Had he retired at age 70 or so, when his coaching and recruiting skills and involvement were waning, and there was a lot of sentiment for him to become a senior statesman, this wouldn't have happened.

Of course, it did happen, and we pay the price.  So what do we do?  Our new administration has been saying the university will re-balance 'academics' with athletics.  Maybe the NCAA's sanctions will force us to actually do something to match those words--which, otherwise, will just be more cotton-candy from the spin machine that is in full gear trying to do just what got us into trouble: damage control.

Now that we will have a limited-talent football team for several years, perhaps we can really take our academic responsibilities seriously.  There might be, say, 50,000 fewer people at our down-graded football games.  Maybe some of the students just won't bother to go.  This should be seen as an opportunity.

We should raise the standards for admission, to attract students who actually want to study and learn something; there might not be nearly as many as we have here now, because this won't just be a 4-year party & football attraction.  Maybe they'll actually go to class (sober, at least more of the time), and classes will be smaller.  This will surprise them, since suddenly their work will be under scrutiny....but they'll also get more actual faculty attention than they can get now.  Which is what they're paying for.

We should raise the standards for earning high grades, retention, and passing courses.  Reduce the subtle pressure on faculty to be entertainers, or to retain students who don't measure up, freeing those students to transfer, save on the cost of our very high tuition, and go somewhere more suited to their abilities and interests--and enabling us to provide a better product to the many very fine students here who deserve it.  More homework, and no graduation without basic skills compatible with a major university education.  Raise what it means to have earned a Penn State degree.

The sexual abuse scandal was dreadful, but its having been bottled up reflected a much broader pattern of image ('brand') protection; looking the other way from serious problems, hoping they'll just go away, has become a widespread institutional reflex.  It's about image and revenue--and Penn State is not the only one at this game!

There is a national problem in higher education (not to mention K-12), in which universities are processing their 'student customers' in an exploitative way for their tuition that is not so different from how we exploit 'student athletes' for ticket and TV money (except Penn State and Joe Paterno insisted that the 'student-athletes' actually go to class!).

We can perhaps take some leadership nationally now that even if we wanted to remain obsessed with sports, we won't be able to.  We'll have to find some other way to contribute in a worthy way to society.  Why not start by giving our students a much better education, when they're no longer so distracted by football?

Monday, July 23, 2012

Genomic medicine reality check

Dumping cold water on personalized genomic medicine
A news focus piece in last week's Science about cancer geneticist Bert Vogelstein is right up our alley. The piece begins, "Their lab helped reveal how faulty genes cause cancer, but Bert Vogelstein and [laboratory co-director] Kenneth Kinzler sometimes irk colleagues with their “reality check” comments on genomic medicine." Their point? Whole genome sequencing is not going to be useful for predicting who will and who won't get cancer. And they back this up with a study of disease risk in identical twins, described in an April Science Translational Medicine paper ("The predictive capacity of personal genome sequencing").

Vogelstein has long been interested in characterizing genes that are mutated in tumors, having long ago identified genes associated with the development of colorectal cancer. He and Kinzler, when the latter was a student in Vogelstein's lab, showed how the slow accumulation of mutations in previously identified genes, including tumor suppressor genes that no longer do their job when mutated, leads to tumor growth.

In the last decade, Vogelstein and Kinzler were the first lab to publish an extensive tumor exome sequence, covering all the coding regions of breast and colorectal cancers. They identified both known and novel genes involved in tumorigenesis. The work was done in the days before high-throughput sequencing was commonplace, however, and it was criticized as not having been thorough enough or well-analyzed statistically. Their results were subsequently confirmed by others, though.

Whole genome tumor sequencing is much easier and more complete now, but Vogelstein and Kinzler don't see much more to be gained with it, and they've moved on. Their recent work has involved looking at identical twins to determine whether what they call the "genometype" would allow prediction of disease risk. That is, based on the assumption that monozygotic twins share essentially the same genotype, is it possible to predict risk of disease to a second twin if the first one has it? This of course depends on the extent to which the disease is genetically determined.
This basic observation, that monozygotic twins of a pair are not always afflicted by the same maladies, combined with extensive epidemiologic studies of twins and statistical modeling, allows us to estimate upper- and lower- bounds of the predictive value of whole-genome sequencing.
On the negative side, our results show that the majority of tested individuals would receive negative tests for most diseases. Moreover, the predictive value of these negative tests would generally be small, as the total risk for acquiring the disease in an individual testing negative would be similar to that of the general population.
The authors go on to point out that this is consistent with what has been found with GWAS -- many genes explain little risk. 
Thus, our results suggest that genetic testing, at its best, will not be the dominant determinant of patient care and will not be a substitute for preventative medicine strategies incorporating routine checkups and risk management based on the history, physical status and life style of the patient.
The story is different, they point out, for rare monogenic diseases, where whole genome sequencing has already been shown to be informative -- but then, so have association studies and the like.
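To see why a negative test carries so little information, a toy Bayes calculation helps. The numbers below are invented for illustration; they are not taken from the Vogelstein and Kinzler paper:

```python
# Toy calculation of disease risk after a negative genomic test.
# All numbers are invented for illustration only.

def risk_given_negative(prevalence: float, sensitivity: float,
                        fraction_flagged: float) -> float:
    """P(disease | negative) = P(disease) * P(negative | disease) / P(negative)."""
    return prevalence * (1.0 - sensitivity) / (1.0 - fraction_flagged)

# Suppose a disease strikes 10% of people over a lifetime, and a genome
# test flags 10% of the population as 'high risk', catching 20% of
# eventual cases.  A negative test barely moves your risk:
print(risk_given_negative(0.10, 0.20, 0.10))  # ~0.089, vs. 0.10 baseline
```

Even with a test that catches a fifth of all eventual cases, the negative-test individual's risk drops only from 10% to about 8.9% -- "similar to that of the general population," exactly as the authors say.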

Why the cold water is warranted
The first point one would make is that most genetic disease susceptibility seems to be due to what is known as the constitutive genome, that is, the DNA sequence you inherited when you were just a single cell, a fertilized egg. As your cells divide and divide during life, they all carry a copy of the same genotype--almost. Each time they divide, some DNA copying errors are made, and the descendant 'daughter' cells are slightly different. Since you're made of billions upon billions of cells, you carry just as many slightly different genotypes.

Most such somatic mutations are never seen clinically. Whether they help or harm, they're just in a single cell, and their effects are swamped by the sea of surrounding cells in the same tissue, that basically have your constitutive genotype at genes relevant to that tissue. If the constitutive genotype confers risk, then basically all cells in that tissue are at risk.

The difference with regard to cancer is that when a bad combination of mutations occurs in a single cell, it doesn't just die or stagger along doing no harm to you, but it proliferates, amplifying the signal of that mutation. It takes many different mutations to transform a cell from normal to cancerous. This is why cancer risk is poorly predicted from your constitutive genotype: most of the changes that lead to disease occur somatically in this or that cell until a bad combo arises in one of billions of cells.
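A back-of-the-envelope calculation shows the force of this: a mutation combination that is vanishingly rare per cell becomes nearly certain somewhere among billions of cells. The rates below are made-up illustrative numbers, not measured mutation rates:

```python
import math

# Toy multi-hit model: probability that at least one of n_cells
# independently accumulates all `hits_needed` mutations, each occurring
# with probability p_per_mutation in a given cell lineage.
# Illustrative numbers only, not real mutation rates.

def p_some_bad_cell(p_per_mutation: float, hits_needed: int,
                    n_cells: float) -> float:
    p_bad_cell = p_per_mutation ** hits_needed   # per-cell combo probability
    # 1 - (1 - p)^N, computed via logs to stay accurate for tiny p, huge N
    return 1.0 - math.exp(n_cells * math.log1p(-p_bad_cell))

# Three hits, each a 1-in-1000 event per cell: the combo is one in a
# billion per cell, yet across 10^10 cells it's nearly certain somewhere.
print(p_some_bad_cell(1e-3, 3, 1e10))
```

The same arithmetic explains why your constitutive genome can't predict which cell, or which person, the bad combination will strike: the risk lives in the sheer number of independent somatic trials.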

So you'd think at least looking at the tumor cells would show what mutations were important. To some extent that's true, and though it isn't much use in predicting cancer (since the mutations are found after you already have cancer!), it may provide ideas on how to target the cancer cells. The problem with even that is that a cancer in a single person is continually evolving, rapidly accumulating even more mutations, so that not all cells in the same tumor are cancerous for the same reasons.

You'd have to sample many different parts of the tumor to identify the different variants. And some recent studies have done just that, and shown that different secondary tumors--descendants of the primary tumor, within one patient's body--are genetically different. In part, at least, this is what enables cancer to metastasize, to colonize different parts of the body from the tissue they started in.

But if cancer is therefore not well predicted from your constitutive genome, one might expect that diabetes and heart disease would be predictable because they aren't the same kind of proliferating disorder. But despite what the genome-selling companies would like you to believe, that is turning out not to be true, either, and we have discussed this countless times before, in the context of GWAS and other studies.

Evolutionary implications
This is all consistent with evolution as well. The same genomic complexity that produces your traits, and that makes finding single genes 'for' a trait difficult, is exactly what means that natural selection is not working very closely on one or a small number of specific genes. If GWAS can't find causal genes for a trait, even when the trait is right there to measure, natural selection can't find them either.

This means that traits can evolve adaptively via natural selection in the way Darwin explained, without this being very tractably understood at the level of specific single genes. Indeed, the indirect genomic effects of selection that merely screens traits are what made causation so complex in the first place.

There are many parallels between what happens among cells in your body and among individuals in a species, and Ken wrote about that in 2005 in Trends in Genetics, where he discussed ways in which diseases other than cancer might be caused by somatic mutations whose effects could somehow be amplified so you would notice them at the organism level. Epilepsy was among the examples discussed there.

Causation may be genetic in the trait or evolutionary sense, but the specific genotypes that are responsible may be difficult or impossible to detect, or so variable among cases and individuals that by and large it's not worth taking that approach--something roughly consistent with what Vogelstein was saying in the story about him.

Friday, July 20, 2012

What makes our language abilities unique? Or are they?

Anthropologists have long assumed that the more we understand of non-human primates, our closest relatives, the more we'll understand ourselves.  Anthropologists have spent untold hours observing chimps, gorillas, lemurs, baboons, and other primates in the expectation that what they'll learn will allow them to decouple nature from nurture, genetic from cultural influences on how we behave, as well as elucidate what it is about us that makes us unique: our upright posture, the size of our brains, our opposable thumbs, our language ability?

Traditional reasoning has it that tool use and abstract symbolism made our social and material world different from, and in a competitive sense superior to, those of other species in Africa. Symboling came to involve verbal communication--language--in the way we do it, and that has largely been assumed to be unique, and to have evolved as unique out of rudiments present, at most, in our close ape relatives.

But it seems that more distant relatives can be informative as well.  In fact, when it comes to language abilities, birds have more in common with us than our nearer relatives do, in that birds and humans, unlike non-human primates, have auditory-vocal learning abilities.  That is, we can hear something and repeat it.  A paper in the July 2011 Nature Neuroscience, nicely summarized in Cosmos here (which, for unremembered reasons, just came up in our Twitter feed which, by definition, makes it current, right?), looks at the language of songbirds to address the question of what is unique about human language.

A paper in Science in 2002 by evolutionary biologists Marc Hauser and Tecumseh Fitch and linguist Noam Chomsky suggested that it's recursion, our ability to embed more and more modular bits into sentences ad infinitum, that makes our language special.  As the Cosmos piece explained,
Recursion enables language to become an infinite system. Because clauses (e.g. "Holmes studied the footprint") can always be embedded in the next clause (e.g. "Watson said that Holmes studied the footprint") which can then be returned for the next combination (e.g. "I read that Watson said that Holmes studied the footprint"), the set of sentences and clauses that can be generated is technically infinite.
The three scientists claimed that complex, recursive syntax cannot be learned by humans or other animals, but that we must have a unique, innate specialisation for recursion and, by extension, complex syntax.
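The quote's point, that embedding clauses inside clauses generates an unbounded set of sentences from finite parts, can be sketched in a few lines of code. This is our toy illustration, not anything from the papers, and the reporting clauses are made up:

```python
# Toy illustration of recursive clause embedding: each level wraps the
# previous sentence in a new reporting clause, so arbitrarily deep
# (technically infinite) sentences can be built from a finite lexicon.

REPORTERS = ["Watson said that", "I read that", "she wrote that"]

def embed(sentence: str, depth: int) -> str:
    """Recursively wrap `sentence` in `depth` reporting clauses."""
    if depth == 0:
        return sentence
    inner = embed(sentence, depth - 1)
    return f"{REPORTERS[(depth - 1) % len(REPORTERS)]} {inner}"

base = "Holmes studied the footprint"
print(embed(base, 1))  # Watson said that Holmes studied the footprint
print(embed(base, 2))  # I read that Watson said that Holmes studied the footprint
```

Since `depth` can grow without bound, so can the set of grammatical sentences, which is the sense in which recursion makes language an infinite system.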
Cotton-top tamarin; Wikipedia
Fitch and Hauser followed this up with a 2004 study of cotton-top tamarins showing that they weren't able to learn recursive patterns after listening to human speech.  Then, in 2006, Timothy Gentner et al. suggested that starlings could understand recursive song, but only within pre-set structures (nicely described here, with the added plus of audio of starling songs).  This paper was challenged, however, with the data said to have been wrongly interpreted.  So, back to the starting line.

And then along came Abe and Watanabe to say that songbirds really can do recursion.
We analyzed their spontaneous discrimination of auditory stimuli and found that the Bengalese finch (Lonchura striata var. domestica) can use the syntactical information processing of syllables to discriminate songs. These finches were also able to acquire artificial grammatical rules from synthesized syllable strings and to discriminate novel auditory information according to them. We found that a specific brain region was involved in such discrimination and that this ability was acquired postnatally through the encounter with various conspecific songs. Our results indicate that passerine songbirds spontaneously acquire the ability to process hierarchical structures, an ability that was previously supposed to be specific to humans.


A similar challenge has been made to this study, however, as to the 2006 Gentner et al. study, namely that there are alternative explanations for how these birds discriminate sound.  And anyway, what happens in the songbird brain is different from what happens in the human brain when these sounds are processed.

European starling; Wikipedia
But as with many biological traits, there can be many pathways to the same trait, whether morphological or behavioral, so the idea that we can only say songbirds and humans have the same language abilities when the neural/functional pathway is the same is patently a non-starter.  If songbirds can do it, they can do it however they get there.

There is a lot going on in this debate, and though it's not our area, just from reading the papers and reports about the papers it seems that people are taking sides, digging in their heels and having a hard time agreeing on what constitutes evidence and falsification.

But, as with all human traits, our language ability did not evolve from whole cloth.  It has its origins in earlier traits, many of which we share with other lineages.  And it's complex and involves our complex ability to speak, our brain's complex ability to let us speak and to process sound and abstract ideas, as well as our open-ended ability to learn.  And there's that window of opportunity in young children that allows language to develop more easily than when we're older (which is why Gentner chose next to study young starlings).  We may not have much in common with birds in terms of how we produce sound or the structures of our brains that process it, and indeed we're much closer to non-human primates in that, but this may be insignificant.

If it's true that songbirds do have a sort of recursive language, as we do, it may well have evolved separately from our own ability: an example of convergent evolution.  But what this whole story means to us is that it's not very useful to pluck a single trait from the highly complex mix of traits that is language, which involves so many different parts of our anatomy and our brain, and use it to define what makes our language abilities unique.  And if it's possible to contest the uniqueness of any trait said to be the one that makes our language special, that's a good indication that it's not a single trait at all, but a whole suite of things that, added up, give us the ability to argue about this at all.

This in no way minimizes the interest or importance of understanding how symboling and language have characterized or shaped our species during its evolution, nor of the complex neural mechanisms that must be involved.  But it does suggest that it may be the trait, and not some particular mechanism, that is what matters, and that we should examine language on its own terms--in whatever species--rather than slide too carelessly into human exceptionalism.

Thursday, July 19, 2012

The genomics of potatoes: couch potatoes!

Here is the latest on your tax money at work.  A study, as intellectually deep as the cushion in your TV room, published in the special physical activity issue of The Lancet that precedes the Olympics, shows the stunning fact that idleness is as dangerous to your health as smoking.  It's not just that every idle person, like every smoker, actually dies (though perhaps in a blissful state in at least the former case) but that the risk of death from diabetes, heart disease and even some cancers is comparable to the excess risk of smoking.  Say the authors,
Strong evidence shows that physical inactivity increases the risk of many adverse health conditions, including major non-communicable diseases such as coronary heart disease, type 2 diabetes, and breast and colon cancers, and shortens life expectancy. Because much of the world's population is inactive, this link presents a major public health issue. We aimed to quantify the effect of physical inactivity on these major non-communicable diseases by estimating how much disease could be averted if inactive people were to become active and to estimate gain in life expectancy at the population level.
We're not minimizing the nature of the finding, because results from many studies over many years make it no surprise whatever that exercise is good for your health.  The cure is not to have more and more expensive studies, but....well, to just get off your duff!

Of course, the Pharmas that want you on lifetime meds will argue, perhaps not explicitly, that 'today's lifestyle' puts people at risk of X disease for which they have the preventive pill.

If this is, as the lead author of the paper dramatically says, a pandemic (at least in Britain, where the study was done), then we have the makings of the next round of Big GWAS studies.  Another paper in The Lancet addresses ecological factors that might explain why some people are active and some are not, but surely there will soon be a demand by the general public (of epidemiologists and geneticists) to know the genetics of exactly who it is that is vulnerable to this disease (which is likely soon to be named, say, indolitis, and added to the official list of diseases, thus making it a disease).

For some, we'll surely hear, watching endless sports and mindless programs will do no damage to life expectancy (other than making you brain-dead at a very young age).  We need, absolutely, to know their genotypes, and once we do, if they show their genotype diagnosis from 23andMe they'll be allowed to buy a huge-screen TV.  Without a clean bill of genetics, they'll be denied access to Best Buy.  Or, perhaps new TVs will have a slot for you to put in a credit-card size genotype record before you can turn it on, much like some cars have breathalyzer screening before you can start the engine.

Now, we already do know a lot of the genotypes, but until now we didn't know why.  Those are the countless genes associated in existing GWAS studies with diabetes, cancer, heart disease and so on.  We thought this had something to do with the gene networks involved in glucose metabolism or detection of aberrant cell mutations that might lead to cancer.  But instead, these genotypes will now have to be studied to see how they interact with sofa postures, and perhaps channel choices (will those who watch sports be at even higher risk than the average indolitis victim, or will that be somehow protective because sudden-death overtimes stimulate adrenaline release?).

Other potato-consequences you might not realize!
You might not think about it right away, but indolitis actually reduces the risk of many different diseases.  Most cancers, many types of senile dementia, and a host of other diseases will decrease in frequency.  In that sense, being a couch potato protects you against those, while exercise protects you against the diseases mentioned in the Lancet article.  That's because indolence will lead to the latter traits, thus preventing you from dying of the others.  It may be true that those would have got you at a later age, but, hell, we all have to go sometime, so it's your choice of how and when. If you have indolitis, you at least have some information....almost as precise as if it came from a genome company.

Good for business, too
Soon, an epidemiologist from a university near you will be asking for a blood sample and an exact reckoning of your TV-watching habits.  Please cooperate, as this is going to be the mother of all epidemiological studies.

And of course this will be good for anthropology, too (we're happy to say!) because it will allow some of us quickly to get in touch with the NY Times (or Nature or Science) to explain how it may seem that we evolved to be active and hence healthy, but perhaps others will argue that we evolved to sit around the campfire gossiping rather than wasting energy chasing wildebeests or berries too high on the bush to reach.  Whatever our story, we'll surely be searching for evidence of natural selection in the genes that are implicated in indolitis, so we can explain why they're here.

Good for one and all!  So sit back and relax until the stories start appearing....

Wednesday, July 18, 2012

Penn State: just another pedophilia scandal

Here's an interesting piece from The Atlantic from July 17.  "Could the Penn State Abuse Scandal Happen Anyplace Else?  Definitely."  It's an interview with Chris Gavagan who was involved in making a documentary of sexual abuse in sports called "Coached into Silence" and his answer is a resounding yes.  What happened here at Penn State was a classic case of pedophilia, the enabling of pedophilia, and the covering up of pedophilia to protect the institution and the people who should have been protecting the kids.  Nothing unique here.

The relevance to MT, besides that we live and work here, is that more and more is being published to document the high prevalence of pedophilia, including stories about its purported prevalence in other athletic programs.  If a high fraction, like 1 in 4 or 1 in 6, of children experience some sort of sexual abuse at the hands of adults, then not only do we have a social problem, but we have to re-think some of the commonly held views about a central area of biology: sex, and its relation to gender.

We've posted on this before.  Sex in terms of chromosome number (XX and XY) varies, with a non-trivial fraction of the population having some different number of X and/or Y chromosomes.  Or, they have mutational variants in their chromosomes that lead to unusual physical or reproductive traits.  While most people are XX or XY and most have the genital and other bodily manifestations associated with the normal genotypes, there is variation and whether or what aspects one wishes to characterize as, for example, 'disease' is somewhat subjective.  An evolutionary viewpoint would say that if the variant prevented successful reproduction it was dysfunctional or, in our cultural terms, 'disease'.

But that's not so clear, because many people have normal-appearing chromosomes and normal-appearing plumbing but bear no children despite having normal heterosexual relations.  How do we characterize that?  Here we tend to assume something psychological or cultural, and most of the time we'll allow it to be 'normal'.  You are not 'diseased' if you stay single, marry but choose not to reproduce (or simply don't end up having children despite trying), or become a nun or priest.  Nor for your tendency to honor monogamy, and so on.

Or, we have reasons in some instances to say that you have a physiological 'problem' or 'anomaly', that affects your sexual preference, behavior type ('gay' personality), or you look unusual for your sex, and so on.  This will be attributed largely to your genes, which then could be argued to mean that you may have the plumbing but you really aren't 'male' or 'female'.

Then some would argue that for cultural or physiological reasons you are of a normal 'sex' but a different-from-typical 'gender'.  Your behavior differs from what is typical for someone of your sex.  Homosexuality would be one such variant.  But stereotypical homosexual behaviors--call them gender behaviors if you will--are not always associated with homosexuality.  Clearly there is variation and it's far from dichotomous.  There is not just one set of two distinct genders, and what one wishes to call abnormal or 'diseased' is subjective to a great extent.

Pedophilia seems to be an example.  Pedophiles have normal plumbing, are not gay, but prey on children (sometimes same-sex and sometimes opposite sex).  The 'ped' part is what's different and doesn't put you in one of the other classes--it seems to be a class of its own.  Psychologists apparently find that this is as ingrained as sexual preference, and is resistant to attempts to change it.  It is, somehow, born of the person's 'genes' or their interaction with early environments in unclear but clearly complex ways.

Now, if pedophiles are so common that 1/4 of all children experience their assault, clearly most pedophiles also marry and reproduce.  So are they 'diseased' in ways other than by social definition?  And, yes, pedophilia is an entry in the DSM (the Diagnostic and Statistical Manual of Mental Disorders).

This is a legitimate question since the age of 'consent' varies greatly among human cultures.  In turn that means that what we've been treating as a rare disorder is part of a continuum of variation that's not so rare at all.

What this would imply both about the biology of sex and gender, and about its evolution, is that we've been far oversimplifying the reality.  As is our usual wont, here we'll point out that with such complex and gradual variation, there won't likely be a single gene 'for' the trait, like pedophilia, nor a variant that isn't also found in 'normal' people.  It's an aggregate genotypic effect, of variants at many genes, interacting with environments--even if the result, like pedophilia, is built-in to the person when s/he is an adult.

So triggered by this scandal here at Penn State, and the facts it is evoking nationwide (along with the prior stimulus of the Catholic church and scout problems), this should force biologists to think more about the nature of sex and gender--so much at the heart of successful reproduction in a species, and at the same time, so variable.

Tuesday, July 17, 2012

Variation in levels of gene expression is easy to document but hard to interpret

Has gene regulation been a significant player in speciation and adaptive evolution?  It has long been assumed that the answer to this is yes, and the prevailing view is, in essence, that everything we see, including the regulation of gene expression, must have been screened by natural selection, except for such things as minor variation or measurement error.

However, this is as much a faith as a fact, and a paper in the July Nature Reviews ("Comparative studies of gene expression and the evolution of gene regulation," Gallego Romero et al.) systematically reviews the evidence based on new molecular techniques, and suggests that it's a hard question to answer.

The assumption, based on comparative studies, is that much of the variation in gene expression is due to genetic variation and is heritable. 
This finding provided a strong motivation for comparative studies to focus on expression levels as an important intermediate molecular phenotype: one that ultimately determines heritable variation in complex morphological and physiological phenotypes, including traits that evolved under natural selection.
Gallego Romero et al. describe the state-of-the-art technologies that have been used in this work, including RNA sequencing (RNA-seq), which has replaced microarray analysis in many instances.  RNA-seq allows more precision in estimating gene expression levels, which is important for this work.  RNA-seq sequences every copy of mRNA extracted from a given set of cells of some chosen type.  The more times you see a given gene's mRNA, the higher its expression level.  With microarrays, the concentration of a given gene's message was less easily quantifiable.
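The counting logic behind RNA-seq quantification can be sketched in a few lines: tally reads per gene, then normalize by the total number of reads so that samples sequenced to different depths can be compared.  The gene names and the counts-per-million normalization shown here are illustrative assumptions, not anything from the paper itself.

```python
from collections import Counter

def expression_counts(aligned_reads):
    """Tally how many sequenced mRNA reads map to each gene.

    `aligned_reads` is a list of gene names, one per read; the
    gene names here are hypothetical, purely for illustration."""
    return Counter(aligned_reads)

def counts_per_million(counts):
    """Normalize raw counts by library size (CPM), so samples
    sequenced to different depths are roughly comparable."""
    total = sum(counts.values())
    return {gene: n * 1_000_000 / total for gene, n in counts.items()}

# Toy example: 8 reads from a tiny hypothetical sequencing run
reads = ["GENE_A", "GENE_A", "GENE_B", "GENE_A",
         "GENE_C", "GENE_B", "GENE_A", "GENE_B"]
raw = expression_counts(reads)   # GENE_A seen 4 times, GENE_B 3, GENE_C 1
cpm = counts_per_million(raw)    # GENE_A: 4/8 of the library = 500000.0 CPM
```

Real pipelines must also handle read alignment, genes of different lengths, and sequencing error, which is part of why even the "easy" step of measuring expression has its subtleties.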

Assessing the effects of gene regulation on evolution naturally enough encourages evolutionary biologists to try to identify selective scenarios that might explain variation although, as the authors say, "To do so, it is necessary to distinguish between the environmental and genetic effects on gene regulation as well as to control for a large number of potential sources of variation and error" (which can be environmental or experimental). 

To do so, it is also necessary to believe that speciation is always due to natural selection.  Gallego Romero et al. do cite previous discussion of whether gene expression variation is always such, and clearly themselves recognize that the answer is not necessarily straightforward nor uniform.  Gene expression evolution may be due to selection -- often stabilizing selection, which eliminates the extremes, but sometimes directional, meaning that there would be positive benefits to increased expression -- or it may be neutral, that is there's no measurable effect on fitness when expression varies. But, as the paper also says, "Alternative explanations for gene expression differences between species, such as consistent inter-species differences in environments, are often difficult to exclude, especially in primates."

In other words, a definite 'maybe'!

Another complication is the possibility that gene expression levels may vary by tissue, which at least one comparative study showed.  Indeed, as Gallego Romero et al. suggest, documenting variation in gene expression levels across species is the easy part, whatever tissue you choose to use, so long as it's the same for the different species.  But there are many issues in deciding what to look at.  Making sense of it is much more complex because of questions of how much is due to genetic variation, regulation, environmental influences, explaining underlying molecular mechanisms and so on.

Although progress has been slow, it is now possible to identify functional elements of DNA from nucleotide sequence analysis.  Gallego Romero et al. predict that it will one day be possible to predict gene expression patterns from the sequence of their regulatory elements.  However, all the same caveats will continue to be true -- gene expression is affected by environmental and epigenetic (non-sequence related changes in DNA) variables, and these will continue to be unpredictable.  The authors predict, though, that the use of stem cells will one day make "a reality detailed mechanistic functional studies of gene expression evolution in primates."  Stem cells could be induced to behave like, say, liver or kidney or skin cells.  Whether they will express genes in the same way out of tissue context as in it is another question.

This paper raises the question of what gene expression variation can tell us about phenotypic evolution, and points out that with new molecular techniques we can document gene expression levels in more detail than ever before.  But what these levels actually represent is another question, since, as the paper points out, there are many variables that affect gene expression levels.  And, there are numerous reasons for cross-species changes in gene expression levels, including but not limited to natural selection.  Indeed, one can imagine that speciation may precede changes in gene expression levels.  Or that expression levels just change due to chance changes in the mechanism that don't affect the organism or its 'fitness'.

And as with most aspects of life, there are going to be multiple explanations for speciation.  Gene regulation may explain some, but, e.g., Allen Orr is among the more prominent evolutionary biologists documenting genetic causes of speciation such as mutations that create hybrid sterility, or genes that have no harmful effect within a species but when combined with genes from another species cause sterility.

There is no one way and no way to infer from expression differences what their origin might be.

Monday, July 16, 2012

Reading the thoughts of a dead salmon: a poignant tale

How do our brains translate sound waves into information about how far away the sound generator is from our ears?  The direction a sound is coming from is pretty easy to decipher because the sound hits our ears at different times; the difference is minute but enough for our brains to make sense of with respect to where the sound originates.  Distance is another question.  And what about soft-but-nearby vs loud-but-distant?  How does our brain distinguish between these two?
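The direction cue mentioned above, the interaural time difference, is simple enough to compute.  A minimal sketch, assuming a typical adult ear separation of about 21 cm and ignoring head diffraction (a simplification of the real acoustics):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 C
EAR_SEPARATION = 0.21    # meters; an assumed typical adult head width

def interaural_time_difference(azimuth_deg):
    """Time lag (seconds) between the two ears for a distant source at
    `azimuth_deg` (0 = straight ahead, 90 = directly to one side),
    using a simple two-point model that ignores the head itself."""
    path_difference = EAR_SEPARATION * math.sin(math.radians(azimuth_deg))
    return path_difference / SPEED_OF_SOUND

# A source straight ahead produces no lag; one directly to the side
# produces a lag on the order of 0.6 milliseconds -- minute indeed,
# yet large enough for the auditory system to resolve.
print(interaural_time_difference(0))                     # 0.0
print(interaural_time_difference(90) * 1000)             # about 0.61 ms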

A Scientific American blog, The Scicurious Brain, posted on just this subject the other day, and we're happy it caught our eye.  Scicurious describes a new paper in PNAS by Kopco et al., "Neuronal representations of distance in human auditory cortex."  That is, it's basically an fMRI (functional magnetic resonance imaging) study of where activity happens in the brain when people hear sounds at different distances.

The researchers exposed 12 subjects to sounds of different intensities in a 'virtual reverberant environment', simulating sound coming from 15-100 cm away.  They conclude that neurons in a particular part of the brain (posterior nonprimary auditory cortices, that is a part of the brain already known to be involved in making sense of sound waves) are "sensitive to intensity-independent sound properties relevant for auditory distance perception".  I.e., this part of the brain determines the distance of a sound, at least within 100 centimeters.  How it does so is another question entirely.

fMRI results, Kopco et al.
So, this study doesn't really tell us a whole lot more than we knew before, and Scicurious points out that fMRI studies should be interpreted with caution.  Indeed, she says -- and this is why we love her post -- you can get fMRI signal from a dead fish.  Alas, that finding was reported in 2009 -- so sorry we didn't know about it until now!
Neuroscientist Craig Bennett purchased a whole Atlantic salmon, took it to a lab at Dartmouth, and put it into an fMRI machine used to study the brain. The beautiful fish was to be the lab’s test object as they worked out some new methods.
So, as the fish sat in the scanner, they showed it “a series of photographs depicting human individuals in social situations.” To maintain the rigor of the protocol (and perhaps because it was hilarious), the salmon, just like a human test subject, “was asked to determine what emotion the individual in the photo must have been experiencing."
If that were all that had occurred, the salmon scanning would simply live on in Dartmouth lore as a “crowning achievement in terms of ridiculous objects to scan.” But the fish had a surprise in store. When they got around to analyzing the voxel (think: 3-D or “volumetric” pixel) data, the voxels representing the area where the salmon’s tiny brain sat showed evidence of activity. In the fMRI scan, it looked like the dead salmon was actually thinking about the pictures it had been shown.
“By complete, random chance, we found some voxels that were significant that just happened to be in the fish’s brain,” Bennett said. “And if I were a ridiculous researcher, I’d say, ‘A dead salmon perceiving humans can tell their emotional state.’”
One readily criticizes seances and crystal balls, because we feel that conjuring is a scam rather than a science.  It is not possible to read someone else's thoughts, certainly not if they are among the dearly departed.  Or so we had thought.  Because if dead salmon can think, why not dead Aunt Mazie? 

We are not psychologists or neuroscientists, though we are scientists of a sort and we perhaps have a lot of nerve.  But we don't have enough nerve to challenge the usefulness of fMRI, not after universities have all bought their $1 million instruments and boasted about how modern they now are.  We feel we should temper our tendency to think that salmon can't really have afterthoughts.  People have, we must admit, often reported 'near-death' experiences, but they weren't actually totally dead at the time!  But a salmon that was cold as a dead fish should not be sending out brain waves.  Of course we are assuming that there wasn't a short in the investigators' fMRI machine.  An alternative, of course, is that there really is an afterlife, and its afterglow appears in the brain for a while--at least thinkers at seminaries should pay close attention to these startling findings.

We cannot personally attest to whether one can communicate with salmon by seance, because it has never crossed our minds to attempt it.  Nor have we any views on whether the cadavers of other fish (or amphibian) species might have similar postmortem brainwaves.  For the same reason, we must cease our glib assertions that "dead men tell no tales," and remain mute about your ability to get in touch with old Aunt Mazie.

Many, many fMRI scans have been done since 2009, when the dead salmon results were publicized, so clearly a thinking dead fish hasn't dimmed researchers' enthusiasm for or faith in the method.  No test is perfect, and every test is at risk of yielding false positives or false negatives; fMRIs are obviously no exception.   Nor are seances.  There are statistical corrections that can be made to fMRI results -- we don't know of any for seance results -- because fMRI readings have a lot of 'natural noise.'  But added to all the other caveats about fMRIs -- and the fact that whether or not we accept the fMRI findings about where in the brain we process information about how far away a sound is coming from, we still don't know how the brain does it -- we'll retain our skepticism about how much fMRIs can really tell us about ourselves.
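The salmon's "thoughts" are a textbook multiple-comparisons problem: test tens of thousands of voxels at p < 0.05 and pure noise will produce "activity" somewhere.  A minimal simulation of that logic (the voxel count and seed are arbitrary assumptions, not taken from the Bennett study):

```python
import random

random.seed(42)

N_VOXELS = 10_000   # hypothetical number of brain voxels tested
ALPHA = 0.05

# Under the null hypothesis -- no real signal anywhere, as in a dead
# fish -- each voxel's p-value is uniformly distributed on [0, 1].
p_values = [random.random() for _ in range(N_VOXELS)]

# Uncorrected threshold: roughly 5% of noise voxels come up "active".
uncorrected = sum(p < ALPHA for p in p_values)

# Bonferroni correction: divide the threshold by the number of tests,
# one standard way to control the family-wise false-positive rate.
bonferroni = sum(p < ALPHA / N_VOXELS for p in p_values)

print(uncorrected)  # on the order of 500 "active" voxels, all false positives
print(bonferroni)   # almost always 0
```

Bonferroni is conservative, and real fMRI analyses use gentler corrections (voxels are spatially correlated, so the tests aren't independent), but the principle is the same: without some correction, a dead salmon will "think."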