Thursday, October 30, 2014

Do scientists have heroes? Should we?

Yes, science has Nobel prizes and MacArthur 'genius' grants and NIH-funded "Centers of Excellence" and highly selective journals, among other ways of plucking ordinary people from the scientific masses and turning them into heroes.  Or at least treating them as such.  Purportedly serious journals now deify the greats and near-greats with interviews and splashy stories to match what can be found in mags near the checkout counter.  We've got lists ranking scientists by number of Twitter followers.  But is this an important part of science? Can science proceed without it?  Can it proceed with it -- could this hero-creation side of science be a distraction?


Charles Darwin

Appropriately I suppose, because of the source of inspiration for this post, it requires a bit of name dropping: I was chatting a few days ago with an old friend of mine, Penn Jillette, magician etc., when the subject of heroes came up.  He asked if I'd seen what some famous guys (he named them) had said about a thing we were talking about (which I won't name because which famous guys he was talking about is irrelevant to this story, and the subject might make it obvious), and I said no.  I said I needed heroes less and less as I got older, because I can see the flaws and limits in their science, flaws we all have, relative to their reputations.  He said it was the opposite for him; he needs them more now.  For inspiration.  But he agreed with me about the flaws in the arguments of the people he'd mentioned.

I thought about this for a bit, intrigued by the idea of scientific heroes.  I told Penn that I was thinking of writing something about how my being a scientist and him being an artist/performer might affect why we differ so much on heroes, but that I didn't really understand what he'd meant.  I said I could imagine what he meant and use his name, or imagine what he meant and not use his name, or he could explain a bit more about what he meant and tell me whether or not to use his name.  He said he had no clue what he meant, and I could decide whether to use his name after I wrote whatever I wrote.  That helped.  Some.

I've known Penn for a long time, so I can think back to when we were young, and find differences between us even way back then.  (Well, other than almost everything.)  I do have to say, though, that most of what I say here is pretty much a guess, not vetted or confirmed by Penn.

Marie Curie; Wikipedia

Penn, of Penn & Teller, is a magician, author, filmmaker, musician, atheist, political commentator, and much more.  Certainly a lot of people look up to him, listen to his opinions, read his books, watch his TV shows and movies. In a what-comes-around-goes-around kind of way, I imagine he's a lot of people's hero.  He probably has inspired a bunch of kids to grow up and do the best they could.

Penn probably was a performer before I knew him, but he certainly was in high school.  He was wowed and inspired by a ton of other performers, and as he said, he has tried hard to be as good as or better than the people who wowed him when he was young.  I remember spending many hours listening to him practice sounding like Neil Young on the guitar, sitting in a chair by the window of his room.  But he loved a lot of musicians.  And more: comedians, writers, old Vaudevillians, even quirky teachers.  They taught him a lot, and set standards that he wanted to meet or exceed.

Magicians, artists, and musicians build their skills and develop their own voices by spending many, many hours -- 10,000, according to lore -- replicating what others have done.  Saw a woman in half, copy the drawings of master artists, play notes written by others until they sound the way so many musicians have played them before.  Even surgeons learn their craft this way.

Harry Houdini; Wikimedia Commons

I've been a scientist almost as long as Penn has been a performer, though I didn't know in high school where I was headed.  Still, even if I had known I was eventually going to be writing about evolution, it would have done nothing for me to sit at my desk and copy paragraphs out of The Origin of Species, or, say, try to replicate Darwin's handwriting, or reproduce Mendel's experiments with peas.

Of course, science is the accumulation of knowledge more than the accumulation of specific skill sets.  Yes, we have to know how to pipet or follow a lab protocol, but we needn't spend 10,000 hours sitting in a chair by our bedroom window practicing how to pipet like Madame Curie, or Barbara McClintock.

Ken had a student once who was stunned when he learned in class that while biologists respect Darwin, we never go back and check the Origin to see if what we think is true really was, the way some fundamentalists check the Bible for truth.

So, what is it about a scientist that could be heroic?  To me, maybe being able to look at the same data everyone else is looking at, pull together what look like misfitting pieces, and draw a new and different conclusion?  That's what Thomas Kuhn called a 'paradigm shift', though these turn out to be a lot rarer than most scientists seem to hope.  Not life-saving, necessarily, but stunning, if that counts as heroic.  But one can't really aspire to this.  It just happens.  And this isn't what most Nobel prizes are awarded for.

A lot of scientists become public figures, and then perhaps someone's hero, because they've done good science.  Or because they write popular science books.  But, to me, this doesn't imbue them with special knowledge about questions that science can't answer -- religion, the existence of free will and so on -- or with deeper insight into non-scientific issues; politics, ethics, just war.  So I don't turn to them for answers or even for insights particularly.  And, good science takes a whole community -- even Darwin wasn't the first, or the only person to articulate ideas about the origins of diversity on earth.

Are Nobel laureates heroes?  Their discoveries may have been remarkable for their time, but many laureates went on to Important jobs and rarely delivered comparable freight afterwards.  That's nobody's fault, but some universities spend a lot of money on the name, money that, dare one say it, might better go to younger investigators whose guns haven't yet been fired.

I recognize that the arts have movements, too, times in history when everyone seems to be doing the same thing, so that it's not all about individuals and famous personalities.  Juggling continues to get more spectacular, built on the work of even nameless jugglers who came before.  But Houdini wasn't just another escape artist.  There are specific things a magician can learn from Houdini in order to become a better escape artist, but a biologist can do good biology without ever reading Darwin.  Certainly one can draw energy from thinking about how greats like Darwin, Einstein, Shakespeare or Beethoven saw so deeply compared to their peers.  (Indeed, Alex Ross recently wrote a fine New Yorker piece about Beethoven standing high above the composers who came before him and even after him.)  But that's not necessarily the same as insightful inspiration.

Bob Dylan; Wikipedia

So, is it the difference between being inspired by ideas rather than by the people who have the ideas? Perhaps in a way, but artists are inspired by ideas, too.  Penn said that when he was young his heroes were superhuman, like Bob Dylan, and they inspired him to grow up and do the best he could.  Now his heroes are just human, like Bob Dylan, and they inspire him to keep going.

I love good science, but I don't revere the people who do it, and never did.  Inspiration?  Excitement?  Yes, but that can come from the work of the non-famous as well as the well-known.  Maybe it's about finding our place in the context of our work, and however we manage to do that, with heroes or without, works for us.

Wednesday, October 29, 2014

Shelling out for Sheldon....again (but without the nudes)!

In the 1940s and into the 50s, WH Sheldon carried out a project on the science, or even (he modestly claimed) the new science, of human morphology and behavior.  The work became known in various ways, but one important term that described it was somatotyping.

WH Sheldon, from the Wikipage about him
In this project, students at our most Ivied universities were compelled to pose au naturel for front, side, and back photos, so their shapes and ultimately their personal natures could be studied, scientifically!  It should be noted that Sheldon was, after all, at Major Universities for much of his career, so his work must have been Important.

These were in the Olde Tyme hand calipers and graph paper days, without today's more definitive way of taking measurements (that is, using computers to do the measuring).  Means and variances of body shape were computed and, in the usual way that scientists often show the depth of their insight, divided into categories (finding categories makes such tables seem a lot more insightful than mere lists of measures). 

Sheldon used three basic categories to divide up what is in fact a quantitative pattern: he called them endomorphs, mesomorphs, and ectomorphs.  You can call these fat, muscular, and skinny (but don't let a scientist hear you using such ordinary language).  To be really scientific, Sheldon devised a 3-number scale to reflect how strongly End, Mes, or Ect each person's physique was. Your resulting somatotype score was with you all your life, from the womb onward, because it represented your very essence, despite how much you ate or whatever (again, recall, this is science).


The tricorn classification.  Each person a mix of the Big Three possibilities.  Modified from Google images


What they really look like in idealized form.  (This image is all over the web, source hard to determine.)

One might expect this sort of work to have been done by anthropologists, but Sheldon was a psychologist. The idea was not just to study body shape, but that shape is a window to the inner soul--Sheldon called this new science 'Constitutional psychology'.  Of course, you know the stereotypes: the jovial Santa-like fat person, the edgy skinny one, and the manly Heroic (that is, sexily muscled) mesomorph. Sheldon considered (i.e., studied) women, too, but since women aren't very important he mostly cared about men (that is, professionally speaking).

Sheldon set out his ideas in book form in 1940, as The Varieties of Human Physique, and again in the 1954 Atlas of Men, and the work was quite influential, surely because of its rigorous scientific quality (that is, not just because of the nude pictures!).  But he was working in, and was part and parcel of, the racist, sexist, and other 'ist' environment of the eugenics era, when experts' value judgments about human qualities were taken as respectable science.


From The Varieties of Human Physique

Sheldon's work did not come out of nowhere. More than a century before Sheldon, Franz Gall, a comparably prominent scientist, made similar in-depth revelations about personality from identifying cranial somatotypes (that field of science was called 'phrenology').  And in Sheldon's own time was his fellow Ivy Leaguer, Carleton Coon, who made his own set of observations about how personality as revealed by morphology demonstrated the racial characteristics of peoples, and the definitive finding that the Europeans were the superior type.

All this other credit aside, Sheldon's work is out of fashion, and out of sight, somehow viewed as rather crude, and embargoed at least in part because at least one photographed subject went on to become President, and other future movers and shakers also posed for Sheldon's eager eye.

Somatotyping redux. . .
Understanding what made you roly-poly and jovial, or gaunt and neurotic, was beyond Sheldon's abilities, cramped by the limited knowledge of the time.  Now, of course, we're well past the Sheldon age.  We're beyond daguerreotypes and have real data from DNA sequencers, fancy 3D cameras, CT scanners, and fMRIs. These Big Data technologies now allow us to penetrate a subject's deepest nature, so to speak, and thus objectively reveal the naked truth.  Currently a number of large projects are afoot aiming to do just that in one way or another.  The mega-study called 'GIANT', measuring stature and other anthropometric traits, is an example.  Of course, whether the estimated thousands of contributing genome regions make prediction useful is another matter....

With millions of DNA markers or even complete DNA sequence, and moving towards a million or more measured subjects, we will finally be able to relate genetic variation to somatotype (that term itself is no longer in general use, because it smacks of old-style pseudoscience).  We're finding the hundreds or thousands of genes that make your somatotype what it is, which must be predictable from birth, of course, or else the study would be of rather less crucial importance to society.  One application, which we're assured will be vital, is that the FBI, police, or NSA will be able to use DNA samples from the scene of a crime to predict what the perpetrator looks like.  But surely they will also be able to use DNA samples to predict the dangerous, antisocial, or undesirable somatotypes they should be on the lookout for as well.

. . . and more, or is it less?
While the new, more deeply intrusive methods will allow sets of genes to be found that explain somatotypic variation, the clear signs are that the field will soon go much farther, into territory Sheldonians could only make amateurish guesses about.  For that, three facts are key:
1.  The traits are already clearly known to be affected by variation in huge numbers (hundreds or more) of different genome regions.  That's because interactions among many factors are required to assemble complex physical or behavioral traits.
2. Most functional genome regions are involved in gene regulation or processing--such as signaling among cells that affect a cell's context-specific gene expression. That's how we develop as differentiated organisms with tissues, organs, and the like. 
3.  Most genes are pleiotropic, that is, they are used in more than one developmental or cellular context or function.  Our total repertoire of genes can be as small as it is because even with limited numbers of genes, there are essentially unlimited numbers of combinations of the genes available for different purposes.
These facts are critical to opening, or perhaps reopening, a rigorous new age of Somatotypology!  Based on the points just listed, the logic goes as follows:

If a given gene is used in more than one context, and one of them makes your somatotype, and another makes your personality, then of course if we know the genetic basis of your somatotype score (new, highly computerized versions of the tricorn figure above), then, obviously, your personality will be predictable as well because the same genes will also be involved in determining your behavior!  A morphological window into your soul.  How you look = how you act! A syllogism so seemingly tight it would make Aristotle smile!

Of course, the syllogism happens to be wrong, but we'll let that pass. But you've undoubtedly noticed a major uptick in studies of the genetics of behavior, reviving notions long thought deservedly dead.  Countless newly energized investigators are mapping every sort of behavior, focusing of course on things like intelligence, violence, criminality, liberal politics, ability to make wise pension investments, and (of course) sexuality, normal and antisocial.

So why would this neo-Sheldonian revolution make any difference? Do you think we're being far too suspicious or even paranoid that people are going to be suggesting that facial or other physical appearance is diagnostic of behavior based on the above syllogism? This line of thinking is not our invention, even if the first forays are as yet unpublished gleams in some investigators' eyes. 

Yes, readers, DNA-based personality forensics is on the way!  A rebirth of  'Constitutional Psychology' as written in your very genome!  Sheldon could only assume it; science now can prove it.   Sheldon was right, though of course, the new version won't read like Sheldon.  It will be written in impermeable technical, statistical, and genomic language (details and flaws--if any--entombed in Supplemental Information).  And private companies (as well, of course, as the security agencies) will be using this rigorous knowledge to hawk genomic tests for Find-a-Mate or IVF websites.  You and the government will be able to tell who's a robber, rapist, or good candidate for law school or hedge-fund managing....not to mention who qualifies to marry your daughter!

But here's the really good part: unlike DNA, which you have to needle or swab people to get, or find at already-committed crime scenes, all you'll need is to see someone, or get a photo (say, from their Facebook page), to scope out the future criminals, sexual abusers, jovial friends, untrustworthy scoundrels and other deviates, and so much more.  Indeed, this time around we'll be able to go a giant step beyond the crude, restricted range of Sheldon's work.  Basically only wealthy white people, mostly men, or women merely in search of husbands, attended the Ivies in his day, but now, thanks to more open admissions, we'll be able to show things that have long been intuitively obvious, though in Sheldon's own day only crudely speculated about.  We'll now dig deeply into the basic personality differences--and consequent relative ranking--of races (just call them 'ethnic groups' if that mask makes you more comfortable), in terms of important behaviors and talents.  This will be a huge advance for (hu)mankind, and for our own national security.

And think of the research money that will be saved!  The DNA sequencers and expensive labs can be dispensed with, once they've shown that all we really need are photos that any old cell phone can take.  So maybe the threat that all of us will be shelling out once more for Sheldon, through the taxes that would pay for this vital but otherwise very expensive work, won't come to pass after all.
  
This is the new wave, real science replacing the former pseudo-science. The charts will be far more aesthetically pleasing than Sheldon's old hand-crafted ones, though new reports will probably not include nudes because you can already see as many of those as you want on the web without having to buy an expensive Body Atlas.  However, don't despair about even that, since one can safely predict that scientific specialists will soon turn their penetrating attention groin-ward in their own highly technical studies.

A new wave replaces the old wave in the nature of things.  How long this tide will stay in is hard to predict.  But you don't want to miss its exciting, not to say titillating, messages.

Tuesday, October 28, 2014

Brain plasticity -- why should intelligence be an exception?

We live in an age that demands we multitask if we're going to get everything done that we need to do.  Answering email, picking up the children, submitting grants for every deadline, getting in 30 minutes of exercise everyday, eating right, keeping up with the literature -- so much pressure.  Fortunately someone's got our backs, and we can now answer email on one screen at our treadmill desks and work on that grant proposal on another, all while we have lunch.  So much easier, so much time saved.

But wait, psychology tells us that, despite appearances, we can't multitask after all; we can't do two cognitive things at once.  Instead we're 'task-switching', reading then speaking, writing a paragraph then answering the text from the child we forgot to pick up. So treadmilling, eating and emailing we can do, but treadmilling, eating, emailing and writing a methods section we can't.

Unless we're musicians.  A new paper in Cognitive Science ("Musical Training, Bilingualism, and Executive Function: A Closer Look at Task Switching and Dual-Task Performance," Moradzadeh et al.) reports that musicians are better at task-switching and 'dual-tasking' than non-musicians.  Task-switching is just what it sounds like, the ability to switch between tasks, and the speed and ease with which this can be done is what was measured.  Dual-tasking is the ability to do two or more things at once. There must be a reason this isn't just 'multi-tasking' but I don't know what it is.

I'm also not sure what constitutes a 'task'.  Indeed, how many tasks is reading music, with all the separate bits it involves (remembering which key has 5 flats, how long to hold a black flagged note compared with an empty oval, what that marking over the final note means, all the Italian notations telling you how to play the piece, turning notes on a page into a melody, keeping time, etc.) or playing the horn, with all the separate bits that involves (how to blow into the mouthpiece, how to press the keys, which fingering to use for each note and how to do that, how to synchronize your breathing with your fingers to get a note, how to play loudly or softly, playing in tune, all while remembering what each of the conductor's hand movements signifies, and staying in time with the players around you)?  Some of these tasks get relegated to muscle memory after enough practice, certainly, but much of musicianship still involves cognition.



Anyway, the researchers compared the ease with which a group of 153 bilingual and monolingual musicians and non-musicians switched between tasks or accomplished more than one at once.  These were apparently standard psychological tests, with task switching involving tracking numbers on a computer screen, and dual-tasking involving tracking a white dot while looking at flashing letters, while being asked to note when an X appears.
Results demonstrated reduced global and local switch costs in musicians compared with non-musicians, suggesting that musical training can contribute to increased efficiency in the ability to shift flexibly between mental sets.... These findings demonstrate that long-term musical training is associated with improvements in task switching and dual-task performance.
The researchers point out that there can be 'far transfer effects' of training or experience on cognition. This is a well-studied area of psychology, and it's known that many hours of things like physical exercise or video-gaming can affect how we think, or remember, and so forth.  So, that something as complex as musical training might affect other mental skills isn't a surprise.

And, it has long been known that long-term musical training has effects on brain structure, including sensorimotor and auditory areas, and on grey matter more generally (references here).  And, London taxi drivers are known to have larger hippocampi, related to spatial navigation, than London bus drivers who spend as much time driving.  Multilinguals have denser grey matter in brain areas related to language and communication than do monolingual people.  And so on.

This is all overwhelming evidence for brain plasticity.  It beats me why anyone would insist that intelligence is an exception, hard-wired, and not at all contingent upon experience.

Thursday, October 23, 2014

What is this 'risk variant' shared by 40% of people with Type 2 Diabetes?

I heard a surprising statistic the other day.  At least it was surprising to me.  The Oct 9 episode of the excellent BBC Radio 4 program "Inside Science" covered a new treatment for type 2 diabetes (T2D). The blurb about the segment said this:
In 2010, a particular gene variant was associated with around 40% of Type 2 diabetics - not directly causal, but this so-called 'risk variant' increases the chance of developing the condition if you have the wrong lifestyle.
The focus of the program was on what seems to be a promising new treatment targeted at people who carry a specific variant in the ADRA2A gene.  The variant inhibits the secretion of insulin from beta cells, and the treatment, yohimbine, seems to reverse that in people with this particular risk variant.  Yohimbine isn't a cure, or the only treatment that carriers of the variant would need; it would be used in combination with other drugs.  But it could be a useful addition to the control of type 2 diabetes in perhaps a large segment of the population.

But what I was really interested in was this 40% statistic.  It was mentioned but not discussed -- where did it come from, and what did it mean?  Do we now have a very significant explanation for the cause of type 2 diabetes?  If it actually accounts for such a large fraction of cases, why haven't we heard more about it, after so many big genomewide studies?   If not, in what sense is it a 'risk variant'?  And what does it mean that it's causal "if you have the wrong lifestyle"? Presumably some lifestyle risk factor such as energy imbalance or a dietary component interacts with the variant (whatever that means), but that's true of anyone with T2D, so is the causal pathway different in people with the variant?  Do people with the variant and T2D have a different disease in some sense than those without?  What is the frequency of the variant in people without T2D and should we expect it to be much lower than in those who have T2D?

I tried to run down the statistic.  I found a few 2010 papers that looked promising as the source, but I couldn't find it in any of them.  I Tweeted Adam Rutherford, the Inside Science presenter, and he kindly replied with the reference he had seen, a paper in the October 8 issue of Science ("Genotype-based treatment of type 2 diabetes with an α2A-adrenergic receptor antagonist," Tang et al.).  Indeed, the authors of the new, 2014 paper write:
A genetic variant in ADRA2A was recently associated with defective β cell function (7). The finding represents the first exact disease mechanism for type 2 diabetes associated with a common (30% of the normal population; 40% of patients with type 2 diabetes) risk variant and provides an opportunity to examine the feasibility of a “pharmacogenetic” approach to treat complex polygenic disorders like type 2 diabetes.
So, there were the numbers but not the actual source of the data.  Citation number 7 turned out to be one of the 2010 papers I'd already looked at, a paper in Science from the same group ("Overexpression of Alpha2A-Adrenergic Receptors Contributes to Type 2 Diabetes," Rosengren et al.) reporting overexpression of an ADRA2A variant associated with suppression of insulin secretion in rats and humans, and "increased type 2 diabetes risk" in humans.  Rosengren et al. found that "...in a case-control material with 3740 nondiabetics and 2830 diabetics, rs553668 was associated with increased risk of T2D [recessive effect; odds ratio (OR) 1.42, confidence interval (CI) 1.01 to 1.99, P = 0.04]."  That is, the odds of T2D were 42% higher in people with two copies of the risk allele at rs553668 (the effect was recessive) than in everyone else.  Ok, but that wasn't what I was looking for.

So I turned to the Supplemental information.  Here's the best I could do, but it's not the 30% and 40% I was looking for, either:

Table S2; Effects of SNPs on plasma glucose and serum insulin, Supplemental information, Rosengren et al, 2010

The 'A' allele is the 'risk variant', so here 28% of the cases and 24% of the controls have at least one copy; that is, they are either GA or AA at the locus.  Not a large difference between the two groups, and not the figures I was looking for, either. I emailed the senior author of the 2010 paper last week to ask about these statistics, but haven't heard back.

I'm either overlooking something obvious, or it's not where I'm looking, or it has somehow been misreported along the way.  But this is just increasing my curiosity.  What about this paper, reporting on a study of the ADRA2A association with T2D in Sweden?  SNP rs553668 is the 'risk variant' of interest, according to the previous papers I'd read.
SNP rs553668 was associated with T2D in men (odds ratio [OR] = 1.47; 95% confidence interval [CI] = 1.08–2.01; P = 0.015) but this association was lost after adjusting for age and for body mass index (BMI). Associations were also detected when comparing obese NGT and lean NGT subjects (OR = 1.49; 95% CI = 1.07–2.07; P = 0.017), and in obese (OR = 1.62; 95% CI = 1.06–2.49; P = 0.026), but not in lean T2D. In women, multiple logistic regression regarding SNP rs521674 demonstrated an increased OR of 7.61 (95% CI = 1.70–34.17; P = 0.008) for T2D when including age as a covariant. Correcting for BMI removed the significant association. When age was included in the model, association also found when obese T2D patients were compared with lean NGT subjects (P = 0.041). ADRA2A mRNA expression in human pancreatic islets was detectable, but with no statistically significant difference between the diabetic and the control groups.  [Highlighting is mine.]
So 'the' risk variant is associated with obesity in men, but it's a different variant in this gene in women that's associated with obesity.  Of course, obesity is associated with T2D, but that's a step removed from what the other papers are suggesting.

A meta-analysis in 2013 found that SNP rs553668 may be associated with T2D in Europeans, but no other ethnic groups.  But what about this statistic: GWAS have explained only ~10% of "heritability" of T2D, including ADRA2A because that chromosome region is covered by genome-spanning markers used in GWAS.  This doesn't seem to jibe with the idea that 40% of people with T2D have the ADRA2A 'risk variant.'  And of course if 30% of the healthy population has the risk variant, either they went on to develop T2D after the study was completed, or it's simply not a significant risk variant.

This is an important point: if a large fraction of the population carries the variant, it's not a significant risk variant by itself, and doesn't cause 40% of cases, even if it may turn out to be useful to know about in making treatment decisions if you have T2D.  The population risk of T2D is heavily dependent on lifestyles and very changeable over the years (it is rapidly becoming much more common than it was even earlier in our own lifetimes).  But if we were to say that 8% of the population will get T2D, the current estimate, then 40% of those, or 3.2% of the population, would have an involvement of this particular gene.  That may be important, but it doesn't explain the epidemic, and it leaves unanswered questions.
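
To make that arithmetic concrete, here's a back-of-the-envelope sketch (my own illustration, not from any of the papers), using the figures quoted above plus the standard population attributable fraction formula, and making the rough assumption that the odds ratio approximates a risk ratio:

```python
# Back-of-the-envelope only; the numbers are the ones quoted in this post.
carrier_freq = 0.30       # reported variant frequency in the general population
case_carrier_freq = 0.40  # reported frequency among T2D patients
population_risk = 0.08    # rough current estimate of T2D risk used above
odds_ratio = 1.42         # Rosengren et al. 2010 (strictly a recessive-model OR, used loosely here)

# Fraction of the whole population that both gets T2D and carries the variant:
print(f"{population_risk * case_carrier_freq:.1%}")   # -> 3.2%, as in the text

# Population attributable fraction: the share of cases that would vanish if
# the variant had no effect at all (treating the OR as an approximate risk ratio):
paf = carrier_freq * (odds_ratio - 1) / (1 + carrier_freq * (odds_ratio - 1))
print(f"{paf:.1%}")                                   # -> about 11%
```

However you slice it, a variant carried by 30% of healthy people can account for only a modest fraction of cases, nothing like the 40% a casual reader might take away from the blurb.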

I hoped to get to the bottom of this by the end of this post.  Instead, I remain confused.
-------------

*Update* Oct 26. Dr Rosengren replied to my email this afternoon.  He said that "the major allele frequency for the variant is in Table S1 in the Science paper. " That's this table:

Source
The risk variant is rs553668, for which the minor allele frequency is 0.143.

He also said that "T2D patients have a 40% higher frequency."  I have to say, this doesn't clarify things for me.
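
For what it's worth, here's a minimal Hardy-Weinberg back-of-the-envelope (my calculation, not the paper's) relating that allele frequency to the carrier percentages quoted earlier:

```python
# Hardy-Weinberg back-of-the-envelope; my calculation, not the paper's.
def carrier_fraction(q):
    """Fraction of people with at least one copy of an allele at frequency q."""
    return 1 - (1 - q) ** 2

q = 0.143                                  # minor allele frequency, Table S1
print(f"{carrier_fraction(q):.1%}")        # -> 26.6%, close to the 24-28% in Table S2

# One possible reading of "T2D patients have a 40% higher frequency":
print(f"{carrier_fraction(q * 1.4):.1%}")  # -> 36.0%, still not '40% of patients'
```

Neither reading obviously recovers the '30% of the normal population; 40% of patients' figures, which is more or less where this leaves me too.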

Additionally, he said that 'controls' with the variant are at higher risk of developing diabetes, which means, at least to me, that they aren't good controls.

Wednesday, October 22, 2014

Was John Snow more of an empiricist than the miasmatists?

If you know anything about epidemiology, you know that the iconic Broad Street pump in the Soho district of London is the site of what is considered to have been the first modern, epidemiological study.  This is where the man remembered as the first epidemiologist, John Snow, in the first empirical study of its kind, demonstrated that cholera is a waterborne disease.  Or at least that's the legend.

John Snow
The story is well-known in the field of epidemiology, but also beautifully told in Steven Johnson's 2007 book The Ghost Map. John Snow was a physician, anesthesiologist to Queen Victoria at a time when anesthesiology was just being developed, and given to testing innovative ideas that hadn't yet caught on.  He had proposed during a cholera outbreak in London in 1849 that the disease wasn't due to 'miasma', or bad air, as was widely thought, but instead was caused by a contagion in the water.  So the 1854 outbreak in Soho, near where he himself lived, became the perfect natural experiment for him to test -- or confirm -- this idea.


He gathered evidence
The first death in the 1854 epidemic occurred on August 31, and by September 10, 500 people had died.  Snow took the opportunity to collect as much data as he could.  He did an exhaustive review of the deaths, interviewing surviving family members, and drew a map (the 'ghost map') which showed that all the deaths clustered around the Broad Street water pump.  From his interviews, he determined that all those who became ill had drunk water from the well.  He confirmed his earlier reasoning that the worst symptoms were intestinal rather than respiratory, which meant to him that the agent was ingested, not inhaled.  He even found the index case, the case that began the epidemic -- a mother had dumped waste from the diaper of her infected baby near the well and, he reasoned, this had contaminated the well water.

Adding to the credibility of his theory were such findings as that there were no deaths among the 70 workers in a Broad Street brewery, because the men were given free beer all day so never drank water.  And, of the 530 inmates in the workhouse around the corner from the pump, only five contracted cholera, because the building had its own well so few inmates used the Broad St pump. He tried looking at the water under the microscope, but since he didn't know what he was looking for, he was unable to find the offending agent.

Armed with all this evidence, he was able to convince the local authorities that the Broad St pump was the source of the contaminating agent, whatever it was, that had caused the outbreak.  He urged them to remove the handle from the pump to prevent further contamination, and they did so, though grudgingly.  In fact, they recanted not long afterwards, most of the council never believing that cholera was caused by something in the water.

The removal of the handle is not what stopped the epidemic, however, as even Snow recognized.  Incidence had already begun to decline by the time the council took action.  By the end of the epidemic, Snow had at least convinced himself that his theory was correct, and he would certainly be pleased that history vindicated him.

Monster Soup commonly called Thames Water

More convincing evidence
But here's what I think is most interesting about this story.  The miasmatists had evidence, too.  An editorial in the London Times in 1849, for example, cited by Johnson, proposed five possible causes of cholera:
  • “A … theory that supposes the poison to be an emanation from the earth,” 
  • An “electric theory” based on atmospheric conditions,
  • The ozonic theory -- a deficiency of ozone in the air,
  • “Putrescent yeast, emanations of sewers, graveyards, etc.,”
Or, most unlikely, because it "failed to include all the observed phenomena,"   

  • Cholera was spread by microscopic animalcules or fungi.
It can't be argued that the editors at the Times, indeed, miasmatists everywhere, were not empiricists. They certainly were, and they were all weighing the evidence as well.  It was well-known that most cases were in cities, where the air smelled bad, and open sewers emanated putrid smells, and rivers were filthy and smelled atrocious.  Most cases were in low-lying areas, closer to swamps and so on than hilly areas.  So, the idea that disease was due to bad air made a certain amount of sense, based on available evidence.  Correlation equaled causation.

Wellcome images

But it has to be said that the same was true of Snow's idea.  He had a lot of circumstantial evidence, but he certainly didn't have the smoking gun.  The actual immediate cause of cholera was not to be identified for several more decades, when the German microbiologist Robert Koch (re-)discovered the causal organism, Vibrio cholerae, in 1883.  (It had first been discovered in 1854, however, by the Italian anatomist Filippo Pacini, though this wasn't well-known.)  So, Snow's evidence could only in fact be confirmed with hindsight.

Vibrio cholerae

Needed: a formal method for evaluating evidence
In the decades after Snow's study of cholera, Koch and Louis Pasteur contributed to the death of the miasma theory and the rise of the germ theory of disease, discovering first the organism that causes puerperal fever in 1860, and then, in the 1870s and 80s, the causes of tuberculosis, anthrax and cholera; many more followed.  They formalized the germ theory in 1878 and 1879, and this soon led to the beginnings of public health in Europe and the US.

How did they know that a microbe caused disease, though?  To standardize this, Koch proposed four postulates.
1. The microorganism must be found in abundance in all organisms suffering from the disease, but should not be found in healthy organisms.

2. The microorganism must be isolated from a diseased organism and grown in pure culture.

3. The cultured microorganism should cause disease when introduced into a healthy organism.

4. The microorganism must be re-isolated from the inoculated, diseased experimental host and identified as being identical to the original specific causative agent.
But, even he knew that many microbes didn't meet these criteria; they couldn't be grown in the lab, they might be found in healthy individuals, and so on.  Molecular Koch's postulates have been proposed in the modern era, but they, too, aren't always met.  So, it seems that demonstrating causation still often can't be done conclusively.

This all led to, or at least coincided with, the use of statistical criteria in epidemiology and genetics, and the rise of population-based evidence for causation.  In the 19th century, governments began to collect data for demographic and other statistics, and insurers and actuaries began to collect data and compute group statistics to calculate the probability of future events from known data.

Philosopher, logician, mathematician C.S. Peirce is credited with inventing randomized experiments in the late 1800’s, and they then began to be used in psychology and education.   Randomized experiments were popularized in other fields by R.A. Fisher in his 1925 book, Statistical Methods for Research Workers. This book also introduced additional elements of experimental design, and was adopted by epidemiology. Austin Bradford Hill published Principles of Medical Statistics for use in epidemiology in 1937.  And, population genetics, the Modern Synthesis (which showed that Mendelian genetics is consistent with gradual evolution), and discoveries in genetics laid the foundation for approaches to looking for the genetic basis of traits and diseases.

AB Hill suggested a set of nine criteria that could be useful for determining causation in epidemiology.  Called the "Hill Criteria", they are still considered useful today, even though Hill himself wrote that none except the requirement that cause precede effect is necessary.

Where are we now?
Despite all the progress, we still have trouble determining causation.  Does the use of antibiotics in livestock cause antibiotic resistance in humans?  There's some evidence that it does, but not every study supports this.  Thus, many animal breeders protest that the rise in antibiotic resistance is not their doing, and they should continue to be allowed to use antibiotics to promote growth.  As Maryn McKenna, author of books, magazine pieces and blog posts on antibiotic resistance, noted during a recent excellent discussion of the association of antibiotic resistance and use in animals on the radio program "What Doesn't Kill You" (here), it's true that the evidence isn't unequivocal, but as with smoking and lung cancer, it's time to declare an association and make policy from there.  That's the best we can do.

Demonstrating causation can be difficult in genetics, as well.  There are more than 2000 variants in the CFTR gene that are assumed to cause cystic fibrosis, for example, but this can lead to circular reasoning such that newly identified variants in the gene associated with CF are considered to be causal without causality being demonstrated, largely because it is very difficult to demonstrate.

Indeed, we now insist, as if this is a new discovery, on what is not-so-modestly proclaimed as 'evidence based medicine,' as if our predecessors ignored the facts.  We can say we're empirical, but so were the miasmatists. It's a matter of what we consider 'evidence.'

Tuesday, October 21, 2014

And it's even worse for Big Data.....

Last week we pointed out that the history of technology has enabled ever larger, more extensive and exhaustive enumeration of genomic variation, applied to understanding the causes of human traits, whether important diseases or even normal variation like that in the recent stature paper.

We basically noted that even the earlier, cruder methods such as 'linkage' and 'candidate gene' analysis did not differ logically in their approach as much as advocates of mega-studies often allege, and, more importantly, that all the approaches have for decades told the same story.  That is, from a genomic causal point of view, with a few exceptions, complex traits that don't seem to be due to strongly-causal simple genotypes aren't.  Yet the profession has been vigorously insisting on ever bigger, more costly, long-term, too-big-to-stop studies to look for some vaguely specified pot of tractably simple causation at the end of the genomic rainbow.


When a genetic effect is strong, we can usually find it in family members; Wikipedia

But the problem is even worse.  The history we reviewed in last week's post did not include other factors that make the problem even more complex.  First is somatic mutation, about which we wrote separately last week.  But environment and lifestyle exposures are far more important to the causation of most traits than genomic variation.  The obvious and rather clear evidence is that heritability is almost always quite modest.  With a few exceptions like stature (which is only an exception if you standardize height for cohort or age, that is, for differing environments), most traits of interest have heritability on the order of 30% or so.

But why are environmental contributions problematic?  If we can Next-Gen our way to enumerating all the (germline) genomic variants, why can't someone invent a Next-Env assessor of environments? There are several reasons, each profound and challenging to say the least.
1. Identifying all the potentially major environmental contributors, which may occur only sporadically, decades before someone is surveyed, unbeknownst to the person, or simply unknown to the investigator, is a largely insuperable challenge.
2.  Defining the exposure is difficult, and measurement error is probably large, to an unknown or even unknowable extent.  We might refine our list, but how accurately can we really expect to measure subtle things like behavioral habits in early childhood, exposures in utero, or even such seemingly obvious things as fat intake or smoking exposure?  (The simulation sketched after this list illustrates how such measurement error dilutes estimated effects.)
3.  Somatic mutations will interact with environmental factors just as germline ones will, so our models of causation are simply inaccurate to an unknown extent. 
4.  Suppose we could get perfect recall, and perfect measurement, of all exposure variables that affected our sampled individuals from in utero to their current ages (and even, assuming pre-conception effects on gene expression, such things as epigenetic marking).  That would enable accurate retrospective data fitting, but there is simply no way to predict a current person's future exposures. The agents, the exposure levels, and even the identity of future risk factors are simply unknowable, even in principle.
These facts, and they are relevant facts, show that no matter the DNA sequencing sophistication, we cannot expect to turn complex traits into simple traits, or even enumerable, predictable ones.
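
To make point 2 concrete, here's a minimal simulation, with purely hypothetical numbers, of the classic 'regression dilution' effect: random error in a measured exposure systematically shrinks its estimated effect toward zero.

```python
# Regression dilution in miniature: hypothetical numbers, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
true_exposure = rng.normal(size=n)                 # actual lifetime exposure
trait = 0.5 * true_exposure + rng.normal(size=n)   # true effect size = 0.5

for noise_sd in (0.0, 1.0, 2.0):                   # increasing measurement error
    measured = true_exposure + rng.normal(scale=noise_sd, size=n)
    cov = np.cov(measured, trait)                  # 2x2 covariance matrix
    slope = cov[0, 1] / cov[0, 0]                  # OLS slope of trait on measured exposure
    print(f"noise sd {noise_sd}: estimated effect {slope:.2f} (true 0.50)")
# Prints roughly 0.50, 0.25, 0.10: honest, unbiased noise alone makes the
# exposure look weaker and weaker, before any of the other problems above.
```

So even if we could enumerate the right exposures, ordinary measurement error would leave their estimated effects badly understated.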

More on history
In last Friday's (Oct 17) post on Big Data, we noted that history has shown that even cruder methods were successful at giving us the basic message we sought, even if it wasn't the message we wanted to find.

The history of major, long-term, large-scale studies in general shows another reason we invest false expectations in the desire for Big Data.  Leaving aside physics and other sciences, there have been a number of very big, long-term studies of various types.  We have had registries, studies of radiation exposure, follow-ups on diet, drug, hormone and other usage, studies on lipids in places like Framingham, sometimes with multi-generational follow-ups.  They have made some findings, sometimes important findings, but usually two things have happened.  First, they quickly reached the point of diminishing returns, even if the investigators pressed to keep the studies going.  Second, many if not most of the major findings have turned out, years or decades later, even in the same study, not to hold up very well.

This isn't the investigators' fault.  It's the fault of the idea itself.  Even things like, say, lipids and other such risk factors are estimated in different ways later in a study than they were earlier, as ideas about what to measure evolve.  Measurement technology evolves, so that earlier data are no longer considered reliable, accurate, or cogent enough.  The repeated desire is to re-survey or re-measure, but then all sorts of other things happen to the study population (not least of which is that subjects die and can't be recontacted).  Yet, and we've all heard this excuse for renewal in Study Section meetings, "after all that we've invested, it would be a shame to stop now!"

In a very real scientific investment sense, the shame in many such cases is not to stop now.  That involves some very difficult politics of various kinds, and of course scientists are people, too, and we don't just react to the facts as we clearly know them.

Is an Awakening in progress?
Here and there (not just here on MT!) are signs that not only is the complexity being openly recognized as a problem, but perhaps there's an awakening or a formal recognition of the important lessons of Big Data.

Although the advocates of Big Data Evermore surely remain predominant, a number of Tweets out of this week's American Society of Human Genetics meetings suggest at least a recognition among some that caution is warranted when it comes to the promises of Big Data. The dawning has not yet, we think, reached the serious recognition that Big Data is a wastefully costly and misleading way to go, but there is at least acknowledgment that genotype-based predictions have been over-promised.  How much more time, and how many intellectual as well as fiscal resources, will be scattered to the winds before a serious re-thinking of the problem or the goals occurs?  Or is the technology-first approach too entrenched in current ways of scientific thinking for anything like such a change of direction to happen?

Monday, October 20, 2014

'Obstetric dilemma' skeptic has c-section and remains skeptical ... & ... Why my c-section was natural childbirth

This is a new kind of Tale for me. The rock'n'roll's turned way up, and every couple sentences I have to stop typing to twirl a blue hound dog, a bear holding an umbrella, a Flying Spaghetti Monster, and other oddities that I strung up to hypnotize this little guy into letting me type one thought at a time:

The thing that needs to be hypnotized.
Or, as the three wise monkeys say: the thing that makes it impossible to create or to dwell on the negative (e.g. his birth by c-section)

That young primate's the reason I've been quiet for a while here on the MT. And he's the reason I'm a bit more emotional and I cry harder than usual at Rise of the Planet of the Apes (those poor apes!), Cujo (that poor dog!), and other tearjerkers. But he's also the reason my new favorite animal is plain old, fascinating, and dropdead adorable Homo sapiens.

In anthropological terms, he's the reason I'm overwhelmed, not just in love but in new thinking and new questions about the evolution of human life history and reproduction, and then what culture's got to do with it and with our reconstruction of it.

Some context would help, probably.

For the past few years I've been challenging the 'obstetric dilemma' hypothesis--the idea that hominin mothers' bipedal pelves have shortened our species' gestation length and caused infant helplessness, and that antagonistic selection between big-brained babies and constrained bipedal mothers' pelves explains childbirth difficulty too.

[For background see here or here or here or here.]

As part of all that, I've been arguing that the historically recent surge of c-sections and our misguided assumptions about childbirth difficulty and mortality have muddled our thinking about human evolution.

So, once I was pregnant, you might imagine how anxious I was to experience labor and childbirth for myself, to feel what the onset of labor was like, and to feel that notorious "crunch" that is our species's particular brand of childbirth. Luckily I was not anxious about much else the future might hold because modern medicine, paid for by my ample health insurance, would always be there to make it all okay. After a long pregnancy that I didn't enjoy (and am astonished by people who do) I was very much looking forward to experiencing childbirth. In the end, however, my labor was induced and I had a bleeping c-section.

But my bleeping c-section's only worth cussing over for academic reasons because the outcome has been marvelous, and the experience itself was out of this world.

We'll get to the reasons for my c-section in a second, but before that, here are the not-reasons...

First of all, I did not have a c-section because I fell out of a tree with a full bladder.

Second of all, shut your mouth... a c-section was not inevitable because of my hips.

Okay, you got me. I've never been even remotely described as built for babymaking. My hips are only eye-catching in their asymmetry. One side flares out. It might be because when I was 15 years old I walked bent-kneed for a few months pre- and post-ACL reconstruction. That leg's iliac crest may have formed differently under those abnormal forces because, at 15, it probably wasn't fused and done growing yet. If you like thinking in paleoanthropological terms like I do, then my left side is so Lucy.

Anyway. I'm not wide-hipped. However, guess how many of the nurses, doctors, or midwives involved in our baby's birth think my pelvis was a noteworthy factor in my c-section? Not one.

Hips do lie! Inside mine there's plenty of room to birth a large baby. Two independent pelvic exams from different midwives (who knew nothing of my research interests at the time) told me so, and it sounded like routine news to boot. Although one midwife asked me "do you wear size nine and a half shoes?" (no, I wear 8) which was her way of saying, "Girl, you're running a big-and-tall business. You got this."

What you probably know from being alive and knowing other people who were also born and who are alive (or what you might hear if you ask a health professional in the childbirth biz) is that most women are able to birth babies vaginally, even larger-than-average babies. And that goes for most women who have ever lived. Today, "most women" includes many who have c-sections because not all c-sections are performed because of tight fit between mother's birth canal and baby's size. As I understand it, once the kid's started down into the birth canal and gets stuck, a c-section's no longer in the cards. So performing c-sections for tight fit is a preventative measure based on a probability, not a reflection of an actual tight fit. In the mid 20th century, tight fit used to be estimated by x-raying pregnant women and their fetuses. Can you imagine? And this was right about the time the obstetric dilemma hypothesis was born. I don't think that's a coincidence.

Here's a list of reasons for c-sections. Tight fit is included in the first bullet point. Tight fit is one of the few quantifiable childbirth risks. No wonder it's so prominent in our minds. That list excludes "elective" ones which can be done, at least in Rhode Island, if they check the box that says "fear of childbirth". And that's not even close to a list of reasons why women around the world and throughout history have died during or as a result of childbirth. For example, about a hundred years ago women were dying all over the place because of childbed fever.

Anyway, we should assume that I am like most women and expect that I could have given birth the way Mother Nature intended: through my birth canal and with the participation of other humans. Oh yeah, when it comes to humans, social behavior and received knowledge are part of natural childbirth. Even this natural childbirth (which has inspired a forthcoming reality television show featuring women giving birth in the wild!) involves the supportive and beneficial presence of other humans as well as the culture that the mother brings to the experience.

But a c-section's just culture too, so could it be part of "natural" childbirth, then?

I'm inclined to blurt out yes, of course! because I don't support calling anything that humans do "unnatural." But I know that's not something everyone agrees with. It's politics. For example, many of you out there don't flinch an inch at the subtitle of Elizabeth Kolbert's book, "The Sixth Extinction: An Unnatural History."  And given the present energetic movement against childbirth interventions, describing c-sections as "unnatural," like climate change, could help minimize unnecessary ones for those who wish to give birth vaginally.

So there we have it. These are the two enormous issues raised by my own little c-section: What can it teach us about the evolution of gestation length, infant helplessness, and childbirth difficulty? And could it be considered natural?

One way for me to get at these questions is to try to understand why I experienced "unnatural" childbirth in the first place. So here goes.

Here's why I think I had to have a c-section:

1. My pregnancy ran into overtime.


This is expected for nulliparous mothers. I visited one of my OBs on my due date. He put his finger on the calendar on the Friday that was two weeks out and joked, "Here's when we all go to jail." Then he asked me, "Who do you want to deliver your baby? I'll see when they're on call before that Friday and schedule your induction then." And I chose my favorite midwife and he scheduled the induction.

All right, so I was running late compared to most women, but that's still natural, normal. But it also means risks are increasing by the day. And no matter how small those risks, the fact that the professionals know how to mitigate the biggest risks of all, *deaths*, means that they try to do that. They're on alert already as it is, and even more on edge when you're overdue. Especially when it's your first baby and you're a geezer, over 35 years of age.

Now, does going overdue mean the baby keeps growing? Maybe, but not necessarily and not necessarily substantially. Both of us, together, should have been reaching our maximum, metabolically. There's only so much growing a fetus can do inside a mother.

When I approached my due date, and then once I went past it, I tried to eat fewer sweets to make it less comfortable in my womb. I also went back to taking long, hard walks, five milers, even though it was hard on my bladder because I thought that might help kick him out too. I even ran the last of my five miles the day before my induction, to no avail other than the mood boost it gave me.

2. I didn't go into labor naturally by my due date or by my induction date 11 days later. 

Although my cervix was ripening, when I went in to be induced I was only dilated 0-1 cm. I had 9+ more to go before the kid could get out at 10. So a balloon catheter was inserted and filled with water, and I had to tug on the tail of it, which tugged the balloon, which put pressure on the cervix. The cervix dilated enough that the balloon fell out several hours into the process, and by morning I was dilated 3-4 cm. That many centimeters was exactly the goal of the catheter. All was going well. However, that the cervix did not open on its own is already a missing piece of going "natural," of having my own biology contribute to my childbirth experience. So starting this way is already derailing things, making it difficult for anything natural to follow, naturally.

3. The fetus's head was facing the hard way: sunny-side up.

This was assessed by the midwife. Even cradling my belly in a bedsheet, with me on all fours, she and I could not twist him into a better position. His head, she said, was probably why I did not dilate naturally. When I asked an OB during my postpartum check-up, "What dilates the cervix?", he said, "We don't know. But I can tell you it's not with the head like it's a battering ram." Well, then... hmph. And then I asked him if women carrying breech fetuses have trouble dilating their cervixes, or going into labor naturally, and he said not necessarily. No. Hmph.

Regardless of what causes cervical dilation, if the head isn't facing the right direction, it's notoriously tough for it to get down into the birth canal, let alone through the birth canal. It's not impossible, not even close. But it's not looking good at this point either. Perhaps the contractions will jostle his head into a better position, they said. And the contractions should further dilate the cervix.

4. Contractions didn't get underway, naturally, after the catheter dilation, so the drug pitocin was used. 

Induction and pitocin increase the chances that a mother will ask for drugs to help with pain and that she will have interventions, like a c-section. See for example this paper. What the causes are, I'm not sure. But the correlation is worth pointing out here because, without even getting into hard labor yet, and without finding out whether my cervix could do its job, I was already more likely than ever to be headed to the operating room.

5. After six hours of easy labor and five hours of intense labor, my cervix never dilated past 5 cm.  

It needs to get to 10 cm for the baby to start moving into the birth canal. Just like with due dates, I think that assigning this number to all women across the board is probably not consistent with variable biology, but it's how it's currently done. And maybe any higher resolution, like "Sally's cervix needs to hit 9.7 cm," is pointless.

After several hours of pitocin-induced contractions--which at first felt like the no-big-deal Braxton-Hicks ones I'd been having numerous times daily for the whole third trimester--I had only dilated 1 cm more. That's even after they upped the pitocin to make the contractions more intense.

But after they saw I'd made essentially no progress and that I was napping to save my energy for when things got bad, they woke me up and broke my bag of waters. It would have been nice if they could have let my labor progress slowly, if that's what my body wanted to do, but remember, my personal biology went out the window as soon as induction began. And when that amniotic fluid oozed out of me, that's when bleep got real.

Every two minutes and then every one and a half, I grabbed Kevin's extended hand and breathed like an angry buffalo humping a locomotive. It was the worst pain of my life and I was afraid I'd never last until 10 cm, so I took the stadol when I told the nurse my pain was now at a 9 out of 10 (all previous answers to this question were no higher than 2). I was going to avoid the epidural no matter what, even at this point, because I was more afraid of the needle sticking out of my spine for hours of labor than I was of these contractions. I have no idea if the stadol dulled any pain, because the pain just got worse, but it did help psychologically because it put me to sleep between contractions. There was no waiting with anxiety for the next one, and time flew by. But after five hours of this, I had not dilated any more. But I had vomited plenty! And although I'd fended off the acupuncture (FFS!), I folded weakly and, for the peace of mind of a wonderfully caring nurse, I allowed a volunteer to perform reiki on me. And what a tragedy it was! Wherever she is, there's a good chance she gave up trying to help laboring women, and she may have given up reiki altogether.

The hard labor story ends at five hours because that's about when the nurse actually screamed into the intercom for the doctor. My contractions were sending the fetus into distress.

6. After five hours of intense labor, the fetus was experiencing "distress" at every contraction, as interpreted from his heart-rate monitor. 

Basically, he was bottoming out to a scary heart rate and only very slowly coming back to a healthy one just in time to get nailed by another contraction. By the way, this is the official reason listed in my medical records for my c-section: fetal distress.

I know that a heart-rate monitor on the fetus is another one of those medical practices that increases the chances of an "unnatural" childbirth. That's probably because all fetuses are distressed during labor, but observing the horror, and then guessing whether it's safe to let it continue, is seemingly impossible. So at some point, like with me and my fetus, they get alarmed, and then how do you back down from that? They gave me an oxygen mask, which immediately helped the fetus a bit, but like I said, hackles were already up at this point. Soon thereafter we had a talk with the doctor about how I could go several more hours like this and get absolutely nowhere with my cervix, and then there are those life and death matters. She never said c-section. I had to eke out between contractions, "So are you saying we need to perform a c-section?" and she said yes, and urgently. A c-section sounded like the only solution at this point to battered, old me, to clear-minded Kevin, and clearly to the delivery team (and in hindsight, it still does to Kevin and me). Then, lickety-split, the anesthesiologist arrived, got acquainted with our situation, and made me vomit more. And then, like a whirlwind, Kevin's putting on scrubs, and we're told to kiss, and I'm jokingly protesting "I'm a doctor too!" while being wheeled into the operating room because I cannot walk through my contractions.

It's bright white, just like Monty Python said it would be. I sat on the crucifix-shaped operating table to receive all the numbing and pain-killing agents through my spine. Somehow they pulled this off while I was still having massive contractions. Then I lay down, arms splayed out to the side, and they drew a curtain across my chest, a nurse told me how creepy it was about to be, and they got to work.

Although the c-section wasn't painful, I could feel everything. This was my childbirth experience. I felt the incision as if she were simply running her finger across my belly, and I felt the tugging and the pressure lifting from my back as they extracted my baby from me. After that, and after I got a short glimpse of him dangling over my left arm--"He's beautiful! He's perfect! He's got a dimple! He growled!"--I continued to feel many things, probably the birth of my placenta, etc...

But I didn't know what exactly I was feeling until I watched a video of a c-section on YouTube. Kevin helped fill in the details too. He had caught a naughty glimpse of the afterbirth scene before being chased back to his designated OR spot with the baby. Thanks to him (and that video) I know now that I was feeling my enormous muscular uterus and some of my intestines being yanked completely out of a small hole right above my pubic bones and then stuffed back in. For a few moments, it must have looked like I was getting re-inseminated by a red octopus.

I tell everyone that it was like going to outer space to give birth. And this, if you know me, is an exciting idea so my eyes are smiling and I sound dreamy when I say "it was like going to outer space to give birth!" I bet you're thinking it's the Prometheus influence, but you'd have the wrong movie. The correct one is Enemy Mine. And it's much more than that, actually. I was as jaw-dropped and awe-struck by humanity during my childbirth experience as I am by space exploration. The orchestration, the specialization, the patience, the years of study, of planning, the calculations, the dexterity. To boldly go. Wow. Like I said, humans are my new favorite animal.

I was back in our little room quicker than most pizza deliveries, where our bright red new baby was trying hard to nurse from his daddy. Then he nursed from me. And the story's all mushy weepy cuddly stuff from now on. So let's not. Let's remember what we're here for. Okay. Right.

7. The cord was wrapped twice around his neck. 

We found this out when he was cut out of me. That didn't help with moving him around in utero to a good position, nor did it help with oxygen flow during contractions! On its own, though, it would not necessarily have prevented a safe vaginal birth.

8. He was enormous. His head was enormous too. 

He came out a whopping 9 pounds, 13 ounces, and 22.25 inches long, with a head circumference of 15.5 inches. They say that's heavier than he'd be if born vaginally because he didn't get all the fluids squeezed out of him. But still, that's large. According to the CDC, he was born as heavy as an average 3.5-month-old boy. His head was about the size of an average 2.5-month-old's.

Red line is our baby's head circumference at birth. (source)
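For readers who think in metric, here's the arithmetic on those birth measurements (my conversions, not numbers from the chart or the hospital):

\[
\begin{aligned}
9~\text{lb}~13~\text{oz} &= 9.8125~\text{lb} \times 0.4536~\text{kg/lb} \approx 4.45~\text{kg (weight)} \\
22.25~\text{in} \times 2.54~\text{cm/in} &\approx 56.5~\text{cm (length)} \\
15.5~\text{in} \times 2.54~\text{cm/in} &\approx 39.4~\text{cm (head circumference)}
\end{aligned}
\]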

Way back at the mid-pregnancy ultra-sound, we knew he was going to be something. And then if you'd seen me by the end, like on my due date, you might have guessed I was carrying twins. I was so big that my mom joked she thought maybe a second fetus was hiding behind the other one, undetected.

Smiling on my due date because pregnancy was almost over. 
(By the way, I could still jog and I dressed weird while my body was weird.)

If I hadn't had the means to eat as much as I did during pregnancy, perhaps he wouldn't have grown so large inside me. If I hadn't lived such a relaxed lifestyle while pregnant, maybe he wouldn't have grown so large inside me. If I hadn't had a medical safety net waiting for us at the end, perhaps I would have been scared into curbing my appetite from the get-go. I gained 40 pounds. With this body, but in a different life, a different place, a different time, maybe I wouldn't have. Probably I wouldn't have.

His size has got to have influenced a few of those other contributors to my c-section. But clearly it's more complicated than his size. And this brings us back to the obstetric dilemma. Let's say he was too big or that his large size screwed everything up, even if he could technically fit through the birth canal. Well then, why didn't I go into labor? Labor triggers are, to me, a significant problem when it comes to explaining the evolution of gestation length in humans, and whether we have a unique problem at the end.

If our pregnancy length is determined by available energy, energy use, and metabolism (here and here), then women like me who go overdue, who are clearly not killing our babies inside us either, are just ... able to do that. But doing that clearly leads to problems in a species like ours, one of the few known to have such a tight fit between baby and birth canal to begin with.
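To sketch the energy version of the argument (the roughly 2.1x basal metabolic rate ceiling is my gloss on the energetics-of-gestation literature, not a number stated in this post), the idea is that labor is triggered around the point where the pregnancy pushes the mother's total energy expenditure against her sustainable metabolic limit:

\[
E_{\text{mother}}(t) = \text{BMR} + E_{\text{gestation}}(t), \qquad \text{labor when } E_{\text{mother}}(t) \to \sim 2.1 \times \text{BMR}
\]

On that view, an overdue mother is simply one whose energy budget hasn't yet hit the ceiling.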

If our pregnancy length is determined by our birth canal size, and any anatomical correlates, then why didn't I go into labor before my fetus got so big? What went wrong? What's frustrating too is, for my n of 1, we'll never know if I could have given birth vaginally because I never got the chance to try.

These seem like simple questions but they are deceptively complex. And I think some exciting discoveries will come from medicine and anthropology in the coming decades to explain just how our reproduction works, which will in turn help us reconstruct how it evolved.

What's my birth experience got to do with evolution? Why, everything. It's got everything to do with evolution, because if it's not evolution, it's magic. And that's kind of where I'm coming from when I say that my c-section was still natural childbirth. It wasn't unnatural and it certainly wasn't supernatural. Sure, it's politics. I'm invested in the perspective that humans are part of the evolving, natural world and want others to see it that way or, simply, to understand how so many of us see it that way. But it's not just evolution that's got me enveloping culture into nature and that's got me all soft on the folks who drive fancy cars and who cut my baby out of me.

Who knows what could have happened to my son or to me if we didn't have these people who know how to minimize the chances of our death? It's absolutely human to accumulate knowledge, like my nurses, midwives and doctors have about childbirth. Once learned, it's difficult for that knowledge to be unseen, unheard, unspoken, unknown. Why should we expect them to throw all that away so that we can experience some form of human being prior to that knowledge?

Nature vs. Culture? That's the wrong battle.
What matters is which one can fight hardest on my behalf against the unthinkable.


Maybe childbirth is so difficult because it can be. We've got all this culture to help out when things get dicey, with or without surgeons. On that note, maybe babies are so helpless because they can be. We've got all the anatomy and cognition to care for them, and although the experiment would be impossible, it's doubtful any other species but ours could keep a human baby alive for very long. It could just be our dexterous hands and arms, but it could be so much more, like awareness of their vulnerability and their mortality, and (my favorite pet idea) awareness that they're related to us. Culture births and keeps human children alive with or without obstetricians. It's in our nature. Maybe it's time we let all this culture, our fundamental nature, extend into the operating room.