Thursday, February 28, 2013

The Brain Drain.....on our budget!

Well, we've seen case after case of Big Science projects that yielded more hype than heft.  GWAS and other 'omics monsters are examples.  So is the $1 billion (as in Billion) study designed to follow up children that hasn't followed a single one up yet.  There is ENCODE, quickly being exposed (e.g., here) as the fund-gobbler with limited or even questionable results that it is.  And the Thousand Genomes project.  And then there are the relentless, repetitive studies of diet and other go-hardly-anywhere or say-the-obvious-again projects that get the headlines.

And now, perhaps not totally coincidentally timed just when the President has to make a decision about cuts, comes the urgent, world-revolutionizing brain mapping project (BAM!).  Even though no details have yet been announced, already it is being blasted by people who know about the project and have the brains to see through its flimsy rationales (e.g., here). And, it's interesting that so many geneticists, who of course benefitted from their own Big Science decade+, are coming out against it.  Not surprisingly, since it would steer funding away from their own projects.

These projects do have some scientific questions, and identify areas of our limited knowledge.  But these are nearly after-thoughts--the approach is, first, let's look at everything that technology can find, without having to think seriously beforehand about why we're doing it (i.e., without having to state some useful hypotheses).  And second, they seem to be transparent strategies for being bad rather than good citizens:  NIH, and its PT Barnum leader Francis Collins, lays this Big One at Obama's feet just when it comes time to think about budget cuts. And the EU has, even in times of austerity, ponied up a half-billion Euros for its own version of the Brain Drain (The Human Brain Project) -- is this a case of the US not wanting to be left behind?  If the President says we can't do this all the way, then will he feel pressured to temper other cuts?  This, or something like it, must be what's in the minds of Big Science's lobbyists.

There is waste aplenty, crises in science research and publishing, and the like.  But the very well organized university-science-industry welfare system knows how to propose projects easy to brag about and hard to turn down, to make sure the cuts happen on somebody else's lawn.  Proposing huge new projects of the 'omics' sort (do everything all at once, without real ideas) in the face of a budget crisis is basically to sneer at the public good, a scientific arrogance that has all the earmarks of a cynical disregard for society at large and a shallowly selfish form of guild-protection.  Or is this too cynical a view on our part?

No proposed project is entirely worthless, even 'mapping' the human brain.  But the mind-set or stratagem of co-opting research funding by going for Big Science is destructive to science itself, making safe, incremental, essentially thought-light (hypotheses need not apply) work the norm, restricted to a set of investigators who have to toe the line as components of the bigger project.  These are becoming more and more top-down, NIH-administered mega-groups, rather than independently initiated projects (known as R01 applications), and the same is likely happening in other funding agencies.

Investigators bemoan the reduction in R01 funds and the flood of applications, but investigators desperate for funds when the chance of success per application is low churn out applications, and most of them are safe, incremental projects following fads that seem fundable.  Investigators submit many applications a year, and who can blame them?  Unless there is some real squeeze that forces the system to fund what is really innovative or addresses real problems, which is not the tenor of our Big Science times, making more money available for R01's would be good, but won't solve the problems.

There is now a long track record to show that what we say is not so wrong-headed.  Yes, even after you filter out the hurricane of hyperbole, most projects find things and, yes, there are improvements in knowledge or even occasionally in medical care.  Some of them are quite important.  But that's not the same as being worth it, or yielding a greater payback than more focused studies of more clearly soluble problems would have.

The rat-race this imposes on the academic research system, and the hungry dependence of universities on external grants, are destructive of jobs, job security, morale, and of scientific progress and innovation itself.

This does raise a countervailing problem, however.  We already have an excess of people with advanced degrees who can't get jobs, or the kind of jobs they've trained for.  This is separate from the debate about whether there is a shortage of adequately trained technical science and engineering graduates and whether K-12 and research-obsessed universities are dropping the training ball.  We recruit too many graduate students, largely to do our research for us or help us teach, so we can keep getting those grants that often don't produce that much, and then the grad students find that the real jobs they need aren't out there afterwards.  The abuse of the system is worse in professional schools than in real universities with students, because professional schools (medical, public health, etc.) pay little of their faculty's salary and can't live on the tuition of their relatively small student bodies.  This is not their fault so much as the fault of the system we've allowed to be built.

Thus, cutting research funding to eliminate minimally useful or wasteful projects--reducing the Brain Drain--will force a cut-back in our convenient but excess scientific labor pool, as Karl Marx might have referred to it.  Faculty and staff will lose jobs, as will those who make, advertise, and distribute the materials labs use, publish research journals, and the like.  So, budget adjustment rather than just cuts is what we really need.

We all want things that we do to continue.  We build interest groups, settle into comfortable existence, and fight threats to the status quo.  All of this is only natural.  But why should scientists or bureaucrats have an easier job-finding time than people in 'lower' walks of life?  The proper and humane attitude is for the granting agencies to be public-spirited and volunteer cuts--real cuts--in the research budget, but cuts that are phased and tied to reforms that will continue to provide more secure, if more modest, funding to more (especially younger) investigators--to take the chance that a more diverse, more focused (rather than grandiosely omics-scale), less frenzied science ecosystem will produce greater, better fruit than the current one has been doing.

And our nation would then not need to suffer the impending Brain Drain.

Wednesday, February 27, 2013

Bisphenol A: Forgotten lessons from Science 101

Bisphenol A (BPA) in the plastics that come into contact with our food is responsible for a lot of what ails us, from infertility to diabetes.  Isn't it? 

BPA
BPA is a chemical with properties that resemble estrogen, and as such is thought to be a risk factor for "diseases of westernization" such as heart disease, diabetes, hypertension and stroke, although it must be said that study results are equivocal, some showing an association and some not.  BPA as a component of baby bottles was thought to be particularly worrisome, as early exposure has been suggested to be responsible for neurological disorders, with other who-knows-what long-term effects.

But now a speaker at the American Association for the Advancement of Science (AAAS) meetings that took place in Boston this month suggests that while there may indeed be an association between blood levels of BPA and illness, it may not be causal.

The Independent reports:
A more likely explanation is that obese people who overeat - and who develop chronic illnesses as a result - are also more likely to have raised levels of BPA in their bloodstream from plastic food packaging, said Professor Richard Sharpe of the Medical Research Council’s Centre for Reproductive Health in Edinburgh.
According to Sharpe:
“None of these studies actually shows that bisphenol A exposures causes the disorders, the exposure is simply associated with occurrence of the disorders,” he said.
“If this association was due to cause and effect, it would mean that bisphenol A was incredibly potent and toxic, and this does not agree with published studies. This possibility therefore seems illogical,” he added.
So, according to Richard Sharpe, the correlation is nothing more than an indicator that obese people, who are at higher risk of chronic diet-related diseases, simply have higher BPA blood levels because they eat more food that comes into contact with plastic.  But it's the food itself that's causal, responsible for obesity, which then causes the diseases for which BPA is being blamed.  And, Sharpe says, studies show that the BPA levels in blood are far too low for the chemical to be responsible for disease. 

Sharpe has been arguing his case for several years now, including in a 2010 piece, again in The Independent, in which he said that the original animal studies that showed an effect were not replicable, and that implanting BPA under the skin and injecting it into the blood, which is how many animal studies have tested its effect, isn't representative of human exposure.  We ingest it, and it is thus broken down in our gut.

But, on another public platform, Nicholas Kristof argued strongly in his widely-read column in the New York Times last August ("Big Chem Big Harm") that BPA is known to be responsible for many diseases and disorders, including autism, hyperactivity, aggression and depression in children, and breast cancer and diabetes in adults. He concluded that regulation of chemicals must be much stricter than it is.  


We aren't here to adjudicate.  But, this is rather astonishing, really.  One thing that the reported correlation has unquestionably caused, and Kristof's op/ed is an example, is a lot of fear, and BPA has been removed from baby bottles, and a lot of BPA-free choices are now available for other containers, like water bottles and the like.  How could the "science" be so backward here that apparently no one thought to consider the "Correlation doesn't equal causation" adage, something everyone should have learned in Science 101? 

Yes, determining causation can be problematic. But still, it's rather astonishing that simply questioning the direction of the association is enough to get a speaker a platform at a major scientific conference. 

Causation is a problematic concept, and its nature is far from obvious
Causation is a very complex subject that has been debated since, well, since there were debates.  The classical idea was that some 'thing' is responsible for another 'thing' in the sense that if the first is present the second will ineluctably follow.  That's causal determinism, and it is what people generally have in mind.  Some argue, and have for a long time, that this is an oversimplification and that truly deterministic properties, or laws, of nature are an illusion.  More accurate, from that point of view, would be a more probabilistic notion of such antecedent-consequent relationships.

One thing has been essential from the beginning, and that is a time ordering: cause must come before effect.  When and how time has such a clear-cut arrow is a subject for other times and isn't trivial.  Here, however, the issue can be seen in the common idea of causation: when A always precedes B, and whenever A is around B is sure to follow, then A 'causes' B.

But it is easy to see why correlation between two factors--even if one can order them--clearly doesn't mean causation.  In the above example, plastic and over-eating both precede the subsequent diseases.  If all we observe is the plastic, we can make that the A in 'A causes B'.  But once we know that there is also a C, one that also precedes B, it becomes more challenging to assign actual cause....whatever that is.

Unfortunately, until a mechanism is discovered we can't always decide whether A or C, or even some other correlate, is causal in regard to B.  This is a quandary that affects many areas of science, but especially observational ones like epidemiology, where we often can't set up a controlled experiment to investigate the question.
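To make Sharpe's scenario concrete, here is a minimal toy simulation--the numbers are ours, invented purely for illustration, not taken from any BPA study.  A confounder C (overeating) drives both A (blood BPA) and B (disease), so A and B correlate strongly even though A does nothing at all to B:

```python
# Toy confounding sketch: C causes both A and B; A has zero effect on B.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

overeating = rng.normal(size=n)                  # C: the shared cause
bpa = 0.8 * overeating + rng.normal(size=n)      # A: tracks C, no effect on B
disease = 0.8 * overeating + rng.normal(size=n)  # B: caused only by C

print(np.corrcoef(bpa, disease)[0, 1])  # ~0.39: clear correlation, zero causation
```

An observer who measured only BPA and disease would see a robust correlation and might conclude causation--which is exactly the inference Sharpe is warning against.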

Tuesday, February 26, 2013

Mediterranean diet cuts risk of heart disease!

This story is making headline news yet again, including at the New York Times, where Gina Kolata describes a new study done in Spain of the effects of adding extra virgin olive oil or nuts, fish, fruits, legumes and red wine to the diet, and reducing the consumption of red meat and processed baked goods.  (Note that the pyramid below was adapted from Consumer Reports Nov 1994.)

From www.womensheart.org
The total study group of about 7000 men (aged 55-80) and women (aged 60-80) was divided into 3 subgroups, and asked to follow the Mediterranean diet (MD) with extra virgin olive oil, the MD with nuts, or a low-fat diet.  They weren't asked to lose weight or to exercise.  The MD groups were given olive oil or nuts every week, and counseling as to how to follow the diet, while the low-fat diet group (the control group) was given a pamphlet on how to follow the diet when enrolled in the study, and then annually until 2006, when it was recognized that their adherence to the diet was poor, at which point researchers added further intervention. This group still never could consistently follow the low-fat diet, and instead were essentially eating their usual diet. 

Recruits were people without heart disease but with type 2 diabetes, or at least three major risk factors ("smoking, hypertension, elevated low-density lipoprotein cholesterol levels, low high-density lipoprotein cholesterol levels, overweight or obesity, or a family history of premature coronary heart disease").  They were followed up until either they dropped out or until the year 2010. The total person-years in the study was about 12,000, 11,000 and 10,000, by group. 

The paper is published in the New England Journal of Medicine.  The researchers assessed primary and secondary outcomes, with the former being heart attack, stroke or death from cardiovascular disease, and the latter being heart attack, stroke, death from cardiovascular disease or death from any other cause. 
The median follow-up period was 4.8 years. A total of 288 primary-outcome events occurred: 96 in the group assigned to a Mediterranean diet with extra-virgin olive oil (3.8%), 83 in the group assigned to a Mediterranean diet with nuts (3.4%), and 109 in the control group (4.4%). Taking into account the small differences in the accrual of person-years among the three groups, the respective rates of the primary end point were 8.1, 8.0, and 11.2 per 1000 person-years. The unadjusted hazard ratios were 0.70 (95% confidence interval [CI], 0.53 to 0.91) for a Mediterranean diet with extra-virgin olive oil and 0.70 (95% CI, 0.53 to 0.94) for a Mediterranean diet with nuts.
And,
In this trial, an energy-unrestricted Mediterranean diet supplemented with either extra-virgin olive oil or nuts resulted in an absolute risk reduction of approximately 3 major cardiovascular events per 1000 person-years, for a relative risk reduction of approximately 30%, among high-risk persons who were initially free of cardiovascular disease.
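For the record, the arithmetic connecting the quoted event rates to those two summary figures is simple:

\[
\mathrm{ARR} = 11.2 - 8.1 \approx 3 \ \text{events per 1000 person-years},
\qquad
\mathrm{RRR} = \frac{11.2 - 8.1}{11.2} \approx 28\%,
\]

which rounds to the "approximately 30%" relative risk reduction the authors report.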
There was no effect of diet on mortality from all causes.  That is, the difference in total number of deaths between groups was not statistically significant.  The effect of diet on cardiovascular disease was apparently through stroke, not heart attack.  People following the Mediterranean diet did not lose weight, nor reduce the amount of fat in their diet, so the effect, the researchers say, was of dietary components alone. 

Interestingly, when researchers compared the merged MD groups with controls recruited before and after 2006, they found that "adjusted hazard ratios were 0.77 (95% CI, 0.59 to 1.00) for participants recruited before October 2006 and 0.49 (95% CI, 0.26 to 0.92) for those recruited thereafter (P=0.21 for interaction)."  Remember that the researchers decided to do more intervention with the control group after 2006. 

But how is that to be interpreted?  Were controls adhering better to the low-fat diet after 2006, and does it turn out to be significantly worse than the MD?  Or vice versa?  Did the act of intervention itself make a difference somehow, or were the people recruited after 2006 metabolically different, older, sicker or something else relative to those recruited before?  Whatever the reason for the difference, it does suggest that comparison between the three groups is not a simple comparison of three different diets.

If this study is as significant as the write-ups are saying, it means that the effect of changing diet can be significant enough to be detectable within a relatively short time span (this study spanned 2003-2010, with subjects apparently included for varying lengths of time).  This suggests that the effect of a non-Mediterranean diet over a lifetime is reversible, and thus that risk isn't as genetic as some think (not a surprise!), that risk factors like blocked arteries may be reversible by diet, that cholesterol can be changed by diet (this is also well-known), that being overweight is not a significant risk factor (results on this issue go back and forth), and that low-fat diets aren't protective (this, too, has been shown before, though doesn't seem to have caught on with the public).

And, while this study does confirm things that have already been known, actually rather well and for quite a long time, it also means that people on the Mediterranean diet still suffer and die from cardiovascular disease, and heart attacks in particular were not reduced.  Rather than 11 primary-outcome events per 1000 person-years, there were 8.  So, the difference may be statistically significant, but it's not qualitatively huge, like 25 vs 2, or even 11 vs 2.  Though, of course, if you're one of those three per thousand, that's an incalculable difference.  Further, we don't know whether it's eliminating red meat and baked goods, rather than adding olive oil and wine and nuts, that makes the difference.

One can ask to what extent we should even be doing more and more studies of the same basic idea, once we have systematic data in its favor (dramatic disease benefits were found long ago in the North Karelia project, for example, a huge dietary intervention study in a region of Finland that had, at the time, the highest CVD rates in the world).  There are various ways to measure effects and benefits, and to define outcomes, and these are relevant to evaluating any studies of diet and health.

If nothing else, this study is a reminder that if you reduce deaths from one cause, deaths from other causes go up.  People do still die of something. 

Monday, February 25, 2013

Yabba-dabba-doo-doo! Dino and Friends take an outing

Some things aren't actually concocted by Hollywood even if they seem so implausible that they must be in some sort of animated fantasy film.  If it's true, and not just a joke, that Oklahoma is about to pass a law that makes it illegal to fail students in science classes if they disagree with actual scientific findings ("academic freedom"), then Hollywood has a competitor, and it's not Bollywood.

Creation Museum diorama; Wikipedia
If one can assert that people were contemporary with dinosaurs then there isn't much one can't assert.   Actually, years ago there was a professor (not named Barney Rubble) at the university where I was a graduate student who taught just that.  Well, evolution wasn't his specialty, so one can be forgiving!

That, with all the money and effort we spend on education, and all the TV shows, magazines, videos, and so on that make even more money purveying hyper-excited science, we still have more than a trivial fraction of people who could possibly hold the views that apparently represent the majority of the citizens of Oklahoma (or, at least, the drubs they elect to the legislature), is a remarkable fact.

The problem here is that this legislation is not a corrective to over-stated or over-confident science, which certainly does exist.  Hyperbole is the order of the day.  And it must be said that a lot of, if not the majority of, scientists teaching subjects like biology at the university level are filtering the material through their particular take on the unknowns.  Students may rarely be aware of this, given the tendency to oversimplify or overstate.  But criticism of that sort isn't a matter of blind anti-scientism!

Dino and Friends take an outing!
Similarly, who are we passing through our schools of education and then certifying to teach K-12 science?  It is said that in some Scandinavian countries, to get such jobs you must graduate in the top third of your university class!  In Finland I was recently told that that is not so.....but that it's harder to get into a school of education than into medical school.  Well, some countries have standards.

A Mother Jones article on the Sooner Stupor story says that some in this country are as blindly dogmatic, in such ways, as Islamic fundamentalists are.  One might view this as a broad historical phenomenon: we mirror our enemies.  We were more socialistic when our opposition came from communism, and now that our opposition comes from Islamic fundamentalism, we drift toward our own fundamentalism, too.

Unfortunately, that's not entirely convincing for situations like this.  There has been blind anti-scientism for a long time in this country, and anti-evolutionism was the view of some prominent western biologists in the late 19th and early 20th centuries.

The story also quotes Erik Meikle, director of the National Center for Science Education, saying that "An extremely high percentage of scientists will tell you that evolution doesn't have scientific weaknesses."  This is of course totally false as stated and hopefully a reporter's misquote!  But it is also a common sort of overstated, even misleading statement by the NCSE.  Their own publications, very useful as they are as an antidote to creationist insanities, are also often sanitized relative to the real issues in evolutionary science.  There have been since Darwin, and remain, many open issues about the nature of evolution, the role of various factors such as natural selection, chance, genes, various functional roles played by DNA and the environment and so on.  These are hotly debated as they should be--and we reflect those issues regularly here on MT.

If the NCSE meant, as we hope they did, that 'the fact of evolution is not seriously questioned by science', that would be more accurate.  But if we had no 'weaknesses', we'd be out of our jobs.  Often one hears that propaganda must be countered by opposing propaganda, but we don't hold that view.  If knowledge and education--real education--are our object, then we can't just fill our teachers' heads with propaganda of our own.

We commented on something similar a year or so ago in Texas, where we used to live.  At the time, we noted that, despite frequent appearances, there are reasonable people in Texas, and we know for a fact that this is true of Oklahoma as well.  But apparently there aren't enough of them to keep the living dinosaurs from turning the statehouse into a madhouse.

Friday, February 22, 2013

What we see is too often determined by what we expect to see

How we see is determined by a complex system of interacting eye parts, photoreceptors and brain functions, but what we see can be determined by what we expect to see.  Opsins are receptors bound to the membrane of photoreceptor cells of the retina of the eye that help organisms, including us, to see. They are involved in the conversion of light, photons, into an electrochemical signal that eventually gets translated into images.

Except when they aren't.  In a piece in last week's Science, "Opsins: Not just for eyes," Elizabeth Pennisi writes about opsins in sea urchins.  Sea urchins don't have eyes, so why do they have photoreceptors?

Adhesive tube feet: Janek Pfeifer, Wikimedia
Well, they may not have eyes, but they do have tube feet, and as it turns out, the photoreceptors, loaded with opsins, have been found on these tube feet, which sit among the spines and are used for feeding, moving, and breathing. The photoreceptor cells in the tube feet aren't pigmented in the way that most opsin-containing photoreceptor cells are, which is why they had been overlooked.

But, opsins have been found in unexpected places before: light-reacting pigment cells in amphibian skin, dove brains, fish skin. So, photoreception is happening in all kinds of places.  And, sea urchins do tend to avoid light, which means they have to have a way of perceiving it, so it does make sense that they have opsins.

Developmental biologist Maria Ina Arnone has been studying sea urchins for a long while.  As Pennisi writes,
Arnone proposed that the opsin photoreceptor cells in the sea urchin are positioned at the base of the tube feet such that they lie partially in the shadow of its calcite skeleton, allowing the skeleton to serve the same purpose as pigment in typical eyes—most opsins co-occur with pigment, which shields part of a photoreceptor cell so it can register the direction of incoming light. She has also shown that the photoreceptor cells connect to the five radial nerves in the brainless urchin, which may enable the input from the different photoreceptor cells to be compiled, much like an insect's compound eye does.
Arnone has found a second type of opsin, a "ciliary opsin," in different parts of the sea urchin, too, including the tube feet, the skin, perhaps muscle, and in the larvae.  She's not sure what this opsin does, and she can't say how one opsin became specialized for vision-related photoreception.

Arnone's work has led others to look for opsins in unexpected places, and they've now been found in the stinging cells of cnidaria, the phylum that includes hydra and jellyfish.  It turns out that the stinging cells are inhibited by light.  So, again, opsins are sensing light but for a purpose that has nothing to do with vision. 

And it's not just that opsins are doing their photoreception outside of eyes. It seems they also are sensing more than light -- e.g., in fruit flies they are involved in the mechanosensing required for hearing.  This is independent of light sensing.  The same group describing this role for opsins reports that they may also be involved in sensing temperature in these flies, and it's likely that opsins are involved in yet other kinds of sensory processes.

Pleiotropy, pleiotropy, pleiotropy everywhere!
And, opsins aren't the only molecules for which new roles were in the news last week. A paper in Nature Chemical Biology reports that tRNA synthetases, enzymes that have long been assumed to be specifically involved in the translation of genetic information into protein, have now been found to have nontranslational functions. "Although these new functions were thought to be 'moonlighting activities', many are as critical for cellular homeostasis as their activity in translation." And, apparently synthetases were recruited fairly early in evolution for their 'new' functions.
These new activities include but are not limited to (i) mediation of glucose and amino acid metabolism, (ii) regulation of the development of specific organs and tissues, (iii) control of the ying-yang balance of angiogenesis for the vasculature, (iv) triggering or silencing of inflammatory responses, (v) control of cell death and stress responses that may lead to tumorigenesis and (vi) amplification or inhibition of the immune response.
A new paper in PNAS reports on olfactory receptor genes, of which vertebrates have hundreds, and which were thought to be restricted to the nose, functioning there to detect odorant molecules.  But some are now found to be expressed in other cells, including in the kidney, where they relate to blood pressure.  It was once suggested that they, so many and each so different, provided a kind of cell-type-specific 'address code' in the body.

There are many examples of newly understood functions for molecules that were once thought to do only one thing.  New functions for RNA itself are being discovered all the time, its regulatory role in gene expression being only a recent example.  We've seen RNA floating around in cells for as long as it has been known about, and the extent of its function could have been described long ago.  Except that no one thought of it.  Much of what used to be called "junk DNA" is now known to have function, and so on.

These are interesting cautionary tales about assumptions limiting what we see.  Opsins are for photoreception, tRNA synthetases are for translation, olfactory receptors are for smell.  Yes, scientists may stand on the shoulders of giants, but we also inherit the blinders of those who came before.

One reason why tunnel vision is a problem
If we understood what everything does, we wouldn't need science anymore.  Not knowing isn't the problem.  It's when assumptions block your view that it's a problem.  We've blogged before about the gene mapping study we've been involved in, looking at differences in head shape between inbred strains of large and small mice.  The mapping has, to date, identified 76 chromosomal regions that may contain a gene or genes that affect variation in head size and shape, a total of about 2500 possible candidate genes.

It turns out that function has been characterized for about 77% of the genes in these intervals, and information as to where they are expressed during at least one stage of embryonic development is available for about 60%.  So, not only are the data incomplete, but to add further difficulty, a not insignificant number of genes have been named for diseases they are associated with, or for a single tissue in which they were first found, so that if they are in fact involved in the traits we are looking at, we won't know it by name.  We might even be misled.

The gene called Brca1, Breast Cancer 1, for example, is associated with elevated risk of early onset breast cancer, but despite its name, it is not a gene for breast cancer.  It is a perfectly normal, functional DNA repair gene, expressed in a number of organs during different developmental stages and throughout life. The photo to the left, a figure from the expression database, GenePaint, shows BRCA1 expression at embryonic day 14.5 in a mouse.  The dark blue is where the gene is expressed: you can see that it's in the brain, the olfactory system, the tongue, the thymus, the liver, the lungs, and so forth doing, well, who knows what, really, at this stage of development.  It’s only when its function has been disrupted by mutation and the protein can no longer repair errors in DNA that it can be associated with cancer.

And, there are multiple genes identified as genes for a specific function – angiogenesis, or development of blood vessels, for example – but that are expressed in many different tissues.  So, gene names are often not reliable indicators of gene function, and all of a gene's functions generally are not known.  And in fact can't be known, since genes respond to environmental conditions, and gene function can't be assessed for every possible situation. 

We, and anyone doing GWAS or any other gene mapping studies, are faced with the task of poring over lists of genes for candidates for involvement in our trait of interest, be it facial shape and size or height or obesity or diabetes or schizophrenia, or anything else.  But, given the inconsistencies, vagaries, and incompleteness of the data, how are we supposed to choose?  We can’t rely on gene names or characterized function, and in our case we’ve found that about 95% of the genes in the map regions are expressed in the developing head, so we can’t pare down our list based on expression data either. And of course we can't do experiments with each of the 2500 genes in the map intervals.
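To see how little the filtering buys us, the arithmetic is worth doing explicitly.  Here is a toy version in a few lines of Python (the percentages are from our data as described above; the calculation itself is purely illustrative):

```python
# Toy sketch of the candidate-filtering problem described above.
candidates = 2500                            # genes under the 76 mapped regions

known_function = round(candidates * 0.77)    # function characterized
expression_known = round(candidates * 0.60)  # embryonic expression data exist
head_expressed = round(candidates * 0.95)    # pass the 'expressed in head' filter

print(known_function, expression_known, head_expressed)
# 1925 1500 2375 -- the expression filter leaves ~2375 of 2500 candidates,
# so it barely narrows the search, and gene names can't be trusted for the rest.
```

Nineteen of every twenty candidates survive the only filter available to us; that is the quandary in numbers.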

We write all the time about the difficulties of predicting traits -- diseases, behaviors -- from genes.  There are many reasons why this is true.  Restricted vision is just one of them--whether in what opsins do or in what we 'see' them as doing!  But, few of us have the kind of vision that allows us to make sense of things in our data that are not what we expected to see.  Science progresses incrementally, in fits and starts, largely because that's true.  Thomas Kuhn wrote in his book "The Structure of Scientific Revolutions" about "paradigm shifts" which are leaps forward in our understanding.  They happen when someone finally understands the odd data that just didn't fit before.  Continental drift is probably the best example.  But, it doesn't always take, or make, a paradigm shift to really see what we're looking at rather than assume it's what we want it to be. 

Thursday, February 21, 2013

Sociobalderdash or Sociobiology? Part III

I want to add a third part to this series, triggered by the kerfuffle over Napoleon Chagnon's new book defending himself against his critics and recapitulating his views and studies of the Yanomami.  The issue of his treatment by the media and his critics is one for the sociology of science, and our objective was, and is, to put some of the scientific issues into perspective.  I tried in Parts I and II to do that.  This third post, however, was triggered by a couple of thoughtful comments made on our previous installments, and by a wish to further explain the theoretical context, based on my experience coming up through graduate school and a post-doc at the time, in the same group involved in the Yanomami studies.

At the time, in the late '60s and early '70s, there was a flurry of activity in the study of the nature, genetic basis, and/or evolution of social behavior.  Cultural anthropologists often were 'evolutionists' in the sense of trying to work out general principles, or even 'laws', of cultural evolution.  The idea that cultures evolve in a systematic way, perhaps analogous to Darwinian biological evolution, goes back at least to Herbert Spencer (but even Ibn Khaldun in the Islamic golden era, and the Greeks in the classical era, had relevant ideas).  I studied with Leslie White, who argued that culture evolved as a phenomenon of energy capture, a kind of physics envy, but others had various more purely sociocultural views (one, for example, that goes back to Marx and others who rejected psychological or Great Man theories; even Tolstoy's War and Peace, for example, was an attempt to debunk such theories).

We can't go over everything involved here, but by the '60s the eugenic abuses of the Nazi era were fading from memory and it again became possible to challenge the tabula rasa or Freudian theories that propose that we are only what we experience culturally: nothing is built into our makeup.  Indeed, this was perhaps partly a reaction to some superficial psycho-theories in cultural anthropology and popular 'intellectual' commentary from WWII and thereafter.

Cultural anthropologists rejecting psychological explanations, and seeing evolutionary generalizations in the range or even sequence of human indigenous cultures (Band, Tribe, Chiefdom, State, etc.), argued that humans were uniform slates molded by their culture (a kind of cultural tabula rasa argument).  Relative to cultural change, humans were a biological constant:  after all, no matter what your genes were, if you were (say) born in China you spoke Chinese, ate rice, and were Taoist, but if born in Florida you spoke English (or, depending on the time, Cherokee), were Catholic and ate burgers.

Documenting inherent behavioral nature
In the 60s Nikolaas Tinbergen, Konrad Lorenz, and others comparably prominent whose names escape me at present, were publishing very interesting studies in 'ethology', which purported to describe the inherent nature of animal behavior for a wide-ranging set of species.  If stereotypical behavior existed from birth (or, in the case of birds, hatching), then mustn't it be genetic?  And if it were genetic--even if we had no ability to identify specific genes at the time--and if it were useful, didn't that mean it had evolved in the classical Darwinian sense of having been forced into the genomes by natural selection--the harsh principles of survival of the fittest in competition among individuals?  What else could there be?

Ethological studies included famous and wonderful ones of lions, elephants, wolves, and so on.   Among the traits being reported were male competition for mates, group defense systems, male dominance hierarchies, and more.  The juicy idea of alpha males having all the fun was reported in one or another way to apply to various species.  This naturally led to a spate of monographs, symposium volumes, and highly popular books, some, as now, by professors whose relevant 'expertise' wasn't questioned and who were expert at playing to the popular media.  Indeed, leading anthropologists turned to studies of wild primate populations (as contrasted to the individual behavior of caged primates that some psychologists were studying) to relate their ecology to their social behavior, and to interpret that in evolutionarily deterministic frameworks.  Open country led to one-male harems, forest canopy dwelling to isolated pair-bonding, etc.

Leading anthropologists like Sherry Washburn and Irv DeVore characterized this approach as part of a 'New Physical Anthropology' that went beyond bones and stones: we would learn about human nature more directly, by studying our closest relatives out there in their natural setting.  And we'd do it in a Darwinian framework in which we sought deterministic environmental conditions that correlated with social behavior, accepted the analogies of other species, interpreted that in selectionist terms, assumed it was embedded in the genome, and extrapolated it to human societies.  The idea of alpha males not just as bullies but as reproductively dominant was too much to resist.  This was a theme of the day in biological anthropology.

The degree of sensory overload and publication proliferation was far less in those days than it is now, and as graduate students and post-docs at Michigan where I was, at least, we all devoured everything that was published, book or article, and discussed it all the time.  It naturally affected our work, and some of the major players--including Chagnon and Neel--were right there, and were friends of ours, or were even directly involved in the South American work.

At that time, in the early '60s, VC Wynne-Edwards wrote a pretentiously pompous, but famous book (Animal Dispersion in Relation to Social Behavior) on the evolution of social behavior.  In a book organized, surely self-consciously, to resemble Darwin's Origin, he proposed a group selection view, quite contrary to Darwin's view of evolution (but less so to Wallace's), which held that selection was all about competition strictly among individuals.  The gist of Wynne-Edwards' argument for our purposes here was that individual animals have ways of limiting their reproduction so that, as a group, they avoid serious overpopulation relative to food supplies, and keep their numbers within what the environment can support.  This could involve male-male competition, including sexual selection by display characteristics as in peacocks, and the exclusion of defeated males from having much of a sex life.  Animal species achieved these self-limiting characteristics in many different ways, involving group displays and so on.

This was a kind of 'voluntary' self-limitation for the good of the group--just what Darwinism, with its Hobbesian competition of all against all for reproductive fitness, argued could not evolve--and for that reason it was very controversial.  Isn't it, after all, the individual who reproduces or doesn't?

At the time many prominent anthropologists, and some of us working in anthropological demography, were interested in how humans limited their populations.  Culturally 'primitive' societies had many ways of doing this, which were being widely reported--a topic du jour.  Male territoriality and hierarchy comprised one.  Infanticide was another.  So were dowries, delayed marriage, abortion and other cultural traits that had been very widely observed.  These practices were culturally imposed, but involved individuals voluntarily abstaining from reproduction or even killing their own newborn infants--but how could such things evolve??

Natural selection and demographic processes
Demographic anthropologists, of which I was one, were interested in the relationship between age-specific birth and death rates in this very context; it was a subject of my PhD dissertation and first book.  How was a given 'type' of culture reflected in the very processes that determined evolutionary fitness?  We didn't know specific genes, but the idea was that the age-specific effects of these practices affected at least the opportunity for natural selection at the genome level.  Demographic genetics was an important component of population genetics theory, worked on by some of its leaders, and prominent in the textbooks, at the time.
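The sort of measure involved is worth sketching here (what I have in mind is Crow's Index of the Opportunity for Selection, the workhorse of demographic genetics at the time):

\[
I \;=\; \frac{V_w}{\bar{w}^{2}} \;=\; I_m + \frac{I_f}{p_s},
\qquad
I_m = \frac{p_d}{p_s},
\qquad
I_f = \frac{V_f}{\bar{x}^{2}},
\]

where \(p_d\) and \(p_s\) are the proportions of a cohort dying before, and surviving to, reproductive age, and \(V_f\) and \(\bar{x}\) are the variance and mean of offspring number among the survivors.  Crucially, \(I\) measures only the opportunity for selection that the observed birth and death schedules would allow, not any actual selection on genes--the very distinction I return to just below.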

In this context interest was high in finding out how mating and reproduction were constrained in natural populations, both of humans and of our close primate relatives.  In this sense, the work by Chagnon and Neel was in tune with the times, a product of the times.  In the enthusiasm for this Darwinian revival, I think the word 'opportunity' slipped subtly out of people's minds and was equated to the 'fact' of selection.

But there were even then, and especially later on, various forms of explicit and implicit opposition.  The idea of 'Man the Hunter' (the title of a famous symposium volume) was closely examined, because the widely unquestioned macho image of caveman brutes with hefty clubs could be challenged by those who had studied extant hunter-gatherer societies directly and found that such societies got most of their food from female gathering, and only a small fraction from meat.  Hunter-gatherers were not in a relentless struggle for food but were the 'original affluent society', lazing about most of the day (as Marshall Sahlins' book called them).

The burgeoning feminist movement included numerous anthropologist primate-watchers trained in the New Physical Anthropology.  Many were women availing themselves of newly opened opportunities in academe, and they refused to see society only through the lens of male competition and control.  The idea of man the manly hunter was challenged as a common mythology.  The point wasn't just that female primates were important; it was that they used various selfishly competitive wiles of their own, which did not involve violence but enhanced their own contributions to evolution.

The idea of self-sacrifice, or altruism, which was in a sense partly behind the Wynne-Edwards view and other traits that were being documented, raised a challenge to Darwinism.  Wynne-Edwards' view was vigorously challenged from a classical Darwinist viewpoint, because there were few credible situations in which genes 'for' self-limitation or self-sacrifice for the good of the group could advance in frequency--organisms without the genes could just lay low and wait till the sacrificer had been sacrificed, and then move in on the newly available females!   William Hamilton proposed his inclusive fitness kin selection theory to account for the evolution of altruism:  genetic variation that leads you to sacrifice your own reproduction could advance in frequency if it led to a greater reproductive output of your relatives. Other rationales were also offered in a kindred spirit, for example, for why people would save drowning strangers who weren't their relatives, or would be willing to risk their lives by going to war.   Hamilton's rule ruled in many circles.
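For those who don't know it, the rule itself is compactly stated: a gene predisposing its bearer to an altruistic act can spread when

\[
r\,b > c,
\]

where \(c\) is the reproductive cost to the altruist, \(b\) the reproductive benefit to the recipient, and \(r\) the genetic relatedness between them--which is why sacrifice for full siblings (\(r = 1/2\)) is far easier to evolve than sacrifice for strangers (\(r \approx 0\)).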

Again, strongly committed Darwinists swooned over these explanations, an acceptance of nature as really red in tooth and claw.   EO Wilson coined the term Sociobiology in 1975 in his book of that name, culpably (in my view) adding a final, very superficial chapter on humans, after discussing more legitimate examples, like the ants he knew very well, in the rest of the book.  Like Wynne-Edwards, Wilson reached high if not pompously with his subtitle (The New Synthesis), playing on 'the modern evolutionary synthesis', a self-congratulatory characterization of evolutionary biology in the '40s.

All this strong Darwinism was in the eye of the anthropologists: in truth, it was argued, they saw what they wanted to see, or what their own experiences or western imperialistic culture had prepared them to see.  There was, the critics argued, nothing objective about this, and 'laws' of culture were simply reflections of these biases.  Indeed, Darwinism comfortably justified imperialism, male dominance over women, inequality, racism and other similar evils.  Thus went the deconstructionist, post-modern reaction to the 'modernism' of Darwinian 'science' and its attempt to explain, or even justify, the awful state of the world--a view that didn't reflect an objective truth.  It was, whether consciously or not, a kind of controlling 'plot', mainly perpetrated by privileged elites (mainly men).

In many ways this was a continuation of age-old disputes.  Naturally, feelings ran high.  They still do.  I'm oversimplifying all over the place here, for brevity (and, surely, because of my own limited knowledge).  But many of the dogmas, including even Hamilton's rule as well as ideas of group selection, have since, on close examination, been found to be forced or wanting at best, or applicable only under highly restrictive conditions.  But post-modernist notions that there is no truth out there in the world are also manifestly simplistic.  Ethological findings and behavioral determinism have been shown in many different ways to be, at best, generic descriptions of what happens in some situations, not what is genetically prescribed.

It is no wonder that, while the ideas first proposed by Chagnon and Neel and many others in their time had wide societal purchase, there was a backlash.  The many other factors, including the anti-imperialistic defense of indigenous populations, that were simmering widely in the social sciences during and after the Viet Nam war era, naturally went nuclear in and because of Patrick Tierney's often wildly irresponsible attacks on the Yanomami studies.  Legitimate questions of anthropological ethics that may be raised by those studies are buried under the storm-surge of contention, polemic demagoguery, and the like.

Natural selection and the "Truth"
My own predilection is that natural selection, especially for complex behavioral traits, is far weaker and more diffuse than the sociobiological view -- or, perhaps more accurately, assumptions -- would have it.  I think the genetic arguments about the role of the candidate behaviors (like Headman dominance) are weak, at best.  I also think that our behavioral repertoire and intelligence, which we know were on the rise long before humans came on the scene, are manifestly more about abilities to sense and respond to complex social situations than they are prescriptive.  I think rigid behavioral programming would be strongly selected against, because circumstances can vary greatly in a species like ours.

To me, the wide array of cultures shows this very clearly, and even ethological studies have shown that some of the classical case studies were not correctly interpreted (e.g., things assumed to be inherent were shown to have environmental causation, even including pre-hatching effects on chick behaviors shown in work by Gil Gottlieb and others).  That is why, in my previous posts in this series, I have argued that however accurate Nap's characterization of the Yanomami as he saw them when he was there may be, those traits are neither necessary nor universal even among South American indigenes, and further, are irrelevant to what is generally 'human', or to how prescriptive that may be, or to how or even what has 'evolved' in the Darwinian sense.

But of course my view is just one, and just as I am sincere in holding it, we shouldn't question the sincerity of those with other views who in their own way understand genetics and evolution.  (This often does not include the chatterati on this topic in the public and even professional media, who don't really understand genetics or evolution, or perhaps even ethnology, yet are not inhibited from opining on the subject.)  There are intelligent thinkers with different points of view.  They don't advocate their views any more strongly or irresponsibly than do their (knowledgeably qualified) opposition.

I happen to think the Darwinian case is greatly overstated and I think there is plenty of evidence to that effect (and commenters on  our previous posts have pointed to some of that evidence).  But the food-fights that treat people like Nap either as totally unaware of reality, or as having a throat-hold on the truth, are not constructive if we really do want to know what that truth might be.

Wednesday, February 20, 2013

Amazing - a measured story on the benefits of DNA sequencing

Here's a welcome, and all too rare, example of a measured story about DNA sequencing and what it can offer.  Gina Kolata writes in the New York Times about sequencing increasingly being used in attempts to explain rare, particularly pediatric, disorders.  Rather than overselling the promise of treatment and cure, she writes that sequencing isn't a panacea, proving sometimes successful and sometimes not.

The piece is about sequencing being done for clinical purposes, primarily at Baylor College of Medicine in Houston.  The sequencing center there is increasingly busy these days with exome and whole genome sequencing, a service that is only just beginning to be affordable -- and often covered by insurance.
Demand has soared — at Baylor, for example, scientists analyzed 5 to 10 DNA sequences a month when the program started in November 2011. Now they are doing more than 130 analyses a month. At the National Institutes of Health, which handles about 300 cases a year as part of its research program, demand is so great that the program is expected to ultimately take on 800 to 900 a year. 
But, here's the dilemma.  Most people who are looking to whole genome sequencing for diagnosis are doing so because they've got a condition or disease that is so rare, or so poorly understood, that it has perplexed their physicians.  Often many physicians.  They may already have been genotyped for known conditions that their symptoms suggest, but with no success. These traits are often called 'orphan' diseases for just these reasons.

The rarity of the condition means there's unlikely to be a cure, or even treatment, for the disorder, even if sequencing does find the cause.  That is because developing enough knowledge, and then some drug or other approach, takes lots of time and many cases to study in a well-controlled way.  Indeed, Kolata writes that the success rate for finding a genetic "aberration" is only about 25-30%, and finding a causal mutation results in improved management for only 3% of patients, and in treatment and a "major benefit" for only 1%.  So, what's the point?
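To put those percentages in concrete terms, the expected yield per 1000 patients sequenced works out to roughly:

\[
1000 \times 0.25\text{--}0.30 \approx 250\text{--}300 \ \text{diagnosed},
\qquad
1000 \times 0.03 = 30 \ \text{with improved management},
\qquad
1000 \times 0.01 = 10 \ \text{with major benefit}.
\]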

As an example, some diseases have clearly familial occurrences -- that is, among several close relatives, and in some of those a gene has been guessed at and a likely causative variant found in the gene.  These are rather clear-cut cases, to the point that the trait can come to be defined in terms of the gene or process in which the gene participates -- even if most patients have neither variants in the named gene, nor affected relatives.  That's a stumbling block to understanding, if the studies of the disease are restricted to these clearly-caused cases, as often occurs.  So, for many people with such diseases, genotyping won't show anything that seems relevant.  Very discouraging.

Still, there's more to human life than what's in the flesh.  For many people, it's important just to know why they or their child has this rare disorder.  And there are practical reasons as well.  As one parent told Kolata after her child's mutation was identified, “It really became definitive for my husband and me. We would need to do lifelong planning for dependent care for the rest of his life.”  And, with a definitive diagnosis, insurers are more likely to cover medical costs without question and the  patient is treated with more understanding.

Even without a diagnosis of that clear kind, there may be satisfaction of a sort in knowing that one has at least looked, and in having become part of a database that may, someday, be large enough and well-studied enough to lead to other discoveries (such as genetic variants found in other genes that come under suspicion).

Unlike the fly-by-dusk outfits that sell genetic risk assessments to pay for their yachts, the studies we are referring to occur in professional, clinical settings and are done by geneticists and genetic counselors, not businessmen.  These are the well-established, fully licensed, professional, legitimate clinical contexts in which this type of work should be done.  And these investigators are not just collecting data, but are typically committed, as their main job, to figuring out what to do about the diseases.

Why is it so hard to find causal mutations?
There are several very simple reasons why even common diseases are hard to characterize at the gene level.  First, many different genetic disruptions or modifications can give similar effects.  By analogy, there are many ways to fiddle with a car so that it goes slower than it did when it was new.  Just because it doesn't perform up to specs, doesn't mean we know what the cause is (and cars are actually a whole lot simpler in this kind of respect than people are). 

Second, most of the effects are individually very small, so that a disease is the result of multiple contributions of sub-par genes or environmental experiences.  And these first two reasons -- many different contributing factors, and each of them individually minor -- imply that each affected person may be affected for a different reason.

Third, many factors, including major ones, are rare enough in the population that we simply can't get enough instances of their mutated state in the kinds of data usually being collected (for example, case-control studies) to generate statistically detectable evidence.  Even a major effect can be buried among a mountain of minor ones if the major one isn't common enough.
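A toy calculation shows how stringent this is.  Suppose (hypothetically) a variant carried by 0.1% of controls raises risk five-fold--a major effect by any standard--in a case-control study of 5000 cases and 5000 controls:

```python
# Toy power sketch: a rare, major-effect variant in a typical case-control study.
from scipy.stats import chi2_contingency

n_cases = n_controls = 5000
carrier_freq = 0.001        # 0.1% of controls carry the variant
relative_risk = 5.0         # a genuinely major effect

carriers_ctrl = round(n_controls * carrier_freq)               # ~5 carriers
carriers_case = round(n_cases * carrier_freq * relative_risk)  # ~25 carriers

table = [[carriers_case, n_cases - carriers_case],
         [carriers_ctrl, n_controls - carriers_ctrl]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"p = {p:.1e}")  # ~5e-4: suggestive, but far above significance thresholds
```

Five expected carriers among the controls and about 25 among the cases yield a nominal p-value around 5 x 10^-4--suggestive in isolation, but orders of magnitude short of the roughly 5 x 10^-8 that genome-wide multiple testing demands.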

Different kinds of data can certainly reveal different kinds of causes, and studies of various types are being designed.  Often, this is rather superficially rationalized to obtain funding for very large, expensive studies.   Our science establishment, like our culture generally, believes that SuperSize must be better.

Another reason for frustration is that very rare traits simply defy many types of statistical approaches.  Often, diagnosis is not consistent, cases go unreported or mis-diagnosed, and different studies can't be compared.  Family studies, which are statistically very powerful in some situations, are often hampered by such factors, or by the fact that relatives who may be deceased were never diagnosed (maybe that was simply not possible during their lives).

So there are issues of all sorts surrounding the challenge. BUT, when enough is known, and the approach is a responsible, professional one, genetic counseling can be extremely effective.

Tuesday, February 19, 2013

Sociobalderdash, and the Yanomami? Part II

Yesterday I commented on the reviews of a book by Napoleon Chagnon, who is defending both his work among the Yanomami in Amazonia, and his treatment at the hands of a faction of anthropologists.  Though I have never been there myself, I was involved early on in analyzing some of the Yanomami demographic and genetic data (since this has to do with reproductive success by individuals), and I know Dr Chagnon and was close friends with many of the other key parties who were involved.

Nap was, and perhaps still is, the most prominent and well-known anthropologist of our time, and has been for decades.  As I said yesterday, I have not read his book, but was commenting on the issues and controversy, and reacting to what is being said in the high-visibility reviews (such as the New York Times on Sunday -- and again yesterday) and news stories his book has occasioned (again, the New York Times on Sunday, Inside Higher Education on Monday, The Chronicle of Higher Ed last week).  And, who in Anthropology doesn't know the saga of the Tierney book (a long discussion of which, and defense of Chagnon, is here, e.g.), or the documentary about this ongoing food fight, and on and on?

The reason we are commenting on this is because it involves, and reflects, fundamental and heated disagreements within anthropology that are inherently part of the politics of the whole scene.  I know of few if any who can remain even remotely neutral in regard either to the way Nap has been treated by the press or by his opponents within anthropology, or about the substantive issues involved. 

The actual scientific issue has to do, essentially, with genetic determinism and the degree to which our behavioral, as well as our physical, traits are the product of natural selection.  Everyone, with or (mainly) without any real knowledge of the data or even the issues, is chiming in, including pop-sci authors from various disciplines who are credited with relevant 'expertise' by journalists who need someone to interview.

Some of the reaction to Chagnon's work has to do with the harmful experiences the Yanomami have had at the hands of anthropologists and other scientists, missionaries, explorers, and exploiters from the outside world.  This is about social politics related to views of imperialism, exploitation, and the like, not about the anthropology of the group itself.

But much of the fire deals with the degree to which the Yanomami represent a 'primitive people' (as a famous Science paper by Jim Neel described them in 1970), implying not just that they didn't have cars and televisions, but that they constituted an archetype of our ancestral state, during which our nature evolved via Darwinian natural selection.  Even as post-docs in his lab, we felt that Neel's title was culpable hyperbole for getting a major publication, which someone of Neel's stature could do (and we told him so).

The idea was that if they are an archetype of the ur-human, Alley Oop in the flesh, then studying them can be used to extrapolate our societal and behavioral nature and their origins into our evolutionary past.  Specifically, and most controversially, do we live a Hobbesian life in which male violence and dominance hierarchies determine who reproduces, and violence is largely about capturing women?  Is our behavior based on a history of a species engaged in relentless, winner-take-all striving to spread our genes? Does it explain (or justify) warfare, or the sexual seizure of women?

By designating them as 'the fierce people' in the title of his most famous book, which was the monograph on 'primitive' people for many years, Nap essentially made such an argument, as did others in the research group, prominently including Neel, who was the leader of the medical team involved in the classical Yanomami studies.  Neel had his own biomedical genetic reasons for accepting the head-man theory of life: reproduction highly concentrated in one or a few males provided a way to understand the amount of harmful recessive genetic variation that our species had evolved to carry at any given time, as a baseline for comparison with large, modern societies exposed to mutation-causing chemicals and radiation, which could heavily increase our burden of harmful variation.

The core issues
We need not here question the ethics, tactics, or descriptions of the Yanomami, nor the data that were collected.  These issues have been debated and disputed, but that is beyond our point.  Instead, we can question how 'primitive' or unacculturated or uncontacted they were, because those facts affect the degree to which they were archetypal as representatives of our evolutionary past.

First, missionaries had been known in this area for a long time.  Villages were sometimes located near mission stations, and tribesmen from other villages knew about and interacted with those 'mission' villages.  The Venezuelan and/or Brazilian governments had, we believe, been sending vaccination teams upriver through much of the area to vaccinate natives against smallpox.  By the early 1800s European settlers had established homesteads here and there along some if not much of the Amazon system.  Some of these settlers had taken Yanomami wives and were raising families on small holdings along rivers and tributaries.

von Humboldt, 1806
Alexander von Humboldt had been through some of this general territory around 1800, coming down from the Caribbean through Venezuela, reporting mission contacts and even Indians with culture in many ways similar to the Yanomami (and, perhaps therefore, cultural antecedents of the tribe itself).  The Saliva Indians of the Orinoco region (which had Catholic missions) were "mild, shy, and sociable," and "not long ago [i.e. relative to 1800!] a traveler [to the missions!] was surprised to see how Indians played the violin, violoncello, guitar, and flute."  Other explorers of the same era had filtered via the Atlantic up through the Amazon waterways, and there were European settlements and missionaries here and there deep into the area.

As a result, and at the very least, even villages first contacted in the 20th century, with no known or remembered direct exposure, cannot be said to have been totally unaffected by the outside 'civilized' world.  Anthropologists assumed that the degree to which this contact changed their culture was minimal, relative to the peoples of the outside world.  But this was an assumption, and at best an incomplete one.

Beyond this, the Yanomami practiced slash-and-burn agriculture.  Whatever the evolutionary source of their agricultural knowledge, they were not 'hunter-gatherers', and they were tropical forest rather than open-country people; that is, they were not living in the generally assumed ecology of our species' ancestry.  They were sparsely settled at the time of most western contact, but there is archeological evidence, and there are even some historical accounts by early explorers, of what seem to have been rather large, or even permanent, urban settlements.

It's important to realize that the Yanomami have not been presented as showing just some local, currently evolving traits, but as a showcase of human evolution generally.  But their way of life is not how our ancestors evolved!  For by far the majority of our evolution as a species, we evolved, as far as we know, as various sorts of pre-agricultural hunter-gatherers in tiny, very sparsely dispersed populations, in diverse ecosystems, mainly in the Old World and only partly in tropical rainforest conditions.  Unless agriculture has no effect whatever on social structure and behavior, a rather dubious assumption, the Yanomami cannot be taken as unproblematic archetypes of the way natural selection molded our behavior.

Indeed, other anthropologists and observers have not even seen the Yanomami as particularly 'fierce' or warlike.  One who had been part of the studies told me, way back at the time of the most intense studies in the early '70s, that the Yanomami were the most 'pacific' (his word) population in the region.  And other scientists working in the same area at the same time as Nap characterized them in basically benign terms.  Each anthropologist has his or her own biases and predispositions to see his or her own sociopolitical views in the 'other', a fact that led to deep and angry splits within anthropology over whether we could ever be even close to 'objective' about such things (the highly inflammatory and destructive 'post-modernism' disputes).

But the really important points don't require that we deny the objective facts, or the Yanomami's violence and competitiveness, or their headman-based social structures, all of which may well be quite as Chagnon described them over many years of observation--even if that is what he preferentially saw, what appealed to him, or even if his visits triggered some of that mayhem, as has been alleged.  To deny that they had violent sports, or did raids and captured women, or had domineering male leaders is simply silly if not scurrilous, and this must be said of many of the critics of Nap's work, who have often been self-serving demagogues, to put it mildly.

Extrapolation is not needed to see the issues
We need not guess about whether Yanomami social behavior represents human existence in our primeval past, because anthropologists have identified living hunter-gatherers and swidden agriculturalists with a wide range of social structures.  The degree to which such groups had male dominance associated with substantially greater reproductive success, or murderous, rapine violence, is debated (or hotly debated).  But there is a lot of variation, and violence and inequity are not the predominant characteristics reported for hunter-gatherer societies, even if they indubitably had their occasional heated disputes, as do we all.  Indeed, to my recollection, studies of the surrounding South American indigenous groups do not characterize them as inherently or particularly violent.  And the Yanomami data themselves do not support that kind of genetic interpretation of a head-man fitness advantage.

But even were we to accept Nap's data and inferences, there is another plausible reason for the violent and territory-based tendencies he reports, a reason not at all based on inherent temperament.  The late 19th and early 20th centuries were a period during which outside explorers and exploiters pushed into much of the Amazon basin.  Among the major aspects of that was the rubber trade.  Rubber traders exploited the Indians badly in the usual ways, coming up the Amazon and then into its branching tributaries, and the fleeing Indians--those who hadn't been enslaved or killed for target practice (yes, literally)--may well have put frenzied territorial pressures on the Yanomami a century ago, with consequences for their way of life when Chagnon and others first visited them.  But in that case it would be a recent, rather than an inherent, cultural trait.  There was indirect evidence for this in some of the demographic data in Nap's reports.

Such incursive pressures could plausibly have increased their wariness and warring behavior as natural self-defense.  Again, that would be a fact about, but not a necessary aspect of, Amazonian culture, and certainly not something that could be extrapolated back over the past thousands of generations, on other continents, during which our entire species' inherent 'tendencies', if we have any, were molded by natural selection--to the extent that such molding even occurred.  Such extrapolations are a reflection of the assumptions or predilections of the extrapolator.

Thus, another important debate, and what really underlies some of the vitriol surrounding Chagnon and his fights with the less-than-noble savages in anthropology, is the degree to which there can be objectivity in observing cultures other than our own (or, perhaps, even our own).  Indeed, the total (and totally predictable) prior commitment manifested by the many commenters on this issue, who typically have minimal direct knowledge of the Yanomami or the technical issues involved, is probably in itself a case study in post-modernism!

Ötzi the ice man

No one doubts that humans have it in their makeup to be aggressive under some conditions.  But one cannot just dismiss the other anthropologists who have simply not perceived indigenous cultures as so violent, war-obsessed, or socially (and reproductively) unequal as the Yanomami were reported to be.  Furthermore, social inequity can occur without being a systematic force of natural selection--that is, without variation in social achievement being the direct, much less the consistently causal, result of genetic variation.

So even if behavioral Darwinism were a valid way to view our genomic evolution, which is not at all obvious, the Yanomami are not, like Ötzi, the Tyrolean neolithic iceman, a frozen instance of a unitary distant past.

So, sociobiology or sociobalderdash?
The point of this post is not to attempt to adjudicate, but to point out that the issues underlying the heat of the dispute have to do with both the truth of the Darwinian interpretation of human behavioral nature, and the way Dr Chagnon did, and reported, his studies of the Yanomami.  The question of the extent of genetic determinism is a legitimate one, and we must also recognize the wishful thinking among anthropologists--on both sides of the arguments--that leads them to see what they see, and undoubtedly colors what and how they report it.

To repeat: Much more important is the degree to which observations today can be credibly extrapolated into the past, from one part of the world to all of humanity's patrimony.  All of this ado over Nap's work is irrelevant to that question:  Even were his descriptions indisputably 100% accurate, they don't contribute to the greater legitimate debate about the nature of our evolution.  Yanomami culture today, in the Amazon, says nothing about our African past 200,000 years ago.

One way to see the colorful charivari that has always surrounded Dr Chagnon has to do with the knowledge of his nature, not just the nature of his knowledge.

Monday, February 18, 2013

Sociobalderdash, and the Yanomami? Part I

Napoleon Chagnon's new book, Noble Savages, being widely reviewed and promoted, is great grist for the academic controversy mill.  Every pop-sci author, everyone with media-assigned expertise (including some prominent university professors automatically credited with relevant insight because of some book they've written) is in on the act.

Nap -- we've known each other since we were in the Human Genetics Department at Michigan, working with Jim Neel, the leader of the biomedical studies of the Yanomami -- is not the most relaxed personality you'll ever meet.  He's fiery, and he's got very strong ideas that, even when we were graduate students, made us wonder whether he could be an objective observer of other cultures.

But one need not be a post-modernist to recognize that Nap was for decades the most prominent cultural anthropologist of the post-Margaret Mead era, and that he made the Yanomami one of the two most prominent 'primitive' (i.e., culturally non-technical relative to us) peoples in public and even professional awareness.  The Yanomami were #1 by far, I think, with the Kalahari San ('bushmen') of Africa at #2--displacing the more numerous but less publicly well-known tribal populations visited by anthropologists in the colonial era.

Nap's interpretation of the Yanomami was a reflection of his time.  Animal behavior was being studied widely, and interpreted in a Darwinian context that attempted to explain behavior in survival-of-the-fittest (SOTF) terms--that is, the traits we see today were assumed to be due to past natural selection essentially for the trait per se.  The term 'sociobiology' was coined by EO Wilson some years later, but the idea was already rampant.

The question being studied involved many different components, one of which was a genetic question related to issues of the amount of harmful genetic variation that our primitive ancestors carried in their populations (related, at the time, to what chemical and nuclear fallout might be doing to our much larger and more socially complex populations).  Looking at (or, perhaps more accurately, for) cultures today that were frozen replicates of our past was an objective of the evolutionary perspective of anthropology in the '60s and for a while thereafter.

Anthropological views and strategies on behavioral evolution
Rather than laboratory experiments, a prominent idea in anthropology at the time was that primates studied in the wild could show us how population structure evolved--how open vs forest environments led to selection for this or that kind of population size, territoriality, male dominance hierarchy, and the like.  Books reporting fascinating field studies, and offering captivatingly simple Darwinian explanations, were rife.

Male dominance hierarchies suited the Hobbesian, Darwinian SOTF terms.  One tough guy (or alpha chimp, baboon, or whatever else that swimmeth in the sea or creepeth on the face of the earth), intimidated all the other guys and did all the rutting.  This very effectively spread tough-guy genes, leading us to be the way we are today.

Unfortunately, how we really are can't be seen in modern complex societies.  There are too many ways to reduce one guy's reproductive success, too many hospitals to take care of the weak, too many soup kitchens for the needy.  So, to see how we really are, we needed the frozen cultural fossils of our ancestry.  These could only be found in the most remote of places.

Neel, seeking to understand the biomedical implications of the Big Man theory, and Chagnon, seeking to understand it culturally, made a very successful team.  I'll talk about what they actually did, found, or argued tomorrow, but here I want to go over just a bit of the reaction to Nap's book.

Flying fruit: anthropological food-fighting
I haven't read the book.  But it's clear from reviews and stories in major media that, in essence, Nap is ranting about the way his work has been treated in recent years.  Anthropological opponents--who don't like Nap's aggressive personality, or who don't like the idea that people might fight over resources, or who don't like anthropologists mucking about and stirring up the natives--accused him of seriously damaging the Yanomami, in many ways with lethal if inadvertent, but avoidable and predictable, effects; in short, they accused him of nefarious practice.

Nap responds that his opponents tried to vilify him within his profession, cowed a main professional organization into buying the accusations, and have done him dirty.  What really happened to the Yanomami, and Nap's role in it (as alleged in Patrick Tierney's book Darkness in El Dorado roughly a decade ago), is disputed.  The biomedical accusations, such as the claim that the team deliberately gave measles to the Indians to see how a previously unexposed population would react, were manifestly false, as I know personally from discussions with, and from seeing the field notes of, Neel and one of his main field companions.

But if Chagnon has his enemies, he also has his supporters, in what has become an archetype of a professional food-fight gone viral.  He was elected to the National Academy of Sciences last year, at an advanced age and long after his work itself was done.  This must have had a political, symbolic nature, especially as the current NAS anthropology membership rather predominantly favors the Darwinian view of behavior, and Nap's election (which would have been fully appropriate decades ago, without any political aroma) has to be seen as a gesture in the context of his recent fights within Anthropology.  It gives a joyful finger in the eye to Nap's opponents--and many would argue a well-deserved symbolic finger at their demagogic treatment of him.  And this new book is his attempt to revive his reputation.  Based on the reviews and articles about it, nobody will mistake it for an objective factual treatise.  He is as feisty as ever.

A major explanation for the criticism to which he's been subjected, and a major element of his defense, is the fact that many anthropologists just can't buy sociobiological theorizing about human culture, nor his using the Yanomami as a valid, even archetypal 'primitive' people.  He argues that his antagonists simply can't abide the idea that Darwinian evolution has made us culturally and behaviorally, as well as physically the way we are.  That's true, whether one gives credence to the critics' viewpoint or not.

So regardless of whether his work directly or indirectly harmed the Yanomami--questions that involve legitimate ethical issues--the heat of the attacks has always been, I think, aimed at his justification of violence and inequality as inherent in our nature, for reasons he claims his studies of the Yanomami show.

Tomorrow, we'll look at some of those issues themselves.

Friday, February 15, 2013

Hearing loss and dementia - something else to worry about

A piece in Monday's New York Times by Katherine Bouton, "Straining to Hear and Fend Off Dementia," describes Bouton's experience trying to have a conversation in a crowded room.  She has severe hearing loss, though what she can hear is boosted by a hearing aid and a cochlear implant, and her struggle to hear what's being said to her often, she says, "uses up so much brain power" that she can't then think clearly enough to respond as she'd wish.  Could the phenomenon of "cognitive load," as she describes it, explain a recently reported correlation between hearing loss and dementia?

A relationship between dementia and hearing loss was proposed decades ago.  A paper published in 1989 reported a case-control study of 100 people with dementia or 'cognitive dysfunction' and 100 age-, sex- and education-matched controls, finding that the "prevalence of a hearing loss of 30 dB or greater was significantly higher in cases than in controls (odds ratio, 2.0; 95% confidence interval, 1.2 to 3.4), even when adjusted for potentially confounding variables."  Hearing loss was correlated with severity of cognitive impairment as well, with more profound loss being more frequent in those with more severe dementia.
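
For readers who haven't worked with odds ratios, here is how that sort of estimate and its 95% confidence interval are computed from a 2x2 table.  The counts below are hypothetical, chosen only to land near the reported odds ratio of 2.0; they are not the study's actual data:

```python
# How an odds ratio and its Wald 95% CI are computed from a 2x2 table.
# These counts are hypothetical, picked to fall near the paper's reported
# OR of 2.0 -- they are not the study's actual data.
import math

#                hearing loss >= 30 dB | no loss
cases_exposed, cases_unexposed = 55, 45        # 100 cases
ctrl_exposed,  ctrl_unexposed  = 38, 62        # 100 controls

or_ = (cases_exposed * ctrl_unexposed) / (cases_unexposed * ctrl_exposed)

# Wald confidence interval, built on the log-odds scale
se = math.sqrt(1/cases_exposed + 1/cases_unexposed +
               1/ctrl_exposed + 1/ctrl_unexposed)
lo = math.exp(math.log(or_) - 1.96 * se)
hi = math.exp(math.log(or_) + 1.96 * se)

print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # ~2.0 (1.13, 3.51)
```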

But, which came first, the hearing loss or the dementia?  This is a question that Frank Lin and co-authors set out to answer with a prospective study of 639 people aged 36-90 and dementia-free in 1990-1994.  Hearing was tested when people were enrolled in the study, and they were followed for 11 or so years and tested for the onset of dementia yearly or biennially for most of that time.

Fifty-eight cases of dementia from all causes were diagnosed, of which 38 were considered to be Alzheimer disease; among participants with normal hearing, only 4.4% developed dementia.  The normal-hearing group was younger at the start of the study (mean age 59.9, compared with 71-77 for those with mild, moderate or severe loss), though the researchers controlled for age in their analysis, so age alone should not explain the differences among hearing categories in the fraction subsequently diagnosed with cognitive impairment.  That said, prevalence of dementia in the US population at age 71 and older was about 14% in 2002 (5% for those 71-79 and 37.4% for those 90 and older; Plassman et al., 2007), so the number will surely increase in the normal-hearing group as the cohort ages.  Almost 17% of those with mild loss developed dementia, as did 28.3% of those with moderate loss and 33.3% (2 people) of those with severe loss.  The differences were statistically significant.

As Lin et al. report in a 2013 paper:
The magnitude of these associations is clinically significant, with individuals having hearing loss demonstrating a 30% to 40% accelerated rate of cognitive decline and a 24% increased risk for incident cognitive impairment during a 6-year period compared with individuals having normal hearing. On average, individuals with hearing loss would require 7.7 years to decline by 5 points on the 3MS (a commonly accepted level of change indicative of cognitive impairment [refs. 17-19]) vs 10.9 years in individuals with normal hearing.
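
As a quick plausibility check on those figures: 5 points lost over 7.7 versus 10.9 years implies annual decline rates whose ratio is about 1.4, consistent with (indeed slightly above) the reported 30% to 40% acceleration:

```python
# Quick arithmetic check on the quoted decline rates: 5 points on the 3MS
# in 7.7 years (hearing loss) vs 10.9 years (normal hearing).
points = 5.0
rate_loss   = points / 7.7     # ~0.65 points/year
rate_normal = points / 10.9    # ~0.46 points/year

print(f"{rate_loss:.2f} vs {rate_normal:.2f} points/year")
print(f"acceleration: {100 * (rate_loss / rate_normal - 1):.0f}%")  # ~42%
```
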
This still doesn't answer the question of whether, as Lin et al. wrote in 2011, "hearing loss is a marker for early-stage dementia or is actually a modifiable risk factor for dementia," however.  Is it the first sign of something going wrong -- arteriosclerosis or some such, which might cause hearing loss and dementia both -- or is the isolation that often comes with hearing loss and aging a cause of dementia?  If so, and if hearing loss can be prevented or ameliorated, can dementia also be prevented?  Or, as Katherine Bouton suggests in her piece, could the "cognitive load" that comes from struggling to understand speech be causal?  It's not a biomechanical connection but a psychosocial one. 

There are a lot of issues here to sort out, though, before we can even consider these questions.  Not all hearing loss is alike.  It can be sensorineural, having to do with cellular or physiological abnormalities of the structures of the inner ear and the auditory nerve, or conductive, the result of things that can go wrong mechanically -- temporarily or permanently -- with the bones in the middle ear that conduct sound waves to the inner ear.

Age of onset varies widely, from congenital to advanced old age.  People don't necessarily lose all of their hearing: they can lose only certain frequencies, so that they can't hear, say, the highest bird song or musical bass lines; they can lose the ability to follow conversation, or conversation in a crowded room; or, of course, they can lose the ability to hear anything at all.  And hearing loss can be mild to profound.

So, in trying to determine whether hearing loss 'causes' dementia, these kinds of issues should probably be considered -- unless it's the cognitive load, as Katherine Bouton suggests.  In that case, comparing the experience of people who've been deaf since birth and are members of the deaf community -- for whom cognitive load wouldn't be an issue -- with that of people who lose their hearing in adulthood and have experiences such as Ms Bouton describes, would help elucidate the connection.

And not all dementia is alike.  There are numerous conditions, symptoms, causes, ages at onset and so forth.  So it's possible that the explanation for the possible correlation between hearing loss and dementia may look different depending on the kind of loss, extent of loss, and age at onset, and of course the kind and extent of dementia.  And this means that a study based on only 184 individuals with any hearing loss, most of them old, in a sample of 639, and a total of 38 cases of dementia among those with hearing loss, is probably not large enough to tease out the relationship.

But it's an interesting question.  Though, if a large majority of people over 80 have some hearing loss, but only a third or so have cognitive impairment, there may be more pressing things to worry about.