Friday, April 30, 2010

Another jolt to keep you awake at night

American epidemiology follows our puritanical culture, and intensely focuses on how harmful anything fun must be for you. Coffee, tea, alcohol, McFastFood, sugar, salt, chocolate, and sex (especially sex) are the Devil's own work. After all, when did you last see a big, expensive epidemiological study designed to show that brussels sprouts, rutabagas, or other delicacies of that sort are harmful? No--sin, which means pleasure, is harmful. It must be harmful!

But here's a shocker: the latest bulletin from the Professor Sez One Food Fits All Department claims that coffee may be good for you (after all), and in so many ways! Now, after years of being told how bad it was for you, the course is reversed by this deeply insightful, highly technical, perceptive study.

Now, it's too easy to poke fun at epidemiology (even genetic epidemiology) and the media, hand in hand with Claims of the Day. It's a major theme of this blog, since we try to be entertaining, and silly science is always good for a few laughs.

But another, actually serious, side of our blog is to comment on how science is working and at least to argue that it ought to be reformed. The first big problem is that reform involves several sticky wickets: how to disengage our professional careers from dependence on hasty results, self-marketing, grant-based salaries and tenure, proliferating competition and all. Those are all too human factors.

The other big problem is that the topics we're all trying to chase are damned difficult. You can't manage a career easily these days and still have time to try--really try--to wrestle with the general question of understanding multifactorial complexity without taking reductionist approaches, and usually hasty ones at that.

This latest Big Finding about coffee means that you'll live longer, with fewer diseases of all sorts, and in two ways. You'll not die of the typical diseases, which will add years to your life (if not hairs to your head or inches to your waistline). And you'll live more before you're dead, because if you dose up on coffee you'll have fewer hours of sleep to waste your life away in. But you may have to accept feeling dead tired instead, as you lie wide awake all night thinking how great it is to still be alive. And, being tired, you'll be more accident prone, and.... well, there's a damnable trade-off to everything, it seems!

We have no solution (not even decaf). We wish we did. But if we did, we'd not have time to blather on in a blog like this. So maybe Silly Science is good for us, too!

Thursday, April 29, 2010

Chocolate madness, chocolate sadness

Well, folks, here's another installment for the law of unintended consequences, or the life's-not-as-simple-as-you-wish-it-were logbook. For years, the chocolate industry has been sponsoring research showing that cocoa butter was good for your circulating lipid levels, and hence good for your heart. A veritable Miracle Elixir, chocolate has even been shown to prevent cavities. Great news for those with chocolate madness, for restaurants serving double or triple chocolate cakes, and for those looking for warm, loving Valentine's Day gifts! Eat up, slim down, live longer, and love.

Unfortunately, there's a spoil-sport in every crowd. A BBC story throws a wet blanket on chocolate mania. It appears that rather than cheering you up, chocolate is a real downer. Maybe (hopefully) that's why your Valentine burst into tears and said s/he didn't really want to be your Valentine after sampling from the Lady Godivas you gave her.

It may be less than a shock to learn that the vast Cocoa-conspiracy of Hershey-funded researchers didn't happen to report this side of things. But they may not have bothered to look at what, for the researcher and his/her grant prospects, would be too saddening. Now, of course, the discoverers of this new effect say -- surprise! -- that more research is needed, so we can find out whether chocolate causes depression, or chocolate eaters are already self-medicating when the epidemiologists show up!

What is the reason for this? It's the way science works and is rewarded (not with chocolates!). It's the scientific method, which 'tunnels' through complex traits by intentionally keeping as many factors constant as possible so that the effect of only one is studied. See our earlier posts on 'tunneling' and the philosophy of science. The problem is that when we do this, we intentionally ignore the other factors that are also at play in the real world. We know this, but we run to the BBC or NYTimes with our findings anyway.

It's sad enough that the pharmaceutical industry is trying hard to get all of us on lifetime maintenance medication: tranquilizers, neurotransmitter (IQ) boosters, statins (lower cholesterol), and who knows what else. So we see lots of advertising to push this. It's great for business. But the chocolate industry has itself been trying to push its products essentially as maintenance medication (eat a lot of it, every day, your whole life long, to prevent depression and heart disease). Pretty disgusting (the idea, if not the chocolate).

Please pass the cherry pie!

Wednesday, April 28, 2010

Henrietta Lacks privacy

A piece in the Sunday New York Times by Amy Harmon, "'Informed consent' and the ethics of DNA research", discusses the issues brought up by the recent case of the Havasupai vs. the University of Arizona (which we posted about here). Can anything be done with DNA samples once the original purpose for which blood was collected, and for which researchers had consent, has been accomplished?

Legally, we have no property right to our cells or tissues once they've left our bodies. But researchers with federal funding are required to inform subjects what they are going to do with the samples, as a condition of getting them. Nowadays, that means everything they plan to do with them, for however long the samples are archived. And that includes the use of unforeseen technologies to look for as-yet-unpredictable kinds of information.

This issue is not new; in fact, a sixty-year-old story about it is told in a recent book by Rebecca Skloot called The Immortal Life of Henrietta Lacks. Ms Lacks was a young African American woman who had the misfortune to have a particularly fast-growing form of cervical cancer, diagnosed in the late 1940s. She died in 1951, at the age of 31. She had been treated at Johns Hopkins, in Baltimore, where one George Otto Gey was then attempting to immortalize cell lines, and so was collecting whatever tissue samples he could in his effort to perfect the technology for doing this.

Only when Gey's technician tried to grow cells from a sample of Ms Lacks' tumor was the lab successful. The significance of this was quickly apparent, and cell lines were soon distributed throughout the medical world to be used in research such as the development of the polio vaccine, and much more. The cells from Ms Lacks' tumor, and/or their DNA, are still in use in labs all over the world.

Neither Gey nor Hopkins ever profited from this work, but the sample was taken without Lacks' knowledge or permission. That was state of the art at the time. Indeed, the courts have since ruled that samples can be used without the patient's permission or knowledge, even for profit (in a case decided by the Supreme Court of California, Moore v. Regents of the University of California).

In her book, Skloot describes how she became fascinated by Henrietta Lacks in a biology class she took as a teenager, but was frustrated to discover that very little was known about her. Or perhaps she was fascinated because there was so little known. In any case, Skloot eventually decided to write a book about Lacks, and she spent much of her 30s looking for whatever information she could find. That turned out to be not much. So, she turned to Ms Lacks' surviving children and other family members to try to piece together the story of their mother's life.

Actually, by Skloot's own description she hounded Ms Lacks' children until they finally agreed to talk with her -- in one case, it took a year of frequent phone calls. And she fills her book with everything she learns about this woman and her family who, it turns out, are living largely in poverty in Baltimore, Maryland. Because Ms Lacks died young, and in poverty, the artifacts of her own life are few, and because she died when her children were so young, they remember little to nothing about her. So Skloot delves (intrudes?) into the lives of the children, and tells about a murder, sexual abuse, their religious practices, their scientific beliefs and many other heretofore private details. The book is being very well reviewed, and is selling like hotcakes.

But why? As with the Havasupai, Ms Lacks' family members learned by accident that Henrietta Lacks' cells were in such wide use. According to Ms Skloot, that knowledge made them feel both that their mother must have been special and that she was being exploited. But their mother's life is being conflated with the circumstances of her death: the particular tumor she had, and her being in the right place at the right time with respect to the development of a particular technological advance. Or indeed, the wrong place at the wrong time, with respect to her privacy and the privacy of generations of her family to come.

To us, there's nothing about this story that would suggest that the world needs, or has a right to know any of the details about the lives of Henrietta Lacks or any member of her family. This book just continues what Ms Lacks' children originally saw as exploitation of their mother by a world out to make a buck. While Skloot has set up a foundation for the Lacks family with some of the proceeds of the book, this book is still poverty tourism, and it rights no wrongs.

Tuesday, April 27, 2010

You look just like your grandma, Or, Why conservation poses challenges for evolutionary biology

We posted last week about plants and insects preserved for 95 million years in recently discovered samples of ancient amber from Africa. The find documents the presence of recognizable types of organisms in Africa very long ago, affecting reconstructions of paleoecology, since at least some had not formerly been known to be there.

That was interesting, and we raised another broader point that these specimens showed. It has to do with the conservation of form over evolutionary time. It's one thing to look like your granny, and quite another to look like your 95-million-times-great granny!

The evolutionary problem (and it's not a new one, nor one we invented) is to explain the very ancient existence of species that look strikingly, perhaps entirely, like those alive today. If (as we recently discussed) even a few hundred thousand years is enough for anthropologists to insist on naming a fossil as a new species--or even genus!--then how can something 95 My old look so strikingly like what we see today?

One might think that insects, because they have smaller genomes than vertebrates, have less genetic 'room' for variation, and are more constrained by selection, so that organisms like flies have no variation left--no ability to viably change form. Bacteria have maintained their form since literally the most ancient of times (fossils called stromatolites are ~3 billion years old). But even bacteria have multiple genes contributing to traits, and insects and plants certainly do, too. Selection studies in these types of organisms, and mapping studies on even such exotic traits as startle reflexes, wing venation, bristle count, and sleep habits in flies, show genetic causal complexity due to existing variation in their natural populations.

Thus, with many contributing genes, each susceptible to variation, one might expect plenty of room to vary, ever so slightly, over millions of years. Furthermore, these groups have continued to evolve diverse species, so there wasn't just one way, say, to be a fly that, once installed, was so tightly fenced in by natural selection as to be preserved for time immemorial. In genetic terms this would generally be called an adaptive peak on the fitness landscape that the species simply couldn't get off. But this is a post-hoc argument that seems just too pat without some further justification (speculation about this has certainly been offered, but is difficult to prove).

Anyway, the conservation of form and function, as well as its adaptive evolution present important problems that are not yet fully solved. You look a lot like your granny, and we have a clear understanding of why that is. But you don't look exactly like her, and for a fly or fern to look like its countlessly-great-great grannies is harder to explain.

Monday, April 26, 2010

With great power there must also come great responsibility

You’ve probably already forgotten about A. sediba.

After all, it’s been just over two weeks since the two new gorgeous partial skeletons from Malapa, South Africa were unveiled in Science magazine. And, after all, you’re probably a normal person who can forget about extinct hominins between their media splashes.

But there are some people who cannot.

They are a dedicated brother- and sisterhood of scientists who work around the clock on our fossil record. Adding to it, maintaining it, and interpreting it. It’s only because of this fearless gang of fieldworkers and phylogeneticists that you can live a relatively luxurious, paleoanthropology-free lifestyle.

Really. They never stop working for you.

In fact, just after the new species was announced, paleoanthropology’s two most important conferences (the annual meetings of the Paleoanthropology Society and the American Association of Physical Anthropologists) were held in St. Louis and in Albuquerque just so that hundreds of us could put our brains together, meld our minds, and figure out just what to make of these bones. If you say it’s just a coincidence that these conferences occurred mere days after the Science publications, well then you just don’t understand the extent of paleoanthropology’s superpowers.

Both Lee Berger and Darryl de Ruiter gave spectacular back-to-back presentations at the Paleoanthropology Society meetings on the Malapa fossils. Afterwards, the floor was opened up for questions. Because I am a real blogger and am not just using it as a convenient cover for another more mysterious occupation, I raised my hand, “How do you know the arms are long?”*

From the gist of Berger's answer and from what it says in the paper, although the drawing shows the arms to be long, the actual length of the arms relative to body size is not a settled question. However, with the additional remains from more individuals still lying there, in the ground, just waiting their turns to be excavated.... Good estimates of body and limb proportions are tantalizingly close!

Not only that, but word on the street is the preservation is so tremendous that there may be organic remains beyond bones and teeth—the actual squishy parts like hominin skin and hair and who knows what else? If this is true, start bracing yourselves for a paleoanthropology story that will eclipse the hobbit! In fact, by all accounts, Berger and his team are at the site right now—while we’re just going about our normal daily lives—working hard to recover all the additional remains.

Among paleoanthropologists, there are many questions about the Malapa remains. Some question the dating (could be younger than they say!), but the general buzz seems to center on the genus. Why Australopithecus and not Homo or not something else?

An equally valid question is who cares? After all, Mother Nature wouldn’t know a genus if it smacked her in the face. Genus is important to humans alone. The genus that we use to label the Malapa specimens stands for how we interpret them. If they were Homo, they’d have to be more cognitively complex than australopiths, making stone tools, eating a more diverse diet that included meat from scavenging and hunting, walking and running with completely bipedal bodies, etc…. But as australopiths, with their tiny brains and mosaic of bipedal and arboreal (or remnant arboreal) morphology, they’re one step behind Homo—even if they’re 1.9 mya or younger and even if they overlap in space and time with early Homo.

Get ready for a complete overhaul of how the Plio-Pleistocene transition in the hominin fossil record is interpreted. Using standards that were implemented back in the 1960s, when the fossil record made much more sense (because there were so many gaps!), is no longer appropriate, let alone feasible.

What is easily forgotten in exciting times like these is that each new fossil that paleoanthropologists uncover will look different from all others. That's because of variation due to sexual reproduction, among other things. So the notion that something is special because it looks like nothing else we've ever seen before is misleading. Malapa is certainly special, but clones of previously known hominins? Now THAT would be special. The trick is to know just how different something is: whether it's normal variation within a species (with some change through time accounted for), or too much variation for a single species. If it's the latter, then it's a new species.

In this case, the authors have decided that the Malapa remains are too different from A. africanus and from anything else on record. And so a new legend is born and a new name, A. sediba, is added to the roster of those who make the world of paleoanthropology the most exciting realm of all!

Probably an even bigger to-do than the scientific reports on Malapa was the openness of the researchers. They brought casts of all the material and laid them out for everyone to see, plus they offered to make casts for anyone interested. Given that the fossils were discovered only in late 2008, this is a laudable strategy for moving paleoanthropology in a productive direction. If people are going to keep finding all these rewriting-the-textbook fossils, then getting them out quickly for the rest of the gang to see, following the example of Berger's team, may become standard practice from here on out.

And on that note, I leave you with this final thought.

Whenever you or someone you love gets excited about a hominin fossil, whenever you fall asleep at night assured that your hominin ancestors are being respectfully reconstructed, whenever you walk safely down the street knowing that your evolutionary tree is growing properly, don’t forget to thank your friendly neighborhood paleoanthropologist.

This superhero-themed post is dedicated to my student Samantha Davis who presented a terrific poster on early Homo molar shape outline at the Paleoanthropology meetings. If only we’d had Malapa to include in the analysis, right?

*What I should have asked when I briefly had the floor back in that ballroom was, “Can we eat it?” After all (according to my cab driver from the airport who just finished Voyage of the Beagle), until recently, when scientists found a new species that was their first question. Then if they couldn’t eat it they had to figure out what else to do with it.

Friday, April 23, 2010

Ant misbehavin'

There's a new paper in PNAS that really is interesting and relatively important. It is a story that made the popular science news, and it reports paleontological findings with at least some interesting aspects.

This is the discovery of fossil ants and other insects, along with several species of plants, preserved in amber (tree resin, basically) that is estimated to be about 95 million years old. The insects are shown here. Apparently (we're no experts!) it had been thought from the available evidence that the ant lineages originated to the north (in Asia, Europe, or North America) rather than in Africa. If the dating of this site is accurate, then the old idea is incorrect. At least, by this ancient time clearly recognizable ants were already in Africa.

Now this is not a story to shatter evolutionary theory, though overturning accepted views, no matter how arcane, gets played in the news headlines as being transformative. But it does seem to qualify as new and worth discovering and so on. That is, its appearance in the news doesn't seem to qualify as just more Big Discovery hype.

There may well be ecological, geological, or climatological reasons why this find is interesting. Or it could just be that, while the environments Way Back When were suitable for some of these arthropods, the arthropods just hadn't managed to reach all such environments by this time, having originated elsewhere with suitable climate. So ant lovers can now start trying to explain this expansion, and EO Wilson can stop writing bad novels about ants and get back to what he does really well.

But there is something that is much more interesting to us. It's another example of a fact that is already well-known but that these specimens give us the occasion to write about. It's the very deep conservation of very modern-looking form, in this case of insects (but other groups of animals and plants show some similar characteristics).

Now, evolution is supposed to be a relentless struggle to keep ahead, to survive in changing environments (that include the ever-changing mix of other species, climate, and so on). If you look at the DNA from a group of critters like, say, ants, their sequence differences correspond to their species differences, and separation times estimated from the amount of sequence difference correspond to fossil evidence. Since Way Back When, there has been a proliferation of related species, with their own morphological differences and adaptations.
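The molecular-clock logic behind that last point (separation times estimated from the amount of sequence difference) can be sketched in a few lines. This is a toy illustration, not from any actual ant study; the rate and divergence values below are made-up numbers chosen only to show the arithmetic.

```python
def divergence_time(pairwise_divergence, rate_per_site_per_my):
    """Estimate time since two lineages split, in millions of years.

    Both lineages accumulate substitutions independently after the split,
    so their pairwise divergence d grows at twice the per-lineage rate r:
    d = 2 * r * T, hence T = d / (2 * r).
    """
    return pairwise_divergence / (2.0 * rate_per_site_per_my)

# Illustrative numbers: 10% sequence difference at an assumed rate of
# 0.0005 substitutions per site per million years puts the split at
# roughly 100 My ago.
t = divergence_time(0.10, 0.0005)
print(round(t))
```

The point of the toy calculation is just that steady sequence change of this kind is what lets fossil dates and DNA differences be checked against each other; it says nothing about why the visible form can sit still while the sequences tick along.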

But wait! Why, then, do these very ancient fossils look basically just like the bugs that bother you when you try to eat out on your back deck? Why can morphology stay essentially unchanged for 100 million years? Why hasn't selection, or even just genetic drift (chance changes), led to changes in the form of these bugs?

The standard answer is that selection has kept it so. That would mean the environments haven't changed. But why then did the species divergence also occur? Long-term stasis is not a new discovery; it was the basis of a famous, but over-hyped, 'theory' called punctuated equilibrium, though that has to do with other aspects of evolution and the fossil record. For us here, what is not at all clear is how selection keeps form so similar for so long in various lineages, yet allows divergence even in local areas, as well as genomewide sequence differences that accumulate in the same way they do in less conserved branches of life. Various evolutionary ecologists, like a prominent acquaintance named Doug Futuyma, have advanced ideas about how genetic incompatibility can keep neighboring populations from interbreeding, so they retain their distinct forms and functions. But even without interbreeding as a source of variation, why doesn't form just drift--very slowly, perhaps, but after 100 million years, why not noticeably?

We don't have an answer, and maybe the rather superficial answer ('selection made it so') is correct. Even then, how that works with traits affected by many genes, as these insects' forms certainly are, is a major question for evolutionary biology... if evolutionary biologists would think more seriously about it rather than assuming the answer is known.

We'll post on this subject again.

Thursday, April 22, 2010

Fierce addendum

Just yesterday, the state of Arizona settled an indemnity claim by the Havasupai Indians. Blood samples had been collected putatively for the study of genetic factors that might be predisposing the tribe to diabetes. But the researchers also then searched for genes affecting psychiatric disorders. They had informed consent for the former, but apparently not for the latter.

Of course, the researchers, and other scientists quoted in the story, claimed innocence and that yes, they should be allowed to use DNA for whatever.

Abuse of society by scientists or any other elite starts step by step with this kind of self-important hubris. It is entirely right to stifle it.

This relates directly to our post yesterday about the Yanomami bloods stored in our freezer. We think that the Yanomami were given a broad explanation of the investigators' intent, because in regard to genes, at least, there wasn't anything specific to look at besides variation. So the use of their samples was for the stated purpose. The issue of distributing samples to other investigators has changed dramatically since then.

But this is not the case with the Havasupai samples. Modern informed consent was in place.

George Church, who runs the Personal Genome Project, may, unfortunately, have the best and right idea. His volunteers essentially sign away all rights, and acknowledge that there is no way to protect against any use of the samples. No confidentiality guarantees. Nothing.

This may be realistic, and may spread throughout the biomedical research community, that body of noble selfless citizens who simply want to do the experiments they want to do (something we've seen before in history). But it seems unlikely to stick. Sooner or later some use of DNA will be tried that will cause discrimination or something, and the courts will probably decide that the informed consent wasn't informed enough. What if, for example, an investigator some day clones an individual from whom a blood sample was taken? Frankenstein stories are easy to imagine. That doesn't make them true or even likely, but it does show how sensitive and dodgy the issues are, even for scientists with the best of intent.

Really something to crow about!

We used to think that we were Man the Toolmaker, because that was what made us uniquely human. Then, we lost our special place to other clever primates, who also could use tools. Marvelous, stunningly surprising, if humbling.

Step by step we are returned to the ordinary. But crows? Could they, too, put the kibosh on our egos? Of course, it's been known that they're clever (after all, Rossini wrote an opera, The Thieving Magpie, about that!). But you have to see this story and video from the Beeb. Here a crow uses a sequence of tools to get his treat. Apparently, according to the story, they do it even without prior exposure to the layout, and on the first try.

Maybe we have to go even farther down the evolutionary Chain of Being before we get to the bottom of our uniqueness. But birds and humans have been separated for hundreds of millions of years, so this means that it's not an evolutionarily shared trait (too many dumb-bell species on the way up both branches?). It evolved independently. That's even worse, as it makes IQ even more ordinary.

Oh, well, we're still the only species that can throw a baseball (at least, guys can....)

Wednesday, April 21, 2010

The Fierce Non-Controversy

In the 1960s and '70s, geneticist James Neel and anthropologist Napoleon Chagnon, with a team of others from the University of Michigan and South America, went to the Amazon rainforest to conduct biological and ethnographic studies of the Yanomami. This work launched Chagnon's career, and made him the most famous anthropologist of his era. The Yanomami were the subject of extensive, world-leading ethnographic films by Tim Asch that have been seen on television and in classrooms the world over. If you've taken a cultural anthropology class in the last 40 years, you have probably read Chagnon's book, The Fierce People, and seen these films.

But Neel had a different purpose. Before embarking on this project, he had already been involved in studying the genetic aftermath of the atomic bombs in Japan, and he became interested in comparing mutation rates of relatively isolated, inbred peoples with those found in outbred populations. How does 'primitive' population structure relate to the amount of variation in the genome? How did our ancestors' populations get rid of harmful mutations? What, in fact, is their load of such mutations, given that they're not exposed to chemicals, radiation, and other mutagens to which we in industrial countries were so regularly exposed?

In fact, the work was initially funded by the Atomic Energy Commission, which also funded the work in Japan. That was during the scare about atomic testing and other exposures to radiation (including what is still controversial, medical x-rays). Survivors of the atomic bombings became unfortunate natural guinea pigs in studies of the effects of the radiation they had been exposed to.

In those days, it wasn't possible to look at DNA itself to identify mutations caused by radiation or other agents, but Neel's group collected hundreds of blood samples to look for protein variants that directly reflected genetic changes and could be identified with the technology then available. Papers showed little direct evidence of important differences between the 'aboriginal' cultures and those of us in industrial countries. In fact, while the Japanese survivors did (and still do) experience excess risk of cancer, there was basically no increase in detectable mutations. But the data were crude, so the Yanomami blood samples that had been collected were saved in case times changed and more genetic information could be gleaned from them -- which is what happened.

But before DNA analysis was possible, Neel retired, and decided to divide the remaining blood samples among three labs, including our lab here at Penn State, because Ken had been a post-doc of Neel's in the '70s, doing demographic work with Yanomama birth and death data. The idea was that the recipients would be caretakers, and when the technology improved would do further analysis of these unique, never-to-be-repeated data.

One doctoral student here decided to work with these samples, asking genetic questions that hadn't been possible to answer in the pre-DNA days. But, just as he was finishing his work, a book was published that set the Anthropological world on its ears. That book was Darkness in El Dorado, by Patrick Tierney, a journalist with an interest in the aftermath of the work that was done in the Amazon years before.

He made some very inflammatory claims about how Chagnon and Neel had carried out their work decades before, most seriously accusing them of having deliberately spread measles throughout the area to study the effects of the virus on a virgin population. There were other accusations as well, and the book set off an uproar in the field, launching investigations and email flame wars that went on for months. The book was rather irresponsible in its charges about these biomedical studies, and the most egregious accusations were easily dismissed without any question.

But, the book did lead to Brazilian anthropologists contacting the Yanomami and telling them that blood of their parents and grandparents was still being stored in freezers in the United States, and suggesting that profit was being made with the samples, and that they could ask for their return. Yanomami are culturally proscribed from even mentioning the dead by name, so to at least some people, the knowledge that the blood of their dead grandparents could still exist required that they ask for the return of these samples. Proper funereal rites were needed to put to rest the souls of the sampled individuals who were deceased.

When the request was made via the Brazilian government, Ken put a moratorium on the use of the samples, and readily agreed to return them. Ryk Ward (now deceased but one of those who collected the samples, and a longtime friend of Ken's) agreed, and he and Ken discussed ways to return them, even in person. We started preparing to return the blood. And then we waited, and waited, and waited.

It has been 8 years now since the return of these bloods was requested, and we're still waiting to hear from the Brazilian government as to when and where and how they should be sent. Brazilian lawyers and anthropologists and the embassy have contacted Ken over the years, but still with no definite information about the return.

One issue is that, as we do with every sample that comes through our lab, we have treated these bloods as potentially biohazardous. Not because we know anything specific about these samples that would, after 50 years, make them still harmful, but because we always err on the side of caution. The samples have been deep-frozen, so some pathogens could in principle have survived. Since we don't know what will be done with the bloods once they are back in Brazil, this is something that Ken has discussed with every official who contacts him about their return. We are currently still waiting to hear from the Brazilian embassy how best to ensure that they are safe and delivered to the proper recipients in Brazil.

Meanwhile, this is a non-issue: these bloods will be returned when the logistics are worked out, and that has always been true, agreed to by all the labs that currently house such samples. In spite of this, for years an American anthropologist has masterminded numerous online letter-writing campaigns, stirring up undergraduates who are not properly informed about what has been going on to demand the repatriation of these bloods.

So, from time to time Ken hears from reporters at newspapers that are getting flooded with letters from students, asking what's going on. The office of the president of Penn State was once inundated with letters (said anthropologist spearheads all of this via an online course), so that the university lawyers got involved. This guy has been told over and over that there has been a moratorium on the bloods since 2002, and yet he still persists in accusing Ken of refusing to hand them over. You can see here that Ken has pretty much lost his patience with this man's demagoguery.

Apart from this, the story does raise a number of issues that are relevant, and legitimately so, more broadly in Anthropology. The bloods were collected in the days before today's kinds of informed consent (for whatever that's worth in such a setting), so there is no binding obligation to return them. Indeed, the samples were given voluntarily, paid for with trade goods, and with explanations of the general objectives for their use (we weren't there, so we can't comment on the details). There is no legal issue here; the issues are ethical. Any research project using human subjects initiated today would include a clause in the consent form allowing subjects to request withdrawal from the study at any time, and that's how Ken is treating this, and has from the beginning. That holds even though most of the subjects are deceased and it is their relatives, or even group members, who are demanding the return, and even though the demand is partly about restoring a sense of justice after a lot of abuse of the Yanomami by the outside world that had nothing to do with these blood samples.

Archaeology is confronted by this kind of situation all the time, with requests for repatriation of skeletal remains and artifacts. Who determines where they belong? And who decides whether they go back? Who has legal, and more importantly who has moral, claim to set the conditions for use or return?

In the case of the Brazilian Yanomami bloods, the group itself has decided, or at least representatives of the group. But each population in each country will have different criteria and different feelings. These Yanomami bloods will go back to Brazil for at least some kind of closure, whenever the Brazilian embassy gives us a time and a place to send them to. There might still be interesting things that could be learned from these samples, and such samples could not be collected again. But such is life.

The ethical issues, involving a history of dominance and colonialism and power differentials between industrial states and indigenous peoples, can be found on every continent. The details and political realities may differ, but there is much to think about for those who would like to do good science, but would also like to ensure that it's done ethically.

Tuesday, April 20, 2010

Getting the picture

In discussing the cost of geneticizing complex common diseases, a colleague quipped a few years ago that it would be cheaper to hire a personal trainer for every diabetic than to continue the kind of research that has been going on in relation to this disease. It was a joke, but it wasn't funny, because it is, seriously, the right idea.

And believe it or not, our nation's less than noble citizens, the health care industry, might be catching on. A story in the NY Times says that UnitedHealth Group is teaming up with the YMCA to implement a lifestyle program--diet and exercise--to reduce the cost of treating people with Type 2 diabetes (which used to be called adult-onset diabetes, but now kids are getting it too).

All we can say is: finally, someone is getting with the program! If this were done nationally, the cost of health care from an insurance point of view would go down. Tons of wasted high-tech research, such as big GWAS and other kinds of studies, would not be done, and the costs of that research could be redirected toward behavioral intervention, or even studies of how to implement behavioral intervention (so long as the latter were not just more school-of-public-health professor welfare, but actually implemented programs).

Doing this would have benefits for everyone (except, perhaps, the genetics industry): countless people would lead healthier, longer lives. Health insurance costs would drop for everyone (in a properly regulated industry, because premiums amortize group costs into every participant's bill, and the total cost for the groups of insured would drop). And what would remain would be the cases of diabetes that really are genetic. Then legitimate genetic research would be able to focus on genetic problems.

Cases that really are genetic might involve less genetic heterogeneity, or genes whose mutations showed up regularly in cases, or mutations with very strong effects. These would suggest targetable pathways for therapy, and for various genetic or other pharmaceutical engineering to tackle the problems. That would be the right way to spend health research funds, good for everyone.

Except..... Except that if thousands upon thousands of diabetics lived longer, more active lives, they would also be more likely to get some other degenerative diseases, like cancers, simply because they didn't succumb to diabetes first. And those diseases would need health care with its associated costs. Ah, well, one can't have everything!

Monday, April 19, 2010

The 'science' of living a healthy life?

Yesterday's NYT Magazine was devoted to The Science of Living a Healthy Life. Does exercise help us lose weight? Is estrogen replacement good or bad? Is marriage good for your health? We could stop right here by saying that the answer to living a healthy life is easy, and there's no need for expensive studies because we've known for millennia how to do it -- moderation in everything. But, this advice doesn't fill careers -- or magazines.

So, science boldly marches on. But how could we possibly not know the answers to these questions, after billions of dollars, thousands of state-of-the-art studies, and the devotion of many smart minds to the problem? In short, it's because the questions are wrongly posed. They address a causative factor as though it can be studied out of context -- as though you could simply add exercise to your life and measure what happens as a direct result. It exemplifies the way we've earlier described how science 'tunnels' through complex truths to try to get at isolated subtruths, when what is needed is the greater truth.

Indeed, weight loss is a 'simple' equation whereby expending more calories than you take in must result in shedding pounds, because that energy must be drawn from stores in tissues. We only get that energy from food. Simple beyond simple?

No, not really. Expending calories triggers other effects, hormonal and otherwise, that cause you to want to consume more calories than you would if you hadn't exercised, so simply adding a workout to your daily schedule is unlikely to result in weight loss. Eating less and exercising is another story, however. But, eating less without exercise results in weight loss as well (because of that simple energy balance equation, and the fact that just living and breathing require energy). But either way, once the weight is lost, it's very easy to put it right back on -- at least in our culture, with its temptations and so on. Here's where exercise might actually help -- it turns out that to maintain weight loss, exercise might be the key, though it's not clear why.
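The 'simple' arithmetic in the paragraphs above, deliberately ignoring those hormonal feedback effects, can be sketched in a few lines. This is a toy calculation, not a physiological model: the 3500-calories-per-pound figure is a commonly cited approximation, and the function name is ours, used only for illustration.

```python
# A naive energy-balance calculation, ignoring the hormonal and behavioral
# feedbacks discussed above. The 3500 kcal-per-pound figure is a commonly
# cited approximation, not a physiological constant.
KCAL_PER_POUND = 3500.0

def naive_weight_change(daily_intake_kcal, daily_expenditure_kcal, days):
    """Predicted weight change in pounds under the 'simple' equation:
    energy stored (or lost) = intake - expenditure, at ~3500 kcal/lb."""
    total_balance = (daily_intake_kcal - daily_expenditure_kcal) * days
    return total_balance / KCAL_PER_POUND

# A 500 kcal daily deficit sustained for a week predicts about a pound lost.
print(naive_weight_change(2000, 2500, 7))  # -1.0
```

The point of the posts above, of course, is that real bodies don't behave like this function: exercise changes appetite, and maintenance after loss behaves differently from loss itself.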

So, the answer to whether exercise causes weight loss is that it might do so if only it didn't trigger those other effects. Especially in women, apparently, as though women have evolved a mechanism to maintain body fat. The Just-So explanation for this would be that extra body fat is important for successful reproduction (gestation and lactation). But, this assumes that our species evolved in conditions in which there were excess calories to store. An unlikely scenario. So for now, it's possible that women's hormone repertoire handles body fat differently from men's, but it's not yet clear whether that is actually so. And since men have to be the tough guys defending the home front, and out hunting dinner, why shouldn't they, too, have comparable levels of the hunger effects?

As to whether marriage makes people healthier, as one of the stories in this week's magazine says, well, it depends. Again, it's impossible to answer this question out of context. And, does estrogen replacement improve the health of menopausal women? The answer again is, it depends. These questions assume a linear, simple, cause-effect relationship that just isn't the way biology works. We know this from genetics, epidemiology, psychology, economics, and so on. Now kinesiology and physiology are confirming this, in their inability to tell us how to live a healthy life. They raise the question as to why we have kinesiology departments and big grants if even these things haven't been worked out.

But from a more sympathetic view, the continued lack of clarity on these very basic kinds of questions shows that prevailing scientific methods just aren't yet up to measuring cause and effect in complex systems of this sort.

Friday, April 16, 2010

Thursday, April 15, 2010

Something to root for

Yesterday, we commented on the typical kind of societal, scientific, and biological complexity associated with genetically engineered food crops. Will evolution have its way, or will we keep one step ahead of famine? The issues have to do with world population demanding industrial-scale agriculture (or, at least, that's how modern societies are constructed).

Assuming that climate change doesn't change the entire name of the game, Mega-dust-bowl fashion, we are still feeding the world on the back of 'portable' fossil fuel (for fertilizers, insecticides, tractor and shipment fuels, and so on). We lay out huge fields of a single crop, as far as the eye (or a soaring eagle's eye) can see. These are largely grasses, the staple foods of most of the world. But the way we do this leads to disaster.

We are losing topsoil faster than a stripper loses clothes, and pretty soon farmland will be as bare as a stripper, too. We have water pollution problems due to runoff not just of soil but of fertilizer and pesticide. One reason is that we rely so heavily on annual grass crops. These need to be plowed under each year, baring and loosening the soil, which makes it vulnerable to runoff.

And mega-agribusiness removes family farms and their communities. Small farms traditionally had a mix and rotation of crops, grazing and other livestock (and their manure), and so on. And though the work was very hard, they had community. This lifestyle is not for everyone, but many pine for it, as it disappears.

However, a different kind of genetically engineered crop is being worked on by a few researchers in the US and elsewhere. Most notable is The Land Institute, in Salina, Kansas. There, under the direction and inspiration of Wes Jackson, new ideas are taking root. Many wild grasses related to commercial food grasses are perennials rather than annuals. They last for years, keeping roots that anchor the soil and hold the topsoil in place, reducing the need for fertilizers, and so on.

Jackson and his group of dedicated experimenters are using traditional agricultural breeding and, where possible, newer scientific methods, to get commercially useful annual crops to take on the relevant aspects of perennial stock, so that we can have perennial wheat, corn, and other crops. With even modest investment, they will likely be able to succeed. If they do, it will meld real science, both high-tech and traditional, with sound long-term future implications for our physical bellies and our social hearts.

Nobody claims that this will be easy, but it is easy to claim that we have little choice. The Land Institute has a web page where you can learn about these efforts:

Wednesday, April 14, 2010

Growing pains cropping up?

There is a good story in the NY Times about the use of genetically modified crops that reflects some of the politics, and much of the genuine complexity, of the life sciences and their application to human problems. The headline: Study Says Overuse Threatens Gains from Modified Crops.

Crops have been engineered to resist pests in one of two ways: either they have built-in insect resistance (they produce a toxic protein, Bt), or they are resistant to an herbicide called Roundup. These crops yield about the same amount as before, but require less tillage to turn over weeds, or less insecticide spraying. By not requiring tillage every year, they help retain ground water and reduce soil erosion. This seems all to the good.

Any evolutionary biologist would predict that sooner or later the target species of insects or weeds would evolve resistance to the GM crops' engineered defenses. The Times story refers to growing Roundup resistance among weeds, and we can expect insects to 'learn' to eat these Bt plants.

The social side of this has to do with agribusiness: forcing farmers to pay for GM seed because the plants can't be propagated by the farmers themselves, raising the cost of the crop because of the seed costs; domination of the market by monocropping of the same type of plant, which can encourage a more rapid evolution of resistance; and the escape of the engineered plants to contaminate wild strains, reducing biodiversity and future, different breeding stocks.

And of course there are those who either don't trust the safety of the genetically different plants for human consumption, or who don't like the corporate monopoly and greed, or who think the engineered plants will spread too widely, or will kill off necessary insects (like pollinators). And those who just don't like big business and yearn for small family farms and their communities.

In fact, all of these sides to the story have some merit, and what the long-term impact on society--if any--will be is of course unknowable. The issue involves not just the science, but the social sides, because protagonists seize on any fact that supports their side.

So far, it does not seem that most of the worries about food safety are valid. Reduced soil erosion and reduced spraying are good, and can help stall some of the problems with mass-scale monocropping. Yes, small farmers are being replaced by capitalists in Omaha and New York, but different jobs are being created as well. How you rate the good and the bad will depend as much on your politics as on the science.

The bigger questions are whether genetic engineering technology can stay one step ahead of evolutionary ecology, and whether all this exotica will become moot, or even threatening to our way of life, if society goes into a serious slump, such as an oil crisis cutting off supplies of fertilizer or pesticides, or if the human population grows too large or paves over too much land.

So if someone asks whether GM crops are good or bad, perhaps the best answer is 'yes'. Or simply the evolutionary response: 'wait and see.'

On the other hand, tomorrow we'll post about a program that is trying to balance all of the above objectives--and it's not pie in the sky, either.

Tuesday, April 13, 2010

Which came first? Or, the chicken or egg question is a non-question

There's a commentary by Michael Balter in the April 9 issue of Science, entitled "Did Working Memory Spark Creative Culture?" Did an enhanced ability to remember things make us human?
In the view of Wynn and Coolidge—an archaeologist and a psychologist who form an unusual scientific partnership—a stepwise increase in working memory capacity was central to the evolution of advanced human cognition. They argue that the final steps, consisting of one or more genetic mutations that led to "enhanced working memory," happened sometime after our species appeared nearly 200,000 years ago, and perhaps as recently as 40,000 years ago. With enhanced working memory, modern humans could do what their ancestors could not: express themselves in art and other symbolic behavior, speak in fully grammatical language, plan ahead, and make highly complex tools.
Not everyone is buying this theory, because it's supported by rather sparse data, and because it's not clear which came first, enhanced memory or advanced human cognition, but the theory is making a splash nonetheless.
Despite the critics, Wynn and Coolidge's ideas are increasingly popping up in scientific journals. The pair "have made a really big splash," says Philip Barnard, a cognition researcher at the University of Cambridge in the U.K. This month, Current Anthropology devotes a special online supplement to the topic, and later in April, Wynn and Coolidge will update their ideas at a major meeting on the evolution of language in the Netherlands. The theory makes sense to many. "It is the most impressive, explicit, and scientifically based model" so far, says archaeologist Paul Mellars of the University of Cambridge.

But we aren't here to debate the merits of this theory. We're here to point out that in a broader, and more serious, sense it's bunkum! Cognition, whatever its molecular and neurobiological nature, is only one in a long string of human capabilities that have been advocated as The One that made us what we are.

So, not only our memory, but fire, domesticated dogs, brain size, opposable thumbs, the FOXP2 gene ('for' language), neotenous birth, an altered birth canal, and the microcephalin gene, among many other traits, behaviors, or genes, have been posited as 'the' thing that made us human. Because of natural selection, naturally.

Each of these theories can be backed up by a story that logically (usually) makes sense. Each refers to genuine human traits that are quite different from the corresponding traits in our closest primate relatives. So far so good, and understanding these traits is a fine and important thing to do. But it's curious that each trait is so often just one thing taken in isolation -- and that that one thing is often the researcher's particular area of expertise. 

But there was no single point in time when we became human. And no single trait evolved in isolation, or simply in response to another. The traits evolved in a higgledy-piggledy way, as mosaic changes in varying physical, metabolic, and neurological patterns. Without upright posture, our kind of tool use would not have evolved. That means useful thumbs. Upright posture means various other anatomical changes. Without these, there'd be no need for our kind of brain power. Language, and symbolic behavior verbal and otherwise, went hand in hand (or thumb in thumb?) with these other changes. This is what's interesting about how evolution works: everything evolves together -- must co-evolve -- which of course is true of every species, not just humans. Even if something like the enhanced memory theory "makes sense to many", it's nonsense to treat these traits in isolation.

And, evolution doesn't work by leaps, which is what most of these theories in effect would require: A point in time when a new thumb form or a tame wolf was enough to allow its owner to have more children than those in his or her small band who were less well-endowed, a kind of luck that would have had to carry on for many generations. All the kinds of traits that are posited as the one that made us human are complex, they evolved slowly over many millennia, and they generally arose together. The idea that we should be able to attribute our "humanness" to a single gene or trait -- a kind of 'Master' trait, which would have had to precede the others -- involves precisely the same determinism and darwinian fundamentalism that we so frequently criticize. 

It was long ago -- in 1924 -- that the Taung specimen was unearthed in South Africa. That specimen showed that, contrary to our egotistical preconceptions, the noble (gratuitously greedy and murderous?) human brain did not lead the way. Upright posture of a sort evolved first. Culture evolved along the way, bit by bit, and does not require a huge brain per se. Trying to hawk MyFavoriteTrait is good for making a news splash, but it is misleading science. Fortunately, while it helps build careers, most of the nonsensical aspects quickly fade. Meanwhile, those who are serious about human evolution can ignore the carnival barking and study the traits in a more sober evolutionary sense.

Monday, April 12, 2010

Why darwinian fundamentalism and genetic determinism?

Once again we return to the subject of darwinian fundamentalism and genetic determinism, something we have blogged about frequently before (here, for a start). It's a subject that we write about a lot because this worldview is so pervasive and can have such destructive effects. But if that's true, why does such a fervent clinging to darwinian fundamentalism and genetic determinism still persist?

Why such fervent clinging to darwinian fundamentalism in spite of what we know?
Darwinian fundamentalism is the belief that organized traits have to get here by a history of some material causes, not by pure chance, and that the major molding force we know of is natural selection. This view is held by many in spite of the fact that we know that selection is not usually a 'force', that there are usually multiple ways to be successful at any given time, and that most traits are due to many contributing genes. Pruning away what simply doesn't work is undoubtedly important, and is selection, but it isn't natural selection in the Darwinian sense: it need not involve competition, resource shortages, or population pressure. And it's not fine tuning.

Chance effects manifestly occur. It is probably typical that there are many roads to success (i.e., to reproduction), most of them not what might be deemed the very best (given this, as we say in The Mermaid's Tale, a more accurate description of who succeeds would be 'failure of the frail' rather than 'survival of the fittest'). Among roughly comparable ways to do OK, the way one takes is a matter of chance, and the differing ways will change slightly over time. There are other ways to mold traits such as (but not restricted to) organismal selection where individuals choose their preferred environment, and niche construction where they make it.

All of these are perfectly in line with what we know about genomes, phenogenetic relationships (between genotypes and phenotypes in individuals), and evolution as population history. To say that something organized can be here without having been fine-tuned by selection, or without some specific selective reason, is not anti-evolutionary nor non-causal, nor mystic! There is no need to invent adaptive scenarios for every trait!

Why is there such fervent clinging to genetic determinism?
The fact that genetic mechanisms are involved in essentially all biological traits does not mean there must be a specific genotype (or even a tractable subset of genotypes) that determine or mandate a trait. Chance in many manifestations, and the effects of non-genetic aspects of life combine, often in different ways in each individual to produce traits. Each instance may involve a different genotype (perhaps at a great many genes), but even knowing that genotype may not lead to useful predictive ability. That organized traits can be here without having been genetically determined in the usual connotation of that phrase is not non-materialistic, nor non-causal, nor mystic!

To insist that only one particular explanation is consistent with legitimate materialistic science is a deep misunderstanding of what we know about evolution, and of what we know about genetics.

The acceptance of the role of chance in evolution is treated as heresy in some circles. It's a clinging to darwinism as religious fundamentalists cling to the Bible. But, so much has been learned in the 150 years since Darwin that it's bad science not to question, and build upon Darwin's ideas. Odd that he's so often treated like Jesus! It's not questioning evolution, it's strengthening it.

Friday, April 9, 2010

The newest newest newest transformative fossil fills the gap....but not the credibility gap

Here we go again. As reported in Science, and commented on by the BBC, and in fact all over the news, more hominid fossils -- two specimens -- have been found!! These, found in South Africa, 'might be' a new species, says the discoverer. Why? Because they aren't just like other known fossils. Naturally.
Researchers tell the journal Science that the creatures fill an important gap between older hominids and the group of more modern species known as Homo, which includes our own kind.
What else can they say if they want to be in the news? Hey, says CNN, it's a "time machine", it "sheds new light". And it's no surprise that the investigators have already, unilaterally, decided on its name: Australopithecus sediba. The problem here is not that the finds aren't potentially important and informative, nor that they come from a sparsely represented part of human ancestry. It's the melodramatizing of the term 'species'.
"It's at the point where we transition from an ape that walks on two legs to, effectively, us," lead scientist Professor Lee Berger of the University of the Witwatersrand told BBC News.
The time period is 1.8 to 2.0 million years ago, but calling it a 'point' reflects the issues. Morphological change would likely have been very gradual (even this 'point' spans 200,000 years, or about 10,000 generations -- roughly twice the age of our own species).

Naturally it doesn't look just like other things that distant from us or from each other. It's a "fascinating mosaic" of features, said one person interviewed. Another says that rather than putting it in Australopithecus, 'certain features' suggest it was really Homo. But these are entirely subjective cutoff points. Indeed, even the species concept itself is vague in many cases. And Homo vs Australopithecus is a genus, not a species, distinction, and genera are entirely human-imposed, not natural, categories. Since this is basically an evolutionary continuum, it's good for the paleontology business to be able to argue about the distinctions, even if they have little meaning.

Holly is away actually doing some science of her own (presenting research at two meetings) but she expects to provide you with some entertaining (and cogent) comments on the finds themselves when she gets back home. She's an expert, and she may see things more stunning than estimates of the fossils' age at death, how tall they were, and that they 'may have' been somewhat stronger than Lucy (3 Myr old). Give us a break!

Finding something from a poorly represented time gap, with morphology not exactly seen before, doesn't make this dramatic news. It may well be quite interesting to paleoanthropologists, but it can't overturn what we already know, and the proliferation of species names is not helpful (because it's so subjective), and hence it's a problematic reason to blare out yet another most-important-fossil news extra.

By pretending that every investigator and every 'find' is, like the children in Lake Wobegon, above average, we turn all science into vanilla, making it hard to know what's really noteworthy and worth following up with more research. Of course, to some extent this is yet another intentional aspect of our current science and science-funding culture, beyond the investigators' understandable personal investment, satisfaction, and ego in their results. It's part of the lobbying process.

That magazines like Science and Nature are hungry for these Big Stories makes them less trustworthy as serious science journals, a view widespread even among those who, for personal career reasons, hunger to be published there. Making them the go-to places, even if it's often as much circus as science, undermines other journals, where most of these stories properly should go to nurture the various areas within science.

Science needn't be boring, but one can take great interest in such fossils without insisting that they are profoundly revolutionary. Living on steroids is exhausting. Instead, if we had a proper understanding of science, and the phenomena of Nature, interest alone should be enough.

Thursday, April 8, 2010

What is 'Just-So' science?

We write a lot about the tendency toward ideology in the way some science is done, especially in relation to genetic determinism and Darwinian fundamentalism, pointing out how often and how far inferences in these areas go beyond the data, or scenarios are assumed and then treated as proven. When applied to humans, the offenses are more numerous and more serious, especially since they can have social consequences and are often blatantly related to lobbying for careerist objectives like research funding. Misleading hypotheses masquerading as established truth can (and are designed to) move money, and that takes it away from other areas that might be more fruitful.

When you construct a hypothesis about something you see in Nature, it almost always has some relevance to a currently accepted theory. But in the areas mentioned, the theory as well as the hypothesis tend to be accepted rather than subject to serious, objective testing. Thus, the idea that everything has to have a Darwinian explanation--that is, that natural selection is assumed to be the causative force--underlies much of what we object to in our posts. Yet proper evolutionary theory does not require the selective assumption, in the usual fine-tuning sense in which it is typically invoked. Even many biologists, not to mention hosts of pop-culture opiners, harbor what is largely a caricature of what we actually already know.

A Just-So story is one of Rudyard Kipling's children's tales that gives a fanciful, but of course seemingly air-tight explanation for some observed phenomenon. How did the leopard get its spots? An Ethiopian friend daubed the once all-tawny leopard with his blackened finger tips.

In science, a Just-So story is one that can hardly be refuted, is offered based on some sort of evidence, usually not very rigorous, and is treated as truth, almost defying anyone who criticizes to prove that the hypothesis is false.

In fact, in order not to be junk science, a Just-So hypothesis should be subject to stringent testing before being accepted, just like any other hypothesis. Instead, those offering such an hypothesis often act as if it is an insight that only a benighted heretic would challenge. They put a huge burden of proof on the challenger when, in fact, the Just-So hypothesis is often so air-tight with after-the-fact rationale as to be hardly an empirical suggestion at all -- and not all that different from other kinds of statements of faith.

This is serious, because much of the life, social, and health sciences are victims of facile hypotheses of this kind. Some of them may be true, but the strength of evidence is usually wanting. In many cases collecting appropriate data is very difficult, if it's even possible. Many evolutionary scenarios are like this. But that is an argument against, not in favor of, asserting the hypothesis.

Questions that often tempt Just-So explanations include those related to our recent post on the widespread observation of 'gay' behavior, questions such as: Why are men like this and women like that? Why is gender behavior like it is? Or, in human genetics: Why do we get diseases like cancer or diabetes? What is the cause of mental illness? Can we explain these phenomena with 'Darwinian' scenarios?  Hypotheses invoking natural selection or genetic determinism are routine, yet we have tons of reason to realize that they are usually heavily over-stated.

The point is not that outcomes or traits have no cause, nor that the cause can't have a genetic component, nor that natural selection may not have played a role in the trait in the past. Instead, the point is that with pat hypotheses, often representing pre-formed worldviews, and/or vested interests, we don't know what level of truth applies since the assertions are being assumed rather than tested.

As scientists, we should stop telling Just-So stories, no matter how much they make us feel perceptive and intelligent, or lead us to be assertive when it comes to things that could be misused against people in society. Because it's awful science, it is often justifiably called 'junk' science, and detracts from good science.

Of course, those stories might be perfectly suitable for entertaining children, for whom contemplating fantasies may be an enjoyable stage of life.

Wednesday, April 7, 2010

Pull up or pull out?

Well, here's another stunner. After all this time, we still don't know what amount or what kind of exercise is best. With all the money, marketeering, and other scientific interest involved in understanding what makes us sick, what we know is that some exercise is good for you...but not too much. And older people don't like vigorous exercise.

How could it be that after billions of dollars in research (and promotion) over many decades, we don't know? After all, you can find routine comments about the virtue of exercise in, say, Victorian novels (and, though we haven't looked, probably in Hippocrates 2500 years ago).

This reflects the serious if not fundamental problem with epidemiology, a problem at least as serious as making causal inference in genetics, where at least DNA segments can be objectively identified. In our posts last week on reductionism, we discussed the way that science 'tunnels' through complex reality by reducing studies to isolated variables to identify their causal effects on some measured outcome. We noted that in genetics this has provided a phenomenally successful way to discover many things that genes do. But it leaves the genetic contribution to most interesting traits, known by a catchword as 'complex', poorly accounted for in any specific way.

In environmental and lifestyle epidemiology, which deals with the common causes of disease and death in the industrial world, even tunneling (with all its limitations) is exceedingly problematic. That's at least in part due to the fact that exposure variables, unlike genetic sequences, are poorly defined and measured. 'Exercise' or 'diet' are examples, but so are many other aspects of modern life. This is especially true when the fact as well as the amount and duration of exposure are needed. Who can remember what they did, or ate, decades ago--all the way, perhaps, back to early childhood? At least we're exposed to our genes from conception, and infectious diseases often are clear-cut point-causes where a single exposure can do the trick: one sneeze from someone with the flu is enough.

Besides the fact that it's hard to isolate individual variables, such as diet or exercise, in a cogent experiment, the variables of interest are heavily confounded: diet is correlated with many other aspects of our lives, measured (e.g., education, income) and unmeasured. The same is true in genetics (e.g., what's called 'linkage disequilibrium' or 'epistasis', for those who know genetics), but there it is easier at least to think about, if not to measure (though by no means trivial). If eating chocolate rather than, say, Supersize sodas is correlated with exercise, and both are correlated with social status, jobs, education, neighborhood, race, and so on, then it is nearly impossible to isolate any one of those variables. Confounding is a very serious, if not lethal, problem for 'tunneling' approaches in epidemiology, even if they would otherwise work--and because of complex causation, they probably wouldn't.
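To make the confounding problem concrete, here is a minimal simulation--the variable names and effect sizes are invented purely for illustration. 'Income' (the confounder) drives both 'exercise' and 'health', while exercise is given no direct effect on health at all; a naive analysis nonetheless finds a strong exercise-health association.

```python
import random

random.seed(1)

# Hypothetical data: income (the confounder) influences both exercise
# and health; exercise has NO direct effect on health by construction.
n = 10_000
income = [random.gauss(0, 1) for _ in range(n)]
exercise = [x + random.gauss(0, 1) for x in income]
health = [x + random.gauss(0, 1) for x in income]  # depends only on income

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# The naive analysis reports a substantial 'association' (about 0.5 here)
# even though we built in no direct effect whatsoever.
print(round(corr(exercise, health), 2))
```

The correlation comes entirely from the shared dependence on income; without measuring the confounder (and everything correlated with it), the tunneling approach has no way to tell this apart from a real effect.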

We've previously posted about the problem that preventive screening, as in mammography to detect breast cancer, may over-diagnose and over-treat lesions that would go away on their own or turn out not to be cancer after all. Studies showed that those not screened had fewer deaths from cancer than those who were screened. This has both monetary and psychological costs. It challenges various vested interests, too, so it's not surprising that we need yet another study (with its costs, too). So the latest U-turn in this saga is getting headlines, threatening policy changes (or no-changes), because it defends screening. And a BBC report interviewed a Scandinavian expert who bluntly ridiculed the current study as patently wrong. We can't judge without attempting to read the latest studies, and their skeptics, which we haven't done. The statistics are subtle and one can probably find in them what one wants to find. But our point is the problem of uncertainty, even in heavily studied issues.

There's no easy answer, but the fact that there's a problem is clear. Candid epidemiologists who recognize this (privately, of course, because doing so publicly would jeopardize their funding chances, as we have been told many times) often turn in frustration to genetics. They believe that we have The Truth because genes, being molecules and the blueprint for life, have to be more tractable. Well, sorry, but it's not so simple on the genetic side of the fence either.

Dealing with complexity is a challenge. Somehow, the latest finding still makes it into the news, even though we really know that next week or next year the reverse will be announced. When and how we'll start making better progress is anyone's guess.

Meanwhile, do your pull ups, but don't pull out of your health club membership.

Tuesday, April 6, 2010

The efficacy of predicting gene prediction

Francis Collins has a track record of predicting tremendous advances in health from the human genome project--what he and many others said would be a revolution. Writing in the April 1 issue of Nature, which celebrates the 10th anniversary of the Human Genome Project, he asks if the revolution that he and others foresaw more than a decade ago has now arrived. He writes that he never deletes a PowerPoint file, so he knows exactly what he predicted when he stood on the podium with President Clinton in 2000 and announced the completion of the draft human genome sequence.
Predictive genetic tests will be available for a dozen conditions
Interventions to reduce risk will be available for several of these
Many primary-care providers will begin to practise genetic medicine
Preimplantation genetic diagnosis will be widely available, and its limits will be fiercely debated
A ban on genetic discrimination will be in place in the United States
Access to genetic medicine will remain inequitable, especially in the developing world
He says that it's fair to say that these predictions have come true. Well, yes, one can always declare victory after the fact. He also adds, "The promise of a revolution in human health remains quite real."  

But let's look at his predictions one at a time. In fact, there are dozens of predictive genetic tests available -- but how accurate are they? Interventions to reduce risk of, say, obesity or type II diabetes are available, yes, but they've been known for decades, if not centuries -- diet and exercise. What he means by genetic medicine isn't really clear, and preimplantation genetic diagnosis can't be claimed to be due to the HGP, but to technological advances, as well as the discovery of genes for pediatric Mendelian diseases, which was already under way before the human genome sequence. A ban on genetic discrimination is in place (GINA), but how successful it will be is still an open question (and, as our friend, geneticist and tuba player Joe Terwilliger, says, the differential rates 18-year-old boys and girls pay for car insurance are genetic discrimination, aren't they?). And, well, predicting differential access to anything medical based on income disparities isn't exactly a challenge.

Collins in fact made many other predictions (or promises) for the HGP, particularly in the late 1990s as the human genome was nearing completion. For example, in a lecture he delivered in 1999 he presented a scenario for the year 2010 whereby a hypothetical young man, a smoker, is found to have high cholesterol. Here's a bit of that imagined scenario:
Aided by an interactive computer program that takes John's family history, his physician notes that there is a strong paternal history of myocardial infarction and that John's father died at the age of 48 years.
To obtain more precise information about his risks of contracting coronary artery disease and other illnesses in the future, John agrees to consider a battery of genetic tests that are available in 2010.  After working through an interactive computer program that explains the benefits and risks of such tests, John agrees (and signs informed consent) to undergo 15 genetic tests that provide risk information for illnesses for which preventive strategies are available.
John is pleased to learn that genetic testing does not always give bad news -- his risks of contracting prostate cancer and Alzheimer's disease are reduced, because he carries low-risk variants of the several genes known in 2010 to contribute to these illnesses. But John is sobered by the evidence of his increased risks of contracting coronary artery disease, colon cancer, and lung cancer. Confronted with the reality of his own genetic data, he arrives at that crucial "teachable moment" when a lifelong change in health-related behavior, focused on reducing specific risks, is possible. And there is much to offer. By 2010, the field of pharmacogenomics has blossomed, and a prophylactic drug regimen based on the knowledge of John's personal genetic data can be precisely prescribed to reduce his cholesterol level and the risk of coronary artery disease to normal levels...
Understandably, Collins was a great cheerleader for the genome project. But that is exactly the problem, and we regularly write about it: the life-as-lobbying worldview. Dr Collins did say that he believed that diseases would turn out to be caused by numerous genes with small effects, interacting with environmental factors, but he clearly believed that these genes would be common and identifiable (hence, the HapMap project, whose scenarios and promises did not materialize except by a lot of post hoc wriggling and redefinitions, such as of 'common').  And he clearly believed that understanding genetic effects would be straightforward -- and counteractable, with some help from the pharmaceutical industry.

In fact, the 'several genes' he predicted would be found by 2010 to be responsible for Alzheimer's disease turn out to be hundreds of genes, each with many alleles and mostly with tiny effects. And, although he did write that lifestyle changes were a component of prevention, and that John should quit smoking, he also clearly believed that designing prophylactic drugs based on what was learned from the human genome would be so easy that by the year 2010 we'd be able to prevent many genetic diseases pharmaceutically. After we easily predicted them.

But in fairness, no one should be strictly held to their predictions (well, except for seers and grant seekers who promise too much) and mostly this hypothetical scenario is interesting as insight into Collins' beliefs about the importance of genes and the power of technology to counteract them -- a set of beliefs that still drives him as director of the NIH, which we have blogged about before. But to us, at least as interesting is something we wrote about last week, the use of modern technology to tell us something we already know. In Collins' hypothetical scenario -- which was completely made up; remember, he could have imagined anything -- John learns from his genetic testing that he was at risk of heart disease and colon cancer. But his family history already told him that! This isn't exactly something that justifies the billions of dollars spent on the HGP!

As director of the National Institutes of Health and past director of the National Human Genome Research Institute, Collins has had tremendous influence over the direction of medical research funding for the last decade, and will continue to have it for the foreseeable future. Indeed, as we've said many times before, his faith-based commitment to improving our health through genetics and technology is taking real money away from real problems that could have real solutions, given equal commitment to solving them.

Monday, April 5, 2010

The long history of Gay Pairee

An article in the Easter Sunday New York Times Magazine shows that homosexual behavior is found in hundreds of species of animals--even insects. That this merits an article, with lots of hand-wringing as to whether gay behavior is right or wrong, natural or otherwise, or indeed even worthy of explanation reveals a lot more about our culture than it does about nature. In short, it shows that our obsession with sexual preference is myopically ethnocentric.

Sexual preference is doubtlessly based on many genetic and environmental factors. Rather than being a rigid threshold, or a yes-no phenomenon, it is some sort of continuum that involves emotional preferences, social constraints or learning, as well as actual copulation and reproduction. This is so obvious it should not have to be discussed and trees don't need to be killed to print stories about it (of course, some of those trees might be 'homosexual'!).

The most important point is going to be the one least heeded, because it threatens many vested, often fervid, beliefs about humans and evolution. That's the assumption that homosexuality is a problem that human evolutionary anthropology (as well as 'Darwinian medicine') has to explain. The real problem is Darwinian, or any other, fundamentalism masquerading as 'theory'. Darwinian fundamentalism assumes that whatever is here and organized must be so because of natural selection. No room for incidental traits, nor a wasted calorie.

From an evolutionary point of view it is no surprise of any kind that homosexual behavior would exist in other species, nor that it would have many manifestations. After all, even within humans it is like that. There is not a whiff of evidence that homosexual behavior is favored per se by natural selection, and clearly if the distribution of interactions and genital use shifted too much towards the homosexual, then it would be selected against (or some modified form of reproduction would replace the 'standard' forms--which, by the way, are themselves hugely variable).

The spectrum of sex and gender behavior, contrary to the oft-expressed shock that homosexuality could even exist (since all behaviors must be here because they're adaptive, and how could homosexuality be adaptive?), is no threat of any kind to properly understood evolutionary theory. If there were a single allele (a variant in a single gene) that inevitably conferred exclusive homosexuality upon its bearers, then our theory predicts it would disappear since if its bearers didn't engage in reproduction it could not be transmitted to the next generation. If complex genotypes confer an increased chance of gay behavior that reduced the bearers' number of offspring, it could proliferate only by chance, or by raising the reproductive success of closely related kin, though that's problematic because they, too, would carry the 'bad' gene. Theories of such 'kin selection' are elegantly mathematical but hard to prove.
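The single-allele logic above can be sketched numerically. This is a deliberately cartoonish, deterministic one-locus model--the starting frequency is an arbitrary illustrative choice--showing how an allele whose homozygous bearers leave no offspring (a recessive 'reproductive lethal') is steadily driven down, though ever more slowly as it becomes rare and hides in heterozygotes.

```python
# Deterministic decline of a recessive allele 'a' whose aa bearers
# leave no offspring. After selection removes the aa homozygotes each
# generation, the new frequency of 'a' follows the standard
# recessive-lethal recursion: p' = p / (1 + p).
def next_freq(p):
    return p / (1 + p)

p = 0.10  # arbitrary illustrative starting frequency
for gen in range(1, 11):
    p = next_freq(p)

# Closed form: p_t = p0 / (1 + t * p0), so after 10 generations
# p = 0.1 / (1 + 10 * 0.1) = 0.05
print(round(p, 3))  # prints 0.05
```

Note the flip side, which is the blog's larger point: once the allele is rare, almost all copies sit in unaffected heterozygotes, so selection against it becomes extremely weak--one reason that, for genuinely complex traits with many small contributing variants, there may be very little net selection against any specific allele.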

On the other hand, sex-related behavior is manifestly complex, a mix of environmental, genetic, and chance effects. Given that, most 'gay' genotypes are unique--every case different, with very little net selection against any specific contributing allele. Just like other perplexing complex traits, they really aren't 'genetic' in the meaningful sense of the word, and fluidity in behaviors is simply tolerated by the weak selective constraints typical of nature.

But this kind of behavior may be interesting in its own right, if it could be stripped of the Just-So story kind of science that engages in excessive theorizing and hand-wringing, and we recognized our culture-bound reasons for even thinking it was particularly interesting. That we cling to fundamentalistic, or tribal, 'theory' is a failure of anthropology to educate scientists about themselves, or a reflection of a deep need for simple organizing principles to live by. And besides blinkered scientists, an understanding of anthropology should suffice to show that some people in a given culture, such as ours, may claim to be revolted by gay behavior but that other perfectly respectable cultures do not share that view. One doesn't have to like, or dislike, variations in sex or gender behavior to realize that such a personal view, to which everyone may have a right, is not mandated by any ultimate truths, and there's certainly no legitimate justification for religious bigotry in this respect.

An important bottom line is that this is not a human-specific problem in any evolutionary sense. The genetic permissiveness in regard to sexual preference is probably so complex that it has virtually no long-term evolutionary consequences or implications. If this is a sociobiological problem, it's hard to see how: homosexual behavior occurs even in non-social species, and it was present in early animal ancestors, so we inherited the capability countless eons ago. It is not particularly important to species success, and not tightly monitored by natural selection.

Above all, there is no justification for human exceptionalism. To the extent that homosexual patterns are important to human society, it is not an evolutionary question but a cultural one. Its nature, variation, and impact might be interesting to study on their own, as phenomena that vary from culture to culture. If gay men help care for their brother's children it's an interesting aspect of social structure that is essentially irrelevant to the genetic aspects per se. The genetic mechanisms involved in behavioral differences might be of physiological interest, but they do not explain the phenomenon.

The same holds for many other aspects of social and behavioral traits to which human exceptionalism and lots of 'theory' have been applied. And it's why many people consider 'evolutionary psychology' to be a made-up kind of junk science, insofar as it insists on tight 'darwinian' explanations for things when the evidence, such as that discussed in the Times article, shows they simply aren't warranted. And it's time to stop geneticizing social behavior, even if, of course, genetic mechanisms are involved.

As anthropologists realized more than a century ago, social facts are best explained in terms of social facts (again, even if a minority of cases, or of behavioral extremes, have specific genetic explanations). Treating social behaviors as if they needed evolutionary explanations, in the sense of Darwinian selection, adds value judgments that are far more a reflection of the preconceived notions of modern society (that is, of the scientists who are peering into behavior with Gotcha! hubris) than of the behaviors themselves. Unfortunately, we know from clear and repeated history that such value judgments often provide the excuse for discrimination against people, with the upper classes deciding who's good and who's bad.