Wednesday, June 30, 2010


We'll be taking a bit of a hiatus from blogging for the next few weeks, as we're off to the European Evo-Devo Society meetings, among other places. We are hoping to check in from time to time, especially from the meetings -- and Holly may post a time or two from the field -- but we'll be back to as normal as it gets around here the last week of July.  We'll have to do some catching up with the science news when we return -- if we miss something big, do let us know!

Tuesday, June 29, 2010

Soccer-head games

The World Cup is an exciting event that captures global attention. But it also raises some interesting questions about how we know things and what it is that we know--or don't. While soccer players play head games with the ball, science plays head games with the game.

Interesting questions include the apparent phenomenon of momentum shifts, such as those that occur after a goal is scored. Some are perhaps due to the manager's notions of strategy, but momentum has to do with the collective behavior of 11 guys and how that relates to the behavior of the 11 in the other-colored shirts. But when you need to win and you know 1-0 is not a safe lead, why can't you keep up the momentum?

Another interesting question has to do with whether the tournament winner is the 'best' team. Since goals are hard to come by, referees make bad decisions, and fluke events are so prominent, it is easy to know who the winner is, but harder to say what that tells us about which team is 'best'.

One bad call (and this World Cup has been atrocious in the refereeing department) has collective effects, yet everybody 'knows' at some level that if you had a good game plan, it should still be a good game plan even if you're down a goal.

Hot and cold streaks have been debated for a long time: do they even exist? Is it only our imagination that the team deflated by bad luck or a mistake collectively changes behavior? Are streaks only identifiable after the fact, rather than being demonstrably (and statistically) real?

Sports represent a good example of complex 'emergent' phenomena. Sports psychologists and coaches obviously haven't figured out how to manage it. Do you reduce it to one or two players and talk to them to keep the higher-level organization as you want it? Do you reduce it to some compound--let's call it 'adrenalin' just as a representative word--that every player needs to sniff to regain momentum? And if that kind of reductionism is relevant, how is it that a goal by the other side reduces the adrenalin of everyone on your side?

In our book, The Mermaid's Tale, we characterize life as being about partially isolated modular units that are in communication with each other by complex signaling systems. Emergence occurs in the development of a hand or feather or leaf, as a result of a hierarchy of such inter-unit communication. A team is like an organ, and seems to respond, develop, change, and function as a result of the partial isolation, but partial communication, among its elements.

In this case the signals are not growth factors like BMP or FGF. They don't seem to be chemical (e.g., they're not pheromones). But what are they? They are visual, but also perceptual in complex ways. The brain receives the information about the changed situation, and then it signals to the cells in the body in ways that change their behavior.

Often the change is detrimental to the organism, which is not the usual effect of signaling. It's very interesting, and relevant to the general problem of understanding complexity.

But one thing is simple: a major sports competition needs competent referees. Too bad that FIFA can't find enough of them.

Monday, June 28, 2010

Right. What else could it be but intelligent design?

In the middle of an otherwise pretty sensible piece in Sunday's New York Times about efforts to ban the hunting of bluefin tuna, the author, Paul Greenberg, has this to say about why the bluefin deserves to be treated with such deference:
Not only is the bluefin’s dense, distinctly beefy musculature supremely appropriate for traversing the ocean’s breadth, but the animal also has attributes that make its evolutionary appearance seem almost deus ex machina, or rather machina ex deo — a machine from God. How else could a fish develop a sextantlike “pineal window” in the top of its head that scientists say enables it to navigate over thousands of miles? How else could a fish develop a propulsion system whereby a whip-thin crescent tail vibrates at fantastic speeds, shooting the bluefin forward at speeds that can reach 40 miles an hour? And how else would a fish appear within a mostly coldblooded phylum that can use its metabolic heat to raise its body temperature far above that of the surrounding water, allowing it to traverse the frigid seas of the subarctic?
How else indeed!

This is an amazing piece of writing for a reputable newspaper, with an often credible team of science journalists. Who let this go by?

We don't know everything about how today's life-forms got here, but no evidence suggests that any of the gaps or even possible missing pieces of our theory and understanding could lead to Intelligent Design. It is one thing for an author to use mythological phrases rhetorically, but in today's contentious society where religious inanity already has credence as part of culture-battles for power, it is inexcusable and irresponsible to let such rhetoric pass the editorial blue-pencil.

Immortality, II

We blog here whenever we have the opportunity about astounding results from epidemiological studies. These go beyond new details to a truly profound and, yes, paradigm-changing conclusion: immortality is the result of negative risk. If you avoid the risk factors being reported, you can apparently reduce your risk to below 0%! Thus, by avoiding more than 100% of heart disease risks, you must presumably either grow a new heart, or have more lives than the proverbial cat.

Commenter TexBrit last week reminded us of a study that was reported a few years ago on the Beeb -- before MT was born -- showing that housework reduces breast cancer risk by around 30%. The many things that would or could be said about this ground-breaking (publicly funded?) study would exhaust the space available here, so we'll just let the story stand on its own. Because it is so highly technical, we've included the main Figure from the story, in hopes that MT readers will be able to get the true depth and importance of this Big Finding.

There could be many misinterpretations here, so we must be circumspect. We are not suggesting that women quit their jobs in droves and return to full-time domestic duties in the hopes of growing a third breast by reducing their breast cancer rate below 0%, or anything like that. We're just being neutral, and reporting the news.

Friday, June 25, 2010

Ten years on

Here is a graph showing the results of a Nature poll asking scientists what the human genome project has meant to them, 10 years on. Sequencing technology wins. But then, technology is not only the darling of our society, it is the career-maker for many of those polled, and it applies across the board in the life sciences. And with technology comes new knowledge. Much of it is truly new: unexpected findings about DNA functions. But even if we know much, much more, our basic theoretical understanding of life and its evolution has not really changed as a result.

Of course, as the graph also shows, this doesn't mean that most scientists would say that technology's the only benefit. In fact, most do say they would have their genome sequenced if it were cheaper (though the results don't show what their interest is in having the sequence), but 16% wouldn't have their genome sequenced if you paid them. The poll, but not the graph, also shows that
[m]ore than one-third of respondents now predict that it will take 10–20 years for personalized medicine, based on genetic information, to become commonplace, and more than 25% even longer than that. Some 5% don't expect it will happen in their lifetime.

Why Nature should have taken this poll at all is revealing. Revealing of Nature's nature as a magazine, and of the social and also vested-interest aspects of science. Ten years is not a magic number, and there's no real bottom-line finding in genomics or genetics as a result of the last decade. Genome data have streamed out before and since 10 years ago, and the 'announcement' of the human genome sequence was itself a highly staged event, for publicity and other aspects of funding politics.

Genomic and genetic research have prospered in the last decades, at an accelerating pace, and across the spectrum of the life sciences. The availability of the data, and of the technology to generate that data, including experimental technologies for developmental biology, has been enormously productive and helpful. It changes everything we do. But it is not associated with any fixed point in time. That's melodrama, and the fact that scientists pay any attention to it reflects some of the material and career interests much more than anything to do with the science itself. True scientific advances aren't adjudicated by popularity polls.

Whether a scientist would get his/her own genome sequence is like that, too. Most scientists, like most lay people, have not got a very clear idea what can and can't be predicted from a genome sequence. And how could they? Everyone in and out of science is being bombarded by all sorts of lobbying, advocacy, sales pitches, fear-mongering about funding, skepticism based on scientific argument, skepticism based on politics, and so on.

So, those MT readers who are not in science would have a hard time making much sense of such an informal poll. But that's OK, because the poll makes no difference to what's going on in science.

Thursday, June 24, 2010

Faith-based medicine

There's a story today on the BBC website on faith-based medicine. OK, not in the usual sense, but with the headline "Transforming medicine in the genome age", followed by sentences and quotations like this:
In conversation with the evolutionary biologist Richard Dawkins on BBC Radio 4's "The Age of the Genome", Craig Venter is in no doubt about the place in history that sequencing the book of life deserves: "I think it's far more important than walking on the Moon; not much has happened since walking on the Moon."
'faith-based' seems a pretty apt description.

The story refers to (read: promotes) a BBC radio series hosted by Richard Dawkins and called, "The Age of the Genome." We hasten to say we haven't yet listened to it -- and, well, we probably won't. As we see it, there is room for debate as to whether Dr Dawkins is 'Britain's leading intellectual' as the media often have called him.  But if you do listen, we'd love to know what you think.

The story goes on:

In 2000, we knew of just a handful of genes which influence our risk of developing common diseases such as diabetes, heart disease and cancers.
Peter Donnelly, director of the Wellcome Trust Centre for Human Genetics, says: "Because of the experiments we are now able to do, that number has gone from ten or twenty to something like 700, across well over 100 diseases now."


One of the clinical expectations from the human genome project was that one day we would be going to our doctors for a personal genome reading.
These DNA check-ups would reveal with precision our individual risks for the whole gamut of common diseases. Our doctors would advise us on lifestyle and prescribe preventive medicines and measures accordingly, tailored to our genetic endowment.

This is all pretty curious, actually, since on this side of the Atlantic, at least, the news that all this hype might be not much more than that has reached the major media by now. We blogged about the New York Times front page story on this just last week. 

We don't like hammering away at this again and again, or just being persistent nay-sayers. The truth is that genome research is making some notable contributions, but it lobbies heavily and persistently, exaggerating its contribution to society and doing whatever it can to protect the enormous public and private investment in the genome's-eye view of life. These vested interests (e.g., the bank of DNA sequencers in the photo above) turn up the temperature on the lobbying and hyping, with media concupiscence, whenever it looks as if a bit of the tempering truth -- that the promises have not been nearly lived up to -- seems likely to leak out into the public domain. So as long as the carnival barkers are out there luring you in to see their side shows, we'll have to keep pointing out that the truth is much less rosy than they're telling you.

An especially serious side of this is that, in economic hard times, there are many areas of investment far more urgent, and more likely to succeed, than genome research. Without simply falling into new fads and vested interests, climate, agriculture, energy dependence, population growth and consumption, infectious disease, and other similar problems are far more important, and more likely to be aided by research. And of course food and ordinary low-tech interventions and prevention, not research, are what is really needed, far more than expensive, exotic approaches to disease.

Wednesday, June 23, 2010

It takes a great violinist to make a great violin

What makes a great violin? This question has intrigued players, modern violin makers, and acousticians for a long time, but it's still not clear, according to a piece in Science (subscription required for the full story).
Ilya Kaler, a renowned soloist, gazes admiringly at the 269-year-old violin. He has just played four other great old Italian instruments in an invitation-only recital in the cramped quarters of Bein and Fushi Inc., violin dealers whose shop looks out over Chicago's famous Michigan Avenue. Now Kaler holds the star of the 7 April event, a fiddle named the Vieuxtemps after a previous owner and crafted by Bartolomeo Giuseppe Antonio Guarneri, also known as Guarneri del Gesù, who, along with Antonio Stradivari, is widely considered the best violinmaker ever to have lived.
The acoustics of the best old Italian violins, the Strads, the Guarneris, the Guadagninis, have been studied extensively, and tantalizing hints (e.g., here and here and here) have been found as to why they are so superior to most other instruments, but it's still not possible to say what makes a Strad a Strad.
...can subtle density variations or spectral features explain the supposedly superior qualities of Strads and Guarneris or distinguish between them? Cambridge's Woodhouse [James Woodhouse, an acoustics engineer at the University of Cambridge in the United Kingdom who has studied the violin for 35 years] has doubts. Acoustically speaking, researchers can now say why a violin sounds like a violin and not a guitar, he says, but they struggle to make finer distinctions. "If you chose any particular feature, you can probably find two million-dollar violins that differ from each other in that one feature as much as the million-dollar ones differ from a good $10,000 modern one," Woodhouse says.

What does this have to do with genetics or evolution or development? Complexity. The acousticians who are studying great violins have discovered complexity.

...the fact that so far tests have identified no obvious difference between great and good violins may actually be telling researchers something. Most studies have been made on violins in pristine isolation, typically suspended by rubber bands from a mount. Of course, when played, a fiddle lies clamped beneath a violinist's jaw, its neck cradled in the musician's hand, its strings worked by the bow. The instrument's defining qualities may show through only in that interaction.
"I can tell you that the violinist is the big deal," Bissinger [a physicist at East Carolina  University in Greenville, North Carolina, who has studied the violin since 1969] says. "A great violinist can make even a bad violin sound good." Zygmuntowicz [a violin maker from New York City] agrees but warns that researchers may struggle to get reliable data on the working violin. "The situations that a violin operates in are really contaminated circumstances for testing," he says. "Science has shied away from that interaction because it doesn't make good papers yet."
To hear Kaler tell it, the violin-violinist interaction is subtle. Asked what distinguishes the Vieuxtemps, he cites its resonance and ease of response. Then he adds, "If a violin responds too easily, it limits the possibility of a performer to produce many colors or to put his or her own imprint on the instrument because the instrument anticipates your desires too much." So a violin must resist just enough to make the violinist work for what he wants, he says.


Ilya Kaler.

Tuesday, June 22, 2010

Life goes on up at the farm

Polymeadows had a good spring.  Most of the new babies have been weaned, though a straggler set of twins was born last week.  Thanks to Hank, they've been talking with a new distributor for their dairy products--who just happens to be national.

A few weeks ago, Jen and Melvin traded their friends and neighbors, Brad Kessler and Dona McAdams, some hay for three Nubian bucklings from Brad and Dona's small herd.  The boys have been making themselves right at home.  Beautiful goats--next year's babies promise to be stunning.  (Brad writes about his own romance with goats, including what he's learned from Jen and Melvin, here.)

And the biggest news of all--Melvin got a cell phone!  He's promising to send a video of the view he has of his hay fields from the seat of his tractor.  But he has to figure out how to take it first.

Monday, June 21, 2010

Starry eyed at Starbucks! Bulletin from the Doublethink Department

Here is more of the java jazz we blogged about a while back. It's the ultimate sin in western life: something fun that may also be good for you. Tea and coffee are good for your heart! And we mean lots of tea and coffee. Of course, there have been countless studies of the caffeine hits that one can pick and choose from to find one's preferred results, and habits have changed a lot over that time, so confounding variables may also have changed but not have been measured. The rise of the coffee house culture, change in other components of diets (having berries with your tea, say), drop in smoking, all may be correlated.

Here comes the next ad jingle: "4+ cups of tea or 2-4 cups of coffee a day keep the cardiologist away."

Now let's take this nonsensical doublethink logic a step or two forward. First, if 4 cups reduces heart attack risk by 1/3, then the simple response is to drink 12 cups and be totally immune! That means you can hog up on Twinkies and Big Macs with double-sized fries, and if you eat all that along with a coupla cuppas, you can stride healthfully through life. Or sit at Starbucks and slug away, also eating hyper-fat, hyper-sugared muffins with abandon.

But wait! What if you drink 16 or more cups a day?? That means you'll have fewer than zero heart attacks. It's not the same as being immortal, but it may mean you'll grow a second heart, so that if you stop honoring TeaTime you could have a heart attack in one of those hearts but live on your spare. How about that! Isn't epidemiology great?
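For the record, the joke trades on a real statistical point: relative risk reductions don't add up, they at best multiply, so no quantity of tea drives your risk below zero. Here's a toy sketch in Python, with numbers entirely of our own invention: a 10% baseline heart-attack risk, and a 'dose' defined as another 4-cup increment of the reported 1/3 reduction.

```python
# Toy numbers only: assume a 10% baseline heart-attack risk and a
# reported 1/3 risk reduction per "dose" (one dose = 4 cups of tea).
baseline_risk = 0.10

def naive_additive(doses, reduction=1/3):
    # The jingle's logic: reductions simply add, so enough doses
    # push risk past zero into "immortality" territory.
    return baseline_risk * (1 - reduction * doses)

def multiplicative(doses, reduction=1/3):
    # The more plausible reading: each dose scales the *remaining*
    # risk, which can approach zero but never cross it.
    return baseline_risk * (1 - reduction) ** doses

for doses in (1, 3, 4):  # i.e., 4, 12, and 16 cups a day
    print(doses, naive_additive(doses), multiplicative(doses))
```

Under the additive reading, 16 cups a day yields a negative risk -- the spare-heart scenario. Under the multiplicative one you merely creep toward zero, no matter how jittery you get.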

But then comes the party-spoiler. The same Big Study found that if you take any milk with your coffee or tea, this negates the entire protective effect. Now all this titration makes a chem lab seem like amateur city, and how it works only a chemist would know. Of course, another response is to ask how such nonsense can make it into the journals, much less the news. Well, it's easy to see how science reporters might pick this stuff up, given the nature of their profession. But epidemiologists should know better.

Anyway, all such musings aside, this little celebratory note is about something you enjoy being good for you--a rare enough finding--and who knows, maybe it'll lead on to bigger and better things? If caffeine hits the spot, maybe nicotine is next? Or a few stiff drinks a day to keep the doctor away?  Remember when eggs were poison?  Too full of cholesterol to be part of any thinking person's diet?  What happened to that?

OK, there's a serious message here. Thirty years ago the Big Story was that coffee, even a small amount, caused pancreatic cancer.  So how do we know when we should believe a study that comes along and is reported as the Big Story? Is there any reason whatever to believe this one, or should we wait until tomorrow's contradiction?

This is a very serious issue of widespread import for science and the society that supports it and depends on its results. There is no easy answer, and accountability is difficult to impose for many reasons. But the problem is real, especially in our era in which science is not just what a few idle rich do, but is an institutionalized part of our national 'system.'

The nature of the science involved makes these results problematic. Why don't we tell epidemiologists that if they want funds to do something, it should actually answer a question definitively? If they protest that this is hard to do, then let's say that if they want funds they have to suggest a better way. Otherwise, the same money could pay for exercise centers (with coffee bars) to reduce disease much more effectively than the results of these studies generally do.

Finally, why not ignore most of these Big Stories, and just go and eat what you want--just do it in moderation? You'll do better by far than you can by attempting to follow every bit of New Advice that comes along.

Friday, June 18, 2010

Doing it like bunnies!

The Guardian reports, with some titillation, about a study published recently in the International Journal of Impotence Research on the effects exposure to mobile phones has on the sexual behavior of male rabbits (the surprisingly tame photo here is from the same Guardian story).

A group of researchers studied the effects of electromagnetic exposure to the genitals of 6 rabbits for 8 hours a day for 12 weeks, using as controls 6 rabbits exposed to phones that were turned off, and 6 rabbits with no exposure to phones. They tested them by introducing "teaser" females (it's not clear if they were trained or not) into the cages of the males, and measuring receptive behavior, ejaculation frequency, and hormone levels. Did the bunnies get a buzz on their phones?

The complete paper is available only to those with subscriptions, but from the abstract we learn the following:
Mounts without ejaculation were the main mounts in the phone group and its duration and frequency increased significantly compared with the controls, whereas the reverse was observed in its mounts with ejaculation. Ejaculation frequency dropped significantly, biting/grasping against teasers increased notably and mounting latency in accumulated means from the first to the fourth teasers were noted in the phone group. The hormonal assays did not show any significant differences between the study groups. Therefore, the pulsed radiofrequency emitted by a conventional MP, which was kept on a standby position, could affect the sexual behavior in the rabbit.
Obviously, this study is meant to address the question of whether long-term exposure to electromagnetic waves is harmful to mobile phone users, not just what it does to a rabbit's sex life (especially given how infrequently rabbits carry cell phones for 8 hours a day, on or off). A number of studies have shown little or no effect on risk of brain cancer (maybe), which we blogged about here, but because phones are hung from a belt, or kept in pockets, probably for more time than they are held up to the ear, the question of effects elsewhere than the brain is an obvious one. Of course, there's probably less distance between the phone and a rabbit's genitals than a human's, but even so, the results of this study are suggestive in more ways than one.

A common slur against mankind is that men haven't the metabolic energy to serve their two main organs, their brain and their nethers. So this study may show that either way, a phone call is a bad thing.

Thursday, June 17, 2010

Huff and puff away!

For years when giving talks about the problems in identifying genetic causation, Ken has often been asked in Q&A time, "Well, if things like GWAS aren't really working, then tell us what to do instead!" It's spoken as a dare, but that's entirely off the mark. Just because someone explains that a given approach is not very effective, and why, does not mean they are obliged to suggest a new miracle theory or cure. In practice, science is part of society and won't--or can't--make major gear changes without a new path to funds, jobs, and so on.

Another statement made in frustration by audience members is, "I want a pill for lung cancer, so I can continue to smoke!" That's not only a dream, but it points to a subtle reason why even genetics, if perfectly successful, will not solve the disease problem as promised.

But smokers, take heart! And you won't need to get your genes diagnosed by the carnival barkers at direct-to-consumer companies. In fact, you can smoke away with much less risk, almost enough to be worth it (if you smoke a tasty brand). "Nuts!" you say, knowing in your heart that smoking's a killer no matter what. But thanks to another public service study, we can say "That's right!".

Because the study says that consumption of B vitamins, and nutrients like, yes, nuts can cut your risk of lung cancer in half, even for smokers.

So why have we wasted so much research money, that taxpayers could have used to buy their smokes and peanuts? Because it's good for the science business? Because nobody had any reason to think that vit B could have anything to do with lung cancer? We don't know the answer, but in a technophilic society we think technology first and simple answers second. And we have to echo Tuesday's post, too: this study's result is probably at least as likely to be due to unobserved confounders as to the vitamins themselves.

Sadly, we have to close on a downer note. The majority, perhaps the vast majority, of smoking-related deaths are due to diseases other than lung cancer, not to mention quality-of-life effects like blindness and years of emphysema. So, put the pack away for a rainy day. But keep the nuts, because they could be good for you for other reasons--unless confounding erases the effect!

Wednesday, June 16, 2010

Confounding -- theme of the week

See your dentist, brush, and floss--and live longer! At least, that's what a recent study has claimed (the image is from the story). Good boys and girls who brush twice a day have a markedly lower risk of heart disease. This is the result of an epidemiological study, not a study of a particular risk factor (such as a particular pathogen that isn't brushed away in lazy, halitotic people).

A link between gum disease and risk of heart disease has been reported previously, the idea being that the bacteria that cause periodontal disease can enter the blood stream and contribute to arterial blockages, or that the inflammation that characterizes gum disease can increase risk of arterial plaque build-up.  But, this is apparently the first study to show the association of risk of heart disease with brushing. 

But, perhaps the association is not so straightforward.  As with any epidemiological correlation, the issue of confounding is certainly important, and perhaps almost impossible to rule out. Brushing and dental visits are correlated with many other factors including socioeconomic status and thus diet, exercise, and general health, factors that have been associated with risk of heart disease apart from gum disease. Those aspects not measured in a survey could lurk in the background as the real causal factors.
Study leader Professor Richard Watt, from University College London, said future studies will be needed to confirm whether the link between oral health behaviour and cardiovascular disease "is in fact causal or merely a risk marker".

Tuesday, June 15, 2010

Repeat after us: Correlation is not causation, correlation is not causation, correlation is not....

Confounding is probably the single most important explanation for irreproducible and even nonsensical results in epidemiology--and probably in genetics and evolutionary reconstructions as well. In the mid-1900s, for example, before everyone had telephones, researchers found that having a phone was a risk factor for breast cancer. How could that be? As it turned out, having a phone was associated with middle or upper class status, and money was associated with a diet that included increasingly more fat--or with increased age at first birth, or fewer children, and so on--risk factors for breast cancer subsequently identified by many studies that noted increased risk as socioeconomic status rose. Confounding is notoriously difficult to control, largely because many associations can't be anticipated in advance of the design of a study. (Whether the same-sounding argument applies to the idea that use of cell phones 'causes' brain cancer is not known.)

The BBC is reporting that meat eating causes early menarche. Or rather, eating a lot of meat. This according to a paper in Public Health Nutrition (though this is only the latest of the papers reporting this correlation). The age at menarche--first menstrual period--dropped throughout the 20th century and many people have wondered why. It had been thought that this was due to increased nutrition in general, but arguing against this idea is the observation that, as obesity rates increased, age at menarche didn't further decrease. That is, it's apparently not a simple matter of body size or nutritional intake.

The question of what has caused early periods prompted a group of researchers in Britain to look prospectively at a cohort of 3000 girls, including nutritional intake at age 3, 7 and 10. They identified a group of girls at birth in 1991 or 1992, and followed them up for about 13 years both by questionnaire and clinically. Early menarche was considered 12 years 8 months or younger, experienced by about half of the sample.

Girls with 'high' meat intake were eating 8 or more portions of meat per week at age 3 and 12 or more portions at age 7. Since early menarche has been associated with increased risk of breast cancer (the odds ratio is 1.5 - 2 times higher for women who started having periods at or before age 12 vs. women who started at 15 or older), the authors of the paper note that early menarche should be of concern. (But remember that 1.5 - 2 times a fairly low risk is not that high, and that, anyway, the comparison group, women who reached menarche at 15 or older, has been a very small fraction of women for decades, at least in the developed world, and they perhaps were late for reasons that also protect against breast cancer, or at least may have different hormonal profiles. So whether this increased odds ratio is meaningful is up to you to decide.)

In this large group of contemporary girls we have found a number of associations between dietary intakes throughout childhood and the occurrence of menarche. We have confirmed previous findings of higher energy intakes among girls reaching menarche earlier, reflecting their larger body size. We have also found evidence that intakes of meat and total and animal protein, and also possibly PUFA [polyunsaturated fatty acids] in early to mid-childhood may increase the chances of menarche by 12 years 8 months. However, we found no evidence that the chances of reaching menarche increased with higher total fat intakes, or reduced with higher intakes of fruit, vegetables or NSP. Unexpectedly, higher vegetable intakes at 3 years were associated with an increased chance of reaching menarche, although this may have reflected the positive association between meat and vegetable intakes at 3 years.

But could the meat/menarche association be due to confounders, variables that are associated with meat consumption and are the true explanation for the correlation? Things like ethnicity, socioeconomic status, mother's behavior during pregnancy, and so on, which would influence diet? The researchers controlled for some of these, but ethnicity was classified crudely as white/non-white, for example, and in the UK, as elsewhere, non-white can cover a lot of different diets, so that something else that rides along with meat could be the explanation instead. The kind of cooking oil used, for example--we have absolutely no evidence that this is the case, but the point is that it's possible. Not to mention that high meat consumption is often associated with increased consumption of processed foods in general, which was not controlled for. That is, high meat consumption could indicate a different diet, with some unidentified causative variable, compared with the diet of kids who eat less meat.
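The confounding worry can be made concrete with a toy simulation (entirely our construction, not the study's data): if a hidden dietary pattern raises both meat intake and the chance of early menarche, the two will be associated even when meat itself does nothing.

```python
import random

random.seed(1)

# Toy model: a hidden dietary "pattern" drives both meat intake and the
# outcome; meat has NO direct causal effect on the outcome here.
n = 100_000
early_given_pattern = {True: 0.6, False: 0.4}  # outcome depends only on the pattern
meat_given_pattern = {True: 0.8, False: 0.2}   # meat intake tracks the pattern

rows = []
for _ in range(n):
    pattern = random.random() < 0.5
    high_meat = random.random() < meat_given_pattern[pattern]
    early = random.random() < early_given_pattern[pattern]
    rows.append((high_meat, early))

def rate(rows, meat_value):
    """Observed rate of early menarche among girls with the given meat intake."""
    subset = [early for high_meat, early in rows if high_meat == meat_value]
    return sum(subset) / len(subset)

print(f"early menarche | high meat: {rate(rows, True):.2f}")   # near 0.56
print(f"early menarche | low meat:  {rate(rows, False):.2f}")  # near 0.44
```

The observed rates differ by about 12 percentage points even though, by construction, meat is causally inert: the association is carried entirely by the hidden pattern.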

Another possible confounding variable that has long been discussed in the literature--though as far as we can tell, the association is still only suggestive--is exogenous hormone levels in meat (a brief time out for a bow to the web, where you can find just about anything: here, for example, is a link to the "Museum of Menstruation and Women's Health", which we stumbled upon in searching for papers on meat hormones and age at menarche). Animals, of course, are given a panoply of hormones to increase the speed at which they grow. Whether these hormones are found in excess in meat, or are active in our bodies when we eat that meat, isn't clear, at least to us. But it's certainly a possible alternative explanation for the association between meat consumption and age at menarche.

So, should we stop feeding meat to little girls? Early age at menarche, whatever its cause, may be a risk factor for breast cancer (any increased risk is not to be taken lightly, though see above), but it has also been found to protect against osteoporosis--both presumed to be due to the increased length of exposure to estrogen. And, of course age at menarche is by no means the only factor associated with risk of breast cancer or osteoporosis. And the protein and other ingredients in meat are likely good for other aspects of growth and health.

Of course epidemiologists (and journal editors) have to eat, and to do that they have to publish lots of studies!  Does this study tell us anything of significance? You decide.

Monday, June 14, 2010

The Fired Coach Syndrome

The Fired Coach Gets Hired Again
Well, sports fans, you probably are all familiar with the Fired Coach Syndrome. That's the regular pattern whereby, when a coach is fired because his team doesn't win--even if it's because they haven't got good enough players--he is immediately hired by some other team and treated as their savior. We need not name names, because as one coach said, all coaches have either been fired, or will be fired (with the single exception of our own Joe Paterno, shown at left!).

What has this got to do with genetics, evolution, or any other MT theme? The NY Times has a story--a front page story--on the failure of genome-wide association studies (GWAS) to lead to many disease 'cures'. Big news!

Or is it?

This story is by an op-ed writer who has built his career over many years by serving as a notorious mouthpiece for the genomics 'industry'. Now, without a whiff of contrition, he's telling a new tale, and again making a Big Story of it. But, as before, he does it uncritically, too.

Genomewide Association Studies
It's true that countless millions have been spent on GWAS with few of the long-promised 'cures' to show for it. A few of us have been long-term critics of this approach, for the decade in question, and for legitimate reasons. Complex traits are here today because their genetic basis, whatever it is, passed the sieve of evolution. For human geneticists, the main interest has of course been the genetic basis of disease, for obvious societal and funding reasons.

But for many decades it has been clear that while genes contribute substantially to the variation in these traits (normal or otherwise), they are basically not due mainly to single genetic effects. The evolutionary assumptions that are implicit in complex disease mapping studies never, except for wishful thinking, predicted other than what we have seen. And another fact that the NYT story and a recent journal article or two report as curious--that we can predict risk better with a patient's family history than with his/her genotype data--is entirely expected, for reasons that everyone properly trained in genetics should have known (for approximately the last 100 years, based on the genetics of 'polygenic' traits).
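A toy bit of variance accounting shows why family history can out-predict a genotype scan for a polygenic trait. Every number below is an illustrative assumption for the sake of the arithmetic, not an estimate from any real study:

```python
# Toy variance accounting for a hypothetical polygenic trait.
n_loci = 1000                         # contributing loci (assumed)
heritability = 0.5                    # genetic share of trait variance (assumed)
per_locus = heritability / n_loci     # equal, tiny effects, for simplicity
top_hits = 20                         # loci a GWAS might detect replicably (assumed)

print(f"each locus explains {per_locus:.3%} of trait variance")
print(f"the top {top_hits} hits together explain {top_hits * per_locus:.1%}")
# Family history, by contrast, taps the entire 50% genetic component at once,
# which is why it can beat a genotype panel built from the few reliable hits.
```

The arithmetic is trivial, but that is the point: under polygenic assumptions, no short list of mapped loci can recover what a relative's phenotype already summarizes.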

The Evolution of Complexity
Evolution generates redundancies, removes nasty variants, and generates a distribution of mutation effects and allele (genetic variant) frequencies that will not as a rule lead to common, chronic, late-onset, or complex traits being caused by just a gene or two (and here we're not even considering the wild card, the environmental effects that are usually substantially more important than the genetic ones). But that doesn't mean there would be no identifiable contributing genes: for most traits some alleles, in some genes, in some populations, will have marked effects. Far from a complete 'bust', that, too, is what we find, and it was predictable.

From countless GWAS and other approaches to enumerating the genetic causes of complex traits, we know of a large number of contributing genes, say around 1000. But surprisingly few of these contribute replicably or with high probability and nontrivial frequency. What we see today is what everyone should have known to expect, and a few of us were saying this in papers at the beginning of the decade in question.

Selling the Brooklyn Bridge--Again
So if there's no news in this news, why does the former hawker for the vested genomic interests now have the creds to write as if insightfully about the current state of affairs? It's by consulting the very people who, mainly knowingly, lobbied and hyped everyone into the GWAS era. Why should he be getting assessments of the situation from those whose mouthpiece he was for an approach that didn't work, and for predicted reasons? The same vested interests are now advocating their new grand theories, such as that finally we'll solve the problem in terms of (take your pick) copy number variation, epigenetics, whole genome sequences, large biobanks, many very rare alleles with strong effects, combinations of common alleles with modest effects, etc.

Why should those who sold us the old Brooklyn Bridge be listened to when they advocate these new ideas which, not incidentally, require larger, longer, more costly studies (for their own labs?). They are not giving us reasons, just hand-waving invocation of current fads or Plan B's. Naturally, they like the idea of locking in funding for the rest of their careers. But would people have any interest at all in buying this new Brooklyn Bridge?

The answer is the Fired Coach Syndrome. Somehow, we tend to believe that these people with known rap sheets are still the experts. Their past statements and advocacy are not taken to be the lobbying that they actually were. Now, they're being listened to as the coaches of the new Team GenomeSequence.

But there are deeper and more serious issues. Unfortunately, they are reflected in the latest Big Story now being marketed in this Times article and elsewhere, which continues to suffer from uncritical hyperbole and oversimplification.

We Already Knew All This
The fact is that after a few GWAS (and other kinds of studies of variation underlying single-gene traits), many years ago, we confirmed the theory we had about evolution and complex genetics (which goes back nearly 100 years). We should have declared our knowledge firm, and thought of better ways to understand complex traits. But too many vested interests, without better ideas, demanded the big GWAS funding. In that sense GWAS and its fellow-travelers have generally been a very expensive bust.

But here we have to again criticize the attempt to make this a new Big Story. Because we have learned a lot, even if at great cost, about genetic control. Huge and useful databases have been created. Technology has been developed. DNA sequencing is now about as cheap as your HDTV set. Many corporations have flourished--money for their stockholders, and jobs for employees. Much has been learned about genomes of many species, and their variation. Despite private greed, much or most of these data are freely available to anyone. Genetics has flourished perhaps like no science ever before.

Maybe the funds should have been spent in other ways, but they have led to much new knowledge. This doesn't mean that 1000 new promising targets for Pharma have been revealed, as Francis Collins and Eric Lander enthusiastically claim: instead, most of those genes are trivial contributors. Still, these investigators have sponsored or done technically excellent work, contributed to the huge public databases, and added new understanding of genomic functions of many kinds. They've done this while, largely unrecognized even by themselves, showing that classical theory was right--and that is a substantial baby in the bathwater.

There are important questions about complex causation, which science can address, especially if we could decide to confront complexity on its own terms rather than promising to reduce it to a manageable number of enumerable causes as has been done to date and is essentially still being done.

However, the reportage errors continue: despite the waste and the catering to vested political interests, the Times article's main claim to a Big Story--the fact that we haven't developed many 'cures' for complex disease--is irrelevant to whether GWAS was a scientific success.

Disease is complex, organisms evolved to resist tinkering from the outside--especially tinkering with our genomes--and attacking diseases genetically is a difficult engineering feat. Whether, when, or how that will be done successfully is not yet clear, but nobody has any right to expect it to be rapid. It's a bum rap against GWAS to say they failed because the gene engineers haven't yet figured out how to use the results to make cures. The real rap, and what should discredit much of the hype machine, is that they have promised 'cures' (in fact, and to be fair, some of the main advocates--though not Francis Collins--have at least been slightly circumspect, warning this is all for our grandchildren or beyond--even if such caveats were declared in passing and did not temper their lobbying for the GWAS and related resources).

Bigger, Longer, Pricier More-of-the-same
However, the Fired Coach Syndrome would suggest that we should be very skeptical about the same writers and scientists who are now deftly expounding their latest grand theories, paradigm shifts, and new strategies. Because they're largely rationales for more-of-the-same, except at larger scale, longer time frames, and greater cost. What we should be doing is understanding where the baby is, and not just fostering existing self-interest by saying there was no baby in the bath so let's run a lot more bathwater and maybe we'll find triplets in there someplace.

One thing that some are advocating is to search genomewide sequence data from patients, to find clearly harmful mutations in functional genome regions that are known to be related to the physiology in question--such as rare 'knockout' mutations in those genes which might be inferred to be causal and then tested experimentally. Whether this justifies the continued use of mega-scale genomic approaches is debatable. Though one can predict there will not be a huge bonanza of findings with much therapeutic value, there certainly will be some, and at least to some extent this genome-screening approach is very different from the statistical-evolutionary basis underlying GWAS. The reason is too complex for this post, but the gist is that such studies will finally rest on biology rather than baloney, even if it's already being over-sold in the usual lobbying way.

Today's PT Barnums
As to Fired Coaches, it must be said that high level expertise and capability is not to be found everywhere, and our genetic PT Barnums got there largely because of both management and scientific skill. Once the research community has its hands on funds, and labs have been built, it's politically difficult if not impossible to shift from this entrenchment to new investigators or unrelated approaches. Science is part of society and works by evolution, which includes careerism. Scientific revolutions can't be ordered up by the media or by funding institutions with their turf-protecting bureaucrats: they just happen when and where they happen. So in that sense we must in practice rely on the same cast of characters, though whether we should try to move gradually away to fresh approaches is a legitimate question.

In science, we should at least be aware of what's afoot because otherwise resources continue to be diverted for less rather than more effective use. But can we learn to temper our claims--or at least penalize those who don't? Or can the system demand accountability for results if we promise them? Or can we establish full-stop criteria for large projects once it's clear that they aren't really bearing fruit? Probably not. Because the real name of the game is getting funding. To a substantial extent, scientific facts themselves are secondary. That's the anthropological truth. Or perhaps, as Marshall McLuhan said decades ago, in modern science the medium really is the message. And now we are clearly going to hire the same coaches to guide us into the future!

Maybe we have no choice. But we should be aware that's what we're doing. And if you believe their new pronouncements or their media megaphones, then we have a bridge that we'd like to sell you. And, by the way, Joe Paterno has stayed in the national rankings and just signed a new contract at age 83.

Friday, June 11, 2010

Researcher calls B.S. on Ardi's hominin-ness (And, a modest proposal: Blind paleoanthropology)

The question on your mind today, no doubt, is the same one troubling humans everywhere: Is Ardipithecus ramidus a bona fide hominin?

That is, is it a member of the evolutionary lineage that is exclusively human?

Remember, the discovery team (White et al.) said that it is. They based their assessment on anatomical traits, which is standard practice. And their conclusion has many significant implications, some of which are outlined here.

However, in last week’s issue of Science, not only was Ardi’s habitat questioned, but in a separate Technical Comment, Esteban Sarmiento argued that Ar. ramidus is NOT a hominin.

According to Sarmiento, White et al.’s “analysis of shared-derived characters provides insufficient evidence of an ancestor-descendant relationship and exclusivity to the [hominin] lineage. Molecular and anatomical studies rather suggest that Ar. ramidus predates the human/African ape divergence.”

As with the habitat paper, White et al. published a reply to Sarmiento’s paper alongside it.

Here is a rough outline of the issues that are being debated.

What was the LCA like? Did you infer it correctly? Can you infer it at all?

In the initial paper, White et al. made a list of “inferred” traits of the LCA (hypothetical last common ancestor of chimpanzees and humans) to better assess the evolutionary trends seen in the hominin fossil record and determine where Ar. ramidus fits.

Sarmiento didn’t like that because they didn’t describe how they determined these character states. Furthermore, he says that the LCA traits are based on an assumption that the chimpanzee condition is the primitive one and the human condition is the derived one. Their LCA is too chimpy. This is an interesting accusation considering that much of the media and popular hoopla after the announcement of Ardi was about how chimpanzees could be more derived than we think and that chimps may not be good living models for the LCA!

In fact, this was so much the takeaway message in that flurry of Ardi papers that a separate group of scientists, led by Andrew Whiten, felt compelled to write a piece defending the significance of chimpanzees for understanding human evolution and the LCA!

What links ardipiths to australopiths? Are the traits exclusive to homs?

According to White et al., Ar. ramidus shares features with later australopiths (A. anamensis and A. afarensis). But Sarmiento says that White et al., “fail to show that the common Ardipithecus/Australopithecus characters provide evidence of an ancestor-descendant relationship and are exclusive to the hominid lineage and shared-derived with humans.”

Since over half the traits are linked to the canine-premolar complex, Sarmiento says this makes for a weak argument given its historically deceptive nature (he cites his own work here) and given how much variation there is in the hominoid (ape) fossil record in the complex. He goes on to hypothesize that this trait, given how it varies, could have experienced multiple shifts along the continuum of nearly absent to strong presence.

But here’s the rub: Sarmiento argues that a human-like canine-premolar complex is NOT diagnostic of early hominins and does not indicate a strong relationship between ardipiths/australopiths any more than it suggests a strong ancestor-descendant relationship between oreopiths/ardipiths or sivapiths/ardipiths, since those fossil apes also show some so-called derived features in the canine-premolar complex.

White et al. point out that this trait in ardipiths has been firmly established and accepted in the 15 years since the first ardipith dental remains were published. Hmm. So, what’s left then for diagnosing early homs?

Beyond the canine-premolar complex, says Sarmiento, the evidence is also weak for Ar. ramidus’s link to australopiths, and hence, its hominin status. He says that “none of the eight postcranial characters [...] are useful because they are not exclusive to humans or even shared-derived with humans.” Moreover, in his view the other four craniodental characters are just as useless for the same reasons. Sarmiento goes on to argue that the traits that White et al. use are present to some degree or another in earlier fossil apes (i.e., Oreopithecus and Dryopithecus), “and have appeared independently in other primate lineages.”

So that pretty much, according to Sarmiento, renders all the traits that were highlighted by White et al. useless.

You can’t help but wonder what, if anything, in the anatomy of a fossil is helpful for diagnosing its evolutionary position! And this sentiment seems to be behind the rebuttal-challenge of White et al.: Where are the new ideas and analyses, beyond the dismisses and disses?

Was Ardi really bipedal?

The interpretation of Ar. ramidus bipedality is a problem for Sarmiento. He briefly mentions that he’s not convinced by the foot morphology and notes that the femur and pelvis are so fragmentary that they are “open to interpretation” which seems like a nice way to put “they could show anything a person wants them to show so they should be ignored.” Sarmiento says that the bipedal traits cited in Ardi, “also serve the mechanical requisites of quadrupedality, and in the case of Ar. ramidus foot-segment proportions, find their closest functional analog to those of gorillas, a terrestrial or semiterrestrial quadruped and not a facultative or habitual biped.”

Are there features that indicate that Ardi was not a hominin?

Yes, says Sarmiento. He lists some primitive African ape traits in the wrist and cranium which were claimed by White’s team to be exclusively hominin traits. White et al. don't address the wrist anatomy, but in their discussion of the cranial anatomy they argue that Sarmiento's interpretation of the character states is not parsimonious.

Just look at the molecular clocks!

Sarmiento writes, “Over the past 40 years, a multitude of independent biomolecular studies based on different methods, some analyzing millions of DNA base-pair sequences, have arrived at a minimum human/ African ape divergence date of ~3 to 5 million years before the present.”

If you're furrowing your brow, you’re not alone. Go to an online database of molecular divergence-time estimates. Type in humans and chimpanzees. Do you arrive at a consensus of ~3 to 5 million years? Ken didn’t. Neither did I.

Sarmiento writes, “With a 4.4-million-year geologic age, Ar. ramidus probably predates the human and African ape divergence.”

Uh, then Orrorin and Sahelanthropus (two strong candidates for early homs) are also too old to be hominins, and we would have to ignore the majority of molecular clock studies, which estimate the split to be much earlier, at 5.6 to 7 Ma. White et al. point all this out in their rebuttal.
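The back-of-envelope logic of a molecular clock is simple enough to show in a few lines. Both numbers below are illustrative assumptions, not values taken from either paper:

```python
# Back-of-envelope molecular clock; the figures are assumptions for
# illustration, not values from Sarmiento or White et al.
divergence = 0.012        # ~1.2% neutral human-chimp sequence divergence (assumed)
rate_per_year = 1.0e-9    # substitutions per site per year (assumed)

# Each lineage accumulates changes independently after the split,
# hence the factor of 2 in the denominator.
split_time = divergence / (2 * rate_per_year)
print(f"implied split: {split_time / 1e6:.1f} million years ago")
# With these inputs, the implied split is 6.0 million years ago.
```

Plausible inputs land comfortably in the 5.6 to 7 Ma range most clock studies report, which is why the ~3 to 5 Ma figure raises eyebrows.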

Conclusions on the ‘No’ side.

Sarmiento ends with food for thought, “Even if Ar. ramidus was an exclusive member of the human, chimpanzee, or gorilla lineages, given its proximity in time to this divergence date, it would be difficult to unambiguously recognize it as such.”

- Clearly it’s difficult to figure out, otherwise there wouldn’t be cause for debate.

“It therefore seems premature to use Ar. ramidus to directly infer LCA ecology and locomotor anatomy or the origin of supposed human social systems, selection strategies, and sexual behaviors.”

- Good cautionary advice, but it won’t stop people from trying. The evidence in the fossil record is now there - why not use it?

“Human evolutionary studies are not a new science where every new find revolutionizes interpretations of our past. In fact, what is known of LCA anatomy and ecology is based largely on comparative studies of human and nonhuman primates. These same studies allow us to classify fossils and recognize ancestors. A purported fossil ancestor that must overturn nearly all we know about our evolution to fit into our lineage is unlikely to be such an ancestor. In this regard, it is curious that in a century-old race for superlative hominid fossils on a continent currently populated with African apes, we consistently unearth nearly complete hominid ancestors and have yet to recognize even a small fragment of a bona fide chimpanzee or gorilla ancestor (29).”

- This echoes a bit of what Whiten et al. wrote and sounds like a lot of grumbles about Ardi.

Conclusions on the ‘Yes’ side.

White’s team ends their reply with, “The character distributions we noted in the pelvis, C/P3 complex, and basicranium are consistently indicative of a sister relationship of Ar. ramidus with Australopithecus (and later hominids). For Ar. ramidus to be a stem species of the African ape and human clade as Sarmiento advocates, its highly derived C/P3 complex morphology, basicranial shortening, and iliac structure must have first emerged in some yet-unidentified Miocene ancestor before then reverting to an African ape–like condition. Such multiple, nonparsimonious character reversals are highly unlikely.”

Basically they say that their interpretation of Ar. ramidus is the most, and potentially only, parsimonious one. But somehow, in doing this, they change the rules of parsimony for phylogenetic interpretations of the fossil apes that precede Ardi--which is sort of what Sarmiento is getting at, and what we discussed.

What’s primitive? What’s derived? Where’s parsimony? Where’s homoplasy? How much can converge and how often?

I’m getting a headache, are you?

Both teams cite themselves a lot in their papers. Not to discredit or diminish, by any means, any of their previous research, but this behavior suggests a narrow, potentially polemical view of the field and of how to conduct paleoanthropological research.

And this leads me to wonder....

How could we remove personality, personal stakes, reputations, etc. from paleoanthropology?

A Modest Proposal: Blind Paleoanthropology

Other scientific fields have strategies for removing subjectivity and potential bias from scientific research. Why not implement blind studies on fossils? This goes beyond what number-crunching, taxon-free statistical analyses can do. I’m talking about the hands-on part.

Here's a preliminary outline of the process:

1. Discover an important fossil.

2. Immediately make high quality replicas and take high quality photographs.

3. Send them to a central clearing house.

4. While you begin your analysis, the clearing house begins its duties.

5. The clearing house finds a suitable researcher or team of researchers to analyze the specimens and they, obviously, agree to do it.

(These researchers will need to have access to comparative data, both extant and fossil. This may be the trickiest part, but it still doesn't shoot down this idea totally.)

6. The clearing house sends off the replicas and photographs to the blind team, stripped of all other information. Nothing about identity of discoverers, site or even geographic region, age, associated artifacts, fossils, etc. is included. Nothing. The only thing the blind team knows is that these things came from somewhere on Earth.

7. Discovery team and blind team perform their taxonomic and morphological analyses and come to preliminary conclusions about the species designation (obviously only the discovery team is permitted to name the new species if one is required) and about the individual’s morphology, posture, locomotion, etc…

8. Through the middle-man clearing house the two teams publish in concert.

The Result: Paleoanthropology is more objective, more scientific, more accessible and, potentially, more efficient.
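The crucial blinding step (step 6 above) can be sketched in code; every field name here is our own invention, just to make the idea concrete:

```python
# Toy sketch of the clearing house's blinding step.
# All field names are hypothetical illustrations, not a real schema.
specimen = {
    "replica_id": "SPEC-0042",       # anonymized id assigned by the clearing house
    "discoverers": "Example team",
    "site": "somewhere on Earth",
    "geologic_age_ma": 4.4,
    "photos": ["dorsal.jpg", "plantar.jpg"],
}

# The only information the blind team is allowed to see:
BLIND_FIELDS = {"replica_id", "photos"}

def blind_copy(record):
    """Return a copy stripped of everything but the anonymized id and images."""
    return {k: v for k, v in record.items() if k in BLIND_FIELDS}

print(blind_copy(specimen))
```

The discovery team keeps the full record; the blind team receives only the anonymized replica reference and images, so their anatomical conclusions cannot be steered by provenance.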

Who would serve as the clearing house middle-man/woman? It could be a rotating position within AAPA. People who want to be the blinds can sign on to a list where they describe the kinds of resources that they have or could have if given the opportunity to be the blind. There are certainly plenty of us to fill this roster.

I don’t think this strategy undermines the important role of experience. If your experienced team finds one thing and a “blind” postdoc finds a completely different thing, then experience may indeed prevail until further studies are performed to support one or the other interpretation.

However, the blind team will have an invaluable opportunity to see the anatomy plain and simple. What it is and nothing more. I'm jealous of these hypothetical "blind" people. It sounds like a thrill. Plus, their results have the potential to be stimulating hypotheses for future workers to test!

Questions for MT readers:

Has this already been done?

How could we improve this proposal?

Or are there too many limitations to implementing such a process?

Blind paleoanthropology: Why not?


Esteban E. Sarmiento, "Comment on the Paleobiology and Classification of Ardipithecus ramidus," Science 328, 1105-b (2010).

Tim D. White et al., "Response to Comment on the Paleobiology and Classification of Ardipithecus ramidus," Science 328, 1105-c (2010).

Thursday, June 10, 2010

Monkey see, monkey do. Tragedy more real than fiction

Nothing is juicier than Victorian murder mysteries. If you don't know Wilkie Collins, who largely invented the detective story, you're in for a treat (The Moonstone, The Woman in White, Armadale). And then there was the real mystery, and the real detective story--perhaps the first true one involving a detective, and a great read, The Suspicions of Mr. Whicher. Great summer reading, if nothing else! And of course, it all led to the one and only Sherlock Holmes.

Well, apparently a 40 year-old British PhD student doing his dissertation on Victorian murders (they'll give a PhD for almost anything, it seems), has tried to practice what he preached. At least, so says the NY Times story. This pathetic sod--if the story is true--boldly did in some prostitutes.  The image at left, from the story, a cop searching for bodily remains, looks like the first scene in a BBC mystery series.

Connection to evolution and genetics?..... Well, humans are herd animals, not competing to be the best as much as competing to conform. So, read about murder, commit murder. Or, how about this: Unsolved murders (Jack the Ripper was one of this guy's 'research' subjects) were done by big alpha males (they weren't caught, after all) so this guy's trying to be one himself--doing what some, at least, would claim is evolution's bidding. Or, like much of developmental biology, cells are led by their context to do themselves in or to instruct other cells to do that--cyto-murder!

Or, maybe there's no connection, and this was just a slow news day.