What? What are you doing here, on a holiday eve? Perhaps you're bored, web-browsing while waiting to become thoroughly plastered later tonight? Or, at least, waiting to watch the ball drop in Times Square. Well, we here at MT thought that as a pre-holiday service we'd bring you an echo out of our own modest past. We seem to have caused a stir with yesterday's foray into the wild territory of wild explanations about behavior. That forced a bit of reflection and soul-searching on our part, which we'll try to address with a bonus post today.
The fact is that yesterday's post seemed to elicit a bit of confusion, or even anger, about what we are up to here at The Mermaid's Tale, at least with respect to our views on evolutionary psychology and behavioral evolution. We were even accused of being so opaque as not to understand or even believe in evolution!
So, in an effort to show that we're in fact Equal Opportunity critics, we thought we'd relieve your pre-holiday boredom by rustling up this old post of ours (from 2011), which showed how absolutely open-minded we are, except that instead of explaining present-day behavior, we examined explanations for past behavior, based on paleontological evidence. This should make it crystal clear that we certainly value, indeed even offer, evolutionary explanations.
----------------------------------------------------
From the very beginning of formal taxonomy in biology, genus and species names have been assigned partly as descriptions of important aspects of a species. Hence, Homo sapiens refers to our species' purported wisdom. The underlying idea is that one correctly understands important key functions that characterize the species. Linnaeus did not have 'evolution' as a framework with which to make such judgments, but we do. And we should use that framework!
However, sometimes, in the rush to publish in the leading tabloids (in this case, Science magazine), a name is too hastily chosen. That applies clearly to the recently ballyhooed 1.9+ million year old South African human ancestor. Substantial fossil remains of two individuals, a male and a female, were found and seemed clearly to be contemporaneous. Many features suggested that they could be marketed as a Revolutionary! revision to our entire understanding of human origins.
Now, too often in paleoanthropology there is little substantial evidence for such dramatic claims, similar to the pinnate carnage known as the War of Jenkins' Ear between Britain and Spain in 1739. But not in this case! Here, the specimens were dubbed Australopithecus sediba (see the special issue of Science, Sept 9, 2011 [subscription required]), and there are many traits, some more dramatic than others, by which to argue that it is Really (this time) important.
We haven't the space to outline all these features, but although we are amateurs at paleontology (for expertise, we fortunately have Holly the Amazing Dunsworth as part of our MT team), we feel qualified to comment. One of the most remarkable features of A. sediba is its long, delicate digits on its obviously dextrous hand, and in particular its long and clearly useful thumb. But useful for what?
For one who is rooted, or rutted, in classical human paleontology, and thinks that tool use is what Made us Human, the obvious inference is: a dextrous hand for tool use! There is, however, a tiny problem: no tools were found at the site.
Now this can be hand-waved away in defense of the explanatory hypothesis that is derived from classical adaptive thinking. Wooden tools, like the stems chimps strip and use to ferret out termites, wouldn't preserve, and at that early stage of our evolution, stone pebbles might have been used--or even made--but would perhaps be sparse and, if not reworked by the creatures, unrecognizable as tools. Or perhaps these creatures hadn't carried their tools to the site. There may indeed be perfectly reasonable explanations for why no tools were found, but one must at least admit that the absence of tools does not constitute evidence for the tool hypothesis.
Still, if you believe there must be a functionally adaptive explanation for absolutely everything, and you are committed to the current framework of explanations about our ancestry, then tool-use is the obvious one. How occasional use of pebbles led to higher reproductive success--which, remember, is what adaptive explanations must convincingly show--is somewhat less than obvious. Also, it is the female specimen whose hand is best preserved, though the male is inferred (from one finger bone!) to have been similarly handy. But females don't throw pebbles to gather berries!
A satisfying explanation
So how solid or even credible is the tool-use explanation? Could there be better scenarios? We think so, and we believe our idea is as scientifically valid as the tired old one of tool use.
It is obvious upon looking at the fossil hand that its most likely purpose was, not to mince words about it, masturbation. Just look at the hand itself and its reach position (figure 2). Think about it: deft and masterly self-satisfaction would yield heightened sexuality, indeed a way of keeping oneself aroused at all times, ready for the Real Thing whenever the opportunity might arise. Unlike having to wait for prey to amble by, one could take one's evolutionary future in one's own hands--and use one's tool in a better way, one might say.
Being in a dreamy state is a lot less likely to provoke lethal strife within the population, or "Not tonight, I'm too tired" syndromes, compared to the high-stress life of hunting giraffes (much less rabbits) or trying to bring down berries by throwing chunky stones at them. Our laid-back scenario does not require fabricating stories of how rock-tossing indirectly got you a mate, because pervasive arousal would be much more closely connected to reproductive coupling, a way of coming rapidly to the important climax: immediate evolutionary success.
Indeed, and here is a key part of our explanation, the same fitness advantage would have applied to both the males and the females. If both parties were at anticipatory states more of the time, fitness-related activity would have occurred even more frequently than it does now, if you can imagine that, and quickly led to our own very existence as a be-thumbed if not bewildered species.
Supporting our hypothesis, vestiges of the original use are still around, as for example the frequency with which football and baseball players grab themselves before each play. Of course, humans seem subsequently and unfairly to have evolved to be less gender-symmetric in this regard. But our explanation is far better than the tired stone-axe story-telling with which we're so familiar. For this reason, we suggest the new nomenclature for our ancient ancestor: Australopithecus erotimanis.
Now, you may think our scenario is simply silly and not at all credible. But is it? By what criterion would you make such a judgment? Indeed, even being silly wouldn't make it false. And, while you may view the standard Man the Hunter explanation as highly plausible, being plausible doesn't make it true. Nor when you get right down to it, is the evidence for the stone-age hypothesis any better than the evidence for our hypothesis.
Indeed, if you think carefully about it, even the presence of some worked pebbles would not count as evidence that hands evolved for tool use. The dextrous hand could have been an exaptation, that is, a trait that evolved in some other context and was later co-opted for a new function--in this case, the flexible hand, once evolved for one use, could then be used to make and throw tools. What we have explained here is the earlier function that made the hand available in that way.
Don't laugh or sneer, because this is actually a not-so-silly point about the science, or lack of science, involved in so much of human paleontology. It's a field in which committed belief in the need for specific and usually simple adaptive scenarios, using a subjective, culture-specific sense of what is 'plausible', determines what gets into the literature and the text-books (though our idea might have a better chance with National Geographic).
Will anthropology ever become a more seriously rigorous science, with at least an appropriate level of circumspection? It's something to ponder.
Tuesday, December 31, 2013
Monday, December 30, 2013
"Italians are lovers, not fighters!" -- genes, behavior and ideology
By
Ken Weiss
In what now seems like a long time ago, I was an Air Force weather officer, stationed at a NATO base in England. One of my fellow forecasters, Gus Leshner, the best forecaster I ever met, was a Warrant Officer who had fought in WWII in Italy. He once told me that, in a wartime discussion with some Italians just after their country had rather easily surrendered, one of them told him rather unembarrassedly that Italy lost because "We Italians are lovers, not fighters."
At the time, and in its context, that story seemed an apt metaphor for Italian culture as we then saw it. Gus told me a few of his other war stories while we finished working up the morning forecast for the swarm of pilots who would shortly call for their pre-flight briefings.
I never thought about his quip except as an apt description of a rather weak military country that is loaded with culture--sculpture, opera, paintings, intellectual movies, fantastic food, and people who talk by waving their hands. Organized crime aside, it's a warm and appealing image, and it befits Anne's and my wonderful son-in-law, a native of that land.
But then along came sociobiology....
In their genes
Sociobiology is one of several names for that field of biology (evolutionary psychology, known as evo psycho by some, is another) that seems to think that everything we do or say is mandated by our inherited genomes (sometimes, possibly affected in minor ways by the environment). Even our culture itself is the product of our genes. Our genes make us do what we do, and if we do it well, our population proliferates (as a corollary, behavior has Darwinian-based values that proponents of this view can easily identify). You may know how the theory goes, as it's all over the news media.
A few decades ago, as I recall, Robert Ardrey, an author about as highly qualified as Richard Dawkins to write about the nature of life, proclaimed about Italians that their honking, fist-shaking impatience in traffic (another clearly inherent trait, as you know if you've ever tried to actually drive in Italy) was part of their genetic make-up (Ardrey was a playwright, so, like others of his scientific ilk, inventing fiction was his specialty).
Well, when it comes to the lusty Italians, one can't resist conjuring up images of what's in their (excitingly tight, body-hugging) jeans as well as genes, given various memorable cinematic and statuary representatives of their genome. The Land of Lovers! Soft violin music, or a Verdi aria in the background, a glass or two of Valpolicella, and....well, one's imagination is hard to control.
But wait--isn't, um, Rome in Italy?
When these thoughts pass, we find that we've got some reconciling to do: the image of the pacific Italian lover vis-à-vis the Romans (who were from Italy, as we recall). They were for many centuries the dominant military power, by unrepentant force of arms controlling the biggest empire the world had known (til the Brits came along, and nobody ever accused them of being lovers rather than fighters, given their gloomy climate).
Tough guys, or sissies?: http://annoyzview.files.wordpress.com/2012/03/roman_soldier_1.jpg
Romans with shields, chariots, wielding their swords (in combat, that is)! They had oval chariot-combat arenas, and gladiatorial combat centers (coliseums, probably named after food-brands or insurance companies now lost to history). They slaughtered vicious lions to prove their mighty valor, and slaughtered each other to prove their, well, I'm not sure what, but it certainly wasn't about love!
Now, I have myself seen these scenes of brutality in many places, and have read about what happened in them, even in books by authors from that time (e.g., Galen, one of whose important jobs was fixing up gored and sliced gladiators, and another was taking care of the slashed and stabbed soldiers of Emperor Marcus Aurelius), and in Charlton Heston movies. The Romans were rather nasty beasts. If the classics are anything to go by, a favorite sport was crucifixion.
But then how is this possible? I mean, everyone's heard about lovers' spats, but this seems rather different!
The Roman empire fizzled out only about 1500 years ago. Maybe some Jews and a few other stragglers from the Middle Eastern part of the Empire, or Turks and their like, ended up in Italy as that fizzling happened. But, hostility to immigrants being what it is (and, sociobiologists would tell you, it's inherently genetic), these drifters certainly didn't replace the Romans and their seed.
But, if we're to believe the story told by the likes of evolutionary psychologists, the very same population that is now amorously pacific by nature was once quite violent. So how on earth did the requisite nation-wide genome replacement take place, and how were gladiator genomes exterminated by the soft-and-sexy, artsy ones that now fill the Mediterranean boot?
Our inclination is to venture to say that culture is not, after all, in our genes, and that genes enable all sorts of behaviors but don't prescribe them. But that must be wrong, because so many anthropologists and others who should know assure us that what we are can be read off a DNA sequencer. Indeed, we hear tell of a new book, due out in the spring, by a New York Times 'science' (op/ed?) writer of a particularly determinist persuasion, that will insist that this is so.
To quote Thomas Huxley in 1891, an acerbic critic of religious orthodoxy who, even back then and unlike many of their lot today, actually understood evolution, himself quoting Goethe from 1833: "There is nothing more terrible than energetic ignorance." [Huxley, 1891, "Hasisadra's Adventure".]
It's going to be a long, hot spring.
Wednesday, December 25, 2013
Wishes for the new season, in old verse.
By
Ken Weiss
Even in science, which is supposed to be objective, and should be objective, there are times when feelings and interactions are not consistent with that ideal. Scientists, like any other group of humans, often disagree passionately and, as a result, can treat each other coldly.
We offer you on this Christmas day a 1798 verse by the wonderful pastoral poet William Wordsworth, who made his points in ordinary language so as not to be exclusive about his subject or his readers.
The point, for scientists, is that we needn't hoard our every idea or result, even ones we're no longer using, against those who might gain warmth from their use.
We may differ about what is right or wrong in science, but we know what's right in how we regard each other, so let neither our acts nor our wishes set each other's teeth achatter!
What is't that ails young Harry Gill?
That evermore his teeth they chatter,
Chatter, chatter, chatter still!
Of waistcoats Harry has no lack,
Good duffle grey, and flannel fine;
He has a blanket on his back,
And coats enough to smother nine.
In March, December, and in July,
'Tis all the same with Harry Gill;
The neighbours tell, and tell you truly,
His teeth they chatter, chatter still.
At night, at morning, and at noon,
'Tis all the same with Harry Gill;
His teeth they chatter, chatter still!
Young Harry was a lusty drover,
And who so stout of limb as he?
His cheeks were red as ruddy clover;
His voice was like the voice of three.
Old Goody Blake was old and poor;
Ill fed she was, and thinly clad;
And any man who passed her door
Might see how poor a hut she had.
All day she spun in her poor dwelling:
And then her three hours' work at night,
Alas! 'twas hardly worth the telling,
It would not pay for candle-light.
Remote from sheltered village-green,
On a hill's northern side she dwelt,
Where from sea-blasts the hawthorns lean,
And hoary dews are slow to melt.
By the same fire to boil their pottage,
Two poor old Dames, as I have known,
Will often live in one small cottage;
But she, poor Woman! housed alone.
'Twas well enough when summer came,
The long, warm, lightsome summer-day,
Then at her door the 'canty' Dame
Would sit, as any linnet, gay.
But when the ice our streams did fetter,
Oh then how her old bones would shake!
You would have said, if you had met her,
'Twas a hard time for Goody Blake.
Her evenings then were dull and dead:
Sad case it was, as you may think,
For very cold to go to bed;
And then for cold not sleep a wink.
O joy for her! whene'er in winter
The winds at night had made a rout;
And scattered many a lusty splinter
And many a rotten bough about.
Yet never had she, well or sick,
As every man who knew her says,
A pile beforehand, turf or stick,
Enough to warm her for three days.
Now, when the frost was past enduring,
And made her poor old bones to ache,
Could any thing be more alluring
Than an old hedge to Goody Blake?
And, now and then, it must be said,
When her old bones were cold and chill,
She left her fire, or left her bed,
To seek the hedge of Harry Gill.
Now Harry he had long suspected
This trespass of old Goody Blake;
And vowed that she should be detected--
That he on her would vengeance take.
And oft from his warm fire he'd go,
And to the fields his road would take;
And there, at night, in frost and snow,
He watched to seize old Goody Blake.
And once, behind a rick of barley,
Thus looking out did Harry stand:
The moon was full and shining clearly,
And crisp with frost the stubble land.
--He hears a noise--he's all awake--
Again?--on tip-toe down the hill
He softly creeps--'tis Goody Blake;
She's at the hedge of Harry Gill!
Right glad was he when he beheld her:
Stick after stick did Goody pull:
He stood behind a bush of elder,
Till she had filled her apron full.
When with her load she turned about,
The by-way back again to take;
He started forward, with a shout,
And sprang upon poor Goody Blake.
And fiercely by the arm he took her,
And by the arm he held her fast,
And fiercely by the arm he shook her,
And cried, "I've caught you then at last!"--
Then Goody, who had nothing said,
Her bundle from her lap let fall;
And, kneeling on the sticks, she prayed
To God that is the judge of all.
She prayed, her withered hand uprearing,
While Harry held her by the arm--
"God! who art never out of hearing,
O may he never more be warm!"
The cold, cold moon above her head,
Thus on her knees did Goody pray;
Young Harry heard what she had said:
And icy cold he turned away.
He went complaining all the morrow
That he was cold and very chill:
His face was gloom, his heart was sorrow,
Alas! that day for Harry Gill!
That day he wore a riding-coat,
But not a whit the warmer he:
Another was on Thursday brought,
And ere the Sabbath he had three.
'Twas all in vain, a useless matter,
And blankets were about him pinned;
Yet still his jaws and teeth they clatter;
Like a loose casement in the wind.
And Harry's flesh it fell away;
And all who see him say, 'tis plain,
That, live as long as live he may,
He never will be warm again.
No word to any man he utters,
A-bed or up, to young or old;
But ever to himself he mutters,
"Poor Harry Gill is very cold."
A-bed or up, by night or day;
His teeth they chatter, chatter still.
Now think, ye farmers all, I pray,
Of Goody Blake and Harry Gill!
And now here's another from Wordsworth...
in a more cheerful vein!
Make it Snow!
The minstrels played their Christmas tune
To-night beneath my cottage-eaves;
While, smitten by a lofty moon,
The encircling laurels, thick with leaves,
Gave back a rich and dazzling sheen,
That overpowered their natural green.
Through hill and valley every breeze
Had sunk to rest with folded wings:
Keen was the air, but could not freeze,
Nor check, the music of the strings;
So stout and hardy were the band
That scraped the chords with strenuous hand.
And who but listened?--till was paid
Respect to every inmate's claim,
The greeting given, the music played
In honour of each household name,
Duly pronounced with lusty call,
And "Merry Christmas" wished to all.
Monday, December 23, 2013
Thumbs up...and heads down. More on 'selectionism'
By
Ken Weiss
Last week we used my glorious thumb and its several properties to illustrate the difficult problem of defining what a biological trait is, in relation to the widely held assumption that our traits are here strictly because of natural selection. Our point was that it is quite tricky to define a trait, or specify just how precise, and in what ways, our assumed selection has worked, and is working.
A thumb is seemingly so simple that it can be surprising how difficult this is. But here is another 'trait' that is clearly an evolved trait, one that even I would agree is of fundamental importance to our existence and, in that sense, 'adaptive': the skull. It, if it is an 'it', has clearly evolved, and, using evolutionary theory and comparative anatomy, we can identify what we assume are evolutionarily corresponding 'landmarks' (the dots in the figure below). These are canonical anatomical locations shared by members of a species, but the distances between them vary within and among species. These measures can be indicative of genetic and/or life-experience variation. But is the skull, or a landmark, the product of natural selection? To answer that question, we must understand what 'it' is.
Landmarks: human, baboon, and mouse (Figure by Joan Richtsmeier)
This example is courtesy of work with our long-time collaborator (and good friend) Joan Richtsmeier, an expert on imaging and analyzing craniofacial dimensions and their pattern of variation. Indeed, the figure shows various standard measures that people in this field use. We and Joan, and Tim Ryan also in Anthropology here, along with Jim Cheverud of Loyola in Chicago, Heather Lawson at Washington University Medical School, Jeff Rogers at Baylor College of Medicine, and others, have been looking at the genetic contributions to craniometric traits in laboratory mice and in a big baboon genealogy housed in Texas, and comparing that with what is known about human skull-shape variation.
We have CT scanned the skulls and carefully identified standard 'landmark' points, the dots in the figure (the figure just shows whole skulls, but in a CT scan the landmarks have three-dimensional locations and we can look at any slice through the object). Then, using hundreds of individual skull scans, we have been relating inter-landmark distances (lines that can be drawn between any two landmarks) to genetic variation, to 'map' the genomic locations that affect each distance's (or correlated distances') variation. Yes, skeptical though we are of its over-interpretation, we are using GWAS-like methods (QTL mapping) to do this. (We've described the difficulties with this work a few times on MT, including here and here. The gene mapping issues we discussed in these posts remain unresolved.)
We chose the landmarks and distances based on what has been done before, or reflecting obvious locations like where sutures join or peaks in curvature and so on. The statistical methods in the genomic analysis and so on are also well-tested (there are some controversies but they're beside the point, or comparable to the point that we're making here). For most of the landmarks in one species, homologous locations can be found in the others (in our work, humans, baboons, and mice). In this sense, the distances are very clearly 'objective,' replicable measures of shape and size in the skulls.
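For readers who like to see the bare bones of such an analysis, here is a minimal sketch in Python of the measure-then-map logic just described. Everything in it--the landmark coordinates, the genotype dosages, the sample sizes--is invented purely for illustration; it is not our actual pipeline, just the idea of turning 3D landmarks into inter-landmark distances and then regressing each distance on genotype, marker by marker, in GWAS/QTL fashion.
```python
# Illustrative sketch only (made-up data, not our actual analysis pipeline):
# turn 3D landmark coordinates into inter-landmark distances, then test each
# distance against genotype at each marker with a simple linear regression.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(0)
n_individuals, n_landmarks, n_markers = 200, 10, 500

# Hypothetical inputs: 3D coordinates per landmark, and 0/1/2 genotype dosages
coords = rng.normal(size=(n_individuals, n_landmarks, 3))
genotypes = rng.integers(0, 3, size=(n_individuals, n_markers))

# Every pairwise inter-landmark distance, for every individual
pairs = list(combinations(range(n_landmarks), 2))
distances = np.array([[np.linalg.norm(ind[i] - ind[j]) for i, j in pairs]
                      for ind in coords])  # shape: (n_individuals, n_pairs)

# 'Map' one distance: regress it on each marker and keep the p-values
trait = distances[:, 0]
pvals = np.array([stats.linregress(genotypes[:, m], trait).pvalue
                  for m in range(n_markers)])
print("smallest p-value:", pvals.min(), "at marker", pvals.argmin())
```
With purely random data the smallest p-value is, of course, just noise--which is part of why multiple-testing correction, replication, and skepticism about over-interpretation matter in the real analyses.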
But in what sense are they 'traits'? Is the fact that we can identify and measure them, for reasons we decide on, a sufficient basis for the definition of a trait? Is a measure's genomic 'control', regions that we identify by mapping, actually 'for' that measure? If so it implies that other aspects of skulls, such as the bone thickness, dental cusp or shape patterns, and so on, are unrelated and irrelevant to the measure. That seems hardly to be justified.
We know these traits evolved and that what we observe today ancestrally had to be consistent with successful survival and reproduction--passing the screening of natural selection or whatever other types of functional trials were imposed. And they survived chance events, too.
But just as with the thumb example from Friday's post, it is we the investigators who decide what a 'trait' is, and if we wish to infer adaptive scenarios it is we who decide what is adapting. That leaves us the freedom to decide with equal subjectivity what it was adapting for. Of course we know that other aspects than what we're testing had to have evolved consistently with what we're testing, or else a single, functionally viable and cohering skull would not have developed in the ancestral individuals as they and their various craniofacial 'traits' (whatever they be) were evolving. But of the countless genes expressed in, and hence clearly contributing to, skull development, how is it that we can decide if the gene is 'for' some particular distance measure? A gene may contribute to that strictly via some other effect than what we think our measurement is about (indeed, what 'strictly' means is itself questionable). Indeed, most genes contribute to many different tissues and structures and developmental stages in the head, not just one.
Here, we're not just playing possibly-trivial thumb games, but are considering the nature and evolution of one of our most vital structures. Yet the same question--what is a 'trait'?--arises in regard to essentially every distance we assessed, and to the very act of choosing landmarks and distances to measure. So what we are really considering is the nature of evolution itself.
Chance is a chancy subject because it, and its associated 'probability', are very hard to define. But even if chance has, and always had, nothing to do with our 'traits', and if selection really were, as so often asserted, the be-all and end-all of life (except, of course, for love and lunch), and even if we've had Darwinian ideas for more than 150 years, and even if they were wholly correct, then even then, natural selection and the nature of evolution are still far from completely understood.
Somehow, however, many people in evolutionary biology don't seem to realize this, or don't want to acknowledge it. Or don't they care about truth, so long as they can tell a good story?
Friday, December 20, 2013
"Every trait is due to natural selection!"....often said, but is it true?
By
Ken Weiss
It is often said by people who claim to understand evolution that every trait is due to natural selection. This is belief in a form of very strong determinism that reduces, in essence, to genetic determinism. It's closely akin to the confident defense of the pop-culture concept of the 'selfish' gene against all challenges. Put another way, in such a view, natural selection is not a scientific statement about evidence, but an axiom--an assumption--that is used to make inference from data, but not something to be tested in itself.
For reasons we'll go into, it's not entirely clear where this view comes from, but it's often justified as a form of materialism, as if Genesis According to Darwin is true and therefore selection is the proper riposte to Genesis According to God. But it's highly misplaced in science, and the assumption obscures the actual discussion. Let's agree that if we stay within the confines of materialism--that is, no mysticism, and that the laws of physical nature must not just be consistent with, but must also ultimately explain, everything biological--we can examine the nature of strongly deterministic Darwinism.
So let's consider the proposition that every trait is due to natural selection. We'll use an example. First we need a trait. OK, here's a picture of my thumb (not used out of vanity over its undoubted beauty!). Is 'it' a 'trait'? If it is, of course, we can speak about 'its' evolution. If it's not a trait, then what is it and how can we understand it?
Thumbs up...or angles out....or....?
On second thought, maybe the whole thumb is not a 'trait'. Maybe it's the angle, θ, the measure of the bend where the second phalanx starts. Or is the trait that my thumb has a second phalanx? Or that I can make that angle? Or is the trait the length, x, of the second phalanx? Or the angle between the thumb and the first finger? How do we decide? All of these traits or parts of traits have in some way evolved together, so it's hard to say when one aspect begins and another one ends, that is, what is the 'trait'.
But let's say we make our decision on that question. Then, whatever we decide our trait is, it must, by the usual assumption, have evolved by natural selection, and we want to be able to say what it evolved 'for'. And to do that, we must show that the trait we're assuming resulted from natural selection somehow had direct fitness (survival or fertility) effects, because, since in selectionist explanations chance is not an option, differential fitness is the mechanism by which natural selection works. If our trait is θ, that means that if my ancestors had had an intrapollectial angle (pollex means thumb in Latin, and therefore using the word makes it 'science') that was--what?--greater than my θ, or was it less than my θ, or was it enough that it wasn't exactly my θ?--then they didn't have any kids and I wouldn't be here!
But wait! What if it was the length of the distal pollectial segment, x, that mattered? Those ancestors with more stubby thumb-ends are fossils without issue (that means, by the way, that when paleontologists dig them up they can't interpret what they were adapted for, since the death was because they weren't adaptive). Or maybe those with gross thumbs whose last digit was a tad too long (how much is too much--one millimeter, say?) died out. The trouble of course is that even my adaptive distant ancestors are dead, and how can we say whether they were 'fully' adaptive and died after having many kids, or were just caught by a lion when they wandered carelessly away from the campfire to take a pee one night (we don't need to consider whether, perhaps gender-related, the thumb trait was involved in that latter process).
Now given my important pollectial triangle (here we consider θ and x to have jointly been the critical 'trait', though what justification there is for that is an open issue), our task is to understand what natural selection molded it for.
The obvious thing is to look at what we actually do with our thumbs (that requires a triangular configuration like mine). Most anthropologists will argue that our thumbs are for tool use--at least, it says that in textbooks about thumbs, so it must be true. But I know that my deep ancestors didn't have pens and pencils or space-bars to tap, so it must have been for throwing stones in annual sporting events where the winner impressed the girls and, well, you know....
What about holding a rake in the fall when I clear my yard of leaves--it couldn't have been for that, because my ancestors evolved in Africa, where you don't have to rake leaves in the fall because it's tropical and there is no fall! No, I have to think of uses that aren't just modern exploitations of a trait that was strongly selected in the past for something they did in the past. Even this is a bit now-centered, because in the past the use of the traits they had then might not have been for what they did then, but for what their ancestors did in their more distant past.
Anyway, I suppose, since the trait must have had important fitness-related function if it is assumed to have evolved by natural selection, it may have had a role in sexual stimulation of some sort. One can let one's imagination run wild, but we needn't go that far. As I was saying, other than for modern uses, what I use my pollectial angle for is getting itchy specks of dust and sleep-grains out of the corner of my eye. Without that angle, θ, I wouldn't easily be able to do it. Now on the dusty African savannah, when sharp-eyed spotting of prey and aiming of spears or hunting-stones was vital, the eye-clearing use of thumbs is an obvious explanation.
There is the annoying little question of why not everyone's thumbs have exactly the same triangle--side length or angle. If this is an adaptive trait formed by natural selection long ago, which it must have been since humans around the world have pollectial triangles (and so did Neandertals and even Denisovans!), then why is it still so variable?
Well, I'm getting tired of all this speculation. This is especially unnerving because I can think of other aspects of thumbs that might be considered as traits, or the trait may be a complex of traits with their own individual selective reasons. Or, really disturbing, these assembled aspects may have had different functions (that affected reproductive success) during the million years in which the trait(s) were adaptively evolving.
Now selectionist axioms don't provide room for probability in reproductive success. That's why earlier I said that if my ancestors' value of θ was even a bit different from mine, then they must not have reproduced; if some tolerance was allowed, then fitness would be probabilistic, and that means chance, which is not allowed: the trait has to be functionally deterministic. That's because once you admit probabilities (genetic drift), you open the door to all sorts of less satisfying, less simple, and less clear-cut explanations--and Nature and the Times will shun your papers. Part of the fear is that if one says a trait's presence is for 'random' reasons, it opens the door to creationism or, worse, lack of some explanation that's simple enough for undergraduates to understand or the research to be reported in the NY Times. It raises the specter of a 747 arising randomly out of a pile of scrap metal.
This is a silly non sequitur, since nobody (not even neutralists, who somehow can conceive of aspects of traits that arose for no adaptive reason at all) denies the role of various sorts of constraint, including classical natural selection, even when invoking genetic drift. But it's just as silly to think that every tiny aspect of every trait has to be here because of specific natural selection in the usual sense. Darwin rationalized various aspects of traits, such as correlations among them, basically as having developmental-constraint reasons and so on. We know of such things in better molecular terms today.
More importantly, it is the definition of what counts as a 'trait' that we should be thinking more seriously about. Organisms evolve as organisms, not just as sets of traits, and to the extent that a trait is in the eye of the definer, speculations about what it's 'for' may also be just-so stories in the mind of the definer. So is the use of selection as an axiom: if something (whatever that be) is assumed to be here because of selection, then to say that it's here because of selection is a tautology. The care one must take to avoid such tautological thinking has been known for many decades. There are serious questions about evolution's mechanism that usually are swept under the rug because they point out inconvenient complexity that does not lead to convenient publishable answers. Who decides whether a 'trait' s/he can define is or was a unitary phenomenon during its evolution? And who decides how precise the hypothesized adaptation was? And who decides how much of its variation may have been tolerated or even itself adaptive?
Trait micro-definitions, arbitrary post-hoc trait definition, and so on show by example how hard it is to legitimately decide what to include. And if, as most of the evidence, both theoretical and observational, suggests, typical selective advantages are on the order of about one per thousand (you have 1001 kids with a good thumb-angle vs 1000 without such an angle), that is almost meaningless, besides being literally impossible to prove. It certainly means that traits with socially sensitive implications, when interpreted in strong selectionist terms, are evolutionarily essentially meaningless, as is the very notion that selection like that is a determinative 'force'. Many such traits probably exist, and most traits may be like that, if we could even decide how to define them. They should be deprived of their societal implications or value judgments, or much of the scientific surety with which the explanations are too often couched. Even mate-choice criteria, which could be very close to fitness, may mean little locally--if the trait is used only now and then because there really wasn't much choice in human ancestral populations, or because kinship rules specified whom you mated with--even if over the long term the scenario is true.
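To put a rough number on how weak a one-per-thousand advantage is, here is a toy Wright-Fisher simulation in Python. The population size, starting frequency, generations, and replicate count are all arbitrary choices made for illustration--this is not a model of any real trait or population--but it gives a feel for how hard it is to tell s = 0.001 apart from plain genetic drift.
```python
# Toy Wright-Fisher simulation (all parameters arbitrary, purely illustrative):
# how different does an allele with a 0.1% selective advantage look from a
# strictly neutral one after a couple of hundred generations of drift?
import numpy as np

rng = np.random.default_rng(1)

def final_frequencies(s, pop_size=1000, start_freq=0.5,
                      generations=200, replicates=1000):
    """Allele frequency after `generations` generations, in each replicate population."""
    out = np.empty(replicates)
    for r in range(replicates):
        p = start_freq
        for _ in range(generations):
            p = p * (1 + s) / (p * (1 + s) + (1 - p))           # selection step
            p = rng.binomial(2 * pop_size, p) / (2 * pop_size)  # drift (binomial sampling)
        out[r] = p
    return out

neutral = final_frequencies(0.0)
selected = final_frequencies(0.001)
print("s = 0:     mean %.3f, sd %.3f" % (neutral.mean(), neutral.std()))
print("s = 0.001: mean %.3f, sd %.3f" % (selected.mean(), selected.std()))
```
With these made-up numbers, the expected push from selection over 200 generations is only about 0.05 in allele frequency, while drift alone scatters the replicate populations by roughly ±0.15, so the two distributions overlap almost completely.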
The point about thumbs may seem like a kind of reductio ad absurdum, but is it? Viewed from some time or vantage points, structures whose adaptive origins may seem obvious might be as problematic as my thumb from some other frames of reference. For thousands of generations it could be that, say, the angle or size of the iliac crest would seem as trivial a 'trait' as my thumb, while today it seems fundamental to upright posture.
Fatigue wins....
I must confess that my patience is running low (that's one of my traits!), so I am simply ignoring the wrinkles in the thumb. They may actually be the trait that evolved adaptively, though 'for' what is beyond me. Probably I'm just tiring. You can provide the explanation without my help, I'm sure.
For reasons we'll go into, it's not entirely clear where this view comes from, but it's often justified as a form of materialism, as if Genesis According to Darwin is true and therefore selection is the proper riposte to Genesis According to God. But it's highly misplaced in science, and the assumption obscures an actual discussion. Let's agree that if we'll stay within the confines of materialism--that is, no mysticism and that the laws of physical nature must not just be consistent with, but must also ultimately explain, everything biological--we can examine the nature of strongly determinstic Darwinism.
So let's consider the proposition that every trait is due to natural selection. We'll use an example. First we need a trait. Ok, here's a picture of my thumb (not used out of vanity over its undoubted beauty!). Is 'it' a 'trait'? If it is, of course, we can speak about 'its' evolution. If it's not a trait, then what is it and how can we understand it?
Thumbs up...or angles out....or....? |
On second thought, maybe the whole thumb is not a 'trait'. Maybe it's the angle, θ, the measure of the bend where the second phalanx starts. Or is the trait that my thumb has a second phalanx? Or that I can make that angle? Or is the trait the length, x, of the second phalanx? Or the angle between the thumb and the first finger? How do we decide? All of these traits or parts of traits have in some way evolved together, so it's hard to say when one aspect begins and another one ends, that is, what is the 'trait'.
But let's say we make our decision on that question. Then, if whatever we decide our trait must, by the usual assumption, have evolved by natural selection, we want to be able to say what it evolved 'for'. And to do that, we must show that the trait we're assuming resulted from natural selection somehow had direct fitness (survival or fertility) effects, because, since to selectionistic explanations chance is not an option, differential fitness is the mechanism by which natural selection works. If our trait is θ, that means that if my ancestors had had an intrapollectial angle (pollex means thumb in Latin, and therefore using the word makes it 'science') that was--what?--greater than my θ, or was it less than my θ, or was it enough that it wasn't exactly my θ?, then they didn't have any kids and I wouldn't be here!
But wait! What if the trait was the length of the distal pollectial segment, x, that mattered. Those ancestors with more stubby thumb-ends are fossils without issue (that means, by the way, that when paleontologists dig them up they can't interpret what they were adapted for, since the death was because they weren't adaptive). Or maybe those with gross thumbs whose last digit was a tad too long (how much is too much--one millimeter, say?) died out. The trouble of course is that even my adaptive distant ancestors are dead, and how can we say whether they were 'fully' adaptive and died after having many kids, or were just caught by a lion when they wandered carelessly away from the campfire to take a pee one night (we don't need to consider whether, perhaps gender-related, the thumb trait was involved in that latter process).
Now given my important pollectial triangle (here we consider θ and x to have jointly been the critical 'trait', though what justification there is for that is an open issue), our task is to understand what natural selection molded it for.
The obvious thing is to look at what we actually do with our thumbs (that requires a triangular configuration like mine). Most anthropologists will argue that our thumbs are for tool use--at least, it says that in textbooks about thumbs, so it must be true. But I know that my deep ancestors didn't have pens and pencils or space-bars to tap, so it must have been for throwing stones in annual sporting events where the winner impressed the girls and, well, you know....
What about holding a rake in the fall when I clear my yard of leaves---it couldn't have been for that because my ancestors evolved in Africa where you don't have to rake leaves in the fall because it's tropical and there is no fall! No, I have to think of uses that aren't just modern exploitations of a trait that was strongly selected in the past for something they did in the past. This is a bit now-centered, because in the past the use of the traits they had then might not have been for what they did then, but what their ancestors did in their distant past.
Anyway, I suppose since the trait must have had important fitness-related function, if it is assumed to have evolved by natural selection, it may have had a role in sexual stimulation of some sort. One can let one's imagination run wild, but we needn't go that far. As I was saying, other than for modern uses, what I use my pollectial angle for is for getting itchy specks of dust out and sleep-grains of the corner of my eye. Without that angle, θ, I wouldn't easily be able to do it. Now on the dusty African savannah, when sharp-eyed spotting of prey and aiming of spears or hunting-stones was vital, the eye-clearing use of thumbs is an obvious explanation.
There is the annoying little question of why not everyone's thumbs have exactly the same triangle--side length or angle. If this is an adaptive trait formed by natural selection long ago, which it must have been since humans around the world have pollectial triangles (and so did Neandertals and even Denisovans!), then why is it still so variable?
Well, I'm getting tired of all this speculation. This is especially unnerving because I can think of other aspects of thumbs that might be considered as traits, or the trait may be a complex of traits with their own individual selective reasons. Or, really disturbing, these assembled aspects may have had different functions (that affected reproductive success) during the million years in which the trait(s) were adaptively evolving.
Now selectionist axioms don't provide room for probability in reproductive success. That's why earlier I said that if my ancestors' value of θ was even a bit different from mine, then they must not have reproduced; if some tolerance was allowed, then fitness would be probabilistic, and that means chance, which is not allowed: the trait has to be functionally deterministic. That's because once you admit probabilities (genetic drift), you open the door to all sorts of less satisfying, less simple, and less clear-cut explanations--and Nature and the Times will shun your papers. Part of the fear is that if one says a trait's presence is for 'random' reasons, it opens the door to creationism or, worse, lack of some explanation that's simple enough for undergraduates to understand or the research to be reported in the NY Times. It raises the specter of a 747 arising randomly out of a pile of scrap metal.
This is a silly non sequitur, since nobody (not even neutralists, who somehow can conceive of aspects of traits that arose for no adaptive reason at all) denies the role of various sorts of constraint, including classical natural selection, even when invoking genetic drift. But it's just as silly to think that every tiny aspect of every trait has to be here because of specific natural selection in the usual sense. Darwin rationalized various aspects of traits, such as correlations among them, basically as having developmental-constraint reasons and so on. We know of such things in better molecular terms today.
More importantly, it is the definition of what counts as a 'trait' that we should be thinking more seriously about. Organisms evolve as organisms, not just as sets of traits, and to the extent that a trait is in the eye of the definer, speculations about what it's 'for' may also be just-so stories in the mind of the definer. So is the use of selection as an axiom: if something (whatever that be) is assumed to be here because of selection, then to say that it's here because of selection is a tautology. The care one must take to avoid such tautological thinking has been known for many decades. There are serious questions about evolution's mechanism that usually are swept under the rug because they point out inconvenient complexity that does not lead to convenient publishable answers. Who decides where a 'trait' s/he can define is or was a unitary phenomenon during its evolution? And who decides how precise the hypothesized adaptation was? And who decides how much of its variation may have been tolerated or even itself adaptive?
Trait micro-definitions, arbitrary post-hoc trait definition, and so on show by example how hard it is to legitimately decide what to include. And if, as most of the evidence, both theoretical and observational, suggests, typical selective advantages are on the order of about one per thousand (you have 1001 kids with a good thumb-angle vs 1000 without such an angle), that is almost meaningless besides being literally impossible to prove. It certainly means that traits with socially sensitive implications, when interpreted in strong selectionist terms, are, evolutionarily, essentially meaningless, as is the very notion that selection like that is a determinative 'force'. Many such traits probably exist, and most traits may be like that, if we could even decide how to define them. They should be deprived of their societal implications or value judgments, or at least of much of the scientific surety with which the explanations are too often couched. Even mate-choice criteria, which could be very close to fitness, may mean little locally, even if the long-term scenario is true, if the trait was used only now and then because there really wasn't much choice in human ancestral populations, or because kinship rules specified whom you mated with.
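To get a quantitative feel for how weak a 'one per thousand' advantage really is, here is a minimal simulation sketch of our own devising (the population size, starting frequency, and number of trials are arbitrary choices for illustration, not estimates for any real trait or population). It follows an allele through a simple Wright-Fisher model of drift plus selection, once with a fitness advantage of s = 0.001 and once with none:

```python
import numpy as np

rng = np.random.default_rng(1)

def fixes(N=100, s=0.001, p0=0.05, max_gens=50_000):
    """Follow one allele until it is lost (False) or fixed (True)."""
    p = p0
    for _ in range(max_gens):
        # selection step: weight the allele by relative fitness 1 + s
        p = p * (1 + s) / (p * (1 + s) + (1 - p))
        # drift step: binomial sampling of the 2N gene copies of the next generation
        p = rng.binomial(2 * N, p) / (2 * N)
        if p == 0.0 or p == 1.0:
            return p == 1.0
    return p > 0.5  # cap essentially never reached with these parameters

trials = 1000
favored = sum(fixes(s=0.001) for _ in range(trials))
neutral = sum(fixes(s=0.0) for _ in range(trials))
print(f"allele fixed with s = 0.001: {favored}/{trials}")
print(f"allele fixed with s = 0.000: {neutral}/{trials}")
# Expect roughly 6% vs 5% -- a gap well inside the run-to-run noise of the
# simulation itself, let alone anything observable in real organisms or fossils.
```

In a large enough population, over enough generations, even so small an advantage can matter in principle; the trouble is that no study one could actually carry out would detect it for a particular trait, which is why such adaptive claims rest on assumption rather than demonstration.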
The point about thumbs may seem like a kind of reductio ad absurdum, but is it? Viewed from some times or vantage points, structures whose adaptive origins may seem obvious might be as problematic as my thumb is from other frames of reference. For thousands of generations it could be that, say, the angle or size of the iliac crest would have seemed as trivial a 'trait' as my thumb, while today it seems fundamental to upright posture.
Fatigue wins....
I must confess that my patience is running low (that's one of my traits!), so I am simply ignoring the wrinkles in the thumb. They may actually be the trait that evolved adaptively, though 'for' what is beyond me. Probably I'm just tiring. You can provide the explanation without my help, I'm sure.
Thursday, December 19, 2013
Cycling--and cycling ideas
By Ken Weiss
On December 13 we posted about the Big Surprise release of a research paper that showed that exercise is good for you, indeed, better than medicine. Not good news for doctors (or the corporations that own them or sell pills through them). We commented about how well known this was.
A regular reader, John Vokey, pointed out a very nice recent article in the British Medical Journal, by the arch skeptic Ben Goldacre and David Spiegelhalter, about how we know whether something that's obvious is actually true. Here is a link to that piece. Goldacre is a widely known writer and commentator, as well as a practicing physician in Britain, who has written a great deal about similar aspects of how we use data and how this affects medicine. He writes about 'Bad Pharma' to try to correct such things (see link below his picture for more).
Ben Goldacre
While it is obvious that exercise is good for your health, and we have some good physiological and physical reasons and mechanisms to back up that statement, in our post we noted that the correlation between health and exercise may not be so simple. For example, you have to already be healthy to exercise, so the correlation may be a result, not a cause, of better health.
Goldacre takes something bluntly obvious, that wearing a helmet when bicycling is good for your health (that is, in terms of injuries), and shows that even this is neither so obvious nor so simple. Just to illustrate the point: if you ride more often, or more often in traffic, because you feel safer when you wear a helmet, then even with the same per-mile (or, in Goldacre's UK, per-kilometre) risk there will be more rather than fewer cycling-related injuries, because the population at risk, and its exposure, has grown. Or drivers may cut closer to you, seeing that you are helmeted. And so on. As John Vokey pointed out in his comment, that brief but to-the-point article is a fine lesson in statistical reasoning.
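To make the arithmetic of that point explicit, here is a toy calculation with numbers we have simply invented for illustration (they are not figures from Goldacre and Spiegelhalter's article): population-level injuries are exposure times per-mile risk, so a genuine per-mile benefit can coexist with more injuries overall once behaviour responds to the feeling of safety.

```python
def injuries(riders, miles_per_rider, risk_per_mile):
    """Expected injuries = how many ride x how far each rides x risk per mile."""
    return riders * miles_per_rider * risk_per_mile

# Invented baseline: no helmets
before = injuries(riders=1000, miles_per_rider=500, risk_per_mile=2e-4)

# Invented helmet scenario: per-mile risk falls 20%, but feeling safer,
# 10% more people cycle and each rides 25% further
after = injuries(riders=1100, miles_per_rider=625, risk_per_mile=1.6e-4)

print(f"injuries without helmets: {before:.0f}")  # 100
print(f"injuries with helmets:    {after:.0f}")   # 110
# The per-mile risk really did fall, yet total injuries rose, because the
# at-risk exposure grew -- exactly the kind of trap naive comparisons fall into.
```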
If something as apparently simple as the risk of cycling with vs without a helmet is not so simple, then how much more complex will other supposed cause-and-effect scenarios be, whether epidemiological, genetic, or evolutionary? A due respect for this complexity should routinely temper conclusions from simple study designs (or, in the case of evolution, from what are often almost pure surmises about natural selection in the distant past).
Yet pressures, and perhaps natural tendencies in our boastful current culture, seem to be doing just the opposite: leading investigators to make ever-quicker and ever more grandiose claims about their findings. This is used for self-promotion in general, in seeking grant support, and in the rush to the media. And science journalists often show little, sometimes almost zero sense of skepticism or even circumspection, about such claims.
The issues we face in science are nowadays very complex and subtle, and we know from even simple examples, such as the one Goldacre used to illustrate the pitfalls of statistical reasoning, that our conclusions can be very wrong, even in very simple ways. We try to make conclusions in science, but we should do that by starting with respect for the complexity of the problem.
Wednesday, December 18, 2013
On being at home where we live
By Ken Weiss
We use this blog to present issues we feel are interesting....and points we believe are important. Often that means critiques of things we see around us. Those can be stronger or weaker as we see the situation through our own, fallible, eyes.
But it is all too easy to fall into critic-mode, to fail to appreciate what is good about our areas of knowledge, and to lose sight of what, in historical terms, a life of science really represents compared to what most of our species have had to endure.
This, if anything, should reinforce the burden every scholar or scientist feels: not to waste the opportunity, or act too selfishly, but to work towards a common good--the common good we routinely voice as a profession, and which it is in our power to approximate more closely.
In recognition of the need to 'be at home where you live', we reproduce a poem we recently learned of, by the wonderful pastoral, common-man's poet, William Wordsworth. It is even fitting for the season.
Here, 'home' includes more than one's collection of Big Data, grants, or other score counts, and shows what one can find even in a simple poem.
Weaver at his loom; Van Gogh
Nuns Fret Not at Their Convent’s Narrow Room
By William Wordsworth
Nuns fret not at their convent’s narrow room;
And hermits are contented with their cells;
And students with their pensive citadels;
Maids at the wheel, the weaver at his loom,
Sit blithe and happy; bees that soar for bloom,
High as the highest Peak of Furness-fells,
Will murmur by the hour in foxglove bells:
In truth the prison, into which we doom
Ourselves, no prison is: and hence for me,
In sundry moods, ’twas pastime to be bound
Within the Sonnet’s scanty plot of ground;
Pleased if some Souls (for such there needs must be)
Who have felt the weight of too much liberty,
Should find brief solace there, as I have found.
Tuesday, December 17, 2013
The FDA's new 'ban' on antibiotic use in animals needs more teeth
For decades, healthy cows, pigs, and chickens have been given antibiotics to maintain their health and to boost their growth, and more recently so have farmed fish; this is a major cause of the waning effectiveness of antibiotics. There seem to be multiple pathways leading from antibiotic use in animals to antibiotic resistance in humans.*
Chicken house; Wikimedia Commons
The 'ban'
Now the FDA says it is effectively banning the use in food animals of those antibiotics that are medically important in human health, and that are used solely to enhance animal growth. A second piece of the new regulations is that a licensed veterinarian will be required to oversee antibiotic use if the grower wants to deliver these drugs to prevent illness. The changes will become effective over the next three years.
How will it work? The FDA is requesting that drug makers change antibiotic labels to exclude their use in animal growth promotion. Whether they do this or not is entirely voluntary. Given the huge vested interest drug manufacturers have in selling antibiotics to food producers -- 70-80% of antibiotics in the US are used in the food supply** -- and that farmers have in promoting fast growth in their animals, whether this will actually work is an open question, and there are many doubters. That said, the two pharmaceutical companies that make the majority of these antibiotics have said they will comply.
Comply or not, there are loopholes. A food producer can claim that the same daily use of low doses of antibiotics now meant to enhance growth is required to prevent illness, which would mean it's allowed. Thus, it's possible that nothing will change. Many critics would much prefer that antibiotics be allowed only to treat infection, and would like to see the FDA ban the preventive use of antibiotics.
Why we need a policy that works
It is important that we have a policy that works. Maryn McKenna describes the dire consequences of losing antibiotics in her sobering, excellent recent piece for Wired ("When We Lose Antibiotics, Here's Everything Else We'll Lose Too"). Not only will we lose the obvious, the ability to treat infection; as she writes, we'll also lose the ability to treat cancer when treatment requires suppressing the immune response, to do organ transplants, and to provide kidney dialysis (which relies on an implanted portal into the bloodstream); many kinds of surgery will become dangerous, Caesarian sections will be risky, and much more. As she points out, in the pre-antibiotic era "one out of every nine skin infections killed" -- life will be a lot more dangerous again.
And, clearly, the way animals are raised for food on industrial farms will also have to change. There are many arguments in favor of this already, even apart from the antibiotic resistance issue. Raising animals in the kinds of crowded conditions pigs, cattle, and chickens are too often kept in increases their risk of illness. And these animals are often raised on feed that also makes them more susceptible; smaller farms and more humane conditions would greatly reduce the need for antibiotics. As McKenna also points out, many crops depend on antibiotics as well. When fruit or vegetable diseases now controlled with antibiotics can no longer be, that will be another major problem.
So, the FDA may be taking a desirable first step, but the stakes in public health terms are very high. If the critics turn out to be right about the loopholes, there's a lot to lose.
-----
*Smith, DL et al., Animal antibiotic use has an early but important impact on the emergence of antibiotic resistance in human commensal bacteria, PNAS, 2001.
Marshall, BM; Levy, SB., Food Animals and Antimicrobials: Impacts on Human Health, Clinical Microbiology Reviews, 2011.
**Mellon M, Benbrook C, Benbrook K L. Hogging It: Estimates of Antimicrobial Abuse in Livestock. Cambridge, MA: Union of Concerned Scientists; 2001.
National Research Council, Committee on Drug Use in Food Animals. The Use of Drugs in Food Animals: Benefits and Risks. Washington, DC: Natl. Acad. Press; 1999.
Monday, December 16, 2013
Innovation-stimulation: will it work? Definitely worth a try!
By Ken Weiss
Francis Collins has for some reason decided that NIH should try really, really this time, to stimulate research 'innovation' by moving at least a bit away from costly, wasteful, excessive but incremental big project grants, to dedicating a goodly chunk of NIH external research funding to individual investigators rather than large groups. Here is the Nature story about it.
The NIH has been experimenting with funding high-risk, high-reward science with four separate pilot programs, including the Pioneer awards. According to the Nature piece,
Alpine ibex climbing Cingino Dam in Italy (source: every other web site)
The NIH currently spends less than 5% of its US$30-billion budget on grants for individual researchers, including the annual Pioneer awards, which give seven people an average of $500,000 a year for five years. In contrast, the NIH’s most popular grant, the R01, typically awards researchers $250,000 per year for 3‒5 years, and requires a large amount of preliminary data to support grant applications.

Expanding the program is clearly a move in a good direction. Big projects have their place, but have become as much a reflexive strategy for self-perpetuation as they are truly justified by their results history (which, by and large, isn't all that good or has reached diminishing returns).
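To put those quoted figures in rough perspective, here is a back-of-the-envelope sketch; the numbers come from the Nature passage above, but the 1% reallocation is purely our own hypothetical, not anything NIH has announced.

```python
nih_budget = 30e9               # total NIH budget, USD, per the quoted Nature piece
pioneer_per_year = 7 * 500_000  # seven Pioneer awards at ~$500,000 per year
r01_per_year = 250_000          # a typical R01 annual award

print(f"Pioneer program share of budget: {pioneer_per_year / nih_budget:.3%}")
# -> about 0.012% of the budget

reallocated = 0.01 * nih_budget  # hypothetical: shift 1% of the budget to such awards
print(f"1% of the budget would fund about {reallocated / r01_per_year:.0f} R01-sized awards per year")
# -> roughly 1,200 additional investigator-scale grants
```

Small money by Big Science standards, which is why the direction of the change matters more, at this point, than its size.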
Of course, individual independent investigators are just people, trend-following herd animals like most of us are. Once the new program is in place, every investigator will flock to the trough. Most will propose routine, safe projects even if they assert that they're 'innovative'.
Those proposals that really are innovative will involve risk in two main senses. First, they will mainly involve procedures or strategies that are, to the field and/or to the investigator, truly new, unclear, or untried. Second, if the work is really innovative, most of it won't get completed on time, won't yield much in the way of publications, and -- worse -- won't find anything really new.
But is that outcome really 'worse'? We think just the opposite! If not much is invested in a project, not much is lost if it was truly creative but failed. By contrast much is currently invested in huge projects that are so safe that they hardly generate commensurate returns. Indeed, the reason for the failure of a really exploratory study may provide more useful knowledge than most 'positive' studies' findings. And most potentially innovative ideas are, and turn out to deserve to be, busts. That is why we call the ones that succeed innovative: they can change how we think.
This NIH policy change won't change the crush of competition to keep the grants flowing, and that will make it hard to see what is really innovative in the inevitable panicky rush to get one's salary covered and keep the lab operating. It takes experience, perhaps, but not undue cynicism, to predict that this new policy will be gamed and strategized. The overpopulation of investigators still needs funding (or jobs!), and will flood to the new trough, finding all sorts of reasons why their work is innovative. Do you think it could be otherwise, or that such discussions are not already taking place at brown-bag lunches in departments across the country?
Place limits
Unless we limit how much funding any one investigator can have, refuse to give these new grants to people who already have a grant, or impose some such restrictions, what we will largely see will be just a game of musical chairs. New labels, same stuff. After all, who will be reviewing and administering these applications? It will be the same people who have brought you big-scale non-innovation all these years. Unless today's heavy hitters are able to reverse the politics back to the old way (making sure their big projects don't get curtailed!), NIH will make it a new System, with all the bureaucratic politics and cumbersomeness that that involves. If their careers have been spent on the current treadmill, how many will even be able to think in truly innovative ways? We are, after all, middle-class people who need to earn a living as things now stand. What else can you expect?
Still, the change should be better than what we currently have! The amount of funds wasted in the new way will be less than the amount being thrown away in the current rush to Big Science, the seeking of huge projects too big to kill and thus able to provide career safety for the lucky investigators and their labs and fancy equipment. As long as, under the new approach, the funds for individual researchers are enough to let them do good work but not enough to let them get comfortably entrenched, or to let their administrators depend on the overhead, it's got a chance to make a real difference.
Of course, this will work even better if those who are training graduate students and post-docs inculcate innovative thinking. If the grants are big enough just to enable faculty to hire students and technical staff to do the work, they may work less well. What we need are grants to individuals that are small enough that the recipients will actually have to roll up their sleeves and do some of their own work.
We would suggest a fillip that should be tried: Give grants to graduate students to do their own truly independent project, not just to be serfs on their mentors' project. Independent, free-standing funding for dissertations. Labs should be their professors' places of training, not just their playgrounds.
Finally, we know that too-few and too-small will not really work well (compare western science to most of Eastern European, Indian, or South American science in the '80s, for example). But unrelenting vigilance will be required to prevent coalescence once again into fewer, bigger projects.
If it can be done and really done properly, this could be a salubrious change, in directions we have to approve of, since we've been criticizing the current big-science mode for years. But we have to be patient, because innovation is very hard to come by.
Friday, December 13, 2013
Hippocrates knew it. Galen knew it. EVERYBODY knows it! (So why are we still paying for research on it?)
By Ken Weiss
It is totally fair to say that everybody knows that exercise is good for you, and overindulgence isn't. Not all the details are known and they probably change over time and place, because there are various ways to exercise and various ways to eat, drink, and be merry.
So, if we all already know this, and have known it for millennia, why are we as societies still paying for researchers to design even more studies so they could show this yet again, and again, and again, and...? The latest instance is covered in a recent NY Times story reporting a study published in the British Medical Journal in October ("Comparative effectiveness of exercise and drug interventions on mortality outcomes: metaepidemiological study", Naci and Ioannidis, BMJ 2013:347).
But around 400 BC Hippocrates (whoever he/they was/were) clearly observed, knew, and stated that moderation in all things is good for health and longevity, and that exercise is part of that. 500 years later (yet still 2000 years ago), Galen was also very clear about the same points, and this from his own very extensive observation. Yes! "Evidence-based medicine" isn't new!
Hippocrates; Rubens engraving; Wikipedia
If we could give every individual the right amount of nourishment and exercise, not too little and not too much, we would have found the safest way to health.
Eating alone will not keep a man well; he must also take exercise. -Hippocrates
And, Galen's view, as described by Jack Berryman in "Motion and rest: Galen on exercise and health" (The Lancet, vol 380:9838, pp 210-11):
Galen (c 129—210 AD), who borrowed much from Hippocrates, structured his medical “theory” upon the “naturals” (of, or with nature—physiology), the “non-naturals” (things not innate—health), and the “contra-naturals” (against nature—pathology). Central to Galen's theory was hygiene (named after the goddess of health Hygieia) and the uses and abuses of Galen's “six things non-natural”. Galen's theory was underpinned by six factors external to the body over which a person had some control: air and environment; food (diet) and drink; sleep and wake; motion (exercise) and rest; retention and evacuation; and passions of the mind (emotions). Galen proposed that these factors should be used in moderation since too much or too little would put the body in imbalance and lead to disease or illness.
Galen; Wikipedia
The authors of the BMJ paper looked at studies of the effect of exercise on mortality from heart disease, chronic heart failure, stroke or diabetes, and found that exercise was either as good as the standard drug treatment or better, except in the case of chronic heart failure. The results show that exercise can be very effective, although medicine is the usual treatment prescribed (naturally).
The results also underscore how infrequently exercise is considered or studied as a medical intervention, Dr. Ioannidis said. “Only 5 percent” of the available and relevant experiments in his new analysis involved exercise. “We need far more information” about how exercise compares, head to head, with drugs in the treatment of many conditions, he said, as well as what types and amounts of exercise confer the most benefit and whether there are side effects, such as injuries. Ideally, he said, pharmaceutical companies would set aside a tiny fraction of their profits for such studies.
But he is not optimistic that such funding will materialize, without widespread public pressure.

The bottom line is that we already know exercise is good for you, don't we?** It is problematic that we yet need 'far more information', the usual researcher's plaint. How many details do we need to know about, already knowing that they are largely ephemeral, when there are actual serious unanswered disease questions that we might study? If half or more of diseases are in a sense treatable, preventable, or delayable with exercise rather than drugs, MRIs and CAT scans, surgery or other approaches, then why do we still allow doctors to meddle as much as they do? Why do we still have to spend public funds, essentially to feed schools of public health, to keep on doing what are essentially retreads of the same old studies (with fancier and costlier statistical packages and other exciting technologies to make us seem wise and innovatively insightful)--when there are real, devastating disease problems with real unknowns that could be addressed more intensely?
This is not to mention how much disease would be reduced if we had the societal guts to address poverty. Real unsolved disease problems may be harder to design studies to understand, actually requiring new thinking rather than just designing some new sampling and questionnaires and the like. But at least it would be a more real kind of 'research'.
One answer is that this is how the system, and what is basically its rote means of self-perpetuation, works. Science is a social phenomenon, not just an objective one. An institutionalized system doesn't insist on moving beyond essentially safe problems that we have sufficient knowledge of, to face up to ones we don't yet understand. That's riskier for professors needing salaries and publications, and for administrators needing the overhead funding. It's part of the fat in the system.
And fat, as we've known since Hippocrates, isn't good for you!
** Actually, despite this post, no, we don't really know this quite as clearly as you might think! We certainly do have lots of good mechanistic and physiological reasons why exercise is good, but some fraction of the association of exercise with health may be due to confounding: those who exercise are already healthier than average, or know or care more about health, or they wouldn't do it (e.g., if they were too overweight, or had troublesome joints, etc.). So those who exercise are not a random sample. Is it the exercise itself that does them good? In any case, Galen thought so: he went to the gym regularly because he knew it was good for him!
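That confounding worry can be made concrete with a small simulation, entirely a toy model of our own with arbitrary parameters (not a reanalysis of the BMJ data): exercise is given no causal effect at all, yet exercisers still appear to do better.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

health = rng.normal(0.0, 1.0, n)                  # unobserved baseline health
p_exercise = 1.0 / (1.0 + np.exp(-2.0 * health))  # healthier -> more likely to exercise
exercises = rng.random(n) < p_exercise

p_death = 1.0 / (1.0 + np.exp(2.0 * health))      # healthier -> less likely to die
dies = rng.random(n) < p_death                    # note: exercise never enters this equation

print(f"death rate among exercisers:     {dies[exercises].mean():.3f}")
print(f"death rate among non-exercisers: {dies[~exercises].mean():.3f}")
# The exercisers come out far 'healthier' even though exercise does nothing in
# this model; the whole gap is confounding by baseline health.
```

None of which means exercise does nothing; it means that the raw correlation, however many times it is re-demonstrated, settles less than it seems to -- which is, after all, the point of asking why we keep paying to re-demonstrate it.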