We see study after study of genes 'for' behavioral traits considered to be driven by selection: intelligence, athletic ability, criminality, recklessness, drug abuse, aggression, even being a caring grandmother. The list goes on and on and on. Simplistically stated, the idea is that behavioral traits have a genetic basis, usually a simple one, and that during human evolution, those genetically bestowed with the 'best' version of the trait outcompeted those unlucky enough to be less intelligent, less of a risk taker, a more fearful warrior, and so on. That is pure Darwinian determinism: the bearers of the 'optimal' version of a trait systematically had more offspring, and thus the gene(s) for that version were selected for, and therefore increased in frequency.
This is why, for example, the basis of homosexuality is so curious to evolutionary biologists. How could a behavioral trait that means its bearer does not have offspring ever have evolved? How could a gene persist if it codes for something that interferes with reproduction, so that the gene isn't passed on? The most common explanation is that during the long millennia of human evolution, homosexuals mated and reproduced anyway, because homosexuality was culturally proscribed in the small groups in which humans lived. Maybe that's so, but it's certainly no longer true in many cultures, where being gay doesn't have to be hidden anymore--so should we now expect the frequency of homosexuality to fall? Another post hoc account is that homosexuals helped care for their relatives' children, enhancing the reproductive success of their extended kin and hence remaining consistent with natural selection--a technically plausible but forced, speculative explanation from those who want Darwinian determinism to be as universal as gravity.
In any case, the "cause" of homosexuality is certainly an interesting evolutionary puzzle, if it's assumed to be genetic. It may well not be, of course--perhaps sexual orientation is influenced by environmental exposures in utero or in infancy. But let's go with the genetic assumption. Let's even assume that looking for genes for IQ, aggression and so many other behaviors is reasonable, because all these traits, like all traits, must be here because of natural selection.
In that case, it's very curious that there are so many traits that defy Darwinian explanation whose genetic basis isn't being explored. Where are the searches for genes for, say, voluntary celibacy, use of birth control, the non-celibate choosing not to have children, suicide, child-beating, infanticide, abortion, or young men volunteering to be soldiers? These are all traits that make no evolutionary sense and shouldn't have evolved, if such traits have a genetic basis. We should be just as perplexed by the evolutionary history of these behaviors as we are by homosexuality. Why aren't we looking for genetic explanations?
I think it's a reflection of cultural values. It's rather akin to environmental epidemiologists never looking for the harmful effects of cauliflower, broccoli or Brussels sprouts--instead they study the things we like, our indulgences: alcohol, fatty foods, sugar. That reflects our Puritan scorn for pleasure. I think we notice and think about what seem to us to be unacceptable aberrations, and give much less thought to what seems normal. It's ordinary to us that nuns and priests choose not to reproduce, even if that is completely non-Darwinian, or that suicide bombers are generally of reproductive age and are foregoing having children. Abortion may not be personally acceptable to you, but it's a societal norm. Indeed, artificial birth control itself is highly problematic in a Darwinian world--even worse for Darwinian theory, it sends women into the work force, away from their children.
Apparently we don't generally notice that these 'normal' behaviors are non-Darwinian--our primary drive, conscious or not, is supposed to be inherent: to perpetuate our genes. If behaviors are genetically driven and selected for, then it's not just homosexuality--which, until recently, was not socially acceptable--that doesn't make evolutionary sense; it's any behavior whose primary ramification is not to send our genes into the next generation.
So, don't we have the same issue explaining the evolutionary origin of all these behaviors as we do explaining homosexuality? Perhaps. But let's consider an explanation that's not generally proffered: perhaps this is all just statistical 'noise' around a weak rather than precise or strongly deterministic natural selection--perhaps Nature is just sloppier than the strictly Darwinian view would expect. No species' success requires that every individual reproduce, so long as enough do. Culture is a powerful force--once we respond to cultural dictates and norms, the simple evolutionary explanation of selection for optimal (in fitness terms) traits is much less convincing. And perhaps we didn't evolve to reproduce, just to have orgasms.
And, is there a gene for being dogmatic?
Monday, May 9, 2016
Darwin the Newtonian. Part III. In what sense does genetic drift 'exist'?
By Ken Weiss
It has been about 50 years since Motoo Kimura, and King and Jukes, proposed that a substantial fraction of genetic variation can be selectively neutral, meaning that the frequency of such an allele (sequence variant) in a population or among species changes by chance--genetic drift--and, furthermore, that selectively 'neutral' variation and its dynamics are a widespread characteristic of evolution (see Wikipedia: Neutral theory of molecular evolution). Because Darwin had been so influential with his Newtonian-like deterministic theory of natural selection, neutral evolution was and still is referred to as 'non-Darwinian' evolution. That's somewhat misleading, if convenient as a catch-phrase, and often used to denigrate the idea of neutral evolution, because even Darwin knew there were changes in life that were not due to selection (e.g., gradual loss of traits no longer useful, chance events affecting fitness).
Why do many biologists resist the idea of neutrality? First, of course, is the 'blind watchmaker' argument. How else can one explain the highly organized, functionally intricate traits of organisms, from the smallest microbe to the largest animals and plants? No one can argue that such traits could plausibly just arise 'by chance'!
But beyond that, the reasoning basically coincides with what Darwin asserted. It takes a basically thermodynamic belief and applies it to life: Mother Nature can detect even the smallest difference between bearers of alternative genotypes and, in her Newtonian, force-like way, will confer better success on the better genotype. If we're material scientists, not religious or other mystics, then it is almost axiomatic that a mutation changes the nature of the molecule--if for no other reason than that it requires the use of a different nucleotide, and hence the production of at least slightly different molecules using at least slightly different amounts of energy.
The difference might be very tiny in a given cell, but an organism has countless cells--many billions in a human, and what about a whale or a tree? Every nonessential nucleotide has to be provided for each of those billions of cells, renewed each time any cell divides. A mutation that deleted something with no important function would make its bearer more economical in its need for food and energy. The difference might be small, but those who don't waste energy on something nonessential must on average do better: they'll need to find less food, for example, meaning less time spent out scouting and hence exposed to predators, and so on. In short, even such a trivial change will confer at least a tiny advantage, and as Darwin said many times in describing natural selection, nature detects the smallest grain in the balance (scale) of the struggle for life. So even if there is no direct 'function,' every nucleotide functions in the sense of needing to be maintained in every cell, creating a thermodynamic or energy demand. In this Newtonian view, which some evolutionary biologists hold or invoke quite strongly, there simply cannot be true selective neutrality--no genetic drift!
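To put a rough number on 'tiny', here is a back-of-envelope sketch in Python; the genome size is a round assumed figure, and the cost model (fitness cost proportional to the extra replication burden) is a deliberate caricature, not a measurement:

```python
# Back-of-envelope: the energetic 'cost' of one surplus nucleotide,
# treated crudely as its share of the total replication burden.
# Both numbers below are assumptions for illustration only.
diploid_genome_bp = 6.4e9   # approximate human diploid genome size
extra_bp = 1                # one nonessential base pair

s = extra_bp / diploid_genome_bp   # implied selection coefficient
print(f"implied selection coefficient s ~ {s:.1e}")   # ~1.6e-10
```

Whether a difference of that order can ever matter to selection is exactly what the two camps below dispute.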
The relative success of any two genotypes in a population sample will almost never be exactly the same, and how could one ever claim that there is no functional reason for the difference? Just because a statistical test finds no 'significant' difference--meaning only that the observed result would not be particularly unusual if nothing were going on--tiny differences can nonetheless be real. For example, a die that's biased in favor of 6 can, by chance, come up 3 or some other number more often in an experiment of just a few rolls. Significance cutoff values are, after all, nothing more than subjective criteria that we have chosen as conventions for making pragmatic decisions (the reason for dice being this way is interesting, but beyond our point here).
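The dice point is easy to make concrete. Here is a minimal simulation; the 25% bias toward 6 is an arbitrary assumed value:

```python
import random

random.seed(1)

def roll_biased_die(n, bias=0.25):
    """Roll a die loaded toward 6 (a fair die would have bias = 1/6);
    the other five faces share the remaining probability equally."""
    faces = [1, 2, 3, 4, 5, 6]
    weights = [(1 - bias) / 5] * 5 + [bias]
    return [random.choices(faces, weights)[0] for _ in range(n)]

for n in (12, 12_000):
    rolls = roll_biased_die(n)
    print(f"{n} rolls -> share of sixes: {rolls.count(6) / n:.3f}")
# In a dozen rolls the bias can easily be masked by chance;
# only over many rolls does the real bias show itself reliably.
```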
But what about the lightning strikes? They are fortuitous events that, obviously, strike individuals in a population randomly, unrelated to their genotypes, thus adding 'noise' to relative reproductive success and hence to allele (genetic variant) frequencies in the population over time. That noise would also be a form of true genetic drift, because it would be due to a cause unrelated to any function of the affected variants, whose frequencies would change, at least to some extent, by chance alone. A common, and not unreasonable, selectionist response is to acknowledge that, OK, there's a minor role for chance, but nonetheless, on average, over time, the more efficient version must still win out in the end: 'must', for purely physical/chemical energetic reasons if no others. That is, there can be no such thing as genetic drift on average, over the long haul. Of course, 'overall' and 'in the end' carry many unstated assumptions. Among the most problematic is that sample sizes will eventually be great enough for the underlying physical, deterministic truth to win out over the functionally unrelated, lightning-strike types of factors.
On the other hand, the neutralists argue in essence that such minuscule energetic and many other differences are simply too weak to be detected by natural selection--that is, to affect the fitness of their bearers. Our survival and reproduction are so heavily affected by those genotypes that really do affect them, that the remaining variants simply are not detectable by selection in life's real, finite daily hurly-burly competition. Their frequencies will evolve just by chance, even if the physical and energetic facts are real in molecular terms.
But to say that variants that are chemically or physically different do not affect fitness is actually a rather strong assertion! It is at best a very vague 'theory', and a very strong assumption of Newtonian (classical physics) deterministic principles. It is by no means obvious how one could ever prove that two variants have no effect.
So we have two contending viewpoints. Everyone accepts that there is a chance component in survival and reproduction. But the selectionist view sees that component as trivial in the face of the basic physical fact that two things that are different really are different, and hence must be detectable by selection; the neutralist view holds that true selective equivalence is not only possible but widespread in life.
When you think about it, both views are so vague and dogmatic that they become largely philosophical rather than actual scientific views. That's not good, if we fancy that we are actually trying to understand the real world. What is the problem with these assertions?
Can drift be proved?
Maybe the simplest thing, in an empirical setting, would be to rule out genetic drift by showing that even if the fitness difference between two genotypes is small, there is always at least some difference. But it might be easier to take the opposite approach, and prove that genetic drift exists. To do that, one must compare carriers of the different genotypes and show that in a real population context (because that's where evolution occurs) there is no--that is, zero--difference in their fitness. But to prove that something has a value of exactly zero is essentially impossible!
To return to the dice-rolling analogy: a truly unbiased die can still come up 6 a different number of times than 1/6th of the number of rolls--try any number of rolls not divisible by 6! In the absence of any true theory of causation, or perhaps to contravene the pure thermodynamic consideration that different things really are different, we have to rely on statistical comparisons among samples of individuals carrying the competing genotypes. Since the lightning-strike source of irrelevant chance effects is always present, and there is no way to know all the ways the genotypes' effects might differ truly but only slightly, we are stuck making comparisons of the realized fitness (e.g., number of surviving offspring) of the two groups. That is what evolution does, after all. But to make inferences we must apply some sort of statistical criterion, like a significance cutoff value ('p-value'), to decide. We may judge the result to be 'not different from chance', but that is an arbitrary and subjective criterion--and, in the context of these contending views, an emotional one as well. Really proving that a fitness difference is exactly zero, without any real external theory to guide us, is essentially impossible.
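A sketch of why this is hopeless, under the simplest possible assumptions--two genotypes given identical true fitness (Poisson-distributed offspring numbers) and an arbitrary sample size:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two genotypes with IDENTICAL true fitness: each parent leaves a
# Poisson(2.0) number of surviving offspring. Sample size is arbitrary.
n = 500          # carriers sampled per genotype (an assumed figure)
for trial in range(5):
    a = rng.poisson(2.0, n)          # realized offspring, genotype A
    b = rng.poisson(2.0, n)          # realized offspring, genotype B
    diff = a.mean() - b.mean()
    print(f"trial {trial}: observed fitness difference = {diff:+.3f}")
# The observed difference is essentially never exactly zero even though
# the true difference IS zero; a finite sample can only ever support
# 'not detectably different', never 'exactly equal'.
```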
All we can really hope to do, without better biological theory (if such were to exist), is to show that the fitness difference is very small. But if there is even a small difference, and it is systematic, that is the very definition of natural selection! Showing that the difference is 'systematic' is easier said than done, because there is no limit to the causal ideas we might hypothesize. We cannot repeat the study exactly, and statistical tests relate to repeatable events.
There's another element making a test of real neutrality almost impossible. We cannot sample groups of individuals who have this or that variant and who do not differ in anything else. Every organism is different, and so are the details of their environment and lifestyle experiences. So we really cannot ever prove that specific variants have no selective effect, except by this sort of weak statistical test averaging over non-replicable other effects that we assume are randomly distributed in our sample. There are so many ways that selection might operate that one cannot itemize them in a study and rule them all out. Again, selectionists can simply smile and be happy that their view is in a sense irrefutable.
A neutralist riposte to this smugness would be to say that, while it's literally true that we can't prove a variant confers exactly zero effect, we can say that it has a trivially small effect--that it is effectively neutral. But there is trouble with that argument beyond its subjectivity: the variant in question may, in other times and in other genomic or environmental contexts, have a stronger effect, and not be effectively neutral.
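Population genetics does give 'effectively neutral' a standard quantitative sense: by Kimura's diffusion result, a variant whose selective effect is much smaller than 1/(2Ne) has a fixation probability essentially indistinguishable from a neutral one's. A minimal sketch, with an assumed effective population size and taking Ne equal to the census size:

```python
import math

def p_fix(s, Ne):
    """Kimura's fixation probability for a new additive mutant at initial
    frequency 1/(2Ne), taking Ne = census size; the neutral limit is 1/(2Ne)."""
    if s == 0:
        return 1 / (2 * Ne)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * Ne * s))

Ne = 10_000                      # an assumed effective population size
neutral = 1 / (2 * Ne)
for s in (0.0, 1e-10, 1e-6, 1e-3):
    print(f"s = {s:g}: P(fix) = {p_fix(s, Ne):.3e}   (neutral = {neutral:.3e})")
# A selective difference orders of magnitude below 1/(2Ne) -- like the
# ~1e-10 'cost' of a spare nucleotide sketched earlier -- leaves the
# variant's fate essentially identical to pure chance.
```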
A related problem comes from the neutralists' own finding that by far most sequence variants seem to have no statistically discernible function or effect. That is not the same as no effect. By the usual criteria applied in bioinformatic computing--for example, that putatively neutral sites show greater variation within populations, or between species, than clearly functional elements do--genomes are loaded with nearly or essentially neutral variants. But this in no way rules out the possibility that combinations of these do-almost-nothings might together have a substantial or even predominant effect on a trait and the carriers' fitness.
After all, isn't that just what countless very large-scale GWAS have shown? Such studies repeatedly, and with great fanfare, report tens, hundreds, or even thousands of genome sites with very small but statistically identifiable individual effects--and even these together still account for only a minority of the heritability, the estimate of the overall contribution that genetic variation makes to the trait's variation. That is, it is likely that many variants that individually are not detectably different from neutral nonetheless contribute to the trait, and thus potentially to its fitness value, in a functional sense.
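A toy version of that picture is easy to simulate; every size and effect distribution below is invented purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_people, n_snps = 2_000, 1_000                    # assumed toy sizes
geno = rng.binomial(2, 0.5, (n_people, n_snps))    # 0/1/2 genotypes
beta = rng.normal(0, 0.02, n_snps)                 # many minuscule true effects
trait = geno @ beta + rng.normal(0, 1, n_people)   # polygenic signal + noise

# Single-SNP association tests, GWAS-style:
hits = sum(stats.pearsonr(geno[:, j], trait)[1] < 5e-8 for j in range(n_snps))
print("genome-wide-significant SNPs:", hits)       # typically 0

# Yet jointly the same SNPs explain real trait variance (computed here
# from the true effects, which a real study would have to estimate):
r2 = np.corrcoef(geno @ beta, trait)[0, 1] ** 2
print(f"variance explained by all SNPs together: {r2:.2f}")   # roughly 0.15-0.2
```

'Individually undetectable' and 'jointly substantial' are, in other words, perfectly compatible.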
This is one of the serious, and I think deeply misperceived, implications of the very high levels of complexity that are clearly and consistently observed. It raises the question of whether the concept of neutrality makes any empirical sense, or remains a rather metaphysical or philosophical idea. This is related to the concept of phenogenetic drift that we discussed in Part II of this series, in which the same phenotype, with its particular fitness, can be produced by a multitude of different genotypes--the underlying alleles being exchangeable. So are they neutral or not?
In the end, we must acknowledge that selective neutrality cannot be proved, and that there can always be some, even if slight, selective difference at work. Drift is apparently a mythical or even mystical, or at least metaphoric concept. We live in a selection-driven world, just as Darwin said more than a century ago. Or do we? Tune in tomorrow.
Monday, February 18, 2013
Sociobalderdash, and the Yanomami? Part I
By Ken Weiss
Napoleon Chagnon's new book, Noble Savages, is being widely reviewed and promoted, and it is great grist for the academic controversy mill. Every pop-sci author, everyone with media-assigned expertise (including some prominent university professors automatically credited with relevant insight because of some book they've written), is in on the act.
Nap -- we've known each other since we were in the Human Genetics Department at Michigan, working with Jim Neel, the leader of the biomedical studies of the Yanomami -- is not the most relaxed personality you'll ever meet. He's fiery, and he's got very strong ideas that, even when we were graduate students, we wondered might make him unsuitable as an objective observer of other cultures.
But one need not be a post-modernist to recognize that Nap was for decades the most prominent cultural anthropologist of the post-Margaret Mead era, and that he made the Yanomami one of the two most prominent 'primitive' (i.e., culturally non-technical relative to us) peoples in public and even professional awareness. The Yanomami were #1 by far, I think, with the Kalahari San ('bushmen') of Africa #2--displacing the prior era's more numerous but less publicly known tribal populations, visited by anthropologists in the colonial era.
Nap's interpretation of the Yanomami was a reflection of his time. Animal behavior was being studied widely, and interpreted in the Darwinian context of attempts to explain behavior in survival-of-the-fittest (SOTF) terms--that is, the traits we see today were assumed to be due to past natural selection essentially for the trait per se. The term 'sociobiology' was coined by EO Wilson some years later, but the idea was already rampant.
The question being studied involved many different components, one of which was a genetic question related to issues of the amount of harmful genetic variation that our primitive ancestors carried in their populations (related, at the time, to what chemical and nuclear fallout might be doing to our much larger and more socially complex populations). Looking at (or, perhaps more accurately, for) cultures today that were frozen replicates of our past was an objective of the evolutionary perspective of anthropology in the '60s and for a while thereafter.
Anthropological views and strategies on behavioral evolution
Rather than laboratory experiments, a prominent idea in anthropology at the time was that primates studied in the wild could show us how population structure evolved--how open vs forest environments led to selection for this or that kind of population size, territoriality, male dominance hierarchy, and the like. Books reporting fascinating field studies, and offering captivatingly simple Darwinian explanations, were rife.
Male dominance hierarchies suited the Hobbesian, Darwinian SOTF framing. One tough guy (or alpha chimp, baboon, or whatever else that swimmeth in the sea or creepeth on the face of the earth) intimidated all the other guys and did all the rutting. This very effectively spread tough-guy genes, leading us to be the way we are today.
Unfortunately, how we really are can't be seen in modern complex societies: too many ways to reduce one guy's reproductive success, too many hospitals to take care of the weak, or soup kitchens for the needy. So, to see how we really are, we needed the frozen cultural fossils of our ancestry, and they could only be found in the most remote of places.
Neel, seeking to understand the biomedical implications of the Big Man theory, and Chagnon, seeking to understand it culturally, made a very successful team. I'll talk about what they actually did, found, or argued tomorrow, but here I want to go over just a bit of the reaction to Nap's book.
Flying fruit: anthropological food-fighting
I haven't read the book. But it's clear from reviews and stories in major media that, in essence, Nap is ranting about the way his work has been treated in recent years. Anthropological opponents--who don't like Nap's aggressive personality, or don't like the idea that people might fight over resources, or don't like anthropologists mucking about and stirring up the natives--accused him of seriously damaging the Yanomami, in some ways lethally if inadvertently, with avoidable and predictable effects, and of nefarious practice.
Nap responds that his opponents tried to vilify him within his profession, cowed a main professional organization into buying the accusations, and have done him dirty. What really happened to the Yanomami, and Nap's role in it (as alleged in Patrick Tierney's book Darkness in El Dorado roughly a decade ago), is disputed. The biomedical accusations--such as that the team gave measles to the Indians, who had not previously been exposed, to see how they would react--were manifestly false, as I know personally from discussions with Neel and one of his main field companions, and from seeing their field notes.
But if Chagnon has his enemies, he also has his supporters, in what has become an archetype of a professional food-fight gone viral. He was, at an advanced age and long after his work itself was done, elected to the National Academy of Sciences last year. That election must have a symbolic political character: the current NAS anthropology membership rather predominantly favors the Darwinian viewpoint of behavior, and Nap's election (which would have been fully appropriate decades ago, without a political aroma) has to be seen as a gesture in the context of his recent fights within Anthropology. It gives a joyful finger in the eye to Nap's opponents--and many would argue a well-deserved symbolic finger at their demagogic treatment of him. And this new book is his attempt to revive his reputation. Based on the reviews and articles about it, nobody will mistake it for an objective factual treatise. He is as feisty as ever.
A major explanation for the criticism to which he's been subjected, and a major element of his defense, is that many anthropologists just can't buy sociobiological theorizing about human culture, nor his use of the Yanomami as a valid, even archetypal, 'primitive' people. He argues that his antagonists simply can't abide the idea that Darwinian evolution has made us culturally and behaviorally, as well as physically, the way we are. That's true, whether one gives credence to the critics' viewpoint or not.
So regardless of whether his work directly or indirectly harmed the Yanomami--questions that involve legitimate ethical issues--the heat of the attacks has always been, I think, aimed at his justification of violence and inequality as inherent in our nature, for reasons he claims his studies of the Yanomami show.
Tomorrow, we'll look at some of those issues themselves.
Friday, March 23, 2012
You look just like.... well, almost like
[Image: female hoverfly on a cistus flower; Wikimedia]
Ken's point in his post was that, yes, Darwinians have explained these as examples of very clever adaptation, a way to outwit predators and increase the odds of survival, but that in fact these ruses aren't always all that successful. As he said, after describing some examples of very effective mimicry in butterflies that he had come across in his travels, "...while I did see the effectiveness of protective coloration in these two instances, I also did, after all, see the butterflies. I wasn't completely fooled."
Indeed, if it's so effective, why haven't all species evolved protective coloration? The Darwinian's answer is that it's only one kind of adaptation, and there are many others: each trait has its own adaptive purpose, and it is the job of science to uncover what that is. But, as Ken also pointed out, most adaptive explanations can't be confirmed, no matter how plausible they seem. Furthermore, whether an organism ends up eater or eatee may typically be due largely to chance, and the genetic contribution is usually very slight and essentially undetectable (even industrial melanism in moths, recently confirmed statistically with new data, took a lot of work). And the assumption that natural selection detects all functional genetic variation is simply that--an assumption--but it makes Just-so stories about adaptation unfalsifiable.
We read the Nature pieces on mimicry, then, with this in mind. The question posed in the Penney et al. paper is why some harmless hoverflies are such good mimics of stinging Hymenoptera (wasps or bees), while others are much less convincing. They point out that evolutionary theory about mimicry assumes copies should be pretty exact, yet examples of inexact copying abound (though there does come a point where one would ask how we're sure something is in fact a mimic, or what 'exact' even means in this kind of situation).
Explanations of poor mimicry include: that it may look poor to us but be good enough to fool a predator; that imperfect mimicry is even more adaptive than perfect mimicry; that imperfect mimicry benefits co-specifics (a kin selection argument); that there are constraints on the evolution of a more perfect copy; or relaxed selection, whereby selection for mimicry becomes weak enough that it is "counteracted by weak selection or mutation"--that is, there's no selective benefit to refining the mimic further.
To try to determine which of these explains the poor hoverfly mimics, Penney et al. used "subjective human rankings of mimetic fidelity...across a range of species" of two different taxa (Syrphidae and Hymenoptera) and compared them for consistency against a statistical analysis of trait values. They found a strong positive relationship between body size and mimetic fidelity, and suggest that "body size influences predation behaviour and thereby the intensity of selection for more perfect mimicry."
The idea is that the larger the prey, the more benefit to the predator, and thus the more urgently the prey needs a way to avoid the predator. Smaller or more abundant hoverflies needn't spend so much energy trying to fool a predator: each insect is less likely to be preyed upon because there are more of them to choose from, or because it is less of a mouthful, and so less desirable.
So, they explain imperfect mimicry as the relaxation of selection on mimicry, though they do not find this counteracted by weak selection or mutation, and they do not reject the constraints hypothesis. They conclude that "reduced predation pressure on less profitable prey species limits the selection for mimetic perfection."
The same explanation always, but always different
Notice that each of the five possible explanations they offer assumes that selection of some sort must be the explanation, if only they knew which. This is the Darwinist assumption: that the ground-state of life is competitive natural selection. If one selection story is shown to be wrong, then it must be another, as we've seen in this case. This explanatory tack is very widely accepted--indeed, assumed without question. But the assumption of selective adaptation is not itself ever questioned. Is it true?
More accurate than that assumption is that the ground state of life is existence--or, over time, persistence. Whatever reproduces, reproduces. This is an assumption too, and it is testable... but it isn't very helpful in and of itself. We can go a bit further: there is differential reproduction among differing genotypes, but even in a totally non-selective world this would be the case (in formal terms, genetic drift is inevitable). Sometimes success may be due to a systematic relationship between the state that succeeds and its success--and that's natural selection--but this need not be the case. The question is when, and to what extent, predictable, non-random change is occurring, and that is not at all easy to show most of the time.
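That drift is inevitable in any finite population is easy to see in the standard Wright-Fisher caricature, simulated here with zero selection (population size and run length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(7)

# Wright-Fisher reproduction with NO selection: every gene copy is
# exactly equivalent, and each generation is a pure random draw.
copies = 2 * 1_000            # gene copies in a diploid population of 1,000
p = 0.5                       # starting allele frequency
for generation in range(200):
    p = rng.binomial(copies, p) / copies
print(f"frequency after 200 generations: {p:.3f}")
# The frequency wanders -- and, run long enough, eventually hits 0 or 1 --
# purely from the chance of who happens to reproduce. Drift is built into
# finite populations; it is not an extra assumption.
```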
More profoundly, selection need not be (as Darwin seriously thought, and as most modern biologists accept without thinking seriously about) the force-like phenomenon it is usually, if often tacitly, assumed to be. It can be weak, ephemeral, local, moveable, and even probabilistic. Even from a purely selectionist point of view, all sorts of species with all sorts of variation are reproducing in all sorts of ways in relation to all sorts of factors -- including each other. There is no reason to expect that single factors, alone, will necessarily motor on through with some clear-cut force-like trajectory of change. These statements are not at all controversial, yet seem to be in effect ignored when each investigator's favorite trait is being evaluated 'on the margins' as one would say in statistics: that is, evaluated in isolation, on its own.
We have a great parallel here with the polygenic causation that is so pervasive, and that frustrates GWAS, as we've said countless times here. With polyfactorial ecologies, what oozes through the blizzard of factors will not necessarily be simple or explicable in terms of one factor on its own--say, looking like something else. This is a very different perspective from trying to analyze everything as if it is the type of selection we have to identify, or to explain why, surprisingly, it's not perfect.
Think of it in this very simple way. It is almost always possible to change most traits; experimentally, this is reflected by the fact that most traits respond to artificial selection. In this case, that means it should always be possible for natural selection to lead to change in ways that make any species that somebody else eats look more like the background of where it lives (even against bacterial predators, some form of camouflage defense at the molecular level should always be possible). If the selectionist stories are accurate--if camouflage increases your odds of living to frolic another day--then every species should be camouflaged, and should normally dwell on its match.
This is so clearly not how nature is that one wonders why fundamentalistic Darwinism ever took hold, even with Darwin himself. Why isn't everything camouflaged? The answer, which we referred to casually as the 'slop' of nature in earlier posts (e.g., here), is that evolution is persistence in the entire ecology of each organism, and something seemingly as obvious as mimicry clearly does happen sometimes. But not most of the time. Or, each trait in each organism can be argued to have some such story. That is so ad hoc, or perhaps post hoc, that it has a resemblance to creationism--in the epistemological sense of being something in which every individual observation has the same explanation (selection made it so), no matter what the details. If we assume selection, we can always adjust our explanations for what's here to fit some selection-story. One simple component of this, obviously, is that the predators are evolving their detection ability as well. It's all an endless dance, and this is not controversial biology, but is within what we know very well.
Biology should grow up. The ground state of life is persistence, however it happens. And stuff happens. There are lots of ways to persist. Selection is one, but it's only one, and it is itself a subtle, moveable feast.
Thursday, June 16, 2011
Oh, what a tangled web we weave
By Ken Weiss
Walter Scott is responsible for the famous line: 'Oh, what a tangled web we weave, when first we practice to deceive.' It sounds profound, but are these wise words, or just bollocks?
Here, at least, is the latest in the 'evolution was just like this' department. A study on reasoning, part of an issue on the subject in the Journal of Behavioral and Brain Sciences, concludes that reasoning evolved for deceiving rather than for telling the truth. At least the authors of this new just-so story don't claim it's genetic: deceptive rhetoric evolved socially.
As the NYTimes reports, it has long been assumed that reasoning evolved to enable us to search for and determine Truth. But,
[n]ow some researchers are suggesting that reason evolved for a completely different purpose: to win arguments. Rationality, by this yardstick (and irrationality too, but we’ll get to that) is nothing more or less than a servant of the hard-wired compulsion to triumph in the debating arena. According to this view, bias, lack of logic and other supposed flaws that pollute the stream of reason are instead social adaptations that enable one group to persuade (and defeat) another. Certitude works, however sharply it may depart from the truth.
The idea, labeled the argumentative theory of reasoning, is the brainchild of French cognitive social scientists, and it has stirred excited discussion (and appalled dissent) among philosophers, political scientists, educators and psychologists, some of whom say it offers profound insight into the way people think and behave.

Now whether this is true in all societies--a uniform cultural evolution, or a parallel one (similar in, say, Pacific Islanders and African Ashanti or San, but of independent social-evolutionary origin)--is a valid question. In fact, much as we hate to take sides with the genetic-evolutionary view (!), there have been numerous arguments that human language is basically an extension of eons-old display tactics designed to intimidate or deceive, as in mating competition. Such things are not new to humans, or even to primates, so if the trait is biological it predates our species and the explanation is more general.
However, let's ignore whether it's cultural or biological. The same people who fervidly see competition everywhere--the arch-Darwinian view of life--will love the idea that we reason in order to dissemble and deceive. If you're looking for a rival under every bed, you'll certainly go along with the idea that we use language (reasoning, persuasion, rhetoric) to distract, derail, or deceive potential competitors: you really do, as the article says, want to win rather than to inform others.
This certainly is one way language is used (at least in cultures in nation-states, where we have daily evidence). But is it 'the' truth? Is it part of culture, or only of some cultures....or did cultures evolve reasoning 'for' deception per se?
It is easy to see a polar opposite to the latest assault of selectionism. If you deceive, you can cause others to come to loss or grief. Why should they not have a long memory, that they'll use to even the score later? Why isn't truth good for the group, and deception a way to make everyone vulnerable? Our ancestors--including primates--lived in very small local groups. They might be very vulnerable to internal misinformation. Why is telling the truth institutionalized in many if not most cultural norms--what children are taught, for example, even if we're not perfect at it? Is that because if everyone is convinced you're truth-telling, it's easier for you to mislead? Or is it because truth-telling really is what's important, and you mutually have to rely on it for your survival?
Further, what is truth and how do you know what people's motives are (whether they are even aware of them or not)? Since 'truth' is what we think we perceive, and since we always have imperfect data, imperfect perception, and imperfect intelligence, why should we assume that 'reasoning' is false rather than flawed? After all, even 'experts' in most fields related to behavior (to wit: economists, pundits) are grossly wrong presumably because of ignorance or bias rather than intentional deception. One might suggest that the journal article's authors' interpretation of the intent in reasoning is more a reflection of their ideologies and biases, than it is of the underlying truth (or are we just saying that to make you believe us??).
Anyway, why should our ability to reason have evolved for only one purpose? Did our hands evolve just to let us hunt? Like most traits, reason is multi-purpose, and can be as useful for cooperation as well as competition -- and many other things. It's how we express how we assess our environment, our circumstances. Often it is verbal expression, imperfectly representing our internal thoughts. We can reason our way to figuring out from these footprints what kind of prey we're likely to find if we hang out long enough at the waterhole, or that that plant is poisonous, or to formulating a mathematical proof just as we can clap with our hands, help another across a creek, and caress a child's cheek. And, it's not just that the functions other than winning arguments are exaptations (purposes beyond the one the trait first evolved 'for') of the ability to reason, because ants and bees can reason. Or is deceptive reasoning limited, by social constraints, to certain but not all topics of conversation?
The choice of a single overriding purpose for our ability to reason says more about those doing the choosing than it does about the trait.
Monday, June 29, 2009
The Centrality of Cooperation in Life: a first installment
A recent story in the New York Times science section about the importance of cooperation in ant colonies reminded us that we've been focused on things like disease and genetic causation in our blog for a while now. So we thought it was time to get back to other things, such as the importance of cooperation in all of biology, not just in ants.
Cooperation was in the subtitle of our book, and for a reason. Ever since Darwin's Origin of Species, whose 150th anniversary is rightly being celebrated this year, there has been what we think is an excessive belief that competition is central to the nature of life. In biology, this ethos is largely about the way that Darwinian evolution--with its stress on competition among individuals within a population, its genetically determined winners and losers, its inherent goods and bads--has fit our industrial culture's worldview. It is a convenient way to rationalize, and hence justify, self-interested gain by a few against the many.
Nobody can deny that there is competition in life, in the sense that some individuals do better at reproducing than others. Species have their day, and fade as other species flourish. It's an important mechanism for biological change and was a brilliant insight of Darwin's, as well as of others in his time (Wallace's attention was more on group competition against environmental limitations than on individual competition). It helped demystify the origin and nature of life and its diversity.
However, it's not the whole truth about life. Instead, we think, a focus on competition draws disproportionate attention to the long-term historical aspects of life--even where the Darwinian explanation is accurate!--rather than to what we can easily see every day before our very eyes. What we see everywhere, every day, is mainly cooperation: among molecules within each of our cells, among cells within each individual, and among individuals. In a deep biological sense, if not one that fits the value-loaded human word 'cooperation', even predator and prey must cooperate: both must be present for each to survive.
Classical evolutionary theory, and a lot of popular science writing based on it, assumes competition to be the fundamental force in life. But just as essential to life is the necessity of cooperation, at all times and at all these levels--among genes in genetic pathways, among organelles in cells, among cells, tissues and organs, and, in ecosystems, among organisms within and between species.
And, our focus on cooperation leads us to a different view of natural selection and its importance in evolution. As we say in The Mermaid's Tale, natural selection does happen, but it depends on a lot of if's. For example, if a species over-reproduces, and if there is variation in the next generation, and if some of that variation leads its bearers to do better in a given environment, and if that's due to the inherited genome, and if the environment remains stable for long enough that this variant is favored consistently, and if the favored forms reproduce successfully, as do their offspring, and if they produce more offspring than organisms without the favored variant, then these favored organisms may become more common, due to natural selection. That is, they will be better adapted to their environment. But, all these if's must co-occur for natural selection to be an important force in change. If they are sporadic, or varying in nature and intensity, then their relative importance diminishes in relation to other aspects of life, including chance. Indeed, distinguishing chance from natural selection is no simple challenge.
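One way to see the force of all those if's: a sketch in which a beneficial variant's selection pressure holds only in some randomly chosen fraction of generations, with random (Wright-Fisher) reproduction the rest of the time. Every parameter below is an illustrative assumption, not an estimate:

```python
import numpy as np

rng = np.random.default_rng(3)

def final_freq(s, on_fraction, N=1_000, gens=1_000, p0=0.05):
    """Allele frequency after `gens` generations when selection of
    strength s operates only in a random `on_fraction` of generations."""
    p = p0
    for _ in range(gens):
        if rng.random() < on_fraction:        # do the 'if's hold this generation?
            p = p * (1 + s) / (1 + s * p)     # one round of selection
        p = rng.binomial(2 * N, p) / (2 * N)  # then random sampling (drift)
        if p in (0.0, 1.0):                   # allele lost or fixed
            break
    return p

for frac in (1.0, 0.2, 0.02):
    runs = [final_freq(0.02, frac) for _ in range(200)]
    fixed = np.mean([r == 1.0 for r in runs])
    print(f"selection acting in {frac:.0%} of generations -> fixed in {fixed:.0%} of runs")
# When the chain of if's holds only sporadically, the 'favored' variant's
# fate is governed mostly by chance, not by its advantage.
```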
No matter, to understand life in a deep sense one really has first to understand the nature of the countless cooperative interactions on which it is based. How those interactions change over long time periods of generations of cells, organisms, species, and ecosystems is important. But the interactions, and how they organize life, come first.
Wednesday, May 27, 2009
How long would it take to walk to the ends of the Earth?
"Who started walking out of Africa? Not everybody walked. Very few people walked, and they walked to the ends of the Earth. The geneticists now have found that the people who migrated had a particular gene, and this gene makes you a risk-taker, a wanderer. Bipolarity is an extreme version of this gene, and the people who migrated, they had this gene."This is a quote from Indian political economist, Deepak Lal, on the BBC radio show, The Forum, which aired 05/24/09. Lal went on to say that the people who inherited this gene for risk-taking are now ethnic minorities, which explains why they tend to "punch above their weight".
But the people who migrated out of Africa are ancestors of us all. If they had a gene 'for' risk-taking, all of us would have it, not just today's ethnic minorities. Nor can it be argued that ethnic minorities tend to succeed more often than majority populations for genetic reasons, because the minorities are, or were, majorities somewhere at some time.
It's amazing how misguided smart people can be about genetics and population history. Granted, Lal is an economist, not a population geneticist or anthropologist--but he spoke so authoritatively! Darwinian scenarios, of the BS as well as the plausible variety, are so easy to put forth with an air of confidence! But is this all air, or does it actually matter when people get it so wrong? We suggest that it does matter, because such claims are made in many contexts, with real ramifications, not just on BBC radio programmes.
First, a gene 'for' risk-taking has been reported, but the original study has been difficult to replicate, in part because it's difficult to define risk-taking, and in part because genes 'for' behavior have been quite elusive. Populations with a high frequency of the DRD4 mutations originally associated with risk-taking, and considered high risk-takers or violent by some, might be considered pacific by others, for example. So this trait is, like most complex traits, difficult to define, and genes 'for' it difficult to confirm. If one thinks about genetic mechanisms, many genes would contribute to such traits, and their manifestation, at least, would be very culture-specific.
Second, genes 'for' bipolarity, again as for all complex traits, have not been confirmed. Mapping genes making major contributions to psychiatric disorders has been very problematic, at best. Some candidates have been found, but even they generally account for only a small fraction of the instances.
Third, ethnic minorities are only such at certain times and places.
Fourth, can we really conclude that ethnic minorities are all feisty? It's funny how people with guns, money, or power can seem so much more adventurous than those who haven't the same assets.
These simple (or is it simplistic?) Darwinian arguments reflect incredibly naive population biology. Basically, nobody 'walked out of Africa' in the alleged sense. People did not have travel posters or travel agents. They might go upstream, or 'over there', to find game or plants they liked to eat. If nobody was living there, they might settle in; if only brute Homo erectines were, they might decide to kick them out and take over a nice campsite. Nobody had ever heard of the ends of the earth! They did not know there were island paradises in the Pacific to vacation at.
Young adults would pair up and move to better pastures, perhaps to get away from their annoying parents or pests in their little hunter-gatherer bands. They would, however, generally stay in touch--not go too far, because family was everything for social survival (and mates in the next generation). Gradual expansion would be on foot and in a sense by everyone, going over the next hill for open territory. Nobody would know they were going anywhere!
Each generation, village exogamy (kin-based mating rules) would mean that the explorers' children would have to return to their former hearth and home to find mates. There would be large amounts of inter-village gene flow, back and forth, each generation.
It is possible that those genetically more inclined to wander had some slightly increased probability of being among those who popped over the hill. But in the overall scheme of things, genetic drift and other kinds of chance would have watered down any such signal.
The genetic data clearly show that the farther from Africa people are, the smaller the subset of 'African' genetic variants they carry. This does not mean that they carry purified African adventure genes: those in Tierra del Fuego aren't purer adventurers, nor more adventurous than the Bantu marauders who captured each other in wars and sold the victims into slavery (unless there is an adventure gene in the slaves that made them turn themselves in?).
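The cline itself needs no adventure genes at all. Here is a sketch, again with invented toy parameters, of the serial founder effect: at each expansion step a smallish band buds off from the previous group, and genetic diversity erodes by sampling alone:

import numpy as np

rng = np.random.default_rng(0)

def serial_founders(n_loci=5000, p0=0.5, founders=25, steps=15):
    # Each expansion step, a band of 25 diploid founders (50 chromosomes)
    # buds off from the previous deme; expected heterozygosity drops by
    # roughly 1/(2*founders) per step, from sampling alone.
    p = np.full(n_loci, p0)
    for step in range(1, steps + 1):
        p = rng.binomial(2 * founders, p) / (2 * founders)
        het = np.mean(2 * p * (1 - p))
        print(f"founder event {step:2d}: mean heterozygosity = {het:.3f}")

serial_founders()

No locus in this toy model does anything; the decline in diversity with distance from the origin is pure sampling.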
In historic times, and in invasions, hordes of armies or boatloads of pilgrims were forced to travel and settle elsewhere, in large numbers. They were not selected based on Indiana Jones genes.
We'll be a lot better off in evolutionary reconstructions, especially as regards behavior, where the stories clearly reflect the social biases of their authors, if we temper our views with a realistic understanding of the demographic chaos that is life!
Tuesday, May 26, 2009
For God's sake!
By Ken Weiss
We have just reviewed a book by CE Cosans, called Owen's Ape & Darwin's Bulldog: Beyond Darwinism and Creationism (Indiana Univ Press, 2009). It's an interesting discussion of the famous debate between Darwin's public (pugilistic) advocate, TH Huxley, and Darwin's opponent Richard Owen (founder of the British Natural History Museum). The question was whether humans have distinct anatomical characters compared to apes (genes were not known at the time).
The brain, naturally, was the organ of interest. Huxley claimed that there were only quantitative differences between the anatomy of ape and human brains. Owen claimed distinct differences, including a structure called the hippocampus minor. The debate was long, public, and bitter. In the end, because Darwinism won the nature-of-life debate, Huxley is treated as the winner of this debate, too. But Cosans provides good evidence that, considering their worldviews and what they actually said, Owen's interpretation was perhaps closer to the anatomical truth. Cosans analyzes the history in terms of theories about science and its relationship to the empirical world and our interpretations of it.
The analysis of this history is fine, but Cosans also delves into what Darwin, Huxley, and Owen believed about God (and hence his subtitle). This is only peripherally relevant to the main event, the debate about anatomy. It makes a saleable subtitle, and we guess that these days nobody can simply leave the religion vs 'creationism' fight alone.
The point here, however, is not who believes what and why in that regard, but that the religion debate provides a distraction, one we constantly see these days, from the merits of the various scientific cases. In particular, we should no longer be concerned as scientists about what Darwin's or Huxley's personal religious views were. Darwin's writing is of interest to history, certainly, but not as a sacred text (though one to be revered, to be sure!).
What we think today about biology and its nature needs to be evaluated in terms of what we know and what we can testably speculate about. The gravitational pull of gossipy food-fights is natural, since even scientists are human. We have enough trouble being objective about the science itself, such as the relative roles of natural selection, population structure, environmental change, and chance in the nature of life. But too often we relate these discussions to irrelevant, but often highly emotive, sideshows.
We don't need that in order to struggle with truly fascinating questions such as what makes humans seem so different from other animals, despite great similarities in our genomes!
Saturday, May 23, 2009
Genetic leaf-litter
By Ken Weiss
There are many ways in which everyone is a conceptual prisoner, encaged in culturally based limits. We are born to, trained in, and entrained by our circumstances, and these in turn are a legacy of history. We can try to escape from this, but probably the most we can hope for is to keep subtle assumptions and constraints at bay. In genetics, there is a pervasive concept of the 'wild type', a concept that goes back into the history of genetic research. It refers to the natural allele at a gene, favored by a history of selection, relative to which other alleles (mutational variants) were viewed as generally rare and harmful (soon to be removed by natural selection).
There is a tacit extension of this gene-specific concept to the whole genome (or even organism), as when 'normal' inbred laboratory mice are referred to as the 'wild type' relative to an experimental modification such as a transgenic gene-knockout mouse of the same strain. Sometimes this is clear shorthand, but beware of conceptual shorthand! An implication of this kind of genetic thinking is that in regard to human traits, and especially disease, there is the normal human genome, as represented by 'the' human genome sequence available in genome databases, and then there are the disease-causing mutants. But in fact genomes are very large sequences of DNA that serve as targets for mutation in every cell, every individual, every generation.
We know that biological traits are the result of developmental processes that include countless genes (of the classical protein-coding type as well as many other functional DNA sequence elements). Species contain large numbers of members--there are about 7 billion of us humans stalking the Earth. What this means is that there is a potentially huge amount of variation at most if not all viable spots in our genome. After a mutation occurs, it may proliferate if its bearer successfully reproduces. Over time, some of these alleles grow in frequency to become quite common.

When genomic DNA is sequenced in a number of individuals, this variation is easily detected. But whether affected by natural selection or just by the chance aspects of reproductive success or failure, most allelic variation that is present in genomes at any given time is rare. Relative to the more common variants, this genetic variation is a kind of leaf-litter of variation. With hundreds of thousands or, indeed, hundreds of millions of very rare variants present in our species, even a small sample will pick up some of them by chance.
In a small sample, those will seem to be more common than they are. If we sequenced 5 people (10 copies of the genome), a lucky variant whose true population frequency is only a few in a billion, but which by chance is carried by one of the 5 people we sample, will seem to have a frequency of at least 10% (one copy of the 10 we sampled being the variant). The tip-off that this genomic leaf-litter exists is that most of the variants are not seen in other samples, or, if common enough to be sampled more than once, are usually seen only in samples from the same geographic region (because that's where they arose as new mutations, and were transmitted to descendants who remained living on the same continent). In developed countries, variants that cause disease will show up in specialty clinics at major medical centers.
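A toy calculation makes the distortion plain. Here we invent a million variant sites with true frequencies spread over several orders of magnitude (an assumption for illustration, not an estimate of the real frequency spectrum), then 'sequence' 10 chromosomes:

import numpy as np

rng = np.random.default_rng(42)

# A million hypothetical variant sites with true frequencies spread
# from 1-in-10-million up to 10% (invented spectrum, for illustration).
true_freqs = 10 ** rng.uniform(-7, -1, size=1_000_000)

n_chrom = 10                                # 5 sequenced people
counts = rng.binomial(n_chrom, true_freqs)  # copies caught in the sample
seen = counts > 0

print("variants observed at all:", seen.sum())
print("median TRUE frequency of those observed:", np.median(true_freqs[seen]))
print("minimum APPARENT frequency in the sample:", 1 / n_chrom)

Every variant that shows up at all registers at an apparent frequency of 10% or more, however rare it truly is.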
In trying to find variants by mapping, as in genomewide association studies (GWAS) that compare sequences between cases and controls, we may feel that we have so far detected the common, but not all the rare causal variants that exist. But we may also feel that if we can just enlarge our samples, we'll get a much better handle on the nature of the effects of these variants, or we'll detect the remaining variants that haven't yet been detected.
This is likely to be an illusion, as a growing number of us argue: very large GWAS will not bring a big payoff of the kind envisioned and promised by those who advocate this kind of project. There are several reasons for this skepticism. First, it is hard to detect rare things with statistical significance, much less to get a good idea of their effects and action. One needs huge samples to get enough instances to show that the variant is meaningfully more common in cases than controls.
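For a sense of scale, here is a rough normal-approximation sample-size sketch (the figures are illustrative; 5e-8 is the conventional genome-wide significance threshold):

from math import sqrt
from scipy.stats import norm

def n_per_group(p_ctrl, p_case, alpha=5e-8, power=0.8):
    # Standard two-proportion normal-approximation sample size, per group.
    z_a = norm.isf(alpha / 2)  # two-sided genome-wide threshold
    z_b = norm.ppf(power)
    p_bar = (p_ctrl + p_case) / 2
    top = (z_a * sqrt(2 * p_bar * (1 - p_bar)) +
           z_b * sqrt(p_ctrl * (1 - p_ctrl) + p_case * (1 - p_case))) ** 2
    return top / (p_ctrl - p_case) ** 2

# A variant at 0.1% frequency in controls, doubled to 0.2% in cases:
print(f"needed per group: about {n_per_group(0.001, 0.002):,.0f}")

That is on the order of a hundred thousand cases, and as many controls, for a single rare variant that doubles in frequency in cases.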
But second, the leaf-litter phenomenon means that as sample sizes increase, more and more, ever rarer, variants will be picked up. It will be difficult to show clearly that they are causally involved with our trait, and even if they are, they will have less and less effect on public health. They will vary from population to population, and from sample to sample within the same population. Environments may affect whether carriers of a variant manifest the disease, and most such variation will at most have minor effect on risk of disease (if the effect were stronger, the allele would have been removed by selection, or we would have been able to detect it in family studies).
And if it requires more than one such variant, or even many of them, to combine to produce disease, the detection and evaluation situation will be that much more challenging, if not pointless. There will always be exceptions, as is true about the nature of life. But the leaf-litter phenomenon is real, and there is plenty of evidence for it. It is predicted by population genetics theory. And it is consistent with the results of mapping studies done to date. Ironically, perhaps, while the individual rare alleles have little detectable effect, their aggregate effects in the population may account for the observed heritability (familial aggregation of risk, or similarity of trait values) of most traits, including disease. That heritability, which is clearly there, is what has been considered mysterious given the failure of linkage or GWAS studies to find the genes that are responsible.
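The aggregate-versus-individual point can be sketched too. Below, two thousand invented rare variants each nudge a trait; together they produce a respectable heritability, yet no single one of them explains a detectable share (all parameters are made up for illustration):

import numpy as np

rng = np.random.default_rng(7)

n_people, n_variants = 5000, 2000
freqs = rng.uniform(0.001, 0.005, n_variants)             # all rare
genos = rng.binomial(2, freqs, size=(n_people, n_variants))
effects = rng.normal(0, 1, n_variants)                    # tiny nudges each

genetic = genos @ effects
trait = genetic + rng.normal(0, genetic.std(), n_people)  # ~50% heritable

print(f"aggregate heritability: {genetic.var() / trait.var():.2f}")

# Yet any single variant explains almost nothing of the trait:
r2 = [np.corrcoef(genos[:, j], trait)[0, 1] ** 2
      for j in range(200) if genos[:, j].std() > 0]
print(f"median single-variant r-squared: {np.median(r2):.5f}")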
We are presented with a kind of epistemological paradox: the genetic variation exists, but we may face insurmountable challenges in finding most of it. Indeed, it is somewhat mystical even to argue that it exists as individual effects, if they cannot be found or replicated by current statistical genetic methods. Evolution 'cares' about reproductive success, not about simplicity in genetic causation. From a population perspective, evolution occurs because mutations keep generating variation on which selection and chance can act from one generation to the next.
Genetic leaf-litter is thus the fuel for evolution. We may care to know the cause of each instance of a trait or disease, but Nature has only cared about viability and success, and tolerates the leaf-litter. As massive amounts of human DNA sequence are produced, we will see this. It will be an incredible playground for population and evolutionary geneticists. But what we do with it, in terms of identifying disease causation, is not clear.
Monday, May 18, 2009
The strident atheist scientists
By Ken Weiss
The BBC has a 5-minute interview program, and on May 16 their website aired such an interview with Richard Dawkins. Although a public figure, he is, ironically, a professor of the Public Understanding of Science at Oxford. That is unfortunate, because he, like many others in science these days (especially evolutionary biologists), is cashing in on a kind of strident atheism that is a bad misrepresentation of science, just the opposite of what they should be doing.
Dawkins said in this interview, as he has elsewhere, that he doesn't believe in something for which there is no evidence. Therefore, he says, he shouldn't have to deal with God any more than with the Flying Spaghetti Monster. Of course, he has every right to doubt the claims of religion (or tooth fairies, or Santa Claus), or the literal truth of religions based on sacred texts that purport to relate to the real world. Many scientists, who spend their lives trying very hard to understand the world, share his skepticism about the claims of religion and other 'mystical' kinds of phenomena.
But it's very bad science! We should be educating the public--and ourselves--as to what science as we know it is really about, not propagandizing our personal views on the world (or, at least, we should be clear that those are our views, and not based on science per se). The reason is quite simple, but we think just as important.
Science uses an array of methods to study a range of phenomena and has some semi-formal rules of decision-making (called the 'scientific method'). We agree, as a culture, that the principles of things like mathematics and our idea about logical reasoning are true. We agree on acceptable forms of measurement, data collection, data analysis for drawing both specific observational conclusions as well as generalizations (theories) about Nature. But therein lies the most important issue.
Dawkins and the professional atheists say that as scientists they don't believe in that for which there is no evidence. But what they really mean, and, to give them credit, may not even be aware of, is that they don't accept things for which there is no acceptable evidence. And that is a kind of tautology. Something is defined as 'true' (or, at least, plausible) if and only if it is within the realm of their kind of evidence.
There's nothing at all wrong with doing this, but it does make science an axiomatic system for viewing some particular aspects of the world. What is wrong is not recognizing that this is what they are doing. In fact, there is a great deal of evidence for religious claims--even if that doesn't legitimately apply to claims such as the Biblical fundamentalists make about young earth and intelligent design.
Hundreds of millions of people claim that they have had what scientists would call mystical experiences. Many say they communicate directly with God. Whether that's literally true or not is, in fact, beyond the scope of current science. The reason is that current science doesn't deal with, and many if not most scientists won't accept, such types of subjective, personalized evidence as evidence.
Science says it deals with the material world, and that communicating with God is not material, and therefore not real in the sense of materialism. But what is 'material'? Before the discovery of, say, electromagnetic radiation, we could not deal with unobservable aspects of it (like parts of the spectrum we can't see). Dark matter in space is another kind of example. We can't see it in the usual way, and only claim it's there because it seems to affect the pattern of radiation as it passes to us through the cosmos.
Once a phenomenon is discovered and can be measured, it becomes part of the 'material' world and joins the panoply of scientific causes. We can never know what things we don't yet know, and it is a kind of arrogance to assert that things we don't know by our chosen way-of-knowing aren't true--or that there is 'no evidence' for them.
We cannot challenge the atheists' skepticism about the often patently false material claims of religion. But it is badly misleading the public 'understanding' of science to present science as if it is the only way of understanding the realities of existence.
What we are saying provides absolutely no support for religious or other mystical claims! We cannot, and do not, imply that claims of personal contacts with God are true or even what 'true' would mean in this case. The mistake Dawkins and other strident scientist-atheists make does not in any way lead to the conclusion that Genesis may be true after all!
But scientists and the public alike should realize that science is a part of our culture, not something handed down to us from the outside (by whom? God??)! It is a particular, and hence limited, way of characterizing the world. In terms of manipulating the world, it is immensely powerful. And it does seem to provide ever-better accounts of what we see around us. But it is a system that is based on a set of accepted rules. Religion and its kin are in a similar way cultural phenomena.
To a hard-nosed scientist, the comforting claims of religion don't seem to ring true, and that can be a depressing thought, given the pain and suffering that clearly are part of reality. Atheism is easy to understand. It is easy to make the personal judgment that claims of speaking with God are illusions. But personal conviction is not the same as science, and not what science is all about. If you don't want to accept as evidence such claims, fine. But it's part of your assumptions. And that's what the public, religious or otherwise, should understand about the nature of science in relation to the nature of Nature.
Dawkins was asked in the same interview what the point is if he doesn't believe in God. His answer was that the point is to enjoy life, to live life to the fullest. This seems to us to ignore Darwin's own lessons, and to perpetuate misunderstanding rather than further the public understanding of science. Dawkins has been called "Darwin's Rottweiler" because he has written so much in defense of evolutionary thought, but within a Darwinian framework the point, if there is one, is only to survive and reproduce. And that is not even a 'point', as if there were some sort of ordained purpose; it's just the nature of life.
Anything else we make of life, our values, our sense of beauty and purpose, our beliefs and so on, is our own invention. It may be the most important aspect of our lives as we experience them, for sure. But it is personal, and does not reflect scientific knowledge; there is no reason, other than celebrity gawking, why the opinion of a scientist, as scientist, about the point of life is any more meaningful than anyone else's view. Again, this is relevant to science only so far as the evidentiary and topical basis of science goes, which would say that Nature has properties, but doesn't have a 'point'.
If Dawkins had said that science doesn't tell us anything about the 'point' beyond the Darwinian framework, and that this is not an area where expertise of any kind at all has any bearing, and that what he can offer is his own view of life for those who might be curious about it, then he would have been doing his job to enhance public understanding.
Tuesday, April 28, 2009
The Darwin parable?
By Ken Weiss
Every human culture is embedded in stories about itself in the world, its lore, based on some accepted type of evidence. Science is a kind of lore about the world that is accepted by modern industrialized cultures.
It has long been pointed out that science is only temporary knowledge in the sense that, as we quoted in an earlier post, every scientist before yesterday has been wrong at least to some extent. As Galileo observed, rather than being true, sometimes the accepted wisdom is the opposite of the truth--he was referring to Aristotle, whose views had been assumed to be true for nearly 2000 years.
Scientific theory provides a kind of parable of the world, a simplified story. That's not the same as the exact truth. Here is a cogent bit of dialog from JP Shanley's play Doubt, referring to a parable the priest, Father Flynn, had used in a recent sermon:
Sister James: "Aren't the things that actually happen in life more worthy of interpretation than a made-up story?"
Father Flynn: "No. What actually happens in life is beyond interpretation. The truth makes for a bad sermon. It tends to be confusing and have no clear conclusion."
Darwinian theory is like that. The idea that traits that are here today are here because they were better for their carriers' ancestors than what their peers had is a tight, taut, and typically unfalsifiable kind of explanation. Since what is here is here because it worked, and what did not work is not here, this becomes true by definition. It's a catch-all 'explanation'. It at least has the appearance of truth, even when some particular Darwinian explanation invokes some specific factor--often treated from Darwin's time to the present as a kind of 'Newtonian' force--that drove genetic change in the adaptive direction we see today.
That's the kind of scenario that's offered to account for the evolution of flight to capture prey, showy peacock feathers to attract mates, protective coloration to hide from predators, or why people live long enough to be grandmothers (to care for their genetic descendants). Some of these explanations may very well be factually true, but almost all could have other plausible explanations or are impossible to prove.
Simple, tight, irrefutable but unprovable stories like these are, to varying but unknown extent, parables rather than literal truth. Unfortunately, while science often (and often deservedly!) has little patience with pat religious parables that are invoked as literal truth, science often too blithely accepts its own theories as literal truth rather than parable.
We naturally have our own personal ideas about the nature of life, and we think (we believe) that they are generally true. They are sometimes different, as we try to outline in our book and in these postings, from what many others take for granted as truth. Strong Darwinian selectionism and strong genetic determinism, in the ways we have discussed, are examples.
It may be difficult for people in any kind of culture, even modern technical culture, to be properly circumspect about their own truth-stories. Perhaps science must cling to theories too tight to be literally true, by dismissing the problems and inconsistencies that almost always are known. Accepted truths provide a working research framework, psychological safety in numbers, and the conformism needed to garner society's resources and power (here, among other things, in the form of grants, jobs, publications).
In fact, as a recent book by P. Kyle Stanford, Exceeding Our Grasp (Oxford University Press, 2006), discusses at length, most theories, including those in biology, are under-determined: many different theories, especially unconceived alternatives to current theory, could provide comparable fit to the existing data.
We can't know when and how such new ideas will arise and take their turn in the lore that is science. But in principle, at least, a bit more humility, a bit more recognition that our simple stories are more likely to be parable than perfect, would do us good.
Good parables do bear a semblance of plausibility and truth; otherwise, they would not be useful parables. As we confront the nature of genomes, we see things that seem to fit our theoretical stories. In science, as in other areas of human affairs, that fit is the lure, the drug, that draws us to extend our well-supported explanations and accept things as true that really are parable. We see this all the time in much that is being said in genetics these days, as we have discussed.
Probably there's a parable about wisdom that could serve as a lesson in this regard. Maybe a commenter will remind us what it is!
Friday, April 17, 2009
Darwin and Malthus, evolution in good times and bad
Yesterday, again in the NY Times, Nicholas Kristof reported on studies of the genetic basis of IQ. This has long been a sensitive subject because, of course, the measurers come from the upper social strata, and they design the tests to measure things they feel are important (e.g., in defining what IQ even is). It's a middle-class way of ranking middle-class people who are competing for middle-class resources. Naturally, the lower social strata do worse. Whether or not that class-based aspect is societally OK is a separate question and a matter of one's social politics: Kristof says no, and so do we, but clearly not everyone shares that view, and there are still those who want to attribute more mental agility to some races than to others.
By most if not all measurements, IQ (and 'intelligence', whatever it is, like almost anything else) has substantial 'heritability'. What that means is that the IQ scores of close relatives are more similar than the scores of random pairs of individuals. As everyone who talks about heritability knows (or should know), it's a measure of the relative contribution of shared ancestry to score similarities. It is relative to the contribution of other factors that are referred to as 'environmental.'
Kristof's column points out that IQ is suppressed in deprived circumstances of lower socioeconomic strata. And so is the heritability--the apparent relative contribution of genes. That makes sense, because no matter what your 'genetic' IQ, if you have no books in your house etc., your abilities have little chance of becoming realized. How you do relative to others who are deprived is largely a matter of chance, hardly correlated with your genotype. Correspondingly, there is evidence that scores and heritability rise when conditions, including educational investment, are improved. The point is that when conditions are bad, everyone suffers. When conditions are good, all can gain, and there are opportunities for inherited talent to shine.
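In textbook variance-partition notation (a standard formula, nothing specific to Kristof's sources), heritability is

\[ h^2 = \frac{V_G}{V_P} = \frac{V_G}{V_G + V_E} \]

the fraction of total trait variance attributable to genetic variance. With invented numbers: V_G = 10 and V_E = 10 give h^2 = 0.5, while the same V_G = 10 with a deprivation-swollen V_E = 40 gives h^2 = 0.2, with no change in anyone's genes.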
Can we relate this to one of the pillars of evolutionary biology? That is the idea, due to both Darwin and Wallace, that natural selection works because in times of overpopulation (which they argued, following Thomas Malthus, was basically all times), those with advantageous genotypes would proliferate at the relative expense of others in the population. That fits the dogma that evolution is an endless brutal struggle for survival, often caricatured as 'survival of the fittest'.
Such an idea is certainly possible in principle. But, hard times might actually be less likely to support innovation evolutionarily. When there is a food shortage, it could be that everyone suffers more comparably, so that even what would otherwise be 'better' genotypes simply struggle along or don't make it at all. By contrast, good times might be good for all on average, but might provide the wiggle room for superior phenotypes, and their underlying genotypes, to excel.
This is not at all strange or out in left field. Natural selection can only select among genetically based fitness differences. If hard times mean there is little correlation between genotype and phenotype, selective differences have little if any evolutionary effect, and survival is mostly luck.
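The classical breeder's equation makes the same point compactly: the response to selection, R, is the heritability times the selection differential, S:

\[ R = h^2 S \]

If hard times drive the genotype-phenotype correlation toward zero, h^2 collapses, and even a brutal selection differential produces almost no evolutionary response. With invented numbers, S = 10 and h^2 = 0.4 give R = 4; the same S with h^2 = 0.02 gives R = 0.2.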
In this sense, from Malthus to the present, this central tenet of evolutionary theory may have been wrong, or at least inaccurate. Adaptive evolution may actually have occurred most in plentiful times, not under severe overpopulation. In most environments, competition is clearly not all that severe--if it were, most organisms would be gaunt and on the very thin edge of survival, which manifestly is not generally true.
The IQ story only reflects some here-and-now findings, not evolutionary ones per se. But it may suggest reasons to think about similar issues more broadly.