A sobering review of studies, reported in the NYT by Gina Kolata, one of their best science reporters, concludes that nothing proposed for the prevention of Alzheimer's Disease (AD) has been shown to work in any substantial way.
AD is a devastating disease and NIH and other research funders have poured money into the wishing well, essentially chasing every idea that somebody said s/he thought (wished) would lead to a way to avoid this disease. Keep exercising. Keep doing crossword puzzles or mental challenges. Eat lots of veggies. Nothing.
In some senses this really has been wishful thinking. It never seemed highly credible that some simple lifestyle change would have such a major effect (though those of us who do crossword puzzles and eat a balanced diet could hope, of course!). The problems are many, and they begin with the difficulty of diagnosing AD, especially before a post-mortem.
Even then, some of the chemical evidence, such as deposits of beta-amyloid molecules in specific parts of the brain, was thought to be both causally responsible and diagnostic. Various trails of genetic evidence (genes involved in or affecting amyloid or other molecules in the brain) have been followed. Yet the diagnostic criteria are apparently not even that definitive.
Possibly AD is, like most things with complex effects, a mix of similar but distinct traits. Almost certainly AD lies along a spectrum of mental function, a point on a continuum beyond which we define disease. It may have many causes because there's not actually an 'it'. 'It' is more likely 'them.'
This is the challenge of complexity. Evolutionarily, even if we can believe evolutionary stories in situations like this, AD occurs past reproductive ages by and large, so natural selection could not remove those who would get it. One can contrive, or contort, stories to get around this: those who are doomed in our long-living society to get AD have some reproductive charisma earlier in life (yes! that kind of argument has been raised).
But if, as seems likely, AD strikes too late in life to be affected by natural selection, then anything that could lead to it could circulate in the population without being removed by selection. If there are many ways to lose memory function, perhaps thousands of ways, then that's what we'll see today. That seems consistent with the evidence.
However, if some physiological pathways are truly causally involved, then it may be possible to develop preventive (even gene-product-based) therapies. Many investigators are trying, and the news media (perhaps especially the less responsible reporters) regularly tout the next miracle cure.
Anyone with a relative suffering from AD wishes that some such miracle will in fact be found -- and now. If we can keep ourselves alive many years beyond our proper sell-by date, then it will be important to keep at least our mental functions in decent order, even if everything else falls apart. Naturally, rather than accept earlier Exits, we keep throwing our money down the wishing well.
Whether we're wishing for something that causal complexity makes impossible, or a stunning treatment will one day answer our wishes, nobody knows. That's what keeps us tossing resources down the well.
Monday, August 30, 2010
Trust in science
By
Ken Weiss
The NYTimes has been carrying articles about the investigations of Marc Hauser, a behavioral psychologist at Harvard, who was accused of fabricating data in published research papers (here, and here and here, if you're not totally sick of the subject yet, but the story is also being covered by Nature and Science, if you subscribe, and elsewhere). This had apparently caused uneasiness among his students and collaborators for many years until finally someone blew the proverbial whistle.
We don't know anything other than what's in the news about this case, but it does serve as a reminder of how fragile the underpinnings of science can be. Everything we do in science depends heavily on the work of others, the equipment and chemicals we use, the published literature, laboratory procedure manuals, recorded provenance of the samples we use, and much else. No matter how important your work, you are still using this past legacy even just to design what you are going to do next, including even the questions you choose to ask.
Blatant fraud in science seems to be very rare indeed. But in our self-righteous feelings at the discovery of someone who pollutes the commons, we do have to be aware of where truthfulness lies, and where it doesn't.
Every experiment involves procedures, protocols, data collection, data analysis, and interpretation. It usually takes a crew of students, technicians, faculty, and others to do a project. We often need collaborative teams of specialists. They can't all know everything the others are doing, and can't all know enough to judge the others' work. We can replicate others' work only to a very limited extent. We have to rely on trust.
Everyone is fallible. Study designs are rarely perfect, and their execution usually has areas in which various kinds of mistakes can be made, and it's very hard to catch them. We all know this, and thus have to judge carefully the studies we see, but we tend to view sources of error as simply random, and usually minor or easily caught: minor inefficiencies in science, and reasons for quality control.
But that is a mistaken way to view science, because science, like any human endeavor, involves questions of integrity. Aspects of many or even most studies involve shadings that can alter the objective facts. The authors have a point of view, and they may color their interpretations accordingly--telling the literal truth, but shaping it to advance their theories. Very unusual or unrepresentative observations are often discarded and barely mentioned, if at all, in the scientific report (justified by calling them 'outliers'). Reconstructions--like assembling fossil fragments into a hip bone or skull--are fallible and may reflect the expectations or evolutionary theories of the paleontologist involved.
These are not just random errors, they are biases in science reporting, and they are ubiquitous. They are, in fact, often things the investigators are well aware of. Work that disagrees, or work by the investigator's rivals or competitors, may not be cited in a paper, for example. Cases may be presented in ways that don't cover all that's known, because of the need to get the next grant. Negative results may not be published. Some results involve judgment calls, and these can be in the eye, and the expectations, of the observer. We are not always wholly oblivious to these things, but reported results are not always 100% as honest as we'd like to believe, again because, while not outright lies, authors may be tempted to shade the known truth.
So is intentional fabrication of data worse, because it's more complete? The answer is clearly yes: we have to constrain our tendencies to promote our own ideas, rigidly prevent the making up of data, and preserve as far as possible our ability, at least in principle, to replicate an experiment. But we are perhaps more tolerant than we should be of the shadings of misrepresentation that exist, especially those of which investigators are aware.
We don't know whether Dr Hauser did what he's accused of. But we do feel that if the accusations are true, he should be prevented from gainful employment by any university. We have to hold the line, strictly, on outright fraud. It's simply too great a crime against all the rest of us.
But at the same time, we need to realize both the central importance of trust and truthfulness in science, and the range of ways in which truth can be undermined. Fortunately, most science seems to be at least technically truthful. But our outrage against outright fraud should be tempered by knowledge of the many subtle ways in which bias and dissembling, along with honest human error, are part of science, and that must lead us to question even the past work on which all of our present work rests.
Friday, August 27, 2010
Rounding up the evidence on glyphosate
We've blogged a few times (here, e.g.) about the unintended consequences of the widespread use of genetically modified Roundup-resistant plants and, consequently, of Roundup itself (Roundup is an herbicide whose active ingredient is glyphosate). Not surprisingly to anyone except Monsanto--the original producers of the stuff, who said this wouldn't happen--Roundup is favoring the spread of herbicide-resistant weeds wherever it's liberally used, and thus the need for farmers to use more, and more toxic, herbicides, or more labor-intensive horticultural practices to deal with these newly stubborn weeds. That's evolution for ya.
But other effects are getting some attention too. A quick check on Google and Google Scholar provides evidence that for more than a decade there have been anecdotal and scientific journal reports of neural, craniofacial, limb and other anomalies in infants born in regions with intensive Roundup use, as well as effects on amphibians and other vertebrates living in or near the fields. Any organism exposed to water that contains glyphosate-laced runoff apparently is also at risk.
Among many other reports, a 2009 paper finds that glyphosate alone is toxic to cells, and that other chemicals added to the herbicide, supposedly inert, exacerbate the effect.
We have evaluated the toxicity of four glyphosate (G)-based herbicides in Roundup (R) formulations, from 10^5 times dilutions, on three different human cell types. This dilution level is far below agricultural recommendations and corresponds to low levels of residues in food or feed. The formulations have been compared to G alone and with its main metabolite AMPA or with one known adjuvant of R formulations, POEA. HUVEC primary neonate umbilical cord vein cells have been tested with 293 embryonic kidney and JEG3 placental cell lines. All R formulations cause total cell death within 24 h, through an inhibition of the mitochondrial succinate dehydrogenase activity, and necrosis, by release of cytosolic adenylate kinase measuring membrane damage. They also induce apoptosis via activation of enzymatic caspases 3/7 activity.

And,
The real threshold of G toxicity must take into account the presence of adjuvants but also G metabolism and time-amplified effects or bioaccumulation. This should be discussed when analyzing the in vivo toxic actions of R. This work clearly confirms that the adjuvants in Roundup formulations are not inert. Moreover, the proprietary mixtures available on the market could cause cell damage and even death around residual levels to be expected, especially in food and feed derived from R formulation-treated crops.

Now a paper in the journal Chemical Research in Toxicology (Aug 9, 2010) reports the teratogenic effects of incubating frog and chick embryos with a 1/5000 dilution of glyphosate, and suggests a mechanism to explain the effect.
The treated embryos were highly abnormal with marked alterations in cephalic and neural crest development and shortening of the anterior-posterior (A-P) axis. Alterations on neural crest markers were later correlated with deformities in the cranial cartilages at tadpole stages. Embryos injected with pure glyphosate showed very similar phenotypes. Moreover, GBH [glyphosate based herbicides] produced similar effects in chicken embryos, showing a gradual loss of rhombomere domains, reduction of the optic vesicles, and microcephaly... A reporter gene assay revealed that GBH treatment increased endogenous retinoic acid (RA) activity in Xenopus embryos and cotreatment with a RA antagonist rescued the teratogenic effects of the GBH. Therefore, we conclude that the phenotypes produced by GBH are mainly a consequence of the increase of endogenous retinoid activity... The direct effect of glyphosate on early mechanisms of morphogenesis in vertebrate embryos opens concerns about the clinical findings from human offspring in populations exposed to GBH in agricultural fields.

Retinoic acid is a signaling factor that is essential for cell differentiation in the developing embryo, as a regulator of major stages in growth and patterning. If glyphosate is confirmed to interfere with retinoic acid levels in developing vertebrate embryos, as this study suggests, it's a serious problem for farm workers, for people who live in regions where glyphosate is used heavily or who use water from sources contaminated with glyphosate runoff, as well as for any other vertebrate exposed to this stuff.
The point here is not to take political sides, but to stress that evaluating the evidence independently, regardless of commercial interest (or, for that matter, tree-hugger interests) is the only way to even have a good chance of preventing major calamities--or, to become confident that they won't occur. But what we know about evolution and development make it quite conceivable that the problems are real.
Thursday, August 26, 2010
Nanotechnology and you
Sixty-one-year-old futurologist Ray Kurzweil could recently be heard talking about -- what else? -- the future on the BBC radio program Interview Classic. He so wants to be part of the future he envisions that he takes 150 pills a day to keep himself healthy until the time when he can download his brain to a computer chip, thus transcending the body he will no longer need. Well, we'll let him explain:
When I was 40 I came out at 38 on a biological aging test; I'm now 61 and I come out close to 40. I've only aged 2, 3 years in the last 20 years, so that really is feasible. Now, I take a lot of supplements, I eat a certain diet, I exercise, I take about 150 pills a day, and you'll say, "Ray, you really think taking all these pills is going to enable you to live hundreds of years?" No. The point of this whole program, bridge one, is just to get to bridge two. Bridge two is a full flowering of this biotechnology revolution, where we will have far more powerful methods to really reprogram our genes, away from aging and away from disease, and that's about 15 to 20 years away, so the whole point of bridge one is to get to that point in good health. And it's not that I'll do this for 15 years and nothing will happen and suddenly we'll have the whole thing; I mean, every year, in fact almost every day, new developments come out already from this biotechnology revolution, but 15 years from now it'll be a very mature technology and we'll have a much easier time of slowing down, stopping, and even reversing these aging and disease processes. That's bridge two, and that will bring us to bridge three, maybe 25 years from now: the nanotechnology revolution, where we can have billions of nanobots keeping us healthy at the level of every cell in our body, and that will go into our brains and extend our brains and enable us to back up, ultimately, the information in our brains. Those technologies will ultimately give us very dramatic extensions to our longevity.
Got it? So, why should we believe this guy on this?
To establish his credibility, he was asked what predictions he has been right about in the past. His list includes foreseeing the doubling of the power of ARPANET every year in the early 1980's to become the world wide interconnection of communication that we know now as the internet, the demise of the Soviet Union in the '80's because of the democratizing effects of decentralized communication, that a computer would take the world chess championship by 1988, and more. Is there anything you've been wrong about? he was asked. Maybe he was off by a few years on some of his predictions, he said, but that's about it when it comes to predicting the speed by which technology changes, and its effects.
That's all well and good, and he apparently does have an impressive track record when it comes to predictions about technology. Unfortunately, he's not on such solid ground when it comes to predictions based on understanding genetics and evolution -- since he doesn't seem to understand either.
"Within 30 years human experience in total will be transformed by technology," he said. We now have the genome, which "by the way, it's obsolete because it evolved thousands of years ago."
The interviewer was surprised at the idea that the human genome is obsolete, protesting that it has worked pretty well. There are nearly 7 billion people around to prove it, too! And we dominate the earth. That's failure for you!
No, said Kurzweil, it's actually worked pretty poorly. Our genes evolved in a very different environment from the one we're living in now. Wrong! Our genes evolved up to yesterday, when we inherited them; they're only as old as we are, and it's very likely that there will be even more copies of them 25 years from now!
We get diabetes, he said, because we evolved in a time when we needed genes that allowed us to sequester calories as fat, during famine, and now when we're surrounded by unlimited calories, we eat too much and store too much fat. This idea is known as the "thrifty gene hypothesis" (a misreading of a theory that the geneticist James Neel proposed long ago in a somewhat different context). We guess that to Kurzweil, dying is proof that a genome doesn't work well.
Also, as he says, we've found "genes that promote cancer and heart disease, we'd like to turn those off." Or, as he puts it, "We'd like to change the software."
So, let's stop there (though there's much, much more if you care to listen to the interview). So many misconceptions in so little time! Our genome didn't evolve "thousands of years ago"; it is the product of about 4 billion years of evolution, and of course it hasn't stopped evolving. The idea that because it is old, it's obsolete is utterly non-biological, and is based on his next misperception, which is that the genome is the software that makes us what we are. Does this mean there was a master programmer? With an ultimate design in mind? Or that the environment no longer has an effect? Or, and this is the most important thing, that we aren't adaptable to change? If this were true, the first Twinkie you ate, being not the raw meat or betel nut your long-ago ancestors grew up on, would have killed you. Even granola would be lethal.
And as for "genes that promote cancer and heart disease, we'd like to turn those off." We've got genes sitting around dormant for decades, waiting for just the right time to give us cancer or heart attacks? It's a common misconception that genes 'for' diseases do only one thing and that is cause disease (indeed, many genes have even been named for the disease they are associated with, such as breast cancer genes or genes 'for' Alzheimer's disease), but most genes are involved in numerous normal functions, and if you "turn them off", you'd destroy essential functions.
Kurzweil conflates technology with biology and evolution -- technology really does become obsolete, and software really does tell the computer literally everything it does. But genes and genomes don't follow that model. Kurzweil is right that nanotechnology is already being used therapeutically, and it will surely become more and more useful (though it will do nothing to bring down the cost of medical care -- or cure malaria or TB), but his idea that in 25 years nanotechnology will be fixing all the things wrong with our genes is a basic misunderstanding of how genes work.
But does his biological naivete negate his idea that we'll all be downloading our brains within 25 years? Well, what does that mean? Will your consciousness reside in two places at once (a new version of mind-body duality!)? Will your consciousness on the PC (or, better, iMac) be the same as your wet-ware one?
The obsolescence of our genomes reflected in our mortality is an interesting take, too. Yes, we wear out and die (by the way, we're assuming here, as Kurzweil must, that there's no eternal afterlife!). But what about the You-server on which your essence is downloaded? If it's a machine, it too will die....even become obsolete! I mean, I wouldn't have wanted to be downloaded to 5 1/4" floppies or magnetic tape!
But Kurzweil would likely reply that in each new generation of hypertech your "I" would simply be transferred, byte by byte, to the new medium. That's not so different from transmitting yourself to a baby, except that it's hermaphroditic. But there's another problem here. The electronic solution means that no matter how the world changes, you won't! 500 years from now, you'll still be you, and you'll still be singing the songs of the 21st century. Unless.....unless Kurzweil thinks that perhaps we'll omit checking our parity bits from time to time and allow some change in our wiring.
That's a clever twist, and it probably would work! But it's not totally new: it's called evolution, and we've already got that very well solved with our native wet-ware as it is.
Indeed, the idea of putting our brains in computer data banks is not new. Much of your brain is on your computer now: things you don't memorize but want access to, documents of your brain activity during your life, resources like Wiki, and the like. And the same has been true for ages. We call it books.
Whether your entire brain, which is a 4-dimensional phenomenon, can be made into a flat .txt file, or into some yet-to-come computer-science storage format, is unknowable at present (by anyone but Kurzweil, anyway). Surely more and more knowledge may become downloadable, maybe even by direct neural signaling.
But if Kurzweil's explanations are all we have to go on, this is naive biology posing as insight.
Then again, post-biological beings may already be old news.
Wednesday, August 25, 2010
Let’s get this straight once and for all: They’re not monkeys, they’re apes.
I’m going to try to help you understand why that jackhole at the zoo corrected you about a silly word in front of your kids last Tuesday.
I’ve seen it happen many times: A mother and her kids are standing at the glass pointing at the gorillas inside the exhibit (in the "ape house" with labels everywhere) and the mother says, “Look at the silly monkeys!”
On cue, a bystander interjects, “They’re not monkeys, they’re apes,” which irks the mother and she walks away.
In my brief stint so far doing observations at the Lincoln Park Zoo, I have met a ton of people as they file past Jojo's group of gorillas. Most of the interactions are positive, make what I'm doing feel even cooler than it already is, and validate the importance of the research. And I have never once corrected anyone who I (very often) overhear calling the gorillas “monkeys.”
If asked, or if I'm approached and a conversation ensues, then I do correct any number of misconceptions people may have about the gorillas (e.g. that they're monkeys, or that the females are ashamed after they have sex... jeepers!), but I never impose myself, uninvited, on anyone visiting the apes. This is hard to do for a professional educator, but it's my M.O. because I think it's polite, and because correcting people is not my primary concern or role there... collecting behavioral data is.
But you know what? It’s getting harder and harder to stay silent. The insanity of the whole situation is building to a nearly unbearable level. It’s beginning to blow my mind, the incredible number of people who call apes monkeys.
Mind you most of my friends know monkeys from apes, but here’s why it’s so staggering…
What if you were at the park and met a nice lady and her toddler who wanted to pet your dog and the lady told her kid that your dog was a cat?
You might think, at first, that she's not a native English speaker, but she has an American accent. Okay, then you might think that maybe she's hard of seeing, but for this example she's got 20/20 vision, and even blind people would assume a pet on a leash was a dog and only rarely a cat. What if you corrected her and, instead of blaming it on a "senior moment," she furrowed her brow and found a way to end the interaction? What else could you possibly think about this mistaken woman except that she must be INSANE? No sane person would teach their child that a dog is a cat, would think a dog is a cat, or would have gone their whole life without learning that a dog isn't a cat.
Dogs and cats are distinct. Easy.
So are monkeys and apes. Just as easy. Confusing them, like confusing cats and dogs, is either embarrassingly lazy or is downright flirting with insanity.
Apes are gibbons, siamangs, orangutans, gorillas, chimps, and people. We apes don’t have tails and we have big brains and advanced cognitive skills among other traits. Monkeys have tails (even ones that look tailless have little stubs) and most have much smaller brains (an exception being the capuchin).
Apes and monkeys are separate categories of animals. This is why calling an ape a monkey sounds absolutely crazy and that is why some people just can’t help themselves and morph into prickish pedants around ignorant zoo visitors.
Would you call a horse a zebra? No.
Would you call a goat a sheep? No. Or if you did, the farmer would correct you.
Would you call a frog a toad? No. Well, maybe you’d do that because it's so common, but frog- and toad-ologists hate that.
And, apologies to the frog- and toad-ologists out there, but since monkeys and apes are our closest relatives, there is much more at stake when we mislabel them.
We sound so stupid* when we don't know the names of our own relatives. And these aren't even species level names... they're big broad categories of animals that small children are capable of learning.
And no! Knowledge of the animal kingdom is not child's play. It's everyone's Earth. Look around.
Granted, "monkey" is a lot more fun to say than "ape." You don't hear people going gaga over their kittens with, "You look like a little ape!"** So being so much more ubiquitous in our daily vernacular, I do understand how the word "monkey" may be more prominently tattooed on our neurons and might rest further down the tongue than the word "ape."
It's also clear that many zoo visitors are new parents who haven't yet relearned all the things they learned long ago but have since forgotten. They just need a little more time. (This is some of the joy of parenting that I look forward to one day. Differential calculus here I come, again! I've missed you.)
But it really doesn't matter why so many humans can't be bothered to distinguish monkeys from apes. What matters is that, for your own reputation and your own dignity, you call apes apes and monkeys monkeys.
The gorillas don't care what you call them from behind the glass, but many of your fellow humans do, and you probably care what your fellow humans think because that's prosocial behavior.
Tuesday, August 24, 2010
Killing: science, society, and democracy
By
Ken Weiss
Well, here's one for scientists to ponder. A US court has put a restraint on the use of human embryonic stem cells for research, on the grounds that deriving them kills human embryos. Of course, commercial and research interests who stand to be limited in what they do don't like that, and will complain. They'll raise all sorts of sanctimonious arguments about the importance of this research for human health.
Naturally, the religious right will feel triumphant. They'll offer sanctimonious arguments about the importance of sacred human life. If you were in that camp, you would, too.
Of course this is not just about human embryos. From the instant of fertilization, a zygote is a human (for that matter, a sperm or egg cell is 'human' in important ways). The abortion and stem cell research debates are off-base when the side that wants these to be legal tries to pretend otherwise.
As we well know, the other side typically takes a religious posture about this and says we can't harm human life -- unless it has some other color or religion, or could be targeted by a drone, or lives inexcusably in the Holy Land, or deserves capital punishment. (Hopefully, they won't extend their religious thinking, as some 'religions' do, to stoning such offenders!)
The sanctimonious humanitarian argument about what should be allowed is often self-serving. Even humanitarian scientists, and the religious, barely extend protection to mice, despite a lot of institutional committees established to protect against unnecessary cruelty. Unfortunately for the mouse, cruelty is in the eye of the investigator, even though mice are clearly our close relatives (do they have 'souls'?). And animal protection rules don't extend to insects and basically not to fish either. So scientists, even those who know life is all connected by the threads of evolution, rationalize what being 'humane' is, for their own interests. (Stoning mice for research purposes is, fortunately, probably prohibited.)
These are culture wars, in which ideas of what constitutes a killable or manipulable life are the symbolic grain of sand over which the combat takes place. From carnivore to Jain, we each have to draw the line where, for us, killing the innocent is acceptable.
So, what do you think?
Fortunately, it has become increasingly possible to redifferentiate or dedifferentiate somatic (body) cells from adults for therapeutic use on the same person, and most people, scientists and lay public alike, will applaud such efforts.
That won't, however, entirely deal with the issues, such as the use of those cells for research rather than direct therapy. After all, if one of your skin cells can be grown into something more impressive, like a kidney or new embryo, how is that different from manipulating embryos as it's done today? Is it splitting a soul into multiple parts? What about transplanting a somatically cloned kidney into a donor's twin?
It's too bad we can't deal with these issues for what they are, cultural decisions, but instead have courts and others debating them as if they hinged on science, on facts any biologist already knows. A zygote, a somatic cell, an embryo: all are 'human' in some sense of the term. Even insects show fear. The religious can argue that their sacred texts tell them one thing; scientists can argue for the material practicalities of what they believe would be worthwhile research. In a democracy, what counts as what kind of life is, to a great extent, decided by vote.
But our society shouldn't muddle such decisions, or our understanding of science, by dressing cultural combat up as scientific debate.
Monday, August 23, 2010
The inconstancy of life...or is it just of our findings? Irreproducible results
By
Ken Weiss
We are awash in inconsistent findings from the kinds of research often done in epidemiology, genetics, and even evolutionary biology. One study leads to a given assertion--some risk factor or selective agent causes some outcome--but the excited follow-up study fails to confirm it. Today Twinkies are good for you, tomorrow they're nearly lethal!
We post about this regularly, and it can be found almost every day in the popular science news and even in the scientific literature itself. GWAS are of course a very good example that we mention frequently.
These results are basically statistical ones resulting from various kinds of sampling. If they are consistent in anything, it's their inconsistency. And therein lies a serious challenge to observational science (including evolutionary biology).
An important part of the problem is that when effects are small (assuming they're real), there's a substantial probability that the excess risk they're responsible for won't be detected in a study sample. Study samples compare cases and controls, or something similar, and a minor cause will be found at nearly the same frequency in both groups; by chance it may even turn up more often in the controls, or not often enough more in the cases to pass a test of statistical significance.
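To make that concrete, here is a minimal sketch (our own illustrative numbers, not figures from any actual study) of how often a real but small risk difference reaches conventional significance in a modest case-control study. The assumed exposure frequencies (30% of controls, 33% of cases) and the sample size of 500 per group are made up purely for the example.

# Minimal simulation sketch with assumed numbers (not from any study):
# how often does a real but small effect reach p < 0.05 in one case-control study?
import random
from math import erf, sqrt

def two_proportion_p_value(x1, n1, x2, n2):
    # Two-sided p-value for a two-proportion z-test (normal approximation).
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = abs(p1 - p2) / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

def fraction_significant(n_per_group=500, p_cases=0.33, p_controls=0.30,
                         n_studies=2000, alpha=0.05):
    # Run many identical studies of the same true, small effect and count
    # how many happen to reach statistical significance.
    hits = 0
    for _ in range(n_studies):
        cases_exposed = sum(random.random() < p_cases for _ in range(n_per_group))
        controls_exposed = sum(random.random() < p_controls for _ in range(n_per_group))
        p = two_proportion_p_value(cases_exposed, n_per_group,
                                   controls_exposed, n_per_group)
        if p < alpha:
            hits += 1
    return hits / n_studies

random.seed(1)
print(fraction_significant())  # well under 0.5 with these numbers

With these assumptions only a minority of the simulated studies detect the effect, so a mix of 'positive' and 'negative' reports about the very same real risk factor is exactly what we should expect to see.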
A second problem is complexity and confounding. When many variables are responsible for a given outcome, the controls and cases may differ in ways not actually measured, so that the net effect of the risk factor under test may be swamped by these other factors.
Finally, the putative risk factor may have appeared on someone's radar by chance or by fluke or by a hyperactive imagination, a prejudicial bias, or a Freudian nightmare. We tend to see bad things all around us, and since we don't want anything bad at all of any kind ever, and we have a huge and hungry epidemiology industry, we're bound to test countless things. Puritanism may lead us inadvertently to assume that if it's fun it must be bad for you. Yet, negative findings aren't reported as often as positive ones, and that leads to biased reporting: the flukes that turn out positive get the ink.
We published a paper a while ago in which we listed a number of inconsistent findings. Ken has been told that the existence of this list has made its way into a book, and consequently he's gotten requests for the list. So, we thought we'd post it here. It's out of date now, and we could update it with a lot more examples, but we're sure you can think of plenty on your own.
Wishful thinking and legitimate hopes for knowledge lead us to tend to believe things that are far more tentative than they may appear on the surface. It's only natural--but it's not good science. It's a major problem that we face in relating science to society today.
Table of irreproducible results?
Hormone replacement therapy and heart disease
Hormone replacement therapy and cancer
Stress and stomach ulcers
Annual physical checkups and disease prevention
Behavioural disorders and their cause
Diagnostic mammography and cancer prevention
Breast self-exam and cancer prevention
Echinacea and colds
Vitamin C and colds
Baby aspirin and heart disease prevention
Dietary salt and hypertension
Dietary fat and heart disease
Dietary calcium and bone strength
Obesity and disease
Dietary fibre and colon cancer
The food pyramid and nutrient RDAs
Cholesterol and heart disease
Homocysteine and heart disease
Inflammation and heart disease
Olive oil and breast cancer
Fidgeting and obesity
Sun and cancer
Mercury and autism
Obstetric practice and schizophrenia
Mothering patterns and schizophrenia
Anything else and schizophrenia
Red wine (but not white, and not grape juice) and heart disease
Syphilis and genes
Mothering patterns and autism
Breast feeding and asthma
Bottle feeding and asthma
Anything and asthma
Power transformers and leukaemia
Nuclear power plants and leukaemia
Cell phones and brain tumours
Vitamin antioxidants and cancer, aging
HMOs and reduced health care cost
HMOs and healthier Americans
Genes and you name it!
Saturday, August 21, 2010
She hates the thought of socks
by Matthea Harvey
From The New Yorker, August 16, 2010
The straightforward mermaid starts every sentence with “Look . . . ” This comes from being raised in a sea full of hooks. She wants to get points 1, 2, and 3 across, doesn’t want to disappear like a river into the ocean. When she’s feeling despairing, she goes to eddies at the mouth of the river and tries to comb the water apart with her fingers. The straightforward mermaid has already said to five sailors, “Look, I don’t think this is going to work,” before sinking like a sullen stone. She’s supposed to teach Rock Impersonation to the younger mermaids, but every beach field trip devolves into them trying to find shells to match their tail scales. They really love braiding. “Look,” says the straightforward mermaid. “Your high ponytails make you look like fountains, not rocks.” Sometimes she feels like a third gender—preferring primary colors to pastels, the radio to singing. At least she’s all mermaid: never gets tired of swimming, hates the thought of socks.
Friday, August 20, 2010
Genes playing possum!
By
Ken Weiss
A story by Gina Kolata in the NY Times reports on a muscular disorder that is due to what was considered a 'dead' gene, or 'junk' DNA. A dead gene, usually called a pseudogene, is a DNA sequence derived from an incomplete copy of a functionally active gene, or a gene that was once actively used but whose transcription regulatory sequence mutated away. Or a gene could suffer a mutation in its coding sequence that makes the resulting protein not work.
The smugness with which DNA sequence in between regular protein-coding genes was called 'junk' DNA is rapidly fading. Much of our DNA has no known function, but even there we have evidence that there may be function--for example, some regions of such DNA have sequence that is conserved (basically the same) among species that haven't shared a common ancestor for many millions of years or more. If it had no function maintained by natural selection, why hasn't mutation simply erased the similarity of that sequence among species?
To find that a pseudogene region that was known to be transcribed into RNA can actually interfere with normal processes is interesting and biomedically important. It's worthy of a story in the Times (and Kolata is a worthy person to write it) (see, we don't just criticize the popular science media!).
We can't make generalizations from this about 'junk' DNA however. The degree to which a bit of non-coding DNA has some function, of some sort, is difficult to know simply because proving 'no' function is virtually impossible. That's why evolutionary conservation is a persuasive indicator that something's still usefully active.
Francis Collins is quoted as saying that this is interesting and complex in its mechanism, which is not yet understood. He rightly says that in genetics, whatever can go wrong will go wrong--a principle Ken called the Rusty Rule of life in his 1990 book on disease genes, because evolutionarily we know this had to be so, since mutation can strike anywhere in the genome.
There may be DNA with no function, not even spacing-function to keep other functional elements some proper distance apart. Perhaps it could be called 'junk'. But right now the problem we face in non-coding DNA is the opposite: there is so much of it, it's hard to understand how natural selection could be maintaining it.
When a sequence variant has very little function--and most of this DNA seems clearly in that category--then we expect genetic drift (chance) to determine how the frequency of the variant will change over time. In that case, deep evolutionary conservation is not to be expected or at least should be less than that of really functional DNA. But even saying 'less' is problematic, because we need a baseline for the rate at which variation will accumulate in truly nonfunctional DNA.
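For a rough feel for what such a baseline would look like, here is a back-of-the-envelope sketch using assumed, illustrative values for the mutation rate and the species' divergence time (neither taken from any particular study), and the standard neutral expectation that the substitution rate per site equals the mutation rate.

# Back-of-the-envelope sketch with assumed values (not measurements):
# under neutrality, the substitution rate per site equals the mutation rate,
# so two lineages separated for t years should differ at roughly 2 * mu * t
# of their sites (ignoring repeat hits at the same site).
mu = 2e-9          # assumed mutation rate per site per year
split_time = 90e6  # assumed years since the two species' common ancestor

expected_neutral_divergence = 2 * mu * split_time
print(f"Expected neutral divergence: {expected_neutral_divergence:.0%} of sites")
# About 36% with these numbers, so a non-coding region that remains nearly
# identical between such species is hard to explain without some constraint.

The point of the exercise is only that deep conservation is quantitatively surprising under neutrality, which is why it is read as a sign of function.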
But if we can't be sure of what's really nonfunctional, where is our baseline? We can try theory, try some experimental things (like watching bacteria over thousands of generations in a lab), but it's not easy to know.
Ironically, a common bit of DNA to use for that baseline is--you may have guessed it--pseudogenes! Because a dead gene has no function! Well, the Times example shows that some, at least, do have a function, and it is very possible that this disorder, though not lethal, could affect the reproductive success of those unfortunate enough to carry it.
Life is always playing tricks on us. What you see may not be what you get. Something may look pseudo, but only be playing possum.
Thursday, August 19, 2010
Charles Darwin and biobanks: heredity and disease, part II
By
Ken Weiss
In our previous post on the inheritance of disease, we referred to the ancient view that what we are is molded by what we experience in life, and is in turn transmitted to our children. To Darwin, the results would be screened for 'fitness'--survival and reproductive success--by natural selection as the core of the evolutionary process.
As we noted in our earlier post, there was great concern in Darwin's time about the nature of hereditary disease. It was only right to try to work out the principles of inheritance. Biobanking in those days would concern the offspring of consanguineous marriages, because if disease was hereditary, marrying relatives would increase or at least help reveal the nature of the risk.
In Darwin's 'Lamarckian' view of inheritance (a view that was neither simplistic nor entirely consistent), variation arose through life experiences and changed the physical elements (he called them gemmules) of our individual nature. The gemmules managed to get into the germline, and a child is a blend, as Darwin called it, of the gemmules it received from its two parents.
Now, Darwin clearly knew that offspring don't always bear their parents' traits. Some skip generations (e.g., what we know as recessive traits and incomplete 'penetrance'), some seem to come out of the blue as 'sports' (mutations), and males and females are clearly not simply blends of their parents. In The Variation of Animals and Plants under Domestication, where he expounded on his theory in the most detail, Darwin struggled to fit these apparent anomalies into his theory.
This is a highly deterministic view of life: you are made by your gemmules and you transmit them to your kids, and that will affect how well they do in life. While probability theory certainly existed in Darwin's time, to him ideas about chance were vague and informal relative to today's sampling and testing study designs. In his world it was natural to ask about consanguineous matings, because relatives would have similar gemmules that might reveal how inheritance worked, and this idea was only reinforced, after Darwin's time, when Mendel's principles of inheritance were rediscovered in 1900. Dominant or recessive genes (or at least the examples that were clearly so, which were the ones studied, cataloged, and used for research) were shown experimentally to be deterministic. In fact, Archibald Garrod's work published in 1902, on various recessive 'inborn errors of metabolism' as they appeared in offspring of cousin marriages, was the founding work in modern human genetics.
Given deterministic causes of human traits, the law of natural selection was to Darwin also essentially a deterministic screen of variation, in which every little advantage proliferated. It is no wonder that we have inherited, so to speak, a deterministic view of evolution, and hence of life, and hence of genetic causation. But this is essentially the long and subtle legacy of Darwin's Lamarckian view of inheritance.
Darwin's theory of inheritance was doubted even in his own time (and Darwin coyly said he was only tossing it out for testing), and we now know that it is fundamentally wrong--indeed, backwards in terms of causation. Genes, unlike gemmules, are causally related to traits, but not caused by them. They don't flow from all tissues into the gonads. They don't (by and large) change in response to circumstance as was thought.
And Darwin's essentially deterministic view of natural selection has a comparably long shadow into the present. We now know why the effects of genetic variants are typically statistical rather than deterministic. They do react to experience ('environment'), but the interaction is probabilistic. That means that prediction from gene to trait is usually not deterministic but only probabilistic as well.
This is why screening to find genes 'for' a trait like diabetes or heart disease or cancer or intelligence is a mistaken endeavor. When countless genes contribute to a trait, when they all vary in every population, when the variants usually have only small statistical effects, and when they interact with environments that themselves vary over time and among individuals, the gene-trait connection is typically weak.
The exceptions are the 'knockout' effects, when a defective gene basically just doesn't work or is wildly out of whack. There are many ways to break something, and hundreds of genes are known that, when broken, lead to disease in a basically deterministic way. But those diseases are typically congenital (present at birth or even earlier). They're not the target of genomewide association studies (GWAS), nor of the expensive gene-based biobanking initiatives now being established in many countries.
So why, given what we clearly know from GWAS, evolutionary theory, natural variation, and experimental studies, do we persist in thinking in an essentially Darwinian/Lamarckian way, that biobank kinds of resources are going to be so important to understanding the heritability of disease?
Wednesday, August 18, 2010
Nature does it again
Surely we can get even you, Occamseraser, to admit that there's a creationist smell about the Aug 12 Nature cover. The "first cut"? In what possible sense could this be characterized that way? If the cover had said 'earliest-known' that could be responsible journalism. But since tool use must have evolved over millions of years, as its presence in chimpanzees shows, it absolutely verges on creationism to suggest that there was a 'first' tool, much less that it was in any way sophisticated or even recognizable archeologically. Did Adam use it to bring dinner home to Eve?
We think this is misleading melodrama. If it were the first instance, we wouldn't make so much of it, but they do it over and over again. It may not be seriously misleading, but point-cause thinking about evolution is all too prevalent these days, if implicit, even among scientists.
And here we are not considering the claimed possibility that the marks were made by a crocodile, not a hominin. If that turns out to be the case, will Nature call it "The First Bite"?
Charles Darwin and biobanks: heredity and disease, part I
This post is about biobanks, a subject on which we've written several times in the past (e.g., here). We don't think much of biobanks in the context of genetic approaches to medicine and public health, for reasons that have to do with a proper understanding of how evolution works and of how genes affect variation in complex traits--that is, most traits.
This is manifestly not because disease isn't heritable! It has to do with how human traits are inherited -- if you read this blog regularly, you've read our argument before (but if not, last week's post on DTC genetic testing pretty much lays it out). Concern about heritable disease is by no means new. In a sense it traces back to the earliest medical writings, by Hippocrates and his group. Their idea about inheritance was roughly the same as Lamarck's: physical changes that happen to you during life are transmitted directly to your offspring.
People well-enough educated and well-enough off to worry about whether or whom to marry, or whether their children might not be healthy, have been writing about this for centuries. The concern extended to mental disorders ('madness'), or perhaps even focused on them. One Mercatus wrote a treatise on the subject in the early 1600s, and a Dr Portal published a major work in France in 1808 on hereditary disease using various kinds of evidence available at a time when modern science was becoming a more central part of medicine. Great concerns were raised about the risks of bearing affected children.
An Englishman, Joseph Adams, was upset by this frenzy of worry, and tried to lay out the principles of hereditary disease in his own book a few years later (1814). Adams showed that many of Portal's fears were groundless. And he had an evolutionary perspective that strikingly foreshadows Darwin's ideas 40 years later (see Ken's Evolutionary Anthropology article, "Joseph Adams in the Judgment of Paris"). For example, if a disease was hereditary it would most likely have a late onset, because natural selection (he didn't use the term, which didn't exist then) would otherwise remove such heredity from the population. That is, if a disease struck before reproductive age, it wouldn't be passed on to later generations.
Jean Lamarck published his Zoological Philosophy in 1809, about the same time as Portal, showing the kinds of thoughts that were 'in the air' at the time in France. Lamarck tried to explain the origin of the diverse species in life through natural processes, which required a theory of inheritance, and he echoed the long-standing idea that your nature is modified by experience, and those acquired characteristics are transmitted to your children. It was only natural to think so. And of the awful things, madness was one of the worst to bestow on your own children!
Adams provided one of the first attempts at a scientific analysis of hereditary disease (as Ken's article outlines). But the details and differences in familial patterns were complex and perplexing to those who thought about them. One of those who brooded in this way was Charles Darwin: "My dread is hereditary ill-health." and "The worst of my bugbears is hereditary weakness."
After his beloved daughter Annie died, but even more generally, Darwin worried about the transmission of disease. Could he have been a source of the problem that took Annie away? In particular, he worried that cousin marriages were more likely to concentrate heritable effects and result in children with heritable diseases -- and he had married his cousin. He held Lamarckian views about the nature of inheritance, even if he differed greatly about the nature of evolution (that's a separate topic, well-known, but beyond our scope here). With such a view, how could someone not transmit their ills to their children? It haunted Darwin considerably.
Darwin was a believer in observational data rather than hand-waving, and he felt that a clearer understanding of the actual patterns of inheritance of disease was needed. He favored a suggestion that had been made in Britain at the time, to establish a national registry of consanguineous marriages to gather the needed data. In the very last section of Descent of Man, he stated his advocacy of such a Victorian biobank and sneered at legislators who opposed the proposal as just "ignorant".
So why wouldn't we still benefit today from a national biobank to understand the nature of inherited disease, which is just what is being proposed and established in many countries? It has to do partly with the politics and funding of health care and research. But it has more to do with the fact that now, unlike in Darwin's time, we do know a lot about inheritance, and disease. In the next post, we'll go into some interesting relevant aspects of Darwin's view, and how the long shadow of its legacy might be misleading the present.
Tuesday, August 17, 2010
Francis Collins, personalized genomic medicine, and the nature of probabilistic risk
By
Ken Weiss
Probability and the weather
Personalized forecasts
People regularly complain about the weather forecast in a way that misunderstands probability. It's a sensitive subject for Ken, as a one-time meteorologist in a long-ago former life. If the forecast is for a 30% chance of showers, and it stays dry, it's often viewed as a bad forecast. You should have played golf after all. If it rains, it's often viewed as a bad forecast. Dammit, they ruined your golf outing, by driving you prematurely into the 19th hole bar!
Neither conclusion is right.
One can only really tell whether the forecast was right the next day, by totaling up the area that received rain, or by averaging the fraction of each area that received rain at any given time, or something like that.
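For concreteness, here's a hedged little sketch (with invented data) of what 'checking the forecast' actually means for probabilistic predictions: pool all the days that got the same stated probability and ask whether rain fell on about that fraction of them, rather than judging any single golf outing.

```python
# Toy calibration check for probabilistic forecasts; the data are invented.
from collections import defaultdict

# (forecast probability of rain, whether it actually rained) for hypothetical days
history = [(0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, False),
           (0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, True),
           (0.7, True), (0.7, True), (0.7, False), (0.7, True),
           (0.1, False), (0.1, False)]

days_by_forecast = defaultdict(list)
for prob, rained in history:
    days_by_forecast[prob].append(rained)

for prob, outcomes in sorted(days_by_forecast.items()):
    observed = sum(outcomes) / len(outcomes)
    print(f"forecast {prob:.0%}: rained on {observed:.0%} of {len(outcomes)} days")
```

A '30% chance' forecast that verifies like the first line is a good forecast, even though it was 'wrong' on seven of those ten days.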
The key point is that this is a 'personalized' forecast in the same way that genomic medicine, and Direct to Consumer (DTC) genetic risk estimation are personalized. For a given genotype, or given golf course, whether it actually rains or not, or whether you get the disease in question, is almost irrelevant to whether the forecast was a good one.
Probability and risk of disease
Personalized genomics
We said on Saturday that Francis Collins's reactions to his genotypic diagnosis of elevated risk for diabetes were, given his diabetes-free status at age 60, essentially irrelevant to the genotype-testing result. If the population average risk of diabetes is, say, 10%, that means 10% of the population will get the disease at some point in their lives. If Francis' genotype-based relative risk is elevated by, say, 20% relative to the risk of the average Joe, his absolute risk is raised to 12% (and we're exaggerating risks here to make the point, since most of the genotype-based risks of common diseases add considerably less than 20% to the average lifetime relative risk, and most lifetime risks are less than 10%).
But in fact he doesn't have diabetes. That says almost nothing about the accuracy or usefulness of his risk estimate. 12% risk means that 88% of people with the same genotype don't get diabetes, and his status is wholly consistent with that (or with his being at the average risk), and says nothing about whether the risk estimate was accurate. The only risk estimate that should affect his behavior (he reports slimming down etc. in response to the genotype data), is the 'conditional' risk of getting diabetes at some future age, for a white male who has already survived until age 60. That is probably an unknown value, for many reasons, such as lack of enough data. Indeed even if the genetic risk estimate is accurate, a person with his genotype could, if diabetes-free at age 60, actually be at lower than average risk from his age onward. Why? Because the bulk of those who were born with the same genotype might already have gotten diabetes at a younger age: earlier onset is a typical way that elevated-risk genotypes work.
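To spell the arithmetic out, here is a minimal sketch using the post's hypothetical numbers (10% average lifetime risk, a 20% relative elevation); the age-at-onset fractions are purely invented, there only to show how conditioning on being disease-free at 60 can flip the comparison.

```python
# The post's hypothetical numbers; the onset fractions are invented for illustration.
population_risk = 0.10                   # average lifetime risk of diabetes
carrier_risk = population_risk * 1.20    # genotype elevates risk by 20% -> 12%
print(f"carrier lifetime risk: {carrier_risk:.0%}")
print(f"carriers who never get diabetes: {1 - carrier_risk:.0%}")

def remaining_risk(lifetime_risk, frac_onset_before_60):
    """Risk of onset after age 60 for someone still disease-free at 60."""
    before = lifetime_risk * frac_onset_before_60
    after = lifetime_risk * (1.0 - frac_onset_before_60)
    return after / (1.0 - before)

# Assume (purely for illustration) that risk genotypes front-load onset:
# 90% of carrier cases begin before 60, versus 70% of average cases.
print(f"average person, healthy at 60: {remaining_risk(population_risk, 0.70):.1%}")
print(f"carrier, healthy at 60:        {remaining_risk(carrier_risk, 0.90):.1%}")
```

On those made-up numbers, the carrier who reaches 60 unscathed is actually at lower remaining risk than the average person--which is the point: the only figure relevant to his behavior is the conditional one, and it is rarely known.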
The issues are subtle, and just like weather forecasts, easy to misunderstand. That is another reason people should shun DTC services and simply adopt healthy lifestyles to the extent they can.
More profoundly, if Dr Collins is at our hypothetical 20% relative risk, that means that the average person with his detected genotype is at 20% increased risk, but does not mean that everyone with that genotype is! For example, those with the genotype who eat too much McFastfood may be at a 90% risk of diabetes, hugely elevated over the general population risk perhaps, while vegans with the same genotype could even be at lower risk than the population average. We rarely know enough about interacting or confounding variables, like other genes or lifestyles, relative to the one we know about (the tested genotype) to say more than what's average for that genotype. But we have every reason to believe that not everyone with the genotype is at its average risk.
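A toy calculation makes that point plain (the strata and their risks are invented): a genotype can carry a modest 20% average elevation while nobody in it actually sits at that risk.

```python
# Invented strata: the same genotype, very different lifestyles.
population_risk = 0.10

# (fraction of carriers, lifetime risk) for hypothetical lifestyle groups
carrier_strata = [
    (0.05, 0.90),   # heavy fast-food diet, sedentary
    (0.75, 0.09),   # typical lifestyle
    (0.20, 0.04),   # vegan and very active
]

average_carrier_risk = sum(frac * risk for frac, risk in carrier_strata)
print(f"average risk for the genotype: {average_carrier_risk:.1%}")   # about 12%
print(f"relative to the population:    {average_carrier_risk / population_risk:.2f}x")
```

The '20% elevated' label describes the mixture, not any individual in it.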
When risk differences are less than huge in absolute terms, personalized medicine cannot judge the relevance of these issues except, at best, in a here-and-now population context, such as a case-control study, and these exposure contexts are always changing. That's why it's important to understand what probabilistic predictions mean, when it comes both to your golfing decisions and to your life.
In fact for most diseases if you want to know much, much more about your risk than any genotype service can tell you, just look at the health history of your relatives. That's been known since Darwin's time. And unlike DTC services, that information's free! No doctor bills, no unnecessary testing, no specific genes need to be identified. If we learn of specific gene-based therapies, and you're at high familial risk, then it would be worthwhile to have a proper genetic counseling service do the test.
A Snake Oil factor, but it cuts both ways
The DTC business is cowboy capitalism at this stage, and selling this kind of snake oil to people largely unable to understand the actual meaning of the data is an ethical as well as a policy issue, and of course there is disagreement about where the ethical lines are, or where they should be drawn. Even most doctors--even most geneticists--are ill-equipped to judge the nature, accuracy, or stability of these risks. Of course there is always a lot of selling going on, as a glance at Parade or an airline magazine will clearly show (shrink your prostate! get rid of age wrinkles! have a 20-year-old body at age 70!). As P T Barnum said, there's a sucker sitting in every airplane seat. But this is why regulation is important, in our view, in something so closely involved with life, death, and health-care costs.
We've suggested in our posts that the FDA should treat this kind of DTC 'advice' as practicing medicine, and license it only as a part of genetic counseling where it could in principle be properly constrained and regulated to stick closer to truths that we can generally agree on and that consumers (and counselors and physicians) can make reliable sense of.
However we must admit that this is partly a political stance by us. If the entire business were just shifted to medical clinics, the same testing would probably take place, with the same level of understanding. After all, the issues are complex even for those with sophisticated knowledge.
In fact, if the business were transferred (doubtless to be supplied, probably on a larger scale, by the same companies) and perhaps established as part of what would be lobbied into routine medical exams, genomic testing could become much more costly to the health-care system, more lucrative for the companies, and hence perhaps even more problematic than it is now. Still, we think regulation is better than caveat emptor.
Bottom line?
So the simple conclusion for us is that it's best to follow advice we've only known since Hippocrates a mere 2400 years ago (moderation in all things), and skip both that extra dessert and the personalized DTC genotyping.
Monday, August 16, 2010
Ten scratches on two bone fragments distinguish vegetarians from carnivores
It seems like everyone wants to know when we started eating animals, and why we started eating animals, and how that affected our evolution downstream. (Or, if you’d rather, upstream.)
Even if they’re not intrinsically interested in human evolution, most people (including Ozzy over there) still want to learn more about some of our species’ favorite pastimes: Eating and killing things.
If you're ever visiting the gorillas at the zoo, it's pretty obvious what humans are interested in. Of the juvenile chewing on lettuce and carrots, a mother says to her toddler, “Look sweetie, he’s eating his vegetables like a good little boy,” and a teen-aged girl says to her brother, “See, they don’t eat meat and since we share, like, most the genome with them we shouldn’t eat meat either.” Of the silverback male sitting quietly in the corner, a man observes, “Dude’s large and in-charge. I wouldn’t want to [bleep] with that. He’d kill the [bleep] outta me.”
Most of the profound pronouncements you’ll hear at the zoo’s mirror for humanity are centered around food and violence.
Likewise, many, if not most, of the big-picture explanations for human evolution—for why humans are the way we are, for why humans are unlike other apes—revolve around eating and killing animals. Just to name a few:
- We walk upright to carry and throw things like tools, which are presumably used for hunting prey and processing carcasses (Darwin).
- We walk upright to move about as terrestrial foragers (which at some point begins to include scavenging and hunting) on the open savannas (The Savanna Hypothesis).
- Humanity was born in a hunting past (Washburn and Lancaster).
- Big brains were able to evolve because of the supplemental nutrition and calories obtained through scavenging and hunting (just about everyone).
- Scavenging, hunting, and stone-tool manufacture require a larger, more cognitively complex brain (just about everyone).
So it makes sense that paleoanthropologists are desperately searching for evidence of our ancestors eating animals. Especially very early evidence. And, in order to find it, you’ve got to not only collect every scrap of bone that you see while you survey or that you dig up while you excavate, but you’ve got to scrutinize each one of those scraps, preferably under a microscope.
Otherwise, you might miss something like the ten scratches on two bone fragments (one rib and one femur) that were just recently reported in Nature.
Cut-marked bone at 3.4 million years ago takes the title of First Meat-Eater away from the genus Homo (Homo habilis, Homo erectus, ..., Homo sapiens) and gives it to the only hominin species known at the time from that region of Ethiopia: Australopithecus afarensis, known for being Lucy's species.
If you're not quaking in your seat right now, well, then you have got some serious nerding-up to do. This is kind of HUGE: Lucy’s relatives, and maybe Lucy herself, not only used stone tools but used them on big mammal carcasses!?
Until this discovery, all we had were stone tools at 2.6 mya and cut-marked bones at 2.5 mya and everyone was content in thinking that the major behavioral shift associated with the human genus, Homo, was obtaining ever-increasing amounts of animal fats and proteins into our diets. That's the best way to make sense of the trend for encephalization which is so much more pronounced in Homo than in previous hominins (where brain sizes were basically chimp-like for the four million years that our ancestors were undergoing selection for increasingly efficient bipedal locomotion).
And see... that has been a little bit of a problem: Why would natural selection continue to hone our bodies for bipedal locomotion if not to use them for hunting? With this new evidence for meat-eating earlier in our past, bipedalism makes a lot more sense. At least from the traditional standpoint.
But you’re probably thinking… ten scratches on just two bones? Well, it’s come down to less than this before. When the evidence is good, you certainly don’t need more than a meal’s-worth to show that an animal was butchered with stone tools.
Even if those stone tools were not shaped or chipped or flaked first, even if they were just rocks that somebody grabbed from a dried-up riverbed, nobody but us and our ancestors uses/used stone tools on animals. (Okay, otters use rocks to bash open sea urchins and other snacks but not bones of East African bovids.) Chimpanzees use rocks to crack open nuts, but when they occasionally hunt prey they either use sticks as spears or they simply use their strong bare hands, like baboons are known to do as well. So, bones with scratch marks that were made by stone tools can’t be interpreted any other way than to have been made by human ancestors, or at least relatives of our ancestors (collectively termed hominins or hominids, depending on preference).
Except, EXCEPT!, there are some who would still disagree and they may make the following arguments:
1. Carnivores, like lions and hyenas, can scratch bones of prey with their teeth and if those bones aren’t crushed to smithereens as part of the meal, then they can be discovered millions of years later and trick paleontologists. Okay, but that old chestnut’s been cracked wide open. Microscopic details that distinguish tooth scratches from tool scratches have been established and these bones weren’t scratched by teeth.
2. At any time after the animal's death, its bones could have been trampled and that could explain the scratch marks. Okay, but again, under a microscope and X-rays, trampling marks differ from the marks left by tools and, again, these bones weren't scratched by feet.
So what does this mean for our reconstruction of human evolution?
Well, first of all, like we discussed after the Malapa hominins were announced, the genus Homo is probably going to have a makeover here soon. As of now, the earliest members of the genus Homo are no older than 2.4 million years ago. And now that there are three sites (Bouri, Ethiopia at 2.5 mya, Gona, Ethiopia at 2.6 mya, and now Dikika, Ethiopia at 3.4 mya) with stone tools or cut-marked bones that precede the appearance of Homo fossils, either the behavioral or the anatomical criteria must change or both.
We must now accept the idea that eating meat had a much earlier involvement in our evolutionary history than we thought.
It also means that australopiths, with their ape-sized brains, were performing activities that we have always considered pretty difficult. And this cognitive complexity came not only without the aid of human brains but without the aid of human trainers!
And finally, if you ever have a chance to have Lucy over for dinner, now you know what she likes: BBQ ribs and rump roast. Good thing, because vegetarian dinner-party guests can be so persnickety and, oh boy, does that beat the heck out of underground storage organs!
References
Shannon P. McPherron et al. 2010. Evidence for stone-tool-assisted consumption of animal tissues before 3.39 million years ago at Dikika, Ethiopia. Nature 466: 857-860
Saturday, August 14, 2010
Putting Francis Collins on the run
By
Ken Weiss
In its Aug 12 issue, Nature evaluates how Francis Collins, who has taken on the mantle of hyperbolist-in-chief for turning medicine into genetics, has done in his first year as director of NIH. Of course, he's had his genome tested for risks (3 times! Replication is always good). And of course he's credited direct to consumer risk services as showing the way to personalized medicine.
What he's reported to have found is that he's at elevated risk for adult-onset diabetes. So he's hit the exercise trail (though he's still riding his famous motorcycle to the office instead of walking or running) and he's lost 11 kilos, watched his diet, and remarkably, hasn't come down with diabetes during the year.
That this is proclaimed as a success for genotype-based medicine is a travesty of the truth. And here are several reasons for saying so:
First, Francis is 60 years old and not diabetic. The population risk for diabetes by his age is something around 10%, and he doesn't have it, which is actual data not a statistical prediction. Whether the risk estimated for his genotype is above average or not, at his age such information is largely useless.
Second, given his age and that he's escaped so far, if he ever does get diabetes it can hardly be called premature or due to some special risk.
Third, his health regimen was apparently not based on any clinical finding like a glucose tolerance test, so it can't be called therapeutic--it's not personalized 'genomic medicine'. If it was based on a clinical test it was personalized medicine, to be sure, but the same kind that's been the job of medicine since Hippocrates.
Fourth, and above all, if you're overweight, or have diabetes in your family, or don't get enough exercise, then watching diet and getting exercise is a great preventive thing to do regardless of any genotype information. It doesn't have to be 'personalized'. Why? Most diabetes cannot be predicted by genetic risk at all, or not by identified genes, as the GWAS results have very clearly and repeatedly shown. In fact, the low predictive power of specific genetic variants is actually one of the more positive real findings of GWAS!
You don't need genotyping to do what he's done, and his experience gives no support for genetically personalized medicine (it doesn't say personalized genomic medicine is useless, either, of course).
Personalized medicine has its place, as we have said before. Its place has to do with those who really do have identifiable genetic risk and some reason to suspect that. We discussed that yesterday, in the context of genetic counseling, the legitimate medical use of genotypic data.
Friday, August 13, 2010
Fear for sale? Are DTC genetic testing companies selling anything else?
Direct-to-Consumer Genetic Testing
Two opinion pieces appear in this week's Nature (here and here) on the problems facing the direct-to-consumer genetic testing industry. These are companies, such as 23andMe, deCodeMe, or DNA Direct, that sell estimates of the likelihood of a consumer having a genetic disease or trait. Obviously, what is on offer, and what people mainly want to know about, is what to be afraid they might get. Fear for sale is not too far off the mark for this industry.
The industry has been growing fast, as more and more genes 'for' traits and diseases are published, but legal and scientific questions about what these companies have to offer abound. From one of the Nature pieces:

The burgeoning, but virtually unregulated, direct-to-consumer (DTC) genetic-testing industry faces some serious changes in the United States. In a series of hearings last month, the US Food and Drug Administration (FDA) hinted that it will impose new regulations on companies selling such tests. The agency has also sent letters to test makers, as well as to one maker of the gene chips on which many such tests rely, saying that the firms are not in compliance with its rules.

In addition, the Government Accountability Office (GAO) last month unveiled the findings of its year-long investigation into the scientific validity, safety and utility of the gene tests used by the industry. The report called some of the tests misleading, pointing out inconsistencies in the results they provided, as well as some companies' shady marketing practices.

Not all companies have been found to have shady marketing practices, or to intentionally mislead, but many of the other issues pertain to all of these companies because they pertain to both the science and the product they sell. Indeed, it can be argued that the probabilistic nature of these tests means that they all mislead, even if unintentionally, especially as so few people have a good sense of what probability means.
Regulation

Again from Nature:

Genetic-testing services are proliferating fast. In 1993, tests were available for about 100 diseases. By 2009, the number was almost 1,900. Some forms of testing are major advances in the diagnosis of certain conditions, such as Rett syndrome and types of brittle bone disease. The clinical utility of others — such as the high-throughput genotyping that is widely offered by companies that sell tests directly to consumers — is debatable.

The problems with these companies are multiple, some having to do with the marketing of tests, and some with the quality of the testing and the test results themselves.
According to the FDA, these companies are performing medical tests and marketing medical devices without a license, and this needs to be corrected. There's no doubt that's how many people view these tests, and how they interpret the results. As such, the FDA believes it should regulate these companies as it regulates any company selling medical devices or medical tests. The FDA has recently brought this to the attention of many of these companies -- see, for example, the letter sent to 23andMe on this subject in June.
But, many observers believe that the way the testing is done also needs to be regulated to standardize DNA sequencing results, for example, and otherwise assure that results are valid. So, that's also something that's under consideration. According to the Nature piece, less than 1% of DTC genetic testing is regulated in the UK, and the proportion is low in the US as well.
Science
While regulatory and legal issues are real and must be thrashed out, a deeper and thornier issue has to do with the science -- the actual causal connections (if and where they exist) between DNA sequence differences and a particular disease or other trait, and the assumption that we have accurate knowledge of those connections, which is usually far from the case.
If you send your DNA to 3 of these companies, you'll get not only risk estimates for 3 different sets of diseases and traits, but the estimates you're given for the same diseases can differ. Yet isn't there just one truth out there? The problem arises because the study results these companies base their estimates on can vary considerably, depending on who's included in the study, how the trait is defined and so forth, and which results a company chooses to use to calculate its own estimates will determine the estimated risks it sends to you. Further, there are often in fact no correct answers when it comes to complex diseases -- your risk of type 2 diabetes, or heart disease or stroke is genetic and environmental, your particular genetic component is unique to you, and exposure patterns are always changing, which all means it can't necessarily be accurately predicted from a pool of other people's genes.
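One plausible source of the discrepancy--assuming, as seems to be common practice, that such scores are built by multiplying per-variant relative risks against a population-average baseline--is simply that the companies pull their effect estimates from different studies. The numbers below are invented for illustration.

```python
# Hypothetical sketch: combining per-variant relative risks multiplicatively.
# The variant effects and the baseline are invented, illustrative values.
population_risk = 0.10   # average lifetime risk of the disease

# Relative risks for the same three variants in one person's genotype,
# as estimated by two different (hypothetical) published studies.
estimates_study_a = [1.15, 1.05, 0.95]
estimates_study_b = [1.30, 1.10, 0.90]

def combined_risk(baseline, relative_risks):
    risk = baseline
    for rr in relative_risks:
        risk *= rr
    return risk

print(f"your risk, per study A: {combined_risk(population_risk, estimates_study_a):.1%}")
print(f"your risk, per study B: {combined_risk(population_risk, estimates_study_b):.1%}")
```

Same DNA, same arithmetic, different report in the mail.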
Different genes and different alleles contribute to risk in different populations, and in every individual. And, risk of complex diseases is due to the contribution of more genes than have yet been identified, or even that will ever be identified for you. Even if all the genes that contribute to risk of type 2 diabetes could be identified for your neighbor, or even your parents!, the list won't be the same for you.
And this is before we even begin to consider the environmental contribution to your risk of disease, and this can be very elusive. One example is risk of breast cancer in women with BRCA1 or 2 mutations -- mutations which confer some of the highest risk of cancer known. Risk is very different depending on whether a woman was born before 1940 or after, because of changes in exposure to environmental risk factors. And, we can't predict future environmental risk factors, so it's impossible to know what gene by environment risk will be going forward, which is what these companies in effect are doing. Yet recent history very, very clearly shows that secular trends in risk, which must be due to lifestyle exposure differences, frequently make huge differences in risk.
The solution?
Most results of DTC genotype testing amount in some way to fear for sale. That may not be the companies' intent, but it's not unfair to describe them in this way, because it's increasingly unlikely, as more data accumulate, that anyone will be given a no-worry clean bill of genetic health. Everyone is at risk for something! Especially if the criteria for expressing risk are not particularly major or definitive.
And, as we've said, the risks they are selling are mainly pretty elusive. But when it comes to real and substantial risk, we've already got a system, called genetic counseling, for estimating risk for hundreds of known single gene disorders. These are genes for which risk and predictability are extremely high, and that are primarily pediatric disorders like Tay Sachs or cystic fibrosis, though Huntington's, which strikes in adulthood, is another example. Prediction of these disorders has long been done by genetic counselors, who are trained to predict risk (which, with these disorders, is fairly simple to do, as they generally follow Mendelian rules of inheritance) and to inform and counsel people about their risk. They work in closely integrated ways with clinics, physicians, and medical schools. Counselors are professionals who are tested, regulated, and must keep up their license with regular education. So, where risk is actually definable, we've got no need for DTC testing.
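By contrast, the arithmetic behind classical genetic counseling really is simple, because Mendelian risks follow from the mode of inheritance. Here is a minimal sketch for an autosomal recessive disorder; the carrier frequency is a rounded, illustrative figure of the sort often quoted for cystic fibrosis in European-ancestry populations.

```python
# Mendelian risk arithmetic for an autosomal recessive disorder.
# The carrier frequency is a rounded, illustrative value.
carrier_freq = 1 / 25

# Both parents known carriers: each pregnancy has a 1-in-4 affected risk.
print("both parents carriers: 25% per child")

# Untested couple from the general population:
p_both_carriers = carrier_freq ** 2
p_affected_child = p_both_carriers * 0.25
print(f"untested couple: about 1 in {round(1 / p_affected_child)} per child")
```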
If genetics were still in the business of finding genes that follow Mendelian rules, they'd just be added to the list of traits that genetic counselors understand and counsel about. But, largely driven by the profit motive it must be said, the genes being identified these days 'for' complex diseases by and large have individually small effects, their effect estimates vary considerably by study, and they are simply not now, and probably never will be, useful for accurately predicting your risk of complex disease.
So, how the DTC genetic testing industry does its testing is one question, and the FDA is trying to decide how to regulate this. Whether the industry has anything of real value to sell you is another question entirely. That's the rub.
Two opinion pieces appear in this week's Nature (here and here) on the problems facing the direct-to-consumer genetic testing industry. These are companies, such as 23andMe, deCodeMe, or DNA Direct, that sell estimates of the likelihood of a consumer having a genetic disease or trait. Obviously, what is on offer, and what people mainly want to know about, is what to be afraid they might get. Fear for sale is not too far off the mark for this industry.
The industry has been growing fast, as more and more genes 'for' traits and diseases are published, but legal and scientific questions about what these companies have to offer abound.
The burgeoning, but virtually unregulated, direct-to-consumer (DTC) genetic-testing industry faces some serious changes in the United States. In a series of hearings last month, the US Food and Drug Administration (FDA) hinted that it will impose new regulations on companies selling such tests. The agency has also sent letters to test makers, as well as to one maker of the gene chips on which many such tests rely, saying that the firms are not in compliance with its rules.
In addition, the Government Accountability Office (GAO) last month unveiled the findings of its year-long investigation into the scientific validity, safety and utility of the gene tests used by the industry. The report called some of the tests misleading, pointing out inconsistencies in the results they provided, as well as some companies' shady marketing practices.

Not all companies have been found to have shady marketing practices, or to intentionally mislead, but many of the other issues pertain to all of these companies, because they pertain to both the science and the product they sell. Indeed, it can be argued that the probabilistic nature of these tests means that they all mislead, even if unintentionally, especially as so few people have a good sense of what probability means.
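To see how easy the probabilities are to misread, here is a toy calculation, with entirely invented numbers, of what a "raised risk" typically amounts to in absolute terms. It isn't taken from any company's report; it just shows that a scary-sounding relative risk can barely move the number that actually matters to you.

# A toy illustration (hypothetical numbers) of relative vs. absolute risk:
# a "30% increased risk" sounds alarming, but applied to an uncommon disease
# it barely changes the absolute risk.

baseline_lifetime_risk = 0.02      # hypothetical 2% lifetime risk in the general population
reported_relative_risk = 1.30      # a "30% increased risk", as a report might phrase it

absolute_risk = baseline_lifetime_risk * reported_relative_risk
print(f"Absolute risk rises from {baseline_lifetime_risk:.1%} to {absolute_risk:.1%}")
# prints: Absolute risk rises from 2.0% to 2.6%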
Regulation
Genetic-testing services are proliferating fast. In 1993, tests were available for about 100 diseases. By 2009, the number was almost 1,900. Some forms of testing are major advances in the diagnosis of certain conditions, such as Rett syndrome and types of brittle bone disease. The clinical utility of others — such as the high-throughput genotyping that is widely offered by companies that sell tests directly to consumers — is debatable.

The problems with these companies are multiple, some having to do with the marketing of tests, and some with the quality of the testing and the test results themselves.
According to the FDA, these companies are performing medical tests and marketing medical devices without a license, and this needs to be corrected. There's no doubt that's how many people view these tests, and how they interpret the results. As such, the FDA believes it should regulate these companies as it regulates any company selling medical devices or medical tests. The FDA has recently brought this to the attention of many of these companies -- here, for example, is the letter sent to 23andMe on this subject in June.
But many observers believe that the way the testing is done also needs to be regulated, to standardize DNA sequencing results, for example, and otherwise ensure that results are valid. So that, too, is under consideration. According to the Nature piece, less than 1% of DTC genetic testing is regulated in the UK, and the proportion is low in the US as well.
Science
While regulatory and legal issues are real and must be thrashed out, a deeper and thornier issue has to do with the science -- the actual causal connections (if and where they exist) between DNA sequence differences and a particular disease or other trait, and the assumption that we have accurate knowledge of those connections, which is usually far from the case.
If you send your DNA to three of these companies, you'll get not only risk estimates for three different sets of diseases and traits, but also different estimates for the same diseases. Yet isn't there just one truth out there? The problem arises because the study results these companies base their estimates on can vary considerably, depending on who's included in the study, how the trait is defined, and so forth, and which results a company chooses to use in its calculations determines the estimated risks it sends to you. Further, there are often no correct answers when it comes to complex diseases -- your risk of type 2 diabetes, heart disease or stroke is both genetic and environmental, your particular genetic component is unique to you, and exposure patterns are always changing, all of which means your risk can't necessarily be accurately predicted from a pool of other people's genes.
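To see concretely why the answers differ, here is a minimal sketch, with entirely made-up odds ratios and baseline risk, of the kind of multiplicative calculation such risk estimates are often built on. It is not any company's actual algorithm; the point is only that identical DNA scored against different studies yields different numbers.

# A minimal sketch, not any company's actual method, of assembling a
# DTC-style risk estimate from published per-variant odds ratios.
# All numbers below are hypothetical.

def estimated_risk(baseline_risk, per_variant_odds_ratios):
    """Multiply a population baseline risk by per-variant odds ratios,
    treating odds ratios as approximate relative risks (a common
    simplification for uncommon diseases)."""
    risk = baseline_risk
    for odds_ratio in per_variant_odds_ratios:
        risk *= odds_ratio
    return min(risk, 1.0)  # a probability can't exceed 100%

# The same customer's genotype, scored with effect sizes taken from two
# different (hypothetical) studies of the same disease:
study_A = [1.30, 1.12, 0.95]   # odds ratios for the customer's alleles, per study A
study_B = [1.18, 1.25, 1.05]   # the same alleles, as estimated in study B

baseline = 0.08  # hypothetical 8% lifetime population risk

print(f"Estimate built on study A: {estimated_risk(baseline, study_A):.1%}")
print(f"Estimate built on study B: {estimated_risk(baseline, study_B):.1%}")
# Same DNA, different inputs, different answers (about 11% vs 12%), before
# we even ask whether either study's population resembles this customer.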
Different genes and different alleles contribute to risk in different populations, and in every individual. And risk of complex disease is due to the contribution of more genes than have yet been identified, or perhaps ever will be identified for you. Even if all the genes that contribute to your neighbor's, or even your parents', risk of type 2 diabetes could be identified, the list won't be the same for you.
And all this is before we even begin to consider the environmental contribution to your risk of disease, which can be very elusive. One example is risk of breast cancer in women with BRCA1 or BRCA2 mutations, mutations that confer some of the highest risks of cancer known. That risk differs substantially depending on whether a woman was born before or after 1940, because of changes in exposure to environmental risk factors. And we can't predict future environmental exposures, so it's impossible to know what gene-by-environment risk will be going forward, yet that is in effect what these companies claim to estimate. Recent history very clearly shows that secular trends in risk, which must be due to differences in lifestyle exposures, frequently make huge differences in risk.
The solution?
Most results of DTC genotype testing amount in some way to fear for sale. That may not be the companies' intent, but it's not unfair to describe them in this way, because it's increasingly unlikely, as more data accumulate, that anyone will be given a no-worry clean bill of genetic health. Everyone is at risk for something, especially if the criteria for reporting a risk are not particularly stringent or definitive.
And, as we've said, the risks they are selling are mainly pretty elusive. But when it comes to real and substantial risk, we've already got a system, called genetic counseling, for estimating risk for hundreds of known single-gene disorders. These are disorders for which risk and predictability are extremely high, primarily pediatric conditions like Tay-Sachs or cystic fibrosis, though Huntington's disease, which strikes in adulthood, is another example. Prediction of these disorders has long been done by genetic counselors, who are trained to estimate risk (which, for these disorders, is fairly simple to do, as they generally follow Mendelian rules of inheritance) and to inform and counsel people about that risk. They work in closely integrated ways with clinics, physicians, and medical schools. Counselors are professionals who are tested, regulated, and must maintain their licenses through continuing education. So, where risk is actually definable, we've got no need for DTC testing.
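To give a sense of how simple that arithmetic can be, here is a minimal sketch of the calculation a counselor can do for an autosomal recessive disorder. The carrier frequency below is illustrative only; a counselor would use figures appropriate to the family's ancestry and any carrier testing already done.

# A minimal sketch of Mendelian risk arithmetic for an autosomal recessive
# disorder such as cystic fibrosis. All inputs here are illustrative.

def recessive_risk(p_parent1_carrier, p_parent2_carrier):
    """Probability a child is affected: both parents must be carriers,
    and two carriers have a 1-in-4 chance of an affected child."""
    return p_parent1_carrier * p_parent2_carrier * 0.25

known_carrier = 1.0               # one parent is a confirmed carrier
population_carrier_freq = 1 / 25  # illustrative carrier frequency for the other parent

risk = recessive_risk(known_carrier, population_carrier_freq)
print(f"Risk to each child: {risk:.0%}")   # prints: Risk to each child: 1%
# The answer follows directly from Mendelian rules -- which is exactly what
# most complex-disease risk estimates cannot claim.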
If genetics were still in the business of finding genes that follow Mendelian rules, they'd simply be added to the list of traits that genetic counselors understand and counsel about. But, driven largely by the profit motive it must be said, the genes being identified these days 'for' complex diseases by and large have individually small effects, their effect estimates vary considerably by study, and they are simply not now, and probably never will be, useful for accurately predicting your risk of complex disease.
So, how the DTC genetic testing industry does its testing is one question, and the FDA is trying to decide how to regulate this. Whether the industry has anything of real value to sell you is another question entirely. That's the rub.
Thursday, August 12, 2010
I'll be home for dinner (but don't wait up)!
Can snails find their way 'home'? The possibility of a snail homing instinct has long been suspected by, for example, people who want to get rid of snails without killing them and so simply toss them over the hedge. But a listener to the BBC's Material World program, Ruth Brooks, wanted to actually test the hypothesis. She was chosen by the program last spring as one of four 'amateur scientists' to work with a mentor to try to answer the question.
To do so, she marked a handful of snails from her garden with nail polish and counted how many came back after she had carried them varying distances away from where she'd originally found them. While her experiments are still ongoing, to her science advisor's surprise her preliminary data (snails are sluggish) suggest that snails placed less than 10 meters away easily return home, and anecdotal evidence prompted by all the media coverage she's getting (on many sites, including the BBC website) suggests that they can return to a garden from up to a quarter of a mile away.
Ms Brooks and her mentor are now asking anyone in Britain with a garden and a willing neighbor to add data to their database by marking snails from their own garden and swapping them with differently marked snails from their neighbor's garden. (You can follow the experiment on their Facebook page or check it out on the Material World website.)
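For anyone who joins in and wants to make sense of their own counts, here's a rough sketch of how returns might be tallied by displacement distance. The records below are invented for illustration; the real data live with Ms Brooks and the Material World team.

# A rough sketch of tallying results from a mark-and-displace experiment.
# The observations are invented, purely to show the bookkeeping.

from collections import defaultdict

# (displacement in meters, did the marked snail return?)
observations = [
    (5, True), (5, True), (5, False),
    (10, True), (10, False), (10, False),
    (25, True), (25, False), (25, False),
]

released = defaultdict(int)
returned = defaultdict(int)
for distance, came_home in observations:
    released[distance] += 1
    returned[distance] += came_home   # True counts as 1, False as 0

for distance in sorted(released):
    rate = returned[distance] / released[distance]
    print(f"{distance:>3} m: {returned[distance]}/{released[distance]} returned ({rate:.0%})")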
But why was the advisor surprised? As he says, it's because conventional thinking has it that snails are far too simple to do something complicated like find their way home. But this is odd since we all know that 'simple' insects like bees and ants and butterflies very ably find their way home.
It seems to us that this experiment is testing more than the snail homing instinct. It's also testing the testers' preconceived assumptions about what makes organisms simple and what makes them complex.
Indeed, it's largely our assumptions about what 'simple' organisms can do that make the exhausting treks of many birds, fish, whales, and insects (like Monarch butterflies) seem so remarkable and truly worth featuring on programs like Discover and Nova. How this homing works is still largely unknown, but magnetic particles in neurons, celestial navigation, and olfaction seem variably to be part of it. It also seems easy to evolve, since it's happened in parallel, using largely if not entirely different mechanisms and through different media and distances, in so many species.
Probably it's hard for us to imagine because it's so different from our own experience. But a remarkable fact is that we, too, have evolved an uncannily accurate homing mechanism, and we've done it very, very rapidly. It's called a GPS navigator.