Once upon a time science was about understanding our world. Alas, now it's another marketing ploy. We see this in genetics, and we are not the only ones who have noticed the phenomenon. Any finding is immediately turned into a genetic-counseling-for-sale event, or a plea for more grant money.
But heck, let's change tunes (and sciences) for at least a moment. NASA recently announced that its Moon bomb discovered 'copious' amounts of water: a whopping dozen 2-gallon buckets' worth (i.e., about a small bathtub full), and one ebullient project scientist said, "We practically tasted it with the impact." Unfortunately, even in the first stories about this, the taste in question was the taste of money.
The stories quickly focused on the importance of this frozen microtub as the salvation of future Lunarnauts--NASA's next Gimme project--not on the legitimately interesting scientific questions: why the water is there, the actual amounts, how it got to where it is, and so on.
We're in a bottom-line society, where lobbying and advertising are indistinguishable from real work, and the scientists involved know this very well. Much about the events, and the excitement, is staged lobbying.
In genetics, and perhaps especially in biomedical research, the same is true: a prevailing ethic is that of lobbying. "Grantsmanship"--that is, how to strategize--is taught early on in graduate students' academic careers. Senior investigators routinely say, often glowingly or even gloatingly, that they apply for grants to do what they've already done (but not yet published), so they know they can later claim success.
This is lying! And it is but the tip of the iceberg of questionable science ethics. Dissembling and overstating are par for the course. But so what? As anthropologists we can say, truthfully, that this is simply how our particular culture manages its dispensing of research funds (publicly through government grants, or privately through investment decisions). Whether anyone likes it or not, all cultures have their ways, almost always not entirely in synch with their supposed rules and ethics, and if the system functions, then as long as its nature is generally understood among the practitioners--which in this case it is--well, that's life!
But it is legitimate to ask: if scientists are trained from the graduate-school crib to mislead, where does one draw the line? Even if the line of actual data forgery is very rarely crossed, how much waste in the system is due to these practices, which are at best dissembling? Under competitive pressure, and when a project addresses the vested interests of a funder (e.g., a drug company or biotech equipment maker, or a faculty member seeking grants, salary, or tenure), how much are we misled? It's a fair question: even if this is our 'system', it is always in order to try to make that system live up to its nominal claims (in this case, that science is ruthlessly objective honesty).
People always complain about the state of society, and a century from now we (well, somebody then) will look back and say, almost certainly, that recent decades have been the most scientifically productive of knowledge of any time in history, human foibles and personal vanities notwithstanding. We'll know who won the lobbying tournaments, and we'll know the science that resulted. We'll know how many entrepreneurs left little footprint, and how many were truly major contributors. But that's no excuse to look the other way. And it doesn't mean resources and thinking can't be put to better uses for society at large.
But lest we be accused simply of undue cynicism, of failing to recognize that culture is as culture does, we note that while the Moon landings of the distant past brought us major societal advances (as NASA repeatedly tells us), those advances mainly consisted of Teflon and Tang. Personally, while we do occasionally cook with Teflon pans, we prefer actual orange juice to robotic orange juice (which probably makes us some kind of food weirdos).
But we can at least, and without reservation, rejoice in the fact that future astronauts will not have to take their Tang dry. Now there's progress!
Monday, November 30, 2009
Thursday, November 26, 2009
Keeping your cool with global warming
By Ken Weiss
Well, we just had to write, even over this holiday break (in the US), rather than leave any hungry or un-turkeyed readers with only our Oulipo challenge to keep them occupied.
The story about the hacked emails from the University of East Anglia's program on climate change is making the rounds. A hacker discovered correspondence, including from faculty here at our own Penn State group on climate change, that appeared to indicate cover-up and cooking of the data about whether global warming is a true, anthropogenic (human-produced) fact.
As expected, the anti group, who want to keep on truckin' (with the exhaust emissions that go with it), are reveling in what they claim is a "Gotcha!" moment, showing the faking of evidence, with the earth-huggers responding in outrage that their innocent statements have been taken out of context.
Neither party seems to be against science in this case, so we're not in religion-vs-science territory here. But who's right? Clearly each side is coloring the evidence to support its pre-existing perspective. Here is yet another instance where tribalism, bullying, and ideology are the realities of the real world of human beings. So we offer another installment in the "What is 'evidence-based'?" series of posts.
We are scientists ourselves, so we know how language used as shorthand can be twisted, and how data can be ambiguous. One of the hacked emails referred to a 'trick' used in a Nature article, which the smoke-spewers say shows that data were fudged if not faked. But 'trick' clearly meant an analytic method for blending different kinds of data into a single unified analysis, done completely in the open--not fraud in any sense. To get a continuous climate record from 1400 AD to now, the authors had to blend indirect data, such as tree-ring patterns, with direct instrumental measurements that were not available for the past. Was this legitimate? Yes, it seems to have been entirely routine and appropriate.
There was other intercepted email correspondence about 'hiding' some tree-ring data set because of a change in responsiveness to climate. So says the news story. The Penn State correspondent is quoted as replying that nothing bad was done and the dirt-digging was just a way to undermine the 'strong consensus' that global warming is real.
But that is no kind of defense! Science is not a democracy, and a 'consensus' is in itself no indicator of truth (or else we'd still be practicing Galenic four-humors medicine, and who knows what else). Comparably illegitimate would be the argument that we should excuse hiding data because, overall, the 'consensus' must be right. The well-known underpublication of negative results is a problem of exactly that sort.
We don't know the details in this particular instance, but science always works from a theory or hypothesis. We are only human, and we like our own ideas, so we try to defend them against the evidence. It is routine to find ways to fit new data into our preferred theory, even when the data on the surface seem to conflict. There is a subjective element to this, in that we try to show how our theory really is true. Eventually, if enough 'bad' data are found, some other theory is needed, as the pretzel of contorted explanations is replaced by something more plausible. But no scientist gives up easily!
Scientists certainly color evidence and the way they present it. Negative results are under-reported, if reported at all, and weak positive evidence is highly touted. Of course this will happen, when careers, prestige, money, research resources, and publications are at stake! It's not just science -- after all, shell-games are how bankers picked your pockets and got away with it, no?
In the same way, those opposed to a given view will stress the problematic data and minimize the oomph of positive findings. We write about this all the time. It's a perfectly natural part of our culture. If you think it is not part of genetics and evolutionary biology and biomedical research, you are being naive (and have negligently not been reading our recent posts!)
In a way it's a kind of built-in dishonesty in what is supposed to be a bluntly honest area of life. Part of the problem here, as in other aspects of human social life, is that most of us who pass judgment don't have the facts at hand, but have to base our judgment on what we read (even in science articles, since we, after all, didn't do the actual work ourselves). Yet it's right and important to question and criticize the system, to keep it as straight as possible.
But criticism of a theory, as in the coal-huggers' gloating over these hacked emails, can also be pure opportunism by those with something to gain that has zero to do with the real issues--such as profiting by clear-cutting a forest whether or not it damages our climate. In this case, from what we know, the emails were the usual kind of informal chat, perhaps even about coloring the data to present the best case for a theory in which the correspondents believed. So we think there was no fraud here, and no scandal.
But, of course, we're tree-huggers!
Wednesday, November 25, 2009
An Oulipo challenge!
By Ken Weiss
Here's a change of pace for a drizzly holiday week (here in Pennsylvania, at least): a game challenge for any readers brave enough to take it! You've heard of Googlewhacking? It's the old internet challenge to find two words that, together, bring up only a single hit on Google. Think it's easy? Try it! It was a lot easier when there weren't so many websites!
Now here's another entry into the intellectual game-world: the Oulipics. Oulipo is a 50-year-old elite French organization that tries to find the potential in literature--ways of doing art that follow very strict rules or structures, rather than conventional constraints (or freedoms). The name is a contraction of the French for Workshop of Potential Literature (Ouvroir de littérature potentielle). I learned about this from a BBC podcast (my only source of real news!).
One classic Oulipo formula is called "N+7": take a piece of writing and substitute every noun with the noun that appears seven nouns later in the dictionary. (Agreement such as pluralization is OK, and for our purposes, to make it easy, we'll accept an English dictionary--sorry, no French on this post! And to make this even a tad more practicable, we'll allow adding or subtracting articles like 'a' or 'the'.)
Here's an example. The original, from Longfellow's poem Evangeline, is:
THIS is the forest primeval. The murmuring pines and the hemlocks, bearded with moss, and in garments green, indistinct in the twilight, Stand like Druids of eld...
Oulipoing this, we get:
THIS is forgiveness primeval. The murmuring pinks and the henroosts, bearded with moths, and in garrets green, indistinct in the twirl, stand like drumsticks of the electorate...
Now that may be highly adventurous as literature, but it's right in the spirit of Oulipo to break the bounds of conventionality--and it's easy, no? (Actually, the original was written in the 1800s, so for my N+7's I used Samuel Johnson's Dictionary, the definitive source at the time.)
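If you'd rather automate the drudgery, here's a minimal sketch of the N+7 procedure in Python. The noun list 'nouns.txt' is a hypothetical stand-in for whatever dictionary you prefer, and the code naively treats any word it finds in that list as a noun--real Oulipians would be fussier.

    import re

    # Load an alphabetized list of nouns, one per line ('nouns.txt' is a stand-in).
    with open('nouns.txt') as f:
        nouns = [line.strip().lower() for line in f]
    position = {word: i for i, word in enumerate(nouns)}  # noun -> place in the dictionary

    def n_plus_7(text):
        def shift(match):
            word = match.group(0)
            i = position.get(word.lower())
            if i is None:                          # not in our noun list; leave it alone
                return word
            new = nouns[(i + 7) % len(nouns)]      # seven entries later, wrapping at the end
            return new.capitalize() if word[0].isupper() else new
        return re.sub(r"[A-Za-z]+", shift, text)

    print(n_plus_7("This is the forest primeval."))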
For starters, here's the most obvious example:
"There is grandiflora in this view of life-giving...."
Que les jeux commencent! (Let the games begin!)
Tuesday, November 24, 2009
Drinking your way to...health? oblivion? cancer? More on 'evidence-based' medicine
By Ken Weiss
Perhaps the main concern of our blog is to understand biological causation. Our interests are general, but the issue comes up disproportionately in understanding the findings of medical research, because that is naturally what a large fraction of funding supports.
Well, last week the BBC reported a study in Spain that says that men's health is substantially improved by moderate daily drinking (sorry, women, maybe your turn will come with the next discovery). That is, 3 or so drinks a day reduces heart disease risk by 35 to even 50 percent-- a very substantial difference indeed!
But why did this story make the news? How many times does alcohol consumption have to be studied in regard to health risks? It is included in many, perhaps the vast majority of epidemiological studies, and it has been so for many decades. How could such an effect have been missed? Indeed, how could it possibly be that we don't have solid, irrefutable knowledge by this time? Why would even a Euro cent have to be spent to study its effects any further?
This is highly relevant to the notion of 'evidence based' medicine, because the recommendation about alcohol use bounces around like silly putty. The only thing that is uniformly agreed on is that too much is, well, too much (but the greatest heart disease risk reduction includes those Spanish guys downing 11+ drinks a day).
Here is a case in which culture is part of the nature of 'evidence'. In prudish America, alcohol is considered something so pleasurable as to be necessarily a sin, and it is studied intensely. It has been controversial whether hospitals should offer patients a dinner glass of wine. Officials dread recommending drinking at all. So the 'evidence' required to make a recommendation depends on subjective value judgments. But even if one were a hard-nosed empiricist, we again ask how we could possibly not know the answers with indisputable rigor.
If this is the nature of evidence, then what evidence do we accept? Is it always the latest study? Why do we think that is any better than the next study to come down the pike tomorrow? And if so, why don't we ignore today's study? Why do we think former studies were wrong? (Some may be identifiably so, but most aren't obviously flawed.) Is some aggregate set of studies to be believed? Is it the study that lets business as usual be carried on, for whatever reason?
Our answer is that in addition to the cultural side-issues, there are so many complex factors at play--both causal, in regard to what alcohol does in the body and what the body does to it, and in regard to confounding factors--that there is no simple 'truth', and hence it is unclear what counts as 'evidence'. Confounders are factors that may not be known or measured but that are highly correlated with the measured variable of interest (here, daily alcohol consumption), so that cause itself is hard to identify. Confounders may be causal on their own, or may causally interact with the factor under study.
For example, if the more you drink the more you smoke, or the less sleep you get, or the more sex you enjoy (which, since pleasurable, is a sin and must be harmful), then these other factors, rather than the alcohol, could be what affects your heart disease risk directly.
Confounders are confoundedly difficult to identify or tease out. Their exposure patterns, and even their identity, can change with lifestyle changes. And then there are many potentially directly relevant variables, too. When do you drink? What do you drink? With or without olives, or a twist of lemon (or salt on the rim)? How uniform is the daily consumption whose average a survey measures? Even if these things were known, future exposure patterns cannot be, so today's evidence is really about yesterday's exposures, and the accuracy of future risks projected from this evidence is inherently unknowable.
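To make the confounding problem concrete, here is a toy simulation in Python, with entirely invented numbers: heart-disease risk is driven only by smoking, but because smoking tracks drinking in this imaginary population, drinking appears strongly associated with risk anyway.

    import random

    random.seed(1)
    n = 100_000
    drinks = [random.gauss(3, 1) for _ in range(n)]            # drinks per day
    smokes = [2 * d + random.gauss(0, 1) for d in drinks]      # smoking tracks drinking
    risk = [0.01 * s + random.gauss(0, 0.02) for s in smokes]  # risk driven by smoking ONLY

    def corr(x, y):
        mx, my = sum(x) / len(x), sum(y) / len(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5

    # Alcohol has no direct effect in this toy world, yet it correlates with risk:
    print(corr(drinks, risk))

Measure only drinking and risk, and you'd swear alcohol was the culprit (or, with a sign flipped, the cure). That is the confounder's trick.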
Finally, if drinking is encouraged and heart disease is reduced, will this be good for public health? Or will it increase the number of, say, fatal accidents or violent crimes? Or is it -- well, it is -- a kind of Get-Cancer program? Why? Because if you don't get heart disease you'll live longer and that by itself increases your cancer risk. Not to mention the risk of Alzheimer's, hearing and vision problems, and a host of other older-age problems.
Perhaps the oldest advice in Western medical history comes from Hippocrates, around 400 BCE: "moderation in all things." In today's world, with our romantic notions about the powers of science, such advice is so non-specific and non-technical that it is considered a cop-out, not 'evidence-based'. Maybe so, but it's still the best advice. That's because it implicitly allows for the unmeasured and unpredictable risk-factor regimes to which people are exposed--and that is evidence.
"Just the facts" sounds like raw empiricism, the kind of rational empiricism our society values. But the 'facts' weave a tangled web.
Monday, November 23, 2009
De-code me
By Ken Weiss
Well, we return to real science after our soap opera of the last post!
We referred the other day to the story that deCode Genetics, the Icelandic genomics company, has gone into Chapter 11 bankruptcy. It's been coming for some time. As a story in the Times put it,
Whatever business errors deCode may have made, a principal reason for its downfall is scientific — the genetic nature of human disease has turned out to be far more complex than thought.

We certainly have a vested interest in saying that this statement is far from the truth. "Than thought" refers to those who believed, wished, or tried to quickly capitalize on the notion--never supported by the body of facts--that common diseases were genetic in the sense of being reliably predictable from individual genotypes. Those who hoped for or hyped this tale know that today's state of affairs has been predicted, for at least a decade, and for correct reasons having, among other things, to do with evolution.
But the story is not over. deCode will be taken over by someone, and its founder says its activities will continue. Apparently, another genetic testing company, 23andMe, has just raised its prices, which has prompted speculation of financial trouble in Silicon Valley, too. Whether financial recovery will occur remains to be seen, and these entities may morph into other kinds of analysis or testing to stay in business. But it's not the first time: Celera Genomics, which co-sequenced the human genome, folded some years ago on much the same grounds--that genotypes don't effectively predict disease.
The proponents of the current GWAS (genome-wide association studies), personalized-medicine, and biobank-laden ethos, which has been so good for the grant, equipment, and recreational-genomics business, naturally want to keep the tap open on the promise of Heaven-to-come. NIH Director Francis Collins has nominally staked much of the agency's future on this belief. "Salvation's just around the corner" is always how faith-healers have justified passing the hat, and the same can be seen in the quote with which the Times story ends:
“DeCode has been very successful using genome-wide association studies, and among the first to publish many discoveries,” said Dr. David Altshuler, a medical geneticist at the Massachusetts General Hospital. But he expressed optimism that the human genome project would succeed despite deCode’s stumble.
“It would be a mistake to draw any connection between the medical promise of the human genome and the success of a specific company and business model,” he said.

This is right, to the extent that no sane person thinks we'll learn nothing more from genetics! But if the quote accurately portrays what David, who is a first-rate scientist, said, he might have been being cagey. Nobody is questioning whether the 'genome project' will 'succeed'. But we're not talking about the genome project here: what's questioned is how usefully genotype data predict disease.
It is true, depending on your definition of 'success', that GWAS has found genes that contribute to disease. But there is no reason to expect--and every biological and evolutionary reason not to expect--that major complex diseases will ever be generally predictable from individual genotypes, and that is what the main GWAS findings have actually shown. Commercial prediction services (or, more ethically responsible, within-clinic genetic counseling) will grow in their ability to predict certain kinds of disorders, like classical single-gene diseases, and that will include a subset of common 'complex' diseases. But most risk variants confer only small, statistical, environmentally affected risks, not certain ones.
The Times article was again incorrect, and rather naively so, in stating that
Natural selection seems to be much more efficient than expected at ridding the population of dangerous genes, even of those that act well after the age of reproduction.

To the contrary, it was always 'expected' that selection would purge very serious mutations, so that mainly brand-new ones, or older but highly recessive (low-penetrance) ones, would be found at any given time. And selection has little way to purge mutations whose effects come long after reproduction.
The Times was right, however, that the aggregate effects of large numbers of rare alleles seem to be the reason why familial correlation of risk is high for many or even most traits, implicating genetic factors even though we can't find the genes. There is--and has long been--a plethora of evidence of many kinds, from many species, suggesting that this is the case.
The problem was the genome frenzy, driven largely by a gold-rush mentality, but also by genuine frustration at the difficulty of understanding the causes of so many important diseases. That frenzy could not be stopped, and many GWAS and biobank studies now being launched or committed to will face similarly bleak futures; unfortunately, they entail commitments to long-term expense, because vested interests cannot easily be slowed. Country after country has sheepishly followed the wealthier countries, and they're now heavily invested in these costly but questionable studies. They could have known better!
The story is subtly deeper than meets the eye, too. deCode was founded on the idea that Iceland, as an isolate founded by a small number of individuals, would have greatly reduced variation, so that causal variants could more easily be detected. There were always problems with that logic, and it quickly turned out that the founding population wasn't all that small relative to the questions at hand. Instead, Iceland offered a different great potential advantage: a whole-population genealogy, connecting everyone in families back for centuries, with comprehensive medical records in recent generations.
There were apparently lots of political and social problems involved, but the great advantage deCode had, at least in principle, was that a genealogy generally carries fewer variants than a whole population, and linkage studies in families have more statistical power to detect effects than comparisons between cases and controls. That loaded the dice in deCode's favor, and they did find a number of things. But the bottom line was that those findings did not reduce complex traits to simple ones. Just as experiments with crosses between two inbred animal strains have shown, even very small populations carry enough variation for complex traits to remain complex.
deCode failed to do what its commercial hyperbole promised, even with these advantages. That is a commercial failure, but from a scientific point of view it's a positive contribution to understanding--the kind needed to show convincingly that over-geneticizing common diseases is a mug's game, for reasons we've known for a long, long time. What else is science about, except to learn and adapt to the realities of Nature?
The best way to slow this train so we can get off and go to more creative approaches is to formally curb funding for these kinds of wholesale genetic studies, and thus force investigators to think and think again about other ways to approach these diseases--and not just by scaling up their demands for funds. As long as more of the same is fundable, that's what'll be proposed.
Other approaches, going under names like 'network' or 'systems' biology, will likely make progress, and can probably help open the conceptual door, since molecular interactions are what life is all about (as we argue as a major theme of The Mermaid's Tale). However, the network studies to date mainly show just what GWAS finds and evolutionary biology predicts: traits can be genetic in the sense that many interacting genes contribute, yet not be easily targetable in terms of 'personalized' medicine based on individual genotypes.
So, if there's something clearly genetic in your family, see a legitimate genetic counselor. Otherwise, put away your wallets: tea leaves and crystal balls are almost as effective as DNA sequence for predicting risk, and much, much cheaper. Meanwhile, to stay healthy, eat right, exercise, and stop worrying. Drink your tea, but toss away the leaves!
Saturday, November 21, 2009
The miracle of soap
Two days ago my sister Jennifer had a baby goat screaming in pain. She treated the kid for everything she could think of (Jennifer is a miracle worker when it comes to goat medicine). The poor baby clearly hurt, but she wasn't having diarrhea, so my sister didn't want to give her her usual fix for that, Pepto Bismol. Instead she fed her some warm water mixed with molasses, and something else I don't remember now, and decided to wait it out. But the poor thing really hurt, and Jennifer worried she was going to lose her. When I spoke with Jen on the phone, I could hear extreme distress in the background.
After we talked, I suddenly thought maybe the poor thing was constipated. So I texted that idea to Jen, and Jen texted back 1/2 hour later saying that she'd given her some soap, and she'd pooped and seemed to be feeling better. Yesterday, she said she'd drunk her whole bottle and was back out in the barn.
Ken said, with Holly's post on her experience with her cyst coming quickly to mind, "Wait a minute, this is too miraculous. Jen feeds her soap and she immediately poops and is all better. What kind of weird alternative medicine is that?" So he called Jennifer and asked how washing the poor baby's mouth out with soap had worked so fast. Jennifer said, "Of course I didn't feed it to her! I squirted it into her rectum!"
Ken was very relieved.
Friday, November 20, 2009
More on the nature of evidence: mammograms, pap smears, PSA tests and cancer
By Ken Weiss
We're doing a series of posts on the subject of 'evidence-based' medicine, because of a flurry of items in the news. They have to do with three screening tests for cancer: PSA tests for prostate cancer in men, and Pap smears (for cervical cancer) and mammograms (for breast cancer) in women. (The issues are much, much broader, but of a similar nature with regard to many other things--even things like climate change--where even people who would strongly proclaim themselves to be evidence-based behave the same way.) Here's the latest on the cancer stories from the New York Times.
In all cases, the problem is that early diagnosis can lead to treatment of tumors that would either never reach clinical significance (prostate cancers in particular) or would go away on their own. The treatments have costs including the trauma of the diagnosis in the first place, the financial costs to the individual and the health-care system, and the trauma of surgery or other treatments like radiation or chemotherapy.
But people seem to be quickly deciding to carry on as usual anyway. Why? Presumably doctors will order tests at least in part to avoid lawsuits. Patients will presumably continue because the fear of cancer outweighs the more abstract and distant fear of the treatments and the diagnostic tests, because costs may be covered by their health insurance, and because they still believe screening to be best. And it takes a hell of a lot of guts to say "don't bother to check me," or "well, let's not intervene in this early tumor right now; let's wait to see what happens."
But science is about evidence. The purpose of statistical significance tests (which these studies largely rest on, since risks, costs, and benefits are always based on statistics) is to inform decision-making -- that's the entire rationale behind such studies. So if we're really as evidence-based as we think we are, why not respond to the new data by changing our behavior?
There are many reasons, but in a nutshell, clearly we are not just evidence-based. We choose a significance level (the famous p value) in an essentially arbitrary, conventional way, but that doesn't mean we must follow the result rigidly. But if we don't follow it, why not?
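As an aside on just how conventional that threshold is, here is a toy calculation (a Python sketch with made-up screening counts, not numbers from any of the actual studies): the same data can pass at one customary cutoff and fail at another.

    from math import sqrt, erf

    def two_prop_p(x1, n1, x2, n2):
        """Two-sided p-value for a difference between two proportions (pooled z-test)."""
        p1, p2 = x1 / n1, x2 / n2
        p = (x1 + x2) / (n1 + n2)                      # pooled proportion
        se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
        z = abs(p1 - p2) / se
        return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))  # normal-tail probability

    p = two_prop_p(60, 1000, 85, 1000)                 # hypothetical outcome counts
    for alpha in (0.05, 0.01):
        verdict = "significant" if p < alpha else "not significant"
        print(f"alpha = {alpha}: {verdict} (p = {p:.3f})")

Nothing in the data changes between the two verdicts; only our convention does.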
Fear, for example, is part of the equation. Fear of disease. Fear of lawsuits. Fear of the consequences of a missed diagnosis. Fear of changing how we do things. This outweighs the fear of wrong decisions, which in any case can never be proven wrong (once you carve out a tumor that would have regressed on its own, you can never know that, so you can always feel you did the right thing).
Of course, when studies come up with inconsistent, differing conclusions on a regular basis, for the same question, as we see in this general field, there is also the subjective aspect of deciding which study to believe. Is the most recent one the definitive one? It is not easy to decide which bit of evidence is the evidence that counts.
Relevant to this is that, in none of these cases, do we think those who want to stand pat are challenging the studies or their p values. That would be a different thing, and it sometimes happens, most properly when a single new study challenges accepted wisdom. But that's not the case here.
The 'scientific method' is not nearly so straightforward as we tend to say in our classrooms and in the media. It's nobody's fault. But decision-making, even by scientists, is substantially subjective. It would be better if we recognized the problem. How we might then change our criteria for change is anybody's guess, but it might help.
Getting the bugs out of medical research
Infectious disease born again?
Of many interesting stories for us to blog about this week -- so many that they'll be spilling over into next week -- here's one that seems to represent a more sensible approach to disease than the relentless focus on genetics that we so commonly see. It's about a new effort by pharmaceutical companies to invest in vaccine development. The AP story says
Malaria. Tuberculosis. Alzheimer's disease. AIDS. Pandemic flu. Genital herpes. Urinary tract infections. Grass allergies. Traveler's diarrhea. You name it, the pharmaceutical industry is working on a vaccine to prevent it.
Another story of what seems to be money well-spent appears on the BBC website. Researchers have developed a new-fangled lab-on-a-chip that will allow easier, faster and cheaper diagnosis of dozens of diseases.
The device relies on an array of antibody molecules that are designed to latch on to the protein-based molecular markers of disease in blood.
The antibodies are chemically connected to molecules that emit light of a specific colour when illuminated - but only when they have bound to the disease markers.
Some of the vaccines in the pipeline won't pan out, but some surely will, and we can imagine the lab-on-a-chip device being useful in many settings, including medically underserved areas, so we find these stories rather heartening. The money being invested is private, not taxpayer money, but not long ago a lot of pharmaceutical money was being bet on personalized genomics, and so on, which our regular readers will recognize as efforts we wouldn't have put our money on. So, it's good to see that following the money takes us in a different direction these days -- industry sees a lot more promise in preventing and treating infectious disease than in fixing genes. Indeed, a lot more disease seems to be infectious than the age of genetics led us to believe.
The Fall of deCode Genetics
It's interesting to juxtapose these two stories with this story from the Wednesday New York Times that reports on the demise of a company established to "exploit the promise of the human genome", that is, to profit from what it could learn about genetic disease from the genealogies of Iceland. Predicated on the idea that common genetic variants would be found to explain most complex disease, deCode Genetics set out to find those variants in Iceland and then develop drugs to target them. But, it turns out that complex disease is too complex for that. Again, regular readers won't be surprised if we find it hard to suppress a little "told you so".
Now, here we want to be careful about the concepts -- and it's related to central issues in The Mermaid's Tale. Life is lived, day to day, on the molecular level. Infection is essentially attack from without, and the immune system tries to recognize molecular signatures of the invading soldiers, to latch onto them and destroy them. Vaccines traditionally help the immune system do that, by exposing it to harmless mimics of the real thing (dead viruses, so to speak).
There are countless infectious diseases, affecting most body systems, and more and more complex 'chronic' diseases that were thought to be 'environmental' or 'genetic' in the traditional senses, seem to be turning out to have infectious or inflammatory components. Thus, enhanced abilities to make vaccines could have farther-reaching implications than has been thought.
The immune system is 'genetic' of course, and its functions are fairly close to genes in many ways. But there may be other and perhaps even surprising ways this subject can bring us back to genetics. We'll deal with them in a post in the near future....
Thursday, November 19, 2009
Mammography: Grim tales of real life.
By
Ken Weiss
The use of x-rays to detect breast cancer, known as mammography, started around 1960. The idea was that x-rays could give an in-depth picture of the breast that would be superior to palpation for detecting small tumors that had not yet become obvious or symptomatic. It seemed like a very good idea that could save many lives, and became not just widespread but formally recommended as part of preventive care.
This was based on the belief, or perhaps even dogma, that tumors are 'transformed' cells that are out of control and will continue dividing without the normal context-specific restraint. Tumors induce vascularization that nourishes their cells, and eventually cells flake off into the blood or lymph systems, to be carried along to other sites where they lodge, spreading the tumor (this is called metastasis). If anything, treatment or just competition would lead this spreading clone of transformed cells to gain an increasing evolutionary advantage over the woman's (or, in much rarer instances, the man's) normal tissue: tumor cells would continue to accumulate mutations at the regular or even an accelerated rate, giving them further growth advantage.
Sometimes tumors seemed to regress, but this was difficult to explain, and often it was thought that perhaps the initial diagnosis had been wrong. If the tumor had escaped immune destruction when it was only one or a few cells in size, what could later make it regress?
Thus arose the general dogma in cancer biology that the earlier a tumor was caught, the less likely it was to have spread. That also meant the earlier in life one was screened, the better. Local surgery could then cure the disease.
But there was a problem: the same x-radiation used to detect the differing cell densities of tumor and normal tissue is also a very well-known mutagen and cause of cancer!
Worse, the more actively cells were dividing, the more liable they were to mutation, and thus to passing mutations on to growing numbers of descendant daughter cells in the tissue. Since breast tissue proliferates with every menstrual cycle, pre-menopausal women would be particularly vulnerable to iatrogenic carcinogenesis. Yet the idea was that earlier screening was better!
Even further, early onset cases are more likely to be, or to become, bilateral (both breasts) or multiclonal (more independent tumors), and it was suspected and is now known that some of this, at least, is due to inherited susceptibility mutations (in BRCA1 and BRCA2 and a few other genes). These mutations put a woman at very high risk, so earlier and more frequent screening--but higher total radiation doses!--could be important.
Especially after the atomic bombing of Japan in World War II, and with the subsequent fallout from nuclear reactors and bomb tests and the proliferation of diagnostic x-rays, many extensive studies were done to document the risk; chest x-rays used in routine tuberculosis screening, for example, were shown to raise the risk of cancers, including breast cancer.
So, to screen or not to screen? The obvious answer to this dilemma was a grim cost-benefit analysis: how many cancers are detected and cured versus how many are caused by mammographic screening itself? Even grimmer, this could be evaluated by age, so that recommendations could be made based on a judgment as to how favorable the age-specific balance between cause and cure was. And there's more: radiation-induced carcinomas take years to develop before they appear as clinically detectable tumors, so evaluating and attributing risk was (and is) not easy.
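To make that bookkeeping concrete, here is a minimal sketch of the kind of age-specific tally such an analysis has to produce. Every number below is an invented placeholder, not a real epidemiological estimate; only the structure of the calculation is the point.

```python
# A minimal sketch of the grim cost-benefit bookkeeping described above.
# All rates are hypothetical placeholders, NOT real epidemiological
# estimates; only the structure of the calculation matters here.

# Per 100,000 women screened annually in each age band (all invented):
AGE_BANDS = {
    "40-49": {"detected": 150, "cure_gain": 0.15, "induced": 8},
    "50-59": {"detected": 280, "cure_gain": 0.20, "induced": 5},
    "60-69": {"detected": 390, "cure_gain": 0.20, "induced": 3},
}
# detected:  cancers found by screening per year
# cure_gain: fraction of those cured *because* of earlier detection
# induced:   radiation-induced cancers per year of screening
#            (appearing only years later, which is part of what makes
#            attribution so hard)

def net_benefit(band):
    """Extra cures attributable to screening, minus cancers it plausibly causes."""
    b = AGE_BANDS[band]
    return b["detected"] * b["cure_gain"] - b["induced"]

for band in AGE_BANDS:
    print(f"{band}: net benefit ~ {net_benefit(band):.0f} per 100,000 screened")
```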
Breast cancer is unfortunately quite common, but the differences being considered, among many additional variables known and unknown, are small. That means very large, long-term studies are needed even to come to a tentative, rational ('evidence-based') conclusion. The result was a recommendation of occasional mammograms for women in their 40s, with more frequent screens in the 50s and beyond.
This made sense....until a few studies recently began to appear with curious results. Several studies showed that the number of cancers in women who had not been screened was lower than in women who had been screened. How can this be? The answer appears to be that screening leads to the detection, reporting, and treatment of tumors that would eventually disappear on their own. So, in a substantial fraction of cases, screening led to interventions--some rather grim in themselves, with all their cost and trauma--for tumors that would have gone away without any treatment.
The same has been found recently in PSA testing of men for prostate cancer, so it's not a fluke of the study design. Scars of remitted tumors have been found, showing clearly that they regressed without diagnosis or treatment.
So now a panel of experts has recommended backing off, and doing screening less often (except in those who, in a grim kind of good luck, know they carry a high-risk mutation and hence need to be checked carefully, and often, where early detection can more clearly be effective).
Now if that isn't 'evidence', what is? Yet this is controversial, because it goes against accepted practice. The Wednesday NY Times reports that some physicians don't plan to change their recommendations (what will insurance companies--our most noble citizens, and the entities that will actually drive this decision--do?). The Secretary of Health and Human Services also backed away from the new report. This is curious to say the least, and relevant, of course, to the notion of 'evidence-based' medicine that we discussed in a recent post, and to why we think the notion of evidence is actually rather slippery.
This strikes close to home for many of us, who have very close relatives who have died of breast cancer. For us, research on this subject could hardly be more important. If you're a young woman you face these grim or even terrifying choices. But in real life, rather than fairy stories, there's no easy answer.
Wednesday, November 18, 2009
What kind of evidence is evidence-based medicine based on?
Evidence-based medicine
"Evidence-based medicine" is much bruited about these days, but was medicine ever not evidence-based? And aren't hunches still an important part of the art of medicine? Perhaps the question in our era is whether what is counted as evidence, even by those who mean data formally gathered by science, has gotten easier to evaluate? In this regard, several stories about the efficacy of various drugs and even vitamins have showed up in the news this week. One is a report of a study of the effectiveness of cholesterol-lowering medication vs. vitamin B12, published in the New England Journal of Medicine, and the other a report of the association of vitamin D insufficiency with, well, just about everything. After Holly's post last week about the effectiveness, or not, of her homeopathic treatment, it seems fitting to look at these reports in a little more detail.
The common criticism leveled by scientists at alternative medicine is that it is not evidence-based, which presumably means that formally designed studies have not shown a statistically significant benefit, and/or that no mechanism is known, or that no studies have been done. Or, in practice, it's that doctors wildcat--treat patients their own way for their own reasons--rather than following some centrally specified way.
It is at least slightly strange that placebo effects are not credited as a form of medicine, even if real, because they are not rigorously understood--e.g., in 'dose-response' kinds of terms. In any case, evidence-based medicine (EBM) is defined in Wikipedia as an approach that
Science as we know it is a method that our society has evolved over the past 200-300 years. It is based on replication, the attempt to isolate individual variables (e.g, averaging over all other variables) and to show that their variation is causally associated with variation in target outcomes such as disease.
In former times different kinds of evidence were accepted. For example, it was part of Aristotle's worldview that, in effect, our minds were built to have a true picture of the nature of the real world, so that deductive reasoning had some empirical cogency. We don't accept that today. And, spiritual experiences or opinions, not being subject to the same kind of technical scrutiny, are not considered evidence.
We're not arguing that informal experience should count as evidence, or that mystical water-cures should replace surgery. Novocaine alone is a strong enough argument for western science!
But people do tend to have a high acceptance tolerance for views based on current study results, even though history shows how often and how wrong, often diametrically wrong, they can be.
We can't know the true truth until we know it, and knowledge may always be incomplete. But how do we know how to implement 'evidence-based' criteria, if this does not at its core rest on subjective judgment about 'evidence'? If it is that we should base action on what we know today from a particular kind of evidence (that is described as 'scientific'), that is perfectly defensible in principle as simply being the best we can do. But even then, the evidence is not so easy to interpret.
Evidence-based decisions?
Statins and vitamin B12
The current cholesterol study getting so much play reports that the two treatments that were being tested reduce LDL (the 'bad' cholesterol), but that the (pricey) medication under review, ezetimibe, contained in the cholesterol meds Zetia and Vytorin, isn't as good at reducing arterial plaque as (the cheaper one) Niaspan (niacin, or vitamin B12), and Niaspan raises HDL (the 'good' cholesterol). Because millions of people control their cholesterol levels with ezetimibe drugs, this is of some concern.
But, and there are always buts in these stories, because perfect studies are few and far between, the study was halted early (that is, before all the subjects in the study had completed the course of treatment) because, according to the investigators, it would have been unethical to continue once it became clear that Niaspan was out-performing Zetia. The problem with this, as critics note, is that continuing the study through to its planned conclusion might have led to a different outcome--the study was supposed to follow a design that presumably was explained and justified, and passed peer review, in the grant proposal, presumably for good reasons. Also, the study was fairly small--only 208 subjects, of 363 enrolled, completed the study, which reduces its power. And, perhaps most important in terms of how this study will be evaluated, even though it tells us nothing about the science per se, it turns out that the study was funded by Abbott Laboratories, the maker of Niaspan, and several of the investigators, including the principal investigator, have received thousands of dollars in speaking fees from Abbott. Does this prove that the study is biased? No, but given the vested interest of the funder, it's obviously difficult to trust the results.
This is the story getting so much play in the press. One wonders why this is, given all the good reasons to doubt it (A nice laying out of many of those reasons can be found here.) It's tempting to assume that it's primarily because a lot of money is at stake, out of the pockets of patients and insurance companies and Merck and Co, the makers of Zetia into the pockets of Abbott Labs--as is the health of millions of people with high cholesterol. But, even without the issues about vested interest, are the results clear cut?
Vitamin D
The latest vitamin D studies suggest that vitamin D deficiency is the cause of everything that ails us, including depression, heart disease, diabetes, autoimmune diseases, some cancers and TB, and can lead to 'early death'--and more. According to these studies, two thirds of Utahans don't have enough vitamin D. And that's what most studies of vitamin D say about most populations studied. Some researchers even say that 100% of us have insufficient vitamin D--sometimes it's called 'subclinical insufficiency', meaning that a threshold has been set and most of us don't meet it, regardless of whether we show any ill health effects or not.
The Utah study divided a sample of 27,000 people into three categories by their vitamin D levels, high, sufficient, or deficient and followed them for a year. Those with low vitamin D were much more likely to die, to suffer from depression, heart disease, and so.
Now, no one can doubt that vitamin D is essential to our health. Vitamin D deficiency really can cause rickets because it's involved in calcium uptake, and rickets is the result of weakening and softening of bone, usually because of low vitamin D levels, which lead to inadequate calcium uptake. That's been known for a long time, and it's been known for even longer that exposure to the sun can cure it. As can cod liver oil, one of the few dietary sources of vitamin D.
But, these were observational studies, which by themselves can't be used to determine cause and effect. Further, let's take depression and its purported association with vitamin D deficiency. Our primary source of vitamin D is exposure to the sun. But let's suppose that people suffering from depression aren't spending a lot of time outdoors soaking up rays--not an unreasonable supposition, really. So which comes first? Vitamin D deficiency or depression? If we're all deficient, and depression and heart disease and cancer are common, how do we determine actual cause and effect? A prospective study might help--following people with sufficient vitamin D levels at the outset for some time, and collecting data on their vitamin D levels and disease experience, and in fact it looks like this is the next stage in this study. But then, even when low vitamin D comes before the cancer, does that make it determinative?
Basing decisions on what kind of evidence?
So, what does evidence-based medicine do with this kind of evidence? And these studies are just a few among many that are yielding evidence that's not all that easy to know what to do with. An underlying problem is that there are too many unmeasured and hence possibly confounding variables, along with too many shortcomings in our knowledge of human biology to know how to definitively evaluate these kinds of studies. Those are nobody's fault, and we can only work away at improving.
In an intersection of new and old age, and thus bringing up the question of what counts as health care, and what as evidence, the BBC reports that meditation reduces risk of stroke and heart attack. A nine year case-control study of its effects was carried out by researchers at the Medical College in Wisconsin and the Maharishi University in Iowa. The study showed risk of death, heart attack and stroke was reduced by 47% in the group who meditated. This study now enters the pool of evidence for evidence-based medicine. It was a state-of-the-art study, yielding statistically significant and potentially reproducible results. Even though it tells us nothing about how meditation works, and might even go counter to how some think western medicine should work, that doesn't matter. Demonstrating effect is enough for EBM. Unless, of course, doctors aren't comfortable recommending meditation in spite of the evidence.
Another example of just that, that we'll do another post on for other reasons, is the issue of the right practice for using mammograms to find breast cancer. Several recent studies have lead a government science panel to suggest that the benefits of early detection are not worth the cost in terms of over-treatment, in younger women, and have suggested less screening for women in their 40s. Yet, today, the NY Times reports some doctors saying they won't change their screening practices because they are not persuaded:
"Evidence-based medicine" is much bruited about these days, but was medicine ever not evidence-based? And aren't hunches still an important part of the art of medicine? Perhaps the question in our era is whether what is counted as evidence, even by those who mean data formally gathered by science, has gotten easier to evaluate? In this regard, several stories about the efficacy of various drugs and even vitamins have showed up in the news this week. One is a report of a study of the effectiveness of cholesterol-lowering medication vs. vitamin B12, published in the New England Journal of Medicine, and the other a report of the association of vitamin D insufficiency with, well, just about everything. After Holly's post last week about the effectiveness, or not, of her homeopathic treatment, it seems fitting to look at these reports in a little more detail.
The common criticism leveled by scientists at alternative medicine is that it is not evidence-based, which presumably means that formally designed studies have not shown a statistically significant benefit, and/or that no mechanism is known, or that no studies have been done. Or, in practice, it's that doctors wildcat--treat patients their own way for their own reasons--rather than following some centrally specified way.
It is at least slightly strange that placebo effects are not credited as a form of medicine, even if real, because they are not rigorously understood--e.g., in 'dose-response' kinds of terms. In any case, evidence-based medicine (EBM) is defined in Wikipedia as an approach that
aims to apply the best available evidence gained from the scientific method to medical decision making. It seeks to assess the quality of evidence of the risks and benefits of treatments (including lack of treatment).
EBM came of age in the 1980s, as an attempt to standardize medical care. It's probably not entirely coincidental that this was about the same time that reliance on expensive technological diagnostic tools was increasing, and that malpractice lawsuits, and settlement amounts, began to rise--though we have no evidence of actual cause and effect here. A doctor is expected to follow locally accepted practice, but that criterion itself requires formalized documentation.
Science as we know it is a method that our society has evolved over the past 200-300 years. It is based on replication and on the attempt to isolate individual variables (e.g., by averaging over all other variables) and to show that their variation is causally associated with variation in target outcomes such as disease.
In former times different kinds of evidence were accepted. For example, it was part of Aristotle's worldview that, in effect, our minds were built to have a true picture of the nature of the real world, so that deductive reasoning had some empirical cogency. We don't accept that today. And, spiritual experiences or opinions, not being subject to the same kind of technical scrutiny, are not considered evidence.
We're not arguing that informal experience should count as evidence, or that mystical water-cures should replace surgery. Novocaine alone is a strong enough argument for western science!
But people do tend to accept views based on current study results rather readily, even though history shows how often, and how badly--sometimes diametrically--wrong such views can be.
We can't know the true truth until we know it, and knowledge may always be incomplete. But how do we know how to implement 'evidence-based' criteria, if this does not at its core rest on subjective judgment about 'evidence'? If it is that we should base action on what we know today from a particular kind of evidence (that is described as 'scientific'), that is perfectly defensible in principle as simply being the best we can do. But even then, the evidence is not so easy to interpret.
Evidence-based decisions?
Ezetimibe and niacin (vitamin B3)
The current cholesterol study getting so much play reports that the two treatments being tested both reduce LDL (the 'bad' cholesterol), but that the (pricey) medication under review, ezetimibe, contained in the cholesterol meds Zetia and Vytorin, isn't as good at reducing arterial plaque as the cheaper Niaspan (niacin, or vitamin B3), and that Niaspan also raises HDL (the 'good' cholesterol). Because millions of people control their cholesterol levels with ezetimibe drugs, this is of some concern.
But, and there are always buts in these stories because perfect studies are few and far between, the study was halted early (that is, before all the subjects in the study had completed the course of treatment) because, according to the investigators, it would have been unethical to continue once it became clear that Niaspan was out-performing Zetia. The problem with this, as critics note, is that continuing the study through to its planned conclusion might have led to a different outcome--the study was supposed to follow a design that was explained, justified, and passed by peer review in the grant proposal, presumably for good reasons. Also, the study was fairly small--only 208 subjects, of 363 enrolled, completed it--which reduces its statistical power. And, perhaps most important in terms of how this study will be evaluated, even though it tells us nothing about the science per se, it turns out that the study was funded by Abbott Laboratories, the maker of Niaspan, and several of the investigators, including the principal investigator, have received thousands of dollars in speaking fees from Abbott. Does this prove that the study is biased? No, but given the vested interest of the funder, it's obviously difficult to trust the results.
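As a rough illustration of why the dropouts matter, here is a back-of-the-envelope power calculation using the standard normal approximation for a two-arm comparison. The effect size, and the assumption that subjects split evenly between two arms, are ours for illustration only, not figures from the trial.

```python
# How statistical power falls when 363 enrolled subjects shrink to 208
# completers. Normal approximation for a two-sided, two-sample test.
from scipy.stats import norm

def two_sample_power(n_per_arm, effect_size, alpha=0.05):
    """Approximate power to detect a standardized difference `effect_size`."""
    z_crit = norm.ppf(1 - alpha / 2)
    # The test statistic's mean is d * sqrt(n/2) in standard-error units
    z_effect = effect_size * (n_per_arm / 2) ** 0.5
    return norm.cdf(z_effect - z_crit)

d = 0.3  # assumed standardized effect size, purely illustrative
print(f"power with 363 enrolled:   {two_sample_power(363 // 2, d):.2f}")  # ~0.81
print(f"power with 208 completers: {two_sample_power(208 // 2, d):.2f}")  # ~0.58
```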
This is the story getting so much play in the press, and one wonders why, given all the good reasons to doubt it (a nice laying out of many of those reasons can be found here). It's tempting to assume that it's primarily because a lot of money is at stake--out of the pockets of patients, insurance companies, and Merck and Co., the makers of Zetia, and into the pockets of Abbott Labs--as is the health of millions of people with high cholesterol. But, even setting aside the issues of vested interest, are the results clear-cut?
Vitamin D
The latest vitamin D studies suggest that vitamin D deficiency is the cause of everything that ails us, including depression, heart disease, diabetes, autoimmune diseases, some cancers and TB, and can lead to 'early death'--and more. According to these studies, two thirds of Utahns don't have enough vitamin D. And that's what most studies of vitamin D say about most populations studied. Some researchers even say that 100% of us have insufficient vitamin D--sometimes it's called 'subclinical insufficiency', meaning that a threshold has been set and most of us don't meet it, regardless of whether we show any ill health effects or not.
The Utah study divided a sample of 27,000 people into three categories by their vitamin D levels--high, sufficient, or deficient--and followed them for a year. Those with low vitamin D were much more likely to die, and to suffer from depression, heart disease, and so on.
Now, no one can doubt that vitamin D is essential to our health. Vitamin D deficiency really can cause rickets: vitamin D is involved in calcium uptake, and rickets--the weakening and softening of bone--results when inadequate vitamin D leads to inadequate calcium uptake. That's been known for a long time, and it's been known for even longer that exposure to the sun can cure it. As can cod liver oil, one of the few dietary sources of vitamin D.
But, these were observational studies, which by themselves can't be used to determine cause and effect. Further, let's take depression and its purported association with vitamin D deficiency. Our primary source of vitamin D is exposure to the sun. But let's suppose that people suffering from depression aren't spending a lot of time outdoors soaking up rays--not an unreasonable supposition, really. So which comes first? Vitamin D deficiency or depression? If we're all deficient, and depression and heart disease and cancer are common, how do we determine actual cause and effect? A prospective study might help--following people with sufficient vitamin D levels at the outset for some time, and collecting data on their vitamin D levels and disease experience, and in fact it looks like this is the next stage in this study. But then, even when low vitamin D comes before the cancer, does that make it determinative?
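To see how slippery such 'evidence' can be, here is a toy simulation in which depression keeps people indoors, sunlight drives vitamin D, and vitamin D has no causal effect on depression whatsoever--yet a one-shot observational comparison of the Utah variety finds a strong association. Every parameter is invented.

```python
# Reverse causation masquerading as association. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)
n = 27_000                                        # same order as the Utah sample

depressed = rng.random(n) < 0.10                  # 10% depressed (assumed)
sun_hours = rng.normal(2.0, 0.5, n)               # daily time outdoors
sun_hours[depressed] -= 1.0                       # depression keeps people indoors
vitamin_d = 10 * sun_hours + rng.normal(0, 3, n)  # vitamin D driven by sun alone

low_d = vitamin_d < np.percentile(vitamin_d, 33)  # bottom third = 'deficient'
print(f"depression rate, 'deficient':   {depressed[low_d].mean():.1%}")
print(f"depression rate, everyone else: {depressed[~low_d].mean():.1%}")
# The 'deficient' group shows far more depression, although vitamin D
# affects nothing in this model: the arrow of causation runs the other way.
```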
Basing decisions on what kind of evidence?
So, what does evidence-based medicine do with this kind of evidence? And these studies are just a few among many that are yielding evidence that's not all that easy to know what to do with. An underlying problem is that there are too many unmeasured and hence possibly confounding variables, along with too many shortcomings in our knowledge of human biology to know how to definitively evaluate these kinds of studies. Those are nobody's fault, and we can only work away at improving.
In an intersection of the New Age and the old--raising the question of what counts as health care, and what as evidence--the BBC reports that meditation reduces the risk of stroke and heart attack. A nine-year controlled study of its effects was carried out by researchers at the Medical College of Wisconsin and the Maharishi University in Iowa. The study showed that risk of death, heart attack and stroke was reduced by 47% in the group who meditated. This study now enters the pool of evidence for evidence-based medicine. It was a state-of-the-art study, yielding statistically significant and potentially reproducible results. Even though it tells us nothing about how meditation works, and might even run counter to how some think western medicine should work, that doesn't matter: demonstrating effect is enough for EBM. Unless, of course, doctors aren't comfortable recommending meditation in spite of the evidence.
Another example of just that--one we'll take up in another post for other reasons--is the question of the right practice for using mammograms to find breast cancer. Several recent studies have led a government science panel to suggest that, for younger women, the benefits of early detection are not worth the cost in over-treatment, and to recommend less screening for women in their 40s. Yet, today, the NY Times reports some doctors saying they won't change their screening practices because they are not persuaded:
“It’s kind of hard to suggest that we should stop examining our patients and screening them,” said Dr. Annekathryn Goodman, director of the fellowship program in gynecological oncology at Massachusetts General Hospital. “I would be cautious about changing a practice that seems to work.”
This shows the elusive, at least partly subjective and judgment-based idea of 'evidence' and how to use it. Two criteria are implicitly mixed here: the evidence from formal studies, and the doctor's belief that current practice 'seems to work'. The challenge is to design evaluations that bring these criteria into agreement. What happens with non-traditional medicine is a more complicated question....
Tuesday, November 17, 2009
Wallace, Bates and the Ten Thousand Beetles Project
By
Ken Weiss
In the middle 1800s, numerous European adventurers sallied forth into the wild and largely unknown regions of the world, to see what they could find. There were many such collectors, but two of the most famous were Henry Bates (in whose honor Batesian mimicry--the selective advantage of protective coloration--was named) and Alfred Wallace (after whom the theory of evolution was not named but should have been at least co-named). They slogged through the Amazon basin, and (in Wallace's case) the islands of what is now Indonesia.
They were 'naturalists', collecting and studying the diversity of exotic (to Europeans) animal species of these tropical wonderlands. Their purpose was mixed but an important aspect was that they supported themselves and their expeditions by sending home trophy species to sell to the wealthy to show off in their parlors and impress their friends. We often think of them now as 'beetle collectors'.
This is rather like the 10,000 vertebrate genome project being proposed by a large consortium of modern beetle collectors (Genome 10K, they're calling it). The idea is to sequence the genomes of 10,000 vertebrate species. The cost of sequencing a genome is dropping, and is promised soon to be around $1000--though today's real costs are far higher, and once everything that goes along with the raw sequencing is counted, a project like this can run to a mere $10 billion! Compared to landing on the moon, making another nuclear submarine, or who knows how many other mega-projects, it seems cheap, almost a bargain, and well within the routine claims that sciences are making of the public honey-pot.
Not so fast! For starters, this is only the foot in the door. Once these sequences are done, there will inevitably follow an open-ended demand for persons to stuff, curate, and protect the specimens, and for gear and programmers to house them in the Museum of Natural Genomes.
So is this proposal one that should be given priority? It is certainly a legitimate scientific objective. But such proposals are becoming the routine thinking of Big Science, with very little expressed concern for the actual likely impact of the research, much less what else could be done with the same resources. We can't know for sure, and much will of course be learned, even including some surprising or even important facts here and there. But is it likely that such data will truly transform any thinking we already can do, or could do with only, say, 100 more sequences, targeted to specific problems? After all, we already know a lot about cows and dogs and salamanders. What is the sequence (of single individuals, deprived of phenotype data) going to tell us? After all, beetles are beetles.
It may seem that we're once again just being cranks and moaning about the state of the scientific world. But think about this: Just the entry fee of $10 billion alone could fund 10,000 million-dollar grants to do more focused, question-driven science. Or 100,000 $100,000 grants. That's a lot of real questions, and a lot of investigators (with careers of their own to worry about) who would be funded but won't be if the neo-beetles are collected. And the proliferation of opportunities would continue because there would be no demand for unending Museum maintenance expenses.
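For what it's worth, here is that opportunity-cost arithmetic spelled out, using the round numbers above:

```python
# The trade-off sketched above, in the post's own round numbers.
entry_fee = 10_000_000_000  # the $10 billion entry fee

for grant_size in (1_000_000, 100_000):
    print(f"${grant_size:,} grants forgone: {entry_fee // grant_size:,}")
```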
There is, of course, another very big difference between today and the glory days of Wallace and Bates canoeing upriver with their butterfly nets. In those days, vain wealthy people were paying the tab. Now, it's still feathering the scientists' nests, but it's with our money as taxpayers, and given all the lobbying and maneuvering, we don't really have a say in the science that gets done. This is a rather big change.
Maybe the proposers of the 10K vertebrate genome project feel like they're small potatoes compared to the already presumptuous 100,000 human genomes project. And you know very well that insect and plant people will see the precedent, and we'll have the 10,000 Weed and Crop project, the 10,000,000 insect genome project (barely scratching the surface), and sea urchins will be next.
After all, if what you have to do to be a red-blooded American is simply propose more and more and more (and bigger), then where's the limit? (It's not the sky, as NASA's proposals clearly show)
So, we suggest a stunning precedent that would recognize some societal responsibility. Instead of more-more-more, how about those proposing the 10K vertebrate genome project saying, in a civic-minded way, that this is clearly scientific back-burner material of no urgency, and should wait for the $99.99 genome, and for automatic annotation and data-maintenance systems that don't require endless additional funding to handle the additional data?
What? Wait a whole decade before we can see the South Assetia Leaping Mudwort's sequence? Yes. Because sequencing costs will surely come down. If by then we really still need to do our DNA-collecting, we'll be able to. Or maybe the novelty will have worn off, and we'll have other things on our minds.
But putting things in perspective, we should admit that our favorites are those big ones with the huge curved jaws and iridescent green carapace. Now they were really worth the effort!
Monday, November 16, 2009
Goats that could do with some friendlying
Here's a follow-up on our post last week about crying babies and quacking ducks. No experiments this time, just real life. We've written before about Jennifer and Melvin, dairy goat farmers who run Polymeadows Farm in Vermont. Jennifer is my sister. Ken and I go up to the farm as often as we can, which isn't nearly often enough, and try to be as helpful as we can be, which sometimes isn't as helpful as we'd like, given how much needs to be done, and how much we don't know how to do. But, it's a beautiful place, and we always love going up.
Until this year, Jen and Melvin were milking almost 200 goats. Then the economy crashed, and their buyer stopped buying their milk, so they sold some goats and built a dairy plant (which was heart-wrenching, and astounding, in that order), and now are making and selling the best goat milk, yogurt, feta and chèvre in all of New England (and Albany and soon, New York City). And we're not the only ones who say so!
But, back when they were milking 200 goats, Jennifer was incredibly busy feeding all the kids that the does were producing twice a year. Each kidding season lasted 2-3 months, and each doe had 1, 2, 3 or 4 kids. That made for a lot of babies to be fed, even accounting for the fact that they only keep the girls. (Because, it turns out, boys aren't of much use on a dairy farm.)
So, if 200 goats just have twins, that's 400 kids, that all needed to be fed, at least until someone came up and took the non-keepers off their hands. That's bottle-fed. So Jen was very happy when she bought some nipple buckets. She just had to train the babies to use them. She did this for a while, but it quickly became her least favorite job on the farm (some kids just can't figure it out, and sometimes it seems that the purpose of a baby goat's life -- after hugs and kisses -- is to deprive another baby goat of her chance at the milk, which makes training them even less fun). But, for a couple of years this was how she fed the babies.
To Jen and Melvin, being people-friendly is right up there on the list of the most important qualities a milk goat can have--almost as important as giving lots of milk--and Jennifer and Melvin were very happy to have built up a herd that always included friendly bucks (not a given in a herd of goats!) and girls who wouldn't leave the milking parlor without a hug and a kiss (from a human, not the buck). These goats were hand-raised with love.
However, Jen began to notice that her nipple-bucket babies weren't as friendly as the bottle-fed ones. Now, less friendly goats aren't mean or aggressive, they mostly just don't like their hugs and kisses as much as other goats do. But this wasn't good, and in spite of the fact that it's more work to feed each one by hand, she's now back to bottle-feeding all her kids. They've got fewer goats now than they used to, and thus a lot fewer kids, but she's still bottle-feeding a bunch of kids.
Jennifer tells me that bottle-feeding doesn't automatically make the wary kid of an unfriendly mother friendly herself. She'll come around, but it takes longer, more cuddling.
Tweedle Dum and Tweedle Dee are sisters who were joined at the hip at birth (right, not literally, but they were inseparable in the barn, the yard, and so on). They are so inseparable that when one of them turns her head, the other one does, too, they go everywhere together, and so on. (Alice hangs with the Tweedles too, but she isn't quite as co-dependent, or into the choreography of life). Tweedle Dum had her first kids before Tweedle Dee, but Jennifer brought them both up to the big barn, where the milk goats live and, by rights, before Tweedle Dee's time, so they wouldn't have to spend even a night apart.
Tweedle Dee just had twin girls -- I asked Jen if they are as inseparable as the Tweedles, and she said yep, so far they are. And, their mother isn't all that people-friendly -- perhaps because she was raised on a nipple bucket, and spent inordinate amounts of time with her sister -- and the kids are also more wary of people. Their grandmother, Dumbo, was wary, too. The kids will eventually warm up, but they'll take more friendlying than kids who aren't so wary at birth, Jen says.
Is any of this a surprise? No, but it's interesting to see it playing out on the farm, where a specific behavior doesn't look to be genetic per se, but the ability to learn how to respond to the environment might be. This is a point Ken and I frequently make -- it's adaptability that evolved, more so than exquisite adaptation.
Equally interesting, to a geneticist, was the time Jennifer called to say that a goat had just given birth to a baby...and a leg. Just a leg. Or, when she called to tell us about the baby with hairy eyeballs--his wattles, those long things that dangle from each side of a goat's neck, had gotten seriously misplaced. He could see just fine, but it was weird. (I have a picture, but do you really want to see it?) He was adopted by a little boy who loved him dearly.
Geneticists spend so much time in the lab, with inbred flies, mice, zebrafish, arabidopsis plants, and so on that it's good to be reminded what it all looks like on the ground.
Friday, November 13, 2009
The Amazing Story of Holly Dunsworth and her Osteopathic Doctor
Recently I read a fascinating article on The Huffington Post called, “The Amazing Story of Charles Darwin and his Homeopathic Doctor.”
Quoted from the article:
I hope that you read either of these articles and then come back.
Welcome back!
Quoted from the article:
"We may all have to thank the water cure and homeopathic treatment provided by Dr. Gully for Darwin’s survival."and
"Lucky for all of humanity, Charles Darwin sought out a different type of medical care and experienced a profound improvement in his health."This article is a summary of a more detailed article that was published in a medical journal published by Oxford University Press, called eCAM (which stands for “Evidence Based Complementary and Alternative Medicine"). Click here to see the article in its entirety.
I hope that you read either of these articles and then come back.
Welcome back!
Now, I don’t know a whole lot about homeopathy. But I do know a little about science and I also know a little about some of the stuff James Randi has done to debunk homeopathy. So…Psssst….let me let you in on a little secret:
Homeopathy is cow manure.
(Masculinize “cow” and lewdify “manure” to decode my intended sentiment.)
The author of the article is using Darwin’s fame and notoriety to lend legitimacy to homeopathy.
Everyone wants a piece of Darwin, especially this year.
The questions that haunt me after reading the article are probably the same that you have:
• In Darwin’s day, how much more effective was medicine than homeopathy? How scientific was medicine?
• Was Darwin’s use of homeopathy a rejection of science in favor of alternative treatments, or did he simply choose one pretty hopeless option over another?
• Was Darwin's plight so different from that of many people today?
I’m sure that there are historians of science and medicine who have answers for these questions (and please see the thoughtful comments in the Comments section below). But I do know the answer to the last one and it’s NO.
For example, people with eczema can suffer terribly, and there's no certain or easy cure. This is why people are still drawn to alternative cures, either right off the bat, as soon as symptoms appear, or after real science and medicine have failed to help them.
Desperation leads us all to search for solutions to relieve our pain and suffering. However, homeopathy is absolutely stark-raving ridiculous.
If you want to see the kind of discussions people who seek homeopathic cures are having, take a look at this mind-blowing back-and-forth about a little girl with eczema. Make sure to read what "passkey" writes.
It’s actually not that surprising that so many people are drawn to homeopathy, because finding real medicine isn’t always as easy as it sounds.
Recently I learned this lesson for myself.
I’m new to Chicago, so every time I need a doctor I have to find one from scratch. I go to my insurance company’s website and search for doctors in my neighborhood who, judging by their listed areas of expertise, do what I think I need them to do.
I had just come back from doing fieldwork in Kenya with a large bulge on the back of my knee. As far as I could tell it was either a cyst (no big deal) or it was a swollen lymph node reacting to an infected cut on my knee. Infections that begin in Africa can be nasty so I wanted to see a doctor to make sure I didn’t need antibiotics to kill Lake Victoria parasites that may have entered my bloody soccer-wounded knee. (Thank you Google for making me crazy, but not crazy enough to call a homeopath.)
Anyway, I needed to see a doctor to ease my mind. I felt fine. But I’d feel stupid if I didn’t see a doctor and it turned out that I had microbes or worms or some other awesome parasite. I’d never had a bulge on the back of my knee and coming home from Western Kenya with a bulge where there are lymph nodes was too much of a coincidence to ignore.
(Yes, secretly I wanted to have a parasite. It’s complicated.)
I found a General Practitioner with offices in the same building as my gynecologist. I booked an appointment, explaining my knee, and it all seemed as normal as every appointment I’d ever made with a doctor.
But when I arrived at the office I could tell something was different. There were quotes on the walls about the soul and the spirit (not Jesus, God, or the Holy Spirit) and there was a dream catcher hanging in one of the corners, but what tipped me off that I wasn’t exactly in Medical Kansas anymore was the tiny TV pointed at the waiting room chairs.
The volume was so high that I couldn’t concentrate on filling out the paperwork. I asked the receptionist to please turn it off, but she only turned it down. The speaker on the TV was weaving a story about a patient of his. This man had experienced decades of mysterious illnesses and after seeing a plethora of doctors who could not diagnose, relieve, or cure anything, he was cured when he came to see the speaker, an osteopathic doctor. Now the man can walk again. After this story, the speaker/doctor stood over a different patient, felt around his liver, and diagnosed his immune disease.
At this point I was sure I did not want to be there and was pretty angry that I got lured into this scene from my medical insurance’s website. So I stood up and asked the receptionist about the last question on the forms, hoping to get out of this appointment on technical grounds rather than potentially embarrassing merit ones. “I thought this was covered by my insurance. Why does it say that it may not be? I can’t be here if it’s not covered.”
She didn’t know the answer so she pulled the doctor out of her appointment with a double amputee who I’d seen go back there earlier. The doctor, a friendly hippie, assured me that she wouldn’t do anything that wouldn’t be covered by my insurance.
At this point, since she’d seen me, I couldn’t run away. So I signed the waiver and waited. Just before I worked up the courage to slip out the door, they called my name and I was led to the cluttered examination room.
The doc made small talk. She seemed nervous like a rookie would be, not like the 55+ woman that she was. She looked me over head to toe and then complimented my eyes. She weighed me, got my height, then asked me what was wrong. I told her about my knee. Usually when I say, “...and since I just got back from Africa…” the doc’s eyebrows arch and s/he gets excited or worried or both. But she didn’t flinch a bit and Africa was never spoken of again. So I’m thinking, okay, I’m a hypochondriac – it must be a common cyst. I guess I was still thinking she’d do real medicine, since prestigious, accredited universities all over this country grant degrees in osteopathic medicine. I thought she must be okay.
I lay face-down on the bed so she could feel the back of my knee where it was swollen. She asked me, “Have you ever had your teeth braced?” Although I thought the wording was odd, I replied, “Yes,” and chalked it up to professional jargon. She continued, “Because that means that you have some inherent asymmetries,” and apparently asymmetries in the teeth indicate asymmetries throughout the body, and these cause problems. She then linked this to my knee problem. No mention of a cyst, even as she felt the bulge.
I understand how asymmetries can cause biomechanical issues, etc… but the guy on the video in the waiting room linked asymmetries to cancer. I was very uncomfortable by now, but all of this was too weird and wild to miss. This was the single most bizarre medical experience I’d ever had so I went gleefully along for the ride and pretended to be a cultural anthropologist (but a bad one who didn’t practice disclosure) or an undercover journalist. But I didn't have to pretend that I was anything because I was an actual Science Spy.
She asked if I’d ever had any operations or surgeries in my life. Obviously this has nothing to do with diagnosing and treating a cyst. I told her about my tonsillectomy. I added, “And knee surgery,” but on the other knee, as I pointed out, and not on the knee in question. This, she loved. See? My asymmetries were playing out all over the place. Her story was coming together beautifully. Her case for giving me her special treatment was building, evidence was mounting.
“Well,” she asked, “do you mind if I check something? Can you lay down face-up please?” She twisted my feet so that my legs twisted all the way from my hips. The right one with the cyst didn't twist as far as the left one. Hmmm. Something’s up. This little gumshoe medical mystery was getting exciting. So I fed the bear some more.
When she asked me if I’d had any back troubles, I told her all about the slipped discs in my neck. She ate it right up. This was exactly what she wanted to hear. The picture was getting clearer and clearer. It was all making sense now.
“Do you mind?” As I was still lying down, she held my head in her hands and twisted my head to the right and to the left. She noticed that she couldn’t twist it as far to the right side. There was that darn asymmetry again.
“Okay," she said, "can I do something else?” This is when she put her right hand over my right hemisphere and her left over my left and then she pulsed her hands in concert with the energy pulses emanating from my brain.
The problem was... my brain's hemispheres weren't pulsing in sync.
So naturally the next thing you’d want to do in this situation is rub my hip with a vibrator. And that’s what she did. Both hips. For about 10 minutes on each side while we chit-chatted about all kinds of fun stuff. She was a really nice woman. It felt marvelous.
Then she re-checked my legs. Both twisted just fine now. She re-checked my brain waves. Both sides pulsed in sync now. She re-measured my height. I was taller now!
No mention of what could cause the bulge in my knee or how to deal with it. Just, “It was great to meet you and let’s do a follow-up soon to see how that knee is doing. Maybe do another treatment in a couple days. Let’s put you down on the calendar.”
We made a follow-up appointment. I knew it was all a lie and that I was going to cancel it, but after she’d used a vibrator on me, I felt like I had to make a second date.
I called back that afternoon and told the receptionist that, aw shucks, I actually couldn’t make that appointment. The doctor called me each day for the next three days; in her messages she asked how my knee was doing and sounded hurt and concerned that I’d canceled our follow-up. Her last message was pretty prickly.
As Google informed me, swollen lymph nodes take about 12 days to shrink back to their normal size, and by day 12 the bulge on the back of my right knee – whether it was a cyst or a swollen lymph node – was gone, and the infected cut on my knee had healed as well. That was just 4 days after my vibration treatment.
Osteopathy worked!
Thursday, November 12, 2009
Hunting of the mobile snark
By
Ken Weiss
Lewis Carroll's The Hunting of the Snark is a poem about a motley crew that sets out to pursue a rare and indeed fanciful beast, the snark. Have we come to the right place?
"Just the place for a Snark! I have said it twice:
That alone should encourage the crew.
Just the place for a Snark! I have said it thrice:
What I tell you three times is true."
Sometimes this fits our predispositions but often the story is more complex. Often we are chasing after truths about rare causal phenomena, where adequate data are tough or impossible to collect and it is sometimes not even clear what we need to collect.
Do mammograms cause more cancer than they detect, or treatment for tumors that would have regressed on their own? What about dental x-rays or occupational exposures such as, for example, work in a molecular genetics lab that uses radioisotopes for labeling reactions?
Or, to take the story on the CNN website on Tuesday that triggered this post, does mobile-phone usage cause brain cancer? The concern has been around since some early studies about a decade ago, but most subsequent studies have not confirmed the effect. The idea is that a cell phone emits weak electromagnetic radiation that, when held close to the head, can cause DNA damage that leads to cancer. Yet the energy level would seem to be too small to do that, based on what's known about DNA. And some studies (including the one reported this week) claim an additional, persuasive kind of evidence: the side of the head the user prefers to listen with is the side that preferentially develops a tumor.
The CNN story reports that a decade-long World Health Organization study of cellphone risk is due out by the end of the year, and that it will claim "significantly increased risk" of some brain tumors. A meta-analysis of 23 different studies--that is, one that unified the separate studies into a single overall analysis--published in the Journal of Clinical Oncology in December 2008 also found a slightly increased risk, in the eight studies the authors deemed likely to be the most accurate (that is, those that were double-blind and not funded by mobile phone companies; this is actually true--the studies funded by phone companies in fact found that mobile phone use was slightly protective!).
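For readers curious about the mechanics, "unifying the separate studies into a single overall analysis" typically means inverse-variance pooling: each study's log odds ratio is weighted by the inverse of its variance, so the largest and most precise studies dominate. Here is a minimal sketch in Python; the odds ratios and confidence intervals in it are invented placeholders, not the estimates from the JCO paper:

    import math

    # Minimal fixed-effect (inverse-variance) meta-analysis sketch.
    # The (OR, CI low, CI high) triples below are invented placeholders,
    # NOT the actual estimates from the 2008 JCO meta-analysis.
    studies = [
        (1.10, 0.90, 1.35),
        (1.25, 0.95, 1.64),
        (1.05, 0.85, 1.30),
    ]

    log_ors, weights = [], []
    for or_, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
        log_ors.append(math.log(or_))
        weights.append(1.0 / se ** 2)  # inverse-variance weight

    pooled = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    print(f"pooled OR = {math.exp(pooled):.2f} "
          f"(95% CI {math.exp(pooled - 1.96 * pooled_se):.2f}"
          f"-{math.exp(pooled + 1.96 * pooled_se):.2f})")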
But even people who defend the idea that cellphones are risky don't deny that there are problems with each of these studies. They are all retrospective, for one thing, which increases the possibility of faulty memory of phone usage or of head-side preference, and many were too short-term--e.g., six months--to adequately measure risk, since brain tumors generally develop over years, not months. So even the data from the best of these studies are not conclusive, partly because of the way the studies were designed and partly because, if there is a risk from cellphone use, it's small and going to be very difficult to tease out, even with the best of studies.
Other studies of the risks of low-dose radiation have been notoriously plagued by similar issues--yet they are important for setting radiation exposure limits for workers, medical patients, and others.
If cellphone risk is real, this is a genetic story, in that the tumors would be due to some exposed cell acquiring, by bad luck, the wrong set of mutations that led it to stop obeying 'don't grow' signals. The changes are inherited from the first affected brain cell by its descendants in the brain, but are not transmitted in the germ line. Of course--and here would be yet another challenge to detection--some individuals might be susceptible because their brain cells have inherited some set of mutations that already leads them part-way down the path to cancer. (These aspects of change and inheritance are major subjects in our book.)
However, the reason the evidence for mobile-phone effects seems implausible to some may (again, if the effects are real) be the assumption that the cause is DNA damage. Suppose it is some other induced change in cell behavior; then perhaps the arguments about energy levels are beside the point. One might guess at what those other types of change could be... but we are not the ones to do such guessing, since that would be far outside anything we know about.
Still, questions such as these show how hard it is to make sense of data when the snarks we're trying to catch are this elusive.
If the exposure is, from our normal point of view, very very small and hard to estimate, and the outcome very rare, how can we tell? Even a single cancer can be a tragedy for that individual. But do we outlaw mobiles for that?
Like the assertion in the poem, we see the same claim made repeatedly in the media, reporting this study or that. We also see the same denial of the assertion made repeatedly in the media and in some other study. Is repetition a good basis for holding either opinion about such causal claims?
Clearly repetition without evidence is not in itself any kind of criterion. Anyone can say anything (as the Bellman, the crew's leader, did in regard to the snark). For well-known statistical reasons, even a large study can fail to detect a small effect, with no design problems or fault on the part of the scientist. Flip a coin twice: though the true probability of heads is 1/2, half the time you'll get two heads or two tails (each matching pair has probability 1/4, and 1/4 + 1/4 = 1/2).
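To make that coin-flip intuition concrete, here is a minimal simulation sketch in Python. Every number in it is invented for illustration, not drawn from any cellphone study; it simply shows that when the baseline risk is low and the relative risk is modest, even a study with thousands of subjects per group will often fail to show any excess at all among the exposed:

    import random

    def simulated_cases(n, p):
        """Number of cases among n people who each have risk p."""
        return sum(random.random() < p for _ in range(n))

    def fraction_showing_excess(n, baseline=0.01, relative_risk=1.2, trials=500):
        """Fraction of simulated studies in which the exposed group has more
        cases than the unexposed group -- a far more lenient bar than any
        significance test."""
        hits = 0
        for _ in range(trials):
            exposed = simulated_cases(n, baseline * relative_risk)
            unexposed = simulated_cases(n, baseline)
            if exposed > unexposed:
                hits += 1
        return hits / trials

    for n in (500, 5000):
        print(f"{n} per group: exposed exceeds unexposed in "
              f"{fraction_showing_excess(n):.0%} of simulated studies")

A real study applies a significance test, a much higher bar than merely having more cases among the exposed, so the miss rate is higher still.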
Many important problems in biology are like this, and because of the seriousness of the outcome, radiation exposures are among them. But we're besieged with claims about this or that risk factor, where the risks are thought to be substantially greater than with cellphones, and yet we can't seem to get a definitive answer even then.
Evolution works slowly, or so it seems--so slowly that mutation is very rare, selective differences are very small, and life's conditions are highly irregular. As with other small causal factors and rare or very slow effects, it is difficult to know which evidence counts the most. That presents a sobering challenge to science, and leaves lots of room for strong, if often more emotional than factually solid, debate.
After all, it turned out that the snark was a boojum, you see!
-Ken and Anne
Wednesday, November 11, 2009
Bawling like a baby....but which baby?
A study published last week in Current Biology reports that newborns already speak their mother's language (Birgit Mampe et al., "Newborns’ Cry Melody Is Shaped by Their Native Language," Current Biology 19, 1–4, December 15, 2009). Or rather, the tonality of their cry tends to mimic the intonation of their native language (you can listen to the differences here). The researchers analyzed the cries of 30 French and 30 German newborns and found that the cries of the French babies tended to rise, while those of the German babies fell, both in line with the intonation patterns of the language spoken by the infants' mothers. This was a repeatable observation, according to the authors, who attributed it to what the babies heard around them during the last trimester of gestation (amazing! They didn't say there was a gene for the trait!).
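As a rough illustration of what a 'rising' versus 'falling' cry melody means quantitatively, here is a tiny Python sketch, emphatically not the authors' actual method, that classifies a contour by the sign of a least-squares slope fit to an invented fundamental-frequency (pitch) track:

    def contour_slope(f0_hz):
        """Least-squares slope of pitch versus time index (Hz per frame)."""
        n = len(f0_hz)
        t_mean = (n - 1) / 2
        f_mean = sum(f0_hz) / n
        num = sum((t - t_mean) * (f - f_mean) for t, f in enumerate(f0_hz))
        den = sum((t - t_mean) ** 2 for t in range(n))
        return num / den

    # Invented pitch tracks, in Hz, sampled at regular intervals:
    french_like = [380, 390, 405, 420, 440, 455]  # drifting upward
    german_like = [455, 445, 430, 410, 395, 380]  # drifting downward

    for name, track in [("French-like", french_like), ("German-like", german_like)]:
        slope = contour_slope(track)
        print(f"{name}: {slope:+.1f} Hz/frame -> {'rising' if slope > 0 else 'falling'}")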
This isn't really a surprise, as the same sorts of results were found by psychologist Gilbert Gottlieb when, in the 1960's and 70's, he experimented with embryonic ducks to determine whether responding to their mother's vocalizations was learned or innate. He devocalized embryos so they couldn't make their own sounds in the egg, which they do naturally, and raised the eggs in isolation so they weren't hearing their own or their mother's vocalizations. He found that they didn't respond readily to their mother at hatching, unlike chicks that develop normally.
In fact, Gottlieb was a central figure in the growth of developmental systems theory, which holds that genes are generally given far more credit for development than they deserve, and that evolutionary theory should be less centered on genes. Evidence for the effects of the environment on gestational development in general, and on language learning specifically, is nothing new, and the fact that the current story has made such a splash suggests another case of ignoring what's already known (a state of affairs that we discussed here).
We tend these days to attribute everything to genes and hard-wiring, because that fits our current culture's deterministic predisposition. There is a tension between human exceptionalizing--making us different from other species--and the juicy, almost prurient need many geneticists seem to feel to pry into our genomes and discover what really makes us what we are. In a way, that's different from giving us free will, which many in our society would like to do.
Of course, we know very well that the uterine environment can affect us in many ways. Even late-onset disease risks reflect pre-natal effects. But we have tended to attribute behavioral traits to either post-natal learning--or to genes. The baby crying study reminds us of something we already know--gestational learning can make behaviors look like instinct.
Tuesday, November 10, 2009
Ig-nor'-ance, or Hire the Fired Coach!
By
Ken Weiss
Dismal Science
As readers of this blog will know, we regularly listen to many of the excellent radio programs on the BBC. There was recently an episode of a Radio 4 program called Analysis about the state of professional economics in light of the current state of the economies.
The point was to ask why the self-characterized "dismal science" of economics so dismally lived down to its reputation in failing to predict the current awful state of things. It would take a rather bold economist not to be at least a bit apologetic, and some actually are that unrepentant, justifying models as at least putting constraints on our understanding of the world, and as adjustable so as to be better next time (though without evidence that they can be better).
But that is pretty lame, to say the least, and some economists are openly nearly honest, saying that macroeconomic models (about how things work on the large scale, rather than at your local 7-Eleven) are essentially worthless. We say 'nearly' honest, because perhaps the most honest thing to say is that economics departments ought to be shut down and the resources diverted to something useful (one could say similar things about much of what happens in 'research' universities these days).
In addition to economic theory being essentially useless in making predictions, and hence guiding actions, theories tend to come around episodically. That's why there is a lot of Ode to Keynes these days. But when ideas cycle like this, it's either because we are ignorant (but then why are we so assertive about our wisdom and theories?), or we are ig-nor'-ant: we ignore the past and its lessons.
Robert Proctor, an historian at Stanford, and, sadly, late of Penn State, calls culturally-induced ignorance agnotology, although he is particularly interested in the perpetuation of knowingly inaccurate or false information for some purpose. Here, we're less interested in conspiracy theory than in why false or misleading theories are repeatedly perpetuated, in spite of their often well-recognized limitations.
Ig-nor'-ance and its consequences (or not)
Really, those who perpetrated the false theories that led to policy that led to disaster should be shunned, ostracized, and ignored. Instead, the head perps, like Alan Greenspan, will still be published, still get job offers at prestigious think tanks or Ivy League universities (at high salaries and teaching very little), will still get grants, and will get even larger fees for giving rubber-chicken dinner talks.
This is like the well-known phenomenon in sports whereby a head coach whose team regularly loses, and who is fired for it, is quickly hired to coach another team rather than left to go sell health insurance. Of course sports doesn't do much damage, except to the bones of football players, but economists have done huge damage to people (though generally not, surprise-surprise!, to themselves) who lose jobs, houses, and normal lives as a result of culpably, knowingly false theories.
Ignor'ance makes ignorance a matter of policy
Ignor'ance is quite common and we can think of several anthropological explanations for it. You can't make a career in an institutionalized society like ours by echoing your masters (that was what academics did for centuries--being experts on the Bible, Aristotle, and so on--until the research focus of universities began to bloom). You have to make a name for yourself, to find something new, or to show that you're smarter than your forbears were. And because we need to earn a living, we can't agree to quit or shut down our paper-publishing factories just because we don't know as much as we need to think we do (to pamper our egos) or claim (to get grants and have secure jobs).
Ignor'ance makes ignorance a matter of policy, if students or junior faculty do not know or respect the history of their fields. It becomes a willful way to, in a sense, keep getting credit for old ideas dressed in new technical clothes. This is the case in economics, where the new clothes are the nearly instantaneous trading speeds, and vastly speedier computers that allow models to be ever more mathematical and automated, but without tethering them to the real world.
Ignor'ance and biology
There are parallels in all this to the subject matter of this blog. One can go back to Darwin and earlier to see many of our current ideas in genetics or biology stated, sometimes only in rudiment, but sometimes clearly. The fact that we know much more now than they did then ameliorates this a bit, but not entirely. We engage in ignor'ance as well as ignorance in many ways when we persist in dogmatic views of evolution or genetic determinism. Often a theory drifts out of sight, and then is ignor'antly rediscovered and touted as if there was new data that made it more plausible. Sometimes there is, indeed, and that's progress!
But often the data are different but don't change things in any serious way. Analysis of human variation (and the concept of 'race', whatever euphemism is used for it), aspects of arguments about natural selection and speciation, and so on are examples. Genetics tells us nothing today that can justify human geneticists (with or without any proper understanding of anthropology) making pronouncements about human races--but they do, often based on new and exotic molecular or statistical data. But these statements are virtually identical to what was said 100 years ago, and 50 years before that in Darwin's time, and in earlier iterations. Other biological parallels include the nature/nurture gene/environment pendulum, views about the nature of speciation, the importance of selection vs mutation in adaptive evolution, and many more.
We lose touch with the valuable things in our past when professionalism in its varied manifestations leads us to engage in ignor'ance. This happens for various reasons, not least being our love affair with technology that leads students to disregard things from the past, and the lack of time or attention given to courses that include the history of our discipline. This isn't to romanticize the past or just grumble about the present. We have to make our careers, of course, but we waste resources and even those valued careers in a sense, if we don't teach, learn, or remember our past--especially when we have every way and reason to know better.
Not learning from non-progress
If we haven't progressed in understanding in 100 years, that is a sign of the weakness of our science and the complexity of the world we're trying to understand. It is a problem only when we don't come clean to the public, to funders, to students, and most of all to ourselves about our limitations. Like well-done negative experimental results, non-progress is a kind of very useful knowledge: it tells us that our current approaches are not cogent or sufficient relative to the problem at hand.
An obvious part of the solution is to slow down the train and make the reward system reward conceptual novelty, not just technical production (of which, of course, there is an amazing and potentially amazingly valuable amount). Instead, our industrialized 'productivity'-driven culture has spread into science. But if the problem is hard, let's work on it humbly, rather than dismissing a new idea and recycling an older one. Let's rethink our conceptual approaches, not just gear up for more intense technological data gathering, computer programming, and the like.
A remarkable statement!
The way we are embedded in our culture is illustrated by another remarkable statement made in the BBC program about economic theory. An economic historian said correctly that the mathematizing of economics made things more structured than the real world is, and ignored the emotional or other aspects of real, local human actions.
He is an advocate for what one might call a more subjective or sociological view of economics. Rather than throw out the formal models, he said professors from the social sciences and economic 'theorists' ought to get together and unite their views. The resulting mix would be less rigidly mathematical, and 'less scientific but closer to the truth.'
This is nearly an exact quote and is a remarkable reflection of the iron cage of culture. Why is this? Because it shows the deep level to which our culture--our worldview itself--has decided that if something is rigidly mathematical (or molecular in the case of biology and genetics) it is 'science', but if it's not like that but is closer to the truth of the real world, it isn't science!
We hate to say it, but this shows the kind of thing that post-modernists, the inveterate opponents of much of our culture's obsession with science and technology, would point out as the deeply cultural matrix of all that is done. Because if something is closer to the truth, then by definition it should be the most 'scientific'. The statement shows, as did others on the program, the way that academicians in essence protect their territory without even realizing that's what they're doing.
Complexity
Our particular concerns are genetics and evolution, but society and other sciences also face many problems with similar characteristics: in today's parlance, they are 'complex'--they involve many contributing factors and many or most may be individually very minor.
That we haven't adequately understood complexity is not a fault of ours, since the problems are tough ones. But it is a fault if we refuse to accept what we already know, and continue to do the same thing again and again (as Einstein reputedly said, insanity is doing the same thing over and over and expecting the results to be different).
We can be excused for our manifest ignorance, but not for our ignor'ance.
-Ken and Anne