Thursday, December 30, 2010
The BBC Radio 4 program More or Less is a show about statistics and how they are used and abused in reporting the news. Among other regular messages, the presenters spend a lot of time explaining that correlation is not causation, which of course is something we like to hear, since we say it a lot on MT, too (e.g., here).
For the 12/17 show, they decided to test science journalists in Britain, to see whether they'd bite on a correlation/causation story they cooked up, or whether they were by now savvy enough not to. The numbers were true, but the mathematician on the show tried to sell the idea that one caused the other, hoping it would warrant a spot on the news.
This guy's story was that there's an extremely strong correlation between the number of mobile phone towers in a given location and the number of births there: each tower is associated with 17.4 births, to be precise. A small village with only 1 tower will have very few births, and a city with a lot of towers will have many more. Well, no one bit. Or rather, one media outlet bit on the story, Radio Wales, which wanted to talk with him about the problem of confusing correlation and causation. Apparently it was pretty obvious.
The mathematician assumed that, at first look, it would appear that the number of towers causes an increase in births. But in fact, of course, both the number of towers and the number of births are consequences of population size. They are confounded by population size, an unmeasured variable that affects both observed variables. And, regular readers know that confounding is another frequent feature of MT.
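A toy simulation makes the point concrete. The numbers below are entirely made up (they are not the More or Less figures): both tower counts and birth counts are generated from population size alone, yet they end up strongly correlated with each other, and the association collapses once population size is controlled for.

```python
# Toy illustration with invented numbers (not the More or Less data):
# tower counts and birth counts are both driven by population size,
# so they correlate strongly even though neither causes the other.
import numpy as np

rng = np.random.default_rng(1)
population = rng.uniform(1_000, 500_000, size=200)   # 200 hypothetical towns

towers = rng.poisson(population / 10_000)            # roughly 1 tower per 10,000 people
births = rng.poisson(population * 0.012)             # roughly 12 births per 1,000 people

print("corr(towers, births):", round(np.corrcoef(towers, births)[0, 1], 3))

# 'Control' for population by correlating the residuals left over after
# regressing each variable on population; the partial correlation collapses.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

print("partial corr given population:",
      round(np.corrcoef(residuals(towers, population),
                        residuals(births, population))[0, 1], 3))
```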
The More or Less presenter is hoping that the fact that this story had basically no takers means that British science journalists are beginning to get the correlation-doesn't-equal-causation message, though as the mathematician pointed out, a recent story about mobile phone use causing bad behavior at school suggests otherwise. And, a glance at the BBC science or health pages is an almost daily confirmation that the problem persists, something we also point out on an as-needed basis.
But that's not really what interested us about this story. What interested us was what happened next, when the presenter asked the mathematician why making causal links was so appealing to humans, given that they are so often false.
The mathematician answered that it's just our instinct, our brains have developed to recognize patterns and respond to them. He said we think of patterns as causal links because 'we survive better that way.' Our ancestors thought that the movement of the stars caused the seasons to change, for example -- and....somehow that allowed them to live longer. Thus, he said, it's hard to overcome our instinct to assign causality.
Translated, what he meant was that we evolved to make sense of patterns by finding causal links between two things. (If true, this certainly isn't unique to humans -- we used to have a dog who was terrified when the wind closed a door. But if a human closed it, that was perfectly fine. She actually did understand causation!)
But, isn't the mathematician making the very same error he cautions against? Because we evolved, and because we can see patterns, one caused the other? This is also something we write a lot about, the idea that because a trait exists, it has an adaptive purpose -- the Just-So approach to anthropology, or genetics. Many things come before many other things, but that doesn't help identify causal principles that connect particular sets of things. And correlations can arise between variables in many different ways, most of them not known to us; usually we're just guessing about what the truth is.
Wednesday, December 29, 2010
Boondoggle-omics, or the end of Enlightenment science?
Mega-omics!
We're in the marketing age, make no mistake. In life science it's the Age of Omni-omics. Instead of innovation, which both capitalism and science are supposed to exemplify, we are in the age of relentless aping. Now, since genetics became genomics with the largesse of the Human Genome Project, we've been awash in 'omics': proteomics, exomics, nutriomics, and the like. The Omicists knew a good thing when they saw it: huge mega-science budgets justified with omic-scale rhetoric. But you ain't seen nothing yet!
Now, according to a story in the NY Times, we have the Human Connectome Project. This is the audacious, or is it bodacious, and certainly grandiose grab for funds that will attempt to visualize and hence computerize the entire wiring system of the brain. Well, of some aspect of some brains, that is, of a set of lab mouse brains. The idea is to use high resolution microscopy to record every brain connection.
This is technophilia beyond anything seen in the literature of mythical love, more than Paris for Helen by far. The work is being done by a consortium, so different mice will be scanned -- and these will be inbred lab mice, with all that goes with their at least partial artificiality. The idea that this, at orders of magnitude greater complexity than genomes, will be of use is doubted even by some of the scientists involved....though of course they tout their megabucks project highly--who wouldn't?!
Eat your heart out, li'l mouse!
One might make satire of the cute coarseness of the scientists who, having opened up a living (but hopefully anesthetized) mouse to perfuse its heart with chemicals that prepare the brain for later sectioning and imaging, occasionally munch on mouse chow as they work. Murine Doritos! Apparently as long as the mouse is knocked out you can do what you want with it (I wonder if anyone argues about whether mice feel pain, as we now are forced to acknowledge that fish do?).
This project is easy to criticize in an era with high unemployment, people being tossed out of their homes, undermining of welfare for those who need it, and in the health field itself.....well, you already know the state of health care in this country. But no matter, this fundamental science will some day, perhaps, maybe help out some well-off patrons who get neurological disease.
On the other hand, it's going to happen, and you're going to pay for it, so could there be something deeper afoot, something with significant implications beyond the welfare of a few university labs?
But what more than Baloney-omics might this mean?
The Enlightenment period that began in Europe in the 18th century, building on international shipping and trade, on various practical inventions, and on the scientific transformations due to people like Galileo and Newton, Descartes and Bacon, and others, ushered in the idea that empiricism rather than Deep Thought was the way to understand the world. Deep Thought had been, in a sense, the modus operandi of science since classical Greek thought had established itself in our Western tradition.
The Enlightenment changed that: to know the world you had to make empirical observations, and some criteria for doing so were established. There were, indeed, natural laws of the physical universe, but they had to be understood not in ideal terms but through the messiness of observational and experimental data. A major criterion for grasping a law of nature was to isolate variables and repeatedly observe them under controlled conditions. Empirical induction of this kind would lead to generalization, but it required highly specific hypotheses to be tested -- what has since come to be called 'the scientific method'. It has been de rigueur for science, including life science, ever since. But is that changing as a result of technology, the industrialization of science, and the Megabucks Megamethod?
If complexity on the scale of things we are now addressing is what our culture's focus has become, then perhaps a switch to this kind of science reflects a recognition that reductionism is not working the way it did for the couple of centuries after its Enlightenment launching. Integrating many factors that can each vary into coherent or 'emergent' wholes may not be something reductionism can deliver, and merely enumerating the factors may not yield a satisfactory understanding. Something more synthetic is needed: something that keeps the reductionistic concept that the world is assembled from fundamental entities--atoms, functional genomic units, neural connections--but recognizes that to understand it we must somehow view it from 'above' the level of those units. This certainly seems to be the case, as many of our posts (rants?) on MT have tried to show. Perhaps the Omics Age is the de facto response, even a kind of conceptual shift that will profoundly change the nature of the human approach to knowledge.
The Connectome project has, naturally, a flashy web site and is named 'human' presumably because that is how you hype it, make it seem like irresistible Disney entertainment, and get NIH to pay for it. But the vague ultimate goal and the necessity for making it a mega-Project may be yet another canary in the mine, an indicator that, informally and even sometimes formally, we are walking away from the scientific method, away from specific hypotheses, to a different kind of agnostic methodology: we acknowledge that we don't know what's going on but, because we can now aim to study everything-at-once, the preferred approach is to let the truth--whatever form it takes, and whether or not we can call it 'laws'--emerge on its own.
If that's what's happening, it will be a profound change in the culture of human knowledge, one that has crept subtly into Western thought.
Tuesday, December 28, 2010
Science for sale: where we are and how we got here
Well, our flight to France was cancelled -- twice, and then finally rescheduled for so far into our trip that it didn't make sense to go. So, you're stuck with us now for the duration -- but that's ok (sort of); science marches on, and we'll keep marching with it.
For a discussion of the origins of our era of entrepreneurial science, in the context of the industrial revolution, listen to the first of a 2-part series of the wonderful BBC Radio program In Our Time. The second part will be aired the last week of December. In Our Time, every Thursday except in summer, is a pleasure and an educational wonder of our intellectually threadbare media age--listening regularly is like getting a college degree without having to pay any tuition! The discussion is usually calm and congenial, but the first installment on the Industrial Revolution in Britain got pretty steamed up....and not just about the role of the steam engine, or its inventors, but about contesting views of the nature, and proper course, of society that we still see today--in society at large, and in science as well. The discussion is well worth listening to.
The industrial revolution, which mainly occurred in Britain, grew out of the Enlightenment period and its overthrow of medieval and Classical concepts of a static universe that could be understood by thought and deductive reasoning. Led by the giants Galileo, Newton, Descartes, and many others famous and otherwise, this period ushered in an era of empiricism, an era in which we still live. The Enlightenment embodied a largely Continental view that the new kinds of knowledge--which have since morphed into institutionalized 'science'--could relieve society's problems of suffering and inequity through a better, more systematic understanding of the real, rather than the ideal, world.
Francis Bacon is credited with introducing the scientific method's reliance on induction--repeated, controlled observation. In an In Our Time installment in 2009, Bacon's reasoning was discussed: he felt that science could be put into the service of the nation, to exploit the colonies and gain international political and economic dominance. We're living that legacy still, as many scientists argue -- believe -- that knowledge can only be called 'science' if it can lead us to manipulate the world.
Part of the debate is one that threads through the 19th century and exists still today: did major advances come because of the stream of culture, or because of the genius of A Few Good Men? Associated with that is the contrast between the view that cultural, including scientific, advances belong to and are enabled by society, and the view that individual self-interest is their source and should receive the rewards of technological advance. The industrial revolution led to great progress in many different areas of life, but also to great misery in many different lives. In turn, that spawned the contest between capitalistic and socialistic thinking. In its stream were Marx's view of history as a struggle between material interests, Darwin's of history as a struggle between individuals, and many more.
In the US, figures like Bell and Edison led the way in commercializing and publicly hyping science, and in setting up research laboratories aimed at industrial commercialization of ideas. In our age, the comparable questions include whether genetic research should be publicly funded, and if so, should resulting royalties as well as new products go back to the public? Should genes be patentable? Who owns human embryos? If technicians, students, or other workers make the biotech inventor's work possible, why are they paid so little relative to him or her? Should research funds be put into areas that will yield commercial products at some vague future time, or should the funds--which come from taxes--be used to improve nutrition and vaccination of people here and now? Should NSF and NIH be pressured to make quick commercialization a criterion for the science they fund?
To what extent should science be for sale? How much is owed to scientific discoverers? Indeed, how much credit should discoverers actually be given individually, rather than being viewed as corks floating on the ideas of their time? Should science be supported on the basis of its commercial potential?
The product of specific inventors, or the specific products of the times?
The industrial revolution involved many inventors, who improved technologies including looms, shipping, iron, steam, rail, and other aspects of the mechanization and industrialization of life. Step by step, innovators invented, tinkered with, and learned how to apply all sorts of new or improved techniques, machinery, and manufacturing technologies. The explosive growth of machinery-based industry that resulted transformed rural populations into urban proletarians, who depended for their survival on the products of industry rather than their own plots of land. Government made Britain's industrial advance possible through tax policy, the Royal Navy, the captive market of the Empire, import restrictions, banking laws, and in other ways. These policies nurtured, stimulated, and enabled the individual incentives of countless major and minor tinkering inventors (the equivalent of today's biotechnology innovators) to develop their ideas and market them intellectually and commercially, to make their livings (and to dream of riches). But how much credit actually belongs to the inventors and how much is owed to the workers who implemented the inventions?
The debate over whether history is a cultural stream or whether it's transformed periodically by Great Men is a serious debate. For most ideas credited to The Great Genius, others can be found who at the same time or earlier had similar ideas for similarly good reasons. Darwin had his Wells, Wallace, Matthew, Adams, Grandfather Erasmus Darwin, and others. Newton his Leibniz. Einstein his Poincaré. If you're in science and have an original idea, you can be sure that if you hunt around in the literature, you'll find others expressing the same insight. It's a humbling experience many of us have had. Without Watson and Crick, when would the structure of DNA have been discovered--eventually, never, or right away by Linus Pauling or Rosalind Franklin?
The cultural-stream vs Great Man theories of history have been interesting questions in anthropology for a long time. The debate is about how culture works as a phenomenon, about how science works as a way of knowing the world, and about how moral decisions are made about social equity. Maybe it's something appropriate to think about at this holiday time of year.
And if you want to know more, and didn't get a good book for Christmas, nestle down by a nice warm fire, with a brandy, and open a little story called War and Peace. It asks how important Napoleon was to Napoleonic history.
Friday, December 24, 2010
Slinging it further -- the arsenic bacteria saga continues
Well, we're not gone traveling yet -- and the arsenic bacteria saga continues. Naturally enough, for a paper whose publication was preceded by such hype -- and followed by such immediate, justified skepticism. Today's Science has an interview with Felisa Wolfe-Simon, the principal author of the paper that made such a splash a few weeks ago; the skepticism had many grounds, including that the experiment was inadequately controlled. In brief, Wolfe-Simon is exhausted, and still hoping to collaborate with others who can confirm, or not, the findings she reported. But being a true-blooded product of our culture, naturally she defends her results (graduate schools must train people to say a lot of things, but never "I was wrong!").
We won't go into any detail about the arguments here, because that's already being done much better elsewhere than we could do it, but we did want to include links to some of that give-and-take, since we did post on the subject when the paper first came out. Microbiologist Rosie Redfield's posts on her blog, including the most recent very detailed response here, and reader comments, are of particular interest, as she goes point by point through the original paper, as well as the authors' attempts to answer skepticism, which they do here.
So far, without the additional carefully done experimental results that doubters are hoping to provide and/or analyze, it looks to us as though, at best, the jury is still out on these arsenic bacteria, and at worst, the original experiments were poorly carried out, poorly interpreted, and poorly reviewed.
And, as we said when we originally commented on this work, even if it were solid science, it doesn't say anything about life in space. So NASA's Baloney campaign on its behalf should remain pilloried -- and it was NASA, not Wolfe-Simon, who got the hype engine going on this one. NASA seems to be good at engineering, but it should stay out of the basic biology business. NSF should fund that. 'Astrobiology' is, if anything, a political wing of NASA, and Congress should remove its funding. We know several good scientists who feed at that trough, but they'd still do good work funded in more conventional ways. A truly disinterested panel could go through the current portfolio, transfer what's worthy to NSF, and put the rest to rest.
Science baloney is not new. Legitimate as well as bogus scientists have long known that they can get ahead by dissembling, self-promotion, and puffing up their work in order to get attention, support, publication, or sales. It's always up to the community to purge and self-police. These days, the media (including many journals and even funding agencies) flood us with such puffery. That we are unlikely to eliminate over-claiming should not stop us from resisting it, in the name of good science.
Thursday, December 23, 2010
Only with freedom of speech, a message of appreciation
We here at MT, and our many commenters, enjoy a rare privilege. It's the ability to comment on anything we choose, and to criticize the Established order when it deserves it. Indeed, even if it doesn't deserve it, there is nothing stopping us from saying what we think. We hope and try always to be responsible, but the point is that we don't have to be. And that's important, because it means no Authority decides what's responsible--and stifles its opponents.
Science requires freedom of expression to be properly self-monitored, to help direct resources in the most appropriate directions when there is an excess of claimants for support, and to help guide the use of science in public policy.
This is a holiday season, in which many of us will be on breaks or traveling in various ways -- we in fact will be away for a few weeks, and posting only occasionally. Holly will be posting some, too. It's a time of cheer, but also perhaps a time to reflect: Where does the inch by inch augmentation of protection become constraint, in a free society? Where do other kinds of controls also enable an easing of restraint on opinion? And anyway, why not a little holiday fun?!
So, in the spirit of both objectives, we provide this link (ear worm alert!).
And wish you all Happy Holidays, Bonnes Fêtes, Bones Festes, Felices Fiestas! كل عام وأنتم بخير
Wednesday, December 22, 2010
A new broom sweeps gene?
Today we write about a story that isn't hot off the press, but was published in Nature a few months ago, and happens to be one of those findings that we think more people should know about. The issue is the genetic signature of selection, something that has become the focus of much anthropological and population genetic research with the advent of whole genome sequencing data.
What the authors did was to follow populations of fruit flies over 600 generations and comb the entire genome of 260 of them for variation after applying intense artificial selection on the measurable, and malleable, traits of accelerated development and early fertility. They bred a population in which development was about 20% faster than in unselected populations. The question was whether a single gene or multiple genes would be responsible for the change.
They compared the genomes of the selected and control populations with the reference fruit fly genome, and found hundreds of thousands of SNPs, or single nucleotide polymorphisms, at which the populations differed. Of these, they found tens of thousands of amino acid-changing SNPs, about 200 segregating stop codons and 118 segregating splice variants -- that is, variants that could be responsible for the phenotypic changes they had selected for. They further narrowed down these candidate loci to 662 SNPs in 506 genes that they considered to be potential candidates "for encoding the causative differences between the ACO and CO populations, to the extent that those differences are due to structural as opposed to regulatory variants."
As the authors describe the kinds of genes involved: "For the biological processes, there is an apparent excess of genes important in development; for example, the top ten categories are imaginal disc development, smoothened signalling pathway, larval development, wing disc development, larval development (sensu Amphibia), metamorphosis, organ morphogenesis, imaginal disc morphogenesis, organ development and regionalization. This is not an unexpected result, given the ACO [accelerated development population] selection treatment for short development time, but it indicates an important role for amino-acid polymorphisms in short-term phenotypic evolution."

Actually, the idea that adaptive change was brought about by gene-impeding mutations (premature stop codons and splice variants, for example) is interesting. It means that adaptive change under selection doesn't just improve function; it may also destroy function--to pave the way to the change, one might surmise.

They went on to do a 'sliding window' comparison of regions of the genome that diverged significantly between the selected and control populations, and identified 'a large number': "...it is apparent that allele frequencies in a large portion of the genome have been affected following selection on development time, suggesting a highly multigenic adaptive response."

The authors interpreted this result in terms of the 'hard' and 'soft' sweep ideas that are often used to describe reduced variation around a selected site (a 'hard' sweep being when a single new mutation quickly rises to fixation in a population, a 'soft' sweep being when selection raises the frequencies of several variants at once, often from standing variation, so that no single allele is fixed). They suggest two explanations for their 'failure to observe the signature of a classic sweep in these populations, despite strong selection': not enough time for the causative alleles to reach fixation, or selection acting on standing variation rather than on new mutations.
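To make the 'sliding window' idea concrete, here is a minimal sketch of such a scan -- our own illustration run on simulated allele frequencies, not the authors' pipeline or their statistics, and with arbitrary window size, step, and reporting threshold.

```python
# Minimal sketch of a sliding-window divergence scan on simulated data.
# Frequencies, window size, and threshold are hypothetical; this is not the
# authors' analysis, just the general idea of scanning for diverged regions.
import numpy as np

rng = np.random.default_rng(0)
n_snps = 20_000
positions = np.sort(rng.uniform(0, 2e7, n_snps))      # SNP positions on one chromosome (bp)

p_control = rng.uniform(0.05, 0.95, n_snps)           # control-population allele frequencies
shift = np.zeros(n_snps)
in_region = (positions > 5e6) & (positions < 9e6)     # pretend selection acted broadly here,
shift[in_region] = rng.normal(0.10, 0.03, in_region.sum())  # nudging many SNPs a little
p_selected = np.clip(p_control + shift + rng.normal(0, 0.02, n_snps), 0, 1)

window, step = 100_000, 50_000                         # 100 kb windows, 50 kb step
hits = []
for start in np.arange(0, positions.max() - window, step):
    idx = (positions >= start) & (positions < start + window)
    if idx.sum() < 20:                                 # skip sparsely covered windows
        continue
    mean_div = np.abs(p_selected[idx] - p_control[idx]).mean()
    if mean_div > 0.08:                                # arbitrary reporting threshold
        hits.append(start)

print(f"{len(hits)} windows diverged beyond the threshold")
if hits:
    print(f"they span roughly {hits[0]/1e6:.1f} to {(hits[-1]+window)/1e6:.1f} Mb")
```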
But it wouldn't be a surprise to RA Fisher (this is a link to his Facebook page, by the way -- go friend him, he only has 6!) that the observed changes are due to polygenes. And it is nice to see an experimental confirmation, and to note the implications it has for understanding complex traits.
Many biologists have been lured into single-gene thinking by the research paradigm and model set up initially by Mendel. For decades, single-gene traits formed the core of the evolving field of molecular genetics, including Morgan's work on the chromosomal arrangement of genes, many human geneticists' work on 'Mendelian' disease, the work leading to the idea that genes code for proteins, and much else.
Besides these examples of causal genetics, we had selection examples, such as sickle cell anemia, that seemed to reflect evolutionary genetics and were due to single protein changes. But we always knew (or those who cared to understand genetics should have, and could have, known) that traits are, as a rule, more complex than that. Sewall Wright and others knew this clearly in the early 20th century. 'Quantitative genetics', going back basically to Darwin (or at least to his half-cousin Francis Galton), recognized the idea of quantitative inheritance, and Fisher's influential but largely impenetrable (to mere mortals) 1918 paper was a flagship that formally reflected the growing recognition that complex traits could be reconciled with Mendelian genetics if many genes contributed to them.
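Fisher's reconciliation is easy to demonstrate numerically: as more Mendelian loci contribute small, additive effects to a trait, the discrete genotype classes blur into a continuous, roughly normal distribution. A minimal sketch, with purely illustrative parameters:

```python
# Sketch of Fisher's 1918 point: the more Mendelian loci that contribute additively
# to a trait, the more its distribution looks continuous and bell-shaped.
# Parameters are purely illustrative, not estimates for any real organism.
import numpy as np

rng = np.random.default_rng(42)
n_individuals = 10_000
shades = " .:-=+*#"                    # darker character = more individuals in that bin

for n_loci in (1, 2, 10, 100):
    genotypes = rng.binomial(2, 0.5, size=(n_individuals, n_loci))  # '+' allele counts
    trait = genotypes.sum(axis=1).astype(float)                     # equal additive effects
    z = (trait - trait.mean()) / trait.std()                        # standardize for comparison
    counts, _ = np.histogram(z, bins=21, range=(-3.5, 3.5))
    bar = "".join(shades[min(7, int(8 * c / counts.max()))] for c in counts)
    print(f"{n_loci:>3} loci: |{bar}|")  # 1 locus: a few spikes; 100 loci: a smooth bell
```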
The idea that strong directional or 'positive' selection favored a single gene grew out of the Mendelian thread, but anybody in quantitative genetics (agricultural breeders, say, or many working in population genetics theory), and anybody who understood gene networks, should have known that most of the time, especially given the typical weakness of selection, selection will not just find and fix a single allele in a single gene.
We had reason to know, and certainly know now, that when a trait's genetic effects are spread across many variable, contributing genes, the net selective difference on most if not all of them will be very small. The response to selection will be just what the fly experiments, and many others like them, found.
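A toy simulation of truncation selection (our own illustrative parameters, nothing like the fly experiment's actual design) shows the contrast: the same phenotypic selection that drives the single causal allele of a one-gene trait nearly to fixation only shifts each of a hundred contributing alleles modestly, fixing none of them.

```python
# Toy contrast, with invented parameters: truncation selection on an additive trait
# controlled by 1 gene versus 100 genes. The trait responds in both cases, but only
# the single-gene case shows anything like a classic one-allele 'sweep'.
import numpy as np

rng = np.random.default_rng(7)
pop_size, n_gen, keep = 1_000, 10, 0.2                 # breed from the top 20% each generation

def truncation_selection(n_loci, effect, noise_sd):
    """Allele frequencies after n_gen rounds of selection on an additive trait
    (equal effects, no linkage, no mutation -- a deliberately crude model)."""
    freqs = np.full(n_loci, 0.5)
    for _ in range(n_gen):
        geno = rng.binomial(2, freqs, size=(pop_size, n_loci))   # '+' allele counts, 0/1/2
        trait = geno.sum(axis=1) * effect + rng.normal(0, noise_sd, pop_size)
        cutoff = np.quantile(trait, 1 - keep)
        freqs = geno[trait >= cutoff].mean(axis=0) / 2            # parents set next generation
    return freqs

one_locus = truncation_selection(n_loci=1, effect=10.0, noise_sd=15.0)
many_loci = truncation_selection(n_loci=100, effect=1.0, noise_sd=15.0)

print("1-locus trait:   causal allele frequency after selection =",
      round(float(one_locus[0]), 2))
print("100-locus trait: mean frequency shift per locus =",
      round(float(many_loci.mean() - 0.5), 2),
      "| loci fixed:", int(np.sum(many_loci == 1.0)))
```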
In the television attention-seeking era we need melodramatic terms, and that is just what ideas like selective 'sweeps' are. The circumstances under which a single allele will 'sweep' (watch out, here comes that broom sweeping clean!) across an entire species' range, replacing all other alleles that affect a trait, are likely to be very restrictive. We don't need terms like hard and soft sweeps, and we should not over-dramatize what we find. Even a 'hard' sweep at the phenotype level is typically 'soft' at the level of specific genes, and it usually also softly leaves phenotypic variation in the population after it's over.
At the same time, these experiments are giving us great, detailed knowledge about how evolution works, when there is, and when there is not, strong selection. This supports long-standing theory and is no kind of 'paradigm shift', it's true, but it is new understanding of the details and genetic mechanisms by which Nature gets from here to there--whether it does that in a hurry or not.
Tuesday, December 21, 2010
Science tribalism?
This is a difficult story. At face value, if we understand what actually happened, the University of Kentucky denied an appointment to an astronomer because he held Christian creationist views. The University people were afraid he'd disseminate such views as if they were science, or in contradiction to what accepted science says.
We personally do not think that a creationist belongs in a science department if his (or her) views preclude acceptance of the scientific approach to material problems. We would say such views make a person unqualified for the job. But that's the job as we see it. For example, s/he might be able to argue that this or that result or experiment failed because God intervened. Such post hoc explanations are totally unacceptable within science....but if God exists and can meddle in the world they would be perfectly legitimate (whether testable or not is a relevant question, however).
Is our objection to such an appointment reasonable, or is it just our version of tribalism--we want our view and only our view in our community? If we're honest, this is a difficult question. Of course, scientists as a rule are convinced that empirical rather than biblical methods are needed to understand the world, and more importantly that they provide a better understanding of the world.
But creationists, at least the sincere ones, don't agree, whatever their reasons. Among other things, fundamentalists disagree as to what constitutes evidence (for example, many would say they have direct communication with God, which is beyond the kind of Enlightenment-derived empirical science but is--to them--a legitimate kind of evidence or proof about reality). And in universities, especially public ones, which are supposed to be centers of learning--and if we believe in democracy--what right does one group have to take over the criteria of knowledge? After all, roughly half of Americans (and who knows what fraction of Kentuckians!?) do not accept evolution as the explanation for life.
Honest assessments make this a very problematic issue. Those of us in science don't want shallow, ideological loony tunes on academic faculties of which we are a part. But most voters may. This is a clash of belief systems and is very frustrating for both sides. Again, here, we're being generous and crediting the 'other side' with legitimately, honestly held views rather than just conveniently political tribal ones.
If we insist that we're right, then we don't really have a 'democratic' view of how things should be. It's a problem because, despite many failings and vanities in science, science really does seem (to us) clearly to be right when compared with theological literalism. But science, too, has a history of clinging tightly to wrong ideas! Ask Galileo what he thought of Aristotle's cosmology!
Can a publicly supported university refuse admission to a student, say for graduate work, who is a fundamentalist? S/he could perhaps properly be informed of the minimal likelihood of forming a PhD committee or something like that, if it's the case. But admission?
These are difficult, serious, legitimate questions. Anthropologically, they can be understood in terms of how culture works. Politically and legally, within our own culture, the issues are less clear. But we should think about them.
Monday, December 20, 2010
To know or not to know: is that the question?
Hamlet famously brooded about the pros and cons of existence, but in our age of science the pros and cons of knowledge, especially partial knowledge, provide a comparable dilemma.
Triggering our post today is another very fine story by science reporter Gina Kolata in the NY Times, on early diagnosis of Alzheimer's Disease (AD). This can be a devastating disease, but there is currently no cure or effective treatment. That means that an early diagnosis can devastate people, and their families, for more years than the disease itself will. So the question being asked is whether the early diagnostic testing that now can be done should be done.
One can't control whether he or she exists, though one certainly can end that blissful state. But we can control information....or can we? Many argue that once information, or the ability to gain information exists, there is no corresponding kind of data suicide that is realistic. Someone, somewhere will pry Pandora's box back open, so the argument goes.
This is, however, not just an argument about the realities of the world, but also an argument of convenience by individuals who want to pry the box open for self-interested reasons. Often, as in the case, say, of stem-cell or GMO plant research, this is for commercial reasons. Others are amoral and simply want to know, or like to pry, or don't think information can be bottled up. Science itself provides answers to the empirical questions, but not these social, ethical, or moral ones.
We certainly do have limits on what can be done. Crimes are defined behavioral prohibitions in society in general. In science, we have research review boards and some laws that attempt, at least, to regulate what is allowed. You cannot get approved to pull people's fingernails out in order to test their pain thresholds, even if somehow you could get volunteers who were informed about the nature of the study. You cannot secretly sterilize psychiatric patients to see if it affects the sexual content of their dreams. You can do almost anything to insects or fish, on the incredible posturing that they feel no fear, but you can't euthanize a chimp you've used for research.
You can't build an atomic bomb in your basement, or park a functioning army tank (loaded with ammo) in your driveway.
So arguments that we can't control what is done in research are simply disingenuous. Instead of such posturing to justify 'anything goes because it can't be stopped', the discussion should be about whether we should, as a society, allow, or allow payment for, various aspects of medical practice.
Partly this should depend on whether the procedure can harm the subject more than it can help. Partly it should ensure that testing is entirely informed and voluntary. But 'informed' is a critical part of the story. How well does the test actually predict the future disease? How much imprecision is tolerable? According to the Times story, the uncertainties are still great. And family history is itself a (free) partial and informative predictor. And, of course, as you age you're going to get something unpleasant, and mental fog is a common part of that. Should we test for every disease of old age decades in advance?
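The arithmetic behind 'how well does the test predict?' is worth making explicit. A quick calculation with Bayes' rule -- the sensitivity, specificity, and prior-risk numbers below are invented for illustration, not taken from any actual AD test -- shows how even a seemingly accurate test produces mostly false alarms when the prior risk is low.

```python
# Positive predictive value of a hypothetical screening test, by Bayes' rule.
# All numbers are invented for illustration; they are not real AD test statistics.
def positive_predictive_value(sensitivity, specificity, prior_risk):
    """P(disease | positive test)."""
    true_pos = sensitivity * prior_risk
    false_pos = (1 - specificity) * (1 - prior_risk)
    return true_pos / (true_pos + false_pos)

for prior_risk in (0.01, 0.10, 0.30):
    ppv = positive_predictive_value(sensitivity=0.90, specificity=0.90, prior_risk=prior_risk)
    print(f"prior risk {prior_risk:>4.0%}: a positive result implies "
          f"a {ppv:.0%} chance of actually developing the disease")
```

With these made-up numbers, a positive result means less than a 10% chance of disease for someone at 1% prior risk, but nearly an 80% chance for someone at 30% prior risk -- which is one reason family history matters so much to how informative any test can be.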
And there are practical implications: AD screening is very costly, may be quite unpleasant (spinal taps) and some tests are intrusive. Anticipatory treatment based on a screening result can involve very costly interventions (as we know from mammography, PSA, cholesterol, glucose, and blood pressure testing). Those costs are spread across insurance pools, so we all pay for them one way or another.
We have clear precedent: There are pretty reliable tests for Huntington's Disease, but many who know they are in affected families and are potentially at risk choose not to be genotyped. At the same time, genotyping is available if they want it.
Hamlet's internal debate is relevant to this, as to existence itself:
To know or not to know.
Whether 'tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take arms against a sea of troubles,
And by opposing end them? To die: to sleep;
No more; and by a sleep to say we end
The heart-ache and the thousand natural shocks
That flesh is heir to, 'tis a consummation
Devoutly to be wish'd.
Friday, December 17, 2010
Peering over Peer Review
We all like to flatter ourselves about the deep insights that we publish in 'peer reviewed' journals. Peer review means that a couple of people who (an Editor thinks) are qualified have looked at a submitted paper and said it's OK to publish in the journal in question.
Peer review has important purposes: done at its best, it can spot errors, help with clarity of expression, and weed out trivial or irrelevant results. In part this keeps science at least somewhat honest, and can serve to avoid total Old Boy network exclusion of competing ideas. If we have to have standards for publication, and to judge scientists for grant support, tenure, promotion and the like, this is a reasonable way to do it.
But like anything, it can become an insider's snob system that either still supports the Old Boys or is exclusive, especially of maverick ideas. As it grew over the years, peer review, like any other system, became vulnerable to gaming by savvy investigators. And because an ever-growing set of journals has to keep cranking out the pubs, standards simply can't be kept uniformly high, or unbiased. Even with no ill intent, reviewers are far too overloaded to pay close attention to the papers they review (and the same goes for grant peer review). And on top of it all, perhaps, there is a proliferation of online journals, some of which are like a vanity press: on the pretext (sometimes justified) of avoiding the creativity-stifling side of peer review, almost anything submitted gets published....with the pretense of peer review residing in the ability of readers to comment, blog-style.
Meanwhile, free-standing social networking (like MT and other blogs) has become an important source of views and information, and an outlet for skepticism, circumspection, and different perspectives. It is not entirely trustworthy, because bloggers like anyone else have their perspectives and agendas (but of course so do the peer reviewers that editors choose -- among whom many are bloggers). But it's not useless and a careful blog-reader can (and should) judge the reliability.
A piece in Nature reviewing the recent storm of complaint about the arsenic-based-life story, which is increasingly characterized as over-stated (at best), shows the state of play. We immediately jumped on this paper here on MT, both for scientific reasons and because this seemed more like NASA advertising than carefully presented science. We think we were justified in criticizing the story because it was far from a complete demonstration of what the media hype said it was biologically, and because, despite the NASA hype, it was irrelevant to any serious question about whether there are Little Green Men out there beyond Hollywood. So in a sense ours was a meta-criticism: not of the primary work, but of the unjustified extension of it as if it had astrobiological implications.
Many others in the blogosphere, probably the most visible being Carl Zimmer over at The Loom, have seized on the paper, and some have noticed problems with the chemistry itself that we were not competent to notice. We said the paper itself, in Science, seemed reasonably restrained in terms of overclaiming. Perhaps even Science was vulnerable to the Big Story self-promotion and failed to obtain adequate peer review.
But there's more, because the Nature commentary was as much about the legitimacy of the blogospheric reaction as it was about the science itself. And the expected reaction, in our culture, is immediately to milk this for more self-gain: (1) NASA and the authors, who have clammed up in the face of this clamor (contrary to their courting of attention initially), want studies to confirm their results....in other words, please pass out more money. We would oppose that, because the work, even if well done, would have no real 'astrobiology' relevance. (2) Nature noted that they encourage online discussion of such issues, that is, use of their blog site and subscriptions to their journal.
Both Nature and NASA are vested interests. It's hard not to have self-interested motives these days, and perhaps we shouldn't harp on them. But they do bring to attention changes in the system by which scientific findings gain credence. The blogosphere that needs nurturing in this context is not one controlled by a commercial journal, but the independent, non-vested, free-for-all of the internet, where there is no control for PR-spinning or profit motive.
The blogosphere is peer review of a heterogeneous kind, but perhaps not so much worse than stuffy, professional, expert-controlled peer review. One need not pee over peer review to ask whether the social wheels that decide what sees the light of day could be turned more democratically and openly, rather than covertly.
Thursday, December 16, 2010
Doc, can you fill me in on the latest approach to dental pot-holes?
Good grief! Can't any story ever be put to rest? When can we say that science has actually answered something? Here's a story about the US FDA trying to decide whether more research is needed to see whether or not amalgam fillings for dental caries can cause mercury poisoning. Amalgam is 50% mercury, and mercury is an undisputed neuro (and other) toxin. Whether using it in children or pregnant women could cause damage due to mercury poisoning is at issue.
The risk must be small or we'd clearly know that there was one. Here, the benefits of amalgam over other materials (stability, better interaction with the tooth) would have to be weighed against the toxicity risk, if there is any of the latter.
Estimating very small risks--including, by the way, that of dental X-rays--is a very challenging business, not unlike the challenge of estimating genetic risk in GWAS and other wild-goose-chase research. It's hard to estimate risk with acceptable reliability, and perhaps at least as hard to make risk-benefit calculations. The same has been the case with PSA tests for prostate cancer and mammograms to detect early breast cancer, though these two cases may be easier than the dental one because the risks may be somewhat higher and there may be a lot more data.
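To see just how challenging, here is a rough sketch of the standard sample-size arithmetic for comparing disease risk between exposed and unexposed groups, using a two-sided 5% test and 80% power; the baseline risk and the relative risks are invented for illustration, not estimates for amalgam or X-rays.

```python
# A rough sketch of why tiny risks are so hard to pin down: the approximate
# number of subjects needed per group to detect a difference between two
# proportions. All risk figures below are hypothetical.
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed per group to tell p1 from p2
    (two-sided 5% test, 80% power; normal-approximation formula)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

baseline = 0.001                     # suppose 1 in 1,000 unexposed people affected
for rr in (2.0, 1.5, 1.2):           # hypothetical relative risks in the exposed
    n = n_per_group(baseline, baseline * rr)
    print(f"relative risk {rr}: about {n:,} subjects per group")
```

The particular numbers don't matter; the shape of the problem does. Halving the excess risk roughly quadruples the required study, which is why truly small risks tend to stay in the 'we can't be sure' zone.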
Testing evolutionary just-so stories about Darwinian adaptive selection in Nature is another instance where theory, or ideology, comes up against the facts, or the lack of available facts. Arguments are heated, and the issues are basically never settled.
Things are often complicated by politics (as in climate change), or vested interests, but it doesn't seem to be the case here. Perhaps here we see what happens when scientific technologies improve so much in power that we can even think to ask these kinds of questions.
Whatever the case in this instance, statistical inference on weak, complex probabilistic causal situations will be a major and common aspect of life as well as physical science in the foreseeable future.
Wednesday, December 15, 2010
Every program has bugs and every bug has a program!
Here's another Nature paper being embraced as though it's new news. Single celled organisms, in this case E coli, can be 'programmed' like computers. As one report puts it,
A team at the University of California has successfully implanted E coli bacteria with the key molecular circuitry to act as computers. They've given the cells the same sort of logic gates, and created a method to build circuits by 'rewiring' communications between cells. It means cells could be turned into miniature computers, they say.
Not electronic I/O devices, these 'logic gates' are built of genes. Naturally enough, since that's exactly how communication happens naturally within and between cells. But the story is being reported so metaphorically, in robotic terms, that the beauty of how genetic signaling has worked for 4 billion years is getting lost in translation.
The idea that life works by a kind of logic was a theme in our earlier 2004 book with that title, Genetics and the Logic of Evolution, and in Mermaid's Tale. We didn't invent the ideas, of course, but have tried to show their ubiquity as characteristics of life: presence or absence, combinations of factors, and so on, are how life works. It's a combination of 'Boolean' phenomena and others, and it isn't a computer program the way Excel (or blogspot.com) is. And while evolution accounts for aspects of how life gets that way, it is by no means the only story in life.
Here's one example of the computer metaphor invoked by the media:
"The result is that bacteria can be enslaved to become part of a hive mind computer, performing the will of a central controller."
The Nature paper's senior author does the same in an interview:
"Here, we've taken a colony of bacteria that are receiving two chemical signals from their neighbors, and have created the same logic gates that form the basis of silicon computing."
[Image: Staphylococcus biofilm (photo from Wikimedia Commons)]
In fact, the authors of the paper appropriately recognize that this is how gene signaling actually works, since it was the basis of their work. What they did was take advantage of the quorum-sensing ability of bacteria, a trait that allows them to collect into large groups called biofilms, by sending and receiving signals among themselves, when times are tough and group living would be more beneficial than trying to make it alone. The researchers engineered two gene promoters, segments of DNA that are involved in 'turning on' a gene, to signal to a repressor, a gene that represses another gene's expression by blocking its promoter, in order to study the patterns of cell-to-cell communication between two different strains of E coli.
As the paper describes the system:
The repressor inactivates a promoter that serves as the output. Individual colonies of E. coli carry the same NOR gate, but the inputs and outputs are wired to different orthogonal quorum-sensing ‘sender’ and ‘receiver’ devices. The quorum molecules form the wires between gates. By arranging the colonies in different spatial configurations, all possible two-input gates are produced, including the difficult XOR and EQUALS functions. The response is strong and robust, with 5- to greater than 300-fold changes between the ‘on’ and ‘off’ states. This work helps elucidate the design rules by which simple logic can be harnessed to produce diverse and complex calculations by rewiring communication between cells.
Everything in a digital computer depends on highly ordered on-off states (which can be represented as true/false, or 1's and 0's). Continuing with the computer-program metaphor, what the experiment shows is that a bacterium can be transgenically altered to switch genes on or off under logic-gate types of conditions. The idea isn't new in any way, since it's how gene expression works, but the engineering shows that progress is being made toward highly controlled manipulability of genome usage and, at least initially, toward engineering microbes to be the next beasts of burden for human society: to do work for us.
There is, however, a danger in using metaphor uncritically, or in the media. In that sense this is just more media hype, like the arsenic story last week and so many Big Stories in genetics every week. The work is interesting in itself, even if it's being misrepresented as conceptually new.
Genomes are not computer programs. They work very differently in many ways, and one of them is that they involve quantity as well as quality (they are not strictly 'Boolean'). The timing and concentration of the components of a logical system are important in life, not just its digital aspects. So may be their 3-dimensional concentrations within cells, their 2-dimensional patterning on exposed cell surfaces, and their 1-dimensional pattern along chromosomes. There are other dimensional aspects of gene control, too. These quantitative aspects would perhaps correspond to what is called 'analog' computing. Life uses both. But in so many ways it is misleading to think of life in terms that would be familiar to C++ (or even Perl!) scribblers.
What this study shows is that there are controllable aspects of gene usage that go beyond the individual gene knock-out or knock-in modifications so often done in experimental systems like fruit flies or inbred mice. Complex communication can be engineered: in some circumstances, bacteria start making something and secreting it, and other bacteria detect and respond to it in engineered ways. The media are touting this as if it will be the equivalent of a new electronics era, with an enormous array of useful products designed and produced. Maybe it will, but to what extent is the story being taken beyond a technical achievement in order to boost stock prices?
The company with which the investigators collaborated, Life Tech, does have big plans, as reported here:
The company plans to commercialize the technology as genetic programming software, said Kevin Clancy, senior staff scientist of bioinformatics.
The software would "look just like programming language for a computer or a robot," Clancy said. It would convert the instructions into a DNA sequence that could be made by a DNA synthesis company. The DNA would then be inserted into the cell, such as a bacterial, yeast or mammal cell.
But cells came first, so let's turn this on its head. Digital computers, restricted as they are to simple 0 and 1 states, even when the 1's and 0's are arrayed with as much complexity as it takes to, say, connect the world through Facebook, would only make pretty primitive cells. The complex combinatorial 'Boolean' aspects of a cell dwarf our machines, even in the simple bacteria being manipulated here. But if all the other things in the bacteria can be controlled well enough, the hope is that the changes the engineers would like to add would show through the rest of the complexity. In spite of the media hype, if bacteria really can be harnessed this way (and that seems likely), there could clearly be many useful applications.
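One piece of the logic is easy to try outside the lab: a NOR gate is 'functionally complete', so copies of the same gate, wired together, can produce any two-input function, including the XOR and EQUALS gates the paper highlights. Here is a toy sketch in Python, with the obvious caveat that in the experiment the 'wires' are quorum-sensing molecules passing between colonies, not function calls.

```python
# A toy sketch in silicon-speak, not biology: because NOR is functionally
# complete, wiring copies of a single NOR gate together yields every two-input
# logic function. In the experiment each gate is a bacterial colony and the
# "wires" are quorum-sensing molecules; here they are just Python functions.

def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

def EQUALS(a: bool, b: bool) -> bool:          # a.k.a. XNOR
    not_a_and_b = NOR(a, NOR(a, b))            # true only when a=0 and b=1
    a_and_not_b = NOR(b, NOR(a, b))            # true only when a=1 and b=0
    return NOR(not_a_and_b, a_and_not_b)

def XOR(a: bool, b: bool) -> bool:
    eq = EQUALS(a, b)
    return NOR(eq, eq)                         # NOR of a signal with itself = NOT

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "XOR:", int(XOR(a, b)), "EQUALS:", int(EQUALS(a, b)))
```

That is the design trick the abstract describes: one engineered gate, reused, with the spatial layout of the colonies doing the 'programming'.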
Tuesday, December 14, 2010
Should we cut Darwin out of parts of the human skin color story?
A great many of my students seem to think so!
My freshmen students and I have just spent a semester reading through Nina Jablonski’s book Skin: A Natural History in which she lays out a hypothesis for the evolution of human skin color variation based on natural selection, a.k.a. Darwinian evolution.
[I think it's a fantastic book for introducing anthropology to freshmen (written by a dear friend who I happen to also greatly admire), so I built a class around it.]
Where there is intense UV radiation (the tropics) people adapted to its destructive powers by evolving natural sunscreen, that is, lots of melanin in their skin.
Conversely, in areas where there is relatively little UV (away from the tropics, going towards the poles), people lost pigmentation in order to maximize the sun’s stimulation of Vitamin D synthesis in the skin (something that melanin inhibits).
Your skin color is about the UV environment of your ancestors. Thus, Seal and Heidi Klum are explained.
As all adaptive scenarios must be, these phenotypes are linked to reproductive success. Highly melanized skin is the primitive condition in humans, which our common ancestor in Sub-Saharan Africa evolved after fur loss to prevent UV radiation from destroying folate, a process that can lead to death and birth defects of offspring. Once humans began dispersing around the globe, the ones who came to live in low-UV environs evolved poorly melanized skin in order to allow enough vitamin D to be synthesized by a mother so that her fetus could form properly and then eventually grow up to reproduce successfully too. We're specifically talking about the development of the skeleton, since vitamin D is necessary for calcium to do its thing.
That women are lighter than men around the globe supports this notion that allowing UV to penetrate the skin during pregnancy is important.
Perhaps the strongest support for this hypothesis is the stunning map of the globe that Jablonski and Chaplin put together. With some exceptions, the global distribution of UV intensity is positively correlated with the amount of melanin in indigenous humans, so they were able to construct pretty accurate predictions of human skin color around the world based on UV.
Seal's ancestors are from Africa, while Heidi's are from Scandinavia.
[Of course, before that, Heidi's ancestors, like Seal's and yours and mine and everyone else's, lived in Africa.]
It’s an elegant explanation for the evolution of human skin color variation, and one that has gained a lot of support. But the vitamin D aspect of the story is definitely not a hypothesis preferred by all.
And then of course, as reported here on the MT recently, even a mega-study on vitamin D can't tell us for sure what levels are required to stave off health problems, or even what those health risks are!
But it's difficult to go into the details and nuance of these issues about skin color variation and vitamin D while introducing evolution to students. For many of my students, this is the first time that they’ve learned about evolution in a scholarly setting and we perform activities to illustrate the differences between Lamarckian evolution and Darwinian selection. Of course we also discuss all the known evolutionary forces—mutation, gene flow, drift, and selection—not just selection.
Few students are able to digest all of this the first time they learn it. And regardless of the explanations for why that may be (i.e. instruction quality, lack of effort, difficulty with the concepts, too much bias and misunderstanding brought into the classroom, etc…), it takes longer than a semester to understand how natural selection works and how it does not work.
For many students, the moment that they grasp natural selection, they begin to see the world through selection-colored spectacles. Everything has a reason, much like Dr. Pangloss's philosophy in Voltaire’s Candide. And it’s not just physical features... behaviors become adaptive by default as well.
It’s fine if these ideas are understood to be hypotheses, accounting for the complexities of the genes and physiological processes that lie behind the traits, and accounting for the limitations to testing them. But all too often students blindly assume that natural selection explains EVERYTHING.
[And this leads down the slippery slope to Social Darwinism so it's not something to take lightly.]
Now, even though the adaptationist perspective is rampant, that’s not at all the pattern that emerges when students interpret and explain human skin color.
They do the exact opposite! They take natural selection and adaptation out of half of the story!
Here’s (my paraphrasing of) how many of my freshmen students answer when they are asked to explain the Darwinian folate/Vitamin D hypothesis offered by Jablonski in her book:
Natural Selection explains melanized skin in the tropics because it acts as a natural sunscreen to protect against harmful UV. However, for the non-melanized people in regions with little UV exposure… well, they don't have much melanin because “they don’t need it.”
Depending on how you interpret that (aside from the possibilities that I'm not doing my job well enough, that they're not listening in class, or that they're not doing the reading), the students are invoking genetic drift, neutral theory, or Lamarckian principles! And Darwin is totally out.
I doubt many are aware of the theoretical significance of their answers. But by erroneously explaining a Darwinian concept, they're offering us a window into their intuition.
Just so we’re all on the same page, here's a translation of the various scenarios for the loss of a trait, like pigmentation:
1. If losing it is beneficial, then those who have lost it will out-survive and out-reproduce others, and future generations will have more have-nots than haves. If it's crucial to survival and reproduction, then the loss will become fixed in the population as an adaptation. (Darwinian adaptation through natural or sexual selection.)
2. If you don't need it, then you can lose it without issue. Reproductive success does not depend on whether or not you have the trait, so chance alone will determine how prevalent the haves and have-nots will be in any given generation. Relaxed selection plus chance can ultimately lead to the elimination of alleles altogether! Furthermore, there is a constant, low mutation rate, and while selection is weeding out deleterious mutations in some genes, or favoring adaptive mutations in others, mutations in genes for traits that do not affect evolutionary fitness can accumulate. As long as these mutations are not harmful and purged by selection, they can disrupt the gene and either damage the trait or cause its complete loss. (Genetic drift and neutral theory, both with relaxed selection.)
3. If you don't use it, you'll lose it, meaning that the trait can fade within a lifetime if it's neglected. That neglected trait is passed on to future generations, which will continue to experience its decline if they also stop using it, and if that's widespread throughout the population, then the trait disappears. (It's assumed that if a trait is not used, it's not "needed," which is why the casual wording of this scenario can sound like #2.) (Lamarckian evolution.)
The first two scenarios, #1 and #2, are widely accepted as biological phenomena, so they are valid hypotheses for the loss of pigmentation in people who live far from the tropics. The third scenario is seen by the scientific community as a misconceived foil to “real” evolution, having fascinating historical interest and useful pedagogical appeal, but that is all.
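To make the contrast between scenarios #1 and #2 concrete, here is a toy, single-locus simulation in Python. The population size, fitness cost, and starting frequency are all invented, and real pigmentation genetics is of course nothing like this simple.

```python
# A toy, single-locus sketch of scenarios #1 and #2 (all numbers invented):
# a "pigment" allele either carries a small fitness cost (selection against it)
# or no cost at all (pure drift), in a small population over many generations.
import random

def simulate(pop_size=500, start_freq=0.9, cost=0.0, generations=2000, seed=1):
    """Return the final frequency of the pigment allele."""
    random.seed(seed)
    freq = start_freq
    for _ in range(generations):
        # Selection: the pigment allele contributes (1 - cost) relative fitness.
        weighted = freq * (1 - cost)
        expected = weighted / (weighted + (1 - freq))
        # Drift: binomial sampling of 2N gene copies around that expectation.
        copies = sum(random.random() < expected for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
        if freq in (0.0, 1.0):
            break
    return freq

print("with a 1% cost (scenario 1):", simulate(cost=0.01))
print("with no cost, drift only (scenario 2):", simulate(cost=0.0))
# Selection removes the allele reliably; drift may lose it, keep it, or fix it,
# by chance alone.
```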
Okay then, how can we interpret what I've called this "intuitive" response by my students?
First of all, tanning is certainly enabling this muddling of Darwin. That skin color changes in response to stimulation by UV (and hormones!), and is not static during life, makes it complex and matches it to UV in a non-evolutionary way, a way my students are used to assuming. A way that people around the world assume! Some of my Kenyan friends think that my skin would look like theirs if only I stopped wearing sunscreen lotion while I visit Kenya.
And second, if students think of melanin as natural sunscreen, then it's probably easy for them to take that metaphor too far and liken it, conceptually, to sunscreen lotion.
You apply sunscreen lotion when you need it and you don't apply it when you don't need it. You need it on the Equator, yet you don't need it as much far from the Equator. This feeds back into their evolutionary story: Melanized skin evolves to be where it's needed and it evolves away where it's not needed. This is, I think, the intuitive rationale behind my students' answers. Relaxed selection, neutral theory, and genetic drift provide the backing scientific power. Plus, pigmentation loss in other animals is overwhelmingly explained this way.
But, additionally, "need" can be one way to casually express the concept of Lamarckian "striving." Are my students really Lamarckists when they say that white people are that way because their ancestors didn't need much melanin? It's hard to say.
But I also can't help but wonder, What if their confusion is not just due to their theoretical naivete? What if a totally Darwinian explanation for human skin color variation is hard to understand because it just isn't the best explanation?
Maybe my next class should be dedicated to testing and fleshing out how we could test the adaptive hypothesis for human pigmentation loss versus the alternatives.
But even if we did know the real scenario, there would still be lingering questions...
Maybe it's a horrible underappreciation of deep time and convergent evolution... Maybe it's a gross underestimation of mutation and genetic drift...
But why do so many animals in UV-limited habitats (caves and sea floor) lose pigment? Is it just one of the few visible traits that a constant accumulation of mutations can safely derail under the watchful eye of selection?
And what about epigenetics and pigmentation? Hmm?
Oh, I don't know, maybe Stephen Colbert had it right: Pale skin is best for hiding in a snowbank.
* The Darwinian explanation for skin color variation that I describe here (called "Darwinian" because it's about natural selection acting on melanin differently in different environments), is NOT the same as Darwin's which he discusses in Descent of Man.
Monday, December 13, 2010
The Climate climate and a big footprint in the sand
Well, it seems that there's to be little change about climate change. Yet another UN climate meeting goes belly up because people's various types of greed override agreement.
Too bad! If people have to change the way they live, and perhaps especially if rich people have to live less rich, then naturally they'll find reasons to scuttle any agreement that's actually going to do something. There's enough to be cynical about already, but one can add, perhaps, something about hypocritical UN cynicism that may reinforce why some don't want to go along.
Where was this meeting held? The delegates humbly treated themselves to Cancun, which is nothing other than a hard-to-reach fancy resort. This was not a low carbon footprint meeting! Lots of flying to get there, fancy (air conditioned?) hotel, fancy food at least much of which had to be transported in from afar. Lots of delegates from everywhere spewing out CO2 to get there.
It appears that the result of this costly conference is a vague and largely non-committal, non-enforceable agreement to say something will be done, without being compelled to actually do much. So the delegates left a footprint, in the sand on the beach as well as the carbon footprint of the main emissions that seem to have come from the meeting: hot air.
Too bad. We would view this as a triumph of the problems of uncertainty in science. If science is honest, and acknowledges weaknesses in data and uncertainty, as climate scientists have done, then the core things that do seem certain can be washed away by the blow of dissembling by those whose lives or wealth might have to be curtailed if the science were recognized. The desire of scientists for their message to be heard has led to some overstatements, but these have been trivial compared to the core of solid evidence. So when science comes up against politics, politics seems to have the upper footprint.
Even some of the posturing doubters recognize that climate is, empirically, changing. They can argue truthfully that it's always changing, but even if they are right that human activity is not responsible (and there's no good reason to think they're right about that), it is still true that if we try to ameliorate the problem, lots of dislocation and who knows what other untoward events or cataclysms could be avoided by modifying how we as a species live.
Friday, December 10, 2010
Why most Big Splash findings in science are wrong
"Our facts are losing their truth"
We all are taught the 'scientific method' by which we now understand the world. It is widely held that science is marching steadily toward an ever more refined and objective understanding of the one true truth that's out there. Some philosophers and historians of science question just how objective the process is, or even whether there's just one truth. But what about the method itself? Does it really work as we believe (and is 'believe' the right word for it)?
An article in the Dec 13 issue of the New Yorker, "The Truth Wears Off" by Jonah Lehrer, raises important issues in a nontechnical way:
On September 18, 2007, a few dozen neuroscientists, psychiatrists, and drug-company executives gathered in a hotel conference room in Brussels to hear some startling news. It had to do with a class of drugs known as atypical or second-generation antipsychotics, which came on the market in the early nineties. The therapeutic power of the drugs appeared to be steadily falling. A recent study showed an effect that was less than half of that documented in the first trials, in the early nineties. Before the effectiveness of a drug can be confirmed, it must be tested again and again. The test of replicability, as it’s known, is the foundation of modern research. It’s a safeguard for the creep of subjectivity. But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts are losing their truth. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.
Unofficially, it has been called the 'decline effect', and Lehrer cites many examples of strong effects going on to 'suffer from falling effect size'.
Lehrer mentions John Ioannidis, a leading advocate of meta-analysis--pooling studies to gain adequate sample size and more stable estimates of effects--and a paper Ioannidis wrote about why most big-splash findings are wrong. The gist of the argument is that major scientific journals like Nature (if that's actually a scientific journal) publish big findings--that's what sells, after all. But what are big findings? They're the unexpected ones, with strong statistical evidence behind them.....in some study.
But statistical effects arise by chance as well as by cause. That's why we have to support our case with some kind of statistical criterion, such as the level of a 'significance' test. But if hundreds of investigators are investigating countless things, then even if they all use the standard 5% (or 1%) significance criterion, some of them will, just by chance, get such a result. The more studies tried by the more people, the more dramatic the flukes will be. Yet that is what gets submitted to Nature, and what history shows they love to publish.
GWAS studies magnify the problem greatly. By doing hundreds of thousands of marker tests in a given study, the chance of some 'significant' result arising just by chance is substantial. Investigators are well aware of this, and try to adjust for that by using more stringent significance criteria, but nonetheless with lots of studies and markers and traits, flukes are bound to arise.
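The arithmetic behind that worry is worth seeing once. Here is a quick sketch that treats the tests as independent (which neighboring markers are not, though the point survives); the numbers of tests are purely illustrative.

```python
# A quick sketch of the 'fluke' arithmetic: if none of the markers (or studies)
# has any real effect, the expected number of 'significant' results is the
# number of tests times the threshold, and the chance of at least one fluke
# climbs toward certainty. Numbers below are illustrative only.

def false_positive_summary(n_tests, alpha):
    expected = n_tests * alpha
    p_at_least_one = 1 - (1 - alpha) ** n_tests
    return expected, p_at_least_one

for n_tests, alpha in [(100, 0.05),          # a hundred independent studies
                       (500_000, 0.05),      # a GWAS marker panel, naive cutoff
                       (500_000, 5e-8)]:     # a stringent genome-wide cutoff
    expected, p_any = false_positive_summary(n_tests, alpha)
    print(f"{n_tests:>7} tests at alpha={alpha:g}: "
          f"~{expected:g} false hits expected, P(at least one) = {p_any:.3f}")
```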
Worse, what makes for a Big Splash is not the significance test value but the effect size. The usual claim is not just that someone found a GWAS 'hit' in relation to a given trait, but that the effect of the high-risk variant is major--explains a huge fraction of a disease, for example, making it a juicy target for Big Pharma to try to develop a drug or screen for.
But a number of years ago Joe Terwilliger and John Blangero showed by simulation that even when there is no causal element in a genomic search, the estimates of the effect size for the sites that survive the significance tests are bloated....that's how they reached their significance when the criteria were cautiously stringent. The effect size, conditional on a high significance test, is biased upwards. So, as Joe et al. said way back then, you have to collect a new, unbiased sample focused on the particular purported cause to begin estimating the strength of effect that the cause actually has.
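Their point is easy to demonstrate with a toy simulation, which is all this is; it is not Terwilliger and Blangero's actual analysis, and every number in it is invented.

```python
# A toy re-creation of the winner's-curse point (not Terwilliger and Blangero's
# actual simulation): give every marker the same small true effect, add sampling
# noise, keep only the estimates that clear a stringent threshold, and the
# survivors' average estimated effect is inflated well above the truth.
import random, statistics

random.seed(42)
true_effect = 0.05        # the same small real effect at every marker
standard_error = 0.05     # sampling noise in each study's estimate
threshold = 3.29          # z-score cutoff, i.e. a fairly stringent test

estimates = [random.gauss(true_effect, standard_error) for _ in range(200_000)]
significant = [b for b in estimates if b / standard_error > threshold]

print("true effect:                 ", true_effect)
print("mean of all estimates:       ", round(statistics.mean(estimates), 3))
print("mean of 'significant' hits:  ", round(statistics.mean(significant), 3))
print("fraction reaching threshold: ", len(significant) / len(estimates))
```

Only the luckiest overestimates clear the bar, so the survivors' average is inflated well above the truth, which is why a fresh, unselected sample is the only honest way to re-estimate the effect.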
And this brings us back to the New Yorker story of the diminution of findings with follow-up, and why facts are losing their truth. Lehrer concludes:
Even the law of gravity hasn't always been perfect at predicting real world phenomena. (In one test, physicists measuring gravity by means of deep boreholes in the Nevada desert found a two-and-a-half-per-cent discrepancy between the theoretical predictions and the actual data.) Despite these findings...the law of gravity remains the same.
...Such anomalies demonstrate the slipperiness of empiricism. Although many scientific ideas generate conflicting results and suffer from falling effect sizes, they continue to get cited in the textbooks and drive standard medical practice. Why? Because these ideas seem true. Because they make sense. Because we can't bear to let them go. And this is why the decline effect is so troubling. ....it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that's often not the case. Just because an idea is true doesn't mean it can be proved. And just because an idea can be proved doesn't mean it's true. When the experiments are done, we still have to choose what to believe.
These points are directly relevant to evolutionary biology and genetics--and to the over-selling of genetic determinacy that we post so often about. They are sobering for those who actually want to do science rather than build their careers on hopes and dreams, using science as an ideological vehicle in the same way that other ideologies, like religion, are used to advance societal or individual self-interest.
But these are examples of 'inconvenient truths'. They are well known but often honored mainly in the breach. Indeed, even as we write, a speaker on our campus is about to invoke all sorts of high-scale 'omics' (proteomics, genomics, etc.) to tech our way out of what we know to be true: the biological world is largely complex. Some Big Splash findings may not be entirely wrong, but most are at least exaggerated. There are too many ways that flukes or subtle wishful thinking can lead science astray, which is why the supposedly iron-clad 'scientific method', as an objective way to understand our world, isn't so objective after all.
Thursday, December 9, 2010
Encounters with Evolution....or....the Ark-etype of Ignorance
So the governor of Kentucky is supporting job creation by building an ark. No, not just 'an' ark, the Ark! As the New York Times reports:
The state has promised generous tax incentives to a group of entrepreneurs who plan to construct a full-size replica of Noah’s ark, load it with animals and actors, and make it the centerpiece of a Bible-based tourist attraction called Ark Encounter.
This theme park is being designed by the same people who designed Kentucky's Creation Museum, a tourist destination that, according to the NYT, has drawn 1.2 million visitors in its first 3 years of operation. Yes, Cashing in on Christ, something surely Jesus himself would have gone to (but he'd only have gone there after chasing the money-changers out of the temple). The good Governor is surely correct that these same people would happily drop more money in his state at the proposed Ark Encounter. Build it and they will come -- two by two. Or was it in sevens (take a look at Genesis)? And was it really only one male and one female of each type -- or did Noah have a don't ask-don't tell policy?
Some benighted citizens are wrestling with the question of whether it's constitutional to give governmental funding to a religious endeavor -- the governor says that's not an issue, it's just a jobs issue (or did he mean to say Job's).... but we'll let him battle that one out.
We would like to propose an alternative to the Ark Encounter along the lines of the replica of Lascaux, the cave complex in southwest France whose walls are graced with 17,000-year-old paintings of animals made by early humans. Lascaux II, the reproduction of two halls of the cave and their paintings, has drawn millions of visitors since it opened in 1983 -- many of them even Americans. Lascaux II is a replica because the tromping and defacing caused by huge numbers of tourists on a human-ancestry pilgrimage was destroying the very site they came to see.
We want to take a tip from Lascaux, however, and suggest that a theme park with replicas of sites around the world that have been of major importance to understanding the real creation story, how life really got here and evolved, could create as many jobs and draw as many visitors as the Governor's Ark. Maybe not the same visitors, granted.
But imagine it:
Exhibit A could be a replica of the stromatolites found in western Australia, as in the photo to the left, fossilized evidence of some of the earliest organisms yet found on Earth, from something like 3.5 billion years ago. The foyer might be quickly passed by, because it's just a rocky remnant of what looks like layered mud.
But Exhibit B could reproduce much of the Burgess Shale Formation, in the Canadian Rockies, an extraordinary deposit of tens of thousands of fossilized organisms that represent a diversity of life from 500 million years ago, during the Cambrian explosion, that is no longer seen today. It could be hands-on, too, with prickly trilobites for kids to play with, or video Find the Fossil games.
Next perhaps, there'd be the requisite dinosaur exhibit, beloved of kids, and a great draw. T. rex and all that, of course. But no halo'ed humans walking amongst them giving blessings in our exhibit, however.
And then, of course, we'd expect a Hall of Early Hominids -- replica fossil footprints would be fun, such as those from Laetoli. And a hulking, growling Neandertal or two. Or a diorama of them sitting around a fire gnawing half-cooked moose meat or hide (to make clothing), or hammering out projectile stones or big Alley Oop clubs. And, of course, the lazy one back in the cave daubing it with graffiti.
And of course, the pièce de résistance: a replica of HMS Beagle, the ship on which Charles Darwin spent five years as ship's naturalist, collecting the evidence that eventually convinced him that all life on Earth shares a common origin, and that the diversity of life he saw around him was due to descent from that common origin, with modification. Kids would love exploring the Beagle every bit as much as they'd love climbing around on an ark. There should be a lifelike statue of Darwin himself, with net in hand, setting out on a bug-collecting expedition. This could be made under contract to Madame Tussaud's. Here, should one be generous and compromise by allowing a halo to be placed over his head?
This theme park, let's call it Encounters with Evolution, would be educational, that is, about real facts; there would be no controversy over using government funds to build it; and as for jobs, it would employ many people in its construction and would surely attract as many visitors as the Governor's Ark.
Of course this is just a quick imagining of such a site. There are many many more exhibits that could be included...surely you could suggest some. Maybe a contest for ideas could be held.
Ah, well. A pipe dream. The sad fact that it's an Ark and not our dream that's being proposed by the Governor of Kentucky (yes, elected by a majority of the state's population!) goes a long way toward explaining why American students score so poorly in science when compared to much of the rest of the world (such as Shanghai). Science education is failing us. (Note to the governor: Google and Intel are looking at graduates from legitimate colleges, not setting up recruiting desks outside revival-meeting tents.)
In the foreseeable future, it'll be us Americans who are sitting around a meagre campfire gnawing raw meat (probably rat-meat, or maybe just McBurgers), while people in other countries, who value real education, will be dining on caviar.....and smiling patronizingly at our plight.
The state has promised generous tax incentives to a group of entrepreneurs who plan to construct a full-size replica of Noah’s ark, load it with animals and actors, and make it the centerpiece of a Bible-based tourist attraction called Ark Encounter.
This theme park is being designed by the same people who designed Kentucky's Creation Museum, a tourist destination that, according to the NYT, has drawn 1.2 million visitors in its first 3 years of operation. Yes, Cashing in on Christ, something surely Jesus himself would have gone to (but only after chasing the money-changers out of the temple). The good Governor is surely correct that these same people would happily drop more money in his state at the proposed Ark Encounter. Build it and they will come -- two by two. Or was it in sevens (take a look at Genesis)? And was it really only one male and one female of each type -- or did Noah have a don't ask-don't tell policy?
Some benighted citizens are wrestling with the question of whether it's constitutional to give governmental funding to a religious endeavor -- the governor says that's not an issue, it's just a jobs issue (or did he mean to say Job's).... but we'll let him battle that one out.
We would like to propose an alternative to the Ark Encounter, along the lines of the replica of Lascaux, the cave complex in southwest France whose walls are graced by 17,000-year-old paintings of animals made by early humans. Lascaux II, a reproduction of two of the cave's halls and their paintings, has drawn millions of visitors since it opened in 1983 -- many of them even Americans. Lascaux II is a replica because the tromping and defacing by the huge numbers of tourists on a human ancestry pilgrimage there were destroying the very site they had come to see.
We want to take a tip from Lascaux, however, and suggest that a theme park with replicas of sites around the world that have been of major importance to understanding the real creation story -- how life actually got here and evolved -- could create as many jobs and draw as many visitors as the Governor's Ark. Maybe not the same visitors, granted.
But imagine it:
Exhibit A could be a replica of the stromatolites found in western Australia, as in the photo to the left: fossilized evidence of the earliest organisms yet found on Earth, from something like 3.5 billion years ago. Visitors might pass quickly through this foyer, since it's just a rocky remnant of what looks like layered mud.
But Exhibit B could reproduce much of the Burgess Shale Formation in the Canadian Rockies, an extraordinary deposit of tens of thousands of fossilized organisms from the Cambrian explosion, some 500 million years ago, representing a diversity of life no longer seen today. It could be hands-on, too, with prickly trilobites for kids to play with, or Find-the-Fossil video games.
Next, perhaps, there'd be the requisite dinosaur exhibit, beloved of kids and a great draw. T. rex and all that, of course. But no haloed humans walking amongst them giving blessings in our exhibit.
And then, of course, we'd expect a Hall of Early Hominids -- replica fossil footprints, such as those from Laetoli, would be fun. And a hulking, growling Neandertal or two. Or a diorama of them sitting around a fire gnawing half-cooked moose meat or hides (to soften them for clothing), or hammering out stone projectile points or big Alley Oop clubs. And, of course, the lazy one back in the cave daubing it with graffiti.
And of course, the pièce de résistance: a replica of HMS Beagle, the ship on which Charles Darwin spent five years as ship's naturalist, collecting the evidence that eventually convinced him that all life on Earth shares a common origin, and that the diversity of life he saw around him was due to descent from that common origin, with modification. Kids would love exploring the Beagle every bit as much as they'd love climbing around on an ark. There should be a lifelike statue of Darwin himself, net in hand, setting out on a bug-collecting expedition -- perhaps made under contract by Madame Tussaud's. Here, should one be generous and compromise by allowing a halo to be placed over his head?
This theme park -- let's call it Encounters with Evolution -- would be educational, that is, about real facts; there would be no controversy over using government funds to build it; and as for jobs, it would employ many people in its construction and would surely attract as many visitors as the Governor's Ark.
Of course, this is just a quick imagining of such a site. There are many, many more exhibits that could be included -- surely you could suggest some. Maybe a contest for ideas could be held.
Ah, well. A pipe dream. The sad fact that it's an Ark, and not our dream, that's being proposed by the Governor of Kentucky (yes, elected by a majority of the state's voters!) goes a long way toward explaining why American students score so poorly in science compared with students in much of the rest of the world (Shanghai, for example). Science education is failing us. (Note to the governor: Google and Intel are looking at graduates from legitimate colleges, not setting up recruiting desks outside revival-meeting tents.)
In the foreseeable future, it'll be us Americans sitting around a meagre campfire gnawing raw meat (probably rat meat, or maybe just McBurgers), while people in other countries who value real education will be dining on caviar... and smiling patronizingly at our plight.
Wednesday, December 8, 2010
An aspirin a day keeps the doctor away?
A paper published online in The Lancet on Dec 5 reports that regular aspirin use for at least 5 years reduces the risk of dying from cancer by about 20% over the following 20 years. This is particularly true for gastrointestinal tumors, with reductions in risk for some specific cancers of 40% or more. Since lifetime risk of cancer is about 40% in the developed world, aspirin use could have a significant effect on cancer morbidity and mortality.
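To put those relative figures on an absolute footing, here's a quick back-of-the-envelope sketch -- ours, not the paper's -- that naively applies the quoted 20% relative reduction to the quoted 40% lifetime risk. It's only illustrative; the real effect on any individual's risk would depend on when aspirin use starts and which cancers are involved.

```python
# Illustrative arithmetic only; the inputs are the figures quoted above.
baseline_lifetime_risk = 0.40   # quoted lifetime risk of cancer in the developed world
relative_reduction = 0.20       # quoted relative reduction with regular aspirin use

risk_with_aspirin = baseline_lifetime_risk * (1 - relative_reduction)
absolute_reduction = baseline_lifetime_risk - risk_with_aspirin
number_needed_to_treat = 1 / absolute_reduction

print(f"risk with aspirin:  {risk_with_aspirin:.2f}")                    # 0.32
print(f"absolute reduction: {absolute_reduction:.2f}")                   # 0.08, i.e. 8 points
print(f"people treated per case avoided: {number_needed_to_treat:.1f}")  # 12.5
```

In other words, even a modest-sounding relative reduction amounts to a large absolute one when the baseline risk is as high as 40%.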
Writing about this study, the BBC reports:
For individual cancers the reduction was about 40% for bowel cancer, 30% for lung cancer, 10% for prostate cancer and 60% for oesophageal cancer.
These results are from a meta-analysis of 3 large UK trials of approximately 25,000 adults, originally randomly assigned to daily aspirin or a control (placebo, another antiplatelet agent, or nothing) to assess the effects of aspirin on vascular events (heart attack or stroke), and followed for 4 to 20 years. Each study collected statistics on deaths from cancer as well as cancer incidence.
The numbers for some cancers were too small to allow estimation of risk reduction, if indeed it occurred. How aspirin reduces risk is not actually known -- it either reduces incidence or the rate of growth of tumors -- but animal studies suggest the effect is "mediated at least in part by inhibition of the cyclo-oxygenase (COX) enzymes and reduced production of prostaglandins and other inflammatory mediators", according to the BBC.
Now, these results are interesting, not least because they were found serendipitously, in the course of looking for a possible effect of a daily dose of acetylsalicylic acid (aspirin) on vascular events, via reduced blood clotting. And one or more of the investigators were following up on earlier findings (perhaps in the same studies) that aspirin may be protective against colorectal cancer. Because environmental factors are correlated and highly variable, whether aspirin on its own would have persistent effects as lifestyle environments change is an open question. People who have taken aspirin may differ in many unmeasured ways from others. But let's take this study as given.
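To make that worry about unmeasured differences concrete, here's a minimal toy simulation -- entirely made-up numbers, nothing to do with the actual trials -- in which aspirin does nothing at all, yet an unmeasured trait (call it health-consciousness) makes aspirin takers look protected in purely observational data; random assignment makes the illusion disappear.

```python
# Toy simulation with made-up numbers: aspirin has NO effect here, but an
# unmeasured trait (health-consciousness) both makes people more likely to take
# aspirin and lowers their cancer risk, so observational data mislead us.
import random

random.seed(1)

def simulate(randomized, n=100_000):
    cancer_aspirin = cancer_control = 0
    n_aspirin = n_control = 0
    for _ in range(n):
        health_conscious = random.random() < 0.5            # unmeasured confounder
        if randomized:
            takes_aspirin = random.random() < 0.5            # assignment ignores lifestyle
        else:
            # health-conscious people more often choose aspirin on their own
            takes_aspirin = random.random() < (0.7 if health_conscious else 0.3)
        # cancer risk depends only on health-consciousness, never on aspirin
        risk = 0.10 if health_conscious else 0.20
        cancer = random.random() < risk
        if takes_aspirin:
            n_aspirin += 1
            cancer_aspirin += cancer
        else:
            n_control += 1
            cancer_control += cancer
    return cancer_aspirin / n_aspirin, cancer_control / n_control

print("observational:", simulate(randomized=False))  # aspirin takers look protected
print("randomized:   ", simulate(randomized=True))   # the apparent protection vanishes
```

Randomization is exactly what breaks the link between who takes the pill and who was already at lower risk, which is why the trial design behind these results matters.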
The surprising thing is that this seems an out-of-the-blue finding: why aspirin? One possible explanation suggested by the investigators is that cancerous cells respond to aspirin by undergoing apoptosis -- cellular suicide. Why normal cells would not do so as well is unclear, and whether there is any substantial evidence for this explanation (as opposed to its being just a guess), we can't say. However, by chance we were having dinner with a visiting cell biologist from South Africa, who said this is likely the result of the much higher metabolic rate of tumor cells: aspirin would affect normal cells as well, but at a far lesser rate; that is, the aspirin effect isn't targeting the tumor cells specifically. Aspirin may affect vascular tissue (small blood vessels) in ways that deprive the 'hungrier' tumor cells of the supply they need.
A more important point to us is that we're in the age in which our NIH and other global research supporters seem determined to turn every disease into a genetic disease. Cancer does indeed seem to be a genetic disorder at the cell level -- genes gone bad for some reason lead the cell to misbehave and grow out of normal control. Though many different means to alter gene function or expression in cells may be involved, they seem largely to occur somatically during life. That means that cancer isn't mainly 'genetic' in the sense of its causal mutations being inherited by children from their parents.
Some cancer susceptibility certainly is inherited. But the risk increases associated with most genes currently known to affect inherited cancer risk are far smaller than the 20-40% figures claimed in this aspirin study. This means that cheap aspirin prevention could be worth many times what we would be able to achieve with genotypes and expensive high-tech approaches. So why devote so much funding to searching for 'cancer genes', rather than to figuring out what to do with the genetic variants that we know really do increase risk? We hasten to add that such progress is certainly being made in at least some instances, the notorious BRCA1/2 genes associated with breast and ovarian cancer being a case in point.
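To put rough numbers on that comparison -- all of them hypothetical, tied to no particular gene, cancer, or figure from the study -- consider how few cases even a respectable risk variant accounts for at the population level, compared with a cheap intervention applied to everyone:

```python
# All numbers hypothetical -- for scale only, not from the study or any real gene.
population = 1_000_000
baseline_risk = 0.05               # assumed risk of some common cancer in non-carriers
carrier_fraction = 0.10            # assumed frequency of a modest risk variant
variant_relative_risk = 1.3        # assumed effect size, typical of common variants
aspirin_relative_reduction = 0.20  # lower end of the reductions discussed above

carriers = population * carrier_fraction
# excess cases among carriers: the most that perfect genotype-guided prevention could remove
excess_cases_from_variant = carriers * baseline_risk * (variant_relative_risk - 1)
# cases avoided if everyone simply took aspirin
cases_avoided_by_aspirin = population * baseline_risk * aspirin_relative_reduction

print(f"excess cases due to the variant: {excess_cases_from_variant:,.0f}")  # 1,500
print(f"cases avoided by aspirin:        {cases_avoided_by_aspirin:,.0f}")   # 10,000
```

On these made-up numbers, population-wide aspirin would prevent several times more cases than even perfect use of the genotype information -- which is where the prevention leverage seems to lie.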
More importantly, if we concentrated on those clear-cut cancer risk genes, while letting aspirin prevention take its course for many other common tumors, we would be left with the cancer cases that are more likely to be really genetic -- the ones that occur despite taking aspirin. Maybe that is where genetic research in cancer should go, rather than into huge, open-ended black holes of long-term megagenomics projects.