A BBC report of a new study by sleep researchers suggests that night shift workers have a higher risk of various health problems than we daytime doodlers do: heart attacks, cancers and type 2 diabetes. This is because the expression patterns of many genes track the day-night cycle, and the 'chrono-chaos' of night work disrupts lots of body functions, the story says.
The study, published in the Proceedings of the National Academy of Sciences, found that mistimed sleep caused rhythmic gene expression to fall significantly. Genes affected included those involved in circadian rhythms, the maintenance of our sleep/wake cycles.
One can't be totally surprised, although one might expect that the graveyarders would get used to their inverted cycle and do just fine. One has to wonder whether there is something about who chooses night work, or who has no other option, such that nightshifting is a consequence rather than a cause. In that case, nightshifting would be a confounder of the health outcomes rather than their cause.
The point here is a brief one, which we and many others have repeatedly made. If these kinds of variables are not known or taken into account, or if there isn't enough of the risk factor detectable in the study sample, then attributions of causation for what is measured will be inaccurate or misleading. This is one of the challenges of epidemiological research, including the search for reliable risk factors in the genome.
Then there is the question, related to an earlier point above, of whether any genetic risk factors lead their bearers to seek out night work and hence appear to be associated with some health outcome only indirectly. What about variants in the chrono-genes? Many such questions come to mind.
Inferential chaos?
Maybe, therefore, the chrono-chaos is a different form of informational and inferential disorder: a disorder of incorrectly done studies. As we know, many results of association studies, genetic or otherwise, are not confirmed by attempts to replicate them (and here we're not referring to the notorious failure to report negative results, which exacerbates the problem). We don't know if the 'fault' is in the study design, the claimed finding of the first study, other biases, or just bad statistical luck.
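One way to see why widespread nonreplication should be expected, not shocking, is the standard positive-predictive-value arithmetic: if only a small fraction of tested hypotheses are true, and studies are underpowered, then most "significant" findings are false positives and won't replicate. The sketch below is an illustration with assumed numbers (priors of 1%, 10%, 50% and a power of 0.2), not figures from any of the studies discussed here.

```python
# Positive predictive value (PPV) of a significant finding:
# the probability that a result passing p < alpha reflects a real effect,
# given the prior fraction of true hypotheses and the study's power.
def ppv(prior, power, alpha=0.05):
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# Illustrative (assumed) priors; power of 0.2 is typical of small studies.
for prior in (0.01, 0.1, 0.5):
    print(f"prior {prior:.2f}: PPV {ppv(prior, power=0.2):.2f}")
```

When few of the hypotheses being tested are true, even a clean, honest study design yields a literature dominated by irreproducible positives.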
A piece in Monday's New York Times laments the high fraction of scientific results that are not replicable. This topic has not gone unnoticed; we've written about different reasons for nonreplicability over the years ourselves. The degree of confidence in each report as it comes out is thus surprising, unless one thinks in terms of careerism, a 24/7 news cycle and so on.
Wednesday, January 22, 2014
Monday, August 23, 2010
The inconstancy of life...or is it just of our findings? Irreproducible results
By Ken Weiss
We are awash in inconsistent findings from the kinds of research often done in epidemiology, genetics, and even evolutionary biology. One study leads to a given assertion--some risk factor or selective agent causes some outcome--but the excited follow-up study fails to confirm it. Today Twinkies are good for you; tomorrow they're nearly lethal!
We post about this regularly, and it can be found almost every day in the popular science news and even in the scientific literature itself. GWAS are of course a very good example that we mention frequently.
These results are basically statistical ones resulting from various kinds of sampling. If they are consistent in anything, it's their inconsistency. And therein lies a serious challenge to observational science (including evolutionary biology).
An important part of the problem is that when effects are small (assuming they're real), there's a substantial probability that the excess risk they're responsible for won't be detected in a study sample. Study samples match cases and controls, or make some similar kind of comparison, and a minor cause will be found about as often in both groups; by chance it may even turn up more often in the controls, or not often enough more in the cases to pass a test of statistical significance.
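This is easy to demonstrate by simulation. The sketch below assumes a made-up risk factor carried by 10% of controls with a modest odds ratio of 1.2, and repeats a 500-vs-500 case-control comparison many times; all the parameter values are illustrative assumptions, not drawn from any real study.

```python
import math
import random

def simulate_study(n=500, p_control=0.10, odds_ratio=1.2):
    """Count exposed cases and exposed controls in one simulated study."""
    odds_ctrl = p_control / (1 - p_control)
    odds_case = odds_ratio * odds_ctrl          # small true effect
    p_case = odds_case / (1 + odds_case)
    cases = sum(random.random() < p_case for _ in range(n))
    controls = sum(random.random() < p_control for _ in range(n))
    return cases, controls

def significant(cases, controls, n=500):
    """Two-proportion z-test at the conventional 0.05 level."""
    p1, p2 = cases / n, controls / n
    p = (cases + controls) / (2 * n)            # pooled proportion
    se = math.sqrt(2 * p * (1 - p) / n)
    return se > 0 and abs(p1 - p2) / se > 1.96

random.seed(1)
studies = [simulate_study() for _ in range(2000)]
power = sum(significant(c, k) for c, k in studies) / len(studies)
reversed_sign = sum(c < k for c, k in studies) / len(studies)
print(f"fraction of studies reaching significance: {power:.2f}")
print(f"fraction where exposure was commoner in controls: {reversed_sign:.2f}")
```

Most of these simulated studies of a perfectly real (but small) effect come up empty-handed, and a nontrivial fraction even point the wrong way, which is exactly the pattern of conflicting reports described above.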
A second problem is complexity and confounding. When many variables are responsible for a given outcome, the controls and cases may differ in ways not actually measured, so that the net effect of the risk factor under test may be swamped by these other factors.
Finally, the putative risk factor may have appeared on someone's radar by chance or by fluke or by a hyperactive imagination, a prejudicial bias, or a Freudian nightmare. We tend to see bad things all around us, and since we don't want anything bad at all of any kind ever, and we have a huge and hungry epidemiology industry, we're bound to test countless things. Puritanism may lead us inadvertently to assume that if it's fun it must be bad for you. Yet, negative findings aren't reported as often as positive ones, and that leads to biased reporting: the flukes that turn out positive get the ink.
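The "flukes get the ink" point can also be made numerically: if an industry tests enough factors with no real effect, the significance threshold itself guarantees a steady supply of publishable positives. The sketch below is a toy illustration with assumed numbers (1000 tests of null effects, 200 subjects per group), using a normal-approximation two-sample test.

```python
import math
import random

def p_value_null(n=200):
    """p-value for comparing two groups drawn from the SAME distribution,
    i.e. a test of a factor with no real effect at all."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_a, mean_b = sum(a) / n, sum(b) / n
    pooled_var = (sum((x - mean_a) ** 2 for x in a)
                  + sum((x - mean_b) ** 2 for x in b)) / (2 * n - 2)
    z = (mean_a - mean_b) / math.sqrt(2 * pooled_var / n)
    # two-sided p-value from the normal approximation
    return 1 - math.erf(abs(z) / math.sqrt(2))

random.seed(2)
tested = 1000
published = sum(p_value_null() < 0.05 for _ in range(tested))
print(f"{published} 'positive' findings out of {tested} tests of nothing")
```

Roughly one in twenty of these pure-noise comparisons passes the 0.05 bar; if only those are written up, the literature fills with effects that were never there.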
We published a paper a while ago in which we listed a number of inconsistent findings. Ken has been told that the existence of this list has made its way into a book, and consequently he's gotten requests for the list. So, we thought we'd post it here. It's out of date now, and we could update it with a lot more examples, but we're sure you can think of plenty on your own.
Wishful thinking and legitimate hopes for knowledge lead us to tend to believe things that are far more tentative than they may appear on the surface. It's only natural--but it's not good science. It's a major problem that we face in relating science to society today.
Table of irreproducible results?
- Hormone replacement therapy and heart disease
- Hormone replacement therapy and cancer
- Stress and stomach ulcers
- Annual physical checkups and disease prevention
- Behavioural disorders and their cause
- Diagnostic mammography and cancer prevention
- Breast self-exam and cancer prevention
- Echinacea and colds
- Vitamin C and colds
- Baby aspirin and heart disease prevention
- Dietary salt and hypertension
- Dietary fat and heart disease
- Dietary calcium and bone strength
- Obesity and disease
- Dietary fibre and colon cancer
- The food pyramid and nutrient RDAs
- Cholesterol and heart disease
- Homocysteine and heart disease
- Inflammation and heart disease
- Olive oil and breast cancer
- Fidgeting and obesity
- Sun and cancer
- Mercury and autism
- Obstetric practice and schizophrenia
- Mothering patterns and schizophrenia
- Anything else and schizophrenia
- Red wine (but not white, and not grape juice) and heart disease
- Syphilis and genes
- Mothering patterns and autism
- Breast feeding and asthma
- Bottle feeding and asthma
- Anything and asthma
- Power transformers and leukaemia
- Nuclear power plants and leukaemia
- Cell phones and brain tumours
- Vitamin antioxidants and cancer, aging
- HMOs and reduced health care cost
- HMOs and healthier Americans
- Genes and you name it!