It looks like a food-fight at the Precision Corral! Maybe the Big Data era is over! That's because what we really seem to need (of course) is even bigger GWAS or other sorts of enumerative (or 'EnumerOmics') studies, because then (and only then) will we really realize how complex traits are caused, so that we can produce 'precision' genomic medicine to cure all that ails us. After all, there is no such thing as enough 'data' or a big (and open-ended) enough study. Of course, because so much knowledge....er, money....is at stake, such a food-fight is not just children in a sandbox, but purported adults, scientists even, wanting more money from you, the taxpayer (what else?). The contest will never end on its own. It will have to be ended from the outside, in one way or another, because it is predatory: it takes resources away from what might be focused, limited, but actually successful problem-solving research.
The idea that we need larger and larger GWAS studies, not to mention almost any other kind of 'omics enumerative study, reflects the deeper idea that we have no idea what to do with what we've got. The easiest word to say is "more", because that keeps the fiscal flood gates open. Just as preachers keep the plate full by promising redemption in the future--a future that, like an oasis to desert trekkers, can be a mirage never reached--scientists are modern preachers who've learned the tricks of the trade. And, of course, since each group wants its flood gates to stay wide open, it must resist any even faint suggestion that somebody else's gates might open wider.
There is a kind of desperate defense, as well as food fight, over the situation. This, at least, is one way to view a recent exchange. Boyle et al. (Cell 169(7):1177-86, 2017**) assert that a few 'core' genes, perhaps carrying rare alleles, are mainly responsible for complex diseases, while lesser, often indirect or incidental genes scattered across the genome provide other pathways to affect a trait and are what GWAS detect (the 'omnigenic' model). If a focus on this model were to take hold, it might threaten the gravy train of more traditional, more mindless Big Data chasing. As a plea to avoid that, Wray et al.'s falsely polite spitball in return (Cell 173:1573-80, 2018**) urges that things really are spread all over the genome, differently so in everyone, and thus, of course, that the really true answer is some statistical prediction method--after we have more and even larger studies.
Could it be, possibly, that this is at its root merely a defense of large statistical databases and Big Data per se, expressed as if it were a legitimate debate about biological causation? Could it be that for vested interests, if you have a well-funded hammer everything can be presented as if it were a nail (or, rather, a bucket's worth of nails, scattered all over the place)?
Am I being snide here?
Yes, of course. I'm not the Ultimate Authority to adjudicate about who's right, or what metric to use, or how many genome sites, in which individuals, can dance on the head of the same 'omics trait. But I'm not just being snide. One reason is that both the Boyle and Wray papers are right, as I'll explain.
The arguments seem in essence to assert that complex traits are due either to many genetic variants strewn across the genome, or to a few rare larger-effect alleles here and there complemented by nearby variants that may involve indirect pathways to the 'main' genes, and that these are scattered across the genome ('omnigenic'). Or that we can tinker with GWAS results and various technical measurements from them to get the real truth?
We are chasing our tails these days in an endless-seeming circle to see who can do the biggest and most detailed enumerative study, to find the most and tiniest of effects, with the most open-ended largesse, while Rome burns. Rome, here, means the victims of the many diseases that might be studied, with actual positive therapeutic results, by more focused, if smaller, studies. Or, in many cases, by a real effort at revealing and ameliorating the lifestyle exposures that typically, one might say overwhelmingly, are responsible for common diseases.
If, sadly, it were to turn out that there is no more integrative way, other than add-'em-up, by which genetic variants cause or predispose to disease, then at least we should know that and spend our research resources elsewhere, where they might do good for someone other than universities. I actually happen to think that life is more integratively orderly than its effects typically being enumeratively additive, and that more thoughtful approaches, indeed reflecting findings of the decades of GWAS data, might lead to better understanding of complex traits. But this seemingly can't be achieved by just sampling extensively enough to estimate 'interactions'. The interactions may, and I think probably, have higher-level structure that can be addressed in other ways.
But if not, if these traits are as they seem, and there is no such simplifying understanding to be had, then let's come clean to the public and invest our resources in other ways to improve our lives before these additive trivia add up to our ends when those supporting the work tire of exaggerated promises.
Our scientific system, which we collectively let grow like mushrooms because it was good for our self-interests, puts us in a situation where we must sing for our supper (often literally, if investigators' salaries depend on grants). No one can be surprised at the cacophony of top-of-the-voice arias ("Me-me-meeeee!"). Human systems can't be perfect, but they can be perfected. At some point, perhaps we'll start doing that. If it happens, it will only partly reflect the particular scientific issues at stake, because it's mainly about the underlying system itself.
**NOTE: We provide links to sources, but, yep, they are paywalled --unless you just want to see the abstract or have access to an academic library. If you have the looney idea that as a taxpayer you have already paid for this research so private selling of its results should be illegal--sorry!--that's not our society.
Thursday, October 18, 2018
When is a consistent account in science good enough?
We often want our accounts in science to be consistent with the facts. Even if we can't explain all the current facts, we can always hope to say, truthfully, that our knowledge is imperfect but our current theory is at least largely true....or something close to that....until some new 'paradigm' replaces it.
It is also only natural to sneer at our forebears' primitive ideas, of which we, naturally, now know much better. Flat earth? Garden of Eden? Phlebotomy? Phlogiston? Four humors? Prester John, the mysterious Eastern Emperor who will come to our rescue? I mean, really! Who could ever have believed such nonsense?
In fact, leaders among our forebears accepted these and much else like it, took them as real, and sought them for solace from life's cares--not just as promises (as with religious figures) but as earthly answers. Or, to seem impressively knowledgeable, they found arcane ways to say "I dunno" without admitting it. And, similarly, many used ad hoc 'explanations' for personal gain--as self-proclaimed gurus, promisers of relief from life's sorrows or medical woes (usually, if you cross their palms with silver first).
Even in my lifetime in science, I've seen forced after-the-fact 'explanations' of facts, and the way a genuine new insight can show how wrong those explanations were, because the new insight accounts for them more naturally or in terms of some other new facts, forces, or ideas. Continental drift was one that had just come along in my graduate school days. Evolution, relativity, and quantum mechanics are archetypes of really new ideas that transformed how our forebears had explained what is now our field of endeavor.
Such lore, and our broader lionizing of leading political, artistic or other similarly transformative figures, organizes how we think. In many ways it gives us a mythology, or ethnology, that leads us to order success into a hierarchy of brilliant insights. This, in turn, and in our careerist society, provides an image to yearn for, a paradigm to justify our jobs, indeed our lives--to make them meaningful, make them important in some cosmic sense, and really worth living.
Indeed, even ordinary figures from our parents, to the police, generals, teachers, and politicians have various levels of aura as idols or savior figures, who provide comforting answers to life's discomfiting questions. It is natural for those burdened by worrisome questions to seek soothing answers.
But of course, all is temporary (unless you believe in eternal heavenly bliss). Even if we truly believe we've made transformative discoveries or something like that during our lives, we know all is eventually dust. In the bluntest possible sense, we know that the Earth will some day destruct and all our atoms scatter to form other cosmic structures.
But we live here and now, and perhaps because we know all is temporary, many want to get theirs now, and we all must get at least some now--a salary to put food on the table at the very least. And in an imperfect and sometimes frightening world, we want the comfort of experts who promise relief from life's material ills as much as preachers promise ultimate relief. This is the mystique often given to, or taken by, medical professionals and other authority figures. This is what 'precision genomic medicine' was designed, consciously or otherwise, to serve.
And we are in the age of science, the one True field (we seem to claim) that delivers only objectively true goods; but are we really very different from those in similar positions of other sorts of lore? Is 'omics any different from other omnibus beliefs-du-jour? Or do today's various 'omical incantations and promises of perfection (called 'precision') reveal that we are, after all, even in the age of science, only human and not much different from our typically patronized benighted forebears?
Suppose we acknowledge that the latter is, at least to a considerable extent, part of our truth. Is there a way that we can better use, or better allocate, resources to make them more objectively dedicated to solving the actually soluble problems of life--for the public everyday good, and perhaps less used, as from past to today, to gild the thrones of those making the promises of eternal bliss?
Or does sociology, of science or any other aspect of human life, tell us that this is, simply, the way things are?
Wednesday, October 17, 2018
The maelstrom of science publishing: once you've read it, when should you shred it?
There is so much being published in the science literature--a veritable tsunami of results. New journals are being started almost monthly, it seems, and mainly or only by for-profit companies. There seems to be a Malthusian growth of the number of scientists, which has certainly produced a genuine explosion of research and knowledge, but the intense pressure on scientists to publish has perhaps changed the relative value of every paper.
And as I look at the ancient papers (that is, ones from 2016-17) that I've saved in my Must-Read folder, I see all sorts of things that, if they had actually been widely read, much less heeded, would mean that many papers being published today might not seem so original. At least, new work might better reflect what we already know--or should know if we cared about or read that ancient literature.
At least I think that, satire aside, in the rush to publish what's truly new, as well as for professional score-counting and so on, and with the proliferating plethora of journals, the past is no longer prologue (sorry, Shakespeare!) as it once was and, one can argue should still be. The past is just the past; it doesn't seem to pay to recognize, much less to heed it, except for strategic citation-in-passing reasons and because bibliography software can be used to winnow out citable papers so that reviewers of papers or grant applications won't be negative because their work wasn't cited. You can judge for yourself whether this is being realistic or too cynical (perhaps both)!
The flux of science publishing is enormous for many reasons. Not least is the expansion in the number of scientists. But this is exacerbated by careerist score-counting criteria that have been growing like the proverbial Topsy in recent decades: the drive to get grants, big and bigger, long and longer. Often in biomedical sciences, at least, grants must include investigator salaries, so there is massive self-interest in enumerable 'productivity'. The journals proliferate to fill this market, and of course to fill the coffers of the publishers' self-interest. Too cynical?
Over the years, in part to deflate Old Boy networks, 'objective' criteria have come to include, besides grants garnered, a faculty member's number of papers, the ranking of the journals they're in, citation counts, and other 'impact factor' measures. This grew in some ways also to feed the growing marketeering by vendors, including those who provide score-counting tools, and by university bureaucracies. More generally, it reflects the way middle-class life, the life most of us now lead, has become--attempts to earn status, praise, wealth, and so on by something measurable and therefore ostensibly objective. Too cynical?
Indeed, it is now common for graduate students--or even undergrads--to attend careerism seminars: instruction in how to get published, how to get funded, how to work the System. This may be good in a sense, or at least realistic, even if it was not so when, long ago, I was a graduate student. It does, however, put strategizing rather than science up front, as a first-year learning priority. One wonders how much time is lost that, in those bad old days, was spent thinking and learning about the science itself. We were, for example, to spend our 2-year Master's program learning our field, only then to get into a lab and do original work, which was what a PhD was about. It is fair to ask whether this is just a change in our means of being and doing, without effect on the science itself, or whether careerism is displacing, or even replacing, really creative science. When is objection to change nothing more than nostalgic cynicism?
Is science more seriously 'productive' than it used to be?
Science journals have always been characterized largely by the minutiae they publish, because (besides old boy-ism) real, meaty, important results are hard to come by. Most observation in the past, and experiment these days, yields little more than curios. You can see this by browsing decades-old volumes even of the major science journals. The reports may be factually correct, but of minimal import. Even though science has become a big industry rather than the idle rich's curiosity, most science publishing now, as in the past, might more or less still be vanity publishing. Yet, as science has become more of a profession, there are important advances, so it is not clear whether science is now more splash relative to substance than it was in the past.
So, even if science has become an institutionalized, established, middle-class industry, and most of us will go down and out, basically unknown in the history of our fields, that has probably always been the case. Any other view probably is mainly retrospective selective bias: we read biographies of our forebears, making them seem few and far between, and all substantial heroes; but what we are reading is about those forebears who really did make a difference. The odd beetle collector is lost to history (except maybe to historians, who themselves may be making their livings on arcane minutiae). So if that's just reality, there is no need to sneer cynically at it.
More time and energy are taken up playing today's game than was the case, or was necessary, in the past--at least I think that is pretty clear, if impossible to prove. Even within the chaff-cloud, the amount of lasting knowledge produced per year does seem to be much greater than it used to be. That seems real, but it reveals another reality. We can only deal with so much. With countless papers published weekly, indeed many of them reviews (so we don't have to bother reading the primary papers), overload is quick and can be overwhelming.
That may be cynical, but it's also a reality. My Must-Read folder on my computer is simply over-stuffed, with perhaps a hundred or more papers that I 'Saved' every year. When I went to try to clean my directory this morning, I was overwhelmed: what papers before, say, 2015 are still trustworthy, as reports or even as reviews of then-recent work? Can one even take reviews seriously, or cite them or past primary papers without revealing one's out-of-dateness? New work obviously can obsolesce prior reviews. Yet reviews make the flood of prior work at least partially manageable. But would it be safer just to Google the subject if it might affect one's work today? It is, at least, not just cynicism to ask.
Maybe, to be safe, given this situation, there are two solutions:
1. Just Google the subject and get the most recent papers and reviews;
2. There should be software that detects and automatically shreds papers in a Science Download directory, that haven't had any measurable impact in, say, 5 or (to be generous) 10 years. We already have sites like Reddit, whose contents may not have a doomsday eraser. But in science, to have mercy on our minds and our hard discs, what we need is Shred-it!
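For the record, the imagined Shred-it! is hardly science fiction. Here is a whimsical sketch in Python (the folder name and the 5-year cutoff are my inventions, and a file's modification time is, of course, a poor stand-in for 'measurable impact'); it only flags candidates rather than actually shredding anything:

```python
from pathlib import Path
import time

# A whimsical sketch of the imagined 'Shred-it!' tool: flag (not actually
# delete!) anything in a downloads folder untouched for, say, 5 years.
# The folder name is hypothetical.

FOLDER = Path.home() / "Science_Downloads"   # hypothetical directory
YEARS_OF_GRACE = 5
cutoff = time.time() - YEARS_OF_GRACE * 365.25 * 24 * 3600

if FOLDER.exists():
    for paper in sorted(FOLDER.glob("*.pdf")):
        if paper.stat().st_mtime < cutoff:
            print(f"Candidate for the shredder: {paper.name}")
```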
Tuesday, October 16, 2018
Where has all the thinking gone....long time passing?
Where did we get the idea that our entire nature, not just our embryological development but everything else, was pre-programmed by our genome? After all, the very essence of Homo sapiens, compared to all other species, is that we use culture--language, tools, etc.--to do our business rather than just our physical biology. In a serious sense, we evolved to be free of our bodies: our genes made us freer from our genes than most if not all other species! And we evolved to live long enough to learn--language, technology, etc.--in order to live our thus-long lives.
Yet isn't an assumption of pre-programming the only assumption by which anyone could legitimately promise 'precision' genomic medicine? Of course, Mendel's work, adopted by human geneticists over a century ago, allowed great progress in understanding how genes lead at least to the simpler of our traits, those with discrete (yes/no) manifestations--traits that do include many diseases that really, perhaps surprisingly, do behave in Mendelian fashion, and for which concepts like dominance and recessiveness have been applied and, sometimes, at least approximately hold up to closer scrutiny.
Even 100 years ago, agricultural and other geneticists who could do experiments, largely confirmed the extension of Mendel to continuously varying traits, like blood pressure or height. They reasoned that many genes (whatever they were, which was unknown at the time) contributed individually small effects. If each gene had two states in the usual Aa/AA/aa classroom example sense, but there were countless such genes, their joint action could approximate continuously varying traits whose measure was, say, the number of A alleles in an individual. This view was also consistent with the observed correlation of trait measure with kinship-degree among relatives. This history has been thoroughly documented. But there are some bits, important bits, missing, especially when it comes to the fervor for Big Data 'omics analysis of human diseases and other traits. In essence, we are still, a century later, conceptual prisoners of Mendel.
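As a toy illustration of that century-old reasoning (my own sketch, not drawn from any particular source; the number of loci, the allele frequency, and the sample size are arbitrary), summing many small, two-state Mendelian contributions yields a trait distribution that looks continuous and roughly bell-shaped:

```python
import random

# Toy version of the classical additive model: many loci, each with two
# alleles ('A' contributes 1, 'a' contributes 0), and the "trait" measured
# simply as the total count of 'A' alleles an individual carries.

N_LOCI = 1000       # "countless" contributing genes
FREQ_A = 0.5        # frequency of the 'A' allele at each locus
N_PEOPLE = 10_000

def trait_value(n_loci=N_LOCI, freq_a=FREQ_A):
    """Sum of 'A' alleles over two copies of each locus."""
    return sum(
        (random.random() < freq_a) + (random.random() < freq_a)
        for _ in range(n_loci)
    )

traits = [trait_value() for _ in range(N_PEOPLE)]

# With many small, independent contributions, the summed trait is
# approximately Gaussian (central limit theorem)--which is how the early
# quantitative geneticists reconciled discrete Mendelian inheritance with
# continuously varying traits like height or blood pressure.
mean = sum(traits) / len(traits)
var = sum((t - mean) ** 2 for t in traits) / (len(traits) - 1)
print(f"mean ~ {mean:.1f} (expected {2 * N_LOCI * FREQ_A:.0f}), "
      f"variance ~ {var:.1f} (expected {2 * N_LOCI * FREQ_A * (1 - FREQ_A):.0f})")
```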
'Omics over the top: key questions generally ignored
Let us take GWAS (genomewide association studies) on their face value. GWAS find countless 'hits', sites of whatever sort across the genome whose variation affects variation in WhateverTrait you choose to map (everything simply must be 'genomic' or some other 'omic, no?). WhateverTrait varies because every subject in your study has a different combination of contributing alleles. Somewhat resembling classical Mendelian recessiveness, contributing alleles are found in cases as well as controls (or across the measured range of quantitative traits like stature or blood pressure), where the measured trait reflects how many A's one has: WhateverTrait is essentially the sum of A's in 'cases', which may be interpreted as a risk--some sort of 'probability' rather than certainty--of having been affected or of having the measured trait value.
We usually treat risk as a 'probability,' a single value, p, that applies to everyone with the same genotype. Here, of course, no two subjects have exactly the same genotype, so some sort of aggregate risk score, adding up each person's 'hits', is assigned a p. This, however, tacitly assumes that each site contributes some fixed risk or 'probability' of affection. But this treats these values as if they were essential to the site, each thus acting as a parameter of risk. That is, sites are treated as a kind of fixed value or, one might say, 'force' relative to the trait measure in question.
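A minimal sketch of what such an aggregate score looks like in practice, under exactly the assumptions being questioned here (the site names, per-site effect sizes, intercept, and the logistic conversion of the summed score into a 'probability' are hypothetical illustrations, not any particular study's values):

```python
import math

# Hypothetical per-site effect sizes, as if each mapped site carried a
# fixed, essential contribution to risk--the assumption at issue in the text.
EFFECT_SIZES = {"site_1": 0.12, "site_2": 0.05, "site_3": 0.30, "site_4": 0.02}
BASELINE = -2.0  # hypothetical intercept on the log-odds scale

def risk_probability(allele_counts):
    """Aggregate score = weighted sum of risk-allele counts (0, 1, or 2 per
    site), converted to a 'probability' by a logistic transform."""
    score = BASELINE + sum(
        beta * allele_counts.get(site, 0) for site, beta in EFFECT_SIZES.items()
    )
    return 1.0 / (1.0 + math.exp(-score))

# Two people with different combinations of 'hits' each get a p, treated
# as if it were a property of their genotype alone.
print(risk_probability({"site_1": 2, "site_3": 1}))   # ~0.19
print(risk_probability({"site_2": 1, "site_4": 2}))   # ~0.13
```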
One obvious and serious issue is that these are necessarily estimated from past data, that is, by induction from samples. Not only is there sampling variation that usually is only crudely estimated by some standard statistical variation-related measure, but we know that the picture will be at least somewhat different in any other sample we might have chosen, not to mention other populations; and those who are actually candid about what they are doing know very well that the same people living in a different place or time would have different risks for the same trait.
No study is perfect, so we use some conveniently assumed, well-behaved regression/correction adjustments to account for the statistical 'noise' due to factors like age, sex, and unmeasured environmental effects. Much worse than these issues, there are clearly deeper sources of imprecision, and the obvious major one--taboo even to think about, much less to mention--is that relevant future factors (mutations, environments, lifestyles) are unknowable, even in principle. So what we really do, are forced to do, is extend what the past was like to the assumed future. But besides this, we don't count somatic changes (mutations arising in body tissues during life, which were not inherited), because they'd mess up our assertions of 'precision', and we can't measure them well in any case (so just shut one's eyes and pretend the ghost isn't in the house!).
All of these together mean that we are estimating risks from imperfect existing samples and past life-experience, but treating them as underlying parameters so that we can extend them to future samples. What that does is equate induction with deduction, assuming the past is rigorously parametric and will be the same in the future; but this is simply scientifically and epistemologically wrong, no matter how inconvenient it is to acknowledge this. Mutations, genotypes, and environments of the future are simply unpredictable, even in principle.
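The point about induction can be made with a toy simulation (entirely invented frequencies and risks): give the same genotype the same biological effect in two hypothetical populations that differ only in an unmeasured lifestyle exposure, and the 'parameter' estimated from one sample simply does not carry over to the other:

```python
import random

# Toy illustration of why effect estimates are inductions from a particular
# sample rather than fixed parameters of the genotype itself.

def estimated_risks(n, exposure_rate, seed):
    rng = random.Random(seed)
    carriers, noncarriers = [], []
    for _ in range(n):
        carrier = rng.random() < 0.3            # carries the 'risk' allele
        exposed = rng.random() < exposure_rate  # unmeasured lifestyle factor
        p = 0.02 + (0.10 if (carrier and exposed) else 0.0)
        affected = rng.random() < p
        (carriers if carrier else noncarriers).append(affected)
    return sum(carriers) / len(carriers), sum(noncarriers) / len(noncarriers)

# Same genotype, two environments: the risk one would estimate (and then
# treat as a 'parameter') for carriers differs substantially.
print("low-exposure population :", estimated_risks(50_000, exposure_rate=0.1, seed=1))
print("high-exposure population:", estimated_risks(50_000, exposure_rate=0.8, seed=2))
```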
None of this is a secret, or new discovery, in any way. What it is, is inconvenient truth. These things alone, without even badgering investigators about environmental factors (which, we know very well, typically predominate), prevent all the NIH's precision promises from being accurate ('precise'), or even accurate to a knowable degree. Yet this 'precision' sloganeering is being, sheepishly, aped all over the country by all sorts of groups who don't think for themselves and/or who go along lest they get left off the funding gravy train. This is the 'omics fad. If you think I am being too cynical, just look at what's being said, done, published, and claimed.
These are, to me, deep flaws in the way the GWAS and other 'omics industries, very well-heeled, are operating these days, to pick the public's pocket (pharma may, slowly, be awakening-- Lancet editorial, "UK life science research: time to burst the biomedical bubble," Lancet 392:187, 2018). But scientists need jobs and salaries, and if we put people in a position where they have to sing in this way for their supper, what else can you expect of them?
Unfortunately, there are much more serious problems with the science, and they have to do with the point-cause thinking on which all of this is based.
Even a point-cause must act through some process
By far most of the traits, disease or otherwise, that are being GWAS'ed and 'omicked these days, at substantial public expense, are treated as if the mapped 'causes' are point causes. If there are n causes, and a person has an unlucky set m out of many possible sets, one adds 'em up and predicts that person will have the target trait. And there is much that is ignored, assumed, or wishfully hidden in this 'will'. It is not clear how many authors treat it, tacitly, as a probability vs a certainty, because no two people in a sample have the same genotype and all we know is that they are 'affected' or 'unaffected'.
The genomics industry promises, essentially, that from conception onward, your DNA sequence will predict your diseases, even if only in the form of some 'risk'; the latter is usually a probability and despite the guise of 'precision' it can, of course, be adjusted as we learn more. For example, it must be adjusted for age, and usually other variables. Thus, we need ever larger and more and longer-lasting samples. This alone should steer people away from being profiteered by DNA testing companies. But that snipe aside, what does this risk or 'probability' actually mean?
Among other things, those candid enough to admit it know that environmental and lifestyle factors have a role, interacting with the genotype if not, usually, overwhelming it. This means, for example, that the genotype only confers some, often modest, risk probability, with the actual risk much more affected by lifestyle factors, most of which are not measured, not measured with accuracy, or not even yet identified. And usually there is some aspect that relates to age, or some assumption about what 'lifetime' risk means. Whose lifetime?
Aspects of such a 'probability'
There are interesting, longstanding issues about these probabilities, even if we assume they have some kind of meaning. Why do so many important diseases, like cancers, only arise at some advanced age? How can a genomic 'risk' be so delayed and so different among people? Why do mice, with genotypes very similar to ours (which is why we do experiments on them to learn about human disease), live only to about 3, while we live to our 70s and beyond?
Richard Peto raised some of these questions many decades ago. But they were never really addressed, even in an era when NIH et al. were spending much money on 'aging' research, including studies of lifespan. There were generic theories that suggested, from an evolutionary point of view, why some diseases were deferred to later ages (antagonistic pleiotropy), but nobody tried seriously to explain why that was from a molecular/genetic point of view. Why do mice live only about 3 years, anyway? And so on.
These are old questions and very deep ones but they have not been answered and, generally, are conveniently forgotten--because, one might argue, they are inconvenient.
If a GWAS score increases the risk of a disease that has a long-delayed onset pattern, often striking late in life, and highly variable among individuals or over time, what sort of 'cause' is that genotype? What is it that takes decades for the genes to affect the person? There are a number of plausible answers, but they get very little attention, at least in part because that stands in the way of the vested interests of entrenched, too-big-to-kill, faddish Big Data 'research' that demands instant promises to the public it is trephining for support. If the major reason is lifestyle factors, then the very delayed onset should be taken as persuasive evidence that the genotype is, in fact, by itself not a very powerful predictor.
Why would the additive effects of some combination of GWAS hits lead to disease risk? That is, given our complex nature, why would each gene's effects be independent of every other contributor's? In fact, mapping studies usually show evidence that other things, such as interactions, are important--but these are at present almost impossibly complex to understand.
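To make the additivity assumption concrete, here is a toy contrast (invented effect sizes) between a purely additive two-locus model and one with an epistatic interaction; the point is only that summing per-site effects cannot, by construction, capture what the joint genotype does:

```python
# Toy contrast between purely additive per-locus effects and an epistatic
# interaction. Under the additive assumption, the predicted trait is just
# the sum of per-site effects; with an interaction, the joint genotype does
# something that no sum of single-site terms captures.

ADDITIVE_EFFECT = {"locus_A": 1.0, "locus_B": 1.0}

def additive_prediction(genotype):
    return sum(ADDITIVE_EFFECT[locus] * count for locus, count in genotype.items())

def with_interaction(genotype):
    value = additive_prediction(genotype)
    # Hypothetical epistatic term: an extra effect only when risk variants
    # at both loci are present together.
    if genotype.get("locus_A", 0) > 0 and genotype.get("locus_B", 0) > 0:
        value += 3.0
    return value

for g in [{"locus_A": 1, "locus_B": 0},
          {"locus_A": 0, "locus_B": 1},
          {"locus_A": 1, "locus_B": 1}]:
    print(g, " additive:", additive_prediction(g), " with interaction:", with_interaction(g))
```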
Does each combination of genome-wide variants have a separate age-onset pattern, and if not, why not? And if so, how does the age effect work (especially if not due to person-years of exposure to the truly determining factors of lifestyle)? If such factors are at play, how can we really know, since we never see the same genotype twice? How can we assume that the time-relationship with each suspect genetic variant will be similar among samples or in the future? Is the disease due to post-natal somatic mutation, in which case why make predictions based on the purported constitutive genotypes of GWAS samples?
Obviously, if long-delayed onset patterns are due not to genetic factors but to lifestyle exposures interacting with genotypes, then perhaps lifestyle exposures should be the health-related target, not exotic genomic interventions. Of course, the value of genome-based prediction clearly depends on environmental/lifestyle exposures, and the future of these exposures is obviously unknowable (as we clearly do know from seeing how unpredictable past exposures have affected today's disease patterns).
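One way to see how a fixed genotype 'effect' can coexist with onsets concentrated late in life is a toy hazard model in which risk is driven by accumulated person-years of some exposure and the genotype merely scales it (all rates invented for illustration; this is one plausible picture among those alluded to above, not a claim about any real disease):

```python
import random

# Toy hazard model: a modest, fixed genotype 'effect' combined with risk
# that is actually driven by accumulated years of some lifestyle exposure.
# The point is only that onsets land late in life and that the genotype by
# itself predicts rather little about when, or whether, disease occurs.

def age_at_onset(genotype_multiplier, rng, max_age=90):
    exposure_years = 0.0
    for age in range(1, max_age + 1):
        exposure_years += 1.0
        # Yearly hazard grows with accumulated exposure; genotype only scales it.
        hazard = 1e-6 * exposure_years ** 2 * genotype_multiplier
        if rng.random() < hazard:
            return age
    return None  # never affected within max_age

rng = random.Random(42)
for label, mult in [("reference genotype", 1.0), ("'risk' genotype   ", 2.0)]:
    onsets = sorted(a for a in (age_at_onset(mult, rng) for _ in range(20_000)) if a)
    print(f"{label}: {len(onsets)} of 20000 affected, "
          f"median onset age {onsets[len(onsets) // 2]}")
```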
The point here is that our reliance on genotypes is a very convenient way of keeping busy, bringing in the salaries, but not facing up to the much more challenging issues that the easy one (run lots of data through DNA sequencers) can't address. I did not invent these points, and it is hard to believe that at least the more capable and less me-too scientists don't clearly know them, if quietly. Indeed, I know this from direct experience. Yes, scientists are fallible, vain, and we're only human. But of all human endeavors, science should be based on honesty because we have to rely on trust of each other's work.
The scientific problems are profound and not easily solved, and not soluble in a hurry. But much of the problem comes from the funding and careerist system that shackles us. This is the deeper explanation in many ways. The paint on the House of Science is the science itself, but it is the House that supports that paint that is the real problem.
A civically responsible science community, and its governmental supporters, should be freed from the iron chains of relentless Big Data for their survival, and start thinking, seriously, about the questions that their very efforts over the past 20 years, on trait after trait, in population after population, and yes, with Big Data, have clearly revealed.
Monday, October 8, 2018
Evolution, to Engels--and a kind of lesson for us all?
That Beeb program went over many things about Engels that are familiar to anthropologists, among others. But it ended by referring to a work I'd not known of, a partly unfinished book on science called The Dialectics of Nature, which is available online or as a PDF. The latter has an Introduction by JBS Haldane, one of the early 20th century's founding evolutionary geneticists, and a political leftist.
Engels. (One of many versions on the web)
Engels discusses the various sciences as they existed at the time (1883). Haldane points out some errors that were known by his (Haldane's) time, but Engels' book is a surprisingly deep, broad review of the science of his day. I do not know how Engels knew so much science, but apparently he did.
Although Engels never completed it, the book was written only about 25 years after Darwin's Origin of Species, which to Engels was highly relevant to his views on society. But he went much further! He viewed essentially everything, not just human society, as an evolving phenomenon. Though with various errors based on what was known at the time, he recognized astronomical change, geological evolution, and biological evolution as manifestations of the fundamental idea that things cosmic were not Created and thereafter static, as prevailing biblically derived views generally held, but had beginnings, and then changed. Engels applied his ideas to inanimate physical phenomena as they were then understood, as well as to life itself. In essence, his view was that everything is about change, with human society as just another instance.
Engels was looking for what we might call universal 'laws', in this case concerning how systems change. This would be a major challenge, by science, to the theologically based idea that once Created, worldly things were mainly constant. Engels noted that the classic Greeks had had a more 'modern' and correct view of the dynamics of existence than western Europe had developed under the reign of the Church.
Engels' book shows how grand thinking had led to, or could be made consistent with, the social thinking by which Marx and Engels could believe that sociocultural evolution was similarly non-static. If so, they claimed to see how societal dynamics would lead to future states in which the rather cruel, relatively primitive nature of nation states in his time would evolve into a fairer, more egalitarian kind of society. But Dialectics of Nature shows that Engels was thinking very broadly and 'scientifically', in the sense of trying to account for things not just in terms of opinions or wishes, but of natural forces, and the resulting dynamics of change. He wasn't the only one in his time who thought that the idea of an evolutionary process enabled one to predict its outcome--as seemed to be possible in physics and chemistry.
I am no Engels scholar, and I had no idea he was so knowledgeable about science as it stood at his time, nor that the idea of evolutionary change that he and Marx applied to society was, in a sense, based on the finding, in their view, of similar kinds of change in the physical cosmos. This in a sense, conveniently made the extension of the theory to society seem quite logical, or perhaps even obvious, and as noted above, many were speculating in similar ways. Marx and Engels scholars must be aware of this, but when I was exposed to these theories as an anthropology graduate student decades ago, I did not know of this connection between social and physical dynamics and evolution.
These alleged connections or similarities do not make the Marxist conclusions 'true', in the sense of scientific truth. The idea that geology and species evolve may seem similar to the idea that societal structures evolve. But just because two areas have some sort of similarity, or change over time and space, does not mean they have the same causes. Human culture involves the physical aspects of a society's environment, but culture is largely or mainly about human interactions, beliefs, kinship, and so on. There is no necessary physically causal or deep connection between that and species evolution or the growth and erosion of mountain ranges. A planetary orbit, a hula hoop, and an orange are all more or less 'round', but that does not establish connections between them.
At the same time, Engels worked at the height, one might say, of the idea that there were universal 'laws of Nature'. Darwin informally likened evolution to planetary motion, with law-like properties, and in some of his writing (e.g., about barnacles) he seems to have believed in a kind of selective inevitability--some species being, essentially, on the way to a terminal end found in related species (terminal, at least, as Darwin saw them in his time). This may not be as benighted as it may seem. Biologists still debate the question of what would happen if you could 'rewind the tape' of evolution, and start over. Some have argued that you'd get the same result. Others vigorously oppose this sort of belief in predictable destiny.
Given the ambience of science in the 19th century, and the legacy of the 'Enlightenment' period in Europe only a century or two before, it is not surprising that Engels, wanting society also to be constrained by 'laws' or forces, and hence to be predictable, if not subject to inevitable causal effects, would see parallels in the physical world. Many others in that general time period in Europe had similar law-like ideas about societies. It is, at the very least, interesting that Engels tried to make his social ideas as reflective of natural laws as the orbits of planets are.
What about us, today?
It is easy to look back and see what was 'in the air' in some past time, and how it influenced people, even across a spectrum of interest areas. In this case, evolutionary concepts spanned the physical, biological, and social sciences. We can see how very clever, insightful people were influenced by the ambient ideas.
So it's easy to look back and discern common themes, about which each person invoking them thought he was having specific, original insights. But that's us looking back at them. What about us in our own time? How much might we, today, be affected by prevailing views--in scientific or societal affairs--that are 'in the air' but may not be as widely applicable as some argue that they are? How many of our prevailing views, that we of course think of as modern and better than the more primitive ones of the past, are similarly just part of the ambience of our times, that will be viewed with patronizing smiles at our naiveté? Does going with the flow, so to speak, of current tides make us see more deeply than our forebears--and how much is it just that we see things differently?
How can we know?
Saturday, October 6, 2018
And yet it moves....our GWAScopes and Galileo's lesson on reality
In 1633, Galileo Galilei was forced to recant before the Pope his ideas about the movement of the Earth, or else to face the most awful penalty. As I understand the story, he did recant....but after leaving the Cathedral, he stomped his foot on the ground, and declared "And yet it moves!" For various reasons, usually reflecting their own selfish vested interests, the powers that be in human society frequently stifle unwelcome truths, truths that would threaten their privileged well-being. It was nothing new in Galileo's time--and it's still prevalent today.
All human endeavors are in some ways captives of current modes of thinking--world-views, beliefs, power and economic structures, levels of knowledge, and explanatory frameworks. Religions and social systems often, or perhaps typically, constrain thinking. They provide comforting answers and explanations, and people feel threatened by those not adhering, not like us in their views. The rejection of heresy applies far beyond formal religion. Dissenters or non-believers are part of 'them' rather than 'us', a potential threat, and it is thus common if not natural to distrust, exclude, or even persecute them.
At the same time, the world is as the world really is, especially when it comes to physical Nature. And that is the subject of science and scientific knowledge. We are always limited by current knowledge, of course, and history has shown how deeply that can depend on technology, as Galileo's experience with the telescope exemplifies.
When you look through a telescope . . . .
In Galileo's time, it was generally thought, or perhaps 'believed' is the better word, that the cosmos was God's creation as known by biblical authority. It was created in the proverbial Genesis way, and the earth--with us humans on it--was the special center of that creation. The crystal spheres bearing the stars and planets circled around and ennobled us with their divine light. In the west, at least, this was not just the view, it was what had (with few exceptions) seemed right since the ancients.
But knowledge is often, if not always, limited by our senses, and they in turn are limited by our sensory technology. Here, the classic example is the invention of the telescope and, eventually, what that cranky thinker Galileo saw through it. Before his time, we had only our naked eyes to see the sun move, and the stars seemed quite plausibly to be crystal spheres bearing twinkles of light, rotating around us.
If you don't know the story, Wikipedia or many other sources can be consulted. But it was dramatic! Galileo's experience taught science a revolutionary lesson about reality vs myth and, very directly, about the importance of technology in our understanding of the world we live in.
The lesson from Galileo was that when you look through a telescope you are supposed to change your mind about what is out there in Nature. The telescope lets you see what's really there--even if it's not what you wanted to see, or thought you'd see, or would be most convenient for you to see.
From Mendel's eyes to ours
Ever since antiquity, plant and animal breeders empirically knew about inheritance, that is, about the physical similarities between parents and offspring. Choose parents with the most desirable traits, and their offspring will have those traits, at least, so to speak, on average. But how does that work?
Mendel heard lectures in Vienna that gave him some notion of the particulate nature of matter. When, in trying to improve agricultural yields, he noticed discrete differences, he decided to test their nature in pea plants, which he knew well and which were manageable subjects for experiments to understand the Molecular Laws of Life (my phrase, not his).
Analogies are never perfect, but we might say that Mendel's picking discrete, manageable traits was like pre-Newtonians looking at stars but not at what controlled their motion. Mendel got an idea of how parents and offspring could resemble each other in distinct traits. And just as the telescope was the instrument that allowed Galileo to see the cosmos better, and to do more observing than guessing, geneticists got their Galilean equivalent in genomewide mapping (GWAS), which allowed us to do less guessing about inheritance and to see it better. We got our GWAScope!
But what have we done with our new toy? We have been mesmerized by gene-gazing. Like Galileo's contemporaries who, finally accepting that what he saw really was there and not just an artifact of the new instrument, gazed through their telescopes and listed off this and that finding, we are on a grand scale just enumerating, enumerating, and enumerating. We even boast about it. We build our careers on it.
That me-too effort is neither surprising nor unprecedented. But it has also become what Kuhn called 'normal science'. It is butting our heads against a wall. It is doing more and more of the same, without realizing that what we see is what's there, but that we're not explaining it. From early in the 20th century we had quantitative genetics theory--the theory that agricultural breeders have used in formal ways for the past century, making the traditional breeding that had been around since the discovery of agriculture more formalized and empirically rigorous. But we didn't have the direct genetic 'proof' that the theory was correct. Now we do, and we have it in spades.
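To make concrete what that formalism does, here is a minimal sketch, with made-up numbers, of the classic breeder's equation from quantitative genetics, R = h^2 * S, which predicts the response to selection from heritability alone, without naming a single gene:

```python
# A minimal, hypothetical sketch of the breeder's equation, R = h^2 * S.
# Quantitative genetics predicts the response to selection without listing genes;
# the numbers below are invented for illustration.

h2 = 0.4                  # assumed narrow-sense heritability of the trait
population_mean = 100.0   # trait mean before selection
selected_mean = 110.0     # trait mean of the individuals chosen as parents

S = selected_mean - population_mean   # selection differential
R = h2 * S                            # predicted response to selection

print(f"Predicted mean in the offspring generation: {population_mean + R:.1f}")
# -> 104.0: the kind of prediction breeders could, and did, make long before GWAS existed.
```

The point is not the arithmetic but that the prediction never required a gene list, which is exactly the theory our GWAScopes have now confirmed.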
We are spinning wheels and spending wealth on simple gene-gazing. It's time, it's high time, for some new insight to take us beyond what our GWAScopes can see, digesting and understanding what our gene-gazing has clearly shown.
Unfortunately, at present we have an 'omics Establishment that is as entrenched, for reasons we've often discussed here on MT, as the Church was for explanations of Truth in Galileo's time. It is now time for us to go beyond gene-gazing. GWAScopes have given us the insight--but who will have the insight to lead the way?
Galileo: see Wikipedia, "And yet it moves"
Galileo's telescope (imagined). Source: news.nationalgeographic.com
Thursday, October 4, 2018
Processed meat? Really? How to process epidemiological news
So this week's Big Story in health is that processed meat is a risk for breast cancer. A study has been published that finds it so.....so it must be true, right? After all, it's on CNN and in some research report. Well, read even CNN's headliner story and you'll see the caveats, the admissions, softened of course, that the excess risk isn't that great, but, at least, that the past studies have been 'inconsistent'.
Of course, with this sort of 'research' the weak associations with some named risk factors can easily be correlated with who knows how many other behavioral or other factors, and even if researchers tried to winnow them out, it is obvious that it's a guessing game. Too many aspects of our lives are unreported, unknown, or correlated. This is why week after week, it seems, do-this or don't-do-that stories hit the headlines. If you believe them, well, I guess you should stop eating bacon.....until next week when some story will say that bacon prevents some disease or other.
Why breast cancer, by the way? Why not intestinal or many other cancers? Why, if even the current story refers to past results as being 'inconsistent' do we assume this one's right and they, or some of them, were wrong? Could it be that this is because investigators want attention, journalists need news stories, and so on?
Why, by the way, is it always things that are actually pleasurable to eat that end up in these stories? Why is it never cauliflower, or rhubarb, or squash? Why coffee and not hibiscus tea? Could western notions of sin have anything to do with the design of the studies themselves?
But what about, say, protective effects?
Of course, the headlines are always about the nasty diseases to which anything fun, like a juicy bacon sandwich, not to mention alcohol, coffee, cookies, and so on seems to condemn us. This makes for 'news', even if the past studies have been 'inconsistent' and therefore (it seems) we can believe this new one.
However, maybe eating bacon sandwiches has beneficial effects that don't make the headlines. Maybe they protect us from hives or antisocial or even criminal behavior, raise our IQ, or give us fewer toothaches. Who could look for all those things, when they're busy trying to find bad things that bacon sandwiches cause? Have investigators of this sort of behavioral exposure asked whether bacon and, say, beer raise job performance, add to longevity, or (heavens!) improve one's sex life? Are these studies, essentially, about bad outcomes from things we enjoy? Is that, in fact, a subtle, indirect effect of the Protestant ethic or something like that? Of the urge to find bad things in these studies because they're paid for by NIH and done by people in medical schools?
The serious question
There are the pragmatic, self-interested aspects to these stories, and indeed even to the publication of the papers in proper journals. If they disagree with previous work on the purportedly same subject, they get new headlines, when they should perhaps not be published without explicitly addressing the disagreement in real detail, as the main point of the work--rather than the subtle implication that now, finally, these new authors have got it right. Or at least, they should not headline their findings. Or something!
Instead, news sells, and thus we build a legacy of yes/yes/no/maybe/no/yes! studies. These may generally be ignored by our baconophilic society, or they could make lots of people switch to spinach sandwiches, or have many other kinds of effects. The latter is somewhat akin to the quantum mechanical notion that measurement gives only incomplete information but affects what's being measured.
Epidemiological studies of this sort have been funded, at large expense, for decades now, and if there is anything consistent about them, it's that they are not consistent. There must be a reason! Is it really that the previous studies weren't well done? Is it that if you fish for enough items, you'll catch something--big questionnaire studies looking at too many things? Is it changing behaviors in ways not being identified by the studies?
Or, perchance, is it that these investigators need projects to get funded? This sort of yo-yo result is very, very common. There must be some explanation, and that inconsistency itself is likely as fundamental and important as any given study's findings. Maybe bacon-burgers are only bad for you in some cultural environments, and these change in unmeasured ways, so that varying results are not 'inconsistent' at all--maybe it's the expectation that there is one relevant truth that makes inconsistency look like a problem of study design. Maybe the problem is in simplistic thinking about risks.
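The fishing possibility, at least, is easy to illustrate. Here is a minimal, purely hypothetical sketch: a questionnaire study that tests many exposures, none of which actually affects the disease, will still 'find' several associations at p < 0.05 by chance alone.

```python
# A minimal sketch (hypothetical data) of 'fish for enough items and you'll catch something':
# 200 questionnaire exposures with no real effect on disease still yield roughly 10
# 'significant' associations at p < 0.05, any one of which could become next week's headline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_exposures = 2000, 200
exposures = rng.normal(size=(n_subjects, n_exposures))   # reported diets, habits, etc.
disease = rng.binomial(1, 0.1, size=n_subjects)          # outcome independent of them all

false_hits = 0
for j in range(n_exposures):
    cases = exposures[disease == 1, j]
    controls = exposures[disease == 0, j]
    _, p = stats.ttest_ind(cases, controls, equal_var=False)
    if p < 0.05:
        false_hits += 1

print(f"{false_hits} of {n_exposures} null exposures 'associated' with disease at p < 0.05")
```

None of this proves any particular study wrong, of course; it only shows how cheaply 'inconsistent' findings can be manufactured.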
Where do cynical possibilities meet serious epistemological ones, and how do we tell?
Yummy poison!! Source: from the web, at Static.zoonar.com
Wednesday, October 3, 2018
In order to be recognized, you have to be read: an impish truth?
Edgar Allan Poe was an American short story writer, a master of macabre horror--the 3 G's, one might say: Grim, Gruesome, and Ghastly. Eeeeek!! If you don't know Poe, a BBC World Service podcast in the series The Forum (Sept 15, 2018) discusses his life and work. If you haven't yet, you should read him (but not too late at night or in too dark a room!). The Tell-tale Heart, The Murders in the Rue Morgue, The Pit and the Pendulum, and The Cask of Amontillado should be enough to scare the wits out of you! Eeeeek!!
Ah, scare tactics--what a ploy for attention! At a time when not many people were supporting themselves with writing alone, Poe apparently wrote that this going over the top was justified or even necessary if you wanted to make a living as a writer. If you have to sell stories, somebody has to know about them, be intrigued by what they promise, go out and buy them.
Is science also a fantasy horror?
Poe was referring to his use of extreme shock value in literature, stories of the unreal. But a colleague in genetics once boasted that "anything worth saying is worth exaggerating, and worth repeating", and drum-beating essentially the same idea over and over is a common science publishing policy. This scheming attitude seems antithetical to the ideals of science, which should, at the very least, be incompatible with showmanship.
Explaining science and advocating one's view in responsible ways is part of education, and of course the public whose taxes support science has a right to know what scientists do with the money. New ideas may need to be stressed against a recalcitrant public, or even scientific, community. Nonetheless, pandering science to the public as a ploy to get attention or money from them is unworthy. At the very least, it tempts exaggeration or other misrepresentations of what's actually known. We regularly see the evidence of this in terms of outright fraud that is discovered, and also yes-no-yes-again results (does coffee help or hurt you?).
This, I think, reflects a gradual, subtle but, for someone paying attention, substantial dumbing-down of science reporting, even by the mainstream news media--even the covers and news 'n views headlines of the major science journals approach checkout-counter magazines in this respect, in my view. Is this merely crass, superficial pandering for readership and viewership--for subscription sales--or could it reflect a serious degeneration in the quality of education itself, on which our society so heavily relies? Eeeeek!!
In fact, showman scientists aren't new. In a way, Hippocrates (whoever he was, if any single individual) once wrote a defensive article (On the Sacred Disease) in explicit competition for 'control' of the business of treating epilepsy, an effort to maintain that territory for medicine against competition from religion. Centuries later, Galen was apparently well-known for public demonstrations of vivisection and so on, to garner attention and presumably wealth.
Robert Boyle gave traveling demonstrations of his famous air-pump, doing cruel things to animals to show that he created a vacuum. Gall hustled his phrenology theory about skull shape and mental traits. In the age of sail, people returning from expeditions to the far unknown gave lurid reports (thrills for paying audiences) and brought back exotica (dead and stuffed). The captain of the Beagle, the ship on which Darwin sailed, brought live, unstuffed Fuegians back to England for display, among other such examples.
Yes, showman science isn't new. And perhaps because of the various facets of the profit motive (now perhaps especially attending biomedical research) we see what seem to be increasingly common reports of corruption even among prominent senior (not just desperate junior) academic scientists. This presumably results from the irresistible lure of lucre or pressure for attention and prominence. Getting funding and attention means having a career, when promotion, salaries, tenure, and prestige depend on how much rather than on what. Ah, well, human fallibility!
The daily press feeds on, perpetuates (and profits from) simplistic claims of discovery along with breathless announcements that are often basically and culpably exaggerated promises. Universities, hungry for grants, overhead, and attention, are fully in the game. Showboat science isn't new, but I think it has palpably ballooned in recent decades. Among other things, scientists intentionally, with self-interest, routinely sow a sense of urgency. Eeeeek!!
So should there be pressure on scientists to quiet down and stop relentless lobbying in every conceivable way? My personal (probably reactionary!) view is a definite 'yes!': we should discourage, or even somehow penalize showmanship of this sort. The public has a right to know what they're paying for, but we should fund science without forcing it to be such a competitive and entrepreneurial system that must be manipulated by 'going public', by advertising. If we want science to be done--and we should--then we should support it properly.
In a more balanced world, if you're hired as a science professor, the university owes you a salary, a lab, and resources to do what they hired you to do. A professor's job should not depend on being a sales agent for oneself and the university, as it very often is, sometimes rather explicitly today. Eeeeek!!
The imp of the perverse--in science today
One of Poe's stories was The Imp of the Perverse. The narrator remarks upon our apparent perverse drive to do just the opposite of what we think--or know--that we should do.
I won't give any spoilers, since you can enjoy it for yourself. (Eeeeek!!) But I think it has relevance to today's attitudes in science. Science should be--our self-mythology is that it is--a dispassionate search for the truth about Nature. Self-interest, biased perspectives, and other subjective aspects of our nature are to be avoided as much as possible. But the imp of our perverse is that it has become (quoth the raven) ever-more important that science be personally self-serving. It is hard to prevent ourselves, our imp, from blurting out that truth (though it is often acknowledged quietly, in private).
On the good side, careers in science have become more accessible to those not from the societal elite. The downside is that we therefore have to sing for our supper. Darwin and most others of science lore were basically of independent means. They didn't do science as a career, but as a calling.
Of course, as science has become more bureaucratic, bourgeois, and routine, Nature yields where mythology--lore, dogma, and religion--had held forth in the past. So, it is not clear whose interest that imp is serving. That's more than a bit unnerving! Eeeeek!!
Science 'ethics': can they be mainly fictional, too?
Each human society does things in some way, and things do get done. Indeed, having been trained as an anthropologist, perhaps I shouldn't be disturbed or surprised by the crass aspects of science--nor that this predictably includes increasingly frequent actual fraud egged on by the imp of the pressure of self-interest. Eeeeek!!
Our mythology of 'science' is the dispassionate attempt to understand Nature. But maybe that's really what it is: a myth. It is our way of pursuing knowledge, which science, of course, does. And in the process, as predecessors such as those I named above show, gaming science is not new. So isn't this just how human societies are, imperfect because we're imperfect beings? Is there reason to try, at least, to resist the accelerating self-promotion, and to put more resources not just to careers but to the substance of real problems that we ought to try to solve?
Or should we just admire how our scientists have learned to work the system--that we let costly projects become entrenched, train excess research personnel, scare the public about disease, or make glowing false promises to get them to put money in the plate every tax year? In the process, perhaps real solutions to problems are delayed, and we produce many more scientists than there are jobs, because one criterion for a successful lab is its size.
Were he alive and a witness to this situation, Poe might have fun dramatizing how science has become, though wonderful for some, for many, a horrible nightmare: Eeeeek!!
Edgar Allan Poe (1809-49)
The Imp of the Perverse. Drawing by Arthur Rackham (source: Wiki entry on the story)
Tuesday, October 2, 2018
Flirting with old-tyme racism. Is anyone paying attention?
The ability to extract DNA from archeological bone specimens has opened a new area for research to reconstruct the past, but in some senses, this is allowing the field of anthropology to recapitulate its sometimes questionable history. Anthropology has always been the study of groups of people, often characterized categorically, that is, as if their members were all alike, and were quite different from other groups.
There's a fine line between this kind of typological thinking and the hierarchical ranking of groups, a ranking that has often been aided and abetted by the technologies of the day, from phrenology in the 19th century, which could be used to show, for example, that Africans were born to be slaves, and in need of masters, to the use of DNA markers today, which have been interpreted by some to confirm the existence of biological races, and the primacy of genes over environment in the determination of who we are. In a time when social policy is too often based on this kind of categorical thinking, with, for example, spreading belief in the evils of immigration, the inherent right of some to more of society's goods, from education to health care to tax relief, etc., our generation's version of "scientific racism" can land on receptive ears. We cannot assume that the gross evils of the past are gone, and the lessons learned.
There is a long line of examples of dangerously over-simplified but cute, dumbed-down categorical assertions about groups, often in the genetic era from non-anthropologically sophisticated but prominent geneticists. One from years ago was the sequence of 'mitochondrial Eve', in which a set of mtDNA sequences was used to infer a common ancestral sequence, and that was then attributed to our founding first woman. There was, of course, one woman in whom the imputed mtDNA sequence (or some sequence like it) occurred. But the rest of that woman's genome, her dual sets of 23 chromosomes, had genetic variation that was also found in countless contemporary women (and men); each variant in each gene is found in a different set of those contemporaries, and each 'coalesces', as the term goes, in some single ancestral individual at some time and in some place. This 'Eve' was only our common ancestor in her mtDNA, not her other genes, and so she, as a person, was not our 'Eve'--our shared female progenitor a la Genesis. Indeed, among all of our genes there was no single, common ancestral time or place--probably not within hundreds of miles, or thousands of years, of one another. Each DNA segment found today has its own ancestry.
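For readers who want to see that point rather than take it on faith, here is a minimal coalescent sketch, under idealized Wright-Fisher assumptions and invented numbers (not anyone's actual analysis), showing that a mitochondrial locus and an autosomal locus typically trace back to common ancestors at very different times, and hence in different individuals:

```python
# A minimal coalescent sketch (idealized assumptions, invented numbers): each locus in a
# sample has its own most recent common ancestor, so the mtDNA 'Eve' is not the ancestor
# of the rest of our genome.
import random

def tmrca(sample_size, n_copies, rng):
    """Generations back to the common ancestor of a sample of gene copies,
    under a simple continuous-time Wright-Fisher (coalescent) approximation."""
    k, t = sample_size, 0.0
    while k > 1:
        rate = k * (k - 1) / 2 / n_copies   # rate of the next coalescence among k lineages
        t += rng.expovariate(rate)
        k -= 1
    return t

rng = random.Random(42)
N = 10_000                 # hypothetical constant population size
mt_copies = N // 2         # mtDNA: effectively one copy per female
autosomal_copies = 2 * N   # an autosomal locus: two copies per person

print("mtDNA TMRCA (generations):    ", round(tmrca(50, mt_copies, rng)))
print("autosomal TMRCA (generations):", round(tmrca(50, autosomal_copies, rng)))
# The autosomal locus typically coalesces several times deeper in the past, and in a
# different ancestral individual, than the mitochondrial locus does.
```

Run it with different seeds and the two dates wander widely; what never changes is that different loci have different histories.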
Using the 'Eve' phrase was a cute liberty that got the story widely circulated, and as a Barnum & Bailey tactic, it worked very well. But its reference to the Biblical Eve, a woman from whom all of us are purportedly descended, was culpably misleading even if catchily attention-seeking. And, of course, the purported common ancestral mtDNA sequence is only inferred in a statistical sense, from today's mtDNA variation. This Eve-imagery came out of the Allan Wilson lab at UC Berkeley, a source of free-wheeling, glibly cute public claims. That sort of thing gets picked up by a media hungry for cute stories and gives it legs. So the behavior is rewarded.
More serious abuses of stereotypes
The 'mitochondrial Eve' characterization was cute and catchy, but perhaps harmless. But categorical oversimplifying by scientists isn't always just cute and harmless. In my day as a graduate student, a prominent physical anthropologist, at Penn no less, Carleton Coon, said in one of his widely read books on racial variation, that "No one can express anguish more convincingly by his facial expression than an Italian. A Negro's facial expression, on the other hand, consists largely of exposing his eyeballs and his teeth. There is good reason for this difference: the Italian's mobile and moving communication would be lost, under most lighting conditions, on a black face."
Yet when I was in graduate school, at about the same time this was published, I took human anatomy at the University of Michigan medical school. When we got to the superficial facial muscles, here is the illustration of those muscles from my professor's own, prominent, anatomy text:
From Woodburne: Essentials of Human Anatomy (4th ed.), 1969
This drawing uses a black person as an exemplar of human facial muscles. They are clear and clearly identified as functional; they are not degenerate or minimized, incapable of full expression. They are not the muscles of but one category of people: they are the human muscles.
Rumors, at least, were that the eminent Professor Coon had argued, behind the scenes, against integrating schools in the US, on the grounds that 'Negroes' were of intellectually inferior ability. Categorical thinking, with its typically concomitant value judgments, is nothing new, and it's never over, but sloppy scientific thinking shouldn't contribute to the problem.
Even without making qualitative value judgments, categorical thinking about humans, a form of racism, is historically dangerous, and everyone in science should know that. Yet, recently, there has been a simple, dramatic story of past human 'breeding' habits that indicates that categorical scientific racism still has legs in our society and, indeed, our most prominent journals. If not intentionally, it's by a kind of convenient default.
Here are the cover, and one of the figures, from a recent issue of Nature. The embracing hands of people of different 'colors' are shown as types who mated, thus producing a 'hybrid' between a Neanderthal and a Denisovan parent. This is a splashy story because these are considered to be different species. And the journal, naturally, used this as its lurid cover. The cover figure is about the 6 September story in that issue, from which we reproduce one figure that shows groups represented as regionally distributed people of different color. Is it unfair to call this stereotyping, of the old-fashioned type, even if only subliminally? Whatever the intent, the typological thinking is not subtle.
Thinking of this sort should have been long gone from Anthropology because DNA sequencing has clearly shown the internal variation and inter-group (or, better put, inter-geographic) overlap in variation. But when the publicity engines and the sensationalistic adrenalin are at work in science, whatever sells seems OK.
Even with a very long history of racism, including of course intentional slavery and genocide, we cannot seem to give up on types and categories, even inadvertent habits with no value judgment intended. But whether intentional and vicious, or merely inadvertent and de facto, this is essentially racism, and should be called out as such. And racism is dangerous, especially when voiced by scientists who should know better, or even, as I presume in this case, who are not racists in the usual discriminatory sense (that may not apply to their readers!). As a prominent colleague once said privately to me, he was not a 'personal racist' (he had African friends, after all)--he was just a typologist, a genetic determinist; i.e., a scientific racist.
Even if the authors of the human hybrid piece, happy enough for a cover story in a major journal, are not themselves "personal racists," they perpetuate classificatory thinking. Countless people have lost their lives because of careless sloganeering. No matter its more polite guise, and carefully nonbiological group coloring in the figures, is this any different?
Is science heading back to those good ol' days again?
Monday, October 1, 2018
My sexed-up Jordan Peterson fantasy
Picture me dressed in "a new three-piece suit, shiny and brown with wide lapels with a decorative silver flourish" and my cranium and jaws draped luxuriously, top to bottom, in Jordan Peterson hair.
I even pronounce "so-ree" like he does because I got my first period while living in Ontario, so I earned the right. Also, I don't care if it offends him or you.
I am pulling this off beautifully. Trust me:
This is me being Axl Rose.
Jordan Peterson and I, his chaotic mirror, are sitting across from one another in comfy leather armchairs, with nothing in between to break the gaze of my crotchal region pointed at his crotchal region. I'm as cool as he is. I don't have to act like I've been here before because I have.
I'm leaning in, in the most specific way, towards Jordan Peterson, a model of human success.
I ask my first question.
"Could you please lay out the scientific logic linking lobsters to the patriarchy?"
He says many things, including that people obviously aren't lobsters and how we aren't even chimpanzees, which is reasonable because he is skilled at reason.
Then I say, "You're a man of science. So, what do you have to say about any evidence that contradicts your ideas about the natural ways of hierarchies and how they're particularly relevant to human society? Also, have you thought of any ways your ideas could be tested?"
He says many things, but they have little to do with taking contradicting evidence seriously or having thought through the difficulty of testing much of what's fundamental here. And this is largely because, despite the veneer of science, these ideas have breached the bounds of reasonable, feasible testing.
"What about this, eh?" I offer, "We take the top lobster from the west side of Prince Edward Island and move him to the east side of the island. Is he going to be the top lobster there too? And would this same experiment work for chimpanzees? Or for humans?"
He says many things but will make it seem like there is no point to what I asked. The idea that humans are not lobsters or chimpanzees will resurface--tethering him, once again, to reason and nuance yet not actually producing anything of the sort for us.
So I continue, "Because if this link to lobster hierarchies is supposed to go not just from lobsters to humans, but to individual humans and their natural strengths and weaknesses compared to other individuals, then context shouldn't matter and if we transplant a lobster or a human then they should each assume their natural position in the local social hierarchy, wherever we plop them."
He says many things that sound reasonable.
Then I add "It's not just the top lobster's lived experience that contributes to his place atop the hierarchy, it's everyone else's below him in that hierarchy too--right?"
He must agree with this and does. He says many things that sound reasonable.
"And so where do you change from lobster to human and acknowledge that bad luck and circumstances of birthplace and family and everything else stick people lower in the hierarchy than they could be in different circumstances?"
He says many things about how this is absolutely true for so many young white men in North America all of whom can improve their station if they just read 12 Rules For Life and fill out the worksheets.
"Could a lobster improve its station in any way comparable to what you suggest for your readers?"
He isn't having my silliness.
"Okay, let's back up. What is evolution?"
He's stunned but plays it off perfectly.
"How would you define evolution to your readers/viewers?"
He says many things straight out of Descent of Man and nods to some of the least huggable atheist superstars.
"It's not a single reader or viewer's fault that they don't know any better and just believe your evolutionary insinuations and assumptions, but don't you think you should know better? ... If I were to become globally influential and I wanted to share ideas about clinical psychology that would influence masses of people, being the Ph.D. and professor that I am, I would go to the cutting edge of the field and learn there first before going public. I would try to understand what is known and what is unknown in that field, your field, and appreciate how those things are known and why there still are unknowns. And, if my masses of followers were misunderstanding my take on clinical psychology or my takes on my own areas of expertise, being the Ph.D. and professor I am, I would go out of my way to clarify my ideas in hopes they'd understand me better. Do you do that with the ways that folks interpret your views, like how some take you literally about sex distribution and enforced monogamy?"
He says he thinks his readers/viewers understand him quite well on those topics. This is vague and elusive and I move on because I feel an odd mixture of disappointment, pity, and disgust and I'd like to leave it behind as soon as possible.
"I believe with confidence that there are fundamental cognitive differences between humans and other animals. Do you agree?"
He says he does.
"Do you agree that these cognitive differences have contributed significantly to our domination of the planet?"
He says sure, of course.
"So why be so limited in your vision for achieving equal freedom, equal opportunity? All I've heard from you is that men should act more masculine (and less feminine) and that so should women if men and women want to succeed. Do you have any other ideas? Or is that the extent of your imagination? Because it feels like quite an underappreciation of humankind to me. Like, the opposite of a moonshot, eh?"
He says many things that sound epic, I guess, if you're already a fan of his.
"Lobsters don't go to the moon. I think humankind can do better than just man-up."
He just plays it cool in his chair, there. And we can hear his fans all over the world laughing at me. No amount of masculine dress, hair, or swagger can disguise my big powerful lady PhD. And that's not just hilarious, but it also proves that men hold disproportionate power because of simple lobster logic. I've been dominated, which makes Jordan Peterson even more right about everything.
I even pronounce "so-ree" like he does because I got my first period while living in Ontario, so I earned the right. Also, I don't care if it offends him or you.
I am pulling this off beautifully. Trust me:
This is me being Axl Rose.
Jordan Peterson and I, his chaotic mirror, are sitting across from one another in comfy leather armchairs, with nothing in between to break the gaze of my crotchal region pointed at his crotchal region. I'm as cool as he is. I don't have to act like I've been here before because I have.
I'm leaning in, in the most specific way, towards Jordan Peterson, a model of human success.
I ask my first question.
"Could you please lay out the scientific logic linking lobsters to the patriarchy?"
He says many things, including that people obviously aren't lobsters and how we aren't even chimpanzees, which is reasonable because he is skilled at reason.
Then I say, "You're a man of science. So, what do you have to say about any evidence that contradicts your ideas about the natural ways of hierarchies and how they're particularly relevant to human society? Also, have you thought of any ways your ideas could be tested?"
He says many things, but they have little to do with taking contradicting evidence seriously or having thought through the difficulty of testing much of what's fundamental here. And this is largely because, despite the veneer of science, these ideas have breached the bounds of reasonable, feasible testing.
"What about this, eh?" I offer, "We take the top lobster from the west side of Prince Edward Island and move him to the east side of the island. Is he going to be the top lobster there too? And would this same experiment work for chimpanzees? Or for humans?"
He says many things but will make it seem like there is no point to what I asked. The idea that humans are not lobsters or chimpanzees will resurface--tethering him, once again, to reason and nuance yet not actually producing anything of the sort for us.
So I continue, "Because if this link to lobster hierarchies is supposed to go not just from lobsters to humans, but to individual humans and their natural strengths and weaknesses compared to other individuals, then context shouldn't matter and if we transplant a lobster or a human then they should each assume their natural position in the local social hierarchy, wherever we plop them."
He says many things that sound reasonable.
Then I add, "It's not just the top lobster's lived experience that contributes to his place atop the hierarchy, it's everyone else's below him in that hierarchy too--right?"
He must agree with this and does. He says many things that sound reasonable.
"And so where do you change from lobster to human and acknowledge that bad luck and circumstances of birthplace and family and everything else stick people lower in the hierarchy than they could be in different circumstances?"
He says many things about how this is absolutely true for so many young white men in North America, all of whom can improve their station if they just read 12 Rules For Life and fill out the worksheets.
"Could a lobster improve its station in any way comparable to what you suggest for your readers?"
He isn't having my silliness.
"Okay, let's back up. What is evolution?"
He's stunned but plays it off perfectly.
"How would you define evolution to your readers/viewers?"
He says many things straight out of Descent of Man and nods to some of the least huggable atheist superstars.
"It's not a single reader or viewer's fault that they don't know any better and just believe your evolutionary insinuations and assumptions, but don't you think you should know better? ... If I were to become globally influential and I wanted to share ideas about clinical psychology that would influence masses of people, being the Ph.D. and professor that I am, I would go to the cutting edge of the field and learn there first before going public. I would try to understand what is known and what is unknown in that field, your field, and appreciate how those things are known and why there still are unknowns. And, if my masses of followers were misunderstanding my take on clinical psychology or my takes on my own areas of expertise, being the Ph.D. and professor I am, I would go out of my way to clarify my ideas in hopes they'd understand me better. Do you do that with the ways that folks interpret your views, like how some take you literally about sex distribution and enforced monogamy?"
He says he thinks his readers/viewers understand him quite well on those topics. This is vague and elusive and I move on because I feel an odd mixture of disappointment, pity, and disgust and I'd like to leave it behind as soon as possible.
"I believe with confidence that there are fundamental cognitive differences between humans and other animals. Do you agree?"
He says he does.
"Do you agree that these cognitive differences have contributed significantly to our domination of the planet?"
He says sure, of course.
"So why be so limited in your vision for achieving equal freedom, equal opportunity? All I've heard from you is that men should act more masculine (and less feminine) and that so should women if men and women want to succeed. Do you have any other ideas? Or is that the extent of your imagination? Because it feels like quite an underappreciation of humankind to me. Like, the opposite of a moonshot, eh?"
He says many things that sound epic, I guess, if you're already a fan of his.
"Lobsters don't go to the moon. I think humankind can do better than just man-up."
He just plays it cool in his chair, there. And we can hear his fans all over the world laughing at me. No amount of masculine dress, hair, or swagger can disguise my big powerful lady PhD. And that's not just hilarious, but it also proves that men hold disproportionate power because of simple lobster logic. I've been dominated, which makes Jordan Peterson even more right about everything.