Friday, August 17, 2018

I'm still mad about the Google Memo and David Brooks's column about it


In 2017 there was the Google Memo (When your memo's bad theories give girls heebie jeebies ... That's Damore) and then David Brooks supported Damore in his New York Times column. So I pitched a reply to the New York Times (rejected by silence) but never posted it here because I was department Chair and [fill in the blank with your wildest dreams].

But it’s not too late to post my thoughts here, and they’re still fresh in my frontal lobe because I’m in the midst of some writing projects where I’m happily channeling my rage against the misuse of my beloved evolutionary thinking.

So, mermaids, here’s that response to Brooks.  P.S.  I’m on sabbatical, so pardon my fucking French …

***
Dear Editor,

I write to you regarding David Brooks’ column about the firing of ‘Google memo’ author Damore, titled “Sundar Pichai Should Resign as Google’s C.E.O.”  I offer some corrections and context for Brooks’ innumerable readers.

There is no debate that pits, on one side, a 'blank slate' view of human nature against, on the other, evolutionary psychology. The debate pitting nature against nurture is long over, and I tell all my students that anyone who says it's still a thing is mistaken. Everyone by 2017 agrees that genes + environment shape an individual human's behavior over their lifetime (if one must boil biological complexity down to two vague, enormously complicated variables and simple arithmetic).

What is more, the description of evolutionary psychology Brooks provides (genes + environment), while it may describe the perspective of many evolutionary psychologists, is not a description of the field as implied. It describes what experts think across *many many* fields, including evolutionary biology, anthropology, and genetics, even the humanities, where many researchers and scholars are not terribly fond of evolutionary psychology, at least not of the simple, deterministic, overly-confident brand that folks like Damore and Brooks wave about.

By giving this particular brand of evolutionary psychology credit for what most experts in many fields already believe, Brooks has elevated it to the status that Damore did in the memo. Both Brooks and Damore are misleading their audiences about the state of science itself and it's ingenious because it helps them perpetuate the image they want to portray: that science is on their side. It is not.

And Brooks does it again when he quotes evolutionary psychologist Geoffrey Miller as a sort of fact-check of Damore’s claims in the memo. Brooks' presentation of Miller's validation leads readers to believe that the empirical support for sex differences is the product of evolutionary psychology. But these data are the products of numerous fields, psychology being one, with an evolutionary perspective being the theoretical prerogative of some. I’m sure that every one of the scientists and scholars who produced the empirical data to establish sex differences in behavior and personality accepts the reality of evolution, but evolutionary psychology, especially this particular brand, is something different.

Most people who have really grappled with how evolution works appreciate its complexity. Unfortunately these usually do not include people with tremendous influence, like Brooks. And Brooks is smitten with some problematic takes on the evolution of sex and gender differences in behavior.  

Could this ignorance, manipulation, or flat-out dishonesty--all with negative consequences for women and people of color--be what was so offensive about the ‘Google memo’ and Brooks’s column to the minds of many academics, instead of it being just some knee-jerk liberal reaction by leftist elites with weak, unscientific cognitive skills? Absolutely.

Evolution is true but it’s complicated, and sticking to overly-simplistic and out-dated thinking makes it easy to bend to fit and justify one’s worldview. This is why racists think white people are the pinnacle of evolution. Darwin might have thought so in the nineteenth century, but evolutionary science in 2017 does not.

Lest readers assume that because I am a female anthropology professor I am diametrically opposed to the entire enterprise of evolutionary psychology, I am not. But I am critical of its over-zealous application to conceptions of 'human nature' and that's because (1) I regularly take scientific issue with the logic behind the claims, and (2) I understand the history of science and how many mistaken evolutionary claims have harmed human beings, and still do.

And it is really a shame that I have to add something like this but it's *because* intelligent people like Brooks and Damore don’t give enough fucks to think deeply about evolutionary biology, what it is and isn't, that they're able to empower their opinions with old, bad, weak, even untestable 'science.'

I wish I could say that in 2017 people, even the very learned ones, were cautious about what they could and could not claim about complex phenomena. Here’s to a more humble, more fun future where we can actually figure cool shit out.

Evolution is everyone’s origin story. But takes like Brooks’ and Damore’s drive people away from the thing that gives me so much meaning and the thing I find so beautiful. So here I am.

Sincerely,
Holly Dunsworth

Thursday, August 16, 2018

The Litella Factor: Changing the claimspace of science

We may be starting to see rationalizations and wiggle-words as investigators gradually inch away from many of the genomics-based claims, such as last year's slogan du jour that we're going to deliver 'precision' genomic medicine, or this year's that we'll find genomic causes of disease for 'All of Us'.  Science, of all human endeavors, should be objective about the world and not engage in sloganeering, even to wangle ever more funding from the public.  Many are by now quietly realizing not only that environments are important, which is nothing new though minimized by geneticists for a generation, but also that genomics itself is more complex, more variable, and less predictively powerful than has been so widely and often touted in recent years.

We've known the likely nature of genomic causal contribution complexity for literally a century (RA Fisher's 1918 paper is the landmark).  The idea was a reasoned way to resolve what appeared to be fundamental differences between classically discrete Mendelian traits that took on only one or two states (yellow or green peas), and classically quantitative, 'heritability'-based traits that seemed to vary continuously (like height) and that as a result were presumed to be the main basis of Darwinian evolution.  The former states seemed never to change, and hence never to evolve, while selection could move the average values of continuous traits.

The resolution of these two seemingly incompatible views--the idea that complex traits are produced by many individual 'Mendelian' genes, each with a very small effect--was a major advance in our understanding of both heritable causation and the evolution of life: agricultural and experimental breeding confirmed this 'modern synthesis' of evolutionary genetics to an extensive and consistent, if largely implicit, degree for a century.
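As a toy illustration of that resolution (a sketch with invented numbers, not anyone's actual model), summing many small 'Mendelian' effects yields an effectively continuous trait:

```python
import random

# Toy polygenic model (illustration only; all numbers are invented):
# a quantitative trait built from many Mendelian loci, each of tiny effect.
N_LOCI = 1000        # number of contributing loci
EFFECT = 0.1         # additive effect of each copy of the '+' allele
FREQ = 0.5           # '+' allele frequency at every locus

def trait_value():
    # Each locus contributes 0, 1, or 2 copies of the '+' allele.
    copies = sum(random.random() < FREQ for _ in range(2 * N_LOCI))
    return copies * EFFECT

population = [trait_value() for _ in range(5_000)]
mean = sum(population) / len(population)
var = sum((x - mean) ** 2 for x in population) / len(population)
print(f"mean trait ~ {mean:.1f}, variance ~ {var:.1f}")
# The result is an effectively continuous, bell-shaped distribution:
# discrete 'yellow or green pea' inheritance at each locus, quantitative
# 'height-like' variation for the trait as a whole.
```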

However, the specific genes that were responsible were largely implicit, assumed, or unknown.  There was no way to identify them until large-scale DNA sequencing technology became available.  What genomewide mapping (GWAS and other statistical ways to identify associations between genetic variants and trait variation) has shown is (1) that the century-old model was basically right, and (2) that we can identify many of the myriad genome regions whose variation is responsible for trait variation.  This was given a real boost in public support by the fact that many diseases are familial and, even more, by the idea that if our diseases and other traits are genetic, we can identify the responsible genes (and, hopefully, do something to correct harmful variants).

Phenotypes and their evolution (effects on health and reproductive success) are in this context usually based on the individual as a whole, not individual genes--say, your blood pressure's effect on you as a whole person.  That is, the combinations of polygenic effects that GWAS has identified typically differ for each person even if they have the same trait measure.  We have also found something that is entirely consistent with the nature of evolution as a population phenomenon.  That is, much of the contributing genomescape for a given trait (like blood pressure) involves genome sites whose relevant variants have very low frequency or effects too small to measure with statistical 'significance', so that only a fraction of the estimated overall genetic contribution in the population (measured as the trait's 'heritability') is accounted for by mapping.  All of this has been a discovery success, consistent with the basic formal genetic theory of evolution developed over the 20th century.
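A companion toy sketch of that last point (all numbers invented purely for illustration): if most contributing sites are rare, of tiny effect, or both, the few 'mappable' hits account for only part of the heritable variance.

```python
# Invented genomescape: a handful of 'mappable' common variants plus a long
# tail of rare, small-effect sites. The variance contributed by an additive
# biallelic site is 2p(1-p)a^2, where p is allele frequency and a the effect.
def site_variance(freq, effect):
    return 2 * freq * (1 - freq) * effect ** 2

mappable = [site_variance(0.30, 0.40) for _ in range(10)]      # few, common, larger effects
tail     = [site_variance(0.01, 0.15) for _ in range(4000)]    # many, rare, tiny effects

total = sum(mappable) + sum(tail)
print(f"share of genetic variance from the 10 'GWAS hits': {sum(mappable) / total:.2f}")
# ~0.27 with these invented numbers: most of the heritable variance is spread
# across sites individually too rare or too weak to reach 'significance'.
```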

Great success--but.....
The very same work, however, has led to a problem.  This is the convenient practice of equating induction with deduction.  That is, we estimate the risk effects of genomic sites from samples of individuals whose current trait-state reflects their genotype and their past lifestyle exposures.  For example, we estimate the average blood pressure among sampled individuals with some particular genotype.  That is induction.  But then we promise that from a new person's genotype we can, with 'precision', predict his/her future state.  That is, we use this deductively, assuming that the average from past samples is a future parameter--say, a probability p of getting some disease.  That is essentially what a genotype-specific risk is.

But this is based on the achieved effects of the individuals' genotypes at the test and other genome sites as well as lifestyle exposures (mainly unmeasurable and unknown).  We assume that similar factors will apply in the future, so that we can predict traits based on genome sequence.  That is what (by assumption) converts induction to deduction.  It rests on many untested or even untestable assumptions.  It is a dubious port of convenience, because future mutations and lifestyle exposures, which we know are crucial to trait causation, are unpredictable--even in principle.  We know this from clearly documented epidemiological history: disease prevalences change in unpredictable ways so that the same genotype a century ago would not have the same phenotype consequences today.
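A minimal sketch of that induction-to-deduction step, with figures invented for illustration: a risk estimated from a past cohort is applied as if it were a fixed parameter for new individuals, and a shift in unmeasured exposures quietly breaks the 'prediction'.

```python
# Toy illustration (all figures invented) of induction being re-used as deduction.

# Induction: estimate risk from a past cohort of carriers of some genotype,
# observed under past diets, environments, and so on.
past_cases, past_carriers = 120, 1000
estimated_risk = past_cases / past_carriers             # f = 0.12

# Deduction-by-assumption: treat f as a fixed parameter -- each new carrier's
# 'precise' probability of disease.
expected_future_cases = estimated_risk * 1000            # promise: ~120 per 1000

# But future exposures (diet, infections, policy, new mutations...) differ in
# ways nobody can specify in advance. Suppose the same genotype now carries
# risk 0.20 under the new, unmeasured conditions.
actual_future_cases = 0.20 * 1000                         # reality: ~200 per 1000

print(expected_future_cases, actual_future_cases)
# The estimate faithfully described the past sample; the 'prediction' quietly
# assumed the future would replay it.
```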

So, while genetic variation is manifestly important, its results are complexly interactive, not nearly the simple, replicable, additive, specific causal phenomena that NIH has been promising so fervently to identify to produce wonders of improved health.  It's been a very good strategy for securing large budgets, and hence very good for lots of scientists, and perhaps as such--its real purpose?--it is a booming success.  It did, one must acknowledge, document the largely theoretical ideas about complex genotypic causation of the early 20th century.  But the casual equating of induction with deduction has also fed a convenient ideology that has not been very good for science, because science should shun ideology: in this case the idea of enumerable, essentially parametric causation is wrongly and far too narrowly focused.  

Perhaps some realization is afoot
But now we're seeing, here and there, various qualifiers and caveats and soft, not fully acknowledged, retreats from genomics promises.  Some light is being shone on the problems and the practices that are common today.  Few if any are admitting they've been too strident, or wrong, or whatever; instead they are presenting their view either as what we all already know, or as a kind of new insight they are offering.  That is, claiming that things aren't so genomically caused becomes itself a claim of original insight, and hence a basis for new or continued funding.  No apologies, and no acknowledgments of those critics of the current NIH-promoted Belief System, who have been pointing these things out for many years--no offer of Emily Litella's quiet and humble recognition of a mistake:  "Oh.....Never mind!"


"Oh.....Never mind!"  YouTube from NBC's SaturdayNightLive
How seriously should this quiet backtracking be challenged?  Is it even fair to call the revisionists 'hypocrites'?  We live and learn via science, so perhaps the claimscape change, though quiet and implicit, is a reflection of good science, not just expediency.  Perhaps that is how science should be, reacting, even if slowly, to new knowledge and giving up on cherished paradigms.

One underlying problem of modern science is not that we can hold wrong notions, but that hasty, excessive claims are rushed to the public, the journals, and the funders. In a sense, this isn't entirely a fault of vanity but of the system we've built for supporting science. A toning down of claims, shunning those who claim too much too quickly, and a much higher threshold for 'going public' would improve science and indeed be more honest to the public.  A stone-age suggestion I've made (almost seriously) is that journals should stop publishing any figures or graphs (in the pages or on the cover) in color--that is, make science papers really, really boring!  Then, only serious and knowledgeable scientists would read, much less shout about, research reports (maybe some black-and-white TV science reporting should be allowed, too).  At the least, we are due some serious reforms in science funding itself, so that scientists are not pressured, for their very career survival, into the excessive claimscape of recent years.

In specific terms, I personally think that by far the most important reforms would be to limit the funding available to any single laboratory or project, to stop paying faculty salaries on grants, to provide base funding for faculty hired with research as part of their responsibilities, and to decouple relentless hustling for money from research, so that the science rather than the money would be in the driver's seat.
Universities, lusting after credit, score-counts, and grant overheads, would have to quiet down and reform as well.

The infrastructure is broad and altering it would not be easy. But things were once more sane and responsible (even if always with some venal or show-boat exceptions, humans being humans). But if such reforms were to be implemented, young investigators could apply their fresh minds to science rather than science hustling.  And that would be good for science.

Wednesday, August 15, 2018

On the 'probability' of rain (or disease): does it make sense?

We typically bandy the word 'probability' around, as if we actually understand it. The term, or a variant of it like 'probably', can be used in all sorts of contexts that, on the surface, seem quite obvious and related to some sense of uncertainty; e.g., "That's probably true," or "Probably not."  But is it so obvious?  Are the concepts clear at all?  When are they actually meaningful, beyond the merely informal and subjective?

Will it rain today?  Might it?  What is the chance of rain?
One of the typical uses of probabilistic terms in daily life has to do with weather predictions.  As a former meteorologist myself, I find this a cogent context in which to muse about these terms, but with extensions that have much deeper relevance.

Here is an episode of a generally very fine BBC Radio 4 program called More or Less, whose mission is to educate listeners on the proper use and understanding of numbers, statistics, probabilities and the like.  This episode deals--somewhat unclearly and, to me, quite vaguely, unsatisfactorily, and even somewhat defensively--with the use and interpretation of weather forecasts.

So what does a forecast calling for an x% chance of rain mean?  Let's think of an imaginary chessboard laid over a particular location.  It is raining under the black squares, but not under the white ones.  There is nothing probabilistic about this.  50% of people in the area will experience rain.  If I don't know where you live, exactly, I'd have to say that you have a 50% chance of rain, but that has nothing to do with the weather itself but rather with my uncertainty about where you live.  Even then it's misleadingly vague since people don't live randomly across a region (they are, for example, usually clustered in some sub-regions).

Another interpretation is that I don't know where the black and white squares will be exactly, at any given time, but my weather models predict that in about half of the region, rain will fall.  This could be because my computer models, necessarily based on imperfect measurement and imperfect theory, are therefore imperfect--but I run them many times, making small random changes in various values to account for that imperfection, and I find that among these model runs, 50% of the time at any given spot, or 50% of the entire area under consideration, experiences rain.

Or is it that there is an imaginary chessboard moving overhead, so that 50% of the land will be under the black squares, and hence getting rain, at any given time; thus any given area will only get rain 50% of the time, but every area will certainly get rain at some time during the forecast period--indeed, every area will be getting rain half of the period?  Then the best forecast is that you will get wet if you stay outside all day, but if you only run out to get the mail you might not?  Might??

Or is it that my models are imperfect but theory or experience tell me that there is a 50% chance of any rain in the area--that is, my knowledge can tell me no more than that.  In that case, any given place will have this guesstimated chance of rain.  But does that mean at any given time during the forecast period, or at every time during it?  Or is it that my knowledge is very good, but the meteorological factors--the nature of atmospheric motion and so on--only probabilistically form droplets that are large enough not just to be clouds but to fall to earth?  That is, is it the atmospheric process itself that is probabilistic--at least based on the theory, since I can't observe every droplet?

If a rain-generating front is passing through the area, it could rain everywhere along the front, but only until the front has moved past the area.  Thus, it may rain with 100% certainty, but only 50% of the specified time, if the front takes that amount of time to pass through.
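A toy sketch (all numbers invented) of how three of these readings--areal coverage, ensemble-of-model-runs frequency, and fraction of the forecast period--can each be reported as 'a 50% chance of rain' while meaning quite different things:

```python
import random
random.seed(0)

# Interpretation 1: areal coverage. Rain falls on half the 'chessboard';
# your chance of getting wet is really my uncertainty about where you are.
squares = ["rain" if i % 2 == 0 else "dry" for i in range(64)]
coverage = squares.count("rain") / len(squares)

# Interpretation 2: ensemble frequency. Re-run an imperfect model with
# perturbed inputs; count how often it rains at your exact spot.
def model_run():
    return random.random() < 0.5      # stand-in for one perturbed forecast run
ensemble = sum(model_run() for _ in range(1000)) / 1000

# Interpretation 3: time fraction. A front guarantees rain everywhere,
# but only during half of the 12-hour forecast period.
hours_of_rain, period = 6, 12
time_fraction = hours_of_rain / period

print(f"coverage {coverage:.2f}, ensemble {ensemble:.2f}, time {time_fraction:.2f}")
# All three come out around 0.50, yet the advice differs: under (3) you WILL
# get wet if you stay out all day; under (1) half the region stays entirely dry.
```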

I've undoubtedly mentioned only some of the many ways that weather-forecast probabilities can be intended or interpreted.  It is not clear--and the BBC program shows this--that everyone or perhaps even anyone making them actually understands, or is thinking clearly about, what these probability forecasts mean.  Even meteorologists themselves, especially when dumbing down for the average Joe who only wants to know if he should carry his brolly with him, are likely ('probably'?!) unclear about these values.  Probably they mean a bit of this and a bit of that.  I wonder if anyone can know which of the meanings is being used in any given forecast.

Well, fine, everyone knows that nobody really knows everything about the weather.  Anyway, it's not that big of a deal if you get an unexpected drenching now and then, or more often haul your raincoat to work but never need it.

But what about things that really matter, like your future health?  My doc takes my blood pressure and looks at my weight, and may warn me that I am at 'risk' of a heart attack or stroke--that without taking some preventive measures I may (or probably will) have such a fate.  That's a lot more important than a soaked shirt.  But what does it mean?  Isn't everybody at some risk of these diseases?  Does my doc actually know?  Does anybody?  Who is thinking clearly about these kinds of risk pronouncements?

OK, caveats, caveats: but will I get diabetes?
'Precision' genomic medicine is one of the genomics marketing slogans of the day: the very vague (I would say culpably false) promise that from your genotype we can predict your future--that's what 'precision' implies.  The same applies even if the weaseling now includes environmental factors as well as genomic ones.  And the idea implies knowledge not just of some vague probability, but by implication it means perfection--prediction with certainty.  But to what extent--if any at all--is the promise, or can the promise be, true?  What would it mean to be 'true'?  After all, anyone might get, say, type 2 diabetes, mightn't they?  Or, more specifically, what does such a sentence itself even mean, if anything?

We know that, today at least, some people get diabetes sometime in their lives, and even if we don't know why or which ones, that seems like a safe assertion.  But to say that any person, not specifically identified, might become diabetic is rather useless.  We want a reason--a cause--and if we have that we assume it will enable us to identify specifically vulnerable individuals.  Even then, however, we don't know more than to say, in some sense that we may not even understand as well as we think we do, that not all the vulnerable will get the disease: but we seem to think that they share some probability of getting it.  But what does that mean, and how do we get such figures?

Does it mean that, among all those with a given GWAS genotype: (1) a fraction f will get diabetes? (2) a fraction f will get diabetes if they live beyond some specified age? (3) a fraction f will get diabetes before they die, if they live with the same lifestyle and diet as those from whom the risk was estimated? (4) a net fraction f will get diabetes, pro-rated year by year as they age? or (5) a net fraction related to f will get diabetes, adjusted for current age, sex, race, etc.?
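As a small illustration of how much the answer depends on which definition is meant, here is a toy follow-up cohort (all ages invented) evaluated three different ways:

```python
# Toy follow-up data for 10 carriers of some genotype: age at diabetes onset
# (None = never developed it) and age at death. Invented for illustration.
cohort = [
    {"onset": 55, "death": 80}, {"onset": None, "death": 58},
    {"onset": 72, "death": 75}, {"onset": None, "death": 90},
    {"onset": 48, "death": 66}, {"onset": None, "death": 52},
    {"onset": 81, "death": 84}, {"onset": None, "death": 77},
    {"onset": 63, "death": 91}, {"onset": None, "death": 68},
]

n = len(cohort)
lifetime_f = sum(p["onset"] is not None for p in cohort) / n            # definition (1)
survivors = [p for p in cohort if p["death"] >= 65]                     # definition (2)
over_65_f = sum(p["onset"] is not None for p in survivors) / len(survivors)
by_60_f = sum(p["onset"] is not None and p["onset"] <= 60 for p in cohort) / n

print(lifetime_f, round(over_65_f, 2), by_60_f)   # 0.5  0.62  0.2
# Same people, same genotype--three different 'risks', depending on which
# definition is silently chosen.
```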

What about each individual consulting their Big Data genomic counselor?  Are these fractions f related to each individual as a probability p=f that s/he will get diabetes (conditional on things like items 1-5 above)?  That is, is every person at the same risk?

Only if we can equate our past sample, from which we estimated f by induction, to the probability p used by deduction for each new individual might this, even in principle, lead to 'precision genomic medicine'.  It is prediction, not just description, that we are being promised.  Even if we were thinking in public health terms, this is essentially the same, because it would relate to the fraction of individuals who will be affected in the future, on the assumption that each person is exposed to the same probability.

Of course, we might believe that each person has some unique probability of getting diabetes (related, again, to the above items), and that f reflects the mix (e.g., average) of these probabilities.  But then we have to assume that all the genotypes and lifestyles and so on in the current group, for whose future we're offering 'precision' predictions, are exactly like those in the sample from which the predictions were derived--that this mix of risks is, somehow, conserved.  How can such an assumption ever be justified?
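A related toy sketch (again, all numbers invented): a population in which individuals carry very different, unknown risks can produce exactly the same overall fraction f as one where everyone shares the same risk, so the sample fraction alone cannot tell us which world we are in, let alone guarantee that the mix is conserved into the future.

```python
import random
random.seed(42)

N = 100_000

# World A: everyone shares the same risk p = 0.12.
world_a = sum(random.random() < 0.12 for _ in range(N)) / N

# World B: a mix -- 20% of people have risk 0.50, the rest have risk 0.025.
def individual_risk():
    return 0.50 if random.random() < 0.20 else 0.025
world_b = sum(random.random() < individual_risk() for _ in range(N)) / N

print(round(world_a, 3), round(world_b, 3))   # both ~0.12
# Identical observed fractions, entirely different individual 'probabilities'--
# and no guarantee that either mix will hold for future samples.
```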

Of course, we know very well that no current sample whose future we want to be precise about will be exactly the same as the past sample from which the probabilities (or fractions) were derived.  Obviously, much will differ, but we also know that we simply have no way to assess by how much it will differ.  For example, future dietary, sociopolitical, and other factors that affect risk will not be the same as those in the past, and are inherently unpredictable.  So, on what meaningful basis can 'precision' prediction be promised?

Just for fun, let's take the promise of precision genomic medicine at its face value.  I go to the doc, who tells me
"Based on your genome sequence, I must advise you of your fate in regard to diabetes."
"Thanks, doc.  Fire away!"
"You have a 23.5% chance of getting the disease."
"Wow!  That sounds high!  That means I have a 23.5% chance that I won't die in a car or plane crash, right?  That's very comforting.  And if about 10% of people get cancer, then of my 76.5% chance of not getting diabetes, it means only a 7.65% chance of cancer!  Again, wow!"
"But wait, Doc!  Hold on a minute.  I might get diabetes and cancer, right?  About a 7.65% percent chance of that, right?"
"Um, well, um, it doesn't work quite that way [to himself, sotto voce: "at least I think so..."].....that's because you might die of diabetes, so you wouldn't get cancer.  Of course, the cancer could come first, but it would linger, because you have to live long enough to experience your 23.5% risk of diabetes.  That would not be good news.  And, of course, you could get diabetes and then get in a crash.  I said get diabetes, not die of it, after all!"
I gather you, too, can imagine how to construct many different sorts of fantasy conversations like this, even rashly assuming that your doctor understood probability, had read his New England Journal regularly when not too sleepy after a day's work at the clinic--and that the article in the NEJM was actually accurate.  And that NIH sincerely understood what it was promising in the way of genomic predictability.  But wait!  The medical journals, and even the online genotyping scam companies--you can probably name one or two of them--change your estimated risks from time to time as new 'data' come in.  So when can I assume the case is closed, and that I (well, the Doc) really know the true probabilities?

I mean, what if there are no such true probabilities, because even if there were, not just knowledge, but also circumstances (cultural, not to mention mutations) continually change, and what if we have no way whatever to know how they're gonna change?  Then what is the use of these 'precision' predictions?  They, at best, only apply to a single, current instance.  So what (if anything at all) does 'precision' mean?

It only takes a tad of thinking to see how precisely imprecise these promises all are--must be--except as very short-term extrapolations of what past data showed, and extrapolations of unknown (and unknowable) 'precision'.  Except, of course, the very precise truth that you, as a taxpayer, are going to foot the bill for a whole lot more of this sort of promise.

Unlike with the weather, we don't have anything close to as rigorous an understanding of human biology and cultures as we do of the behavior of gases and fluids (the atmosphere).  We might want to say, self-protectingly and with more honest modesty, that our use of 'probability' is very subjective and really just means an extrapolated rough average of some unspecifiable sort.  But then that doesn't sound like the glowing promise of 'precision', does it?  One has to wonder what sort of advice would make scientifically proper, and honorable, use of the kind of probabilistic, vague, ephemeral evidence we have when we rely on 'omics approaches, or even when it's the best we can do at present.

In meteorology, it used to be (when I was playing that game) that we'd joke "persistence is the best forecast".  This was, of course, for short range, but short range was all we could do with any sort of 'precision'.  We are pretty much in that situation now, in regard to genomics and health.

The difference is, weather forecasters are honest, and admit what they don't know.

Tuesday, August 14, 2018

The Placebome.....can you believe that!

Is it only religion that feeds and reassures the gullible, no matter what catastrophes strike?

When a baby is born with serious health issues, this is apparently the loving God's will (to test the parents' faith; God can, after all, save the baby's soul).  But rather than just blaming God, perhaps one's faith in this same devilish Being, that faith itself, could have curative powers.  At least those powers might extend to the believer him or herself.

When a person's mood ameliorates a disease, yet no formal medical treatment has been involved, that is a psychological effect.  When the person is in a placebo-controlled drug trial, in which s/he has (though unaware of it) been given a sugar pill--a placebo--rather than the drug under test, and that person's health improves anyway, that is called the placebo effect.

It is important when testing a new drug to have a way to determine whether it really does nothing (or, indeed, is harmful) rather than having its intended effect.  Since people who are ill might get better or worse for various reasons, a drug trial often compares those patients given the drug with those who are given a placebo.  The drug is considered to be efficacious if it does something, rather than nothing--nothing, that is, as is assumed about the placebo.
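A minimal sketch of that comparison logic (toy data, invented numbers; a simple permutation test stands in for whatever analysis a real trial would use):

```python
import random
random.seed(3)

# Invented symptom-improvement scores (higher = more improvement).
drug    = [4.1, 3.8, 5.0, 2.9, 4.4, 3.6, 4.8, 3.9]
placebo = [2.2, 3.1, 1.8, 2.7, 3.0, 2.4, 2.9, 2.5]

observed = sum(drug) / len(drug) - sum(placebo) / len(placebo)

# Permutation test: if the drug 'does nothing beyond placebo', group labels
# are arbitrary, so shuffling them should often produce a gap this large.
pooled, n = drug + placebo, len(drug)
count = 0
for _ in range(10_000):
    random.shuffle(pooled)
    if sum(pooled[:n]) / n - sum(pooled[n:]) / n >= observed:
        count += 1

print(f"observed difference {observed:.2f}, permutation p ~ {count / 10_000:.4f}")
# A small p-value says only that drug beat placebo in THIS sample -- it says
# nothing about what the placebo response itself is, or why it occurs.
```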

But are some unjustified if convenient assumptions being made in this long-used standard comparison as a test of the new drug's efficacy?  Studies including placebo have long been relatively standard, if not indeed mandatory for drug approval.  But how well are the comparisons--and their underlying assumptions--understood?  The answer may not be as obvious as is generally assumed.

Back pain that's a headache
What about this paper by Carvalho et al., in the journal Pain (Carvalho et al., vol 157, number 12, 2016)?  The authors did a randomized controlled trial of open-label placebos (OLPs), taken on the usual dosing schedule for the usual 3 weeks, by patients suffering low back pain.  The authors found clear (that is, statistically significant) reduction in symptoms--even though the 'control' patients knew they were taking a placebo.  Perhaps they still thought they were taking medicine, or perhaps just being in a study seemed to them, somehow, to be a form of care, something positive--that is, systematically better than no treatment.  But this is not supposed to happen, and it relates to a variety of very important, if equally inconvenient, issues about what counts as evidence, what counts as therapy, and so on.


The samples in the Carvalho study were small and one can quibble about the quality of the research if one wants to dismiss it.  (E.g., if it were really true, why wasn't it published in a major journal? Did reactionary reviewers from these journals keep it from being published there?).  Still, if the placebo effect is real, the idea should not be a surprise.  Biologically, there really need be no reason why subjects must be blinded to being given placebos in order for them to work.  

But is it appropriate to ask whether, in a similar way, religious faith might have a placebo effect, and if so, should it be part of case-control studies of new drugs or treatments?  If so, then.....

....some things to consider
Here's an interesting thought:  If the placebo effect is real, then how do we know that actual medicines work?  They may seem better than placebos in comparison studies, but what if a substantial fraction of the treatment effect is due to religious or other beliefs?  That is, what if these subjects experience a kind of placebo effect?  Then the case-control distinction is less than one thinks: perhaps, as a result, the efficacy of the medicine is actually substantially less than is credited by the standard kinds of placebo-comparison study.  Perhaps placebo-response is part of the case side of the comparison, as well as the control side, and without it the 'case' effect would no longer be significant, or as significant?
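A back-of-envelope version of that worry (invented numbers; it assumes, as trial interpretation usually does, that drug-specific and placebo responses simply add):

```python
# Invented average symptom-improvement scores from a hypothetical trial.
treatment_arm_improvement = 5.0   # drug-specific effect + placebo response + natural course
placebo_arm_improvement   = 3.5   # placebo response + natural course

# The usual (additive) reading: the drug itself contributes only the difference.
drug_specific = treatment_arm_improvement - placebo_arm_improvement
print(drug_specific)   # 1.5 -- much smaller than the 5.0 a treated patient experiences

# If the placebo component were larger in believers, or interacted with the
# drug rather than adding to it, even this 1.5 could be over- or under-stated.
```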

If we are doing a placebo-based test of a new drug, should the religious or other beliefs of participants be identified, and matched in the two groups?  What about atheists--is atheism also a comparable faith, or would it serve as a control on such faith?


Even to acknowledge the possibility that we've under-rated the placebo effect, and over-rated the drugs that we rely on, and that belief systems can even have such an effect, raises interesting and important questions.  What if we told a patient that s/he had a placebic genotype, and thus, say, tended to believe everything s/he heard or read?  Would s/he then realize this and stop believing, blocking the placebo effect?  Would not knowing whether s/he were a case or a control actually reduce even the 'case' effect?  Would we tell such people of some meds they could take to 'cure' this placebo-responsive trait?  Would they take it?  These could be interesting areas to explore, though deciding how to do definitive studies would, by the very nature of the subject, not be easy.

And yet. . . .
Of course, scientists being the way they are, there is now a proposed 'placebome' project (Hall et al., Trends in Mol Med, 21 (5), 2015). The researchers want to search for genomic regions that affect the effect, which, they claim, varies among people, and hence they assume it must be 'genetic' (this might even be reasonable, in principle, but it is way too premature for yet another GWAS project).  Is it as silly, bandwagonish, transparent, and premature a version of unquestioning belief and/or marketing as one can imagine?  I think so--you can, if you wish, of course, look at the paper and judge for yourself.

But even if this is capitalizing on the 'omics fad, a transparent me-too money-seeking strategy that our venal system imposes, that doesn't vitiate the idea that placebic effects could, in fact, be both real and important.  Nor does it vitiate the idea that truly thoughtful, systematic ways of investigating their nature, not just some statistical results related to them, would be possible and appropriate.  But to do this, how would such a study be designed?

One thing this all suggests to me is that we may not have defined placebos carefully (or knowledgeably) enough, or don't understand what is going on that could account for a physiological (as opposed to 'merely psychological') effect.  Since we have the embedded notion that science is about material technology, statistics, and so on, perhaps we just don't believe (and that's the right word for it) that things can happen that are not part of our science heritage, which largely derives from reductionist physics.  If we've not looked in a properly designed way for this effect, perhaps we should.  At the very least, there may be much to learn.

But before rushing to the 'omics market, there are interesting questions to ask.  Why aren't religious believers who pray for God's grace generally healthier than non-believers?  Or is there, in fact, a notable but undocumented difference?  Does serious religiosity serve as a placebo in daily life, and if not, why not?  If there are measurable physiological or neural pathways that can be identified during placebic experience, are they potential therapeutic targets?

But there's a deeper more serious question
The fact of placebo effects is generally interesting, but raises an important, very curious issue.   How can a placebo effect work on the diversity of traits for which it has been suggested?  If all a placebic effect does is make you feel better no matter how sick you are, then it's not really placebic in that it doesn't mimic the drug being taken and shouldn't affect the specific disease, just the patient's mood.  But if it can affect the disease, how can that be?

Placebos seem to work alongside many different drugs and treatments, for many physically and/or physiologically different and unrelated disorders.  At least, I think that is what has been reported.  But these involve different tissues and systems.  So how does the patient 'know' which tissue or physiological system to fix--that is, which cell type a real medicine would be targeting--when believing s/he has taken some effective medicine?

I know very little about the placebo effect, and it doubtless shows in this post, to anyone who does.  But I think these are important, or indeed fundamental, questions that include, but go beyond, asking whether the effect is real: they ask what the effect could actually be.  Before we untangle these issues, and understand what the placebo effect really is, we should be highly skeptical of any 'omic project claiming that it will map it and find out what genes are responsible for it.  Among other things, as I've tried to point out here, one needs to know what 'it' actually is.  And as regards genetic studies, is there the proper kind of plausibility evidence on which to build an 'omics case: is there, for example, any reason at all to believe the placebo effect is familial?

There is already a huge waste of research money chasing 'omics fads these days, while real problems go under-served.  One need not jump on every bandwagon.  If there are real questions here, and there seem to be, then the groundwork needs to be laid before we go genome searching.

Monday, August 13, 2018

Big Data: the new Waiting for Godot

In Samuel Beckett's cryptic play, Waiting for Godot, two men spend the entire play anticipating the arrival of someone, Godot, at which point presumably something will happen--one can say, perhaps, that the wait will have been for some achieved objective.  But what?  Could it simply mean that they can then go somewhere else?  Or, perhaps, there will be no end because Godot will never, in fact, arrive.

www.mckellen.com

A good discussion of all of this is on the BBC Radio 4 The Forum podcast.  Apparently, Beckett insisted that any such answers were in the play itself--he didn't imply that there was some external meaning, such as that Godot was God, or that the play was an allegory for the Cold War--which is one reason the play is so enigmatic.

Was the play written intentionally to be a joke, or a hoax?  Of course, since the author refused to answer or perhaps even to recognize the legitimacy of the question, we'll never know.  Or perhaps that, in itself, is the tipoff that it really is a hoax.  Or maybe (I think more likely), because it was written in France in 1949, it's an existentialist-era statement of the angst that comes from the recognition that the important questions in life don't have answers.

Waiting for the biomedical Promised Land
That was then, but today we are witnessing real-life versions of the play: things just as cleverly open-ended, with the 'What happens then?' question only having a vague, deferred answer, as in Beckett's title.  And, as in the play, it is not clear how self-aware even some of the perpetrators are of what they are about.

I refer to the possibility that we are witnessing various Big Data endeavors, unknowingly imitative but as cleverly and cryptically open-ended as the implied resolution that will happen when Godot arrives.  Big Data 'omics is a current, perhaps all too convenient, scientific version of the play, that we might call Waiting for God'omics.  The arrival of the objective--indeed, not really stated, but just generically promised as, for example, 'precision genomic medicine' for 'All of Us'--is absolutely as slyly vague as what Vladimir and Estragon were presumably waiting for.  The genomic Godot will never arrive!

This view is largely but not entirely cynical, for reasons that are at least a bit subtle themselves.

Reaching the oasis, the end of the rainbow, or the Promised Land is bad for business
One might note that if the 'omics Godot were ever to arrive, it would be the end of the Big Data (or should one say Big Gravy?) train, so obviously our Drs Vladimirs and Estragons must ensure that such a tragedy, arrival at the promised land, the elimination of all diseases in everyone, or whatever, never happens in real life.  Is there any sense that anyone seriously thinks we would reach resolution of the cause of disease, with precision for all of us, say, and be able (that is, willing) to close down the Big Budget nature of our proliferating 'omictical me-too world?

We have entrenched the search for Godot, a goal so vague as to be unattainable.  Even the proper use of the term 'precision' implies an asymptote, a truth that one never reaches but can get ever closer to.  If we could get there, as is implied, we should have been promised 'exact' genomic medicine. And wouldn't this imply that then, finally, we'll divert the resources towards cures and prevention?

However, even if the perpetrators of the Big Promises never think or aren't aware of it, we must note that the goal cannot be reached even with the best and most honorable of intentions.  Because of births and deaths, and environmental changes, and mutations and recombination, there truly never is the palm-draped oasis at which our venture could cease.  There will never be an 'all' of us, and genetic causation is ever-changing (in part because of the similarly dynamic environment), meaning that there are no such things as risks to be approached with 'precision'.  Risks are changeable and not stable, and indeed not fixed numerical values.  At best, they are collective population (or sample) averages.  So there is never a 'there' there, anywhere.  There is only a different one everywhere.

But awareness of these facts doesn't seem to be part of the 'omicsalyptic promises with which we are inundated.  They seem, by contrast, rote promises that are little if any different from political, economic, or religious promises--if only we do this, we'd get to a Promised Land.  But such a land does not exist.

If we had, say, a real national health system, it would be properly and avowedly open-ended without anyone honorable objecting (if it were done well).  And epidemiologically, of course, there will always be new mutations, recombinations, environments and the like to try to understand--disease with, or without strong genotype-phenotype causation.  There will always be a need for health research (and basic science).  But science, of all fields of human endeavor, should be honest. It should not hold out the promise that Godot will arrive, but in a sense, openly acknowledge that that can never happen.

But this doesn't let those off the guilty hook who are hawking today's implicit Big Data, big open-ended budget promise that by goosing up research now we'll soon eliminate genetic disease (I recall that Francis Collins did indeed, not all that long ago, promise that this Paradise would come soon--um, I think his date was something like 2010!).  It's irresponsible, self-interested promising, of course.  And those in genomics who are intelligent enough to deserve to be in genomics do, or should, know that very well.

Like Vladimir and Estragon, we'll always be told that we're waiting for Godot, and that he'll be coming soon.


NOTE:  One might observe that Godotism is a firmly entrenched strategy elsewhere in our society, for example, in regard to theoretical physics, where there will never be a collider big enough to answer the questions about fundamental particles: coming to closure would be as fiscally threatening to physics as it is to the life sciences.  Science is not alone in this, but our society does not pay it nearly enough skeptical heed.

Monday, August 6, 2018

Traffic jams ---> Trophic jams

We live in State College, PA, a small university town.  Well, it isn't nearly as small as it was when we moved here in 1985; Penn State enrollment has gone from around 30,000 when we got here to something like 50,000, and the town has grown to keep up.

How did that happen?  In essence, by sacrificing farm fields, turning them into condo centers, fine suburban-style cardboard 'mansions' with big grassy lots, 2-3 car garages (so everyone could drive a few miles to the nearest grocery), and so on. Even in this fairly small town, during the day, there are cars going through most intersections most of the time, even in the residential tracts.

To get from here to anywhere you need to get on I-80 or I-95, or some other throughway, where there is an endless chain of nearly stationary cars and trucks, hour after hour, mile after mile after mile.  Even when not obstructed by an accident or construction, the traffic is so heavy that it's not at all unusual for very slow, or creeping, or stopped traffic jams to litter the route.

The global traffic jam....
This same situation is happening all over the country, all over Europe, all over Japan and much of India and China.  It is even happening in parts of Africa and Australia.  This is 24/7.  The endless stream of steel, rubber, and petrochemicals is like a river, and as Heraclitus said, you can't step into it twice: in no two moments is this same stream actually the same.  The cars and people and their arrangement are different--and, of course, we are never using the same gasoline twice: once used, it is burped into the atmosphere.

When you've been around more than a few decades, you start to realize that the current situation isn't 'normal'.  In a decade or two, or three, you'll look back on today as good, something you were used to, and find that what has by then become normal, the jam of all jams, is what's really intolerable.

The traffic jam is, of course, due to the unconstrained growth of population, and its per capita consumption.  And this traffic jam, in turn, will have its longterm side effects in terms of the resources it uses up.  And that is going to lead to another kind of jam.

The global trophic jam
As we pave and build condos and shopping malls over what has for millions of years been millions of acres of fertile land, we reduce the potential food production for us and other creatures, plant or animal.  Our sewage and waste claim still more water and land.  And we seem unable to prevent there always being more of us.  That means more paving, more building, and altered climate.  This means less fertile land for growing food--a trophic jam.

Climate is changing, and at least some of this is due to human-induced global warming.  Ostrich-like deniers, note: Climate change is happening regardless of why!  This will moisten some arid lands, and even more it will dry out currently fertile lands, in large amounts.  It will raise water levels on coasts and in rivers.  Since, from before the industrial age, settlements--now cities--were built on waterways for trade and so on, many or even most major cities will be threatened by rising water.  This will drive people inland, to cover over even more arable land.  Those living inland should realize that they will not be able to keep this inrush out.

Some areas, like Northern Canada perhaps, will become wetter.  But hardly anybody lives there.  Other areas, the rich farmlands, will become drier and likely many will become arid.  Nations that rely on food for their people or for trade will have to look elsewhere--and if all of human history is any guide, this trophic jam will inevitably lead to attempts at military conquest.  If the breadbasket has shifted, say, from the US to Canada, and there's real food pressure, does anyone doubt that military expeditions will head northward?

Our relentless, unconstrained traffic jams are irritating, especially to the impatient (like me) or those who want to spend time with their families, or bowling, rather than sitting in traffic.  But these headaches may be dooming us to stomach aches--the kind one gets when there isn't enough food.

One can be a climate-denying ostrich, or a rosy believer in science and engineering, but when we here, and many others who know much more than we do, are making these warnings, they are not all Chicken Littles.  Yet, like a swarm of lemmings, we are headed for the cliffs.

Apparently, today at least, we can't see, or don't care to see, the connection between traffic jams and trophic jams.

Saturday, August 4, 2018

On Montaigne's cat

The person who, in a sense, invented the blog way back in 1580 in the form of his meandering Essays, was Michel de Montaigne.  He rambled across much of the territory of human thought, opining, suggesting, hinting, retreating and, well, just musing often rather incoherently.  Isn't that how most all modern blogs--this one included--are?!

Sadly for him, Montaigne couldn't Tweet his frequent 'posts', but he did Meow one.  In an oft-quoted part of his 'An Apology for Raymond Sebond', Montaigne muses about the arrogantly vain and presumptuous way that we judge our own uniqueness, in particular relative to other species.  In a famous passage, he writes:

"When I play with my cat, how do I know that she is not passing time with me rather than I with her"

  "Am I not a 'me'?"  Our own Mu (drawing by Anne Buchanan)

Humans routinely, conveniently, ignore the thought.  It is not in our self-interest to dwell on it.  Indeed, by now our cultural legacy is from the often obscure writing of Rene Descartes who, at least about himself, recognized "I think, therefore I am."  But, apparently, a cat doesn't, so isn't.  By turning other creatures into automatons, mere machines, in the period that laid the foundation for modern science, Descartes' objectifying dogma opened the door not only to justification for raising or hunting animals for our tables with a clear conscience, but also to the diverse experimentation that we do on uncountably many laboratory animals (indeed, the story with plants and their sense of self-awareness is becoming more complex, but that is too disturbing to think about).

Mea culpa!
I am personally heavily burdened by the thought of what I did over decades of research to countless mice.  Wholly innocent of any offense, they suffered the ultimate mortal penalty, so we could see what genes were expressed in their unborn young's teeth, or model effects on their craniofacial development, or even, unforgivably perhaps, so we could decide when they had grown too old and their lives were no longer (to us) worth living, and 'sacrifice' them.  No Viagra relief, retirement centers, hearing aids, etc. for them!

We were once forced to 'euthanize' (gas to death) a large number of laboratory mice, males, females, and young.  This was done in the usual 'humane' and research-ethics-approved way.  Deep in their sacrificial tank, as the hissing N2O began and the mice sensed the lack of air, they grouped tightly together in a terrified death huddle, young pressed against their mothers, in a way that, as I watched, reminded me of images of Hitler's death-showers.  I will never forget that, though it was entirely within the standard accepted IRB protocols by which 'we' manage and manipulate 'them'.  They're just things after all.....aren't they?  Of course, if so, why do we bother with any sort of 'humane' treatment?  Or, if they're like us, why are we allowed to manipulate them, often to their terror and suffering?

We smugly let chimpanzees retire comfortably to senior centers (e.g., Chimp Haven, in Louisiana).  Why?  Because they are like us!  But other animals, even rhesus monkeys, are merely them.  Their lives are disposable.

Fortunately, for doubters at least, there is no after-world in which justice will be served to us, or where we might ask Him (She? It? Them?) why life was created as a food chain in which each depends on one or another form of this sort of savagery just for survival.

The science question
All of this is confession in the side booth, but it does raise the important question that bemused Montaigne: what is the 'me' of a cat like, compared to my own 'me'?  Can we ever know?  Scholars have long mused over what the nature of consciousness might be and how we could ever know it.  When the detached, mechanistic Descartes said, metaphorically, 'I think, therefore I am', in the realm of consciousness he was 'thinking' in an exclusive way.

Frans de Waal, a prominent primate-watcher, has argued in a convincing way that 'thought', as we would casually use the term, doesn't really require language--doesn't have to be just the way you, right now, are doing it, to exist in every meaningful sense.

Of course, consciousness and its causative or even phenomenological nature has always been, and still is, essentially elusive.  I think and I am....but how?  How does wiring among a huge bunch of neurons lead to the meta-phenomenon of self-awareness?  Or, since clearly cats and even bugs are self-aware in some senses, and many if not all animals have similar genomes and neural structures and wiring, why don't they, too, have the same sense?  Is there such a thing as a lesser sense of 'me'?  How could we know, and, more importantly, on what basis can we assert that they don't really have It?

Many have opined that science is the specifically objective endeavor by which we, operating from the inside (of our own heads), assess the way the outside world works.  If so, then science can't be expected to look inside the inside, from the inside, so to speak; perhaps consciousness is a literally subjective phenomenon that we experience but cannot examine by what we call 'science'.  Further, we assume that it--whatever it is--is also experienced by (at least some of our more decent) human fellows.

If the notion is right that consciousness is, in reality, an internal experience out of bounds for the essentially external purview of science, then we may relate our own, and describe it as each of us sees it in others, from the outside, but we can't really understand it objectively.  If so, it would simply be out of bounds as inappropriate territory for science.  Many dabblers have tried to get around these obvious limitations, and they document all sorts of externally observed 'neural correlates', and in the same sense that a bullet through the head ends the phenomenon, these observations may reflect much about its objective nature.  But since consciousness is inherently about the experience, whatever the wiring, these correlates are, so far at least, just that--correlates.

Then how can we pronounce about other species?
Given this, what justifies the Cartesian convenience by which we blithely judge that they, even cats, don't have 'it'?  Or is it just a more profound kind of convenience, namely, that we want them--other species--to be 'things' so that we, with our self-declared special powers, can control their lives and even eat them?  Is that different from the view wasps and tigers must essentially have of their prey?  Or is there such a thing as 'partial' or 'lesser' consciousness, compared to ours--as opposed simply to a different kind of consciousness, for example, one not based on symbolic language as ours is?

Mammals, like our cat and dog friends, and even birds, have very similar genotypes to ours.  They have very similar cellular and anatomical structures, and neural wiring, to ours.  Their behaviors are very similar to ours.  They communicate in ways quite similar to ours except, perhaps, that it is more by stereotypical signaling than abstract symbols.  But we presume to dismiss their particular internal experiences as being mechanical, that is, fundamentally different from ours.

Is our declaration that they are just machines, or at least don't really have 'it', more than our particular convenient, self-interested rationale for doing what we like to them?

In a fashion that we would completely recognize were we to experience it, does a cow in the slaughterhouse queue ever ask:  'What's this? Why me?', or a cat wonder 'What is it like to be a human?'  I ask: what is it like, what does it seem and feel like, to be a laboratory mouse enjailed in a tiny cage?  Or to be gassed to death, at our convenience?  Montaigne's question is as cogent today as it ever was:

"When I play with my cat, how do I know that she is not passing time with me rather than I with her"


Our cats (and chipmunk).
Drawings by Anne Buchanan.  For more of her fantastic artwork, see http://www.annevbuchanan.com/
Left, center: are they not 'me's?  Right: aren't cat and chipmunk 'me's?