Wednesday, August 15, 2018

On the 'probability' of rain (or disease): does it make sense?

We typically bandy the word probability around as if we actually understand it.  The term, or a variant of it like probably, can be used in all sorts of contexts that, on the surface, seem quite obvious and related to some sense of uncertainty; e.g., "That's probably true," or "Probably not."  But is it so obvious?  Are the concepts clear at all?  When, actually, are they meaningful in more than an informal, subjective way?

Will it rain today?  Might it?  What is the chance of rain?
One of the typical uses of probabilistic terms in daily life has to do with weather predictions.  As a former meteorologist myself, I find this a cogent context in which to muse about these terms, but with extensions that have much deeper relevance.

Here is an episode of a generally very fine BBC Radio 4 program called More or Less, whose mission is to educate listeners on the proper use and understanding of numbers, statistics, probabilities and the like.  This episode deals, somewhat unclearly and, to me, quite vaguely, unsatisfactorily, and even somewhat defensively, with the use and interpretation of weather forecasts.

So what does a forecast calling for an x% chance of rain mean?  Let's think of an imaginary chessboard laid over a particular location.  It is raining under the black, but not under the white squares.  There is nothing probabilistic about this.  50% of people in the area will experience rain.  If I don't know where you live, exactly, I'd have to say that you have a 50% chance of rain, but that has nothing to do with the weather itself but rather with my uncertainty of where you live.  Even then it's misleadingly vague since people don't live randomly across a region (they are, for example, usually clustered in some sub-regions).

Another interpretation is that I don't know where the black and white squares will be exactly, at any given time, but my weather models predict that in about half of the region, rain will fall.  This could be because my computer models, necessarily based on imperfect measurement and imperfect theory, are therefore imperfect--but I run them many times, making small random changes in various values to account for that imperfection, and I find that among these model runs, 50% of the time at any given spot, or 50% of the entire area under consideration, experiences rain.
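That ensemble interpretation can be sketched in a few lines of code.  This is purely a toy illustration, not any real forecasting model: the variable names, thresholds, and perturbation sizes are all invented for the example.

```python
import random

def toy_model_run(seed):
    """One ensemble member: perturb the initial conditions slightly
    and report whether rain falls at our chosen spot."""
    rng = random.Random(seed)
    humidity = 0.70 + rng.gauss(0, 0.05)  # perturbed initial humidity
    lift = 0.50 + rng.gauss(0, 0.10)      # perturbed atmospheric lift
    return humidity * lift > 0.35         # crude rain threshold

# The forecast 'probability of rain' is just the fraction of
# ensemble members in which this spot gets wet.
runs = [toy_model_run(seed) for seed in range(1000)]
p_rain = sum(runs) / len(runs)
print(f"Ensemble chance of rain: {p_rain:.0%}")
```

With these invented numbers the ensemble comes out near a 50% chance--a statement about the spread of imperfect model runs, not about whether the atmosphere itself is probabilistic.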

Or, is it that there is an imaginary chessboard moving overhead, so that 50% of the land will be under the black squares, and hence getting rain, at any given time; thus any given area will only get rain 50% of the time, but every area will certainly get rain at some point during the forecast period--indeed, every area will be getting rain for half of the period?  Then the best forecast is that you will get wet if you stay outside all day, but if you only run out to get the mail you might not?  Might??

Or is it that my models are imperfect but theory or experience tells me that there is a 50% chance of any rain in the area--that is, my knowledge can tell me no more than that?  In that case, any given place will have this guesstimated chance of rain.  But does that mean at any given time during the forecast period, or at every time during it?  Or is it that my knowledge is very good, but the meteorological factors--the nature of atmospheric motion and so on--only probabilistically form droplets that are large enough not just to be clouds but to fall to earth?  That is, is it the atmospheric process itself that is probabilistic--at least according to the theory, since I can't observe every droplet?

If a rain-generating front is passing through the area, it could rain everywhere along the front, but only until the front has moved past the area.  Thus, it may rain with 100% certainty, but only 50% of the specified time, if the front takes that amount of time to pass through.

I've undoubtedly mentioned only some of the many ways that weather forecast probabilities can be intended or interpreted.  It is not clear--and the BBC program shows this--that everyone, or perhaps even anyone, making them actually understands, or is thinking clearly about, what these probability forecasts mean.  Even meteorologists themselves, especially when dumbing down for the average Joe who only wants to know if he should carry his brolly with him, are likely ('probably'?!) unclear about these values.  Probably they mean a bit of this and a bit of that.  I wonder if anyone can know which of the meanings is being used in any given forecast.

Well, fine, everyone knows that nobody really knows everything about the weather.  Anyway, it's not that big of a deal if you get an unexpected drenching now and then, or more often haul your raincoat to work but never need it.

But what about things that really matter, like your future health?  My doc takes my blood pressure and looks at my weight, and may warn me that I am at 'risk' of a heart attack or stroke--that without taking some preventive measures I may (or probably will) have such a fate.  That's a lot more important than a soaked shirt.  But what does it mean?  Isn't everybody at some risk of these diseases?  Does my doc actually know?  Does anybody?  Who is thinking clearly about these kinds of risk pronouncements?

OK, caveats, caveats: but will I get diabetes?
In genomics, 'precision' genomic medicine is one of the marketing slogans of the day: the very vague (I would say culpably false) promise that from your genotype we can predict your future--that's what 'precision' implies.  The same applies even if the current weaseling includes environmental factors as well as genomic ones.  And the idea implies knowledge not just of some vague probability; by implication it means perfection--prediction with certainty.  But to what extent--if any at all--is the promise true, or can it be?  What would it mean for it to be 'true'?  After all, anyone might get, say, type 2 diabetes, mightn't they?  Or, more specifically, what does such a sentence itself even mean, if anything?

We know that, today at least, some people get diabetes sometime in their lives, and even if we don't know why or which ones, that seems like a safe assertion.  But to say that any person, not specifically identified, might become diabetic is rather useless.  We want a reason--a cause--and if we have that we assume it will enable us to identify specifically vulnerable individuals.  Even then, however, we don't know more than to say, in some sense that we may not even understand as well as we think we do, that not all the vulnerable will get the disease: but we seem to think that they share some probability of getting it.  But what does that mean, and how do we get such figures?

Does it mean that among all those with a given GWAS genotype: (1) a fraction f will get diabetes?  (2) a fraction f will get diabetes if they live beyond some specified age?  (3) a fraction f will get diabetes before they die, if they live the same lifestyle and diet as those from whom the risk was estimated?  (4) a net fraction f will get diabetes, pro-rated year by year as they age?  (5) a net fraction related to f will get diabetes, adjusted for current age, sex, race, etc.?

What about each individual consulting their Big Data genomic counselor?  Are these fractions f related to each individual as a probability p=f that s/he will get diabetes (conditional on things like items 1-5 above)?  That is, is every person at the same risk?
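The dependence on conditioning is easy to make concrete.  Here is a toy cohort--every number in it is invented--showing how just two of the definitions above (ever getting diabetes, versus getting it given survival past some age) yield different fractions f from the very same people:

```python
# Hypothetical cohort sharing some risk genotype; each record is
# (age at death, age at diabetes onset or None if never diagnosed).
cohort = [
    (55, None), (80, 62), (70, None), (90, 71),
    (45, None), (85, None), (75, 60), (65, 64),
]

# Definition (1): fraction who ever get diabetes.
f1 = sum(onset is not None for _, onset in cohort) / len(cohort)

# Definition (2): fraction who get diabetes, among those who
# live beyond a specified age (here, 60).
survivors = [(d, o) for d, o in cohort if d > 60]
f2 = sum(o is not None for _, o in survivors) / len(survivors)

print(f"f, unconditioned:        {f1:.2f}")   # 0.50
print(f"f, given survival to 60: {f2:.2f}")   # 0.67
```

Same cohort, same genotype, two different 'risks'--and neither number tells a given individual which conditioning their counselor had in mind.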

Only if we can equate our past sample, from which we estimated f by induction, with the probability p that we assert by deduction for each new individual, might this, even in principle, lead to 'precision genomic medicine'.  It is prediction, not just description, that we are being promised.  Even if we were thinking in public health terms, this is essentially the same, because it would relate to the fraction of individuals who will be affected in the future, each person being exposed to the same probability.

Of course, we might believe that each person has some unique probability of getting diabetes (related, again, to the above items), and that f reflects the mix (e.g., average) of these probabilities.  But then we have to assume that all the genotypes and lifestyles and so on in the current group, for whose future we're offering 'precision' predictions, are exactly like those in the sample from which the predictions were derived--that this mix of risks is, somehow, conserved.  How can such an assumption ever be justified?

Of course, we know very well that no current sample whose future we want to be precise about will be exactly the same as the past sample from which the probabilities (or fractions) were derived.  Obviously, much will differ, but we also know that we simply have no way to assess by how much it will differ.  For example, future diets, sociopolitical, and other factors that affect risk will not be the same as those in the past, and are inherently unpredictable.  So, on what meaningful basis can 'precision' prediction be promised?

Just for fun, let's take the promise of precision genomic medicine at its face value.  I go to the doc, who tells me
"Based on your genome sequence, I must advise you of your fate in regard to diabetes."
"Thanks, doc.  Fire away!"
"You have a 23.5% chance of getting the disease."
"Wow!  That sounds high!  That means I have a 23.5% chance that I won't die in a car or plane crash, right?  That's very comforting.  And if about 10% of people get cancer, then of my 76.5% chance of not getting diabetes, it means only a 7.65% chance of cancer!  Again, wow!"
"But wait, Doc!  Hold on a minute.  I might get diabetes and cancer, right?  About a 7.65% percent chance of that, right?"
"Um, well, um, it doesn't work quite that way [to himself, sotto voce: "at least I think so..."]... that's because you might die of diabetes, so you wouldn't get cancer.  Of course, the cancer could come first, but it would linger, because you have to live long enough to experience your 23.5% risk of diabetes.  That would not be good news.  And, of course, you could get diabetes and then get in a crash.  I said get diabetes, not die of it, after all!"
I gather you, too, can imagine how to construct many different sorts of fantasy conversations like this, even rashly assuming that your doctor understood probability, had read his New England Journal regularly when not too sleepy after a day's work at the clinic--and that the article in the NEJM was actually accurate.  And that NIH sincerely knew what they were promising in the way of genomic predictability.  But wait!  The medical journals, and even the online genotyping scam companies--you can probably name one or two of them--change your estimated risks from time to time as new 'data' come in.  So when can I assume the case is closed and I (well, the Doc) really know the true probabilities?
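Incidentally, the arithmetic the imaginary patient garbles can be checked explicitly.  Under the (quite unjustified) assumption that the two diseases occur independently, and using only the invented numbers from the dialogue:

```python
p_diabetes = 0.235
p_cancer = 0.10

# The patient's first leap is a non sequitur: 'no diabetes' does not
# mean 'dies in a crash', so 76.5% says nothing about crash risk.

# If the diseases were independent, the chance of getting BOTH would
# be the product of the two risks:
p_both = p_diabetes * p_cancer           # 0.0235, about 2.4%

# The patient instead multiplies cancer risk by the chance of
# escaping diabetes, arriving at the dialogue's 7.65%:
p_patient = (1 - p_diabetes) * p_cancer  # 0.0765

print(f"P(both, if independent): {p_both:.4f}")
print(f"Patient's figure:        {p_patient:.4f}")
```

Of course, as the doctor's fumbling hints, independence is itself a fiction here: diabetes, cancer, and crashes compete for the same lifespan, which is exactly why these single-number 'precision' risks are so slippery.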

I mean, what if there are no such true probabilities, because even if there were, not just knowledge, but also circumstances (cultural, not to mention mutations) continually change, and what if we have no way whatever to know how they're gonna change?  Then what is the use of these 'precision' predictions?  They, at best, only apply to a single, current instance.  So what (if anything at all) does 'precision' mean?

It only takes a tad of thinking to see how precisely imprecise these promises all are--must be--beyond very short-term extrapolations of what past data showed, extrapolations of unknown (and unknowable) 'precision'.  Except, of course, the very precise truth that you, as a taxpayer, are going to foot the bill for a whole lot more of this sort of promise.

Unlike with the weather, we don't have anything close to as rigorous an understanding of human biology and cultures as we do of the behavior of gases and fluids (the atmosphere).  We might want to say, self-protectively and with more honest modesty, that our use of 'probability' is very subjective and really just means an extrapolated rough average of some unspecifiable sort.  But then that doesn't sound like the glowing promise of 'precision', does it?  One has to wonder what sort of advice would make scientifically proper, and honorable, use of the kind of probabilistic, vague, ephemeral evidence we have when we rely on 'omics approaches, even when that's the best we can do at present.

In meteorology, it used to be (when I was playing that game) that we'd joke "persistence is the best forecast".  This was, of course, for short range, but short range was all we could do with any sort of 'precision'.  We are pretty much in that situation now, in regard to genomics and health.

The difference is, weather forecasters are honest, and admit what they don't know.

Tuesday, August 14, 2018

The Placebome.....can you believe that!

Is it only religion that feeds and reassures the gullible, no matter what catastrophes strike?

When a baby is born with serious health issues, this is apparently the loving God's will (to test the parents' faith; God can, after all, save the baby's soul).  But rather than just blaming God, perhaps one's faith in this same devilish Being, that faith itself, could have curative powers.  At least those powers might extend to the believer him or herself.

When a person's mood ameliorates a disease, yet no formal medical treatment has been involved, that is a psychological effect.  When the person is in a case-control drug trial study, in which s/he has (though unaware of it) been given a sugar pill--a placebo--rather than the drug under test, and that person's health improves anyway, that is called the placebo effect.

It is important when testing a new drug to have a way to determine whether it really does nothing (or, indeed, does harm) rather than having its intended effect.  Since people who are ill might get better or worse for various reasons, a drug trial often compares those patients given the drug with those who are given a placebo.  The drug is considered to be efficacious if it does something rather than nothing--nothing, that is, being what is assumed about the placebo.
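The comparison described above boils down to asking whether the improvement rate in the drug arm exceeds the placebo arm's by more than chance alone would produce.  A minimal sketch with a plain two-proportion z-test--all the counts are invented for illustration:

```python
import math

# Hypothetical trial: improvements out of 100 patients per arm.
drug_improved, drug_n = 60, 100
placebo_improved, placebo_n = 40, 100

p_drug = drug_improved / drug_n
p_placebo = placebo_improved / placebo_n

# Pool the arms under the null hypothesis that the drug does
# nothing beyond what the placebo does.
pooled = (drug_improved + placebo_improved) / (drug_n + placebo_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / drug_n + 1 / placebo_n))
z = (p_drug - p_placebo) / se

print(f"drug {p_drug:.0%} vs placebo {p_placebo:.0%}, z = {z:.2f}")
# |z| > 1.96 is the conventional 5% two-sided significance cutoff.
```

Note what the test assumes: that the placebo arm captures everything except the drug itself--which is precisely the assumption the rest of this post questions.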

But are some unjustified if convenient assumptions being made in this long-used standard comparison as a test of the new drug's efficacy?  Studies including placebo have long been relatively standard, if not indeed mandatory for drug approval.  But how well are the comparisons--and their underlying assumptions--understood?  The answer may not be as obvious as is generally assumed.

Back pain that's a headache
What about this paper by Carvalho et al., in the journal Pain (vol 157, number 12, 2016)?  The authors did a randomized controlled trial of open-label placebos (OLPs), taken in the usual doses for the usual 3 weeks, in patients suffering low back pain.  The authors found clear (that is, statistically significant) reduction in symptoms--even though the 'control' patients knew they were taking a placebo.  Perhaps they still thought they were taking medicine, or perhaps just being in a study seemed to them, somehow, to be a form of care, something positive--that is, systematically better than no treatment.  But this is not supposed to happen, and it relates to a variety of very important, if equally inconvenient, issues about what counts as evidence, what counts as therapy, and so on.

The samples in the Carvalho study were small and one can quibble about the quality of the research if one wants to dismiss it.  (E.g., if it were really true, why wasn't it published in a major journal? Did reactionary reviewers from these journals keep it from being published there?).  Still, if the placebo effect is real, the idea should not be a surprise.  Biologically, there really need be no reason why subjects must be blinded to being given placebos in order for them to work.  

But is it appropriate to ask whether, in a similar way, religious faith might have a placebo effect, and if so, should it be part of case-control studies of new drugs or treatments?  If so, then.....

....some things to consider
Here's an interesting thought:  If the placebo effect is real, then how do we know that actual medicines work?  They may seem better than placebos in comparison studies, but what if a substantial fraction of the treatment effect is due to religious or other beliefs--that is, what if these subjects, too, experience a kind of placebo effect?  Then the case-control distinction is less than one thinks: perhaps, as a result, the efficacy of the medicine is actually substantially less than is credited by the standard kinds of placebo-comparison study.  Perhaps placebo response is part of the case side of the comparison, as well as the control side, and without it the 'case' effect would no longer be significant, or as significant?

If we are doing a placebo-based test of a new drug, should case and control religious or other beliefs be identified, and matched in the two groups?  What about atheists--is that also a comparable faith, or would it serve as a control on such faith?  

Even to acknowledge the possibility that we've under-rated the placebo effect, and over-rated the drugs that we rely on, and that belief systems can have such an effect, raises interesting and important questions.  What if we told a patient that s/he had a placebic genotype, and thus, say, tended to believe everything s/he heard or read?  Would s/he then realize this and stop believing, blocking the placebo effect?  Would not knowing whether s/he were a case or a control actually reduce even the 'case' effect?  Would we tell such people of some meds they could take to 'cure' this placebo-responsive trait?  Would they take them?  These could be interesting areas to explore, though deciding how to do definitive studies would, by the very nature of the subject, not be easy.

And yet. . . .
Of course, scientists being the way they are, there is now a proposed 'placebome' project (Hall et al., Trends in Mol Med, 21 (5), 2015).  The researchers want to search for genomic regions that affect the effect, which, they claim, varies among people--and hence, they assume, must be 'genetic' (this might even be reasonable in principle, but it is way too premature for yet another GWAS project).  Is it as silly, bandwagonish, transparent, and premature a version of unquestioning belief and/or marketing as one can imagine?  I think so--you can, if you wish, of course, look at the paper and judge for yourself.

But even if this is capitalizing on the 'omics fad, a transparent me-too money-seeking strategy that our venal system imposes, that doesn't vitiate the idea that placebic effects could, in fact, be both real and important.  Nor does it rule out that truly thoughtful, systematic ways of investigating their nature, not just some statistical results related to them, would be possible and appropriate.  But to do this, how would such a study be designed?

One thing this all suggests to me is that we may not have defined placebos carefully (or knowledgeably) enough, or don't understand what is going on that could account for a physiological (as opposed to 'merely psychological') effect.  Since we have the embedded notion that science is about material technology, statistics, and so on, perhaps we just don't believe (and that's the right word for it) that things can happen that are not part of our science heritage, which largely derives from reductionist physics.  If we've not looked in a properly designed way for this effect, perhaps we should.  At the very least, there may be much to learn.

But before rushing to the 'omics market, there are interesting questions to ask.  Why aren't religious believers who pray for God's grace generally healthier than non-believers?  Or is there, in fact, a notable but undocumented difference?  Does serious religiosity serve as a placebo in daily life, and if not, why not?  If there are measurable physiological or neural pathways that can be identified during placebic experience, are they potential therapeutic targets?

But there's a deeper more serious question
The fact of placebo effects is generally interesting, but raises an important, very curious issue.   How can a placebo effect work on the diversity of traits for which it has been suggested?  If all a placebic effect does is make you feel better no matter how sick you are, then it's not really placebic in that it doesn't mimic the drug being taken and shouldn't affect the specific disease, just the patient's mood.  But if it can affect the disease, how can that be?

Placebos seem to work alongside many different drugs and treatments, for many physically and/or physiologically different and unrelated disorders.  At least, I think that is what has been reported.  But these involve different tissues and systems.  So how does the patient 'know' which tissue or physiological system to fix--that is, which cell type a real medicine would be targeting--when believing s/he has taken some effective medicine?

I know very little about the placebo effect, and it doubtless shows in this post, to anyone who does.  But I think these are important, or indeed fundamental, questions that include, but go beyond, asking if the effect is real: they ask what the effect could actually be.  Before we untangle these issues, and understand what the placebo effect really is, we should be highly skeptical of any 'omic project claiming that it will map it and find out what genes are responsible for it.  Among other things, as I've tried to point out here, one needs to know what 'it' actually is.  And as regards genetic studies, is there the proper kind of plausibility evidence on which to build an 'omics case: is there, for example, any reason at all to believe the placebo effect is familial?

There is already huge waste of research money chasing 'omics fads these days, while real problems go under-served.  One need not jump on every bandwagon.  If there are real questions here, and there seem to be, then the groundwork needs to be laid before we go genome searching.

Monday, August 13, 2018

Big Data: the new Waiting for Godot

In Samuel Beckett's cryptic play, Waiting for Godot, two men spend the entire play anticipating the arrival of someone, Godot, at which point presumably something will happen--one can say, perhaps, that the wait will have been for some achieved objective.  But what?  Could it simply mean that they can then go somewhere else?  Or, perhaps, there will be no end because Godot will never, in fact, arrive.

A good discussion of all of this is on the BBC Radio 4 The Forum podcast.  Apparently, Beckett insisted that any such answers were in the play itself--he didn't imply that there was some external meaning, such as that Godot was God, or that the play was an allegory for the Cold War--which is one reason the play is so enigmatic.

Was the play written intentionally to be a joke, or a hoax?  Of course, since the author refused to answer, or perhaps even to recognize the legitimacy of the question, we'll never know.  Or perhaps that, in itself, is the tipoff that it really is a hoax.  Or maybe (I think more likely), because it was written in France in 1949, it's an existentialist-era statement of the angst that comes from recognizing that the important questions in life don't have answers.

Waiting for the biomedical Promised Land
That was then, but today we are witnessing real-life versions of the play: things just as cleverly open-ended, with the 'What happens then?' question having only a vague, deferred answer, as in Beckett's title.  And, as in the play, it is not clear how self-aware even some of the perpetrators are of what they are about.

I refer to the possibility that the various Big Data endeavors we are witnessing are unknowingly imitative of the play, as cleverly and cryptically open-ended as the resolution implied to come when Godot arrives.  Big Data 'omics is a current, perhaps all too convenient, scientific version of the play, that we might call Waiting for God'omics.  The arrival of the objective--not really stated, but just generically promised as, for example, 'precision genomic medicine' for 'All of Us'--is absolutely as slyly vague as what Vladimir and Estragon were presumably waiting for.  The genomic Godot will never arrive!

This view is largely but not entirely cynical, for reasons that are at least a bit subtle themselves.

Reaching the oasis, the end of the rainbow, or the Promised Land is bad for business
One might note that if the 'omics Godot were ever to arrive, it would be the end of the Big Data (or should one say Big Gravy?) train, so obviously our Drs Vladimir and Estragon must ensure that such a tragedy--arrival at the promised land, the elimination of all diseases in everyone, or whatever--never happens in real life.  Is there any sense that anyone seriously thinks we would reach resolution of the causes of disease, with precision for all of us, say, and be able (that is, willing) to close down the Big Budget nature of our proliferating 'omictical me-too world?

We have entrenched the search for Godot, a goal so vague as to be unattainable.  Even the proper use of the term 'precision' implies an asymptote, a truth that one never reaches but can get ever closer to.  If we could actually get there, as is implied, we should have been promised 'exact' genomic medicine.  And wouldn't that imply that then, finally, we'd divert the resources towards cures and prevention?

However, even if the perpetrators of the Big Promises never think or aren't aware of it, we must note that the goal cannot be reached even with the best and most honorable of intentions.  Because of births and deaths, and environmental changes, and mutations and recombination, there truly never is the palm-draped oasis at which our venture could cease.  There will never be an 'all' of us, and genetic causation is ever-changing (in part because of the similarly dynamic environment), meaning that there are no such things as risks to be approached with 'precision'.  Risks are changeable and not stable, and indeed not fixed numerical values.  At best, they are collective population (or sample) averages.  So there is never a 'there' there, anywhere.  There is only a different one everywhere.

But awareness of these facts doesn't seem to be part of the 'omicsalyptic promises with which we are inundated.  They seem, by contrast, rote promises that are little if any different from political, economic, or religious promises--if only we do this, we'd get to a Promised Land.  But such a land does not exist.

If we had, say, a real national health system, it would be properly and avowedly open-ended without anyone honorable objecting (if it were done well).  And epidemiologically, of course, there will always be new mutations, recombinations, environments and the like to try to understand--disease with, or without strong genotype-phenotype causation.  There will always be a need for health research (and basic science).  But science, of all fields of human endeavor, should be honest. It should not hold out the promise that Godot will arrive, but in a sense, openly acknowledge that that can never happen.

But this doesn't let off the guilty hook those who are hawking today's implicit Big Data, big open-ended budget promise that by goosing up research now we'll soon eliminate genetic disease (I recall that Francis Collins did indeed, not all that long ago, promise that this Paradise would come soon--um, I think his date was something like 2010!).  It's irresponsible, self-interested promising, of course.  And those in genomics who are intelligent enough to deserve to be in genomics do, or should, know that very well.

Like Vladimir and Estragon, we'll always be told that we're waiting for Godot, and that he'll be coming soon.

NOTE:  One might observe that Godotism is a firmly entrenched strategy elsewhere in our society, for example in regard to theoretical physics, where there will never be a collider big enough to answer the questions about fundamental particles: coming to closure would be as fiscally threatening to physics as it is to the life sciences.  Science is not alone in this, but our society does not pay it nearly enough skeptical heed.

Monday, August 6, 2018

Traffic jams ---> Trophic jams

We live in State College, PA, a small university town.  Well, it isn't nearly as small as it was when we moved here in 1985; Penn State enrollment has gone from around 30,000 when we got here to something like 50,000, and the town has grown to keep up.

How did that happen?  In essence, by sacrificing farm fields, turning them into condo centers, fine suburban-style cardboard 'mansions' with big grassy lots, 2-3 car garages (so everyone could drive a few miles to the nearest grocery), and so on. Even in this fairly small town, during the day, there are cars going through most intersections most of the time, even in the residential tracts.

To get from here to anywhere you need to get on I-80 or I-95, or some other throughway, where there is an endless chain of nearly stationary cars and trucks, hour after hour, mile after mile after mile.  Even when not obstructed by an accident or construction, the traffic is so heavy that it's not at all unusual for very slow, or creeping, or stopped traffic jams to litter the route.

The global traffic jam....
This same situation is happening all over the country, all over Europe, all over Japan and much of India and China.  It is even happening in parts of Africa and Australia.  This is 24/7.  The endless stream of steel, rubber, and petrochemicals is like a river, and as Heraclitus said you can't step into it twice: in no two moments is this same stream actually the same.  The cars and people and their arrangement are different--and, of course, we are never using the same gasoline twice: once used, it is burped into the atmosphere.

When you've been around more than a few decades, you start to realize that the current situation isn't 'normal'.  In a decade or two, or three, you'll look back on today as the good old days you were used to; what will by then have become normal, the jam of all jams, is what's really intolerable.

The traffic jam is, of course, due to the unconstrained growth of population, and its per capita consumption.  And this traffic jam, in turn, will have its longterm side effects in terms of the resources it uses up.  And that is going to lead to another kind of jam.

The global trophic jam
As we pave and build condos and shopping malls over what has for millions of years been millions of acres of fertile land, we reduce the potential food production for us and other creatures, plant and animal.  Our sewage and waste claim more in water and land areas.  And we seem unable to prevent there always being more of us.  That means more paving, more building, and altered climate.  This means less fertile land for growing food--a trophic jam.

Climate is changing, and at least some of this is due to human-induced global warming.  Ostrich-like deniers, note: climate change is happening regardless of why!  This will moisten some arid lands, but even more it will dry out currently fertile lands, in large amounts.  It will raise water levels on coasts and in rivers.  Because, since before the industrial age, settlements--now cities--were built on waterways for trade and so on, many or even most major cities will be threatened by rising water.  This will drive people inland, to cover over even more arable land.  Those living inland should realize that they will not be able to keep this inrush out.

Some areas, like Northern Canada perhaps, will become wetter.  But hardly anybody lives there.  Other areas, the rich farmlands, will become drier and likely many will become arid.  Nations that rely on those lands for feeding their people or for trade will have to look elsewhere--and if all of human history is any guide, this trophic jam will inevitably lead to attempts at military conquest.  If the breadbasket has shifted, say, from the US to Canada, and there's real food pressure, does anyone doubt that military expeditions will head northward?

Our relentless, unconstrained traffic jams are irritating, especially to the impatient (like me) or those who want to spend time with their families, or bowling, rather than sitting in traffic.  But these headaches may be dooming us to stomach aches--the kind one gets when there isn't enough food.

One can be a climate-denying ostrich, or a rosy believer in science and engineering, but when we here, and many others who know much more than we do, are making these warnings, we are not all Chicken Littles.  Yet, like a swarm of lemmings, we are headed for the cliffs.

Apparently, today at least, we can't see, or don't care to see, the connections between traffic and trophic jams.

Saturday, August 4, 2018

On Montaigne's cat

The person who, in a sense, invented the blog way back in 1580, in the form of his meandering Essays, was Michel de Montaigne.  He rambled across much of the territory of human thought, opining, suggesting, hinting, retreating and, well, just musing, often rather incoherently.  Isn't that how almost all modern blogs--this one included--are?!

Sadly for him, Montaigne couldn't Tweet his frequent 'posts', but he did Meow one.  In an oft-quoted part of his 'An Apology for Raymond Sebond', Montaigne muses about the arrogantly vain and presumptuous way that we judge our own uniqueness, in particular relative to other species.  In a famous passage, he writes:

"When I play with my cat, how do I know that she is not passing time with me rather than I with her?"

  "Am I not a 'me'?"  Our own Mu (drawing by Anne Buchanan)

Humans routinely, conveniently, ignore the thought.  It is not in our self-interest.  Indeed, by now our cultural legacy comes from the often obscure writing of René Descartes who, at least about himself, recognized "I think, therefore I am."  But, apparently, a cat doesn't, so isn't.  By turning other creatures into automatons, mere machines, in the period that laid the foundation for modern science, Descartes' objectifying dogma opened justification not only for raising or hunting animals for our tables with a clear conscience, but also for the diverse experimentation that we do on uncountably many laboratory animals (indeed, the story with plants and their sense of self-awareness is becoming more complex, but that is too disturbing to think about).

Mea culpa!
I am personally heavily burdened by the thought of what I did over decades of research to countless mice.  Wholly innocent of any offense, they suffered the ultimate mortal penalty, so we could see what genes were expressed in their unborn young's teeth, or model effects on their craniofacial development, or even, unforgivably perhaps, so we could decide when they had grown too old, their lives (to us) no longer worth living, and 'sacrifice' them.  No Viagra relief, retirement centers, hearing aids, etc. for them!

We were once forced to 'euthanize' (gas to death) a large number of laboratory mice, males, females, and young.  This was done in the usual 'humane' and research-ethics-approved way.  Deep in their sacrificial tank, as the hissing N2O began and the mice sensed the lack of air, they grouped tightly together in a terrified death huddle, young pressed against their mothers, a huddle that, as I watched, reminded me of images of Hitler's death-showers.  I will never forget that, though it was entirely within the standard accepted IACUC protocols by which 'we' manage and manipulate 'them'.  They're just things after all.....aren't they?  Of course, if so, why do we bother with any sort of 'humane' treatment?  Or, if they're like us, why are we allowed to manipulate them, often to their terror and suffering?

We smugly let chimpanzees retire comfortably to senior centers (e.g., Chimp Haven, in Louisiana).  Why?  Because they are like us!  But other animals, even rhesus monkeys, are merely them.  Their lives are disposable.

Fortunately, for doubters at least, there is no after-world in which justice will be served to us, or where we might ask Him (She? It? Them?) why life was created as a food chain in which each depends on one or another form of this sort of savagery just for survival.

The science question
All of this is confession in the side booth, but it does raise the important question that bemused Montaigne: what is the 'me' of a cat like, compared to my own 'me'?  Can we ever know?  Scholars have long mused over what the nature of consciousness might be and how we could ever know it.  When the detached, mechanistic Descartes said, metaphorically, 'I think, therefore I am', in the realm of consciousness he was 'thinking' in an exclusive way.

Frans de Waal, a prominent primate-watcher, has argued in a convincing way that 'thought' as we would casually use the term, doesn't really require language--doesn't have to be just the way you, right now, are doing it, to exist in every meaningful sense.

Of course, consciousness and its causative or even phenomenological nature has always been, and still is, essentially elusive.  I think and I am....but how?  How does wiring among a huge bunch of neurons lead to the meta-phenomenon of self-awareness?  Or, since clearly cats and even bugs are self-aware in some senses, and many if not all animals have similar genomes and neural structures and wiring, why don't they, too, have the same sense?  Is there such a thing as a lesser sense of 'me'?  How could we know, and more importantly, on what basis can we assert that they don't really have It?

Many have opined that science is the specifically objective endeavor by which we, operating from the inside (of our own heads), assess the way the outside world works.  If so, then science can't be expected to look inside the inside, from the inside, so to speak; perhaps consciousness is a literally subjective phenomenon that we experience but cannot examine by what we call 'science'.  Further, we assume that it--whatever it is--is also experienced by (at least some of our more decent) human fellows.

If consciousness is in reality an internal experience that is out of bounds for the essentially external purview of science, then we may relate our own, and describe it as each of us sees it in others, from the outside, but we can't really understand it objectively.  If so, it would simply be out of bounds, inappropriate for science.  Many dabblers have tried to get around these obvious limitations, documenting all sorts of externally observed 'neural correlates', and in the same sense that a bullet through the head ends the phenomenon, these observations may reflect much about its objective nature.  But since consciousness is inherently about the experience, whatever the wiring, these correlates are, so far at least, just that--correlates.

Then how can we pronounce about other species?
Given this, what justifies the Cartesian convenience by which we blithely judge that they, not even cats, don't have 'it'?  Or is it just a more profound kind of convenience, namely, that we want them--other species--to be 'things' so that we, with our self-declared special powers, can control their lives and even eat them?  Is that different from the view wasps and tigers must essentially have of their prey?  Or is there such a thing as 'partial' or 'lesser' consciousness, compared to ours--as opposed simply to a different kind of consciousness, for example, not based on symbolic language as ours is?

Mammals, like our cat and dog friends--and even birds--have genotypes very similar to ours.  They have very similar cellular and anatomical structures, and neural wiring, to ours.  Their behaviors are very similar to ours.  They communicate in ways quite similar to ours except, perhaps, that it is more by stereotypical signaling than abstract symbols.  But we presume to dismiss their particular internal experiences as being mechanical, that is, fundamentally different from ours.

Is our declaration that they are just machines, or at least don't really have 'it', more than our particular convenient, self-interested rationale for doing what we like to them?

Does a cow in the slaughterhouse queue ever ask, in a fashion we would completely recognize were we to experience it, 'What's this?  Why me?'  Does a cat ever wonder 'What is it like to be a human?'  I ask what it is like, what it seems and feels like, to be a laboratory mouse enjailed in a tiny cage.  Or to be gassed to death, at our convenience.  Montaigne's question is as cogent today as it ever was:

"When I play with my cat, how do I know that she is not passing time with me rather than I with her?"

Our cats (and chipmunk).  Drawings by Anne Buchanan.  For more of her fantastic artwork, see
Left, center: are they not 'me's?  Right: aren't cat and chipmunk 'me's?

Tuesday, July 31, 2018

Thinking about science upon entering the field. IV. Finale

Here is the fourth and final of a four-part series of posts by Tristan Cofer, a graduate student in chemical ecology here at Penn State.  He has been thinking about the profession he is being trained for, and the broader setting in which it is taking place, and into which he will have a place:

For my final entry in this series, I would like to revisit some ideas from my earlier posts, as they pertain to a book that I recently finished, called ‘What is Real?’ (Basic Books, 2018) by Adam Becker. The book recounts quantum theory’s formative years during the early twentieth century, focusing as much on the personalities that were involved in the theory’s development as on the science itself.

Becker devotes much of the book to the 1927 Solvay Conference, which gathered twenty-nine of the world’s leading physicists to discuss the newly formulated theory. Attendees at the conference were divided into two ideologically distinct groups. In the majority were Werner Heisenberg, Max Born, and others who had adopted Danish physicist Niels Bohr’s ‘Copenhagen interpretation’.

Influenced by Heisenberg’s ‘uncertainty principle’, Bohr claimed that subatomic entities had ‘complementary’ properties that could never be measured at the same time. Electrons, for example, behaved like ‘particles’ or ‘waves’ depending on the experiment. To Bohr, this implied that electrons, photons, and other subatomic entities had only probabilities, not definite properties, until they were measured. ‘Reality’ simply did not exist in the quantum world. It was therefore pointless to talk about what was happening on the quantum level, since quantum theory could not describe the way the world ‘is’.

On the other side of the aisle were Louis de Broglie, Erwin Schrödinger, and Albert Einstein who were adamant that physical systems were ‘real’ whether we acknowledged them or not. Led by Einstein, this group argued that although considerable advances had been made in developing quantum theory, it was hardly complete. Rather than do away with reality at the quantum level, Einstein et al. suggested that hidden processes, such as de Broglie’s ‘pilot waves’, could explain apparent contradictions such as wave–particle duality.

In the end, Bohr’s instrumentalist view won the day over Einstein’s realist one. Quantum mechanics was declared a closed theory, no longer susceptible to change. Einstein and his supporters were largely ignored, and Einstein himself was painted as an out-of-touch curmudgeon who simply would not accept the new theory. At least that is how the story has been told over the past several decades. Becker, however, gives a slightly different account. He argues that the Copenhagen interpretation’s popularity had less to do with its epistemological value than with the cult of personality surrounding its architect, Niels Bohr.

Bohr was a ‘physicists’ physicist’ and the preeminent scientist of his time. In contrast to Einstein (who described himself as a ‘one-horse cart’), Bohr collaborated with other physicists throughout his career and mentored many others at his institute in Copenhagen, where he enjoyed considerable financial support from the Danish government. According to Becker, Bohr’s social influence, together with the convoluted and sometimes confusing way that he expressed himself, led many to revere him as a near mythical figure. Indeed, in one particularly telling passage, Becker quotes Bohr’s former student John Archibald Wheeler, who compared Bohr to ‘Confucius and Buddha, Jesus and Pericles, Erasmus and Lincoln’.

‘What is Real?’ serves as an important cautionary tale. While we want to believe that science advances only through its devotion to empirical fact, many ‘facts’ are decided upon not by what they say, but by who says them. We each belong to a ‘thought collective’ with fixed ideas that prevent us from seeing things objectively. Competing ideologies are quickly swept under the rug and forgotten. Indeed, in my experience, I have found that students are rarely exposed to the histories and philosophies that have shaped their respective disciplines. Do we all have our own ‘Copenhagen interpretation’, firmly enshrined in a scaffolding of tradition and convenience? I suspect that we do. To borrow a line from Daniel C. Dennett’s ‘Darwin’s Dangerous Idea’: ‘There is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination’.

Sunday, July 15, 2018

The problems are in physics, too!

We write in MT mainly about genetics and how it is used, misused, perceived, and applied these days.  That has been our own profession, and we've hoped to make cogent critiques that (if anybody paid any attention) might lead to improvement.  At least, we hope that changes could lead to far less greed, costly herd-like me-too research, and false public promises (e.g., 'precision genomic medicine')--and hence to much greater progress.

But if biology has problems, perhaps physics, with its solid mathematical foundation for testing theory, might help us see ways to more adequate understanding.  Yes, we had physics envy!  Surely, unlike biology, the physical sciences are at least mathematically rigorous.  Things in the physical cosmos are, as Newton showed in his famous Principia Mathematica, replicable: make an observation in a local area, like your lab, and it will apply everywhere.  So, if the cosmos has the Newtonian property of replicability, and the Galilean property of laws written in the language of mathematics, properties that were at the heart of the Enlightenment-period foundation of modern science, then of course biologists (including even the innumerate Darwin) have had implicit physics envy.  And for more than a century we've thus borrowed concepts and methods in the hopes of regularizing and explaining biology in the same way that the physical world is described.  Not the least of the implications of this is a rather deterministic view of evolution (e.g., of force-like natural selection) and of genetic causation.

This history has, we think, often reflected a poverty of better fundamental ideas specific to biology.  Quarks, planets, and galaxies don't fight back against their conditions, the way organisms do!  Evolution, and hence life, are, after all, at the relevant level of resolution, fundamentally based on local variation and its non-replicability.  Even Darwin was far more deterministic, in a physics-influenced way, than a careful consideration of evolution and variation warrants--and the idea of 'precision genomic medicine', so widely parroted by people who should know better (or who are faddishly chasing funds), flies in the face of what we actually know about life and evolution, and the fundamental differences between physics and biology.

Or so we thought!
Well, a fine new book by Sabine Hossenfelder, called Lost in Math, has given us a reality check if ever there was one.

In what is surely our culpable over-simplification, we would say that Hossenfelder shows that at the current level of frontier science, even physics is not so unambiguously mathematically rigorous as its reputation would have us believe.  Indeed, we'd say that she shows that physicists sometimes--often? routinely?--favor elegant mathematics over what is actually known.  That sounds rather similar to the way we favor simple, often deterministic ideas about life and disease and their evolution, based on statistical methods that assume away the messiness that is biology.  Maybe both sciences are too wedded to selling their trade to the public?  Or are there deeper issues about existence itself?

Hossenfelder eloquently makes many points about relevant ways to improve physics, and many are in the category of the sociology or 'political economics' of science--the money, hierarchies, power, vested interests and so on.  These are points we have harped on here and elsewhere, in regard to the biomedical research establishment.  She doesn't even stress them enough, perhaps, in regard to physics.  But when careers, including faculty salaries themselves, depend on grants and publication counts, and when research costs (and the 'overhead' they generate) are large and feed the bureaucracy, one can't be surprised at the problems, nor that as a result science itself, the context for these socioeconomic factors, suffers.  Physics may require grand-scale expenses (huge colliders, etc.), but genetics has been playing copy-cat for decades now, in that respect, entrenching open-ended Big Data projects.  One can debate--we do debate--whether this is paying off in actual progress.

Science is a human endeavor, of course, and we're all vain and needy.  Hossenfelder characterizes these aspects of the physics world, but we see strikingly similar issues in genomics and related 'omics areas.  We're sure, too, that physicists are like geneticists in the way that we behave like sheep relative to fads, while only some few are truly insightful.  Perhaps we can't entirely rid ourselves of the practical, often fiscal distractions from proper research.  But the problems have been getting systematically and palpably worse in recent decades, as we have directly experienced.  This has set the precedent and pattern for strategizing science, to grab long-term big-cost support, and so on.  Hossenfelder documents the same sorts of things in the physics world.

Adrift in Genetics
In genetics, we do not generally have deterministic forces or causation.  Genotypes are seen as determining probabilities of disease or other traits of interest.  It is not entirely clear why we have reached this state of affairs.  For example, in Mendel's foundational theory, alleles at genes (as we now call them) were transmitted with regular probabilities, but once inherited their causative effects were deterministic.  The discovery of the genetics of sexual reproduction, one chromosome set inherited from each parent, and one set transmitted to each offspring, showed why this could be the case.  The idea of independent, atomic units of causation made sense, and was consistent with the developing sciences of physics and chemistry in Mendel's time as he knew from lectures he attended in Vienna.
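The Mendelian scheme just described can be sketched in a few lines of Python.  This is a toy example of our own, with a hypothetical dominant allele 'A', not Mendel's actual data: transmission of alleles is probabilistic (each parent passes one of its two at random), but once a genotype is inherited, its effect on the trait is deterministic.

```python
import random

def offspring_genotype(parent1, parent2):
    # Probabilistic transmission: each parent passes one allele with probability 1/2.
    return random.choice(parent1) + random.choice(parent2)

def phenotype(genotype):
    # Deterministic causation: any copy of the dominant allele 'A' yields the trait.
    return "dominant" if "A" in genotype else "recessive"

random.seed(1)
# An Aa x Aa cross, as in Mendel's F2 generation.
offspring = [offspring_genotype("Aa", "Aa") for _ in range(10000)]
dominant_fraction = sum(phenotype(g) == "dominant" for g in offspring) / len(offspring)
print(round(dominant_fraction, 2))  # close to the classical 3/4
```

The only chance in this sketch is in which allele gets transmitted; the genotype-to-trait step has no probability in it at all, which is just the distinction drawn above.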

However, Mendel carefully selected clearly segregating traits to study, and knew that not all traits behaved this way.  So an 'atomic' theory of biological causation was in a sense following 19th-century advances (or fads) in the physical sciences, and was in that sense forced onto selected data.  It was later used to rationalize non-segregating traits by the 'modern evolutionary synthesis' of the early 1900s.  But it was a theory that, in a sense, 'atomized' genetic causation in a physics-like way, with essentially the number of alleles being responsible for the quantitative value of a trait in the organism.  This was very scientific in the sense of science at the time.

Today, by contrast, the GWAS approach treats even genetic causation itself, not just its transmission, as somehow probabilistic.  The reasons for this are badly under-studied and often rationalized, but might in reality be at the core of what would be a proper theory of genetic causation.  One can, after the fact, rationalize genotype-based trait 'probabilities', but this is in deep ways wrong: it borrows from physics the idea of replicability, and then equates retrospective induction (the results in a sample of individuals with or without a disease, for example) with prospective risks.  That is, it tacitly assumes a kind of gene-by-gene deterministic probability of causation.  One deep fallacy in this is the idea that a gene's effects can be isolated, when genes are in themselves inert: only by interacting do DNA segments 'do' anything.  Far worse, epistemologically worse if not fatal, is that we know that future conditions in life, unlike those in the cosmos, are not continuous, deterministic, or predictable.

That is, the extension of induction to deduction is tacitly assumed in genomics, but it is an unjustified convenience.  Indeed, we know the prevalence of traits like stature or disease changes with time, along with literally unpredictable future lifestyle exposures and mutations.  So assuming a law-like extensibility from induction to deduction is neither theoretically nor practically justifiable.

But to an extent we found quite surprising, being naive about physics, what we do in crude ways in genetics much resembles how physics rationalizes its various post hoc models to explain the phenomena outlined in Hossenfelder's book.  Our behavior seems strikingly similar to what Lost in Math shows about physics, but perhaps with a profound difference.

Lost in statistics
Genetic risk is expressed statistically (see polygenic risk scores, e.g.).  Somehow, genotypes affect not the inevitability but the probability that the bearer will have a given trait or disease.  Those are not really probabilities, however, but retrospective averages estimated by induction (i.e., from present-day samples that reflect past experience).  Only by equating induction with deduction, and averages with inherent parameters that take the form of probabilities, can we turn mapping results into 'precision' genomic predictions (which seems to assume, rather nonsensically, that the probability is a parameter that can be measured with asymptotic precision).

For example, if a fraction p of people with a given genotype in our study have disease x, there is no reason to think that they were all at the same 'risk', much less that in some future sample the fraction will be the same.  So in what sense, in biology at least, is a probability an inherent parameter?  And if it isn't, what is the basis for equating induction with deduction, even probabilistically?
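A toy simulation, ours and with entirely made-up numbers, makes the point concrete: the same observed fraction p can arise from radically different underlying individual risks, so p alone is not an inherent per-person parameter.

```python
import random

# Two hypothetical groups with the same observed disease fraction p of about 0.1:
# in one, every carrier truly has risk 0.1; in the other, 10% of carriers are
# essentially certain to be affected while the rest essentially never are.
# A retrospective sample cannot distinguish the two, yet a prospective
# 'precision' prediction for an individual would mean entirely different
# things in each.

random.seed(0)
n = 100_000

# Group 1: everyone at the same true risk of 0.1.
homogeneous = sum(random.random() < 0.1 for _ in range(n))

# Group 2: the first 10% at risk 1.0, the rest at risk 0.0.
risks = [1.0 if i < n // 10 else 0.0 for i in range(n)]
heterogeneous = sum(random.random() < r for r in risks)

# Both observed fractions come out near 0.10.
print(round(homogeneous / n, 2), round(heterogeneous / n, 2))
```

The retrospective average is identical in the two groups; only the (unobservable) individual-level risk structure differs, which is exactly why equating the sample fraction with a per-person probability is a leap.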

There is, we think, an even deeper problem.  Statistics, as we bandy the term about, is historically largely borrowed from the physical sciences, where sampling and measurement issues affect precision--and where, we think profoundly, phenomena are believed to be truly replicable.  We'd like to ask Dr Hossenfelder about this, but we, at least, think that statistics developed in physics largely to deal with measurement issues when rigorous deterministic parameters were being estimated.  Even in quantum physics, probabilities seem to be treated as true underlying parameters, at least in the sense of being observational aspects of measuring deterministic phenomena (well, don't quote us on this!).

But these properties are precisely what we do not have in biology.  Biology is based on evolution, which is inherently based on variation and its relation to local conditions over long time periods.  This does not even consider the vagaries of (sssh!) somatic mutation, which makes even 'constitutive' genotypes, the basic data of this field, an illusion of unknowable imprecision (the genotype differs uniquely with individual, age, tissue, and environmental exposure).

In this sense, we're also Lost in Statistics.  Our borrowing of scientific notions from the history of the physical sciences, including statistics and probability, is a sign that we really have not yet developed an adequate, much less mature, theory of biology.  Physics envy, even if physics were not Lost in Math, is a result of the course of science history, a pied piper for the evolutionary and genetic sciences.  It is made worse by the herd-like behavior of human activities, especially under the kinds of careerist pressures that have been built into the academic enterprise.  Yet the profession seems not even to recognize this, much less seriously address it!

Taking what we know in biology seriously
The problems are real, and while they'll never be entirely fixed, because we're only human, they are deeply in need of reform.  We've been making these points for a long time in relation to genetics, but perhaps naively didn't realize that similar issues affect fields of physics that appear, at least to the outsider, much more rigorous.

Nonetheless, we do think that the replicability aspects of physics, even with its frontier uncertainties, make it more mathematically--more parametrically--tractable compared to evolution and genetics, because the latter depend on non-replication.  This is fundamental, and we think suggests the need for really new concepts and methods, rather than ones essentially borrowed from physics.

At a higher and more profound, but sociological, level, one can say that the research enterprise is lost in much more than math.  It will never be perfect; perhaps it can be improved, but that may require much deeper thinking than even physics requires.

This is just our view: take a serious look at Hossenfelder's assessment of physics, and think about it for yourself.