Thursday, October 2, 2014

Ignore this study!

A piece in the New York Times reports that a new study shows that working long hours causes type 2 diabetes, but only in people of lower socioeconomic status.  The study is a meta-analysis of 4 previously published studies plus 19 unpublished studies, and is published in The Lancet Diabetes & Endocrinology ("Long working hours, socioeconomic status, and the risk of incident type 2 diabetes: a meta-analysis of published and unpublished data from 222 120 individuals," Kivimäki et al.).
Type 2 diabetes, characterised by hyperglycaemia and insulin resistance or insulin insufficiency, causes substantial disease burden. Globally, more than 285 million people have type 2 diabetes, and its prevalence is predicted to increase to 439 million by 2030. The findings from prospective cohort studies show that working long hours is associated with factors that contribute to diabetes, such as unhealthy lifestyle, work stress, sleep disturbances, and depressive symptoms. Working long hours is also associated with an increased risk of cardiovascular disease, which is one of the complications of type 2 diabetes. However, the direct association between long working hours and incident type 2 diabetes has been assessed in only a few studies.
And they found that "In this meta-analysis, the link between longer working hours and type 2 diabetes was apparent only in individuals in the low socioeconomic status groups."  People of low SES who worked more than 55 hours a week were at a 30% higher risk of developing type 2 diabetes than those who worked 35-40 hours a week.

 

October 24, 1940: The Fair Labor Standards Act of 1938’s mandate of a 40-hour work week with time-and-a-half overtime pay for hours of work beyond that goes into effect. The legislation was passed to eliminate “labor conditions detrimental to the maintenance of the minimum standard of living necessary for health, efficiency, and the general well-being of workers.”


Which means, of course, that working long hours doesn't in fact have a direct effect, but is correlated with something else that's associated with living life in the lower socioeconomic strata.  Or at best working long hours exacerbates the effect of that unidentified, confounding variable, or variables.  The authors note that they adjusted for age, sex, obesity and physical activity, and excluded shift workers, and that the effect of long working hours was independent of these variables.  But, while they wonder what it is about working long hours that might be causal, they do also note that long working hours may be mediating the effect of some unknown variable.  

Indeed, there are many differences between socioeconomic strata that might be associated with risk of T2D, including ethnicity and genetic predisposition, diet, maternal health during pregnancy, type of job and thus pay scale, and other possible risk variables associated with poverty.  So it's interesting that, for what looks to us like an inconclusive study that suggests but doesn't identify confounding variables associated with risk of type 2 diabetes, the NYT piece emphasizes the effect of long working hours only, though acknowledging that this is associated with depression, sleep deprivation, and unhealthy lifestyle, which may be causal.  They do not, though, question why that would be only in poor people.

Curious that the lead author is quoted in the NYT on T2D prevention this way:
“My recommendation for people who wish to decrease the risk of Type 2 diabetes,” he said, “applies both to individuals who work long hours and those who work standard hours: Eat and drink healthfully, exercise, avoid overweight, keep blood glucose and lipid levels within the normal range, and do not smoke.”
So, basically, he's saying ignore this study.

And here's another study we might want to ignore
Steven Salzberg at Forbes alerts us to another work-related danger.  Standing up at our desks is said to be good for our health.  Indeed, it's said even to reduce risk of type 2 diabetes (and presumably the benefit increases the longer we stand and work, unless of course we're lower class, in which case the longer we stand, the more likely we'll get type 2 diabetes).  The mechanism has now been found -- you know how telomere length (the length of the ends of our chromosomes) is associated with longevity?  The longer they are, the longer we live, right?  Well, it seems that standing up lengthens telomeres!

Telomeres; Wikipedia


An RCT (randomized controlled trial) in the British Journal of Sports Medicine ("Stand up for health—avoiding sedentary behaviour might lengthen your telomeres: secondary outcomes from a physical activity RCT in older people," Sjögren et al.) reports that the less time their subjects spent sitting, the greater the lengthening of their telomeres six months from the inception of the study. "Reduced sitting time was associated with telomere lengthening in blood cells in sedentary, overweight 68-year-old individuals participating in a 6-month physical activity intervention trial."

But, as Salzberg points out, this result is based on 12 individuals, and in fact only 2 of these people showed a marked effect, and probably drove the results.  And what do blood cell telomeres have to do with diabetes or longevity, unless this has to do with the immune system?  We ask because red blood cells have no chromosomes, and white blood cells are basically part of the immune system.  So are some discarded cells sloughed off from other tissues (and hence no longer being used by the body). Where is a plausible mechanism, unless it has to do with lifestyle correlates of those who choose stand-up careers?  The authors owe us at least some explanation. Or is it the journal that owes its readers an explanation for why it published such a paper?

Anything can get published
There are good studies and there are bad studies.  Even good (legitimate) findings can be falsely attributed to some measured putative cause without sufficient justification.  The publication and promotion of loose, over-interpreted, over-sold studies is one reason that we don't know which foods are good for us and which are bad.  But the problem is deeper than that -- reductionist science, which aims to identify single causal factors for complex diseases, no matter how well done the study and expert the analysis, is simply the wrong approach to understanding complexity.  It is systematically misleading.

Why these reports keep flowing is understandable in our news-cycle media culture.  But when the bottom line is basically that a reasonable, moderated, balanced lifestyle is the best and almost the only reliably known way to defer many chronic diseases, it's strange that scientists themselves can't see the relative nonsense they are purveying.

Wednesday, October 1, 2014

Nature or nurture? Tristram Shandy weighs in!

It is only because of our very casual and cursory attention to history that we credit Charles Darwin, in 1858, with showing us that every aspect of our natures is due, entirely and with infinite determinism, to natural selection fine-tuning our genomes.  We like heroes and because we're scientists, we want the heroes to be other scientists (so we can liken ourselves, and our own inherent brilliance, to those heroes).  We dismiss philosophers and historians of science as meddlers in our business if they do not cling to the mythologies we prefer about ourselves (and our inherent brilliance).  But if we're really scientists, and truly truth-seekers, we must bow to discoveries that undermine our self-flattering tales.  It hurts, it really hurts, if the discovery shows that the pioneers of our field were, in fact, religious or, worse, totally un-versed in the lore of science and the pursuit of objective fact.

But such is the sad reality about the fact of complete genetic determinism.

The opinions and discoveries of Laurence Sterne (as expressed by Tristram Shandy)
In 1761-3, the Rev. Laurence Sterne published his study of environmental determinism, called The Life and Opinions of Tristram Shandy, Gentleman.

Laurence Sterne (1713-68); image from Google images
The book is written as a narrative of his life, by the title character.  Tristram notes that at the very moment of the act that led to his conception Mrs Shandy blurted out "Pray my Dear, have you not forgot to wind up the clock?"  About that question her husband, though a man of exceedingly regular habits, replied "Good G..!  Did ever woman, since the creation of the world, interrupt a man with such a silly question?"  The clock in question is shown in the background of this figure; in the foreground is a depiction of the use of forceps in deliveries (which crushed Tristram's nose during his birth).

The Shandy home (and the clock), from 1761 edition.  By Wm Hogarth, from Wikipedia images

Here is how Tristram describes the lifelong impact of his mother's ill-timed distraction:
I wish either my father or my mother, or indeed both of them, as they were in duty both equally bound to it, had minded what they were about when they begot me; had they duly consider'd how much depended upon what they were then doing;--that not only the production of a rational Being was concerned in it, but that possibly the happy formation and temperature of his body, perhaps his genius and the very cast of his mind;--and, for aught they knew to the contrary, even the fortunes of his whole house might take their turn from the humours and dispositions which were then uppermost;--Had they duly weighed and considered all this, and proceeded accordingly,--I am verily persuaded I should have made a quite different figure in the world, from that in which the reader is likely to see me.--Believe me, good folks, this is not so inconsiderable a thing as many of you may think it;--you have all, I dare say, heard of the animal spirits, as how they are transfused from father to son, & &--and a great deal to that purpose:--Well, you may take my word, that nine parts in ten of a man's sense or his nonsense, his successes and miscarriages in this world depend upon their motions and activity, and the different tracks and trains you put them into, so that when they are once set a-going, whether right or wrong, 'tis not a half-penny matter,--away they go cluttering like hey-go mad; and by treading the same steps over and over again, they presently make a road of it, as plain and as smooth as a garden-walk, which, when they are once used to, the Devil himself sometimes shall not be able to drive them off it.

. . . . .

--Then, let me tell you, Sir, it was a very unseasonable question at least,--because it scattered and dispersed the animal spirits, whose business it was to have escorted and gone hand in hand with the Homunculus, and conducted him safe to the place destined for his reception. . . . . Now, dear Sir, what if any accident had befallen him in his way alone!--or that through terror of it, natural to so young a traveller, my little Gentleman had got to his journey's end miserably spent;--his muscular strength and virility worn down to a thread;--his own animal spirits ruffled beyond description,--and that in this sad disorder'd state of nerves, he had lain down a prey to sudden starts, or a series of melancholy dreams and fancies, for nine long, long months together.--I tremble to think what a foundation had been laid for a thousand weaknesses both of body and mind, which no skill of the physician or the philosopher could ever afterwards have set thoroughly to rights.

. . . . .

That I should neither think nor act like any other man's child:--But alas! continued he, shaking his head a second time, and wiping away a tear which was trickling down his cheeks, My Tristram's misfortunes began nine months before ever he came into the world.
The surprise is that more than two centuries ago at least some views were directly contrary to the predominant view these days--the view, reflected in the way so much of our research resources are currently committed, that genes rather than experience are what (also from the moment of conception) make us what we are.  But not everyone back then shared Sterne's view!

Things were debated even back then!
Last week we noted that in 1862, just after Darwin's Origin of Species, the novelist Wilkie Collins expressed the debate between Nurture advocates and their Nature foes as to which was responsible for our behavior. This was exactly a century after Sterne's Nurture view just described.  Sterne had no notion of 'genes' and Tristram didn't attribute his nature to inheritance, which would put him in the Nurture category, even if at the extreme (being parentally distracted in flagrante delicto defined the imminent conceptus's entire future).

It wasn't long thereafter that the power of inheritance was debated, in little-remembered works that applied to the same behavioral characteristics discussed in fiction.  Almost half-way between Sterne and Collins, in 1808, one M. Portal, a French professor of medicine, published "Considerations on the Nature and Treatment of some hereditary or Family Diseases" (London Med. Phy. Journal, 21, 229–239, 281–296).  Members of the upper classes (at least) were concerned that they might sully their noble posterity by transmitting disease, especially mental disease, from parent to offspring.  Many traits were known to be transmitted; as Montaigne is quoted by Portal, "We find that not only are the marks of the body transmitted from father to son, but also a resemblance of temper, complexion, and inclinations of the mind."

A few years later, in 1814, a British physician named Joseph Adams took Portal's work to task, arguing that there were other ways that traits could cluster in families.  He, too, was speculating, in that he hadn't the kinds of precision or systematic analytic methods we have today.  But he carefully pointed out that life-experience, infection, and other causes could account for such clustering.  Diseases present at birth, for example, were more likely to be hereditary than diseases that, even if similar among relatives, only appeared later.  He noted that an inherited predisposition could lead to a disorder only after experiencing some environmental or life-style factor.  He was particularly interested to calm down those in the upper classes who were worried that behavioral traits ('madness') were inherited.  Additionally, Adams explicitly anticipated many of Darwin's ideas about evolution by natural selection, but that's unrelated to our topic today.  We've discussed Adams' book on MT before.

Joseph Adams' book, 1814

The bottom line, from these instances that I just happened to know about, is that both literature and science show that in the post-Enlightenment era we often think of as the Age of Science, western culture has long known of what appeared to be inherited traits, even if 'genes' per se weren't yet known of; and yet it was also clear that experience and living conditions could not only generate traits but generate family resemblance of traits.

But nobody knew the how, when, or why of the two kinds of cause, and without knowing units of causation in either life-experience or genetics, we had simply to guess or speculate about these things. The 20th century gave us Mendelian patterns to look for in families as signatures of genetic causation, but when we realized early on that many genes could generate similar traits it was known (to those who cared to recognize that fact) that Mendelian patterns were not needed even in 'genetic' traits.

We have almost the same level of confusion, mix, debate, and lack of clarity today, centuries later in our amped-up Age of Technical Science.  We now throw around terms like 'interaction' without, usually, having much direct sense of what we are actually referring to.  Sometimes, the contorted way we describe our ideas of causation seems not much different from the way Tristram Shandy did--only that was satire and we seem to be deadly serious!

Something's missing.


After-note:  If you haven't read Tristram Shandy, and are interested in more than just reading the flood of science articles in the journals that jam up your mail box every day, Sterne's book is a good, if wacky, romp through sense and nonsense.  Reading it takes patience, since it intentionally doesn't always flow in one direction, but it's no more contorted and obscure than those same science articles.

Tuesday, September 30, 2014

Notes for a late-summer's day.....

Well, it's a slow day, one of the last warm ones for the year, and I'm about to go for a bike ride, but before I go there are a few little things I'll quickly comment on.

One is the banner headline in the 12 September Science about a huge aquatic dinosaur, proclaiming "Giant dinosaur was a terror of Cretaceous waterways!" The Hollywood image of a massive terror is illustrated in the story; see below.  Wow! One need not doubt that this was a nasty beast and a terror to its prey.  But this is purportedly a science journal, not a Hollywood ad vehicle.  Most species have predators that prey on them, and for those species, the predators are surely terrors.  In this case, those relatively few species big enough to be seen by and yet not so agile as to escape from the giant dinosaur would be those experiencing terror.  And, in fact, this huge predator of the waterways probably left the vast majority of waterway species alone and was not a terror for them at all!  The same applies to the Cretaceous as to any period in the history of life.


Artist's imagination, from the Science article, from their website

We can wonder at how such a huge monstrosity of a species could evolve, much less swim, given the energetic, thermal, and mechanical challenges of being that big, not to mention of finding enough to eat. But we don't need the melodramatic drawing to make the point (after all, what was found were bones, not the flesh).  We may learn from the history this shows, but could do without the histrionics.

Next on the list is the dramatic cover of Nature on 18 September.  The cover article reports results that use genomic sequence data to argue that modern Europeans carry genes from three different ancestral populations.  The massive author list is another characteristic of modern science, whereby if you so much as walk by the office where someone is working on the paper and ask what they're doing, then you qualify as an author and can report to your Chair and Dean that you have a cover article in Nature.


Nature cover, 18 September 2014, from their website
The story reports that genetic data on modern European variation and from some ancient DNA sources suggests a melding of people migrating (or gene-flowing) in from three basic areas, a northern or Siberian, a more expected western European, and a Middle Eastern source.  Taking place millennia ago, and/or over millennia, the result is that Europeans do not constitute a standing discrete 'race', but are a mix, the result of population history.

There is absolutely nothing wrong with this paper (as far as we can tell). But note that the cover illustrations (this is, after all, a science journal) are again artist's reconstructions of our supposed distant ancestors, not the data themselves.  This is taking reifying liberties with the science, something Nature does on a regular basis both with illustrations and catchy-cute pun-laden story ads on their covers.

One can and we think should ask why, unless this is just more Hollywood and advertising, this is in any way a Nature cover story.  Interesting as it is, it is no breakthrough of modern science (the issues and evidence have been building for decades--yes, decades).  This story should be in a specialist journal (like the Am. J.  Physical Anthropology), where before our marketing age it would have been.  There, it would be seen by other anthropologists with expertise in human migration history, vetted for other interpretations if any, and would have become part of our knowledge of human origins, and material for textbooks.  But it just doesn't warrant being splashed as if it is some sort of innovative discovery.  Again, the nature of science has changed, and one can ask why we have moved in this pop-sci direction, and whether that's good.

Are we just too, too stodgy?  Probably.  But isn't the translation of science to videographic presentation a pandering to a bourgeois culture that is bored with details?  Will it grab people and draw them into science?  Even if it does the latter, which it may well, doesn't it give the impression that the daily work of science is exciting, and the technical demands of doing good science minimal?  Or is it mainly a way to employ graphic artists and sell magazines?  Of course, this is how our culture works in the Advertising Age, so the answer is probably moot.

Anyway, back out to the bike path and garden, on a day too nice to think about anything very technical, enjoying fall before it passes, and (today at least) getting out before our town is overrun with drunken football fans.

Monday, September 29, 2014

Predicting future environments: it's impossible in principle

We often caution here on MT about predicting the future.  There's an idea afoot that once we know our genomes we'll be able to predict who will get what chronic disease, even when the disease is the result of genes interacting with environmental factors.  Or even multiple genes and multiple environments.  But, as we've often said, making such predictions with known accuracy is impossible in principle.  And here's why.

A BBC television series in the 1970's, "A Taste of Britain," hosted by Derek Cooper, set out to document traditional foods and food practices in Britain before they disappeared.  The long-running BBC Radio 4 show, "The Food Programme," revisits this series by traveling to the places documented 40 years ago to find out what became of the foods, the people, and the traditions the original program had captured on film.  "A Taste of Britain Revisited" is a fascinating glimpse into traditional foods and how they've fared in the intervening years.

Cooper urgently tracked down cockle pickers, sewin fishermen, truffle hunters, cheese makers, colliers' wives, makers of Yorkshire Pudding, makers of Welsh cakes and laverbread, and more, certain that the traditions he was documenting would soon be lost.

Cockle pickers, Wales: Daily Mail, 2008

Fishing for sewin (sea trout) in traditional coracles, Wales; Wikipedia
And some were.  Once miners were moved into council housing in Yorkshire, away from coal-fired ovens, Yorkshire Pudding was never the same.  Native English crayfish will soon be extinct because a hardier North American variety has taken over, bringing with it a virus that kills the natives.  Cooper filmed the last traditional hearth in Wales, which was replaced by something more modern just weeks later.

And some have changed. The traditional Dorset Blue Vinny cheese that Cooper was told he was eating was in fact a second-rate Stilton, the real thing not having been made since World War II, when the making of soft and semi-soft cheeses was banned and only hard, transportable cheeses like cheddar could be made, to be sent to the cities.  But the recipe for Blue Vinny was found in the 1980's, and the cheese revived.  Cockles were in short supply in the 1970's and had to be imported.  Their numbers increased for a while after that, fell and rose again, but cockles now aren't as plentiful as they were, and cockle gatherers don't go to the beach by horse and cart any longer.

Horse and Cart on Beach, William Ayerst  Ingram; Wikipedia

And some have remained the same, but the economics have changed.  Welsh laver, or seaweed, is now being sold in Japan, when just 40 years ago getting it to London was a major accomplishment.   Businesses have been bought up by foreign companies.  A Spanish company now buys British cockles, and they are now being canned and sold in Spain. "Cans is something we never thought we'd put cockles into, but if that's the way they want them, that's the way they can have them."

So, Cooper wasn't entirely right that he was documenting the final days of many traditions and foods, though he was certainly documenting change.  I don't know what he thought was going to replace traditional foods, but perhaps the biggest change over the last 40 years was the rise of processed foods.  Whether or not he foresaw that I don't know, but even if he or anyone else had, the health effects couldn't have been predicted.  Indeed, even with all the evidence before us, we can't really say what the health effects of any of the specific changes have been.  Yes, people are more obese, have more heart disease, stroke, hypertension, cancers and so on, but in a very real sense, a major 'cause' of these diseases is control of infectious diseases.  

But, that's not the real point of this post.  The point is that it looks as though the common, late onset chronic diseases we are dying of now are the result of complex interactions between genes and environment.  Epidemiologists have been trying to identify environmental risk factors for decades, with only modest success -- meaning that it's not clear that we know which aspects of our environments are linked with risk.  But even if we did, these two series of programs on traditional foods in Britain make it clear that while we may fit today's disease cases to yesterday's exposures, it's impossible to predict what people will be eating not far into the future.  So even if we do identify risk alleles and risky environments, we don't know that carriers of a given allele will still be exposed to risky environments not many years from now, and we can't predict their disease with more than retrospective accuracy, when we know from experience that risks of the same traits change rapidly and substantially.

Here's an example.  Type 2 diabetes has risen to epidemic levels since World War II in Native American populations, and in Mexican Americans who are admixed with Native Americans.  The nature of the disease and the pattern of the epidemic suggest that a fairly simple genetic background may be responsible.  But, even if so, that's only in the provocative environment of the last 60 or so years.  That's because there was little to no type 2 diabetes in Native Americans, or Mexican Americans, before then, yet whichever alleles are responding now, leading to glucose intolerance and so forth, have basically not changed in frequency. It's the environment -- diet and exercise, presumably -- that has changed.  This epidemic could simply not have been predicted 60 years ago, even if we'd been genotyping people then, because whatever risk alleles are now responsible were not causing disease before the environment changed.

So, the promise that genotyping every infant at birth will allow us to predict the late onset, complex diseases they will eventually have is unlikely to be met.  Instead, for many or most complex traits, it's an illusion that's being sold as fact.

Wednesday, September 24, 2014

What are scientific 'revolutions'? Part II: Quantitative paradigm adjustments

As we described yesterday, when scientists are doing what Thomas Kuhn referred to as 'normal' science, we are working within a given theoretical framework--or 'paradigm' as he called it--and using new technologies, data, and approaches to refine our understanding of Nature in terms of that framework. In biology now, normal science is couched within the evolutionary framework, the acceptance of descent with modification from a common ancestor.  We might tweak our views about the importance of natural selection vs drift, say, but that doesn't change the paradigm.

But there are energetic, and sometimes fierce, discussions about just how we should go about doing our work. These discussions often involve the basic statistical methods on which our inferences rest.  We've talked about the statistical aspects of life science in numerous past posts. Today, I want to write about an aspect that relates to notions of 'revolution' in science, or what Kuhn called paradigm shifts.  What follows is my own view, not necessarily that of anybody else (including the late Kuhn).

xkcd

For many if not most aspects of modern science, we express basic truths mathematically in terms of some parameters.  These include values such as Newton's gravitational constant and any number of basically fixed values for atomic properties and interactions.  Such parameters of Nature are not known with perfect precision, but they are assumed to have some universally fixed value, which is estimated by various methods.  The better the method or data, the closer the estimate is held to be, relative to the true value.  Good science is assumed to approach such values asymptotically, even if we can never reach that value without any error or misestimation.

This is not the same as showing that the value is 'true', or that the underlying theory that asserts there is such a value is true.  Most statistical tests evaluate data relative to some assumed truth or property of truth, or some optimizing criterion given our assumptions about what's going on, but many scientists think that viewing results this way is a conceptual mistake.  They argue that our knowledge leads only to some degree of confidence, a subjective feeling, about an interpretation of Nature. Using approaches generally referred to as 'Bayesian', it's argued that all we can really do is refine our choice of properties of nature that we have most confidence in.  They rarely use the terms 'belief' or 'faith' in the preferred explanation, because 'confidence' carries a stronger sense of an acceptance that can be systematically changed.  The difference between Bayesian approaches and purely subjective hunches about Nature is that Bayesian approaches have a rigorous and in that sense highly objective format.

This view rests on a famous rearrangement of the basic laws of probability, credited to Thomas Bayes, and it goes like this:

     p(Hypothesis|Evidence) = p(Evidence|Hypothesis) p(Hypothesis) / p(Evidence)

This says that the probability of some Hypothesis we may be interested in, given the Evidence, is equal to the probability of that Evidence if the Hypothesis were truly true, times the prior probability that we have in mind for the Hypothesis, all divided by the overall probability of the Evidence.  That is, there must be a lot of ways that the Evidence might arise (or we'd already know our Hypothesis was true!), so you sum up the probability of the data if H is true, weighted by your prior probability that it is, and, separately, the probability of the data if an alternative to H is true, weighted by the probability of H not being true. It's somewhat elusive, but here's an oversimplified example:

Suppose we believe that a coin is fair. But there's a chance that it isn't.  In advance of doing any coin-flipping, we might express our lack of knowledge by saying the chance that the coin is fair is, say, 50%, since we have no way to actually know if it is or isn't.  But now we flip the coin some number of times.  If it's fair, the probability of it coming up Heads equals 50%, or p(H) = 0.5 per flip.  But suppose we observe 60% Heads.  A fair coin could yield such results and we can calculate the probability of that happening.  But an unfair coin could also generate such a result.

For simplicity, let's say we observe HHHTT.  For a fair coin, with p(H) = 1/2, the probability of this result is (1/2)(1/2)(1/2)(1/2)(1/2) = 0.031, but if the coin is unfair in a way that yields 60% Heads, the probability of this result is (0.6)(0.6)(0.6)(0.4)(0.4) = 0.035.  Using the formula above, the probability that the coin is fair drops from 50% to about 47%: we're slightly less confident about the coin's fairness. If we kept flipping and getting such results, that value would continue dropping, as we became less confident that it's fair and increasingly confident that its true probability of Heads was 0.6 instead of 0.5.  We might also ask if the probability of it being fair is, say, zero, or 1/8, or 0.122467--that is, we can test any value from zero (no chance it's fair) to 1.0 (completely sure it's fair).
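As a minimal sketch of that calculation (in Python, my own illustration of the arithmetic rather than anything from the original post), here is the two-hypothesis version spelled out, using the HHHTT sequence, the 0.6 bias, and the 50% prior assumed above:

```python
def likelihood(p_heads, flips):
    """Probability of the observed flip sequence if Heads comes up with probability p_heads."""
    prob = 1.0
    for flip in flips:
        prob *= p_heads if flip == "H" else (1 - p_heads)
    return prob

flips = ["H", "H", "H", "T", "T"]           # the HHHTT sequence from the example

prior_fair = 0.5                            # prior probability that the coin is fair
like_fair = likelihood(0.5, flips)          # (1/2)^5 = about 0.031
like_biased = likelihood(0.6, flips)        # (0.6)^3 * (0.4)^2 = about 0.035

# Bayes' theorem: p(fair | data) = p(data | fair) p(fair) / p(data),
# where p(data) sums over both candidate hypotheses
evidence = like_fair * prior_fair + like_biased * (1 - prior_fair)
posterior_fair = like_fair * prior_fair / evidence

print(round(posterior_fair, 3))             # about 0.475: slightly less confident the coin is fair
```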

The basic idea is that we have some prior reason, or probability (p(H)) that the Hypothesis is true and we gather some new Evidence to evaluate that probability, and we adjust it in light of the new Evidence.  The adjusted value is called the posterior (to the new data) probability of the Hypothesis, and Bayes' theorem provides a way to make that adjustment.  Since we assume that something must be true, Bayes' formula provides a systematic way to change what we believe about competing explanations.  That is, our prior probability is less than 1.0 (certainty of our Hypothesis) which implies that there are other hypotheses that might be true instead.  The use of Bayes' theorem adjusts our confidence in our specified Hypothesis, but doesn't say or show that it is true.  Advocates of a Bayesian approach argue that this is the reality we must accept, and that Bayesian approaches tell us how to get a best estimate based on current knowledge.  It's always possible that we're not approaching truth in any absolute sense.  

A key aspect of the Bayesian view of knowledge is that the explanation is about the probability of the data arising if our preferred explanation is true, accepting that it might or might not be.  It assigns quantitative criteria to alternative explanations whose relative probability can be expressed--that is, each of the possible hypotheses has a probability (a value between zero and 1), and their sum exhausts all possibilities (just as Heads and Tails exhaust the possible flip outcomes, or a coin must either be fair or not-fair).

OK, OK so what does this have to do with scientific 'revolutions'?
The basic idea of Bayesian analysis is that it provides a technically rigorous way to express subjective confidence in a scientific context.  It provides a means to use increasing amounts of data to adjust the level of confidence we assign to competing hypotheses, and identify the Hypothesis that we prefer.

This is a good way to express confidence rather than a yes-no illusion of ultimate truth, and has found widespread use.  However, its use does depend on whether the various aspects of experiments and hypotheses can adequately be expressed in probabilistic terms that accurately reflect how the real world is--and, for example, important causal components may be missing, or the range of possibilities may not be expressible in terms of probability distributions.

I am by no means an expert, but a leading proponent of Bayesian approaches, the late ET Jaynes, said this in his classic text on the subject (Probability Theory: The Logic of Science, Cambridge University Press, 2003):
Before Bayesian methods can be used, a problem must be developed beyond the 'exploratory phase' to the point where it has enough structure to determine all the needed apparatus (a model, sample space, hypothesis space, prior probabilities, sampling distribution).
This captures the relevant point for me here, in the context of the idea of scientific revolutions or paradigm shifts.  I acknowledge that in my personal view, and this is about philosophy of inference, such terms should be used only for what is perhaps their original reference, the major and stunning changes like the Darwinian revolution, and not the more pedestrian applications of everyday scientific life that are nonetheless casually referred to as revolutions.

These issues are (hotly) debated, but I feel we should make a distinction between scientific refinement and scientific revolutions.  To me, Bayesian analysis is a systematic way to refine a numerical estimate of the relative probability of an idea about Nature compared to other ideas that could be correct.  The probability we assign to the best of these alternatives should rise asymptotically with increased amounts of data (as schematically shown in the figure below), unless something's wrong with the conceptualization of the problem.  I think this is conceptually very different from having a given scientific 'paradigm' replace another with which it is incommensurable.
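To make the asymptotic point concrete, here is a small simulation (a sketch of my own in Python, assuming the same two-hypothesis coin setup as in the example above, with a true Heads probability of 0.6, rather than anything taken from the figure): flipping such a coin and applying Bayes' theorem after every flip drives the posterior probability of the correct hypothesis toward 1.0.

```python
import random

random.seed(1)

true_p = 0.6
candidates = {"fair": 0.5, "biased": 0.6}   # the two competing hypotheses about p(Heads)
posterior = {"fair": 0.5, "biased": 0.5}    # start with equal prior probabilities

for n in range(1, 1001):
    flip = "H" if random.random() < true_p else "T"
    # multiply each hypothesis's current probability by the likelihood of this flip...
    for name, p in candidates.items():
        posterior[name] *= p if flip == "H" else (1 - p)
    # ...then renormalize so the posteriors sum to 1 (Bayes' theorem, one flip at a time)
    total = sum(posterior.values())
    for name in posterior:
        posterior[name] /= total
    if n % 250 == 0:
        print(n, round(posterior["biased"], 3))  # climbs toward 1.0 as flips accumulate
```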



Where it's useful, Bayesian analysis is about altered ideas among what are clearly commensurable hypotheses--based on different values of the same parameters. Usually, the alternative hypotheses are not in fact very different: for example, a coin has some bias in its probability of Heads, ranging from never-Heads through fair (50%) to always-Heads; but we assume such things as that the flips are all done the same way and that the results generated by flipping are probabilistic by nature.

In my view, Bayesian analysis is a good way to work through issues within a given theoretical framework, or paradigm, and it has many strong and persuasive advocates.  But it is not a way to achieve a scientific revolution, nor does it reflect one.  Sometimes the idea is used rather casually, as if formal Bayesian analysis can adjudicate between truly incomparable ideas; there, to me, we simply must rely on our subjective evaluations. One can't, of course, predict when or even whether a truly revolutionary change--a paradigm shift, if you will--will occur, or even if such is needed.

Ptolemaic epicycles added accuracy to the predictions of planetary motion, at the price of being cumbersome.  One could have applied Bayesian analysis to the problem at the time, had the method been available.  The Copernican revolution changed the basic structure of the underlying notion of what was going on.  One might perhaps construct Bayesian analysis that would evaluate the differences by somehow expressing planetary positions in probabilistic terms in both systems and allow one to pick a preference, but I think this would be rather forced--and, most importantly, a post hoc way to evaluate things (that is, only after we have both models to compare).  In fact, in this case one wouldn't really say one view was true and the other not--they are different ways of describing the same motions of bodies moving around in space relative to each other, and the decision of how to model that is essentially one of mathematical convenience.

I think the situation is much clearer in biology.  Creationist ideas about when and where species were created or how they related to each other in terms of what was called the Great Chain of Being, could have been adjusted by Bayesian approaches as, for example, the dates of fossils being discovered could refine estimates of when God created the species involved.  But Bayesian analysis is inappropriate for deciding whether creationism or evolution is the best hypothesis for accounting for life's diversity in the first place.  The choice in both approaches would be a subjective one, but without artificial contortions the two hypotheses are not probabilistic alternatives in a very meaningful sense. That's what incommensurability, which applies in this case I think, implies.  You can't very meaningfully assign a 'probability' to whether creationism or evolution is true, even if the evidence is overwhelmingly in favor of the latter.

Current approaches
These posts express my view of the subject of scientific theory, after decades of working in science during periods of huge changes in knowledge and technology.  I don't think that scientific revolutions are changes in prior probabilities, even if they may reflect them; they are more than, and different from, that.  From this viewpoint, advocates for Bayesian analysis in genomics are refining, but not at all challenging, the basic explanatory framework.  One often hears talk of paradigm shifts and use of similar 'revolution' rhetoric, but basically what is happening is just scaling up our current "normal science", because we know how to do that, not necessarily because it's a satisfactory paradigm about life.  And there are many reasons why that is what people normally do, more or less as Kuhn described.  I don't think our basic understanding of the logic of evolution or genetics has changed since I was a graduate student decades ago, even if our definition of a gene, or modes of gene frequency change, or our understanding of mechanisms have been augmented in major ways.

It is of course possible that our current theory of, say, genomic causes of disease, is truly true, and what we need to do is refine its precision.  This is, after all, what the great Big Data advocacy asserts: we are on the right track, and if you just give us more and more DNA sequence data we'll get there, at least asymptotically.  Some advocate Bayesian approaches to this task, while others use a variety of different statistical criteria for making inferences.

Is this attitude right for what we know of genomics and evolution?  Or is there reason to think that current "normal science" is pushing up against a limit, and that only a true conceptual revolution--one essentially incommensurate with, or not expressible in the terms of, our current models--will move us past it? In past posts, we've suggested numerous reasons why we think current modes of thought are inadequate.

It's all too easy to speak of scientific revolutions (or to claim, with excitement, that one is in the midst of creating one if only s/he can have bigger grants, which is in fact how this is usually expressed).  It's much harder to find the path to a real conceptual revolution.

Tuesday, September 23, 2014

What are scientific 'revolutions'? Part I: Qualitative paradigm shifts

How do we know what we think we know?  And how do we know how close we are to the 'truth'? These are fundamental questions in life, and especially in science where we expect to be in pursuit of the truth that we assume exists.  We build our work upon an accepted body of trusted knowledge, one that we first spend many years learning, and then even more years contributing to.  But there are always facts that don't quite fit the existing paradigm -- or don't fit at all -- and these can be wrong, or they can make a revolution.

In 1962, Thomas Kuhn published The Structure of Scientific Revolutions.  He built on his earlier work in the history and philosophy of science, The Copernican Revolution (1957), which analyzed the way the sun-centered Copernican view of planetary motion replaced the long-standing Ptolemaic earth-centered view, as an example of how scientific understanding of the world can change.

In a nutshell, Kuhn says that scientists at any given time usually work within a model or theory, or paradigm as he referred to it, that explains their findings.  This paradigm guides what we do every day as we work away at what Kuhn called "normal science".  There are always unexplained or even apparently contradictory facts that don't easily fit into our working theory, but we do our very best in normal science to fit, or shoe-horn, these anomalies into our current paradigm.  Occasionally, when the lack of fit becomes too great, a 'revolution,' essentially a new theory, is proposed, usually based on a new finding or a new way of synthesizing the data that does a better job of accounting for the anomalies in question (even if it may do less or less well for some known facts).

The new theory dramatically and at one fell swoop accounts for hosts of facts that hadn't fit into the previous working paradigm, including the apparent anomalies.  A key point we'll discuss below is that the new view is not just a quantitative improvement in, say, measurement accuracy or something like that.  It's not technology. Instead, a defining characteristic is that the new view is "incommensurate" with the view it replaced: you cannot express the new view in terms of its predecessor.  It is quickly adopted by the profession in what Kuhn termed a "paradigm shift", which becomes the tool of a new phase of 'normal science'.  This was what he called a scientific 'revolution'.

Motion of Sun, Earth, and Mars according to heliocentrism (left) and to geocentrism (right), before the Copernican-Galilean-Newtonian revolution. Note the retrograde motion of Mars on the right. Yellow dot, Sun; blue, Earth; red, Mars.
(In order to get a smooth animation, it is assumed that the period of revolution of Mars is exactly 2 years, instead of the actual value, 1.88 years). The orbits are assumed to be circular, in the heliocentric case. Source: Wikipedia, Copernican Revolution

The view that the earth was part of the solar system fundamentally changed the way planetary motion was accounted for.  In the older Ptolemaic system, movements that were supposed to be perfect circles in the perfect spheres of the heavens did not fit astronomical observations. So occasional little circles of movement (called epicycles) were invoked to explain observations and make predictions more accurate and consistent. But if the sun were viewed as the system's center, then one could account for the motions with ellipses and no epicycles. Refinements were to come along with Kepler and Newton; Tycho Brahe, for his part, showed that geocentric mathematics could also work, with a "geo-heliocentric" system in which the Sun and Moon orbit the Earth but the other planets go around the Sun (see Wikipedia: Tycho Brahe).

However, there have been other examples that reflect the basic Kuhnian idea: Darwin's evolutionary theory replaced one of special creation of the earth's species; quantum theory and relativity added truly revolutionary ideas about space, time and even causal determinism; plate tectonics (continental drift) replaced a diversity of ad hoc accounts for geological forms and changes, and so on.  The basic notions of normal science, working paradigms, and essentially incommensurable replacement of one theory by another may be criticized in detail, but Kuhn's way of explaining the dynamics of science has much to recommend it.

The phrase "paradigm shift" has become canonized in modern science parlance.  It glamorizes the genius (Copernicus, Einstein, Darwin) who was responsible for the change of view, often neglecting others who had roughly the same idea or whose work triggered the iconic figure's work.  And for that reason, and because scientists are mainly middle class drudges who need to feel important, we throw the phrase around rather loosely (often referring to our own work!).  We speak of scientific revolutions now rather casually as if they are occurring, whenever some new finding or technology comes along.  But is that justified?

Generalizations about classical 'paradigm shifts' and revolutions in science
We were led to write about this because of comments on our recent post on the faith component of science, having to do with how we in science view what we think we know.  This and the following post tomorrow are reflections about this, and not intended as an argument with the commenter.

A key relevant question is how we decide that what we assert today is better than what we said yesterday.  If it is different, but not better, then where can we find a sense that we know more, or are closer to the truth?  What if there is no single truth that we hope science is asymptotically approaching--with each new discovery getting closer to a perfect understanding?

At least one aspect of the answer lies in the idea of incommensurability between 'paradigms' as opposed to accuracy within a given paradigm.  Here, I'll focus on genetics and evolution, fields I know at least something substantial about.

Prior to Darwin, in Western culture the prevailing view of life was that species had been individually created by God for His own reasons.  Species might change under husbandry and so on, but they were basically static (though they might become extinct, again for some reason in God's plan), and they didn't morph one into another.  After Darwin, species were viewed as the result of a historical physical process, evolution taking place over time due to physical constraints (natural selection).  In a Darwinian view one cannot measure the nature or arrival of species in terms of events of special creation.  Humans cannot be viewed as specially created at the Beginning with the rest of life created for our use.  Evolution is not just a quantitative description of special creation.  The two views are incommensurable.

In the new 'paradigm', everything changed.  Species and their traits are viewed in terms of their usage history--context-specific factors that affected which forms could succeed better than others at the time, not in any external Creator's eye.  Evolution was truly a revolutionary change in the understanding of global diversity in life.  It has had at least as much impact as any other revolutionary conceptual change in any science.  But is it more 'true', or has it given us the truth about life?

Of course, even if the process of speciation is an historical one that takes place gradually by Darwinian means, each species must arise at some specific time.  Is this so different?  Yes!  It's different, first, because the definition of 'species' is a human-imposed cultural one, and because the many processes that could lead populations to be mating-incompatible (the usual definition of 'species') may arise by single events (mutations in chromosome regions required for mating, for example), but these were historical, random changes in DNA, etc.  They were not guided from without with any purpose.  And generally, diversity accumulates along with mating incompatibility, gradually.

And what about natural selection?  It is the theoretically accepted origin of complex traits in living species.  It is a gradual process even if each life or death or conception may be a discrete event in time and place.  And, after Darwin, we have had to add chance (genetic drift) into the picture of how genomic structures and what they cause have changed over time.  But such additions modify, but do not at all overthrow, the idea of evolution.  They introduce no paradigm shift.

Nor does the discovery that chromosomes contain more than just protein-coding DNA sequence--they have regulatory sequences, sequences involved in DNA's own packaging, and so on.  The idea of gene regulation, or of genes being made of discrete, interrupted sequence regions (coding exons, introns, etc) added new theoretical elements to biology, but they are entirely commensurable with prior views that were non-specific as to just what genes 'are'.  The discovery of the base-pairing nature of DNA and its use of a code for protein sequence and other functions added to our understanding of life, and produced a new theory of genetic causation.  But that theory didn't replace some earlier specific theory about what genes were.  None of this in any serious way was a paradigm shift, even though these discoveries were of momentous importance to our understanding of life.

And then there's the origin of 'life'.  Mustn't that, too, have had a moment of creation?  Biochemists will have to assert that the possibility has always existed since the beginning of the cosmos, but that only when the right ingredients (molecules, pH, temperature, etc.) existed at the same time and place did life start.  It may have had countless molecular origins, but here on earth at least only one such led to life as we know it today.  That is, in a sense, a theory of a moment of occurrence--though not of 'creation'.  So in our modern view it's part of the historical process that is life.

So, biology has had its scientific revolution, and one that shook the earth in very Kuhnian terms. But whether we are closer to the 'truth' about what life is, is itself a rather vague or even unanswerable question.  As technology advances, we could be getting a better and better understanding, and a more complete explanation of the essential nature of life.  Or, forces at work within organisms might be discovered which will lead to fundamentally different kinds of understanding of life.  How can we ever know unless or until that happens?

One way to rephrase this question is to ask whether we can know how 'close' we are to understanding the truth.  We can compare origin theories from many different cultures, including our own Biblical one, but we can't really concoct a quantitative measure of how true they are even relative to each other.  In a sense, all have zero truth except evolution, but that's not very useful, because we have no way to know what new idea may come along to challenge the one that we now believe to be true.  Of course, some people, even some who are otherwise scientists, accept religious explanations and will simply not acknowledge what I've been saying, because they have an incommensurable truth that cannot be compared in this way to evolution other than by forced contortions, such as that the Bible should be taken metaphorically and the like.  Or others have a mystical view of universal unity and reincarnation etc., which, like Biblical explanations, cannot really be compared because it doesn't attempt to explain the same things.

But there is another very different way to view scientific progress, typically referred to by the term 'Bayesian', which is often implicitly equated with 'revolutions' or 'paradigm shifts', as a systematic rather than episodic way for scientific truth to become known.  We'll discuss that tomorrow.

Monday, September 22, 2014

Nature vs nurture, and human behavior: where’s the science?

Despite more than a century of work in genetics, sociology, psychology, anthropology, economics, and other disciplines that focus on human behavior, the turning of the Nature-Nurture cycle continues.  Is human behavior hard-wired in our genomes or is it a pattern of responses created from birth onward by experience?  How can it be, given the masses of data that state-of-the-art science has produced, and the quantity of ink that has been spilled to answer the question, that neither side yields an inch?  Indeed, how can there still be sides?

We can express the debate roughly as follows:
Does there exist in every human being, underlying behavior that is shaped by the social influences around him/her, an inborn predisposition that education might modify, but cannot really change? Is the view that denies this, and asserts that we are born a tabula rasa, totally molded by life experience, asserted by people who have not recognized the obvious fact that we are not born with blank faces—or who never compared two infants of even just a few days old, and observed that those infants are not born with blank temperaments that can be molded up at will? Are there, varying essentially open-endedly among individuals, genomic causes of behavior that cannot be over-ridden by a person’s life experience? Is experience ever the key to understanding a person’s behavior, or can genomic knowledge predict the nature of each person?
The prevailing view in our current age, in love as it is with technology and genomics, the Nature view, invokes Darwinian evolutionary theory as its justification for genetic determinism, and asserts that if we do enough genome sequencing, individual behaviors will be predictable from birth, much as the view in the biomedical community argues that disease risk will be predictable from birth.  In anthropology, this view goes by the rubric ‘Human Biodiversity’, the assertion that it is the bio component that drives our diversity, as opposed to our culture and environment.  This is extended to the argument that Darwinian theory implies that different ‘races’ necessarily are different in their inherent skills, temperaments and the like just as they are in their skin color and hair form.

Opposed to that is the Nurture view that argues that some physical traits and disease predispositions have genetic components, but that our basic nature and behavior are molded by our social, political, and economic contexts.  This view argues that measures of value and achievement, and behaviors leading to economic or family success or to addiction and antisocial behavior, are social constructs rather than reflections of hard-wired natures.

There is nothing new here; in reaction to the eugenic movement and the Nazi horrors, which were rationalized in terms of Darwinian inherency, there was a strong turn towards the tabula rasa view, the idea that early experience makes us what we are.  This was how psychology was generally taught in the post-war generations.

Both views can be traced back to precursors, including Freud and many others for the Nurture side and of course Darwin and his relative Francis Galton on the Nature side.  However, we are after all supposedly in the Age of Science, and it is a bit surprising that we have not moved very far at all with respect to the Nature-Nurture question.

Another way to say it
In fact, that lack of progress, despite enormous investment in supposed scientific research to answer the question, explicitly or implicitly, is not hard to document.  And the documentation goes beyond the ivory labs of scientists, to the broader public, where it has been of interest for more than a century.  Here is a way to express the same question, which I have just chanced upon:
Does there exist in every human being, beneath that outward and visible character which is shaped into form by the social influences surrounding us, an inward, invisible disposition, which is part of ourselves, which education may indirectly modify, but can never hope to change? Is the philosophy which denies this and asserts that we are born with dispositions like blank sheets of paper a philosophy which has failed to remark that we are not born with blank faces—a philosophy which has never compared together two infants of a few days old, and has never observed that those infants are not born with blank tempers for mothers and nurses to fill up at will? Are there, infinitely varying with each individual, inbred forces of Good and Evil in all of us, deep down below the reach of mortal encouragement and mortal repression—hidden Good and hidden Evil, both alike at the mercy of the liberating opportunity and the sufficient temptation? Within these earthly limits, is earthly Circumstance ever the key; and can no human vigilance warn us beforehand of the forces imprisoned in ourselves which that key may unlock?
Where does this somewhat stilted form of the issue come from?  It’s from  Wilkie Collins’ book No Name, published in serial form in Charles Dickens’ journal All The Year Round, 1862-3, just 3 years after Origin of Species (but with no reference to it, and Collins was not involved in science discussions at the time).  That was about 150 years or six generations of science ago!



No Name is a neglected masterpiece, a compelling novel about social conditions in England at the time (related to rules of inheritance, marriage, and illegitimacy).  It was not a commentary on science, but this paragraph is an essentially modern expression of the very debate that still goes on in the world generally, and in science specifically.

Where we are today
We have tons more facts, on both sides, but not a whit more nuance in the generally expressed views on the subject.  Indeed, do we have much more actual science, or is each side just enumerating some additional data that often barely deserve being called 'fact'?  One major reason for the persistence of no-answers is the inordinately dichotomous world-views of scientists, predilections based on various aspects of their technical specialties but also on their sociopolitical views.  Another reason is simply the tribal nature of humans generally, and in particular that of opinions about issues that affect societal policy, such as where to invest public funds in regard to things like education or welfare, how one views social inequality, the origins of proper (socially admired rather than antisocial) behavior, and the like.  We all have our perspectives and our interests, regardless of whether we work as scientists or in other occupations.  In light of this discussion I probably should be asking whether my world-view and career in science are the result of my genes, or of the fact that there were lots of jobs (and science was more fun and less about 'productivity' and money than it is now) when I got my degree.

Not everyone sees this question in a completely polarized way; some instead propose Nature via Nurture or other acknowledgements of the role of both genes and environment (an obvious truth, since DNA is in itself basically inert), but if you look carefully you'll almost always be able to detect their political beliefs, and thus their predilection for one side or the other.  They pay lip service to environment but basically want to do a lot of DNA sequencing (or tests for 'epigenetic' changes in DNA), or they want to do sociological studies and minimize the genetic component (or opportunistically say they're testing a given variant in many environments).  We are all being trained as specialists with particular interests, and science in general has been based on an enumerative reductionist approach that is not good at integrative studies of complex interactions.

The bottom line for me is that we should all recognize the uncertain nature of the science, even perhaps that science doesn’t pose its questions in ways that have scientific answers. But we also should recognize that behavior and attitudes towards it affect how society allocates resources, including status and privilege and power hierarchies.  For that reason, scientists should treat the subject with a bit more caution and circumspection—much as we do things like experiments testing whether genetic engineering of viruses could make them more pandemic, and other areas where what someone does in a lab, for curiosity or any other reason, can spill over to have implications for the lives of the rest of us—who are paying the bills for that science.

However, for the same reason that the research affects society, we can’t expect scientists, who are privileged members of that society, to monitor themselves.  And the persistence of the same important questions, from the beginning of the scientific age to the present, should teach a lesson in humility as well.