
Monday, October 6, 2014

Big Data and its rationale - why are we still at it?

We're in the era of Big Data projects.  This is the result of a fervor of belief that more data, essentially comprehensive and completely enumerative, will lead to deeper or even complete understanding.  It is an extension of the idea of reductionism and induction that was a major part of the Age of Science, ushered in about 400 years ago with the likes of Bacon, Newton, Galileo and other iconic figures. Examples in physics include huge collider studies and very costly space activities, and of course the Big Data drive is hugely prevalent in genomics and other biological and biomedical sciences.  But in our age, several centuries later, why are we still at it in this way?

The story isn't completely simple.  The Age of Science also led to the so-called 'scientific method', a systematic way to increase knowledge through highly focused hypothesis-testing.  Many philosophers of science have argued that, or tried to show how, the approach enabled our understanding of truth refined in this self-disciplined way, even if ultimate truth may always elude our meagre brainpower.  But why then a return to raw induction?

One reason, and we think a predominant reason, is the pragmatic competition for research resources. As technological abilities rise (pushed by corporate interests for their own reasons), we have become able to collect ever more detailed data.  The Age of Science was itself ushered in by technology in many ways, the iconic examples being optics (telescopes and microscopes).  But sociopolitical reasons also exist.  Long, large projects lock up large amounts of funds for years or even decades, guaranteeing jobs and status for  people who thus can avoid the draining, relentless pursuit of multiple, small 3- or even 5-year grants.  As the science establishment has grown, driven by universities for good as well as greedy reasons, funding inevitably became more competitive.

Careerism and enumerative ways to judge careers by administrators (paper and grant counting) are driving this system, but the funding agencies, too, have become populated with officials whose careers involve holding and building portfolios, using public relations to tout their achievements, and so on.  And once you've got to the top of the Big Data pile, it's a high that's hard to come down from!

"It takes Big Data to make it Big--But I did it!    (Drawn by the author)
The history of a worldview
From the Manhattan Project in WWII, and several open-ended research efforts that followed, the lesson became obvious: if you can state some generic problem, get funding to study it, and justify why it requires large-scale, expensive technology over the long term, well, you've snared yourself a career's worth of secure funding and all the status and perks--and, of course, actual research--that go with it.  It's only human to understand those reasons.

However, there are also some good, scientifically legitimate reasons for the growth of Big Data.  When I have queried investigators in the past--and here, this means over about 20 years--about why they were advocating genome mapping approaches to diseases of concern to them, they often said, rather desperately, that they were taking a 'hypothesis-free' approach because nothing else had worked!  If the biology is genetic, genes must be involved, and mapping may show us which genes or systems those are.  For example, psychiatrists said that they simply had no idea what was going on in the brain to cause traits like schizophrenia--no candidate genes or physiological processes--so they were taking a mapping approach to get at least some clues they could then follow with focused research.

But to a great extent, what started out as a plausible justification for arch-inductive approaches has become an excuse and a habit, a convenience or strategy rather than a legitimate rationale.  The reason is not that their reasoning in the past was wrong at the time.  It is that the Big Data approach has, in a sense, worked: its clearest success has been to show that it does not provide the kind of results that initially justified it.  Instead of identifying causes that couldn't have been expected, exhaustive Big-Data studies, in genomics and other areas of epidemiology and biomedical science, have identified countless minor or even trivial 'causes' of traits (the psychiatric traits are good examples), showing that such traits are not well explained by enumerative approaches, genetic or otherwise.  For example, if hundreds of genes, each varying within and among populations, contribute to a trait, then every occurrence of a disease, or everyone's blood pressure, and so on, is due to a different combination of causes.  Big Data epidemiology has found the same for environmental and lifestyle factors.
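
To make that last point concrete, here is a minimal simulation sketch of our own--not drawn from any actual study, and with arbitrary numbers--in which a few hundred loci each make a small contribution to a trait.  Any two 'affected' individuals typically carry different subsets of the contributing variants.

```python
# Toy simulation: many loci, each with a small effect, contributing to one trait.
# Purely illustrative; allele frequencies and effect sizes are made up.
import random

random.seed(0)
n_loci, n_people = 300, 1000
freqs = [random.uniform(0.01, 0.2) for _ in range(n_loci)]    # risk-allele frequencies
effects = [random.uniform(0.0, 1.0) for _ in range(n_loci)]   # small positive effects

def genotype():
    # Number of risk alleles (0, 1, or 2) carried at each locus.
    return [sum(random.random() < f for _ in range(2)) for f in freqs]

people = [genotype() for _ in range(n_people)]
scores = [sum(g * e for g, e in zip(person, effects)) for person in people]

# Call the top 10% of scores 'affected' and compare which risk loci two cases carry.
cutoff = sorted(scores)[int(0.9 * n_people)]
affected = [p for p, s in zip(people, scores) if s >= cutoff]
a, b = affected[0], affected[1]
shared = sum(1 for x, y in zip(a, b) if x > 0 and y > 0)
print("risk loci carried: case A =", sum(x > 0 for x in a),
      ", case B =", sum(y > 0 for y in b), ", shared =", shared)
# Typically each 'case' reaches the same phenotypic threshold via a largely
# different combination of contributing variants.
```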

What we should do now is recognize this genuinely informative set of findings from mapping studies.  Rather than flood the media with hyperbole about the supposed successes of current approaches, we should adapt to what we've learned, and take a time-out of some sort to reflect on what other conceptual approaches might actually work, given what we now know rather clearly.  We may even have to substantially reform the types of questions we ask.

The reason for that sort of new approach is that once we, or as we, plunge into ever more too-big-to-terminate studies, with their likely minimal cost-benefit payoff, we lock up resources that clever thinkers might find better ways to use.  And unless we do something of that sort, the message to scientists will be to think ever more strongly in Big Data terms--because they'll know that's where the money is and how to keep their hold on it.  This is exactly what's happening today.

Unfortunately, even many fundamental things in physics, the archetype of rigorous science, are being questioned by recent findings.  Life sciences are in many relevant ways a century behind even that level.  But this seems not to give us, or our funders, pause.  Changing gears seems to go against the grain of how our industrialized society works.

In times of somewhat crimped resources from traditional funders, it's no wonder that universities and investigators are frantically turning to any possible source they can find.  As we know from our own experience and that of colleagues, so much time is spent hustling and so relatively little doing actual research that the latter is becoming a sideline of the job.  The push for Big Data is an understandable part of the strategy, but the pressure doesn't really seem to be changing how people think about science itself.  At least not yet.

We keep harping on this message because it involves both the nature of knowledge and the societal aspects of how we acquire it.  Even if there were no material interest, in terms of allocation of resources, in clinging to Big Data, we would still face a scientific, even epistemological, problem that few seem interested in facing.  There is simply too little pressure to force people to think differently.

Perhaps, if the message is said enough times, and read by enough people, sooner or later, somewhere, someone might get it, and show the rest of us a better way to fathom the causal complexities of the world.

Tuesday, March 4, 2014

Lucretius, and stories about the Nature of Things

I never took courses in Classics, so my knowledge of the major figures and their works is only sometimes from my own informal reading, and mainly from secondary summaries or hearsay.

But, after having read of and about it for many years, I decided I should actually read Lucretius' (99-55 BC) long poem entitled De Rerum Natura (On the Nature of Things).  So many times I've heard that Lucretius anticipated modern science in sometimes rather eerily prescient ways, and I felt I should see for myself.  I can't read Latin, so I have read an English prose translation, hoping at least to absorb the content, even if unable to appreciate the artistry.


The deep truths?  Source: crusader.bac.edu



Lucretius
Lucretius was a Roman, nominally writing to persuade a friend of his, but he was basically expounding the views of the Greek Epicurean school of philosophy, founded by Epicurus and drawing on the earlier atomism of Democritus, a couple of hundred years and more before Lucretius' own time.  Among other things, the Epicureans argued persuasively that there were no gods and that human existence is matter-to-matter, dust-to-dust: enjoy life while you have it because that's all there is.  You will decay and your material will return to the cosmos from whence it came.  Nothingness cannot generate material, and material is eternal.  Your mind (or spirit, or whatever term you use) is part of your body and disappears with your body, into the cosmos, to be re-used in the future, as it had come to form you when you were conceived.

There exists matter, they said, made of atoms (from the Greek 'atomos', that which cannot be cut), which are the smallest fundamental particles, and 'void' (space).  Atoms move around randomly through space and collide when they meet, at which time they can stick and form larger constructs, which today we call 'molecules'.  Aggregates of these compounds make up the stuff of the world.  Matter can separate and rejoin, but it is neither created out of nothing nor destroyed, nor is space.  To allow for our apparent free will, the Epicureans said that atoms could sometimes 'swerve' from their normally determined paths.

The atomic theory of the Epicureans sounds quite like modern molecular and atomic physics and cosmology (though it is true that modern physics does seem to allow strange things like the creation of matter and energy from cosmic vacuum, and perhaps multiple universes, and so on).   Thus, the ideas Lucretius described can seem presciently scientific, two thousand years before their time.  I have read such characterizations frequently.   But there are some interesting points here, that have to do with how ideas are anchored in reality, and with selective reporting.

For one thing, if you read the rest of Lucretius, you'll find stories of the origins of things in an earth-centered universe, including anthropological tales explaining the origin of humans and their cultural evolution--how we started out crude and beast-like, then discovered weapons, clothing, governments, language and song, agriculture, and the domestication of animals.  He also used his theory to explain the nature of lightning, earthquakes, volcanoes, weather, geology, gravity, why the Nile floods, and the nature of magnetism.  He explained the workings of our senses, like vision, touch and taste, in atomic terms--accounting, for example, for the emanations from the atoms on the surface of our bodies that enable us to see 'ourselves' in mirrors.  He raised developmental arguments to show that chimeric beasts, like Centaurs, cannot be real.  He delved into racial variation and why different populations are subject to different diseases.  And he went into the clinical nature and epidemiology of plagues.

A main aim of Lucretius was to purge people of superstition.  He fervently wanted to dismantle anything other than a pure materialism, even in explaining the origin of moral aspects of society.    In this sense, too, he is favorably cited for his 'prescient' materialistic atomic theory of everything.

In the major sections of De Rerum, however, the apparent prescience becomes less and less, and any idea that he foreshadowed modern science dissolves.  Basically, the Epicureans were applying their notion of common-sense reasoning based on very general observations.  They strung out one categorical assertion after another of what 'must' be the case.  In today's parlance, they were providing hand-waving 'explanations' ('accounts' would be a better term) that seemed consistent but did not require any sort of rigorous means of establishing truth.

Along comes the Enlightenment
Aristotle, Plato, and others of the Greek philosophers reinforced the idea that reasoning itself was enough to generate understanding of the world.  We are basically built, they said, to see and understand truth.  Such a view of knowledge lasted until about 400 years ago, the period called the Enlightenment (in Europe), the time of Francis Bacon, Descartes, Galileo, Newton, and many others.  Those authors asserted that, to the contrary, to understand Nature one had to make systematic observations, and to develop proper, formal reasoning that fit hypotheses to those observations, building toward general theories or laws of the world.  Out of this was born the 'scientific method' and the idea that truth was to be reached by empiricism and actual testing of ideas, not just story-telling--and no mysticism.

Reading Lucretius makes one realize, first, that even if a story like the Epicureans' atomic theory has aspects we'd regard today as truth, it was to them basically a sort of guessing.  Secondly, just because a story is plausible does not give it a necessary connection to truth, no matter how consistent the story may seem.  We now have actual scientific theories to account for--indeed, to explain--phenomena such as earthquakes, weather, volcanoes, the nature of metals and water, the diversity of life, a great deal of biology, and even culture history.  If you think of how we know these things, even with major gaps in that knowledge, you can see how very powerful and correct (or at least much more accurate) a systematic approach to knowledge can be, when the subject is amenable to such study.

It is a great credit to centuries of insightful, diligent scientists, our forebears, whose legacy has brought us to this point.  It is a wonderful gift from them to our own time.

Advances in technology and methods may be making some Enlightenment concepts obsolete, and we continually find new ways of knowing that go ever farther beyond our own personal biological senses.  For those of us in science, reading the likes of Lucretius is an occasion to compare then and now, to see why just being intelligent and able to construct consistent explanations is not enough, and that for many areas we do now have ways to gain knowledge that has a firmer footing in reality--not just plausibility.


But....
That's all to the good, but if you do a more measured reading of Lucretius, you can see that in many ways we haven't come all that far.  We do a lot of cherry-picking of things in Lucretius that sound similar to today's ideas and thus seem particularly insightful.  But it is not clear that they were more than a mix of subjective insight and, mainly, good guesses--after all, there were competing theories of the nature of Nature even at the time.  And other areas of Epicurean thought, well, are just not mentioned by those remarking on their apparent modernity.  Selective citation gives an impression of deep insight.  Most of De Rerum Natura was simply story-telling.

In many areas of science, perhaps even some aspects of fundamental physics and cosmology, but particularly in the social and even aspects of evolutionary sciences, we still make careers based on plausibility story-telling.  Our use of mathematics or statistical methods--random surveys, questionnaires, arguing by analogy, and so on--and massive data collection, give the same sort of patina of professional wisdom that one can see in the rhetoric of Lucretius.

We tell our stories with confidence, assertions of what 'must' be so, or what is 'obvious'.  Often, those interested in behavior and psychology are committed to purging religious mysticism by showing that behavior may seem immaterial but that this is an illusion; purely material evolutionary and genetic explanations are offered instead.  No 'free will'!  The world is only a physical reality.  The role of natural selection and competition in explaining even morality as a material phenomenon is part of this, because Darwin provided a global (may one say 'Epicurean'?) material framework for it.  Evolutionary stories are routinely reported to the public in that way as well.  Even if some caveats or doubts are included here and there, they are often buried by the headlines--and the same can be found in Lucretius, over two thousand years ago.

Explanations of physical and behavioral variation and its evolutionary causes, along with many 'adaptive' stories making forcefully asserted plausibility arguments about what evolved 'for' what, still abound.  They are not just told on television--we can't really blame Disney and Discover for appealing to their audiences, because they are businesses--but the same stories are in the science journals and science reportage as well.  We see tales every day reporting miraculous discoveries about genetic causation, for example.  It is sobering to see that, in areas where we don't have a really effective methodology or theoretical basis, we are wearing roughly the same sandals as our ancient predecessors.

Cherries; Wikipedia
Intoxicated by the many genuinely dramatic discoveries of modern, systematic science, we do our own cherry-picking, and tend to suspend judgment where findings are less secure, dressing our explanations in sophisticated technological hand-waving.

When we don't have actual good explanations, we make up good-sounding stories, just as our forebears did, and they're often widely accepted today--just as they were then.

Tuesday, January 31, 2012

Did Darwin run a pyramid scheme? Darwinian method, continued

Before the period of the Enlightenment in science, roughly starting with Francis Bacon and Galileo and others about 400 years ago, the prevailing model of knowledge (among scientific types, at least, not farmers and craftsmen and others who actually earned a living) was largely attributed to Aristotle, from the 4th century BC.  According to this view of how we should figure out the world, we are hard-wired to understand the nature of Nature (sounds like a lot of genetic or Darwinian determinists, doesn't it?).  Thus, knowledge could be deductive (the classic example: 1) All men are mortal, 2) Socrates is a man, 3) Therefore, Socrates is mortal).  The basic truths were known and were in that sense axioms from which predictions about facts to be found could be deduced.  In a sense, the facts were latent in the assumptions.  A theory came first, and led to many facts.  (The BBC Radio 4 program In Our Time featured a nice discussion of the Scientific Method last week.)

But the Enlightenment turned that idea on its head.  The new idea was a scientific method that started with observation rather than inspiration, and built up a pyramid of understanding.  First, by the process of induction, many observations were made and seen to be consistent, and these lay at the base of knowledge (All the swans I've ever seen are white, therefore all swans are white).  Other kinds of generalization built upon this base, up to the top of a pyramid of understanding: the final theory that we infer from facts.

When Darwin published his theory of evolution, of descent with modification screened by natural selection, from a common ancestral form, it was a challenge to accepted wisdom.  The scientific method was well established, but religious explanations of life were rife.  Darwin's theory certainly challenged this big-time!

Now, Darwin amassed countless facts--it was one of his incredible fortes.  From these he inferred his theory, and on the face of it this would seem to be the scientific method if anything was.  But Adam Sedgwick, the geologist who had been Darwin's teacher and friend, was rather incensed by the theory of evolution.  Sedgwick was a seriously Christian believer, and could not abide this threat to all that he held dear.  He lambasted Darwin, not explicitly because Darwin contradicted biblical explanations, but because Darwin's theory was (in our words) Aristotelian rather than Baconian: it was incorrect, old-fashioned, and not real science at all!

Inverted pyramid, the Louvre
Sedgwick basically argued that Darwin inverted the proper pyramid of knowledge.  Darwin claimed to be using inductive reasoning by bringing so many different facts to bear on a consistent theory.  But in fact, argued Sedgwick, this was not induction at all!  That's because Darwin treated the outcome of life that he observed as if he could draw conclusions from it inductively, when in fact life had only played out on the Earth once.  Thus, Darwin was taking a single observation, partitioned into many minute facts to be sure, and generalizing about life as if evolution had been observed again and again.  This, said Sedgwick, was old-fashioned, a priori, theory-driven reasoning that Aristotle would be proud of, but it did not have the empirical truth-value that the scientific method was developed to provide.

In various discussions of this topic, then and since, it appears that Darwin largely conceded the formal point, but of course stuck to his guns.  He could (and we can) predict new facts, but they are details that can immediately be fitted (or, Sedgwick would perhaps argue, retro-fitted) into the theory.  Yes, there was diversity in the world, but this could have arisen by other processes (such as special Creation), so the theory of evolution was not the only possible explanation.  It was not, argued Sedgwick, properly inductive.

One could argue that we have used this theory in so many ways, modeled mathematically and tested experimentally in artificial selection, to predict formerly unknown phenomena about life, that the theory has clearly stood the test of time.  Many facts about the one process on earth could be used to generalize about the process as if it could be repeated.  We argue that different species in different places each represent replicate observations from which the process of evolution can be induced.  Or, one could argue, inductive reasoning is just one way of arriving at convincing accounts of the nature of Nature.

One might even go all the way with Aristotle, and say that for the very reason that evolution did occur, our brains were adapted (by Darwinian processes!) so that we are built to understand Nature!  The argument probably wouldn't hold much water, but it's a thought.  In any case, the fact that evolution only occurred once does suggest that the idea was cooked up by Darwin in a non-inductive way--even if his theory was built upon countless observations, but of one single process.

The triumph of the Darwinian method, to use the title of a book by Michael Ghiselin that we posted on in October and November 2011, now pervades the life sciences.  There are things this allows us to do that are not exactly inductive, but are close to inductive reasoning in many ways.  This has to do with the nature of variation and how it's to be explained.  In a later post we'll discuss this in light of DNA, totally unknown to Darwin.  There are substantial problems, and many of the inferences we make about specifics of the past are speculative, but overall, Darwin was not a Madoff, and did not hustle us with a pyramid scheme!

We'll see how Darwinian concepts, more than any other, enable us to understand why DNA sequence can be 'random' on its own, in the sense that the A, C, G, T's along DNA look random by various statistical tests: the nucleotides in one place don't predict those nearby or elsewhere along the sequence.  Yet the nucleotides are not just letters in a computer file, and are in fact anything but random.  Indeed, the concept of randomness has to be revised because DNA sequences evolved.
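
To make the 'various statistical tests' idea concrete, here is one minimal sketch of our own (not something from that future post): a chi-square test of whether the base at one position predicts the next.  It assumes Python with SciPy available, and runs on a simulated sequence rather than real genomic data.

```python
# Illustrative only: does one base predict the next?  A chi-square test of
# independence on adjacent-pair (dinucleotide) counts.  The sequence below is
# randomly generated, standing in for a real stretch of DNA.
import random
from collections import Counter
from scipy.stats import chi2_contingency  # assumes SciPy is installed

random.seed(1)
seq = "".join(random.choice("ACGT") for _ in range(10000))

bases = "ACGT"
pair_counts = Counter(zip(seq, seq[1:]))                      # counts of each adjacent pair
table = [[pair_counts.get((a, b), 0) for b in bases] for a in bases]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, degrees of freedom = {dof}, p = {p:.3f}")
# A large p-value is consistent with 'the base at one position does not predict
# the next'; a small one would point to local structure that such a test can detect.
```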

Tuesday, July 5, 2011

How evolution's hope evolved away

In the incoming tide (or was it an onslaught?) of science in the 18th and, especially, the 19th century, a few things converged, with interesting if perhaps saddening impact on western thought.

In the dawning of the age of science, leading thinkers advocated a strongly empirical way to understand the world: through observation, what we now call science.  The widely held belief in this period, known as the Enlightenment, was that empiricism would reveal knowledge of the nature of Nature that would enable humans to engineer a better, more perfect society--a Utopia.  This society would meet physical needs as well as rectify social inequities.

At the beginning of this era, Newton's and Galileo's physics had powerfully shown that, if one looked to observation rather than received doctrine or faith, the world followed natural law.  Natural laws unchangingly governed the universe: they dictated how the stars did, do, and would forever go 'cycling on according to the fixed law of gravity'.  That phrase is in Darwin's last paragraph of the Origin of Species (1859), but Darwin had anything but a static world in mind: instead, he was the century's icon of unremitting, mechanical change.

Darwin's insights reflected, and were influenced, enabled, and inspired by, the advances in geology showing that geological processes, while not static, were very slow and sometimes cyclical.  As a famous founding article in geology by Hutton (1788) ended: 'no vestige of a beginning, no prospect of an end.'

But Darwin showed that life on earth was changing in a gradual but very different way: it was evolving, and from a beginning!  Darwin refined and reflected thinking by many others who were suggesting general ideas about evolution, including the evolution of life.  Many were relating scientific ideas about laws of change to society itself: Herbert Spencer, Karl Marx, and others were suggesting that society was as much subject to laws of Nature as the earth and life were.  This thinking about society (and to some extent Darwin's thinking about life, and that of many of his contemporary evolutionists) included a strong sense of progress.  In ways resembling Enlightenment Utopian thinking, they saw society as evolving from primitive states, to civilization, and on towards some idyllic end: life had evolved the noble brow of humans, and society would evolve to be more equitable.

These were all about material laws, and one might think they would drive definitive nails into the coffin of changeless, religious thinking, based as it was on received word rather than observed fact.  How on earth (so to speak) could anyone speak of 'Heaven'?  In fact, however, at the time many writers suggested that the progress in earthly evolution was leading to a perfected end-state, often envisioned as being 'with God' and that this march of progress was in fact ordained by God.

Alfred Tennyson
We are stimulated to write this post by yet another installment of our favorite radio program, the wonderful BBC Radio 4's In Our Time.  This was a discussion of Alfred Tennyson's poem In Memoriam A.H.H.  At Cambridge, the young Tennyson met an exceptionally brilliant student, one Arthur Hallam.  They became extremely close friends.  The program discusses their connections, and one can easily explore this on the web, including, of course, through this In Our Time episode.  Hallam valued the young Tennyson's poems and the two were intellectually compatible to an unsurpassable degree.  But then, suddenly and without warning, while traveling in Europe at age 22, Hallam died, apparently from an aneurysm.

Henry Hallam
Tennyson was totally devastated by this news--indeed he never recovered from the loss.  Seventeen years in the making, In Memoriam was his long, mournful eulogy or elegy to this closest of close friends.  The poem is wonderful to read, and famous as can be (' 'T is better to have loved and lost, than never to have loved at all.'; 'Nature, red in tooth and claw.').  Its moving and palpable pathos, and its metric magnetism, cannot be equaled.  If you have not read it, you really should (but do it patiently, to absorb the incredible, relentless, desperate mourning).

Of British intellectuals, Tennyson was among the most interested in, and aware of, contemporary science.  In Memoriam was published in 1850, some years before the Origin.  But evolution in life and geology was in the air, and others had been writing about evolution (in particular Robert Chambers, in his Vestiges of the Natural History of Creation, 1844), and in the kinds of progressive terms we mentioned above.  The earth and its life were working out God's plan so that its perfecting creature--humans--would eventually be worthy of, and end up with, God.  The progress in society, and the presence on earth of more complex organisms than had been here before, suggested that change was heading toward a goal the evidence made obvious.

This view--or hope--provided the only solace of Tennyson's mourning, and he could look to the science we've just mentioned to support that hope.  The earth was mechanical, cruel, relentlessly dishing out sorrow.  All here was temporary; all would disappear as streams 'Draw down aeonian hills/The dust of continents to be,' and those who once lived would be 'seal'd within the iron hills.'  As for species, 'a thousand types are gone....all shall go'--into the bowels of history!  If there was no way to really recover from the death of close friends, the relentless earth meant there was no way to escape from it, either.  Tennyson would never again shake Hallam's hand or hear his voice, at least not in this world.  But he consoled himself by writing that Hallam was a creature ahead of his time, 'a noble type, Appearing ere the times were ripe', and his disappearance may have been because he was too good for his time.  But if Hallam had been taken from him for the time being, Tennyson yearned and dreamt, hopefully, of the day they would be reunited in a perfected world.  There will come a 'crowning race....No longer half-akin to brute....to which the whole creation moves.'

The Rapture redux, in the raiment of science
For a while, especially before Darwin, one could do as Tennyson did, and absorb the factual realities of earthly change into the teachings of some aspects of religion.  But as science progressed, and the implications of biological evolution were realized, it became less and less possible either to see real progress in how life worked, or to find any evidence of a march to a specifiable, much less seeable, end on the evolutionary horizon.  Eventually, the earth would burn up or burn out, and return to the unrelenting cold womb of space, and life would go with it.

How could someone who had absorbed the idea of change--even progressive change--ever have got the idea that there would be an end-point?  Well, if the 2nd law of thermodynamics says the universe is moving towards maximum disorder (called 'entropy'), or randomness of matter and energy, then why can't there be some biological or societal end-point?  The comparable point to entropy might be when there was no social inequality, or when the mind had reached maximum perfection (call that 'God').  It's a stretch, because in a sense it implies there's a maximum IQ for God's perfecting creatures to reach (maybe that's God's IQ?).  In any case, define it how you want, one can imagine, or invent, such states.  Still, there was no evidence for them--in life, or in society.

Darwin may have grudgingly agreed that there might be a God who started it all, but the clockwork mechanism of evolution provided no goal or endpoint for life, nor so far as we know did Darwin ever make any such suggestion.  And as science has worked it out over the subsequent 150 years, our understanding of life as the product of evolution has largely closed the coffin lid on such goal-headed views of life.  If one wants to believe in a theistic religion, an Omega state, an after-life, or a benevolent deity, we know that one must seek it in some non-material rationale: the scientific study of the material earth simply doesn't provide any empirical support for such views.  Except perhaps that 'intelligence' will lead to rapid human-induced extinction of our own and all other life....

Religion does, however, offer calm and solace against seeing the world in its stark, remorseless reality.  In Memoriam expresses the value of such a view.  Tennyson's grief was so deep that he needed at least that hope to hold onto, even if tentatively and with doubt--'Believing where we cannot prove'.  Even today it is a poem to be read, and, as discussed in In Our Time, one that has given solace to many persons weeping in the long, dark night for a lost loved one:

 Tears of the widower, when he sees
  A late-lost form that sleep reveals, 
  And moves his doubtful arms, and feels
Her place is empty, fall like these.

Perhaps the often-strident and prideful Darwinian atheists should take some pause.  They too, and you, and we, will someday experience Tennyson's grief.  Illusion?  Maybe.  But there is more to life than science.

A warmth within the breast would melt
  The freezing reason's colder part,
  And like a man in wrath the heart
Stood up and answer'd "I have felt."

Tennyson's is a view not to be frivolously dishonored, even if the evolution of the science of evolution has unremittingly evolved away from any such notions of progress towards an idyllic endpoint. Or, even, from wishful thinking.

Wednesday, December 29, 2010

Boondoggle-omics, or the end of Enlightenment science?

Mega-omics!
We're in the marketing age, make no mistake.  In life science it's the Age of Omni-omics.  Instead of innovation, which both capitalism and science are supposed to exemplify, we are in the age of relentless aping.  Now, since genetics became genomics with the largesse of the Human Genome  Project, we've been awash in 'omics':  proteomics, exomics, nutriomics, and the like.  The Omicists knew a good thing when they saw it:  huge mega-science budgets justified with omic-scale rhetoric.  But you ain't seen nothing yet!

Now, according to a story in the NY Times, we have the Human Connectome Project.  This is the audacious, or is it bodacious, and certainly grandiose grab for funds that will attempt to visualize and hence computerize the entire wiring system of the brain.  Well, of some aspect of some brains, that is--of a set of lab mouse brains.  The idea is to use high resolution microscopy to record every brain connection.

This is technophilia beyond anything seen in the literature of mythical love, more than Paris for Helen by far.  The work is being done by a consortium, so different mice will be scanned, and these will be inbred lab mice, with all the at least partial artificiality that goes with them.  The idea that this, orders of magnitude more complex than genomes, will be of use is doubted even by some of the scientists involved....though of course they highly tout their megabucks project--who wouldn't?!

Eat your heart out, li'l mouse!
One might make satire of the cute coarseness of the scientists who, having opened up a living (but hopefully anesthetized) mouse to perfuse its heart with chemicals and prepare the brain for later sectioning and imaging, occasionally munch on mouse chow as they do it.  Murine Doritos!  Apparently as long as the mouse is knocked out you can do what you want with it (I wonder if anyone argues about whether mice feel pain, as we are now forced to acknowledge that fish do?).

This project is easy to criticize in an era of high unemployment, people being tossed out of their homes, and the undermining of welfare for those who need it; and in the health field itself, well, you already know the state of health care in this country.  But no matter: this fundamental science will some day, perhaps, help out some well-off patrons who get neurological disease.

On the other hand, it's going to happen, and you're going to pay for it, so could there be something deeper afoot, something with significant implications beyond the welfare of a few university labs?

But what more than Baloney-omics might this mean?
The Enlightenment period that began in Europe in the 18th century, building on international shipping and trade, on various practical inventions, and on the scientific transformations due to people like Galileo and Newton, Descartes and Bacon, and others, ushered in the idea that empiricism rather than Deep Thought was the way to understand the world.  Deep Thought had been, in a sense, the modus operandi of science since classical Greek thought had established itself in our Western tradition.

The Enlightenment changed that: to know the world you had to make empirical observations, and some criteria for doing so were established.  There were, indeed, natural laws of the physical universe, but they had to be understood not in ideal terms, but through the messiness of observational and experimental data.  A major criterion for grasping a law of nature was to isolate variables and repeatedly observe them under controlled conditions.  Empirical induction of this kind would lead to generalization, but it required highly specific hypotheses to be tested--what has since come to be called 'the scientific method'.  It has been de rigueur for science, including life science, ever since.  But is that changing as a result of technology, the industrialization of science, and the Megabucks Megamethod?

If complexity on the scale of the things we are now addressing is what our culture's focus has become, then perhaps a switch to this kind of science reflects a recognition that reductionism is not working the way it did for the couple of centuries after its Enlightenment launching.  Assembling many separately varying factors into coherent or 'emergent' wholes, factor by factor, may not be an effective approach, and merely enumerating the factors may not yield a satisfactory understanding.  Something more synthetic is needed: something that retains the reductionistic idea that the world is assembled from fundamental entities--atoms, functional genomic units, neural connections--but recognizes that to understand it we must somehow view it from 'above' the level of those units.  This certainly seems to be the case, as many of our posts (rants?) on MT have tried to show.  Perhaps the Omics Age is the de facto response, even a kind of conceptual shift that will profoundly change the nature of the human approach to knowledge.

The Connectome project has, naturally, a flashy web site, and is named 'human' presumably because that is how you hype it, make it seem like irresistible Disney entertainment, and get NIH to pay for it.  But the vague ultimate goal and the necessity of making it a mega-Project may be yet another canary in the mine, an indicator that, informally and even sometimes formally, we are walking away from the scientific method, away from specific hypotheses, to a different kind of agnostic methodology: we acknowledge that we don't know what's going on but, because we can now aim to study everything at once, the preferred approach is to let the truth--whatever form it takes, and whether or not we can call it 'laws'--emerge on its own.

If that's what's happening, it will be a profound change in the culture of human knowledge, one that has crept subtly into Western thought.

Tuesday, October 5, 2010

The arrogance of science

We have not read Sam Harris's new book, the soon-to-be bestseller, The Moral Landscape: How Science Can Determine Human Values, but we have watched his TED lecture on the subject, and read Appiah's review in the Sunday NYT and we're pretty sure we're not likely to read the book.  But of course that isn't stopping us from having something to say about it.

Two things disturb us about Harris's argument.  (If you've read the book and can tell us that reading it would change our minds, please let us know -- we'd love to be wrong on this.)  As we understand it, Harris's argument is both arrogantly imperialistic -- or worse -- and non-Darwinian, which is rather ironic coming from someone arguing that science will out the Truth.  The 'logic' of the argument is to put together intelligent-sounding phrases that have little actual content....especially little scientific content.

Best known as one of the New Atheists, Harris has written previously on how he knows there is no God.  He argues in his new book, and in the lecture, that only science can answer questions of "meaning, morality and life's larger purpose" (as quoted in the review).


Which prompts us to ask, Where is existentialism when we need it?  Better yet, let's call it Darwinian existentialism.  If we are truly to take the lessons of Darwinian evolution to heart, we must accept that there is no "larger purpose" to life.  The only purpose to life, which we don't ourselves construct, is to survive and reproduce.  And even that is not a purpose to life itself, which to an arch Darwinian might be not to survive, so something better can do it instead.  Or to expend solar energy in some particular way.  To argue otherwise is to position humans above Nature, which is precisely what Darwin and his contemporary supporters argued was biologically not so (though even Darwin fell into that ethnocentric trap in Descent of  Man).

Further, if we accept Darwinism in the raw, there is no meaning or morality for science to find.  Meaning, morality and purpose are constructed by us once we've got food and a mate.  As animals with history and culture and awareness of both, we imbue our lives with values and morals and meaning, but they are products of the human mind.  This doesn't mean that they aren't important, or compelling, or even things to live or die for, but those judgments are our own.  And people with the same genome can adopt very different senses of meaning -- each equally important and compelling.

According to Harris, science can uncover not only facts, but values, and even the 'right values'.  Just as science can tell us how to have healthy babies, science can tell us how to promote human 'well-being'.  And "[j]ust as it is possible for individuals and groups to be wrong about how best to maintain their physical health," he writes, as quoted in the review, "it is possible for them to be wrong about how best to maximize their personal and social well-being."

What is this well-being of which he speaks?  Who says we or anyone should 'maximize' it, and who are 'we' in this context?  Well-paid professors?  If he meant Darwinian fitness we might pay attention, because that's the only objective measure of success that counts in a Darwinian world (unless it's ecosystem expansion, even at the expense of particular species).  But what he means is something much less empirically tangible -- ironically, for someone arguing that science will find it.  He means happiness.  This would be perfectly fine in the realm of psychology or Buddhism or philosophy but, to our minds, this argument of his is on the same playing field as religious arguments about morality and purpose -- which of course he would not accept -- and is even pre-Darwinian.

And, it wasn't that long ago that Science decided that homosexuality wasn't an illness to be cured, or that phrenology wasn't in fact enlightening, or that bleeding patients wasn't a cure -- and of course there are many other such examples.  When what was once True becomes False, what does this say about Science and its ability to find the ultimate Truth? Why would anybody think we're right today....unless it's from ethnocentric arrogance?


The Enlightenment period was the age in which the belief grew that modern science could be used to create a better world, without the suffering and strife of the world as it had been.  It was the world of the Utopians.  Their egalitarian views were vigorously opposed by the elitist right ('we're just scientists telling it like it is') in the form of Thomas Malthus, Herbert Spencer, and the strong Darwinians, who rejected the more idealistic thinking.  The Science Can Find the Moral Truth view grew through much of the 19th century, but its consequence, 'modernism', was rejected after science gave us machine guns, carpet bombing, eugenics, the Great Depression, dishonorably wealthy industrial barons, and other delights of the 20th century.  The reaction to that went under various names, but included things like cultural relativism and anti-scientific post-modern subjectivism.  Unfortunately, like any Newtonian reaction, it was equally culpable, if less bloody, in the opposite direction, by minimizing any reality of the world.

Cultural relativism, against which Harris rails, is the view that each culture is a thing of its own, and we can't pass judgment about the value of one culture over another, except as through our own culture-burdened egotistical eyes.  That is not the same as saying that we have to like someone else's culture, nor adopt it, nor need it be a goody-goody view that we have to put up with dangers from such culture (like, for example, the Taliban).  But there is no external criterion that provides objective or absolute value.   Racism and marauding are a way of life in many successful cultures; maybe by some energy consumption or other objective measure it's best for their circumstances.  Science might suggest (as it did to the Nazis and Romans and some groups today) that their way is the only way, the best way, Nature's chosen way.


Science may be a path to some sorts of very valuable Truth, and better lives, such as how to build a safe bridge or have painless dentistry (the greatest miracle of the 20th century!).  Regarding many aspects of our culture, we would not trade.  We ourselves would love to attain the maximum happiness that Harris describes.  But it is an arrogance to assume that in some objective sense that is 'the' truth. 

And what if the 'facts' said that achieving the greatest good for the greatest number (not exactly an original platitude, by the way) meant that people like us (and Harris) had to cut our incomes by a factor of 100, or 1000, for resources to be equitably distributed?  After all, the USSR implemented 'scientific' ideas of maximal good for the masses (communism, Lysenkoism), to the tune of tens of millions purged, frozen to death in Siberia, or starved because of failed harvests, and more.  The Nazi policies were explicitly based on the belief that Aryans were simply better than others, based on warped Darwinian truths, and we know what happened.

So, anyone who still does not recognize the danger in the smug self-confidence that one can find the ultimate truth through science is either another potential tyrant in the making, or hasn't read history.

Whether there can be some ultimate source of morality is a serious question, and if it has an answer nobody's found it yet.  Religion has no better record than materialistic science, nor does secular philosophy.  Nor does Darwin provide that kind of objective value system, especially in humans, where very opposed cultural values can be held by people toting around the same gene pool.

The Darlings of the Smug rise, like mushrooms, in every society.  They are glib, but so are demagogues of other sorts.  They're all potentially dangerous -- as are those for whom they serve as the intellectual justification.  Again, that is not to say we should adopt someone else's values, nor that we should hold back from defending ourselves against those who threaten us.

Still, oblivious to these points, Harris argues, as does the far right in the US, that cultural relativism is wrong and should be completely and utterly discounted.  Here are some quotes from his TED talk:
How have we convinced ourselves that every opinion has to count?  Does the Taliban have a point of view on physics that is worth considering?  No. How is their ignorance any less obvious on the subject of human well-being?  The world needs people like ourselves to admit that there are right and wrong answers to questions of human flourishing, and morality relates to that domain of facts.  It is possible for individuals and even for whole cultures to care about the wrong things.  Just admitting this will transform our discourse about morality.
Again, how is this different from, say, the Aryan line, which would say we have a right to decide and purge, all in the name of science (and, by the way, it was medical science as well as Darwinism)?  Why is this not the arrogance of imperialism all over again?

When the Taliban, the religious right, and the likes of Harris and the New Atheists all believe that only they are the keepers of the Truth, dominion can be attained not by science but by the wielding of power alone.

Tuesday, January 12, 2010

Knowledge is Power

At the dawn of the modern science era, in 1597, Francis Bacon, a founding empiricist, used the phrase 'knowledge is power'.  To Bacon, "knowledge itself is power": that is, knowledge of how the world works would enable whoever had it to extract resources and wield power over the world--science would enable Empire.

This view of science has persisted. It was important in the early founding of the Royal Society and other prominent British scientific societies in the 17th and 18th centuries and beyond. The technology and even basic knowledge that was fostered did, indeed, help Britannia to rule the waves.

Basic science was the playground of the classic idle wealthy of the 1700s and surrounding years, while applied technology was developed by people who were not, formally, beneficiaries of 'education' as it was done in those times.  In the US, major scientific investment, such as in large telescopes, was funded by private philanthropy--by wealthy industrialists who could see the value of applied science.

We tend, perhaps, to romanticize the 18th and 19th centuries, the era of Newton, Darwin, and many others who advanced science in all areas--geological, physical, chemical, and biological--without doing so for personal or financial gain.  But at the same time there was much activity in applied science and technology, and even in 1660, when the Royal Society was founded with government support, gain was one of the objectives.

An informative series about the history of the Royal Society and of other scientific activities in Britain was aired the week of Jan 4 on BBC Radio 4, on the program called In Our Time--the four parts are now available online.  Much of the discussion shows that the interleaving of government funding, geopolitics, and avarice was as important when the Royal Society was founded as it is now in driving science.

There can be no doubt about the importance of systematic investigation of the ways of Nature in transforming society during the industrial revolution.  The result was due to a mix of basic and applied science.  The good was accompanied by the bad: daily life was made easier and healthier, but episodes of industrialized warfare made it more horrible.  On the whole, it has allowed vastly more people to live, and live longer, than ever before.  But it has also allowed vastly more people to struggle in poverty.  (The discovery of novocaine for use by dentists may alone justify the whole enterprise!)

The post-WWII era seemed to foster lots of basic science.  But in the US the National Science Foundation and other institutions poured money into science largely, at least, in response to fears that the Soviet Union, whose space program was far ahead of ours, might gain on us in world prominence.  So there was a recurring pragmatic drive for supporting science.

University dependence on research grants was one product of this drive.  We think this has been awful for science, since careers now depend on money-generating by faculty, and that leads to safe, short-term thinking, even if more funds mean more opportunity.  The intellectually thin Reagan administration's demand that research should translate into corporate opportunity was just a continuation of the materialistic element of support for science.

In a way, we're lucky that basic, disinterested science actually got done, and lots of it at that!  Human society probably can't be expected to put resources into things as abstract as basic science, with no promise or obvious way to lead to better pencils, medicine, or bombs.  So it's no wonder that universities, bureaucracies, and scientists alike hype their personal interests in terms of the marvels to be returned to the funders.

Such a system understandably leads to entrenched vested interests who ensure their own cut of the pie. We routinely write about these vested interests and the effect we believe they have on the progress of knowledge. But, as anthropologists, we have to acknowledge that the self-interest that is part of the package is not a surprise. After all, why should we be able to feed off the taxpaying public without at least promising Nirvana in exchange? Human culture is largely about systematized resource access and distribution, and this is how we happen to do that these days.

Under these conditions science may not be as efficient or effective as it might otherwise be. A few MegaResearchers will, naturally, acquire an inordinately large share of the pie. Much waste and trivia will result. The best possible science may not be done.

Nonetheless, it's clear that knowledge does progress.  A century hence, it will be our descendants who judge what of real value resulted from our system.  The chaff in science, as in the arts, sports, or any other area of life, will be identifiable, and will be the majority.  But the core of grain will be recognized for its lasting value and impact.

BUT that doesn't mean we should resign ourselves to the way the system works, to its greed, waste, hierarchies, and its numerous drones who use up resources generating incremental advance (at best). That is part of life, but only by the pressure of criticism of its venality and foibles can the System be nudged towards higher likelihoods of real innovation and creativity in knowledge.

It's remarkable that blobs of protoplasm, evolved through molecules of DNA and the like from some primordial molecular soup, understand the universe that produced them as well as we actually do.  And we will continue to build on what we know; empirical, method-driven activity is a powerful approach to material gain.  Embedded in inequity, vanity, venality, and other human foibles, we nonetheless manage to manipulate our world in remarkable ways.

The history of the Royal Society and other scientific societies, which mirrors the growth of society generally and is recounted in these BBC programs, is a fascinating one.  But that doesn't change our belief that, in principle at least, we could make better use of our knowledge and abilities to manipulate our world toward less inequity, vanity, venality, and so on.