Tuesday, November 10, 2009

Ig-nor'-ance, or Hire the Fired Coach!

Dismal Science
As readers of this blog will know, we regularly listen to many of the excellent radio programs on the BBC. There was recently an episode of a Radio 4 program called Analysis about the state of professional economics in light of the current state of the world's economies.

The point was to ask why the self-characterized "dismal science" of economics so dismally lived down to its reputation in failing to predict the current awful state of things. It would take a rather bold economist not to be at least a bit apologetic, and some actually are that unrepentant, justifying models as at least putting constraints on our understanding of the world, and as being adjustable so as to do better next time (though without evidence that they can be better).

But that is pretty lame, to say the least, and some economists are openly nearly honest, saying that macroeconomic models (about how things work on the large scale, rather than at your local 7-Eleven) are essentially worthless. We say 'nearly' honest, because perhaps the most honest thing to say is that economics departments ought to be shut down and the resources diverted to something useful (one could say similar things about much of what happens in 'research' universities these days).

In addition to economic theory being essentially useless in making predictions, and hence guiding actions, theories tend to come around episodically. That's why there is a lot of Ode to Keynes these days. But when ideas cycle like this, it's either because we are ignorant (but then why are we so assertive about our wisdom and theories?), or we are ig-nor'-ant: we ignore the past and its lessons.

Robert Proctor, an historian at Stanford, and, sadly, late of Penn State, calls the study of culturally-induced ignorance agnotology, although he is particularly interested in the perpetuation of knowingly inaccurate or false information for some purpose. Here, we're less interested in conspiracy theory than in why false or misleading theories are repeatedly perpetuated, in spite of their often well-recognized limitations.

Ig-nor'-ance and its consequences (or not)
Really, those who perpetrated the false theories that led to policy that led to disaster should be shunned, ostracized, and ignored. Instead, the head perps, like Alan Greenspan, will still be published, still get job offers at prestigious think tanks or Ivy League universities (at high salaries and teaching very little), will still get grants, and will get even larger fees for giving rubber-chicken dinner talks.

This is like the well-known phenomenon in sports whereby a head coach whose team regularly loses, and who is fired for it, is quickly hired as the coach of another team rather than left to go be a health-insurance salesman. Of course sports doesn't do much damage, except to the bones of football players, but economists have done huge damage to people (but generally not, surprise-surprise!, to themselves) who lose jobs, houses, and normal lives as a result of culpably, knowingly false theories.

Ignor'ance makes ignorance a matter of policy
Ignor'ance is quite common and we can think of several anthropological explanations for it. You can't make a career in an institutionalized society like ours by echoing your masters (that was what academics did for centuries--being experts on the Bible, Aristotle, and so on--until the research focus of universities began to bloom). You have to make a name for yourself, to find something new, or to show that you're smarter than your forebears were. And because we need to earn a living, we can't agree to quit or shut down our paper-publishing factories just because we don't know as much as we need to think we do (to pamper our egos) or claim (to get grants and have secure jobs).

Ignor'ance makes ignorance a matter of policy, if students or junior faculty do not know or respect the history of their fields. It becomes a willful way to, in a sense, keep getting credit for old ideas dressed in new technical clothes. This is the case in economics, where the new clothes are the nearly instantaneous trading speeds, and vastly speedier computers that allow models to be ever more mathematical and automated, but without tethering them to the real world.

Ignor'ance and biology
There are parallels in all this to the subject matter of this blog. One can go back to Darwin and earlier to see many of our current ideas in genetics or biology stated, sometimes only in rudiment, but sometimes clearly. The fact that we know much more now than they did then ameliorates this a bit, but not entirely. We engage in ignor'ance as well as ignorance in many ways when we persist in dogmatic views of evolution or genetic determinism. Often a theory drifts out of sight, and then is ignor'antly rediscovered and touted as if there were new data that made it more plausible. Sometimes there are, indeed, and that's progress!

But often the data are different but don't change things in any serious way. Analysis of human variation (and the concept of 'race', whatever euphemism is used for it), aspects of arguments about natural selection and speciation, and so on are examples. Genetics tells us nothing today that can justify human geneticists (with or without any proper understanding of anthropology) making pronouncements about human races--but they do, often based on new and exotic molecular or statistical data. But these statements are virtually identical to what was said 100 years ago, and 50 years before that, in Darwin's time, and in earlier iterations. Other biological parallels include the nature/nurture gene/environment pendulum, views about the nature of speciation, the importance of selection vs mutation in adaptive evolution, and many more.

We lose touch with the valuable things in our past when professionalism in its varied manifestations leads us to engage in ignor'ance. This happens for various reasons, not least being our love affair with technology that leads students to disregard things from the past, and the lack of time or attention given to courses that include the history of our discipline. This isn't to romanticize the past or just grumble about the present. We have to make our careers, of course, but we waste resources and even those valued careers in a sense, if we don't teach, learn, or remember our past--especially when we have every way and reason to know better.

Not learning from non-progress
If we haven't progressed in understanding in 100 years, that is a sign of weakness of our science and the complexity of the world we're trying to understand. It is a problem only when we don't come clean to the public, to funders, to students, and most of all to ourselves about our limitations. Like well-done negative experimental results, non-progress is a kind of very useful knowledge: that our current approaches are not cogent or sufficient relative to the problem at hand.

An obvious part of the solution is to slow down the train and make the reward system reward conceptual novelty, not just technical production (of which, of course, there is an amazing and potentially amazingly valuable amount). Instead, our industrialized 'productivity'-driven culture has spread into science. But if the problem is hard, let's work on it humbly, rather than dismissing a new idea and recycling an older one, etc. Let's rethink our conceptual approaches, not just gear up for more intense technological data gathering, computer programming, and the like.

A remarkable statement!
The way we are embedded in our culture is illustrated by another remarkable statement made in the BBC program about economic theory. An economic historian said correctly that the mathematizing of economics made things more structured than the real world is, and ignored the emotional or other aspects of real, local human actions.

He is an advocate for what one might call a more subjective or sociological view of economics. Rather than throw out the formal models, he said professors from the social sciences and economic 'theorists' ought to get together and unite their views. The resulting mix would be less rigidly mathematical, and 'less scientific but closer to the truth.'

This is nearly an exact quote and is a remarkable reflection of the iron cage of culture. Why is this? Because it shows the deep level to which our culture--our worldview itself--has decided that if something is rigidly mathematical (or molecular in the case of biology and genetics) it is 'science', but if it's not like that but is closer to the truth of the real world, it isn't science!

We hate to say it, but this shows the kind of thing that post-modernists, the inveterate opponents of much of our culture's obsession with science and technology, would point out as the deeply cultural matrix of all that is done. Because if something is closer to the truth, then by definition it should be the most 'scientific'. The statement shows, as did others on the program, the way that academicians in essence protect their territory without even realizing that's what they're doing.

Complexity
Our particular concerns are genetics and evolution, but society and other sciences also face many problems with similar characteristics: in today's parlance, they are 'complex'--they involve many contributing factors and many or most may be individually very minor.
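A toy numerical sketch of what 'individually very minor' means (our own illustration, with made-up numbers, not a genetic model per se): when an outcome is the sum of many small independent factors, the aggregate is smooth and well-behaved in distribution, yet no single factor tells you much of anything.

```python
import random

def complex_outcome(n_factors=100, rng=None):
    # A toy 'complex' outcome: the sum of many independent factors,
    # each contributing only about 1% of the total variance.
    rng = rng or random.Random()
    return sum(rng.uniform(-1, 1) for _ in range(n_factors))

rng = random.Random(42)
values = [complex_outcome(rng=rng) for _ in range(10_000)]

# The outcomes cluster into a smooth bell curve even though no one
# factor is individually important.
mean = sum(values) / len(values)
var = sum((v - mean) ** 2 for v in values) / len(values)
print(f"mean ~ {mean:.2f}, variance ~ {var:.1f} (theory: {100/3:.1f})")
```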

That we haven't adequately understood complexity is not a fault of ours, since the problems are tough ones. But it is a fault if we refuse to accept what we already know, and continue to do the same thing again and again (as the saying often attributed to Einstein goes, insanity is doing the same thing over and over and expecting the results to be different).

We can be excused for our manifest ignorance, but not for our ignor'ance.

-Ken and Anne

7 comments:

Arjun said...

Particularly in the sciences, I've found Nobel Prizes to be common perps as far as the perpetuation of dubious theories is concerned.

Nassim Nicholas Taleb provides a more experienced perspective on this idea:

http://www.thefinancialexpress-bd.com/2007/11/10/16761.html

What I term the 'statistical basis of knowledge' also appears to be at fault: the notion that the truth of a statement exists in direct proportion to the number of people who believe and espouse it.

Solomon Asch's conformity experiments provide some experimental confirmation of this viewpoint:

http://en.wikipedia.org/wiki/Asch_conformity_experiments


While I feel that science's foundation lies in relentless questioning, I've observed numerous cases of what I term 'epistemic hegemony,' through which failure to conform to prevailing paradigms of research and thinking has stymied more than a few academics' professional careers, in effect requiring that academic researchers 'echo their masters.'

A graduate student I know in the Loop Quantum Gravity Group of Penn State's Physics Department (loop quantum gravity being a competitor to String Theory in settling the rifts between quantum mechanics and general relativity) was admonished prior to joining that he might encounter difficulty in obtaining a job following his graduation, owing to the dominance of String Theory in the field of quantizing gravity.

David Bohm, another physicist, spent half of his professional career ostracized from the mainstream physics community due to his endorsement of a deterministic quantum mechanics, which flew in the face of the accepted (yet still unintuitive) idea that the universe is fundamentally probabilistic, an idea that has sparked so much philosophic debate over the past several decades.

Though I'm sure you're familiar with the fellow due to your association with the Santa Fe Institute, I also mention W. Brian Arthur, an economist, and his idea of 'increasing returns': that a good which gains market share can thereby become more likely to gain still more, which contradicts conventional economic theory claiming that additional units of input to any given market will eventually yield smaller and smaller corresponding returns. Though it seems to have gained wider acceptance, I believe that the idea still remains on the fringe of mainstream economic theory.
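To make the lock-in effect concrete, here is a toy simulation in Python; the setup is my own made-up illustration in the spirit of Arthur's argument, not his published model:

```python
import random

def simulate_market(steps=10_000, seed=None):
    # Two equally good technologies, A and B. Each new adopter picks A
    # with probability equal to A's current market share (a Polya-urn
    # rule), so popularity feeds on itself: increasing returns.
    rng = random.Random(seed)
    a, b = 1, 1  # one initial adopter each
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)

# Identical, perfectly symmetric markets end up locked in to very
# different shares, driven purely by chance among the early adopters.
for seed in range(5):
    print(f"run {seed}: final share of A = {simulate_market(seed=seed):.2f}")
```

Under conventional diminishing returns the feedback would push shares back toward 50/50; here the early accidents are never corrected.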

I hate to add to this glut of examples, but another that just came to mind was that of improvements on Robert Millikan's oil-drop experiment to determine the electron's charge. Although their results were in fact MORE ACCURATE than Millikan's first reported value, later experimenters were at first hesitant to report them, out of concern over the discrepancy between their values and the 'established' one.

Your remark on biology not being taken as seriously reminds me of the assertions of a physics major friend of mine that 'biology is not a science' since its practice lies primarily in description rather than (quantitative) prediction of the properties of its subjects of study. He also holds faith in the intellectual hierarchy of

humanities < biology < chemistry < physics,

preferring to consign mathematics to the role of a toolbox for physicists.

In general I see method, or heuristics, as the skeleton for this hierarchy, in that quantitative (in particular, predictive) studies have ascendancy over qualitative ones, supposedly maintaining a tighter seal that keeps subjectivity, that bane of 'scientific' inquiry, from oozing into the analysis.

But what does a predictive theory do?

Right now I don't see a predictive theory as telling us so much what 'will' happen as describing 'all possible cases' of a given system, if you understand what I'm getting at; it is analogous to a theorem in that, for a given set of conditions, we will see "such and such" to be the case.


What to do, what to do, what to do...?

Anne Buchanan said...

You make a lot of good points, Arjun. The sociology and psychology of all this is quite interesting. As for what to do, recognizing that science isn't always 'just the facts' is already a big step.

Ken Weiss said...

We have a post coming up on another, somewhat less culpable, way in which a view becomes accepted in part by repetition.

Probabilistic causation is not what science intuitively is about, even if it may be the way of the world. A flipped coin only appears to be probabilistic, in the deterministic-Nature view: if we knew its properties etc. we could predict the outcome (in fact, a statistician at Stanford has done this--see my Crotchets & Quiddities column on 'saving the Persians').
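Here is a minimal sketch of what I mean, in Python; the model and numbers are purely hypothetical simplifications (rigid rotation, no air resistance, no bouncing), not the Stanford analysis itself:

```python
def coin_outcome(spin_hz, up_velocity_ms, g=9.81):
    # Deterministic toy model of a flip: launched heads-up with a given
    # spin rate (revolutions/sec) and upward speed (m/s), the coin's
    # flight time and total rotation, and hence its outcome, are fixed.
    flight_time = 2 * up_velocity_ms / g         # time up and back down
    half_turns = int(2 * spin_hz * flight_time)  # completed half-rotations
    return "heads" if half_turns % 2 == 0 else "tails"

# Nothing random here, yet modest changes in the launch conditions flip
# the result, which is why the outcome *looks* probabilistic to anyone
# who can't measure the launch precisely.
for spin in (20.0, 20.7, 21.6):
    print(f"spin {spin} rev/s: {coin_outcome(spin, up_velocity_ms=2.5)}")
```

The point of the sketch is only that apparent randomness can live entirely in our ignorance of the initial conditions.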

Perhaps by their excessive zealotry, post-modernists (and other schools of that ilk) got their own baby thrown out with their ample bathwater.

It's often far from easy to filter the culturally convenient from the more hard-nosed realities of our view of Nature.

Arjun said...

Dr. Weiss, could you elaborate on how post-modernists have thrown the baby out with the bathwater?

Ken Weiss said...

In their various guises, from 'deconstruction' to 'post-modernism', the reaction to the 'modernist' view that science and technology would solve society's problems (among other things) became a tribal zealotry of its own. It often turned into a view that denied objectivity to a ritualistic extent, and that indulged in its own contorted academicized jargon to the point of non-sense (and nonsense).

I'm oversimplifying but like other rabid schools of thought, it became exclusionary, accusatory, and so on and destroyed or undermined many areas of intellectual life (it had little impact on the real world, as far as I can see).

The Sokal hoax article on postmodern physics is but one instance in which the problem was made clear.

As the old quip goes, the postmodernists denied objective science but were willing to fly to their meetings on airplanes.

This is a quick cartoon of the issues. The 'baby' is the truth that science is not entirely objective and is a part of our culture, like our other activities and beliefs. In that sense, rejection of postmodernism is a convenient self-service to and by scientists.

Again, this is a very cursory response about what has been a very big subject.

Arjun said...

So are you saying in your previous comment that a postmodernist would likely blindly characterize ALL OF SCIENCE as probabilistic?

Ken Weiss said...

No, not really. It is just that the extreme and most zealous PoMo schools of thought, again very crudely characterizing them, think that much of science is based on culture-bound perspectives and in that sense represents choices about how to interpret the real world.

The choices may not be conscious, as feminist philosophers would say about the male bias in what scientists do or how they do it. Different theories work on different criteria for 'truth' (see Feyerabend's statement of this viewpoint in his book Farewell to Reason).

The 'PoMo' view is that we may usually be unaware of this subjective element in what we do. Why do we pick on DNA and on competition as our core views of life? Why do we choose to do reductionist approaches rather than more holistic ones? And so on.

This has nothing to do with probabilism, I think, but really with subjectivism and the denial that the world is as objective as scientists like to think.

I have no idea what they'd say about probabilistic causation, but one can imagine it: for example, they might say the idea that probabilistic phenomena are really deterministic if we knew enough is an industrial-age belief in technology rather than a truth. Or that the assumption that a distribution underlies probabilistic statements is itself a subtly deterministic one. Etc.