In the history of 20th-century physics there were some profoundly unsettling conundrums. Among the most important were observations of what seemed to be an inconsistency in the idea that the world was made up of discrete objects (e.g., particles and other 'stuff') moved by continuous, Newtonian forces (e.g., gravity)--both of which were in a sense absolute. Separate, but fixed, laws applied to discreteness and continuity--so long, as Einstein showed, as you stayed within a particular frame of reference.
But various critical observations showed that such a duality of views just didn't work (nor did the idea of absoluteness). There were exceptions, all sorts of strangeness. The anomalies were precise (unlike those in biology), so debate could be focused. Adherents of traditional views clung fast to them. But then leaders of transformational change, Einstein and others, began to eat away at conventional wisdom. Was light a wave or a particle? Are electrons located in a specific place...or are they even 'particles'? The transformation that resulted is symbolized by Schroedinger's equation, which accounted for the quantum dynamics of the universe at the atomic scale. It's oversimplifying, but there, in one go, many things fell into place.
Do we need a 'Schroedinger equation' for biology? In the past couple of posts, we've noted reasons why we think we're in a period of biology in which we have amazing technology for identifying aspects of causal complexity, but are long on data and short on theoretical explanation. We listed some unease, rather similar to what physicists went through, about whether our current understanding of life is facing fundamental paradoxes, and we wondered if our own analog to the famous equation is in order.
What we need is Bell Labs, not Bell Telephone
Our posts generated a decent bit of Twitter-ink, including, apparently, a lot of agreement, but among the comments were suggestions like these:
1. "If you don't have a solution, you shouldn't raise the issues." The objection is understandable, but unreasonable: nobody likes a critic, much less a heretic. But this common reaction is circling the wagons. It's like saying that if I see your house on fire, I shouldn't tell you unless I have a fire-hose to put it out with.
2. "Until you tell us what to do, I'm going to continue doing what I'm doing." The objection is understandable, and it's how humans are--especially in our current careerist environment. But how costly in brain-cells and resources is that?
One major problem is that we have steadily been structuring universities as money-makers, venal institutions that imitate the rest of society rather than protected environments where intellectual activities can occur without needing to satisfy an immediate bottom line. Again, it's understandable, but society should have such oases if it wants real innovation.
By reputation, at least, Bell Telephone created Bell Labs as an isolated, protected arena for unconstrained thought. High-quality scientists were given a building, a coffee pot, and blackboards; the door was locked, and they were told to do whatever they wanted. Every now and then they had to report at least something of what they were doing...in case it might make for something useful for the phone company. The incredible results, over decades, showed that the model works.
Universities, too, were like that--it's why they're often called 'ivory towers'. Unfortunately, these venal days, universities are now much more like Bell Telephone than Bell Labs. That, we think, is tragic for society. But we, at least, are senior enough that we don't have to give a damn what the organization thinks, so, whether or not we have the appropriate level of talent, we do have the ability to think as freely as we want. We're effectively in Bell Labs! But really, so is everyone on the internet.
So, some thoughts from "Bell Labs"
Here are a few "What if?" issues that we think are worth musing over, in regard to the normal view of life in this arena, and how one might revisit or revise it.
1. Identical causes are not identical
If we have a 'gene' with two states, A and a (following Mendel's 1866 notation, as if nothing's changed since then), then we assume all A's are alike, and all a's are alike--and, extending this, that all AA's are alike. Risks are assigned to alleles (or, in some cases, genotypes like AA or Aa), and are assessed by counting (regression, chi-square tests) based on inherently subjective statistical decision-making criteria (whose results don't force us to make the putative decision if we don't like them!).
But what if things that are alike aren't alike, after all? What if each 'A' allele is functionally different? Even if we refer to a single SNP (DNA location), so that all 'A' alleles are chemically identical (unlike some that may be methylated), there will be some span around the site beyond which each instance is different. Not all AA genotypes are alike. As the span extends, at some point each 'allele' is unique, or becomes increasingly so. Concepts like Hardy-Weinberg equilibrium are misleading in various ways because they treat labeled identity as if it were functional identity, and aggregate categories (like 'AA'). Monoallelic expression, which may be far more widespread than standard thinking accepts (thinking which, again, goes uncritically back to Mendel in 1866), is then clearly important: each cell in an AA individual expresses only one of its 'A's. Likewise with dominance deviations for quantitative traits: we may be treating salad as if it's all carrots. The identity span becomes something to test or understand. There are obvious experimental ways to look at such things.
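To make the labeling point concrete, here is a minimal Python sketch--with entirely made-up frequencies, purely for illustration--of how a single 'AA' class under Hardy-Weinberg expectations silently aggregates many distinct genotypes once the 'A' label is split into functionally distinct haplotypes:

```python
import itertools
from collections import Counter

def hardy_weinberg(p):
    """Expected genotype frequencies at a biallelic locus, allele A at frequency p."""
    q = 1.0 - p
    return {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

# Standard view: one interchangeable 'A' allele at frequency 0.6.
hw = hardy_weinberg(0.6)
print({g: round(f, 3) for g, f in hw.items()})  # {'AA': 0.36, 'Aa': 0.48, 'aa': 0.16}

# What-if view: the 'A' label actually lumps three functionally distinct
# haplotypes A1, A2, A3 (hypothetical sub-frequencies summing to 0.6).
sub_freqs = {"A1": 0.3, "A2": 0.2, "A3": 0.1, "a": 0.4}
genotypes = Counter()
for g1, g2 in itertools.product(sub_freqs, repeat=2):
    genotypes[tuple(sorted((g1, g2)))] += sub_freqs[g1] * sub_freqs[g2]

# The single 'AA' class (frequency 0.36) now comprises six distinct
# unordered genotypes; labeled identity hides that internal variation.
aa_classes = {g: f for g, f in genotypes.items() if "a" not in g}
print(len(aa_classes))                        # 6
print(round(sum(aa_classes.values()), 10))    # 0.36
```

The aggregate frequency is unchanged, which is exactly the problem: counting-based statistics can't see the split unless we choose to look for it.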
2. Life is entropy generating, but negative entropy is its engine
Life is an evolutionary molecular reaction. We evolved from nested common origins, so species are built of similar things and hence serve as convenient nutrition for each other, as life 'tries' to disperse concentrated energy and materials in an entropy-generating way. The simplest causes (like classical 'dominance') are wholly negentropic: they concentrate function, because if you have the genetic variant, you always have its causal result. But we tend simply to enumerate the functional effects of localized variation in genomes. We do a lot of hand-waving about interaction, ROC-based risk estimates, systems biology, and the like. So far, though, that's not much more than poorly organized enumeration--another form of Big Data rather than something more systematic.
But what if variation operates differently? What if there are patterns of entropy, or some measure of variation, as one moves along or around or 'over' a genome--patterns among cells in individuals that can be related to the genome of the single cell they arose from, and patterns among individuals within a species--that one can relate to function? Since genomic function, and hence evolutionary trajectories, are not just linearly organized, perhaps we need a better way to characterize a proper entropy measure. Since not all things that seem the same are the same (point 1 above), the usual p log p conception of 'entropy' may not be the right way to think of this, because 'p' is a frequency measure that aggregates things as if they were identical. By conceptual analogy, the less entropy by some appropriate genomewide measure(s), the greater the functional effects; variation scattered more or less uniformly all over the place can't do 'work'.
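As a toy illustration of why the p log p measure depends on labeling (frequencies again invented for the example), the Shannon entropy of a distribution changes as soon as the 'A' class is split into distinguishable sub-variants, even though nothing biological has changed--only our bookkeeping:

```python
import math

def shannon_entropy(freqs):
    """Shannon entropy, -sum(p * log2(p)), of a frequency distribution."""
    return -sum(p * math.log2(p) for p in freqs.values() if p > 0)

# Treating all 'A's as identical, vs. distinguishing three sub-variants
# (hypothetical frequencies; the sub-frequencies sum to the coarse 0.6).
coarse = {"A": 0.6, "a": 0.4}
fine = {"A1": 0.3, "A2": 0.2, "A3": 0.1, "a": 0.4}

print(round(shannon_entropy(coarse), 3))  # 0.971
print(round(shannon_entropy(fine), 3))    # 1.846
```

A measure whose value depends this strongly on how we draw category boundaries is a measure of our labels as much as of the genome, which is why some labeling-independent notion of relevant variation seems needed.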
What is meant here by 'entropy' is very vague and thoroughly exploratory. But carrying the thought further: DNA, and hence genetic causation, is by itself inert. Biological function relates fundamentally to context (and it's layered, as things like RNA editing show). If one can characterize its (properly conceptualized) entropy--some measure of relevant variation--one might view its functional effects relative to its dynamically changing 'environment' (itself properly defined), in a way somewhat analogous to electromagnetism: much as passing through a magnetic field induces a current in a wire, genomes passing through environments induce functional outcomes. But this must be assessed on a genomic scale, since there will be wavelike interference among different parts of the genome, effects reinforcing or cancelling each other, so the overall entropic features may dictate the net result. If you think in terms of mathematical functions as a metaphor, perhaps we can construct a wave-like description of variation along genomes, or per 3D or 4D genome configuration, analogous to the quantum descriptions of, say, electrons.
3. Life is more exploratory than conservative
Evolution is often viewed in a mindlessly Darwinian way, as intense, force-like competition driving genomes toward ever-changing, ever-refining goals. Yet in the functional parts of genomes that we know of, what we mainly see among individuals or species is conservation. This we attribute to purifying selection. Adaptive selection does occur, but it, too, generally and quickly leads to reduced variation. So it has come to pass that sequence conservation is assumed to be a key, or even defining, criterion for biological function. Yet we now know of all sorts of aspects of genomes, like various non-coding RNAs or chromatin packaging or spacing, that represent functions that may be independent of sequence itself.
But what if sequence conservation is not the only, or not the main, criterion for biological function? What if the widespread mapping results showing 'polygenic' control of traits are correct, and each contributor makes so small a contribution that it evolves mainly by drift? Many of the elements are short, and hence easily created or erased by mutation, and their locations or binding affinities are variable. Many promoter and other replicable transcription or binding sites in DNA clearly seem organized (they have low entropy) but are not associated with surrounding conserved sequence. What if such multiplicity of effects makes them ephemeral? Then our approach to causal inference needs revision, pointing to the desirability of some other criterion--like, perhaps, something related to aggregate or distributional effects such as 'entropy'.
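The 'evolves mainly by drift' idea can be sketched with a neutral Wright-Fisher simulation (population size, starting frequency, and random seed are arbitrary choices for illustration, not estimates of anything): with no selective push, a small-effect variant's frequency simply wanders from generation to generation until chance fixes or loses it.

```python
import random

def wright_fisher(p0, n, generations, seed=42):
    """Neutral Wright-Fisher drift: 2n gene copies binomially resampled
    each generation from the current allele frequency."""
    rng = random.Random(seed)
    p = p0
    trajectory = [p]
    for _ in range(generations):
        copies = sum(rng.random() < p for _ in range(2 * n))
        p = copies / (2 * n)
        trajectory.append(p)
        if p in (0.0, 1.0):  # fixed or lost; drift is finished
            break
    return trajectory

# A variant starting at 10% in a diploid population of 500:
traj = wright_fisher(p0=0.1, n=500, generations=5000)
print(traj[-1])  # often 0.0 or 1.0 by this point: chance, not function, decides
```

If many such elements drift independently, conservation at any one of them is not expected, so conservation alone cannot be the test of whether they matter.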
These are just night-thoughts in the midst of an ongoing attempt to see not just whether really new thinking might be called for, but what that could possibly entail. Right now, we're squandering both brainpower and large amounts of public resources chasing rainbows, because we've institutionalized and industrialized thinking. That stifles originality.
In the history of science, it is usually puzzling facts like the above, and those we listed yesterday, that provoked hard thinking by persons who were free to do it. The successes came from just such people. So if we personally can do anything in a forum like a blog, perhaps it is just to raise issues for readers to think about.....