Thursday, July 11, 2019

Human races are not like dog breeds

If you googled for information comparing race in humans to dog breeds, we wrote this open access, peer-reviewed article specifically for you. There is even a glossary at the end of the main text, and some of the sources we cite are also open access. Thanks for reading.

Are human races like dog breeds? No.
Are human races the same as dog breeds? No.
Are human races just like dog breeds? No.
Are human races basically dog breeds? No.


Friday, May 10, 2019

The music of life--more than a collection of notes

My composer friend wants to be quite modern about creating beautiful music.  He doesn't like to use computer programs for composing but he has devised another 'modern' way to compose, given that, in writing a piece, he often changes his mind.  Scratching out notes on paper to replace them with 'better' ones makes for a real mess on the working pages, and he'd then have to transcribe his work onto new pages, and that in itself introduces room for mistakes.  So he had an idea.

He purchased a set of notes and musical symbols, printed individually on a kind of flexible plastic.  Copies of each possible note and notation element were in boxes in a little tray.  As he composed, he merely took each required note from its place in the tray, and used its static electricity to place it on a page with printed staff-lines.  If he changed his mind, it was easy to remove or replace a given note, and put it back in its box in the tray without generating an inky mess on the page and having to keep starting over to make his work-in-progress legible.  

But there turned out to be a serious, indeed even tragic, problem.  He liked working in his studio, right in front of a window giving him an inspiring view of his garden.  But, after days of work composing an incomparably ethereal and beautiful piece, a gust blew through the window, riffled the pages, and shook all the notes off the page and onto the table!  What a scattered mess!  And what a heartbreaking loss of all that work!

Of course, you could say that the composition with all its beauty was in some sense still there, right before him: all the required notes were indeed still there--every one.  But they were in a pile, no longer with any order from which he could reconstruct the composition just by picking the notes up and placing them back on the page.  So, it was literally all there--but none of what mattered was!

As my composer friend told me this story, it occurred to me that this was analogous to the 'pile' of DNA letters (As, Cs, Gs, and Ts) that is found by sequencing people with and without some trait, like a disease.  The letters differ greatly among individuals with the 'same' trait, because they don't have the trait for the same genetic reason.  And the sampled individuals' genomes vary in literally countless ways that have nothing to do with the disease.  Unlike the score, the 'letters' are still in their original order, but genes don't make a score as far as we are concerned because, unlike an orchestra, we don't know how to 'play' them!

In a sense, each person we see who is playing the same tune, so to speak, is doing so from a different score.  Some shared notes may be involved, but they are all jumbled up with shared, and not-shared, notes that have nothing to do with the tune.
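A toy sketch can make the point concrete.  Everything below is invented for illustration--the number of genomic sites, the alternative 'causal' variant sets, and the amount of incidental background variation are all made up--but it shows how individuals with the 'same' trait, each playing the tune from a different score, leave no single 'note' shared by everyone:

```python
import random

random.seed(1)

N_SITES = 200  # genomic sites we can 'see' (hypothetical)
# Three different genetic routes to the same trait (invented variant sets)
CAUSAL_SETS = [{3, 17}, {42}, {88, 101, 150}]

def make_case():
    """One affected individual: one causal route, plus incidental variants."""
    genome = set(random.sample(range(N_SITES), 20))  # background variation
    genome |= random.choice(CAUSAL_SETS)             # one of several 'scores'
    return genome

cases = [make_case() for _ in range(100)]

# GWAS-style tally: how often is each site variant among affected individuals?
counts = {s: sum(s in g for g in cases) for s in range(N_SITES)}
top = max(counts.values())
print(f"most common single variant appears in {top} of 100 cases")
# No site is shared by all cases: same 'tune', different scores.
```

Run it and the most common variant turns up in well under all 100 cases, even though every individual carries a genuine causal set--which is roughly the situation mapping studies face, minus the luxury of knowing the causal sets in advance.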

And yet we are widely promised, and widely dunned to pay for, the idea that by looking through the jumble of genetic 'notes' we can predict just about anything you can name about each individual's traits.

Indeed, unlike the composer's problem, there are all sorts of notes that are not even visible to us (they are called 'somatic mutations').  We yearn for a health-giving genomic 'tune', which is a very natural way to feel, but we are unable (or, at least, unwilling) to face the music of genomic reality.

And, of course, this mega-scale 'omics 'research' is all justified with great vigor by NIH, as if it is on the very verge of discovering fundamental findings that will lead to miraculous cures, indeed cures for 'All of us'.  At what point is it justified to refer to it as a kind of culpable fraud, a public con job?

By our bigger, bigger, bigger approach, we have entrenched 'composers' trying to read scores that are to a great extent unreadable in the way being attempted.  We are so intense at this, like rows of monks transcribing sacred manuscripts in a remote monastery, that we are committed to something that we basically have every legitimate good reason to know isn't the way things are.

Thursday, April 18, 2019

Brains, not brawn, for college!

It has long been a secret--not!--that American football is not compatible with having any brains left to do college work.  Now there is yet another story, in the New York Times, this time about the University of Colorado's brain-injured football players.  This sport is as savage as the Roman Coliseum 'sports' were two thousand years ago, and, yes, humans may be slow learners, but two thousand years is far too long to get the message.

We here at Penn State have the world's third largest football stadium, a grand stage on which to observe brain damage (not to mention various other breaks and bruises) of our 'students'.  Of course, some of these players actually are students in a serious rather than technical sense of the term.  How many leave here with fewer IQ points than when they came is not known.  At least some do major in actual college-level subjects, and many are very fine students (as I can say from direct personal experience).

But it is time to change, NFL or not.  Let those who want to gladiate for money in the NFL get their brain-damaging preparatory experience elsewhere.  We are supposed to be universities, places of classroom and lab learning, not brute brain bashing.  Football may have been safer decades ago before training methods improved to make these guys huge monsters in size and strength.  It's not their fault, of course, but ours--the adults at universities.  We brought this about, and there is one reason: we wanted money from attendees, alums, TV networks, and so on.  But  universities should not operate on the greed metric, but should stand for something higher, something better.

Indeed, we can have it both ways:  If we moved soccer 'football' to the stadiums, there would be a lot of grumbling from alumni, and maybe a few years of lower donations (mainly to the athletic department, one can surmise) and lower beer and hot dog sales, but eventually they'd all be back, cheering their lungs out for the Nittany Lion soccer team.  And they could have many more games--and for men and women--in a season.  It would eventually pay out.  Well, TV revenues might drop a lot for a while, but if other actual 'universities' followed suit, everything would recover, except the players.  They would not have to recover, since they'd have far fewer injuries (and protective headbands could be used to protect from damage during headers).  And they could take more, and more substantial, college courses while doing this.

It's worth thinking about, for those readers who still have their brains intact to do such a thing.

Wednesday, April 3, 2019

Copy cat!! How 'bourgeois' we've become!

In the sad way that science has become ever more bourgeois, Nature, itself now largely a checkout-counter mag, has a feature editorial on plagiarism (p 435, 28 March 2019).  The author, Debora Weber-Wulff, seems to specialize in sleuthing academic verbal cheaters, as if it is a new profession in itself.  She goes over the various software developed to detect plagiarism in professional and student papers, and evaluates them and the detection problem itself.  Commercial, profiteering, competing software--more than one--to detect academic cheaters!

The commentary mentions strategies that authors use to get multiple pubs on the same subject, and even seems to suggest that publishing an article from, or part of, your doctoral dissertation is a kind of plagiarism (who in recent memory has searched for or found, much less read, a doctoral thesis after the defense?).

And now, the most bourgeois thing of all, in my opinion, is that there are conferences on academic integrity, and even they have their own plagiarism, as the author relates!  And as part of the new class system even in esoteric academia, she notes that those that were detected were "demoted" to mere posters.  Surely I've misread this commentary.  Surely!

Somehow, this seems just another routine story about academic life.  Since it's basically gossipy, it takes a place of honor in Nature.  It's a kind of 'how-it's-done' review, as if cheating were as common as, say, making espresso.  Can you imagine that such a thing would largely have been unthought of not many decades ago?  It's true.  I was there (and I didn't plagiarize!).

There must always have been some plagiarism, since there are always rogues.  There have long been rewards to publish much the same paper in several different places, to reach different audiences in the days before web-searching.  But it was likely much easier to detect real plagiarism, which was doubtlessly far less prevalent in the old days.  At least that was my experience in my particular old days.  There was no need for competing companies to profiteer by selling plagiarism-detecting programs!  That almost institutionalizes cheating as a cat-and-mouse part of modern careerism, and a commentary like Weber-Wulff's that describes plagiaristic ploys almost helps one do it!

One, if not the main, reason for this situation is that far less was being published, less often, in fewer journals, and by far fewer players in a given academic arena.  The players were much better known to each other, far fewer published more than an article now and then, and most readers knew the relevant literature (and each other). The pace was less.  The Malthusian academic overpopulation didn't exist, so the competition was less (even if there were of course Big Egos competing).  Publishers were mainly non-profit, careers less intensely grant-dependent (and grants were easier to get).  The competition was more about ideas and actual substantive impact, and far less about academic score-counting (citations, publication counts, impact factors).  Less pressure to survive, and less pressure to cheat.  No first-semester graduate seminars on 'grantsmanship'.

....and no need for Nature to  have a feature commentary on how to catch academic cheaters.

Just after this was posted, a commentary appeared in Nature on a major academic fraud case.

If we really want to encourage honor and honesty in science, we need to look not at the science but at the science culture, the money-driven, competitive, frenetic arena--and Nature and its proliferating for-profit satellite publications are a culpable part of the problem.  We need to cool down the temperature of the research industry.  But to me that requires reducing the amount of selfish gain available--to investigators, journals, universities, equipment suppliers--the academic-industrial complex, to pick up on Dwight Eisenhower's long-ago warning about the military's similar excesses.

But where is the will to do this?

Monday, March 25, 2019

Human Genome Diversity: important to recognize, but not a new issue

A couple of decades ago, several of us, led by Luca Cavalli-Sforza, Marc Feldman, Ken Kidd, and others (including yours truly), got together to suggest a worldwide sampling of human genetic diversity that would specifically include the diverse 'anthropological' populations (traditional tribal groups who still existed but were being surrounded or incorporated--or worse--by the growing, large agricultural/industrial civilizations).  The idea, called the Human Genome Diversity Project (HGDP), was to collect DNA samples from hundreds of populations worldwide who would otherwise be un- or under-represented in the available data on human genomic variation.  The large agricultural/industrial populations are swamping (if not literally exterminating) these more ethnically aboriginal peoples.  Yet their pattern of genomic diversity is that from which the dense populations derived, and their variation may tell us about the origins and nature, and perhaps adaptive fitness, interactions, and so on, of the larger pan-human population into which 'we' grew.

The idea of a global HGDP was stifled by two things: first, attacks by political opportunists (and played up culpably by the media) who felt this global sampling was demeaning to the aboriginal populations, or that it was designed imperialistically to profit from those peoples by patenting findings; and second, the hungry economic maw of the human genome sequencing project then in progress and preemptive.

The upshot was that the HGDP was never funded.  Luca donated the set of global samples then available to him to the France-based CEPH, where they were given the HGDP name (that is still the case, though I think it wrong, because the data set was not a systematically global, design-first-then-sample project, so it rather co-opted the HGDP name).  Nonetheless, and to the good, the DNA, along with analytic results from those samples, is freely available to qualified researchers.

Another HGDP organizer, Ken Kidd at Yale (along with his wife, Judy, and other collaborators), has produced an excellent, publicly accessible website called ALFRED, which provides allele frequency data from populations around the world, plus documentation of the sampled populations and a variety of other user-friendly features.  Among other things, this is a fine tool for teaching global human diversity.

Now, a new paper by Sarah Tishkoff and others (Sirugo et al., "The Missing Diversity in Human Genetic Studies", in Cell 177, March 21, 2019) makes the case for sampling human genomic diversity, of a sort, pointing out various reasons why it would be good to address the current bias in genetics towards Europeans with global sampling of human variation.  Obviously, I agree with that, although many technical points could be raised about whether the inevitably smaller samples from scattered small populations could possibly be analyzed as effectively as the very large samples required to identify risk variants that are being over-peddled to us via the various 'omics and Big Data advocates.

What are the 'populations' and what does 'diversity' properly include?
The value, potential and humane importance of properly sampling humans beyond the major large populations in Europe and North America is obvious, but the new paper makes the case mainly for the larger 'mainline' populations other than Europeans.  Unfortunately, even though they are numerous in the census sense, they are heterogeneous, and it is unclear whom, exactly, current data represent, and how.  Can we just blithely say we need to include 'Africans' to address the representativeness problem?  Are, for example, African-Americans, not to mention 'Hispanic-Americans', all the same among possible samples?  And the same regarding Asians.  The current paper deals with these issues at least to some extent.  But then what about, say, New Zealand natives, or Cherokees, or other small populations, or which castes and from which parts of India must we collect data?  How exhaustively should we sample, and how can complex genomes effectively be parsed in this way (not to mention environments--a topic at least acknowledged by Sirugo et al.)?

Francis Collins' current 'All of Us' sloganeering is, to me, a culpable mis-representation to the public, a strategy to pry huge funds out of Congress in open-ended ways, too big to terminate, a welfare project for university research and its various supporting industries and interests.  The idea seems to be implicit, though unjustified, that any sort of open-ended Big Data 'omical project can be fair to small sub-groups (indeed, I would argue from various aspects of what we know already, it can't be for the major ethnic groups either).  So what does the promise that this is for 'All of us' actually mean, beyond a transparent strategy to pry open-ended funding from Congress?

Problems with the promise in the first place
Now while I agree that increasing sampling of human diversity is important for many reasons, not least being fairness, the paper promises that it will increase or improve 'precision' medicine.  To me, that is sloganeering, and avoids facing up to what Big Data 'omics have already shown us about causal complexity of the important non-Mendelian traits--complexity not only in the genomic but also environmental senses.

There are several obvious, but conveniently ignored, reasons for this.  First, 'genetic' causation involves more than inherited genomic variation.  Important variation arises during life, when cells divide.  This somatic variation is genetic, but not sampled in the usual genome-sequencing way.  Yet somatic variation clearly has important consequences because a cell doesn't 'know' whether its genome sequences were inherited from the individual's parents or arose during the individual's life.

Secondly, the whole enterprise assumes that induction can lead to deduction, that is, that what we've observed in the past leads us to predict the future.  It is not just inherited and somatic mutations whose future is literally unpredictable, but the same is true for lifestyle exposures.  Yet lifestyle exposures are vital components of complex disease risks.  They cannot be predicted, even in principle.  That means past exposures do not predict future ones (to environments or mutations).  This is not a dark secret, no matter how inconvenient for the 'omics prediction industries.  Unlike many areas in chemistry and physics, induction does not lead to deduction in life.

What we need is deep re-thinking of the problem of genomic effects on disease and other traits.  But that is not easy to arrange when careers and institutions depend on very large, very predictable, basically permanent funding for the persons involved.  To improve these aspects of our science, we need a different way to support it, new economics, not bigger data or more sequencing.  And a side benefit of such reform, were it ever possible, would be to free up investigators' minds from surviving to surmising--new ideas.

Our "I'm first!!" era in science
I do have to note that the tendency to ignore, or be ignorant of, prior work is manifest in this paper, which does not mention the HGDP.  We are in an "I'm first!" era in science.  I think Shakespeare understood the clearer truth: 'What's past is prologue'.

Good ideas need to be followed up, and properly sampling the world is one such good idea.  But this paper doesn't really deal with the small, traditional aboriginal populations.  In the case of the HGDP effort, there was simply a lack of support for sampling small, relatively isolated populations to build a picture of human genomic diversity out of the context from which it actually arose.  But it was an effort that explicitly recognized the issues, as they stood at that time.  So it is not excusable that the new paper fails to acknowledge the precedent advocating worldwide population sampling.  The senior author was very familiar with that effort.

A good idea, that should not seem novel, would be for scientists to read, and cite, their predecessors who had prior recognition of an issue or problem and who inevitably, even if indirectly, helped stimulate subsequent work.  But crediting others doesn't help one's career score-counting, and it takes at least a tad of effort to find out what an idea's ancestors may have thought, not to mention crediting them.  In this case, the senior author had every reason indeed to know directly about this history.  Indeed, she did her doctoral and post-doctoral work in places deeply involved in the HGDP!

Anyway, this griping aside, it is at least worth discussing in a serious way whether and how a global sampling of worldwide populations, beyond the main 'racial' groups, would be a good thing to do.  I think it would.  We are, after all, throwing away countless millions (or is it billions?) on proudly hypothesis-free Big Data 'omical enumerations, projects too big to stop (no matter how, by now, largely pointless). We now know the basic landscape, and it is not nearly as encouraging as its self-interested press regularly blares.  Its valuable results should stimulate hard, new thinking, but as long as business as usual pays and absorbs careers, who knows when that will happen?

Even if reform is difficult because of vested interests that we've allowed to develop, it is proper to acknowledge one's intellectual ancestors.

Tuesday, March 5, 2019

Tales for children (and lessons for scientists, of all ages)

How the Gene got its Family
Reported by Ken Weiss, Penn State University

NOTE:  The following “Just So” story was found in the posthumous papers of the late Rudyard Kipling, apparently intended to explain to his young readers how genomes got their repetitive structure and why that protects us.

Now, O Best Beloved, I’ll tell how Snake Gene came all spotted, safe from Mongoose Mutant’s fangs, like Leopard in the dappled shadows of the forest floor!  Once, ever so long, long ago, Gene lay alone in the deep dark dense nuclear forest. Fearing Mutant, Gene longed for a family to keep him safe in the wild woods. He looked at himself, so long, long, and lithe, and had an idea!  “What I need to do is duplicate!”

Bending and twisting, snaky Gene coiled so snugly that when he uncoiled he saw he had made another of his kind!  And this he did again, and again, ‘til he exclaimed “O My! We’re a family--the Genomes!”  The new family nestled warmly together, curling and coiling, curling in the deep dark forest!

And they took heart:  When Mongoose next came hunting, hungry, Beloved, he saw a wriggling ‘scape of dazzling spots, each a Gene, as elusive as the morning mist.  Mutant kept snapping, snapping, but his prey seemed always here and there: if he bit one, others took its place, and yet others.  ‘Aaah!’, cried Mutant, ‘I hunger for my prey, but my bite can’t bring it down.’

And Lo!, seeing this from his perch on a nearby tree, sage Owl passed the word of Genome’s victory all forest-wide, and each who heard it followed suit.  They duplicate and duplicate and protect themselves from O so per’lous Mutant’s fangs that seek their end!  One day, even People heard the news, and learned how Mutant met his match.

The Law of Life’s dense, deep-dark, dank dang’rous jungle is: Safety rests in duplication’s many paths to the same end.  We call that Ree-dundancy!

But then, you may wonder, “If they are so protected, why does any beast of the forest ever take ill?”  Ah, Beloved, it is good that you ask!  Each time Mutant snaps, he can bite one or even more of the Genes.  Such a small meal from so large a family means that usually nothing bad happens.  But sometimes, after many bites hurt ever so deeply, they may even kill!  Yes, a law of the tricky dark jungle is that each time, different Genes are bitten.  There isn’t just one way Mutant gains his meals!  The Genomes are a big family, and most bites don’t hurt much.  But, when Mutant is lucky, sometimes, so sadly, he bites enough to bring the victim down.  We can’t lay the heavy weight of guilt on one poor Gene and say he is the cause.  It is a failure of the family.  That is a law of the Jungle.

Tuesday, February 19, 2019

More thoughts on animals and even plants.....

In my previous post, I bemoaned the fate of laboratory animals, in my own experience mice, who suffer all sorts of manipulations, typically followed by execution, to please desires and satisfy objectives we humans stipulate (without the animals' consent).

Researchers are all fallible beings, but by and large we are as 'humane' as can be managed when we use animals to achieve a research goal.  These goals can be noble, such as the development of new medications.  Or they can seem important but in fact be rather trivial.  The level of import is a subjective judgment.

Animal research in universities, at least, must be approved by the institutions' research protections committees.  Even then, the members are only human, and one can question their judgment or criteria for what they approve.  But by and large the intent must be assumed to be worthy--assuming that any animal research is to be approved.

Opponents of any form of animal research have been quite vocal and, sometimes, have physically attacked laboratories where animal research is carried out.  It's an extreme attempt by opposing individuals to prevent research that the university allows, and this fact itself can cause the research committee to become more bureaucratic and perhaps to hunker down in self-protection.  But my prior post was about the overall ethics of animal research, not a plea for riot!

We could extend the ethical consideration beyond mammals to other species--even like, say, fruit flies--for which I think there are no such ethical-use review approvals needed.  I think you can just pull their legs off for sport.  And then there are plants.  Do we have any way to know they do not feel distress at what is done to them?  If they do, how can we learn about them without causing that?  Or is it that plants, openly arrayed to those who would eat their leaves and other parts, don't need to feel 'fear' or 'pain' and don't 'suffer' their predators?  It may not be so clear: some plants, at least, do send airborne molecular warning signals, but these need not be 'felt' in an integrated psychological sense.  Many plants require their fruits to be eaten so their seeds will disperse.  But who are we but a form of herbicide when we dispose of eaten apples' seeds or peach pits, or eat peas and corn, and so on?  If we clear a forest for our own uses of its wood, how many other living creatures besides the trees, are also done in by our actions?

To oppose all animal research is a difficult stance.  Many critics, including me, in fact eat meat on a regular basis.  We use insecticide.  We wear leather shoes and belts.  Some go fishing and hunting.  We may keep caged birds or guinea pigs.  I slap mosquitos to death that try to bite me.  The potential hypocrisy is obvious and has often been noted.  Sects, like the Jains, do their best not to kill animals.  But we all must eat and unless or until all food were to become entirely factory-synthesized, we generally must kill to eat. We are the product of evolution, and living individuals must sustain their lives on nutrients from other living (or, in some cases, formerly living) creatures.

In this sense it is hypocritical to castigate scientists for using other lives for their own gain, doing in one particular form (at least partially regulated) what we do daily in other forms. We know that in a profound sense, it's a cruel Darwinian world.

And war....?
As long as killing and maiming is our subject, and even if we exclude murder as unexcused killing, what about war, in which we glamorize those who intentionally and systematically kill as many other humans of a particular type ('enemy') as they can--and they us, each side feeling virtuous in the process?   We honor veterans as heroes, but they are in fact paid killers.  The fact that others may wish to kill us so as to gain what we have makes this a very difficult issue if we were to hold that killing is simply wrong.  Do Quakers have the answer relative to humans as Jains do for other animals?  Even so, what about creatures they eat?

This and my previous post are just musings that I am making about the issue, about where we can or should draw lines, definitions, limits, and so on, relative to who can kill or torment, and for what reason.  Life is, after all, finite for us all.  But the issue of ending life is perhaps one that will never die.

Monday, February 11, 2019

Mea culpa, mice!

Animal research is purportedly protected by university and broader review and approval criteria.  The relevant IACUC (Institutional Animal Care and Use Committee) approval committees meet regularly, go over research and/or grant applications their faculty wish to submit for funding to the feds, and so on, and approve them for humane handling and other criteria; approval is required before the work can go on. 

Approval?  Well, what I have seen over many years, is not that 'anything goes', but lots of bending to allow scientists to act like Frankensteins so long as the victims--and that's what they are--can't protest.  We define what is 'humane' for them.  That makes it 'ethical' research.  And, no surprise, it does not require that whatever be approved is something we would voluntarily undergo ourselves (for knowledge's sake!).  Well, maybe the Nazis did that....

I am writing this to express my own very deep regrets for my many years of research on mice.  Mice were almost my only victims (no guinea pigs, etc., baboons once or twice briefly), but they were many.  And they did not have to sign any informed consent!  We did to them what our purportedly protecting IACUC approved, and deemed 'humane' and 'justified' for the new knowledge we would gain.  Note the 'we' here.  This was entirely selfish.

A lab mouse

And, dare one ask: what fraction of the knowledge gained by animal research is really of any substantial value to humans (none, of course, to the animals' own species)?  By what 'right' do we enslave and torment (if not, often, horrify and torture) innocent animals, to build our careers?  Because it is score-counting for our own selfish interest that is part of the story, even if, of course, that story also includes the desire for genuine new knowledge, which we hope will lead to improvements of some kind (for us).

Well, they're 'just animals', we rationalize (I think, a rationale conveniently provided by Descartes, who judged all but we soul-bearers to be no more than autonomous machines).  IACUC restrictions mean that we don't torture them (well, we pretend so, at least, using our, not their, definition; and we certainly do do things to them we'd be jailed, or even executed, for doing to humans).  Animal research is, after all, for our own good (note, again, the 'our' in this excuse).

What is the reality of the IACUC protection system?  It may avert the worst tortures, but it rationalizes much that is so gruesome that if the public knew of it......  For example, we can 'humanely' make transgenic mice who suffer--even from conception--some serious physical, health or behavioral defect, by investing them with, say, some known serious gene defect so that, hopefully, 'we' can figure out how to fix it (in humans, not in the victims from whom we learned the tactic).  I have been present at a purported meeting about research ethics, where a prominent university official bemoaned the care (already minimal, really, from the victims' viewpoint) that his IACUC committee exercised.  Why?  Because, he said, their Committee sometimes turned down faculty proposals that might, if funded, have brought in significant overhead funds to the University.  I mean, really!

A clear statement of the problem, for those humane enough to listen
There is a poignant, indeed deeply disquieting article in The Atlantic ("Scientists Are Totally Rethinking Animal Cognition", Ross Andersen, March, 2019).  Animals of all sorts, including even invertebrates, have self-awareness of one kind or another, essentially a sense of 'me'.  They presumably generally don't know about the inevitable finiteness of life, including their lives, so are saved from at least some of the abstract fears and fearful knowledge we humans may uniquely understand.  But they are not just things, cellular machines, and they do have fears, experience pain, and so on.  And yet.....

And yet, we noble university professors, not to mention those working in industry and agriculture, do to animals what we would not do to ourselves or each other (well, about what we do to each other one can say much that is just as sad as my reflections on animal research here).

So, to the mice (and the insects I've knowingly destroyed, and the countless animals I've eaten), I offer my mea culpa!  Sometimes, one must kill.  Life is an evolutionary phenomenon whose actors must, because of their evolutionary history, mostly dine on objects of similar makeup.  Meat is one form, but do we too easily dismiss plants?  There are recent reports showing that plants are far more social, sentient, and aware than is comfortable for us to contemplate, as we munch away on our daily salad or fruit and veggies.

Darwin saw in his way what is to blame.  Life evolved as a self-renewing chemical phenomenon, and species evolved to dine on each other because, in a sense, we're all made of the same stuff.  It is a cruel truth of living existence, and that is one reason Darwin's work was controversial and is still resisted by those who wish for comforting theological accounts of reality and a joyful forever-after.  But we know--even they know--some of the harsh realities of life here and now.

We researchers always have some sort of justifying rationale for what we do to animals.  We have the approval of a committee, after all!  We're doing it for the good of humanity, to understand life, or for some other self-advancing careerist reason, including bringing in money to the university.  But, the bottom line is: we do, in fact, do it.

Fare thee well...
So, you countless mice, whose lives I terminated so I could get ahead, even if I did so as 'humanely' as possible after selfishly using you as I did, here's my apology.  I can't take back what I did to you, all within our acceptable research standards (note, mice, I said 'our', not 'your', acceptable standards).  Even where I could find, or construct, some rationale for my work, such as health-related discovery, basic knowledge, and so on, the payoff for us is small, and the cost to you, mice, was total and involuntary.  And one can debate how valuable the knowledge really was in the grand scheme of things.  How many of you must suffer before even one serious benefit, even one just for humankind, is gained?  I, and my lab group over the years, didn't really 'need' to do it, except mainly for our careers.  Was it enough that I did, after all, give you life, some sort of short life at least?

I so wish there were some way I could make it up to you.

Friday, January 11, 2019

Is there a gene for celibacy?

We see study after study of genes 'for' behavioral traits considered to be driven by selection: intelligence, athletic ability, criminality, recklessness, drug abuse, aggression, even being a caring grandmother. The list goes on and on and on. Simplistically stated, the idea is that behavioral traits have a genetic basis, usually a simple one, and that during human evolution those genetically bestowed with the 'best' version of a trait outcompeted those unlucky enough to be less intelligent, less of a risk taker, a more fearful warrior, and so on.  That is pure Darwinian determinism: the bearers of the 'optimal' version of a trait systematically had more offspring, and thus the gene(s) for that version were selected for and increased in frequency.

This is why, for example, the basis of homosexuality is so curious to evolutionary biologists.  How could a behavioral trait that means its bearer does not have offspring ever have evolved?  How could a gene persist if it codes for something that interferes with reproduction so the gene isn't passed on? The most common explanation for this is that during the long millennia of human evolution, homosexuals mated and reproduced anyway, because homosexuality was culturally proscribed in the small groups in which humans lived.  Maybe that's so, but it's certainly no longer true in many cultures where being gay doesn't have to be hidden anymore, so should we now expect the frequency of homosexuality to fall? Another post hoc account is that homosexuals helped care for their relatives' children, enhancing their extended kinship and hence consistent with natural selection--a technically plausible but basically forced speculative explanation by those who want Darwinian determinism to be as universal as gravity.

In any case, the "cause" of homosexuality is certainly an interesting evolutionary puzzle, if it's assumed to be genetic.  It may well not be, of course -- perhaps sexual orientation is influenced by environmental exposures in utero or in infancy.  But, let's go with the genetic assumption.  Let's even assume that looking for genes for IQ, aggression and so many other behaviors is reasonable, because all these traits, as all traits, must be here because of natural selection.

In that case, it's very curious that there are so many traits that defy Darwinian explanation whose genetic basis isn't being explored. Where are the searches for genes for, say, voluntary celibacy, or use of birth control and non-celibates choosing not to have children, or for suicide, or child-beating, or infanticide, or abortion, or young men volunteering to be soldiers?  These are all traits that make no evolutionary sense and shouldn't have evolved, if such traits have a genetic basis.  We should be just as perplexed by the evolutionary history of these behaviors as we are by homosexuality. Why aren't we looking for genetic explanations?

I think it's a reflection of cultural values. It's rather akin to environmental epidemiologists never looking for the harmful effects of cauliflower, broccoli, or Brussels sprouts -- instead it's the things we like, our indulgences: alcohol, fatty foods, sugar, a focus that reflects our Puritan scorn for pleasure.  I think we notice and think about what seem to us to be unacceptable aberrations, and give much less thought to what seems normal.  It's ordinary to us that nuns and priests choose not to reproduce, even though that is completely non-Darwinian, or that suicide bombers are generally of reproductive age and are foregoing having children.  Abortion may not be personally acceptable to you, but it's a societal norm.  Indeed, artificial birth control itself is highly problematic in a Darwinian world -- even worse for Darwinian theory, it sends women into the work force, away from their children.

Apparently we don't generally notice that these 'normal' behaviors are non-Darwinian -- our primary drive, whether conscious or not, is supposed to be inherent: to perpetuate our genes.  If behaviors are genetically driven, selected for, then it's not just homosexuality -- which, until recently, was not socially acceptable -- that doesn't make evolutionary sense; it's any behavior whose primary ramification is not to send our genes into the next generation.

So, don't we have the same issue explaining the evolutionary origin of all these behaviors as we do explaining homosexuality?  Perhaps.  But let's consider an explanation that's not generally proffered: perhaps this is all just statistical 'noise' around a weak, rather than precise or strongly deterministic, natural selection -- Nature is just sloppier than the strictly Darwinian view would expect.  No species' success requires that every individual reproduce, so long as enough do. Culture is a powerful force, and once we respond to cultural dictates and norms, the simple evolutionary explanation of selection for optimal (in fitness terms) traits is much less convincing.  And, perhaps we didn't evolve to reproduce, just to have orgasms.
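The 'noise around weak selection' point can be made concrete with a textbook population-genetics calculation: mutation-selection balance. The formulas are standard, but the numbers below are purely illustrative, chosen by me and not taken from any study -- they simply show that an allele carrying a real fitness cost is never purged entirely, because recurrent mutation keeps resupplying it.

```python
def eq_freq_dominant(u, s):
    """Approximate equilibrium frequency of a deleterious allele that is
    expressed in all carriers (dominant case): q ~ u / s, where u is the
    per-generation mutation rate and s the fitness cost."""
    return u / s

def eq_freq_recessive(u, s):
    """Recessive case: the allele hides in heterozygotes, so it settles
    at a much higher equilibrium frequency, q ~ sqrt(u / s)."""
    return (u / s) ** 0.5

# Illustrative (made-up) numbers: mutation rate u = 1e-5 per generation,
# and a 5% fitness cost for those expressing the trait.
u, s = 1e-5, 0.05
print(eq_freq_dominant(u, s))   # about 0.0002: rare, but never zero
print(eq_freq_recessive(u, s))  # about 0.014: persists despite the cost
```

Under this classic model, selection alone never drives the frequency to zero; with weak selection and small populations, drift adds still more slack. That is the quantitative sense in which Nature can be 'sloppier' than strict Darwinian determinism expects.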

And, is there a gene for being dogmatic?

Thursday, January 10, 2019

Too many post-docs! (I wonder why.....)

A recent report in Nature discusses a glut of post-docs, that is, PhDs who went on to post-doctoral research positions with the idea that these were preparation for a permanent faculty job, but who then can't find the pot of gold at the end of the promised rainbow.

This has been a growing problem, and it has been neither a surprise nor a closely guarded secret.  Nor is the reason any sort of secret. It is due to an academic system that rewards more: more grants, more publications, more citations, more graduate students, more lab activity.....  It is the score-countable itemization of faculty worth that has taken over our universities, gradually, almost without our being aware of it, during the past few decades.  Universities and their faculty gain career rewards by satisfying the More-manic criterion that we have allowed to crowd out actual substance from our university culture.

Score-counting has perhaps always been with us to some extent, but nowhere near what it has become.  In part this was the evolution of convenient computer-countability, as well as the push to oust the Old Boy system and open up university careers, promotions, tenure, and so on, to make them fairer. This was done, and it was good.  However, it was also obvious to administrators (including promotion committees, chairs, deans, and so on) that their careers could be advanced if their bullying of those beneath them on the status totem pole could be seen as 'objective'.  This, of course, opened up careers for the for-profit publishing of annual citation-count volumes and the like, turning academic life into a score-based kind of game (this started 30 or more years ago, if slowly).

Now we're in Objectivity's safe, politically correct full swing, and it's everywhere, polluting the properly more contemplative and abstract nature of serious, high-quality academic careers.  It is, of course, very, very, very good for administrators (as can be seen by their proliferation over recent decades), suppliers of gear, and so on.

Colleges and universities are now deeply into a Malthusian pattern of growth which is reaching, or has reached, the inevitable saturation point. It was foreseeable.  Paired with the inability to enforce a mandatory retirement age (also nicely serving the alpha baboons in the system), and universities' need for Teaching Assistants (i.e., a bevy of minimally paid graduate students) so the Professors don't have to sully themselves in classrooms, we seem to be exceeding our academic ecology's capacity, at least if we think in terms of what is fair.  We have seen it coming, of course, because we, the faculty, have made it so.  Maybe, perhaps hopefully, inevitable retirements and attrition will help, but by how much?

Yes, it is we, the academic System, who have only ourselves to blame.....but why do that!?  Reform could harm our cushy careers, after all.  Let graduate students beware.....

Tuesday, January 8, 2019

Susumu Ohno: Accounting for Why Gene Counting Doesn't Account for Things

Gifts, gifts, gifts!  Every day in the media, often promoted by universities, journals, and NIH, we seem to be offered the imminent gift of immortality, if we but pony up for more and more 'omical' science (well, if you have to pay for it, even via taxes, I guess it's not exactly a gift!).

The promise that for nearly two decades has been the main course on the 'omicists' menus, is that by counting--adding up the contributions of a list of enumerated genome locations--all our woes will be gone!  The idea is simple: genes are fundamental to life because they code for proteins and stuff like that, which are the basis of life.  This, in a nutshell, is the justification for much of the Big Data endeavors being sponsored by the NIH these days, long driven for historical reasons by an obsession with genes.

But, at least in part, this obsession has revealed to us what we should--and could--already have known.  Genes are clearly fundamental to life, coding for proteins and other functions.  But the reason we're seeing increasing weariness with GWAS and other fiscally high-cost but scientifically low-yield approaches is not new.  It's not secret.  And it is not a surprise.  All we needed to do was ask: where do genes come from?  It is not a new question, the genome has been intensively studied, and indeed the answer has been known for nearly 50 (that is, fifty) years.

Susumu Ohno (1928-2000); image from Google Images
In 1970 Susumu Ohno published his deeply insightful Evolution by Gene Duplication.  This book should be a must-read for all life-science graduate students.  Instead, it has been casually forgotten--one might say conveniently forgotten: whether from culpable ignorance of the history of our field or to suit our self-serving careerism, we have not deemed it important to read anything published more than a few years ago.

So, what did Ohno say?

Where do 'genes' come from?
In his time, we didn't have much in the way of DNA sequencing.  We knew that genes coded for proteins, and were located on chromosomes.  We had learned a lot about how the code works, much of this from experiments, such as with bacteria.  We knew proteins were fundamental building blocks of life, and were strings of amino acids.  Watson and Crick and others had shown how DNA carries the relevant code, and so on.

But that did not answer the question: where do all these genes come from?  I'm not a historian, and cannot claim to know the many threads leading to the answer.  But in essence, the point Ohno is credited with noting, and whose importance he stressed, is that new genes largely arise from duplication events affecting existing genes.  He had noticed amino acid similarities among some known proteins (the hemoglobins); this and other evidence suggested that duplication of chromosomes or individual genes was a mechanism, if not the mechanism, for the origin of new genes.  Expecting random mutations, in stretches of DNA not already being used to code for RNA or protein, to generate from scratch a working code for a new and useful protein was too far-fetched.  Indeed, nowadays one can be skeptical when an 'orphan' gene is claimed--that is, a gene with no relatives in any gene family elsewhere in the genome.

Instead, if occasionally a stretch of DNA, or even a whole chromosome, duplicates, the individual inheriting that expanded genome gains two potentially important attributes.  First, s/he has a redundant code: mutational errors that render one gene's protein non-functional can be compensated for by the duplicate gene, which codes for the same protein.

Secondly, duplication is the basis of a much deeper, indeed fundamental, aspect of life, going farther even than genes: redundancy.

Evolution depends on redundancy: genomes are family affairs
By having redundant genes, the initial result of duplication, an individual is more likely to survive mutations.  And over the long haul, with lots of duplication, the additional copies of a needed gene can mutate and over time take on new function, without threat to the individual, who will still have one or more healthy versions of the gene.
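The protective value of a duplicate can be put in back-of-the-envelope terms. As a toy sketch (my numbers, not Ohno's), suppose each copy of a gene is independently disabled by mutation with some small probability; then losing all function requires every copy to fail:

```python
def p_total_loss(p_fail, copies):
    # Probability that every copy of a gene is non-functional,
    # assuming (simplistically) independent failures per copy.
    return p_fail ** copies

p = 0.01  # illustrative per-copy probability of a disabling mutation
print(p_total_loss(p, 1))  # single copy: about 0.01
print(p_total_loss(p, 2))  # after duplication: about 0.0001, a 100-fold drop
```

Real genomes violate the independence assumption in many ways, but the direction of the effect is the point: redundancy buys robustness, and the 'spare' copy is then free to accumulate changes and drift toward new functions.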

Indeed, perhaps one of the most under-appreciated yet fundamental axioms of life is that it is built on redundancy: not only do genomes consist almost exclusively of members of gene families whose individual genes arose by duplication events, but our tissues themselves are constructed from repeated fundamental units: multicellular organization generally; bilateral or radial symmetry; blood cells, intestinal villi, lobes and alveoli in lungs, nephrons in kidneys, and so on.

I think it is not easy to imagine a different evolutionary way for our very simple biochemical beginnings to generate the kinds of complex organisms that populate the Earth.  And this has deep consequences for those for whom dreams of omical sugar plums dance in their heads.

Why the 'omics' promises were always doomed to fail, or at least to pale
From the cell theory to Ohno to the very data that our 'omical dreams have yielded in such abundance, we have found that life relies on the protection of redundancy.  From genes on up, if one thing goes wrong, there's an ally to pick up the slack.  Redundancy means back-ups and alternatives.  It also produces individual uniqueness, which is likewise fundamental to the dynamics of evolution.

Together, these facts (and they're facts, not just wild speculations) show that, and why, we can't expect to predict everything from individual genes or even gene scores.  There are many roads to the Promised Land.

It is important, I think, and entirely fair to assert that nothing I've said here has ever been secret, known only to a small Masonic Lodge of biologists exchanging secret handshakes.  Indeed, these basic facts have been at the heart of our science since the advent of the cell theory nearly two centuries ago.  Genomics has largely just added detail to what was already known as a generalization about life.

The implicit lesson, from Ohno not Homer, is to Beware of Geneticists Bearing Gifts.

(updated to correct a spelling error in Prof. Ohno's name)