Thursday, May 31, 2012

Farming the Madding Crowd -- 23andMe announces its first patent

The direct-to-consumer DNA genotyping company, 23andMe, announced on Monday that it was soon to receive its first patent, one that it had applied for in 2010 (that patent application is here).  It has to do with genetic polymorphisms associated with Parkinson's disease (PD), and, while we're not patent lawyers by any means, the application seems to cover genetic variants and testing.  This announcement is a surprise to many, including many of its customers (indeed, the consent form says nothing about possible patents), in part because the company has come out against gene patenting in the past.

Ken and I have had issues all along with the scientific merit of what these companies are selling (see this post and this one, e.g.).  And we're far from the only ones.  Much of what they report back to consumers consists of things the consumers already know (e.g., hair color, freckles), and the estimates of most disease risk, which they also report, just aren't ready for prime time for all sorts of reasons that we and others have commented amply upon.  So, while it may be fun for people to know their genotypes, and sometimes their ancestry estimates, the information is generally not very useful clinically.  And apparently it's not just for public edification or recreation: it is run by Google money, after all.  We believe it's unethical for any of these companies to be selling disease risk, no matter how many caveats they include, and that is before considering the profit motive in patenting what they have found, apparently, on the backs of their customers.

How does 23andMe work?  You spit in a tube and send it to the company along with your payment, and they genotype you at selected sites and then give you full access to that data, with some explanation.  If you want to take part in the research the company does then you can choose to answer surveys about lifestyle or other risk factors, which is your consent for them to throw that information and your genetic data into their research pool. The company says that most of their customers have chosen to do this, but at the same time, it seems that none of their customers were told that gene patents might result.

Gene patenting has been controversial since the first gene patent was applied for, in part because it is difficult to know how to apply the Constitution's generic wording on patents to the molecular age. Basically, patents should cover inventions or discoveries, and while scientists may in fact discover genes, the law now says that they can only be patented if the scientist has also discovered a function for the gene.  That's a law that had to have been written by not-disinterested geneticists!  Legally speaking, the polymorphisms discovered by 23andMe for PD are patentable, but is that really innovation?

But legal or not, many, including us, are still not in favor of gene patenting.  To us, patenting should not be about getting to own properties of nature, but about protecting value-added innovations.  A medical test that any beginner who works in a genetics lab for a week could replicate shouldn't be covered.  If the patents were designed to protect against commercialization of naturally occurring genotypes, as NIH once threatened to do, that would be a worthy public-domain protection--against predatory commercial practices--but that's not the idea here.

The gene patenting issue has been endlessly discussed and the discussion is easily accessed online, so we won't repeat the arguments here except to say that our personal concerns are that patenting makes public and often publicly funded information private, for private personal gain.  In addition, it can tie the hands of clinicians who want to be able to offer genetic testing to their patients, it can prohibit others from doing research on a gene of interest and so on, and while we may be a minority we're not in favor of the get rich quick motivation for doing research.  Some "defensive patenting" has been going on (patenting to protect against genetic profiteering), but in our opinion that should not be necessary -- naturally occurring genes should not be patentable, period.  

As if the issue of this 23andMe patent weren't problematic enough, it seems that the company's motivation for focusing on Parkinson's disease is a very personal one.  Apparently the CEO's husband, one of the founders of Google, has been found to carry a PD risk allele, and the couple has donated millions of dollars to the Michael J Fox Foundation (which will be one of the beneficiaries of this patent, along with Scripps Research Institute) for research.  Well, fine, but they should have been up front about their personal interest in mining the 23andMe consumer database for PD variants, and asked their customers whether they were willing to donate spit, and pay, for what turned out to be a PD research 'crowd-funding' project (see our post last week on this idea as related to fairer science funding).  Instead, they appear to have done it surreptitiously, which is part of why many 23andMe customers now feel misled, if not betrayed (see comments on Twitter for examples--here's a randomly chosen set of them, e.g.).  And some of the scientists affiliated with this company have had apparent prior interests in ethics, to boot.

Wednesday, May 30, 2012

Magical science: now you see it, now you don't. Part II: How real is 'risk'?

Why is it that after countless studies, we don't know whether to believe the latest hot-off-the-press pronouncement of risk factors, genetic or environmental, for disease, or of assertions about the fitness history of a given genotype?  Or in social and behavioral science....almost anything!  Why are scientific studies, if they really are science, so often not replicated when the core tenet of science is that causes determine outcomes?  Why should we have to have so many studies of the same thing, even decade after decade?  Why do we still fund more studies of the same thing?  Is there ever a time when we say Enough!?

That time hasn't come yet, and partly that's because we professors have to have new studies to keep our grants and our jobs, and we do what we know how to do.  But there are deeper reasons, without obvious answers, and they're important to you if you care about what science is, or what it should be--or what you should be paying for.

Last Thursday, we discussed some aspects of the problem that arises when the causes we suspect seem to work only by affecting the probability of an outcome we're interested in.  The cause may truly be deterministic, but we just don't understand it well enough, so we must view its effect in probability terms.  That means we have to study a sample of individuals exposed to the risk factor we're interested in, in the same way you have to flip a coin many times to see if it's really fair--if its probability of coming up Heads is really 50%.  You can't just look at the coin or flip it once.
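
To make the coin analogy concrete, here is a minimal sketch (ours, not from any study) of why one flip, or even a few dozen, settles very little: the uncertainty around an estimated probability shrinks only with the square root of the sample size.  The counts below are invented for illustration, and the interval is the usual normal approximation.

```python
# A minimal sketch (illustrative numbers only): how wide the uncertainty
# around an estimated probability is for a given sample size.

import math

def approx_95ci(successes, n):
    """Rough 95% confidence interval for a proportion (normal approximation)."""
    p_hat = successes / n
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# 60% Heads observed at three very different sample sizes
for n, heads in [(10, 6), (100, 60), (10_000, 6_000)]:
    lo, hi = approx_95ci(heads, n)
    print(f"n={n:>6}: estimate 0.60, 95% CI roughly ({lo:.2f}, {hi:.2f})")
```

With 10 flips, 6 Heads is entirely compatible with a fair coin; with 100 the call is marginal; only with thousands of flips is the estimate pinned down tightly.  The same arithmetic is why weak disease risks demand very large samples.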

Nowadays, reports are often of meta-analyses, in which, because no single study is believed to be definitive (i.e., reliable), we pool many studies and analyze the lot--the net result of all of them--to achieve sample sizes adequate to see what risk really is associated with the risk factor.  It should be a warning in itself that the samples of many studies (funded because they claimed, and reviewers expected, them to be adequate to the task) are now viewed as hopelessly inadequate.  Maybe it's a warning that the supposed causes are weak to begin with--too weak for this kind of approach to be very meaningful?
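
For readers who haven't seen how the pooling is actually done, here is a minimal sketch of the standard inverse-variance approach, using invented effect estimates rather than results from any real meta-analysis: each study's estimate is weighted by the inverse of its variance, so imprecise studies count for less.

```python
# A minimal sketch (hypothetical inputs, not any published meta-analysis):
# inverse-variance pooling of effect estimates from several studies.

import math

# (log relative risk, standard error) from several hypothetical studies
studies = [(0.30, 0.20), (0.05, 0.15), (-0.10, 0.25), (0.20, 0.18)]

weights = [1.0 / se**2 for _, se in studies]          # inverse-variance weights
pooled  = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled log RR = {pooled:.3f} "
      f"(95% CI {pooled - 1.96*pooled_se:.3f} to {pooled + 1.96*pooled_se:.3f})")
print(f"pooled RR ≈ {math.exp(pooled):.2f}")
```

Notice that the pooled estimate in this toy example still straddles 'no effect' despite combining four studies: pooling buys precision, but it cannot manufacture a strong signal, and it inherits every bias of the studies that go into it (points 1, 9, 10, and 19 below).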

Why, among countless examples, after so many studies, don't we know whether HDL cholesterol does or doesn't protect from heart disease, or antioxidants from cancer, or whether coffee or obesity is a risk factor, or how to teach language or math, or how to prevent misbehavior by students, or whether criminality is genetic (or is a 'disease')?  There are countless such examples in the daily news, and you are paying for this: study after study without conclusive results, every day!

There are several reasons.  These are serious issues, worthy of the attention of anyone who actually cares about understanding truth and the world we live in, and its evolution.  The results are important to our society as well as to our basic understanding of the world.

So, then, why are so many results not replicable?
Here are at least some reasons to consider:
1.  If no one study is trustworthy, why on earth would pooling them be?
2.  We are not defining the trait of interest accurately
3.  We are always changing the definition of the trait or how we determine its presence or absence
4.  We are not measuring the trait accurately
5.  We have not identified the relevant causal risk factors
6.  We have not measured the relevant risk factors accurately
7.  The definition of the risk factors is changing or vague
8.  The individual studies are each accurate, and our understanding of risk is in error
9.  Some of the studies being pooled are inaccurate
10.  The first study or two that indicated risk were biased (see our post on replication), and should be removed from the meta-analysis....and if that were done, the supposed risk factor would show little or no effect.
11.  The risk factor's effects depend on its context:  it is not a risk all by itself
12.  The risk factor just doesn't have an inherent causal effect:  our model or ideas are simply wrong
13.  The context is always changing, so the idea of a stable risk is simply wrong
14.  We have not really collected samples that are adequate for assessing risk (they may not be representative of the population at-risk)
15.  We have not collected large enough samples to see the risk through the fog of measurement error and multiple contributing factors
16. Our statistical models of probability and sampling are not adequate or are inappropriate for the task at hand (usually, the models are far too simplified, so that at best they can be expected only to generate an approximate assessment of things)
17.  Our statistical criteria ('significance level') are subjective but we are trying to understand an objective world
18.  Some causes that are really operating are beyond what we know or are able to measure or observe (e.g., past natural selection events)
19. Negative results are rarely published, and so meta-analyses cannot include them, so a true measure of risk is unattainable
20. The outcome has numerous possible causes; each study picks up a unique, real one (familial genetic diseases, say), but it won't be replicable in another population (or family) with a different cause that is just as real
21. Population-based studies can never in fact be replicated because you can never study the same population--same people, same age, same environmental exposures--at the same time, again
22. The effect of risk factors can be so small--but real--that it is swamped by confounding, unmeasured variables. 
Overall, when this is the situation, the risk factor is simply not a major one!
This situation--and our list is surely not exhaustive--is typical and pervasive in observational rather than experimental science.  (Lists just as long exist to explain why, for the same kinds of problems, some areas even of experimental science don't do much better!)

A recent Times commentary and post of ours discussed these issues.  The commentary says that we need to make social science more like experimental physical science, with better replications and study designs and the like.  But that may be wrong advice.  It may simply lead us down an endless, expensive path that fails to recognize the problem.  Social sciences already consider themselves to be real science.  And, presenting peer-reviewed work that way, they've got their fingers as deeply entrenched into the funding pot as, say, genetics does.

Whether coffee is a risk factor for disease, or certain behaviors or diseases are genetically determined, or why some trait has evolved in our ancestry...these are all legitimate questions whose non-answers show that there may be something deeply wrong with our current methods and ideas about science.  We regularly comment on the problem.  But there seems to be little real recognition that there's an issue at all, given the forces that pressure scientists to continue business as usual--which means that we continue to do more, and more expensive, studies of the same things.

One highly defensible solution would be to cut support for such non-productive science until people figure out a better way to view the world, and/or to require scientists to be accountable for their results.  No more "I write the significance section of my grants with my fingers crossed behind my back," because I know that I'm not telling the truth (and the reviewers, who do the same themselves, know that I'm doing it).

As it is, resources go to more and more, and more expensive, studies of the same things that yield basically little; students flock to large university departments that teach them how to do it, too; journals and funders build their reputations on reporting the results; and policy makers follow the advice.  Every day, on almost any topic, you will see in the news "studies show that....."

This is no secret: we all know the areas in which the advice goes nowhere, or nearly so.  But politically, we haven't got the nerve to make such cuts, and in a sense we would be lost if we had nobody assessing these issues.  What to do is not an easy call, even if there were the societal will to act.

Tuesday, May 29, 2012

More nature, less supernature: Results of a new biological anthropology curriculum

Wind God (Tokyo National Museum)
This semester, Spring 2012, I made three significant changes to my Introduction to Biological Anthropology curriculum (aka APG 201: Human Origins).

Number one. I rearranged everything relative to all the mainstream textbooks. That includes teaching evolution (common ancestry, deep time), then inheritance, variation, and mutation, then genetic drift and gene flow and THEN natural selection.

Number two.  I actively teach creation alongside evolution and am ashamed that it took me this long to do it (but won't beat myself up too much about it considering all the political pressure to keep it out of science classes, which I'm sympathetic to but no longer agree with entirely. For more read here.)

Number three. Students voluntarily spat for 23andMe and were able to see thousands of their SNPs that were genotyped for free and also heard excellent guest speakers on genetics and genetic testing. (All thanks to a teaching grant I received from my university.)

Students who take this course are mostly freshmen and sophomores (18-20 years old) who are undecided majors or non-majors. It serves as a "general education" credit in the natural sciences that fulfills graduation requirements for nearly all majors on campus. The course is also required for anthropology majors and minors, present and future.

I pre- and post-tested them on science and evolution issues--using the same questions* I asked of students in the Fall 2011 course where I did not implement this curriculum. I also surveyed the students at the end of the semester about the 23andMe experience (and I may report the results of that survey in a later post).

Based on those pre- (n = 82) and post- (n = 85) tests, here's a bit of what I learned about what they learned during Spring 2012, either inside or alongside my course, during those three or so months of their lives. Percentages are presented as pre...post %.**
Wind God, Ehecatl, an avatar of Quetzalcoatl (en.wikipedia.org)

Result #1: Greater understanding of, and confidence about, evolution and science

Showed improvement.
From already correct majority to even larger correct majority. Listed from least to most improvement.
  1. There is lots of evidence against evolution.  78...81% disagree
  2. If two light-skinned people moved to Hawaii and got very tan their children would be born more tan than they (the parents) were originally.   85...88%  disagree  
  3. Humans and chimpanzees evolved separately from an ape-like ancestor. 70...73%  agree  
  4. Dinosaurs and humans lived at the same time in the past.    83...87% disagree 
  5. A species evolves because individuals want to.   83...88%  disagree  
  6. Variation among individuals within a species is important for evolution.  84...92% agree    
  7. You cannot prove evolution happened.  70...79%  disagree   
  8. A scientific theory is a set of hypotheses that have been tested repeatedly and have not been rejected.   78...88% agree    
  9. The theory of evolution correctly explains the development of life. 77...88%  agree  
  10. Evolution is always an improvement.   54...72%  disagree  
  11. Evolution cannot work because one mutation cannot cause a complex structure (e.g., the eye). 62...88%   disagree   
From incorrect majority to correct majority.
  1. New traits within a population appear at random. 59% disagree...55% agree 
  2. A species evolves because individuals need to.   73% agree... 49% disagree 
  3. “Survival of the fittest” means basically that “only the strong survive.”   65% agree...72% disagree
For comparison...Fall 2011 students (remember, they did not have the three curricular changes; n= 70) improved on only 10 of the 14 questions above. Here are the four exceptions where they did not improve (and instead got worse!):
  • Dinosaurs and humans lived at the same time in the past.
  • If two light-skinned people moved to Hawaii and got very tan their children would be born more tan than they (the parents) were originally.
  • A species evolves because individuals need to.
  • “Survival of the fittest” means basically that “only the strong survive”.

Showed improvement, but still WRONG. 
Fewer students chose the wrong answer and by extension more students chose the correct answer. None of these questions showed improvement by Fall 2011 students.
  1. The environment determines which new traits will appear in a population. 80...76% agree (should  disagree) 
  2. All individuals in a population of ducks living on a pond have webbed feet. The pond completely dries up. Over time, the descendants of the ducks will evolve so that they do not have webbed feet.  69...59% agree (should disagree)
  3. If webbed feet are being selected for, all individuals in the next generation will have more webbing on their feet than individuals in their parents’ generation.  59...51% agree (should disagree)
Got worse.
Fewer chose the correct answer. All of these also got worse for 2011 students.
  1. Small population size has little or no effect on the evolution of a species. 79...78% disagree 
  2. If two distinct populations within the same species begin to breed together this will influence the evolution of that species.  88...83% agree 
  3. A scientific theory that explains a natural phenomenon can be defined as a “best guess.” 45% disagree...53% agree 
Huh?
88% agree with:  A scientific theory is a set of hypotheses that have been tested repeatedly and have not been rejected.   
Okay, great, but 53% also agree with: A scientific theory that explains a natural phenomenon can be defined as a “best guess.” 


Stayed the same.  Also stayed the same in 2011.
Two of the most important factors that determine the direction of evolution are survival and reproduction. 94...94% agree

On confidence...
I have a clear understanding of the meaning of scientific study. 76...89% agree
I have a clear understanding of the term “fitness” when it is used in a biological sense.  58...88% agree 

Comments
In terms of some fundamentals, undergrads seem to have a decent handle on things coming into my class, but there's still a lot of room for improvement. It's frustrating that some of these improve only to about 73% (like question 8). I mean, roughly 95% get question 8 right on the exam, but with different wording. To get to the bottom of it, I think I'll talk through #8 explicitly with students. I do think that in many instances semantics are muddying the results. Unfortunately I can't change the wording if I want to keep comparing results to prior semesters (and to the results published in the source article*).


Nevertheless, it's clear that 2012's students outperformed 2011's by improving on more questions (both the ones the majority already knew and the ones that the majority still does not know but is getting there!). Next time I'm going to need to better cover "theory," population size effects, gene flow effects, and general population variation over the generations. Or I can continue, like I do, to go further in depth with population genetics in the upper level courses, since I can't expect students to learn everything in one semester. I didn't after all. Still haven't!

Based on only two semesters of data, it's hard to link my curricular changes to the improvement between Fall 2011 and Spring 2012 that is demonstrated here, but at least it's clear that my changes haven't harmed student performance. Why they improved on some of the questions but not others isn't intuitively obvious. But because they did, I'll continue on with this overall approach next semester, trying of course to lead them to improve further. I can't help but assume that changing things up like I did (by teaching natural selection last, and not all tangled up with the basic evolutionary principles at the start of the semester) helped students see evolution and selection beyond just "survival of the fittest" and beyond ideas of progress, perfection, and agency.

Wind God, Vayu (en.wikipedia.org)

Result #2: No need of that hypothesis

The last question on the pre-and post-tests is this one: Humanity came to be through evolution, which was controlled by God.

Let's compare the responses between 2011 (remember, no rearranged presentation, no teaching creation, and no 23andMe) and 2012 with their pre...post %.

Fall 2011
Agree: 26...25%
Disagree: 57...58%
No opinion/undecided: 17...17%

Spring 2012 
Agree: 26...15%
Disagree: 57...72%
No opinion/undecided: 17...13%

Comments
That's an increase of 15 percentage points (from 57% to 72%) in students in my course who disagree, which is to say that they have no need of the god hypothesis for human evolution and no need for human exceptionalism.***

That degree of change is roughly on par with the improvement seen here (from above):
  • The theory of evolution correctly explains the development of life. 77...88%  agree  
  • Evolution is always an improvement.   54...72%  disagree  
  • Evolution cannot work because one mutation cannot cause a complex structure (e.g., the eye). 62...88%   disagree   
  • A species evolves because individuals need to.   73% agree... 49% disagree 
  • “Survival of the fittest” means basically that “only the strong survive.”   65% agree...72% disagree
  • I have a clear understanding of the meaning of scientific study. 76...89% agree
  • I have a clear understanding of the term “fitness” when it is used in a biological sense.  58...88% agree 

I'll have to do the statistics if I want to test whether the students who disagreed with the god question were also more likely to answer the above questions correctly, or confidently (for the last two).  If that turns out to be the case, then there is support for claiming that a deeper understanding or acceptance of more nuanced, complex, and agency-free evolutionary principles and processes (or confidence about them) leads to letting go of the supernatural.  But we can't say that yet without those predictive tests.
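
If I do run those tests, the simplest version is a contingency-table association between disagreeing with the god statement and getting a given item right.  Here's a minimal sketch with made-up counts (not my actual data), just to show the shape of the test:

```python
# A minimal sketch with hypothetical counts (not my actual data): is
# disagreement with the "controlled by God" statement associated with
# answering a given evolution question correctly?

from scipy.stats import chi2_contingency

#            correct  incorrect
table = [[40, 15],   # disagreed with the god statement
         [12, 13]]   # agreed or undecided

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
```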

Even if the stats enable me to link those improvements with letting go of supernature, I'm not so sure it will be clear that this change was caused by my course and new curriculum.  For me, my loss of the supernatural was so gradual that I can't credit one course!  I just kind of sloughed off the supernatural over time.  So it's quite possible that these results don't reflect a change in many of those students so much as their newfound willingness to admit that a change is occurring or has occurred.  Still, having just tried out this new curriculum, it's hard not to assume causation from correlation.  (But like I said, I have to do the stats if I'm to see about that.)

I'm not going to lie... I'm so proud of my students for how they answered this one question.  And I'm not going to pretend this isn't dangerous information--the kind that our friends and heroes who battle creationist politicians might prefer not to share with those creationist politicians!

The fear of losing the god hypothesis is a major reason why creationists don't want evolution to be the only explanation taught in K-12 biology classrooms (or don't want it taught at all). And it's also why the human animal is so often left out of K-12 biology curriculum. And my experience, here, suggests that that fear is justified! But what's ironic about the creationist plea for getting creation into science classes to better "teach the controversy" and to cover "competing theories" (the only two being creation and evolution) is how my experience suggests that teaching creation alongside evolution is a recipe for losing supporters of the supernatural. (see here)

It's important to notice that the question I ask students is not about whether they believe in God, or are religious or spiritual, or pray or meditate, or whether they believe in a higher power or a greater force or anything like that! It's just asking whether human nature requires supernature. And they're saying no.

Why does that matter? Why am I so proud of them? Here's just one reason: Because if you accept nature, then you somehow understand the delicate role of each species, including humankind, in the ecosystem. And maybe humans who see this more clearly will take more compassionate, human-friendly, and Earth-friendly actions at the market, in the voting booth, and in their communities.
Juno Asking Aeolus to Release the Winds, F. Boucher, Paris, 1769
(Uppity note: The original title was Pull My Finger and Boucher's later variations on this theme incorporated Prometheus.)

Notes
* Assessment pre- and post-test questions are taken from this article: Cunningham DL, Wescott DJ. Still more "fancy" and "myth" than "fact" in students' conception of evolution. Evolution: Education and Outreach 2009;2:505-517.

**Students are asked to report whether they "strongly agree," "somewhat agree," "somewhat disagree," "strongly disagree," or "no opinion/undecided/never heard of it." I don't report everything here. And I'm lumping "strongly" and "somewhat" categories into just plain "agree" or "disagree."

***Unfortunately another way to read this statement and yet still disagree with it is from the hard core creation perspective: No evolution at all! However, I have few of those students in my course and expect to have fewer of them, not more of them, by the end of the course. That's not because they change their minds about hardcore creation, necessarily, but because the few who'd take my course in the first place would probably drop it before it's over.  I will add a question that is better worded to the next round of tests in the Fall semester and I'll keep this one as reference. Something like "Evolution does not explain human existence" would help me gauge how many might disagree with the above question from the minority  (no evolution at all) perspective.

Monday, May 28, 2012

Big Science, Stifling Innovation, Mavericks and what to do (if anything) about it

We want to reply to some of the discussion and dissension related to our recent post about the conservative nature of science and the extent to which it stifles real creativity.  Here are some thoughts:

There are various issues afoot here.

In our post, we echoed Josh Nicholson's view that science doesn't encourage innovation, and that the big boy network is still alive and well in science.  We think both are generally true, but this doesn't mean that all innovators are on to something big, nor that the big boy network doesn't ever encourage innovation.  Some mavericks really are off-target (we could all name our 'favorite' examples) and funding should not be wasted on them.  The association of Josh Nicholson with Peter Duesberg apparently has played a role in some of the responses to his BioEssays paper.  That specific case somewhat clouds the broader issues, but it was the broader issues we wanted to discuss, not any particular ax Nicholson might have to grind.

One must acknowledge that the major players in science are, by and large, legitimate and do contribute to furthering knowledge, even if they build empires along the way (or that's how they build them).  There is nonetheless an entrenchment by which these investigators have projects or labs that are very difficult to dislodge.  Partly that's because they have a good track record.  But only partly.  They also typically reach diminishing returns, and the politics are such that the resources cannot easily be moved in favor of newer, fresher ideas.

It's also true that even incremental science is a positive contribution, and indeed most science is almost by necessity incremental: you can't stimulate real innovation without a lot of incremental frustration or data to work from.  Scientific revolutions can occur only when there's something to revolt against!

If we suppose that, contrary to fact, science were hyper-democratized, such as by anonymizing grant proposers' identities (see this article in last week's Science), the system would be gamed by everyone.  Ways would be found to keep the Big guys well funded.  Hierarchies would quickly be re-established, even if the new system initially stifled them.  The same people would, by and large, end up on top.  Partly--but only partly--that's because they are good people.  Partly, as in our democracy itself, it's because they have contacts, means, leverage, and the like.

And it's very likely that if the system were hyper-democratized, a huge amount of funding would be distributed among those who would have trouble being funded otherwise.  Since most of us are average, or even mediocre, most of the time, this would--if it were really implemented for a major fraction of total resources--be a large expenditure, with contributions likely watered down even further than is the case now.  But that kind of broad democratization is inconceivably unlikely.  More likely we'd have a tokenism pot, with the rest going to the current system.

Historically, it seems likely that most really creative mavericks, the ones whom our post was in a sense defending, often or perhaps typically don't play in the stodgy university system anyway.  They drop out and work elsewhere, such as in the start-up business world.  To the extent that's true, a redistribution system would mainly fund the hum-drum.  Of course, maybe the budgets should just be cut, encouraging more of science to be done privately.  Of course, as we say often on MT, there are some fields (we won't name them again here!) whose real scientific contributions are very much less than other fields, because, for instance, they can't really predict anything accurately, one of the core attributes of science.

One can argue about where public policy should invest--how much safe but incremental vs risky and likely to fail but with occasional Bingos!

It is clear from the history of science that the Big guys largely control the agenda and perhaps sometimes for the good, but often for the perpetuation of their views (and resources).  This is natural for them to do, but we know very well that our 'Expert' system for policy is in general not a very good one, and we keep paying for go-nowhere research.

Perhaps the anthropological reality is that no feasible change can make much difference.  Utopian dreams are rarely realized.  Maybe serendipitous creativity just has to happen when it happens.  Maybe funding policy can't make it more likely.  Such revolutionary insights are unusual (and become romanticized) because they're so rare and difficult.

This kind of conservative hierarchy and tribal behavior is really just part of human culture more broadly.  Still, we feel that the system has to be pushed to correct its waste and conservatism so it doesn't become even more entrenched.  Clearly new investigators are going to be in a pinch--in part because the current system almost forces us to create the proverbial 'excess labor pool': the system makes us need grad students and post-docs to do our work for us (so we can use our time to write grants), whether or not there will be jobs for them.

Again, there is no easy way to discriminate between cranks, mavericks who are just plain wrong, those of us who romanticize our own deep innovative creativity or play the Genius role, and mediocre talent that really has no legitimate claim to limited resources.  The real geniuses are few and far between.

A partial fix might be for academic jobs to come with research resources as long as research was part of the conditions for tenure or employment.  Much would be wasted on wheel-spinning or trivia, and careerism, of course.  But it could at least potentiate the Bell Labs phenomenon, increasing the chance of discovery.

We cannot expect the well-established scientists generally to agree with these ideas unless they are very senior (as we are) and no longer worried about funding....or are just willing to try to tweak the system to make it better.  When it's just sour grapes, perhaps it is less persuasive.  But sometimes sour grapes are justified, and we should listen!

Friday, May 25, 2012

You scientist, we want you to get ahead....but not too FAR ahead!

A paper in the June issue of BioEssays is titled "Collegiality and careerism trump critical questions and bold new ideas" and, no, we didn't write it.  The subtitle is "A student's perspective and solution"; the author is Joshua Nicholson, a grad student at Virginia Tech.  It's a mark of the depth of the problem that it is recognized and addressed by a student, in a very savvy and understanding way.  But of course it's students who will most feel its impact as they begin their careers, when money is tight and the old boy network is alive and well.  The situation is so critical today that even a student can sense it without the embittering experience of years of trying to build a post-training career.
As students we are taught principles and ideals in classrooms, yet as we advance in age, experience, and career, we learn that such lessons may be more rhetoric than reality.
Nicholson is not the first to notice that the current system of funding and rewards encourages more of the same, not innovation.  Scientists, he notes, are discouraged from having radical, or even new, ideas in everything from grant applications to the mere expression of ideas.  Indeed, numerous examples exist of brilliant scientists who have said they couldn't have done their work within the system: Darwin and Einstein among them, and--whatever you think of Gaia--its conceptor, and innovative inventor, James Lovelock, has said the same (he did so recently on BBC Radio 4's The Life Scientific). Other creative people, in the arts, have felt the same way about universities (e.g., Wordsworth the poet, Goya the painter).

The US National Institutes of Health and National Science Foundation both pay lip service to innovation, yes, but still within the same system of application and decision-making.  Nicholson says that the NIH and NSF in fact admit that these efforts are not encouraging innovation (as those of us who have served on such panels and never seen an original project actually funded can attest--usually the reviewers pat the proposer on the head patronizingly and say: make it safe and resubmit).  He blames this, correctly, on the review structure: peer review.  Yes, experts in a field are required to evaluate new ideas, but it is they who are often most unwilling to accept them.

To be fair, usually this is not explicit and reviewers may usually not even be aware of their inertial resistance to novelty.  But Nicholson explains that:
(i) they helped establish the prevailing views and thus believe them to be most correct, (ii) they have made a career doing this and thus have the most to lose, and (iii) because of #1 and #2 they may display hubris [2–4, 9, 10]. If, historically, most new ideas in science have been considered heretical by experts [11], does it make sense to rely upon experts to judge and fund new ideas?
He concludes that a student looking to build a career therefore must choose between getting funding by following the crowd and doing more of the same, or being innovative but without any money... that is, driving a taxi.

He goes on to say that the system not only encourages safe science, but cronyism as well.  We would add that this includes hierarchies, which foster obedience by many to the will of the few.  Because the researcher's affiliation, collaborators, co-authors, publication record and so on are a part of the whole grant package, it's impossible for reviewers to not use this information in their judgments and review a grant impartially. As Nicholson puts it, the whole emphasis is on a scientist "being liked" by the scientific community.  Negative findings are rarely published, which in effect means that scientists can't disagree with each other in print, and peer review ensures that scientists stay within the fold.  Nicholson believes this has all created a culture of mediocrity in science.  We can say from experience that submitting grants anonymously is unlikely to work because, like 'anonymous' manuscripts sent out for review, one can almost always guess the authors.

There is of course a problem.  Most off-center science is going to go nowhere.  Real innovation is a small fraction of the ideas that claim it (sincerely or as puffery).  Accepted wisdom has been hard-won, and that's a legitimate reason to resist.  So not all those whose ideas are off base are brilliant or right.  How one tells in advance is the real problem: there's no good way to do it, and that provides a ready-made excuse for generic resistance.

Nicholson's solution to restructuring "the current scientific funding system, to emphasize new and radical work"?  He proposes that the grant review system change to include non-scientists who don't understand the field, as well as scientists who do. "Indeed," he says, "the participation of uninformed individuals in a group has recently been shown to foster democratic consensus and limit special interests." And, "crowd funding" has been successful in a lot of non-scientific arenas, he notes, and could conceivably be used to fund grants as well.

It will come as no surprise to regular MT readers to know that we endorse Joshua Nicholson's indictment of the current system.  Peer review seems necessary, brilliant and democratic, and it was established largely and explicitly to break up and prevent Old Boy networking, and make public research funding more 'public'.  Indeed, money no longer goes quite so exclusively to the Elite universities.  But politicians promise things to get a crowd of funders, who want the rewards.  And even that, like any system, can be gamed, and a pessimist (or realist?) is likely to argue that after you've relied on it for a while, it produces just the kind of stale, non-democratic, old boy network that Nicholson describes--similar hierarchies even with many of the same hierarchs resurfacing.

It's unlikely that the grant system will undergo radical transformation any time soon, because too many people would have to be dislodged--though perhaps when the old goats retire and get out of the way, things will loosen up.  But there are rumblings in the world of scientific publishing, and demands for change, and this makes us hopeful that perhaps these growing challenges to the system can have widespread effects in favor of innovation and a more egalitarian sharing of the wealth (in the form of academic positions, grant money, publications, and so on).  The demands are coming from scientists boycotting Elsevier because it profits handsomely from scientists' free labor; from scientists and others petitioning for open and free access to papers reporting the results of studies paid for by the taxpayer; and from physicists circumventing the old-boy peer review process by publishing online, or by first passing their manuscripts through open-ended peer review online.  And, yes, there are open access journals (e.g., PLoS), though generally at high cost.

The system probably can't change too radically so long as science costs money and research money doesn't come along with salary as part of an academic job, as it probably should since research is required for the job!  Instead, the opposite is true: universities hunger for you to come do your science there largely because they expect you to bring in money (they live on the overhead)!  And humans are tribal animals so the fact that who you know is such an intrinsic part of the scientific establishment is not a surprise--but that aspect of the system can and should be changed.  The reasons that science has grown into the lumbering, conservative, money-driven, careerist megalith that it is can be debated, as can the degree to which it is delivering the goods, even if imperfectly.  But it is possible that we're beginning to see glimmers of hope for change.  The best science is at least sometimes unconventional, and there must be rewards for that as well.

Thursday, May 24, 2012

Magical science: now you see it, now you don't. Part I: What we mean by 'risk'

The life, social, and evolutionary sciences have a problem.  We posted about the issue of their non-replicability last Friday, but that is only part of the problem.  They also suffer from non-predictability (see a recent Times commentary), yet both replicability and predictability are key elements of science as we know it.

It is difficult to make rigorous assertions that have the kind of predictive power we have come (rightly or wrongly) to expect of science, on the typical if often unstated assumption that our world is law-like, the way the physical and chemical universe seems to be.

A clear manifestation of the problem is the way that findings in epidemiology and genetic risk say yes factor X is risky, then a few years later no, then yes again, and so on.   Can't we ever know if factor X is a cause of some outcome Y?  In particular, is X a risk factor?  X could be a reason for natural selection, a component of some disease, and so on.

Often, we think of this as a probability.  As we've posted before, that means that exposure to the factor yields some probability that the outcome will be observed.  If you carry a particular variant of a gene, there is usually a 50% probability that a given one of your children (or parents, or sibs) will also carry it.

So, in a strange turn of phrase, coffee or a given HDL cholesterol level is said to be a risk factor for heart disease, or a given genetic variant is a risk factor for cancer.  And we try to estimate the level of that risk--the probability that if that factor is present, you will manifest the outcome.  Among various possible risk factors, the modern concept of science has it that you are at some net or overall risk of the outcome, like having a heart attack, depending on your exposure to those risk factors.

In evolutionary terms, carrying a particular genetic variant can confer some probability (or some similar measure) of reproducing, or of surviving to a particular age.  Among various possible genetic risk factors, what you carry puts you at some net risk of such an outcome, which is your evolutionary fitness in the face of natural selection, for example.

If we assume that enumerated factors of this sort really are causes, and are responsible for a trait, then your exposure level can be specified.  The causes might truly be deterministic, in the way gravity determines the rate at which an object will fall--here or anywhere in the universe--but our knowledge is incomplete enough that we can only express their effects in terms of probability.

Still, we assume that probabilistic causation is real.  When things are the result of probabilities, we can know the causes but can't predict the specific outcome of any given instance.  This is the sense in which we know a fair coin will come up Heads 50% of the time, but can't predict the result of a given flip.  Actually, and we've posted about this before but the issue of probability is so central to much of science that we keep repeating it, the coin may be perfectly deterministic but we just don't know enough, so that for all practical purposes the result is probabilistic.

In such cases, which are clearly at the foundation of evolutionary inference and of genetic and other biomedical problems, we must estimate the risk associated with a given cause by choosing a sample from all those at risk, and seeing what happened to them.  Then, assuming we know the causal structure, we can do what we must be able to do if this is actual science: predict the outcome.  This must be so if the world is causal, even if our predictions are expressed in terms of probabilities: given your genotype, you have some probability xx of getting disease yy.
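
To make this concrete, here is a minimal sketch of what 'estimating the risk from a sample' amounts to in the simplest case, with invented numbers: compare the fraction of exposed people who got the outcome to the fraction of unexposed people who did, and ask how precisely the data pin that ratio down.

```python
# A minimal sketch (made-up numbers): estimating the risk associated with an
# exposure from a sample, which is all that probabilistic causation lets us do.

import math

exposed_cases, exposed_total     = 30, 1_000   # hypothetical sample
unexposed_cases, unexposed_total = 20, 1_000

risk_exposed   = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
rr = risk_exposed / risk_unexposed

# rough 95% CI for the relative risk (log scale, normal approximation)
se_log_rr = math.sqrt(1/exposed_cases - 1/exposed_total
                      + 1/unexposed_cases - 1/unexposed_total)
lo, hi = (math.exp(math.log(rr) - 1.96*se_log_rr),
          math.exp(math.log(rr) + 1.96*se_log_rr))

print(f"relative risk = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Even with 1,000 people per group in this toy example, a 50% elevation in risk cannot be distinguished from no effect at all--one reason the flip-flopping findings discussed above are so common.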

So, with our huge and munificently funded science establishment, why is it that day after day the media tout the latest Dramatic Finding....that is just as noisily touted the next day when the previous assertion is overturned?

Why is it that we don't know if coffee is a risk factor for disease?  Or isn't?  Or isn't for the moment until some new study comes along?  Or maybe until some environmental factor changes, like the type of filter paper McDonald's uses in its coffee maker, say--but how would we ever know whether that explains the flip-flopping findings?

Why indeed do we have to continue doing studies of the same purported risk factor to see if they are really, truly risks?  These are fundamental questions not about the individual studies, but about the current practice of science itself.

If we look at the reasons, which is tomorrow's post, we'll see how shaky our knowledge really is in these areas, and we can ask whether it is even 'science'.

Wednesday, May 23, 2012

Just-So stories, revisited (again!): not even lactase?


A lot of us love to spice up our publications, and our contributions to the media circus, with nice, taut, complete, simple adaptive evolution stories.  Selection 'for' this trait happened because [*whatever nice story we make up*].

The media love melodrama, which sells, and our stories make us seem like intrepid insightful adventurers--those 'selective sweeps' that we declare, by which the force of some irresistible allele (variant at a single gene) devastates all its competitors, sometimes over the whole world, suggest such epic tales.

The fact that we know from decades of experience that the stories we have made up are unlikely to be true doesn't slow down our juggernaut of such stories.  But one of the things we here on MT are here to do is ask how true they actually are--or, if we're not sure, why we assert them with such confidence.  And here's one of the current, newly classic, human Just-So stories:

Lactase persistence and milk-drinking adults
Mammals when weaned generally lose the ability to digest milk.  Adults usually can't break down the milk sugar, lactose, and the result (so the story goes) is grumbly gut and, one might put it, farty digestion, leading to early death or (understandably!) failure to find mates, and so reduced fitness.  The reason is that while infants produce the enzyme lactase, which in their gut breaks down lactose, the lactase gene is switched off by adulthood: if we don't drink milk we don't need it, and there'd be no mechanism to keep the lactase gene expressed.

The gene is called LCT and its regulation, due to DNA regulatory sequence located nearby in the genome, is what determines whether gut cells do or don't use the gene.  How, as a mammalian infant ages, this gene is gradually shut down is not well understood.  And while this is true as a rule for humans, there are exceptions.  The first that was carefully studied was the high fraction of Europeans, and especially northern Europeans such as Scandinavians, who can tolerate milk as adults (most people elsewhere, except in parts of Africa--see below--lose this ability, so the story goes).

The frequency of the Lactase Persistence allele (genetic variant), call it LP, is highest in northern Europe and diminishes towards the Middle East.  The genetic and archeological evidence have been interpreted as showing that the LP variant first arose by mutation in the Middle East--say, somewhere around Anatolia (Turkey)--and spread gradually, along with dairying, into Europe.  Because of a supposed selective advantage, there was a sweep of LP as this occurred, so that by the time one gets to Scandinavia the LP frequency is high because, the story went, people had been dairying there for thousands of years.  A roughly 1% natural selection advantage is assumed for this gradual, continent-wide increase in LP's frequency.  One can make up stories about how adults would live longer or rut more successfully if they could slug down their domestic ruminants' fluids, but those are difficult to prove (or refute).
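
Just to give a feel for what a 1% advantage implies, here is a back-of-the-envelope sketch using a textbook deterministic one-locus model.  The starting frequency, the dominance of the LP allele, the generation time, and the 75% threshold are our own illustrative assumptions, not estimates from the studies discussed.

```python
# Back-of-the-envelope sketch (illustrative assumptions only): how fast a
# dominant allele with a ~1% selective advantage rises in frequency under a
# simple deterministic one-locus model.

def next_freq(p, s=0.01):
    """One generation of selection; LP carriers (AA, Aa) have fitness 1+s."""
    q = 1.0 - p
    w_bar = (p*p + 2*p*q) * (1 + s) + q*q * 1.0   # mean fitness
    # frequency of the LP allele after selection
    return ((p*p + p*q) * (1 + s)) / w_bar

p = 0.01            # assumed starting frequency of the LP variant
generations = 0
while p < 0.75 and generations < 20_000:
    p = next_freq(p)
    generations += 1

print(f"~{generations} generations to reach 75% "
      f"(~{generations*25:,} years at 25 years/generation)")
```

With these made-up inputs the model needs on the order of a thousand generations--over twenty thousand years at 25 years per generation--to carry LP to high frequency, which gives a sense of why the timing of dairying matters so much to the story.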

Anyway, this was the adaptive scenario.  When I talked to various clinicians in Finland, they generally dismissed the idea that inability to digest milk was harmful in any serious or life-threatening way to those Finns who by bad luck did not carry the LP variant.  And when I ask Asian students in my classes if they can drink milk (they're not supposed to be able to according to the adaptive story) they almost all say 'yes' and are curious why I should ask such a question!  So I had my doubts, and for this and other reasons as well.

Even granting the evidence that LP had arisen in Anatolia and risen in frequency, the idea of a steady 1% advantage all across Europe over thousands of years seemed at least speculative--that is, even if there was a selective advantage for some reason, the story was rather pat, like so many selection stories.  A demonstrated fitness-related advantage of adult milk-drinking was asserted but simply wasn't shown.  Perhaps the high frequency of the LP trait was only an indirect result of something nearby on the same chromosome being selected for, or there was some reason other than nutrition why the LP variant would have been favored.  Indeed, there was even evidence from residues in Anatolian pottery that milk sugar was cooked into digestibility long ago--so a genetic adaptation to it wasn't even necessary.  Regardless of these details, the story stuck.  It made a good tale for news media and textbooks alike.

More evidence?
But now a new paper, by several Finnish authors, has appeared in the Spring 2012 issue of Perspectives in Biology and Medicine, and it suggests something different: that milk usage hasn't been present in northern Europe for nearly long enough to explain the LP frequency.  Their idea is that the variant spread into northern Europe along with people who brought the high LP frequency with them: it had already risen to high frequency before they arrived.

It is much easier to envision a combination of luck and selection in some relatively localized population in Turkey or thereabouts, gradually adopting and adapting to adult dairying, which then spread as those people expanded their territory, in part because of their advanced farming technology, than it is to envision a continent-wide, steady selective story about milk-drinking.

The point here is that, if the new paper is right, even lactase persistence, which has become the classic exemplar of recent human adaptation (displacing even sickle cell and malaria in the public media), is simply too simplified.  Can't we even get that one right?

Probable truth
One has to say that the idea of lactase persistence being evolutionarily associated with dairying culture probably means something.  East African populations that have a history of dairying also show lactase persistence, conferring at least some higher level of lactose digestibility into adulthood--but this is due to different mutations affecting the gene's regulation, not a historical connection to the Anatolian story, nor is it shared with surrounding non-dairying populations.  A similar selective advantage has been estimated for the African populations.

So either the lactose digestion story is basically correct, even if some details are still debatable, or this is only a coincidence.  Or here, too, some other effect of this chromosome region was involved.  Two such kinds of coincidence seem rather unlikely.

Thus, while some aspects of this story may be correct, the point is that even in the classic, supposedly clear-cut example of recent human evolution, one that has been intensely studied in modern genomic detail, maybe we still haven't got it right in important ways.

The message is not to dismiss the LP story, but we should temper the confidence with which simple selection stories are offered--in general.

Why be a spoil-sport?
Unfortunately, there's no reward for humility in our society.  We personally think that claims should be tempered, so that we as a scientific community, and the community that pays for science, can more accurately and quickly tell what is worth following up with further funds, and why.  We also feel that understanding the universe we live in, and the system of life of which we are a part, is the job of science--and the job of universities is to educate.  To us, that means providing students with the best understanding we currently have about the nature of life, history, arts, and technologies, not passing on a superficial lore.

This is, we know, rather naive!  We also know that neither we, nor a puny blog like this, is going to have any serious impact.  But if we can keep these issues before the eyes of at least some conscientious people, even if we are regularly arguing for less rather than more claims, and regularly pointing to what seem to us to be deeper questions worthy of investigation, then we have hopefully done at least some service.

Tuesday, May 22, 2012

Slot machines and thoughts: neural determinism?

Coin flips are probabilistic for all practical purposes (unless you learn how to "predetermine" the outcome, here).  By 'probabilistic' we mean that the outcome of any given flip can only be stated as a probability, such as a 50% chance of Heads: we can't say that a Heads will or won't occur.  This is for all practical purposes, since if we knew the exact values of all the variables involved, standard physics could predict the outcome with complete certainty.  Machines have been built to show this, as we've posted about before (e.g., here).

Slot machines are (purportedly) random dial-spinners that stop in ultimately random ways (that are adjusted for particular pre-set overall payoff levels, but not individual spins).  In this sense, the slot machine is nearly a random device, but even the computer-based random number generator of modern slot machines is not 100% random and, in a sense, every spin could be predicted at least in principle.

So, as far as anybody can tell in practice, each flip, or each jerk of the one-armed bandit, is random.  We still can say much about the results: we can't predict a given coin flip or slot-pull, but we can predict the overall net result of many pulls, to within some limits based on statistical probability theory--though never perfectly.
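
Here is a minimal sketch of what those 'limits' look like, with made-up slot-machine parameters (ours, not any real machine's): the expected take grows in proportion to the number of plays, but the spread around it grows only with the square root of that number, so the aggregate becomes ever more predictable even though each individual pull never does.

```python
# A minimal sketch (toy parameters, not a real machine): individual plays are
# unpredictable, but the spread of the total over many plays is predictable
# from probability theory and shrinks relative to the total as plays pile up.

import math

p_win, payout, bet = 0.45, 2.0, 1.0           # hypothetical slot parameters
mean_per_play = p_win * payout - bet           # expected player loss per play
sd_per_play   = payout * math.sqrt(p_win * (1 - p_win))

for n_plays in (100, 10_000, 1_000_000):
    expected_take = -mean_per_play * n_plays          # house's expected take
    sd_of_take    = sd_per_play * math.sqrt(n_plays)  # spread grows only as sqrt(n)
    print(f"{n_plays:>9} plays: expected take {expected_take:>10.0f} "
          f"± {1.96*sd_of_take:.0f}")
```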

On the other hand, a casino is a collection of numerous devices (roulette wheels, poker tables, slot machines, and so on).  Each is of the same probabilistic kind.  Nobody would claim that the take of a casino was not related to these devices, not even those who believe that each one is inherently probabilistic.  To think that would be to argue that something other than physical factors made up a casino.

But the take of a casino on any given day cannot be predicted from an enumeration of its devices! The daily take is the result of how much use was made of each device, of the decision-making behavior of the players, of the particular players that were there that day, of how much they were willing to lose, and so on.  The daily take is an 'emergent' property of the assembled items.  Interestingly, nonetheless, the pattern of daily takes can be predicted at least within some limits.  This is the mysterious connection between full predictability and emergence, and it is a central fact of the life sciences.

Genes exist and they do things.  On average, we can assess what a gene does.  Clearly genes underlie what a person is and does.  But each gene's net impact on some trait depends not just on itself, but on the rest of the genes in the same person's genome, and countless other factors.  A particular individual's particular action is simply not predictable with precision from its genome (or, for that matter, its genome and measured environmental factors).  There are simply too many factors and we can't assess their individual action in individual cases, except within what are usually very broad limits.

Brain games
A common current application of the issues here is to be found in neurosciences.  There is a firm if not fervid belief that if we enumerate everything about genes and brains we'll be able to show that, yes, you're just a chemical automaton.  Forget about the delusion of free will!

Location of the amygdala; Wikipedia
A story last week in the NY Times largely asserts that behavior is going to be predictable from the 'amygdala', a section of the brain.  There is also a story suggesting that psychopaths can be identified early in life.  And there are frequent papers about let's call it 'econogenomics', claiming they will save the day by showing how our genomes determine our economic behavior.

Day after day, in the media and in the science journals themselves, the promise is made of ultimate (often, of imminent) predictability even of complex emergent phenomena, from examination of their parts.  If we just have enough sequencers, fMRI machines, and other kinds of technology, everything will work out.  Not to worry!

So the Human Connectome Project, which exploits the 'omics idea that if we mindlessly enumerate every single little thing we can understand every single big thing, is funded and off and enumerating every connection between every neuron in the brain (starting, we think, with 'the' mouse, whatever that means).  Mindlessly is the right word, because the investigators of such things often proudly proclaim that they are not testing any hypothesis about Nature: this is pure Baconian empiricism, something we've discussed in earlier posts--collect all the facts and the theory will emerge automatically.  There seems to be a feeling of imminent triumph that, like the priests of old, we The Scientists will be able to see inside your very soul, to see what you are really like, no matter how much you may delude yourself that you are a free agent.

Clear-cut cases of prediction in complex systems from specific identified elements do exist, due to individually very strong factors.  They are usually rare, but they addict us to the idea that all cases--all behaviors, or even all thoughts--will be predictable by enumerating all causal factors and their effects.  But this is, at best, not practicable.  Is it an ultimate illusion?

So why the persistent belief to the contrary?
Could it be that really, truly, and ultimately when so many countless probabilistic factors interact to generate a net result, our ability to predict them other than in a few special cases is inherently limited?  Could it be that our claims to do otherwise are, in fact, no more than a current version of Delphic mumbo-jumbo that has always existed in society?  Whether or not that is true, science, like religion, is not likely to agree to that.

Why is there such reluctance to simply accept limits to our knowledge, or perhaps even to our ability to know things by applying current methods?  Is it just arrogance, careerism, profit-chasing?  Is it ignorance of the landscape?

One thing is that of course we cannot apply scientific methods that we haven't yet discovered.  There are programs and even organizations, like the Santa Fe Institute of which Ken is an external faculty member, that are dedicated to working out an understanding of complexity.  We think it's fair to say that they haven't solved the problem!

At present, a nay-sayer may be viewed as someone who is anti-science, or perhaps even mystical.  After all, either things are material or they aren't!  If they are material, should we not be able to understand them?  If they are numerous or individually small, doesn't the history of science show that instrumentation and technology need to be brought to bear on the problem?

The answer to these questions is certainly 'yes'.  We're not mystics.  But physical problems need not be amenable to the kinds of solutions we currently have, any more than astrology solved its problems when naked-eye observation of the stars and planets was the technology of the time.  Our society certainly believes in technology and even more so, perhaps, in the idea that technology is for making a profit.  It is often explicitly stated that the point of science is its application--that we do this for our careers and labs, or for patients, or for society at large.

But it is not defeatism to ask whether the current approaches, based on 400-year-old Enlightenment-derived methods and concepts, are obsolete for the kinds of questions we are now asking (no matter how powerful they were for the lesser questions that were successfully answered).  It could potentially help to withdraw resources from business as usual as a way of trying to force more creative thinking--but there's no guarantee of whether, or when, that would work to stimulate the next Darwin or Einstein.

It is similarly not out of line to ask, as regular readers know we ask regularly, whether much of what is being supported in science is on the wrong trail--good for maintaining funding and other sorts of momentum, perhaps, but diverting funds from problems more likely to be soluble with traditional approaches, like diseases that really are genetic and for which genetic treatments would be fantastic.

And it is not out of line to ask whether, when there are so many really serious human ills in the world that have nothing to do with genes (or, for that matter, with science), resources are being wrongly used to maintain an academic welfare system, the way passing the plate maintains religious establishments on the promise of Things to Come.

As we have often said, triggered by yet more grandiose claims in the news or journals, complexity due to multiple interacting but individually small factors is the challenge of the day.  It is even more challenging to the extent that, really or for all practical purposes, these factors are themselves the probabilistic results of large numbers of interacting, individually minor, factors.

If that's the case, we are back in the 1800s, when it was discovered that every year a predictable number of people will commit suicide, and by a predictable array of methods, yet this can rarely be predicted for individuals.  That kind of problem was recognized more than a century ago, but it is still with us.

And it's a no-brainer to recognize that.

Monday, May 21, 2012

Soupcon du Jour: a problem in science as a System

Science is a 'System' in our society.

Why is it that after countless dollars, we still don't know if HDL cholesterol is good for us or not, or whether drinking coffee is good for us or not, or when and how obesity is defined and/or is good or bad for us, and so many other things of this sort, including a lot of genomics baloney, that clutter up the daily press and drain your taxes?  If we still have to live by the soupcon du jour--the question we ask today, and have to ask again and again the way restaurants change their soups du jour--then something's wrong!

It is obvious, we think, that these epidemiological interpretations are misguided: so many generally minor factors contribute to these traits--always changing, not all measured or identified, and so on--that we are just throwing money, skill, and effort at very minor questions that we cannot seem to answer, if the kind of 'answers' we think we seek even exist.  Those answers, from a science point of view, should have general predictive power, and staying power.  They should not just be statistically whiz-bang descriptions (not the same as prediction!) of yesterday's events that, since they already happened, we can sample.

We can predict how fast a rock will fall, but not how fast the risk of disease will fall, or why.  That's because gravity (at this level of observation, at least) is a universal law of nature, whereas in the life, health, behavioral, and social sciences we're dealing with evolution and its products, and these simply don't follow such taut laws.  We've posted many times on our view that the individual units (e.g., organisms, societies, genomes) are not uniform collections of the kind that, say, gravity or biochemistry deal with.

"If I had a hammer, I'd hammer in the morning...."
With science as a System, there is a whole infrastructure of people and resources who need their salaries, labs, psychological feeling of worth, and so on.  The System, if not its results, has to keep carrying on.  And in any case, because there really are problems to solve, one would not want to shut down the System: when some new challenge--one that really is important--came along, we'd not have the resources to address it.

But there are plenty of problems for science to face as it is, yet much of normal workaday science is essentially making up problems because, as scientists sometimes privately say, "this is what I do!"--whether or not it's answering questions adequately.  We can't each retrain all the time, to be qualified to address problems that really need addressing.  Or can we?

If it's a problem, is there a solution?
Maybe this is just how human societies of our type happen to work, when 99% of us live in cities and offices rather than on farms.  We have to seem to do something to justify our very unequal claim to resources, by which we make the other 1%-ers--the ones still on the farms--work hard to supply our needs.  Hierarchies need at least some patina of justification.

So, in that sense, there's no problem.  Like any other industry or System (including farming and mining), we have our members and structures and committed resources.  Most of what individual humans do is chaff, science being no exception.  Science is certainly able to do great things on occasion, even if we can't order them up the way you order a burger at McFood and expect it delivered quickly.  And when it comes to technology (not the same as basic science), we're marvelous creatures.

But if there is a problem in that we are so inertial, incremental, and conservative in that sense, is there a solution?  A solution would be a way of directing resources much more efficiently and effectively toward scientific questions of real importance, without losing the ready reserve we need to address the next surprise (environment, epidemic, war, etc.) that we may face unpredictably.  Is our army of specialists like a huge tanker that simply can't be turned around, or could we establish incentives or something else that would enable more innovative and creative careers, ones safe and predictable enough that people would want to pursue them?  We have no answer.  Maybe there isn't one.  Maybe the soupcon du jour--the suspicion of the day, which we can design huge studies to address until some new trivium comes along--is just how things have to be.

Friday, May 18, 2012

Non-replicability in science: your antelope for the day

A piece in the May 17 Nature supports one of Ken's favorite observations, something he says while wearing his Anthropologist's hat -- "Journal articles are just an academic's antelope for the day."  We're still just hunter/gatherers -- our published papers are, more often than not, nothing more than the way we feed ourselves.  Our basket of berries -- eaten today, droppings tomorrow.

Blackbuck male, females; Photo from Wikimedia, Mr Raja Purohi
Ed Yong, in "Replication studies: Bad copy," reports that most published studies can't be replicated.  This is something we often talk about with respect to genetic studies, and there are many reasons for this that are specific to genetic data, but apparently it's even more rampant in psychology, for reasons also specific to the field.  

And there is the notorious problem that 'negative' results are not published very often.  They're not glamorous and won't get you tenure--even though some of the most important findings in science are 'negative' ones, because they steer work towards valid rather than dreamt-of theories or hypotheses.  Clinical trials are a major example, but less noticed are the ephemeral natural selection stories about evolution.

A paper published last year claiming support for extrasensory perception, or psi, for example, produced a major kerfuffle (we blogged about it at the time).  The aftermath has been no less interesting, and informative about the world of publishing, as researchers who tried to replicate the findings but failed also failed to find publishers for their results.  This led to a lot of discussion about the implications of negative results not being published, a discussion that has flared up frequently in academia, as well it should, although we're no closer to resolving it than ever.
“There are some experiments that everyone knows don't replicate, but this knowledge doesn't get into the literature,” says [Eric-Jan] Wagenmakers [mathematical psychologist at the University of Amsterdam]. The publication barrier can be chilling, he adds. “I've seen students spending their entire PhD period trying to replicate a phenomenon, failing, and quitting academia because they had nothing to show for their time.”
But we'll leave that issue for another time.

The question of why studies so often aren't replicable is a different, if related, one.  And it is one that The Reproducibility Project, a large-scale collaboration of scientists from around the world, is addressing head on, as they attempt to replicate every study published in three major psychology journals in 2008, as described last month in the Chronicle of Higher Education.
For decades, literally, there has been talk about whether what makes it into the pages of psychology journals—or the journals of other disciplines, for that matter—is actually, you know, true. Researchers anxious for novel, significant, career-making findings have an incentive to publish their successes while neglecting to mention their failures. It’s what the psychologist Robert Rosenthal named “the file drawer effect.” So if an experiment is run ten times but pans out only once you trumpet the exception rather than the rule. Or perhaps a researcher is unconsciously biasing a study somehow. Or maybe he or she is flat-out faking results, which is not unheard of. 
According to Yong, the culture in psychology is such that experimental designs that "practically guarantee positive results" are perfectly acceptable.  This is one of the downsides of peer review -- when all your peers are doing it, good scientific practice or not, you can get away with it, too.
And once positive results are published, few researchers replicate the experiment exactly, instead carrying out 'conceptual replications' that test similar hypotheses using different methods. This practice, say critics, builds a house of cards on potentially shaky foundations.
So, if a study isn't replicated exactly (or however exactly it can be), it's possibly because the methods were not described in enough detail to allow replication.  Or, and this is a problem certainly not confined to psychology, the effect was small and significant by chance, as epidemiologist John Ioannidis suggested in a paper published in 2005 that garnered a lot of attention for saying that most Big-Splash studies are false.  He explained this in statistical terms, having to do with bias, the significance levels of studies of new hypotheses, and similar issues.
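The gist of that statistical argument can be illustrated with a back-of-envelope calculation (the numbers below are ours, chosen only for illustration, not Ioannidis's): if only a small fraction of the hypotheses being tested are actually true, then even with conventional significance thresholds and respectable power, most "significant" findings will be false positives.

```python
def share_of_significant_findings_that_are_false(prior_true=0.01, power=0.8, alpha=0.05):
    """Illustrative back-of-envelope version of the Ioannidis (2005) argument.
    prior_true: assumed fraction of tested hypotheses that are actually true.
    Returns the fraction of 'significant' results that are false positives."""
    true_hits = prior_true * power            # true effects correctly detected
    false_hits = (1 - prior_true) * alpha     # null effects that pass p < alpha anyway
    return false_hits / (true_hits + false_hits)

# If only 1 in 100 tested hypotheses is real, most significant findings are wrong:
print(round(share_of_significant_findings_that_are_false(0.01), 2))   # ~0.86
# With better odds (1 in 10 true), things improve but remain sobering:
print(round(share_of_significant_findings_that_are_false(0.10), 2))   # ~0.36
```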

As the Chronicle story says about non-replicability:
The researchers point out, fairly, that it’s not just social psychology that has to deal with this issue. Recently, a scientist named C. Glenn Begley attempted to replicate 53 cancer studies he deemed landmark publications. He could only replicate six. Six! Last December I interviewed Christopher Chabris about his paper titled “Most Reported Genetic Associations with General Intelligence Are Probably False Positives.” Most!
So, psychology is under attack.  We blogged not long ago about an op/ed piece in the New York Times by two social scientists calling for an end to the insistence that the social sciences follow any scientific method.  Enough with the physics envy, they said, we don't do physics.  Thinking deeply is the answer.  But, would giving these guys free rein to completely make stuff up really be the solution?  Well, it might just be, if their peers agree. But, let's not just pick on psychology.  The problem is rampant throughout the sciences. 

Meanwhile, the motto seems to be:  Haste makes....nutrition for scientists!

Thursday, May 17, 2012

The Prisoner's Dilemma dilemma

Last week's In Our Time on BBC Radio 4 was a discussion of Game Theory.  It was an interesting discussion as far as it went, but we want to talk about the evolutionary implications here. 

The Prisoner's Dilemma (PD) is a famous game theory game:
Two prisoners are accused of a crime. If one confesses and the other does not, the one who confesses will be released immediately and the other will spend 20 years in prison. If neither confesses, each will be held only a few months. If both confess, they will each be jailed 15 years. They cannot communicate with one another. Given that neither prisoner knows whether the other has confessed, it is in the self-interest of each to confess himself. Paradoxically, when each prisoner pursues his self-interest, both end up worse off than they would have been had they acted otherwise.  (Answers.com)
Both confessing can be shown mathematically to be an ESS--an evolutionarily stable strategy.  This is 'evolutionary' in the sense that competition for resources that involves some risk entails similar kinds of decision-making.
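For those who like to see it spelled out, here is a minimal sketch using the payoff figures from the quote above ('a few months' is approximated here as a quarter of a year): whatever the other prisoner does, confessing yields the shorter sentence, which is why mutual confession is the stable outcome even though both would be better off staying silent.

```python
# Years in prison for (my_choice, other_choice), using the figures quoted above.
years = {
    ("confess", "confess"): 15,
    ("confess", "silent"):  0,
    ("silent",  "confess"): 20,
    ("silent",  "silent"):  0.25,   # 'a few months', approximated
}

for other in ("confess", "silent"):
    best = min(("confess", "silent"), key=lambda me: years[(me, other)])
    print(f"If the other prisoner chooses to {other}, my best reply is to {best}.")
# Both lines say 'confess': confessing dominates, so both confess -- and both end up
# worse off (15 years each) than if both had stayed silent (a few months each).
```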

Red-tailed hawk; Wikipedia
Another 'game' of evolutionary interest is called Hawk and Dove (HD), which biologist John Maynard Smith brought to prominence in the 1970s in an effort to solve the problem of how cooperation evolved.  Roughly, the idea is that there is a single resource and two competitors; each can be aggressive and try to get it all, or each can offer to share.  You weigh the expected gain from trying to out-muscle your competitor against the cost you pay if you lose.  It's been shown that a balance can be struck in which, depending on the Value of the resource and the Cost of losing a fight, you behave aggressively some fraction of the time--that is, with some probability--and offer to share the remaining fraction of the time.  That mixed strategy is the ESS, and neither party reveals it to its opponent.
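The standard textbook result--this is what we mean by the fractions depending on Value and Cost--is that when the Cost of losing a fight exceeds the Value of the resource, the ESS is to play Hawk with probability V/C.  A short sketch (with illustrative numbers only) shows that at this mix the two pure strategies earn the same expected payoff, so neither can invade:

```python
def payoffs(V, C, p):
    """Expected payoff to a Hawk and to a Dove when opponents play Hawk with
    probability p.  Standard Hawk-Dove payoffs: Hawk vs Hawk = (V - C)/2,
    Hawk vs Dove = V, Dove vs Hawk = 0, Dove vs Dove = V/2."""
    hawk = p * (V - C) / 2 + (1 - p) * V
    dove = p * 0 + (1 - p) * V / 2
    return hawk, dove

V, C = 4.0, 10.0            # illustrative value of the resource and cost of injury
p_ess = V / C               # the textbook mixed ESS when C > V
print(payoffs(V, C, p_ess)) # both strategies earn the same payoff at the ESS mix
```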

Bar-shouldered dove; Wikimedia
Both PD and HD and many other similar types of games reflect situations that are very common in human society, but also very common in Nature--ecological balances, mating competitions, competition for resources, and so on.

Game theory is immensely popular among evolutionary biologists (and others who are obsessed by the view that life is mainly about Darwinian winner-take-all competition, or who simply want to understand the balance between competition and cooperation).  There is the additional appeal that game theory usually requires sexy, sophisticated mathematics to find the right strategies, or stable ones.

If people or birds are seen to be following some strategy when they compete, it is then assumed that they probably evolved to do this.  This then whets the prurient appetites of those who want to peer into your genome to see how, despite silly illusions of free will, you're really just a genetic automaton.

So, is it realistic to assume that something as open-ended and complex as a game-theory behavior could have evolved?  After all, games like PD or HD seem widespread, and if birds, ants, or even humans are just complex gene machines, mustn't it be possible to pre-program them (i.e., genetically) to play the game the evolutionarily optimized way?

This is really a dilemma, because even just one game, say HD, can arise in all sorts of ways even within a given individual's lifetime.  How can genes be pre-wired to recognize all the situations and identify their similarity and then push the Play button?  After all, every brain is wired in zillions of different ways in detail.  So what kind of gene or genes could possibly produce this behavior?  Remember that genes code for proteins, and have to be regulated in specific contexts; they are not individual computer programs.

One way to answer this is by a kind of meta-evolutionary view: we may not be able to identify the wiring diagram, but the net result of evolution is a brain that can perceive the environment and evaluate costs and strategies, and figure out for itself what is best.  The selection pressure is general, and it's for evaluating conditions rather than 'for' some specific strategy.  No need whatever for any specific genes 'for' HD or PD playing!

In this view, especially if games really are cosmically mathematical (as they must be, given how widely they are found and shown to have similar strategy properties), a brain that is somehow good at evaluating the real world will figure this out and identify the better strategy.  The same brain is faced with multiple and diverse evaluation situations, so all we need is overall evaluative ability to get what we see, however such ability actually gets built into neural synapses and the like.

One could easily relate this to probabilistic strategies like HD, because each individual is more or less guessing what to do each time, the result being an empirical probability--the observed fraction of instances in which individuals act like hawks, or doves.  Likewise, one could observe that an individual played something like PD properly because s/he figured out the general risk-benefit situation.  No need for specific evolution of some convoluted gene-based mechanism specific to the game (which would imply that the same would have to evolve separately, at the gene level, for every other situation-evaluating thing that animals do).

This is a way in which things can be 'genetic' in a general sense, but not specifically hard-wired by selection for a specific task.  That's a big difference!

It is genetically deterministic in a sense, but not in the precise Darwinian sense so often invoked, explicitly or just under the surface, in discussions of behavioral evolution.

Wednesday, May 16, 2012

On the causes of the obesity epidemic, maybe everyone's right.

Interesting how much various perspectives differ on the light they can shed on a subject.  Obesity is a significant and increasing problem in the US and much of the rest of the world.  Why?  The answer depends on who you ask.  A geneticist says it's genetic, and probably billions of research dollars have been spent on looking for genes 'for' obesity.

An epidemiologist is likely to say it's gene by environment interaction, though these days an epidemiologist may well finger only genes, given that identifying environmental risk factors for complex traits, even those for which environmental factors are clearly primary, has so often proven to be daunting. Many epidemiologists excuse their jumping onto the genomic bandwagon with the rationale that yes, genetic effects are weak, but we need to know them anyway because once they are identified they can be regressed out of the search for the real (to them, environmental) effects.

A nutritionist might say it's diet, and/or not enough exercise, and the girth of the diet section of any bookstore tells you how many ways that explanation can be parsed.  Indeed, that girth itself is an indicator of the size of the problem -- a simple and cheap epidemiological stand-in!

Some people pick out a single component of the diet -- sugar is a big one these days; we blogged about that here -- as the culprit. A lipid scientist might chalk it up to leptin, a hormone involved in regulating appetite and metabolism.   A person struggling with his or her weight might say it's personal weakness.

And now mathematics weighs in.  An interview with an applied mathematician at the NIH was reported in yesterday's Science section of the New York Times.  Carson Chow is at the National Institute of Diabetes and Digestive and Kidney Diseases, where there is a growing interest in the mathematical study of obesity.  Why not--math is technical, opaque, and so has sex appeal!  Chow was hired in 2004, at which time he had little knowledge of obesity, but he quickly learned.
I could see the facts on the epidemic were quite astounding. Between 1975 and 2005, the average weight of Americans had increased by about 20 pounds. Since the 1970s, the national obesity rate had jumped from around 20 percent to over 30 percent.
The interesting question posed to me when I was hired was, “Why is this happening?”
When he first arrived, Chow worked with a mathematical physiologist who had developed a model of obesity that involved "hundreds of equations", including all the usual variables -- height, weight, exercise, caloric intake, and so on.  Chow says he pared it down into a simple equation, the essential message of which is that the obesity epidemic has been caused by "the overproduction of food in the United States." 
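The article doesn't give the equation itself, but the general flavor of such energy-balance models is easy to sketch: weight change is driven by the gap between calories eaten and calories burned, expenditure rises with body weight, and so a sustained increase in the food supply (and in eating) pushes the population toward a higher steady-state weight.  The toy model below is emphatically not Chow's; every parameter value is invented purely for illustration.

```python
# A generic one-compartment energy-balance sketch (NOT Chow's actual model;
# all parameter values are assumptions chosen only to illustrate the idea).
def simulate_weight(intake_kcal_per_day, days, start_kg=70.0):
    rho = 7700.0        # assumed kcal stored per kg of body-weight change
    k = 30.0            # assumed extra kcal/day burned per kg carried
    base = 1500.0       # assumed baseline kcal/day (crude linear expenditure)
    w = start_kg
    for _ in range(days):
        expenditure = base + k * w
        w += (intake_kcal_per_day - expenditure) / rho
    return w

# With more food available (and eaten), weight drifts to a higher steady state:
print(round(simulate_weight(3600, days=10 * 365), 1))  # settles near 70 kg
print(round(simulate_weight(3900, days=10 * 365), 1))  # ~300 extra kcal/day -> ~10 kg heavier
```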

This is interesting, but it's not the first time this explanation has been proffered.  'Food chain journalist' Michael Pollan has also blamed obesity on the overproduction of food--in particular, the excess of nitrogen after World War II and its subsequent use as fertilizer, which meant more corn being grown, and thus more corn-based processed food, all dependent on farm subsidies.  And it has long been felt that the largely post-war epidemic of obesity and related disorders in Native Americans was due to sedentary, depressed lifestyles and the open-ended availability of cheap calories.

But, overproduction of food can't really be the answer by itself, because excess corn can sit in the field until the cows come home if no one is going to buy it or what's made from it.  Someone had to convince the consumer to buy, and then eat the stuff.  So then, maybe it's the advertising industry that's responsible for the obesity epidemic.  Those paid-deceivers lie (so to speak) at the heart of many of the more serious problems in the US these days, after all.  Indeed, Chow believes that if the industry stopped marketing food to children, that would be a start.  (Oh, no, can't limit free speech, bleats Madison Avenue, claiming in effect that making you obese is their first-amendment right!)  Further, Chow says, "You simply have to cut calories and be vigilant for the rest of your life." Vigilant in resisting the appeal of all that food that's being flashed at you wherever you look. 

But maybe there is no single cause of obesity.  Maybe obesity is yet another complex trait and, collectively, everyone is right.  Perspective is important -- if you're studying leptin, what matters to you is not why there's so much excess food in the marketplace, but why people want to eat it.  If you're a geneticist, nothing matters except your next GWAS.  It's like the question of what causes AIDS -- is it HIV, needle sharing, poverty?  It's all of the above.

This is, in a way, a lesson for much of life -- including evolution and genetic causation.  Life is not about single factors. If it were, or had been, it would be far too vulnerable to extinction.  The buffer of complexity spares life, while the complexity of buffets generates spare tires.