Tuesday, December 7, 2010

Hope, not hype -- stimulating remyelination may be a possible route to multiple sclerosis (MS) therapy

We don't do this very often, but here we point out an excellent use of genetics with a potentially very important therapeutic outcome.  Authors writing in Nature Neuroscience on Dec 5 report that the ability of myelin sheaths in the CNS to regenerate following acute demyelination is limited in demyelinating diseases such as multiple sclerosis (MS), but that with pharmacological and genetic manipulation they were able to enhance remyelination in rats.  This has the potential to be a significant advance for treating demyelinating diseases. 

The molecular basis of remyelination has not been well characterized, but it has been known, as stated in the paper, that following demyelination, "adult oligodendrocyte precursor cells (OPCs) can migrate to the area of injury, differentiate into oligodendrocytes and restore myelin sheaths".  Part of the problem in individuals with MS is that following the demyelination episodes that characterize the disease, OPCs don't differentiate into myelinating oligodendrocytes.  (OPCs are the CNS counterpart of Schwann cells, which myelinate axons in the peripheral nervous system.)

To identify genes of interest in the remyelinating process, Robin Franklin's group demyelinated nerve sheaths in rats with a toxin, and generated a list of all the genes expressed in the lesions during the remyelination process, a complete "transcriptome".  They found thousands of genes differentially expressed over time in the lesions as they regenerated myelin, including, early on, genes involved in the immune response, and later, a number of genes already known to be involved in myelination, cell metabolism, proliferation and differentiation.
These results show that the overall molecular signature of CNS remyelination involves distinct and temporally regulated signaling pathways that are characterized by active inflammation at 5 dpl [days post lesion] and by the initiation of remyelination at 14 dpl.
But one gene was of special interest: it was among the most significantly upregulated genes at 14 dpl, when remyelination was occurring, and it clustered with many genes involved in myelination.  This is retinoid X receptor gamma, RXR-gamma, also found in tissue from individuals with multiple sclerosis, and involved in the regulation of cell proliferation, differentiation and apoptosis -- roles the authors verified experimentally.  The paper presents a very careful characterization of this gene and its role in remyelination. 
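The logic of that kind of screen can be sketched in miniature. This is an invented toy, not the paper's pipeline -- the gene names, expression values and fold-change cutoff below are all placeholders -- but it shows how one flags genes upregulated at a given timepoint relative to control:

```python
# Toy sketch of a differential-expression screen (not the paper's pipeline):
# gene names, expression values, and the fold-change cutoff are all invented.
import math

# expression per gene: (control, 5 dpl, 14 dpl)
expression = {
    "immune_gene_A":  (10.0, 80.0, 12.0),   # spikes early, during inflammation
    "myelin_gene_B":  (10.0, 11.0, 95.0),   # spikes late, during remyelination
    "housekeeping_C": (50.0, 52.0, 49.0),   # flat throughout
}

def upregulated_at(timepoint, threshold=2.0):
    """Genes whose log2 fold change vs control (index 0) exceeds threshold."""
    return [gene for gene, levels in expression.items()
            if math.log2(levels[timepoint] / levels[0]) >= threshold]

print(upregulated_at(1))   # 5 dpl  -> ['immune_gene_A']
print(upregulated_at(2))   # 14 dpl -> ['myelin_gene_B']
```

Run on thousands of real genes and several timepoints, a screen like this is how a gene such as RXR-gamma ends up standing out at 14 dpl.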

To determine whether they could effectively activate RXR and promote CNS remyelination, they tested the effects of 9-cis-retinoic acid (9cRA) on RXR activity.  9cRA is an RXR ligand known to activate transcription of MBP, or myelin basic protein, which is expressed in differentiated oligodendrocytes.  If RXR signaling is involved in differentiation in these cells, it would be via MBP.  It turned out that OPC cultures treated with 9cRA showed signs of OPC differentiation, and in vivo tests of the effects of RXR signaling via this pathway were positive. They then determined that, in culture, blocking RXR activity in OPCs inhibited oligodendrocyte differentiation.

So, this series of experiments, from gene discovery to characterization of function, suggests a possible pathway to treatment for demyelination diseases -- stimulate RXR activity.  A short interview with the senior author in the story on the BBC website shows him to be cautiously hopeful, but measured in his promises for a cure. When pressed on when this work might lead to treatment, he said that it was very difficult to say, but perhaps within 15 years. 

The BBC quotes him:
Professor Robin Franklin, director of the MS Society's Centre for Myelin Repair at the University of Cambridge, said: "Therapies that repair damage are the missing link in treating MS.
"In this study we have identified a means by which the brain's own stem cells can be encouraged to undertake this repair, opening up the possibility of a new regenerative medicine for this devastating disease."
The study takes advantage of the idea of genes for processes rather than things per se.  Myelin is, in a sense, an insulating compound: it's not alive, but is secreted by cells. Rather than try to apply some sort of coating to individual nerve cells by therapy, the idea is to induce the cells to do it themselves when they're defective in that process.  That means tinker with signaling!  Thus, the problem here is with signal reception and the consequent disruption of subsequent signal cascades leading to cellular production of myelin. By restoring the efficacy of the signal, the downstream events are triggered -- and no meddling with the many intermediate steps is needed, as they are intact in the individual's genome.  In that sense, or rather in the way we've tried to describe in our book The Mermaid's Tale, it is communication and cooperation that have failed, and here, fortunately, they appear to be restorable.

This is an excellent example of the use of genetic knowledge to explore a disease and potential treatment pathway.  A lot of work still must be done to figure out, among other things, how to stimulate remyelination in vivo, so caution is warranted.  Still, we hope along with the researchers that their work continues to be fruitful.

Saturday, December 4, 2010

Arsenic and Old News?

Alien life!!
"Arsenite...[incorporates into DNA] in the place of phosphate in the nucleotides during the synthesis of DNA."  

You've seen this story all over the web in the last few days.  Alien life!  Life on other planets!!  NASA is revived!   Mars men get ready, 'cause here we come!

But, in fact, is that really the story?

The report was published in Science Express this week and written about in the NYT  -- and everywhere else you look:
Scientists said Thursday that they had trained a bacterium to eat and grow on a diet of arsenic, in place of phosphorus — one of six elements considered essential for life — opening up the possibility that organisms could exist elsewhere in the universe or even here on Earth using biochemical powers we have not yet dared to dream about.
 “There is basic mystery, when you look at life,” said Dimitar Sasselov, an astronomer at the Harvard-Smithsonian Center for Astrophysics and director of an institute on the origins of life there, who was not involved in the work. “Nature only uses a restrictive set of molecules and chemical reactions out of many thousands available. This is our first glimmer that maybe there are other options.”
NASA, naturally enough, with its PR army always at the ready to instantly toot its own horn, didn't miss this juicy chance:
NASA-funded astrobiology research has changed the fundamental knowledge about what comprises all known life on Earth.
So some bacteria were artificially selected to eat arsenic.  What's the big deal here?  Is this a big deal?
Caleb Scharf, an astrobiologist at Columbia University who was not part of the research, said he was amazed. “It’s like if you or I morphed into fully functioning cyborgs after being thrown into a room of electronic scrap with nothing to eat,” he said.
Oh, that's it!  It is a big deal!  The bacteria morphed into cyborgs!  Or rather, the arsenic insinuated itself into the bacterial DNA, by replacing phosphates in some of the nucleotides.  Is that a big deal?

That quote we started with?
It turns out that the quote with which we led this post, in red, the one about arsenite incorporating into DNA, the big finding that NASA is touting, is from a paper published in 1980 -- 30 years ago -- about the health effects of arsenic and how and why it's so toxic. To humans.  Give humans enough arsenic and they die, but we can survive lower levels.  Kind of like bacteria.  And that 1980 paper cites a 1974 paper: Petres, J., D. Baron and I. Kunick, Untersuchungen über arsenbedingte Veränderungen der Nucleinsäuresynthese in vitro [Studies on arsenic-induced changes in nucleic acid synthesis in vitro], Derm. Mschr., 160 (1974) 724-729.  Ok, it's in German, but it's still knowledge that's out there.


Something largely new, worth knowing--important, even!
It's long been known that arsenic is chemically similar to phosphorus and can substitute for it in many biomolecules, like proteins and DNA, and in their reactions.  This is freshman chemistry.  It has also been known that the arsenic alternatives are less stable than those involving phosphorus in these compounds and reactions.

What this new study showed was that these particular bacteria, chosen because they had already adapted to a high-arsenic environment, could be induced by artificial selection pressure to incorporate arsenic into their basic biomolecules.  The study is a careful, sophisticated demonstration of this fact. The authors clearly state that the bacteria were already adapted--in the normal evolutionary way--to a high-arsenic environment.  What they then did was gradually expose a serially transferred culture of these bugs to increasing concentrations of arsenic with little or no added phosphorus.  The bacteria that survived were able to substitute arsenic for at least some of their biofunctions (including its use to synthesize DNA).

This is worthy of publication in a major journal, and even newsworthy, as showing the degree to which already-adapted simple organisms can adapt further to a slowly changed environment.  It shows the potential power of evolutionary adaptation, and the degree to which life could, in some ways, function in an arsenic-rich, phosphorus-deprived environment.

But this is not a new life form, any more than a giraffe is a new life form.  The idea that there is life that doesn't function exactly as we do is not new.  After all, we have found anaerobic life, life in extremely hot, cold, salty etc. conditions on Earth.  In most every way, these are ordinary bacteria and this is ordinary evolution followed by very carefully imposed artificial selection. 

Were they happy?
And those little bacteria?  They were much much happier when given phosphorus again after their arsenic bath.  As the authors say, the bacterium "is not an obligate arsenophile and it grew considerably better when provided with P" [phosphorus] -- which means that it would not have survived natural selection in the presence of both chemicals.

Indeed, this study doesn't provide a scrap of evidence that such forms could evolve into stable complex life forms de novo without the help of strong, intentional, teleological and hence totally non-Darwinian evolution.  Maybe it's possible, but this study shows nothing more than that bioreactions can occur, under some particular conditions, with the incorporation of arsenite in a complex cell already adapted to an arsenic environment.  We aren't chemists, so we can't say how much this study furthers, much less revolutionizes, ordinary biochemistry.  The paper basically says nothing about life in NASA-land.  That was almost entirely PR hyperbole--the usual self-promotion (not, we note, by the authors in their paper, though they were funded by NASA).

This paper can stand on its own legs as an interesting finding in biochemistry, or even a demonstration of the power of natural or artificial selection....but is there anything new here that wasn't already known in the 1974 German paper?  Perhaps the latter (we don't read German well enough to judge) was an in vitro reaction rather than one occurring in a whole organism.  If so, if this is a new finding about DNA synthesis in vivo, then responsible news media should just say that without all the ballyhooing and showmanship.

Do we know anything new about alien life?
As to alien life, what does this add?  Let's now assume that the study shows that, in principle, life could exist without phosphorus and with arsenic.  It has long been argued (wholly hypothetically, but let's not quibble about that either) that if there are billions of rocky planets in the universe (and perhaps it's trillions, and perhaps there are many universes), and if even only a tiny fraction of those planets have classical earth-life conditions, then just statistically there must be thousands of planets with life that has at least some similarities with our own.  That doesn't mean they use DNA with RNA intermediates, have brains or lipid cell membranes (or cells, for that matter).  But even assuming that is statistically possible, it may exist on at least some planets somewhere out there. The larger space is, the more of these there statistically would be, on the basis just of probability.  Of course such numbers games are fun, but prove nothing.

So what about the arsenic finding?  Suppose, say, it doubled the fraction of planets that would have this additional life-possible make-up.  By playing the same numbers game, this would just raise the hypothetical number of planets with life on them by, say, a factor of two.  It doesn't change a single thing we've thought or known before, even if we assumed that the new result showed that arsenic could work.
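The arithmetic of that numbers game can be made explicit. Every input below is a made-up placeholder, which is rather the point:

```python
# Every number here is a hypothetical placeholder -- that's the point.
rocky_planets = 10**11                     # hypothetical count of rocky planets
life_friendly = rocky_planets // 10**8     # assume 1 in 100 million is life-compatible

print(life_friendly)        # 1000 planets with life, by assumption
# Now admit arsenic-based chemistry too, doubling the compatible fraction:
print(2 * life_friendly)    # 2000 -- same game, a bigger guess
```

Whatever inputs you choose, the output scales linearly with the assumed fraction, so "doubling the chemistry options" just doubles a number that was hypothetical to begin with.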

It also doesn't raise by any serious amount the likelihood we'll find such planets, that we'll talk to the little green arseno-Men who live on them, or anything else.

Are we condemned to live in a world in which everything has to be boasted about, or turned into  entertainment?  Is it OK for science to be turned into fun in this way?  Or is it irresponsible? 

Science is interesting in its own right, and we know that, left alone, it will lead, if unpredictably, to increased and interesting knowledge, and improved standards of living.  But the hype-route obviously isn't working to produce a population with a deeper understanding of science.  It makes a population that (in the US) largely doesn't even 'believe' in evolution, and perhaps increasingly can't tell fact from fiction.

Friday, December 3, 2010

The Mermaid's Fin

Out of sheer neglect, we never posted a picture of this beautiful mermaid sculpture that Holly sent from Kenya when she was there last summer.  As she said, they almost got it right.




But that's an interesting point.  What's 'right' when it comes to mermaids?

Do they look like this?




Or more lusty like this?


Or like this?
 




Or like these PT Barnum mermaids (that some claim were fakes)?

It's all well and good for people to talk casually about mermaids. But (as we discuss briefly in our book) this impinges on serious science vs just-so story-telling.  What do mermaids actually look like?  And is there species diversity?  Or are these images from a gallery of mermaid mutants?

In an age where, as we regularly try to point out, proper science and its relationship with the media and its lobbying for funds often get at cross purposes, reliable answers are important.  If mermaids don't actually exist (except perhaps at the bottom of Loch Ness), or if they all look the same, then there is likely nothing to apply for research funds to study.

BUT, if they have all this variation, then in the spirit of several things we've posted about over the past days and months (like the lusty finger-length stories!), mermaids may be prime targets for new, major, large-scale biobanking GWAS studies!

Thursday, December 2, 2010

Loving hands, or 'go finger!'

Remember the Fickle
Finger of Fate?


WARNING:  This message contains adult content (along with something that purports to be 'science').  Digital discretion advised.

Well, we often hear that someone with loving hands is a good lover.  This can be taken both literally and as a figure (or finger) of speech.  The structure of hands isn't often thought of as a sexual attribute.  But maybe that has to change.

Not long ago Holly posted about the finding that Neanderthals were sex-obsessed thugs, based on the length of their finger bones, which apparently indicates how much exposure they had to testosterone in utero.

Now, to add to the complexity of the sexual finger is a story in the British Journal of Cancer about male finger configurations and the risk of prostate cancer. 
The ratio of 2nd and 4th digit length is fixed in utero (2D:4D ratio), and is sexually dimorphic, lower in men than in women. To date, only one longitudinal study has investigated digit ratio and prostate volume, PSA level and the prostate cancer risk. The ratio (2D:4D) is negatively related to testosterone and related phenotypes, such as sperm counts, and positively related to oestrogen concentrations. Accordingly, digit length pattern may act as a proxy indicator for the underlying prenatal testosterone levels. We therefore investigated this in a large case–control study of prostate cancer to explore whether there is any association between hand pattern and prostate cancer risk. 
That is, men whose index finger (2D) was longer than their ring finger (4D) were a third less likely to develop prostate cancer than men whose ring finger was longer.  But on the right hand only, not the left.  (Handedness even in sex!)  And the previous study mentioned is the "Korean Cohort study" of 366 men, which found a negative association between digit ratio and PSA, a measure of prostate cancer risk that is notoriously not terribly sensitive.

As the BBC sums it up,
Being exposed to less testosterone before birth results in a longer index finger and may protect against prostate cancer later in life, say researchers at the University of Warwick and the Institute of Cancer Research.
Does this mean that if you are at high risk of prostate cancer, you're also a sex-obsessed rapine thug?  Or, does this mean that Neanderthal men all had prostate cancer? Have our noble investigators thought (yet) about getting a grant to give DREs (digital research exams, that is) to prisoners convicted of sexual crimes?  When will DRE results be admissible evidence in sexual abuse trials or preventive surveillance?

So this study is based on two assumptions -- one, relative finger length is indeed a reflection of testosterone levels during the development of the hand, and two, embryonic testosterone levels in fact influence risk of prostate cancer sixty or seventy years later. 

However, neither of these assumptions is tested by this study.  The study itself even says, cautiously enough, that digit length ratio "may [our italics] be a proxy indicator for prenatal testosterone levels." And the authors write that other adult diseases have been associated with uterine hormone levels.  The implication being: Thus, why not prostate cancer? 

This is on the nearly silly side, another use of research funds that probably shouldn't have happened.  In addition to the issues we mention above, more than half of men have the Fickle Finger trait (wait, does this mean that the other 1/2 of men are not sex-obsessed? Impossible!  Who ever heard of a man who was not sex-obsessed?), yet the risk of clinical prostate cancer even by old age is only about 150 per 100,000 men.  Indeed, just as with PSA testing, the Finger Test could lead to a lot of screening in the Long Fingered, that could, like PSA testing, cause more morbidity and problems due to intervention than it solves. That's because most males of elder years have some prostate cancer, and most of those lesions never progress to a clinical stage before something more serious (and fatal) intervenes.
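A back-of-the-envelope Bayes calculation shows why screening on a weak trait in a low-prevalence disease mostly flags healthy men. Only the prevalence figure comes from the post; the sensitivity and specificity of the "finger test" below are invented placeholders:

```python
# Back-of-the-envelope Bayes. Only the prevalence comes from the post;
# sensitivity and specificity of the "finger test" are invented placeholders.
prevalence = 150 / 100_000   # clinical prostate cancer, per the post
sensitivity = 0.70           # assumed: P(longer ring finger | cancer)
specificity = 0.50           # assumed: about half of all men have the trait anyway

true_pos = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)

ppv = true_pos / (true_pos + false_pos)   # P(cancer | flagged by the test)
print(f"{ppv:.4f}")   # 0.0021: roughly 2 in 1000 flagged men actually have it
```

However you tweak the assumed numbers, a trait carried by half of all men can't say much about a disease carried by a fraction of a percent of them.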

Of course, on the positive side, a glance at the hand is less embarrassing than a real DRE (digital rectal exam), or a PSA test to look for prostate cancer.   It may be as useful, at least in terms of risk.  It's a lot cheaper.  Of course the PSA testing companies are likely to resist this current interpretation. And with similar disinterest, what do the investigators say?  "We need a lot more research" (of course).

It all goes to show that even since our ancient fossil ancestors, women should be doing a size test on the guys they date, rather than just casually holding hands--and be prepared for what he might want to do!  He may want his hands all over you....but it may be in his genes, so to speak, and how could he be blamed??

Wednesday, December 1, 2010

Supplemental data on vitamin D supplementation

Could the promotional enthusiasm over vitamin D supplementation be screeching to a halt?  Ok, that's too much to hope, but at least the Institute of Medicine has introduced a note of sanity into the discussion.

The IOM was charged with assessing the existing data to determine whether in fact half of us (or more) are deficient in vitamin D; whether deficiency actually does cause all the diseases it's been linked with in recent years -- cancer, diabetes, heart disease, the flu and other infectious diseases, autism, disorders of the immune system such as multiple sclerosis; indeed, most of the diseases of modernity; whether excess vitamin D intake could be a problem; to establish adequate vitamin D levels; and to suggest gaps in our knowledge about vitamin D and how to fill them.  They also looked at whether calcium supplementation was necessary.

Their report, released on Tuesday, is summarized thus:
The IOM finds that the evidence supports a role for vitamin D and calcium in bone health but not in other health conditions. Further, emerging evidence indicates that too much of these nutrients may be harmful, challenging the concept that “more is better.”
Dr Michael Holick of Boston University is the most prominent proponent of the more-is-better view, and he ain't backin' down now.  Few people do back down from public postures, though of course if they're right they shouldn't.

Adequate vitamin D levels have been difficult to establish, in part because the causes of chronic disease are so difficult to identify.  As the NYT says,
It is not clear how or why the claims for high vitamin D levels started, medical experts say. First there were two studies, which turned out to be incorrect, that said people needed 30 nanograms of vitamin D per milliliter of blood, the upper end of what the committee says is a normal range. They were followed by articles and claims and books saying much higher levels — 40 to 50 nanograms or even higher — were needed.
In a population with high prevalence of chronic disease, it's very easy to establish a link with inadequate vit D (defined, essentially, as the level found in most of the unhealthy population), for someone who wants to believe it.  But, it's often impossible to say which came first, the purportedly low vitamin D levels or disease.  Someone who's chronically ill is less likely to be out in the sun, making vitamin D, than someone who's fit and healthy.  This is a classic case of the correlation doesn't equal causation caveat not being heeded.

The Institute of Medicine says vitamin D is required to maintain strong bones, but even this is not unequivocal: African American women have significantly lower vitamin D levels than European Americans, but significantly lower fracture risk as well.

And, in fact, says the NYT:
Evidence also suggests that high levels of vitamin D can increase the risks for fractures and the overall death rate and can raise the risk for other diseases.  While those studies are not conclusive, any risk looms large when there is no demonstrable benefit.  Those hints of risk are "challenging the concept that 'more is better,' " the committee wrote.

So, once again, what seemed like a simple story, a simple correlation and a simple cure for the ills of modern civilization turns out to have been simplistic.

That's a lesson that really needs to be learned: if something is so elusive it's unlikely to be that important in general or on its own.  And this means that simple hypotheses about function, and especially about evolutionary fitness, can easily become simplistic -- knowingly overlooking complexity in order to tell (or sell) a good story.  Instead of such simplistic Just-So causal or evolutionary stories, a major challenge for modern biology is -- or at least should be -- to face complexity on its own terms.  That's hard as we so often stress on MT, because the career, bureaucracy, and reward systems favor simple, quick-fix kinds of assertions.

Tuesday, November 30, 2010

Is asthma 'catching'? Is everything? Who said we'd defeated infectious disease?

The incidence of childhood asthma skyrocketed through the 1980s and 1990s and no one knows why.  Environmental epidemiologists have attributed it to a wide variety of factors, including breastfeeding, bottle feeding, absence of helminth infestation, dirt, excessive cleanliness, air quality (good or bad), presence of pets in the household in infancy, the absence of pets, the use of acetaminophen in infancy, and even Facebook -- note that many of these are directly contradictory.  A few studies early on found that kids who grow up on farms tend to be at lower risk of asthma, and this led to the "Hygiene Hypothesis", which posits essentially that an immune system without enough to do is an immune system that goes awry.


A trend in epidemiology in the last few decades, in direct response to the inability to definitively identify environmental causes of complex diseases such as asthma or heart disease or type 2 diabetes, has been to look inward instead, to look for genes 'for' these diseases.  This isn't surprising, given the promises made by the Human Genome Project, for example, that knowing the genome would allow us to explain the majority of disease.  And, in fairness, given the great successes of human genetics early on at finding single genes for rare largely pediatric diseases like cystic fibrosis or Tay Sachs, the idea that genes could explain other diseases (even most diseases) was highly appealing and seductive.

But this meant that diseases were geneticized that, in our opinion, should have been left to the environmental epidemiologists -- genes can't explain an epidemic that took off as fast as the asthma epidemic did because gene frequencies don't change nearly fast enough.  Even if you posit that those who are affected have a genetic susceptibility to whatever environmental factor is triggering the disease, still it's clear that something rapidly changed in the environment, and, for our money, research dollars should be going toward finding that rather than genes.  (Why epidemiology has such a hard time finding such factors is another story, but it's largely because the tools of the trade are best at finding risk factors with large effects, such as infectious agents.  Though, why that hasn't been true of asthma, which, given the nature of the spike in incidence, seems to have had a main effect cause, is perplexing.)
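A crude calculation illustrates why gene frequencies can't track a decades-long epidemic. This is the standard textbook haploid selection model, not anything from the asthma literature, with an invented starting frequency and a deliberately generous selection coefficient:

```python
# Standard haploid selection model (textbook, not from the post): allele A
# has fitness 1+s relative to the alternative. The starting frequency and
# s are invented; s = 0.05 is already very strong selection in humans.
def next_freq(p, s):
    """Allele frequency of A after one generation of selection favoring A."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p = 0.10
for _ in range(4):              # ~4 human generations in a century
    p = next_freq(p, s=0.05)
print(round(p, 3))              # 0.119 -- barely moved in 100 years
```

Even under this implausibly strong and sustained selection, the allele frequency creeps from 10% to about 12% in a century, while the asthma epidemic unfolded in a decade or two.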

But, largely because of this failure, many genetic studies of asthma have been done in the last several decades, including large genomewide association studies (GWAS).  And, just as with other genetic studies of complex diseases, in what is a rather repetitive drumbeat for trait after trait, nothing major has been found to reliably explain this epidemic.  Sure, some genes have been reported, but they just don't explain enough to be the answer.  Now researchers still interested in the genetics of asthma are suggesting, as with other complex diseases, that there must be numerous interacting genes with small effect.  Which is much more likely than one or two causative genes, yes, but it doesn't answer the question of what caused the epidemic.

There have been many ideas about the environmental component, most of them having to do not with pathogens but with pollutants, a number of candidates being listed above.  But now Science reports in the 26 November issue that there may be a bacterial link with asthma.  Again, the 'more microbes is better' hypothesis:
As odd as this might sound, there's mounting evidence that bacteria matter. Babies born via cesarean section, who experience a more sterile entry into the world than those born vaginally, are more likely to get asthma. So are young children treated with many courses of antibiotics. Along with animal studies, these observations suggest that the balance of bacteria and other microbes help guide immune development—and that when the balance is disrupted, disease may follow.
But it's complex.
All of us play host to bacterial residents. But children who develop asthma, researchers are learning, are home to different bacteria—and sometimes a less diverse mix—than those who stay healthy. “It's really coming down to the bacterial community structure, who's there, and in what numbers, and where,” [University of Michigan immunologist] Huffnagle says. 
So far, the evidence linking asthma and bacteria are associations, not proof that an imbalance of bacteria causes the disease. The big question, says Martinez, is, “Do asthmatics have an immune system that makes them be colonized by different things? … Or is it because they were colonized by different things that caused them to have asthma?” 
Researchers are currently comparing the microbiome (the community of microbes that colonize an individual) of kids with asthma and kids without, kids on farms, and kids in cities, to try to answer this question.  This is another way to ask if it's genes or environment -- which came first, the microbes, or the host environment?  And, of course many kids born by C section don't get asthma, and many kids born vaginally do.  It's complex indeed.

Microbiome analysis has some merits, beyond being yet another genomic Golden Calf for research labs. But we can predict that it is yet again a hunt for needles in a needle-stack of complexity.  We're likely still to be left with the problem that as a complex phenotype, like all other complex phenotypes, asthma is hard to explain, and hard to define.  In fact, an editorial in The Lancet several years ago suggested that every case should be considered unique, causally, physiologically and in how it's treated.

It is interesting, however, that schools of public health abandoned their infectious disease departments in the '70s, declaring effectively that we'd won that battle (cheered on, naturally, by geneticists seeing themselves as the funding beneficiaries).  Now, not only are clearly infectious diseases like HIV/AIDS, multiple-antibiotic-resistant TB, SARS, and various influenzas still plaguing us, along with resurgent malaria and other neglected tropical diseases, but many GWAS 'hits' are landing in genes involved in the immune system.

Inflammatory bowel disease, Crohn's disease, macular degeneration, and even schizophrenia are in this category.  If these pan out (and the first ones seem clearly to have done so), it means that non-acute, sub-clinical infection of long duration may build into diseases that did not seem either contagious or infectious.  If that's so, and the message sinks in, then GWAS may have done a service.  And epidemiologists like the sometimes-fringy Paul Ewald, who have been touting the role of infection in chronic disease, will have been vindicated.

Monday, November 29, 2010

RAiN gutter, or RNAi down the tubes?


Here's a story that may not seem so but is actually about the kind of extensive cooperation that characterizes life, and is a major theme in our book.

"Drug giants turn their backs on RNAi," reports Nature this week.  The headline writer must have had a hard time resisting the urge to end that sentence with an exclamation point, because RNA interference has been seen for the last decade or so as the latest greatest drug technique on the horizon, and if that's no longer true, that's news.
Not long ago, a technique called RNA interference (RNAi) seemed to be on the fast track to commercial success. Its discovery in 1998 revealed a new way to halt the production of specific proteins using specially designed RNA molecules, and it quickly became a favourite tool of basic research. In 2006, the scientists who made the discovery were awarded the Nobel prize for medicine, and the New Jersey-based pharmaceutical giant Merck paid more than US$1 billion to snatch up Sirna Therapeutics in San Francisco, California — one of the first biotechnology companies aiming to harness RNAi to create new drugs.
As Alnylam Pharmaceuticals, one of the best-endowed RNAi start-ups in the world, describes it:
RNAi is a revolution in biology, representing a breakthrough in understanding how genes are turned on and off in cells, and a completely new approach to drug discovery and development. RNAi offers the opportunity to harness a natural mechanism to develop specific and potent medicines, and has the potential to become the foundation for a whole new class of therapeutic products.
The discovery of RNAi has been heralded as a major scientific breakthrough that happens only once every decade or so, and represents one of the most promising and rapidly advancing frontiers in biology and drug discovery today. 
Well, according to the story in Nature, Alnylam has just laid off 50 workers, more than a fifth of its work force, because the Big Pharma company, Novartis, declined to extend its partnership with them.  Of course Alnylam says they still believe wholeheartedly in the promise of RNAi, but they would have to say that, wouldn't they?  Is protecting your stock prices and nervous investors an excuse for shading honesty, one of our recent themes?

RNAi is a naturally occurring process that interferes with gene expression: a short antisense RNA strand base-pairs with its complementary messenger RNA, blocking the message's translation into protein.  The discovery of RNAi was worthy of a Nobel prize because it revealed how cells naturally titrate their levels of gene expression.  They can start using a gene, but then quickly shut it down when brief or highly controlled timing is important -- in forming organs in an embryo, for example, or in cell differentiation or response to environmental changes.  It shows that nature is like the mushroom in Alice in Wonderland: nibble from one side to get taller, and the other to get shorter.
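As a toy sketch of this base-pairing principle (our illustration, not anything from the Nature story or the companies involved), a few lines of Python can check whether a short antisense strand is the reverse complement of a stretch of the target message; the sequences below are made up:

```python
# Toy illustration of the antisense logic behind RNAi: a short
# interfering RNA can silence a message only if it base-pairs with,
# i.e. is the reverse complement of, a stretch of the target mRNA.
# All sequences here are invented for the example.

PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}  # Watson-Crick RNA pairing

def reverse_complement(rna: str) -> str:
    """Return the reverse complement of an RNA sequence."""
    return "".join(PAIR[base] for base in reversed(rna))

def is_silenced(mrna: str, antisense: str) -> bool:
    """True if the antisense strand can base-pair with part of the mRNA."""
    return reverse_complement(antisense) in mrna

mrna = "GGAAUGGCUUACCC"                      # a hypothetical message
antisense = reverse_complement("AUGGCUUAC")  # "GUAAGCCAU", targets the middle
print(is_silenced(mrna, antisense))          # True: pairing, translation blocked
print(is_silenced(mrna, "GUAAGCCAA"))        # False: mismatched, no binding
```

Real small interfering RNAs are only about 21-23 nucleotides long and the cell's own machinery does the actual pairing and message destruction; the point here is just the complementarity logic that both the antisense-RNA experiments of the 1990s and RNAi rely on.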

RNAi was quickly and widely heralded as a major breakthrough, and its potential in treatment of diseases like Huntington's or Parkinson's and so forth was obvious and exciting to researchers, pharma, and patients alike.

But, as with other forms of gene therapy, the realities of delivering the RNAi molecule to its target within cells are proving to be daunting.
The development of RNAi-based drugs has stalled as companies confront the challenge of delivering RNA molecules, which are notoriously fragile, to target cells in the human body, and then coaxing those cells to take up the RNA. "Getting these molecules exactly where we want them to go is a little more difficult than originally thought," says Michael French, chief executive of Marina Biotech, an RNAi company based in Bothell, Washington.
Of the dozen RNAi-based therapeutics in early clinical testing, most apply the RNA molecules directly to the target tissues, or aim to shut down the production of a protein in the liver, which takes up the RNA as it filters the blood. Several candidates also package the RNA within a lipid nanoparticle, a delivery vehicle that both protects the RNA and allows it to be shuttled across cell membranes. 
Alnylam claims to have a number of possible drugs in the proverbial (just-in-time-for-Christmas?) pipeline, and still enough money, even without Novartis, to do some testing, and other companies are holding on as well.  So apparently we aren't hearing the actual death knell yet, but it's starting to look a lot like Big Pharma's sobering, a decade or so ago, from its dance of enthusiasm for personalized genomic medicine, once the realities of complex disease set in.

There are several things here worthy of comment besides the discouraging news that a hopeful-sounding therapy might not work.  Differentiated organisms rely on internal integrity of their many cooperating parts -- organs, tissues within each organ, and complex interactions among components within each cell.  Stuff from the outside that is not brought in under controlled circumstances is not likely to be usefully incorporated into cellular machinery.  That's why immune systems of various kinds, often quite intricate, exist.  That's why cells highly control what crosses their membranes from the outside in, and get rid of what's no longer needed by kicking it out.

All this involves detection and cooperative action by large numbers of components that must be in the right place, on guard or ready for duty, at the right time.  It is no wonder that trying to sneak in something external that is supposed to coopt the cell for its own purposes is difficult!  Especially when this requires shanghaiing many different parts of the cell to get it done.  Whether the problem can be solved only the future can tell, but the companies' hyperbole about how RNAi would quickly lead to a revolution in medicine was typically overblown.

Years ago, David Stock, a very fine post-doc in our lab (now a prominent faculty member in beautiful Boulder, Colorado), spotted something strange in some work we were doing with embryonic mouse teeth.  We were interested in how teeth are patterned, and were working with a gene we had discovered called Dlx3.  David said he had sequenced some RNA -- purportedly messenger RNA coding for the Dlx3 gene -- but instead had found the reverse sequence.  This was, at the time, 'impossible', and we dismissed it as a laboratory artifact.  RNAi had not yet been discovered, and since the finding was so unexpected we never followed up on it (maybe we should have!).  That's how surprising RNAi seemed when it was shown to be real, widespread, ancient, and important in the biology of normal organisms.

But the idea, including the potential for practical application, was not new.  Even in our lab, we had tinkered with experimental uses of antisense RNA.  Various people had thought of introducing antisense RNA into cells to experimentally alter gene expression, and the idea that this could have therapeutic application was also expressed by many; we think some pharma explored it.  If genes are expressed via mRNA, which is translated into protein, then if you could inhibit that translation you could experimentally (or therapeutically) slow or stop the expression of the targeted gene.  The antisense RNA would bind to the corresponding messenger RNA in the cell, so that it couldn't be translated into protein.

Unfortunately, the method proved very difficult to use... that is, it basically didn't work.  Even in cell or organ culture (such as, in our lab, growing embryonic mouse tooth germs in culture), the cells simply did not like incoming RNA, and RNA is quite unstable to begin with.  We and many others tried this approach, but gave up on it.

Thus it was that investigators thought of doing what nature had been doing, systematically and precisely and unbeknownst to science, for many, many millions of years.  And unlike Big Pharma, nature has made it work!

The bottom line here (for science, citizens, and investors) is to keep the enthusiasm under control when making promises or pronouncements.  Stay closer to the truth.  And work harder before drawing conclusions.

Sooner or later technology and engineering often do succeed, and it's dangerous to bet against them.   But not everything works, and it's hard to know in advance what will.  Hopefully something as specifically targeted as RNAi will find useful biomedical or other application eventually, but without the hype.