
Friday, October 19, 2012

Social Malaria

My name is Daniel Parker and I am a PhD candidate at Penn State University in the Anthropology and Demography Departments. I consider myself a population scientist, and my research concerns a range of population scales, from the microscopic level to human metapopulations (populations of populations). Humans are my favorite study organism; however, I am also very interested in the microparasites and invertebrate vectors that plague humans. My dissertation research looks at human migration and malaria in Southeast Asia. Anne and Ken invited me to write a guest post on this subject, and this is it.
--------------------------------



Are there social determinants to malaria infection?

If you’re a social scientist you might be quick to say yes, but if you understand the biology of the disease the question may not make much sense to you.

A female anopheline mosquito feeds on someone carrying the sexual stage of the parasite. The blood meal gives her the nutrition necessary for laying her eggs. Assuming that the parasite has successfully undergone another transformation in the mosquito gut, and that the mosquito feeds on another person, she may transfer the infection. Mosquitoes probably don't care about the socio-economic status of the people on whom they feed (though they do seem to prefer people with stinky feet, and pregnant women). It is probably safe to say that, all other things being equal, mosquitoes really don't care who they bite. But are all other things equal? Not even close...

Let's consider our not-too-distant history with malaria in the U.S., since it was a plague of non-trivial proportions for a large swath of our nation. In the late 1800s a prominent scientist (one of the first to publicly suggest that malaria might come from mosquitoes) argued for having a giant screen placed around Washington D.C. (which was a swampy, malaria-infested city up until the mid-1900s).[1] Several of our presidents seem to have suffered from the disease. George Washington was troubled throughout much of his life by bouts of fever that were likely malaria. Presidents Monroe, Jackson, Lincoln, Grant, and Garfield also may have suffered from malaria. On a personal note, both of my grandparents contracted malaria growing up in modern-day Oklahoma (at that time it was still Indian Territory). My grandmother still drinks tonic water, which contains the antimalarial quinine, when she feels a headache or chills coming on. The following maps (I apologize for the poor resolution) come from a CDC webpage about the history of malaria in the U.S.

 
CDC Malaria History

A question, then, is: How were we so successful at eradicating malaria here?  Furthermore, why didn’t we do that everywhere else?!!!

A favorite story for many anti-environmentalists is that it was all, or mostly, because we used DDT. And beginning in the 1940s we did use the hell out of DDT. Apparently it was common practice for parents in the Southern U.S. to encourage their children to run behind DDT fog trucks as they drove down streets. (See this blog post for some related stories.) But some real problems with DDT are that it doesn't target only mosquitoes: it also kills the predators that would otherwise feed on mosquitoes and other pests, and it can cause all sorts of trouble (through bioaccumulation and biomagnification) as it works its way up through trophic levels. A few people noticed this could be a problem (see Silent Spring by Rachel Carson), and DDT was banned for most uses in the U.S. in 1972. (Soon after, there were global efforts to ban its use for agricultural purposes.)

But DDT wasn't the only thing that changed in the U.S. during and after the Second World War. The U.S. was just coming out of the Great Depression, and there were some interesting demographic things going on too. For example, lots of working-age men were away for the war, returned en masse, and then some major baby-making ensued. The economy was rebounding and suburbia was born, meaning that many of those baby-makers could afford houses (increasingly with air conditioning units) that wouldn't have been possible in previous years. There were also major public works projects aimed at building and improving drainage and sanitation systems.

During this same time period chloroquine, a major antimalarial drug with some important improvements on quinine, went into widespread use (mostly in the 1940s), but by the 1950s there were drug-resistant parasite strains in Southeast Asia and South America. This isn't a surprising occurrence. Antimalarials exert a heavy selective force on the parasites. Furthermore, those parasites undergo both clonal and sexual reproduction, meaning they can generate a lot of novel variants and strains. This has been the curse of antimalarials ever since: soon after they are rolled out, the parasites develop resistance, and resistant strains quickly spread globally.

Eradication of malaria in the U.S. occurred during a time when we were using heavy amounts of DDT, when we had access to relatively cheap antimalarials, and when we were undergoing some major socio-economic, structural, and demographic changes. However, DDT was becoming an issue of its own and wasn't working as well as it once did. The antimalarials weren't working as well as they once did either. Despite this, and despite the fact that mosquito vectors for malaria still exist in the U.S., we still don't have a real malaria problem. And while it is almost impossible to tease out all of the contributors to our current malaria-free status, I argue that the social and economic factors that changed during this period are the main reason why malaria is no longer a problem for us here in the U.S. If that weren't the case, we'd be back to using insecticides and antimalarials to try to eradicate it once again.

I'm certainly not the first to notice such things. A study of dengue fever (a mosquito-borne viral disease) in a South Texas/Northern Mexico community split by the international border (los dos Laredos) found that people without air conditioning units had more dengue infections than people with them.[2] Poorer people on the Mexican side of the border tended to leave their largely unscreened windows open, since they didn't have AC units to combat the sometimes brutal heat in that part of the world. This is a clear example of how socio-economic factors can influence mosquito-borne disease transmission, and it plays out in other ways in other environments and parts of the world.

In Southeast Asia, where I do malaria research, many if not most of the people afflicted with malaria are poor ethnic minorities and migrants who have been marginalized by governments and rival ethnic groups.[3] Constant, low-grade warfare in Myanmar (Burma) over the last half century has left many of that nation's residents in a state of public health crisis. And, since pathogens don't normally respect international borders, malaria remains a problem for neighboring countries such as Thailand (which is mostly malaria-free once you exclude its border regions). The story is the same along China's border with Myanmar in Yunnan Province. Mosquitoes don't target people because they're poor, disenfranchised ethnic minorities. But a lot of those ethnic minorities do happen to live in conditions that allow malaria to persist, and the mosquitoes that pick up malaria go on to feed on other potential human hosts, regardless of their economic status. This means that your neighbor's poverty can actually be bad for you too.

Arguably, most (not all!) public health advances can be largely attributed to socio-economic change (google: McKeown hypothesis). Increasing the standard of living for entire populations tends to improve the health of those populations too. In Asia, nations such as Taiwan, Japan, most of South Korea (excluding its border zone with North Korea), and Singapore are malaria-free. Obviously, it isn't always an easy task to raise a population's standard of living, but the benefits go far beyond putting some extra cash in people's pockets and letting them have nice homes. The benefits include decreases in diseases of many types, not just malaria, and that is good for everyone.

Consider, now, the amount of money that is dumped into attempts at creating new antimalarials or that ever-elusive malaria vaccine. Consider the amount of money that has been dumped into genome sequencing and countless other really expensive scientific endeavors. And then consider whether or not they actually hold much promise for eliminating or controlling malaria in places that are still plagued by this disease. Sure, sequencing can provide insight into the evolutionary dynamics associated with the emergence and spread of drug resistance (and that is really exciting). Some people believe that genomics will lead to personalized medicine, but even if it does, I am skeptical that it will ever trickle down to the people who most need medical attention. New antimalarials and new combinations of antimalarials may work for a while. But it seems pretty obvious to me that what actually works over the long term, regardless of parasite evolution and genetics, is what we did right here in the U.S. So, at the risk of jeopardizing my own future in malaria research, I've got to ask:

From a public health standpoint, is it possible that it's cheaper to attack socio-economic problems in malarious places than to have thousands and thousands of labs spending millions and millions of dollars on cures that always seem to be short-lived?

Wouldn't we all get more bang for our buck if we took an approach that doesn't only address one specific parasite?       

1. Charles, S. T. Albert F. A. King (1841-1914), an armchair scientist. Journal of the History of Medicine and Allied Sciences 24, 22–36 (1969).
2. Reiter, P. et al. Texas lifestyle limits transmission of dengue virus. Emerging Infectious Diseases 9, 86 (2003).
3. WHO. Strengthening malaria control for ethnic minorities in the Greater Mekong Subregion (2008).

Wednesday, April 11, 2012

The next challenge in malaria control - artemisinin-resistant parasites

Anopheles mosquito, Wikimedia Commons
Sometimes the news about malaria is good, as recently when deaths from malaria were reported to be decreasing, even if inexplicably, and sometimes it's not so good.  Last week saw two not-so-good stories -- one in The Lancet and one in Science -- about the increase in anti-malarial resistance in the Plasmodium falciparum parasite.  The Lancet paper documents this on the border between Thailand and Burma, and the Science paper reports the identification of the genome region in the parasite that is responsible for this newly developing resistance.  Because the parasites are becoming resistant to the best anti-malarial in use today, artemisinin, this is a serious issue.

The Science paper sets the stage:
Artemisinin-based combination therapies (ACTs) are the first-line treatment in nearly all malaria-endemic countries and are central to the current success of global efforts to control and eliminate Plasmodium falciparum malaria. Resistance to artemisinin (ART) in P. falciparum has been confirmed in Southeast Asia, raising concerns that it will spread to sub-Saharan Africa, following the path of chloroquine and anti-folate resistance. ART resistance results in reduced parasite clearance rates (CRs) after treatment...
As the BBC piece about this story says, "In 2009 researchers found that the most deadly species of malaria parasites, spread by mosquitoes, were becoming more resistant to these drugs in parts of western Cambodia."  This will make it much harder to control the disease in this area, never mind eradicate it.

Most malaria deaths occur in sub-Saharan Africa, and the spread of resistance to this part of the world would have disastrous public health consequences.  There is no therapy waiting in the wings to replace ACTs.  Whether the newly identified resistance arose because infected mosquitoes moved the 500 miles from the initial sites where resistance was found toward the border, or because the parasites spontaneously developed resistance on their own, is not known.  If the latter, this suggests that resistance is likely to arise de novo anywhere artemisinin is in use -- and that's everywhere malaria is found, as ACTs are the most effective treatment currently in use.

This is, of course, evolution in action: artificial selection in favor of resistant parasites.  It's artificial because we're controlling 'nature' and how it screens.  Normally, selection that's too strong for the reproductive power of the selected species can mean doom -- extinction.  Blasting the species with a lethal selective factor can do that.  In this case, we'd like to extinctify the parasite.  But selection in a rapidly reproducing species is difficult because if any resistance mutations exist, the organisms bearing them have a relative smorgasbord of food -- hosts not hosting other parasite individuals -- and this can give them an enormous selective advantage.  So the artificial selection against susceptibility is also similarly strong selection for resistance.
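
To put a number on how fast such a sweep can go, here is a minimal sketch (ours, not from either paper) of the textbook one-locus haploid selection recursion, with invented fitness values standing in for drug pressure.  Under these assumptions, even a resistance allele starting at one in a hundred thousand approaches fixation within roughly a dozen parasite generations.

# A minimal sketch of a resistance sweep under strong drug selection.
# The fitness values below are illustrative assumptions, not measured data.

def resistance_sweep(p0=1e-5, w_resistant=1.0, w_susceptible=0.2, generations=20):
    """Track the frequency of a resistance allele in a haploid parasite population.

    p0            -- starting frequency of the (assumed rare) resistance allele
    w_resistant   -- relative fitness of resistant parasites under treatment
    w_susceptible -- relative fitness of susceptible parasites under treatment
    """
    p = p0
    trajectory = [p]
    for _ in range(generations):
        mean_fitness = p * w_resistant + (1 - p) * w_susceptible
        p = p * w_resistant / mean_fitness  # standard one-locus haploid selection update
        trajectory.append(p)
    return trajectory

if __name__ == "__main__":
    for gen, p in enumerate(resistance_sweep()):
        if gen % 2 == 0:
            print(f"generation {gen:2d}: resistance allele frequency = {p:.5f}")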

Unfortunately, the development of resistance is inevitable when a strong selective force, such as a drug against an infectious agent, is in widespread use against a prolific target.  And it shows why it is short-sighted to claim that Rachel Carson was personally responsible for millions of deaths from malaria because her 1962 book, Silent Spring, pointed out the harmful environmental effects of DDT, an insecticide that effectively kills non-resistant mosquitoes.  If DDT's use against mosquitoes had been widespread and sustained, it would long ago have lost its efficacy.

The inevitable rise of resistance to treatment is why prevention or, even better, eradication is the preferred approach.  Unfortunately, developing a vaccine against malaria is proving to be a scientific challenge, and similar evolutionary considerations will apply; and eradication, while doable in theory, is a political and economic challenge, and could involve the same resistance phenomenon if not done right.  So the documented rise of drug-resistant P. falciparum on the Thai-Burma border is a severe blow.

We don't happen to know what, if any, intermediate strategies are being considered or tried.  Multiple moderate attacks, with different pesticides or against various aspects of the parasite's ecology or life cycle, might not wipe individuals out so quickly, but might 'confuse' them so that no resistance mechanism can arise, because those bearing a new mutation protecting against agent X would still be vulnerable to agent Y.  A complex ecology of modest selective factors could possibly reduce the parasite population to a point where it really did become lethally vulnerable to some wholesale assault.

Or would it be necessary to accept some low, but not zero, rate of infection to prevent major resistance?   Smallpox and polio would seem to suggest that real eradication is possible, but how typical that can be expected to be is unknown (to us).

Tuesday, December 20, 2011

Good news on malaria. Kind of.

It has been said that malaria has killed  more people than any other single cause in human history.  And that's not to mention the problem of ill health in people even if they don't die.  The reason for its toll seems to be the association of malaria with agriculture and settled populations that provide the right ecology for the disease.  Those populations were far larger than their ancestors', making vastly more people vulnerable than had ever lived in the past.  The evolution of genetic resistance has helped save many people, but not all that effectively.

But there's some good news from the World Health Organization.  The 2011 World Malaria Report states that the number of people who died from the disease has fallen 26 percent since 2000, and 5 percent since 2009, to on the order of 655,000 deaths last year, predominantly among children under the age of five.  And 2010 was the first year in which no locally contracted cases of malaria were reported in the European region.  The WHO says this is because of increased use of malaria control measures such as insecticide-treated mosquito nets, the increased availability of effective medicines, and the rise in the proportion of cases confirmed by testing prior to treatment, a widespread effort to reduce the spread of treatment-resistant disease.

However, WHO had previously set the goal of halving incidence by 2010 from the 2000 rate, and it fell by only an estimated 17 percent (figures from two dozen countries in Africa were not believed to be accurate enough to be more precise about the rates).  The WHO has also set the goal of reducing mortality to almost none by 2015, a goal that is unlikely to be met, in spite of the fact that bed nets and diagnostic tests are cheap.

A story at StatesmanJournal.com questions the wisdom of setting such grandiose goals.
Dr. Robert Newman, director of WHO's malaria program, said it is disappointing not to have reduced malaria by 50 percent by last year. But, he said, it was "truly significant progress" that the parasitic disease's death rates fell by more than one-third in Africa.
He described the current goal of cutting malaria deaths to "near zero" by the end of 2015 as "aspirational," but added that it wouldn't be accomplished unless every person at risk has access to a bed net and suspected cases are properly diagnosed and treated. Newman also said it would cost $6 billion a year — about three times more than the world currently spends — to be successful.
"It is unacceptable that people continue to die from malaria for lack of a $5 bed net, a 50 cent diagnostic test and a $1 anti-malarial treatment," Newman said in an email.
The risk is that because control is so dependent on continuing donations from both the public and private spheres, when goals aren't attained, donors may stop giving.  This is the history of disease control.  And the global financial crisis isn't helping.  In fact, the Global Fund to Fight AIDS, Tuberculosis and Malaria, the primary funding agency for public health programs, currently can't fund its next round of grants.  Its financial difficulties will mean less funding for bed nets and treatment programs.  So the danger of improved control becoming elusive is real.  Yet bureaucracies, in our current 'advertising age', seem unable to keep their fund-seeking hype, in the form of these unrealistic goals, under control--and/or our population has come to be responsive only to hype.  Either way, it's not a good way to be!

We are also intrigued that the WHO report doesn't mention the reduction in malaria incidence that can't be explained by bed nets or treatment, something we blogged about back in September.  We cited a paper published in the September issue of Malaria Journal by Meyrowitsch et al., which suggested:
...other factors not related to intervention could potentially have an impact on mosquito vectors, and thereby reduce transmission, which subsequently will result in reductions in number of infected cases. Among these factors are urbanization, changes in agricultural practices and land use, and economic development resulting in e.g. improved housing construction.
Or, they suggested, the decline might also be attributable to a decrease in the mosquito population due to changing rainfall patterns caused by climate change, an hypothesis tested by Meyrowitsch et al.  Year-to-year climate changes are going to be unpredictable, which means that their effect on mosquito populations, and thus malaria incidence and mortality, will be unpredictable as well. 

It's probably a mistake for an organization like the WHO to set unattainable goals, but it's also a mistake for it to let it seem as though it understands all the forces responsible for the epidemiology of a disease like malaria.  That epidemiology depends on a complex interplay of climatic, demographic, social, economic and biological factors, and is thus much more difficult to explain and predict than a simple reduction to bed nets and treatment would suggest -- and comparably more difficult when it comes to predicting rates of success or their timing.  Bed netting and treatment are crucial, of course, but it's a disservice to the public health infrastructure that is working hard to control the disease to make it seem simple.

But, on a positive note, at least no one's saying that if only we could sequence, or even genotype, everyone at risk we'd have the problem licked.  Maybe if we sequenced a few mosquito nets though....

Tuesday, November 24, 2009

Drinking your way to...health? oblivion? cancer? More on 'evidence-based' medicine

Perhaps the main concern of our blog is to understand biological causation. Our interests are general, but the issue comes up disproportionately in understanding the findings of medical research, because that is naturally what a large fraction of funding supports.

Well, last week the BBC reported a study from Spain which says that men's health is substantially improved by moderate daily drinking (sorry, women, maybe your turn will come with the next discovery). That is, 3 or so drinks a day reduce heart disease risk by 35 to even 50 percent -- a very substantial difference indeed!

But why did this story make the news? How many times does alcohol consumption have to be studied in regard to health risks? It is included in many, perhaps the vast majority, of epidemiological studies, and has been for many decades. How could such an effect have been missed? Indeed, how could it possibly be that we don't have solid, irrefutable knowledge by this time? Why would even a Euro cent have to be spent to study its effects any further?

This is highly relevant to the notion of 'evidence-based' medicine, because the recommendation about alcohol use bounces around like silly putty. The only thing that is uniformly agreed on is that too much is, well, too much (but the greatest heart disease risk reduction includes those Spanish guys downing 11+ drinks a day).

Here is a case in which culture is part of the nature of 'evidence'. In prudish America, alcohol is considered something so pleasurable as to be necessarily a sin and is studied intensely. It has been controversial whether hospitals should offer patients a dinner glass of wine. Officials dread to recommend drinking at all. So the 'evidence' required to make a recommendation depends on subjective value judgments. But even if one were to be a hard-nosed empiricist, we again ask how we could possibly not know the answers with indisputable rigor.

If this is the nature of evidence, then what evidence do we accept? Is it always the latest study? Why do we think that is any better than the next study to come down the pike tomorrow? And if so, why don't we ignore today's study? Why do we think former studies were wrong (some may be identifiably so, but most aren't obviously flawed)? Is some aggregate set of studies to be believed? Is it the study that lets business as usual be carried on, for whatever reason?

Our answer is that, in addition to the cultural side-issues, there are so many complex factors at play -- both causal, in regard to what alcohol does in the body and what the body does to it, and in regard to confounding factors -- that there is no simple 'truth', and hence it is unclear what counts as 'evidence.' Confounders are factors that may not be known or measured but that are highly correlated with the measured variable of interest (daily alcohol consumption), so that cause itself is hard to identify. Confounders may be causal on their own, or may causally interact with the factor under study.

For example, if the more you drink the more you smoke, or the less sleep you get, or the more sex you enjoy (which, since pleasurable, is a sin and must be harmful), these other factors, rather than the alcohol, could be what affect your heart disease risk directly.
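
Here is a minimal sketch of that logic (ours, with entirely invented numbers): a hypothetical 'healthy lifestyle' factor both makes moderate drinking more likely and independently lowers heart disease risk. Alcohol does nothing at all in this toy model, yet the drinkers come out looking protected by roughly a third.

# A minimal sketch of confounding: an unmeasured lifestyle factor creates an
# apparent protective effect of drinking even though alcohol has no effect here.
# All probabilities are invented for illustration.
import random

random.seed(1)

def simulate_person():
    healthy_lifestyle = random.random() < 0.5                       # unmeasured confounder
    drinks = random.random() < (0.7 if healthy_lifestyle else 0.3)  # confounder raises drinking
    risk = 0.10 if healthy_lifestyle else 0.30                      # confounder lowers risk; alcohol plays no role
    heart_disease = random.random() < risk
    return drinks, heart_disease

counts = {True: [0, 0], False: [0, 0]}  # drinks -> [heart disease cases, total people]
for _ in range(100_000):
    drinks, hd = simulate_person()
    counts[drinks][0] += hd
    counts[drinks][1] += 1

for drinks in (True, False):
    cases, total = counts[drinks]
    print(f"moderate drinker={drinks}: heart disease rate = {cases / total:.3f}")
# Drinkers show a markedly lower rate despite alcohol doing nothing in this model.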

Confounders are confoundedly difficult to identify or tease out. Their exposure patterns, and even their identity, can change with lifestyle changes. And then there are many potentially directly relevant variables, too. When do you drink? What do you drink? With or without olives or a twist of lemon (or salt on the rim)? How uniform is the daily consumption whose average is measured on a survey? Even if these things were known, future exposure patterns cannot be known, so today's evidence is really about yesterday's exposures, and the accuracy of future risk estimates based on this evidence is inherently unknowable.

Finally, if drinking is encouraged and heart disease is reduced, will this be good for public health? Or will it increase the number of, say, fatal accidents or violent crimes? Or is it -- well, it is -- a kind of Get-Cancer program? Why? Because if you don't get heart disease you'll live longer and that by itself increases your cancer risk. Not to mention the risk of Alzheimer's, hearing and vision problems, and a host of other older-age problems.

Perhaps the oldest advice in western medical history is from Hippocrates, about 400 BCE: "moderation in all things." In today's world, with our romantic notions about the powers of science, such advice is so non-specific and non-technical that it is considered a cop-out that is not 'evidence-based'. Maybe so, but it's still the best advice. That's because it implicitly includes the unmeasured and unpredictable risk-factor regimes to which people are exposed--and that is evidence.

"Just the facts" sounds like raw empiricism, the kind of rational empiricism our society values. But the 'facts' weave a tangled web.

Monday, June 15, 2009

The simple facts of life

Here we report on some reflections after our participation in a meeting on the 'New Genomics in Medicine and Public Health' held at the University of Bristol, UK. The talks were varied and interesting, including a talk about Mendel, reports of successful and unsuccessful genomewide association studies, plaudits for the UK Biobank, and discussion of clinical applications of genomic findings.

An important question these days, related to various methods in genetics and its role in medicine and public health, is how causally complex life really is--a question at the heart of most of the work reported at the meeting. Some normal traits, as well as diseases, clearly are genetic, in that their variation is caused by variation in a single gene (or a small number of genes, in a way that's well understood). But others are less clear-cut, as we've discussed here a number of times.

Vested interests of all sorts -- including venal and careerist interests, but also strongly held scientific conviction -- affect this area these days. One way to put the question is: "How causally complex is life?" Here the interest is mainly in genes, with environment getting some but usually rather casual or minimal attention, and the question boils down to how well phenotypes can be predicted from known or knowable genotypes. Sometimes this means using individual variation to predict individual disease risk--this is the major original purpose of GWAS (genome-wide association studies). Sometimes it means using natural variation as a tactic to identify genetic pathways that are responsible for some normal trait; the idea here is either that, when mutant, the pathway (or 'systems' or 'network') genes could lead to disease, and/or that these genes, once known, can be used as general preventive or therapeutic targets.

A commonly invoked motivation for human genetics work these days is that we will be able to implement 'personalized medicine': to predict disease risk or treatment response, or to suggest preventive measures, based on each person's genotype. Many companies are promoting this, and the molecular genetics community is hyping it very heavily (here there is no doubt of strong material vested interests, even if some actually believe it will work as advertised).

There are hundreds of diseases for which a, or often the, causative gene is known. Sickle cell anemia, Huntington disease, phenylketonuria (PKU), cystic fibrosis (CF), and muscular dystrophy (MD) are just a few examples. For these, predictive power already exists, though clinical application is not necessarily based on genotype. There are other examples where the latter is true, but these are generally rare in the population. Promising gene-based molecular therapy is in the works for CF, MD, and maybe even for some forms of inherited breast cancer (due to BRCA1/2 mutations). For these diseases, causation is clear even if there is substantial variation in risk, age of onset, or severity. Causation here is usually thought of as simple.

But for most common and/or chronic diseases, the story is far from clear as we've mentioned in various earlier posts (and as is widely discussed in the literature). These traits usually have substantial heritability (i.e., familial risk--if a close family member is affected, your chance of getting the same disease is greater than that of a random member of the population to which you belong). That means that, unless we are somehow badly understanding things, genetic variation plays a major role in risk (at least in current environments). Yet after many sophisticated, large studies, identified genes account for only a small fraction of the familial risk. The data suggest that many genes, say 'countless' genes, contribute substantial risk in aggregate, but individually their contribution is so small as to be unidentifiable by feasible (or cost-justifiable) studies. That would suggest that the disorder is caused by numerous combinations of huge numbers of individually weak, and rare, genetic variants. This is known classically as 'polygenic' inheritance, and if it's what's going on, things are very complex indeed.

Others, focused on the many clearly 'Mendelian' (single-gene) traits, simply don't believe life is that complicated. They suggest at least two other possibilities. One is that only a modest number of genes contribute, but most of the culpable alleles (sequence variants) are so rare and weak that genomewide association studies cannot pick them up. At such genes there may be one or two strong, common alleles with high penetrance (when present, the disease usually occurs), and so they can be identified in family studies or GWAS. Those variants account for only a small amount of the overall genetic contribution. But once the gene is known, we can sequence it in many patients and, lo and behold!, we find many other alleles that, some argue, contribute the rest of the observed familial risk.

There is some truth to this: we have done simulations showing that there can be high heritability but only a few contributing genes, for just this reason (heterogeneity of the frequencies and effects of existing alleles).
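
For a flavor of the argument, here is a toy sketch (not our actual simulations; every allele count, frequency, and effect size is invented): a trait influenced by only four genes, each harboring dozens of rare alleles of heterogeneous effect, plus environmental noise. The result is high heritability even though no single allele is common or strong enough for a GWAS to detect.

# A toy sketch: high heritability from only a few genes, each with many rare,
# heterogeneous alleles.  All numbers are invented for illustration.
import random

random.seed(42)

N_GENES = 4             # only a few contributing genes
ALLELES_PER_GENE = 50   # but many rare alleles at each
ENV_SD = 1.0            # standard deviation of environmental noise

# For each gene, draw rare allele frequencies and heterogeneous effect sizes.
genes = [
    [(random.uniform(0.001, 0.01), random.gauss(0, 2.0)) for _ in range(ALLELES_PER_GENE)]
    for _ in range(N_GENES)
]

def genetic_value():
    """Sum the effects of the rare alleles a diploid individual happens to carry."""
    g = 0.0
    for gene in genes:
        for freq, effect in gene:
            copies = (random.random() < freq) + (random.random() < freq)
            g += copies * effect
    return g

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

genetic, phenotype = [], []
for _ in range(20_000):
    g = genetic_value()
    genetic.append(g)
    phenotype.append(g + random.gauss(0, ENV_SD))

h2 = variance(genetic) / variance(phenotype)  # broad-sense heritability in this toy model
print(f"heritability from only {N_GENES} genes: {h2:.2f}")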

Another possibility is that a modest number of genes have variants at rare, but not very rare, frequencies. These will be identified by the panoply of existing methods, and once that's done it will be possible to genotype everyone at these genes, identify each person's individual set of variants, and determine risk. These are called 'oligogenic' effects, because the number of genes involved is small rather than huge. This view acknowledges the current problem, but assumes it will go away with enough data--and, importantly, that business as usual is the right approach.

Presentations at the Bristol meeting, including Ken's, showed clearly that causation spans a spectrum: aggregates of very rare genetic effects, a larger but still small fraction of oligogenic effects, major-gene effects, and polygenic effects.

The question is: what do we do if this is true? Where is the practical limit below which attempts to identify all the genes are futile or not worth the investment? And is it likely that current attempts will at least identify the bulk of genetic effects, and the networks involved, so that the disease can be eliminated in whole or at least in major part?

There is no consensus in this area. Some are more skeptical than others. Some argue that knowing the genetic contributions to disease may make diagnosis more specific (the doctor can test for which gene is contributing to a given case), even if genotype-based prediction remains a dream in the eyes of venture capitalists. Others, of a more computophilic bent, believe that if enough computers are turned loose on enough DNA sequence, the problem will, like infectious diseases, be solved. We won't be sick any more.

Time will tell where in the causal spectrum most traits lie. One thing we can be sure of, though: in this contentious area in which huge career, institutional, and commercial investments are at stake, in years to come, retrospective evaluation will always claim victory! Few will look back and say that we knew better than to make the level of investment in genetic causation that we are currently making.