
Monday, May 16, 2016

What do rising mortality rates tell us?

When I was a student at a school of public health in the late '70s, the focus was on chronic disease. This was when the health and disease establishment was full of the hubris of thinking they'd conquered infectious disease in the industrialized world, and that it was now heart disease, cancer and stroke that they had to figure out how to control.  Even genetics at the time was confined to a few 'Mendelian' (single gene) diseases, mainly rare and pediatric, and few even of these genes had been identified.

My field was Population Studies -- basically the demography of who gets sick and why, often with an emphasis on "SES" or socioeconomic status.  That is, the effect of education, income and occupation on health and disease.  My Master's thesis was on socioeconomic differentials in infant mortality, and my dissertation was a piece of a large study of the causes of death in the whole population of Laredo, Texas over 150 years, with a focus on cancers.  Death rates in the US, and the industrialized world in general were decreasing, even if ethnic and economic differentials in mortality persisted.

So, I was especially interested in the latest episode of the BBC Radio 4 program The Inquiry, "What's killing white American women?"  Accustomed to decades of rising life expectancy in every segment of the population, researchers paid close attention when they noticed that mortality rates were actually increasing among less educated, middle-aged American women.

A study published in PNAS in the fall of 2015 by two economists was the first to note that mortality in this segment of the population, among both men and women, was rising enough to affect mortality rates among middle-aged white Americans in general.  Mortality among non-Hispanic African Americans and Hispanics continued to fall.  If death rates among white Americans in this age group with no more than a high school education had remained at 1998 levels or continued to decline, half a million deaths would have been avoided -- more, says the study, than died in the AIDS epidemic through the middle of 2015.
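To make the arithmetic behind such a counterfactual concrete, here's a quick back-of-the-envelope sketch in Python.  The rates and population size below are invented for illustration, not taken from the paper; the actual estimate uses CDC vital statistics, year by year.

```python
# Sketch of a counterfactual excess-deaths calculation. All numbers are
# invented for illustration; the real analysis uses CDC vital statistics.

# Hypothetical all-cause deaths per 100,000 among the affected group,
# by year, rising instead of continuing to fall after 1998.
observed_rates = {1999: 382, 2004: 394, 2009: 405, 2013: 415}
baseline_rate = 381.5    # hypothetical 1998 rate, held flat as the counterfactual
population = 20_000_000  # rough size of the age group (assumption)

excess = sum((rate - baseline_rate) / 100_000 * population
             for rate in observed_rates.values())
print(f"Excess deaths in these {len(observed_rates)} years alone: {excess:,.0f}")
```

Summed over every year and the whole at-risk population, small rate differences like these add up to hundreds of thousands of deaths.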

What's going on?  The authors write, "Concurrent declines in self-reported health, mental health, and ability to work, increased reports of pain, and deteriorating measures of liver function all point to increasing midlife distress."  But how does this lead to death?  The most significant causes of mortality are "drug and alcohol poisonings, suicide, and chronic liver diseases and cirrhosis."  Causes associated with pain and distress.


Source: The New York Times

The Inquiry radio program examines in more detail why this group of Americans, and women in particular, is suffering disproportionately.  Women, they say, have been turning to riskier behaviors -- drinking, drug addiction and smoking -- at a higher rate than men.  And half of the increase in mortality is due to drugs, including prescription drugs, opioids in particular.  Here the program zeroes in on the history of opioid use during the last 10 years, a history that shows in stark relief that the effects of economic pressures on health and disease aren't due only to the income or occupation of the target or study population.

Opioids, prescribed as painkillers for the relief of moderate to severe pain, have been in clinical use since the early 1900's.  Until the late 1990's they were used only very briefly after major surgery or for patients with terminal illnesses, because the risk of addiction or overdose was considered too great for anyone else.  In the 1990's, however, Purdue Pharma, the maker of the painkiller OxyContin, began to lobby heavily for expanded use.  They convinced the powers-that-be that chronic pain was a widespread and serious enough problem that opioids should and could be safely used by far more patients than traditionally accepted.  (See this story for a description of how advertising and clever salesmanship pushed OxyContin onto center stage.)

Purdue's lobbying led to pain being classified as a 'vital sign', which is why any time you go into your doctor's office now you're asked whether you're suffering any pain.  Hospital funding became partially dependent on screening for and reducing pain scores in their patients.

Ten to twelve million Americans now take opioids chronically for pain.  Between 1999 and 2014, 250,000 Americans died of opioid overdoses.  According to The Inquiry, that's more than the number killed in motor vehicle accidents or by guns.  And it goes a long way toward explaining rising mortality rates among working-class middle-aged Americans.  And note that the rising mortality rate has nothing to do with genes.  It's basically the unforeseen consequence of greed.

Opioids are money-makers themselves, of course (see this Forbes story about the family behind Purdue Pharma, headlined "The OxyContin Clan: The $14 Billion Newcomer to Forbes 2015 List of Richest U.S. Families"; the drug has earned Purdue $35 billion since 1995), but pharmaceutical companies also make money selling drugs to treat the side effects of opioids: nausea, vomiting, drowsiness, constipation, and more.  Purdue just lost its fight against allowing generic versions of OxyContin on the market, which means both that cheaper versions of the drug will be available and that other pharmaceutical companies will have a vested interest in expanding its use.  Indeed, Purdue just won approval for use of the drug in 11-17 year olds.

In a rather perverse way, race plays a role in this epidemic, too, in this case a (statistically) protective one even though it has its roots in racial stereotyping.  Many physicians are less willing to prescribe opioids for African American or Hispanic patients because they fear the patient will become addicted, or that he or she will sell the drugs on the street.

"Social epidemiology" is a fairly new branch of the field, and it's based on the idea that there are social determinants of health beyond the usual individual-level measures of income, education and occupation.  Beyond socioeconomic status, to determinants measurable on the population-level instead; location, availability of healthy foods, medical care, child care, jobs, pollution levels, levels of neighborhood violence, and much more.

Obviously the opioid story reminds us that the profit motive is another factor that needs to be added to the causal mix.  Big Tobacco already taught us that profit can readily trump public health, and it's true of Big Pharma and opioids as well.  Having insinuated itself into hospitals, clinics and doctors' offices, Big Pharma may have relieved a lot of pain, but at great cost to public health.

Wednesday, June 18, 2014

Republican presidents are bad for our health?

Infant mortality in the US fluctuates with the political party of the president.  A paper published in the June 4 issue of the widely respected International Journal of Epidemiology ("US Infant Mortality and the President's Party," Rodriguez et al.) reports that between 1965 and 2010, infant mortality rates were 3% higher when a Republican was president than when the president was a Democrat.

Rodriguez et al. write that previous "political epidemiology" has been cross-national, attempting to determine the effect of policy on public health: welfare states vs. not, national health systems vs. not, higher vs. lower social or medical expenditures per capita, and so on.  Income inequality was found to be correlated with public health until the data were re-analyzed and additional variables controlled (Avendano, 2012), leading Avendano to question whether income inequality was in fact causal, as opposed to either spurious, or real but merely correlated with some unmeasured variable(s).  Further studies have attempted to determine which social factors are actually causal; some have suggested social expenditure and the generosity of family policies may be.  In this way, political policies may affect public health, but actual causality, rather than just correlation, is difficult to establish.

Rodriguez et al. posed the question, ‘Is the political party of the president of the USA associated with an important, objective and sensitive measure of population health, infant mortality?’ The idea is that the party in power drives macroeconomic policy, and macroeconomic policy influences the socioeconomic milieu, affecting variables that affect health and mortality.

Infant mortality has fallen dramatically since 1965, from a total of 24.7 per 1000 births to 6.1 in 2010.  In the graph below, the authors have removed the trend and show total infant, neonatal, and post-neonatal mortality, by president, for Blacks and Whites.  During Democratic administrations all rates are lower, across the board -- on average 3% lower.



Logged IMR, NMR, and PMR residual trends and presidential partisan regimes, 1965–2010; Source, Rodriguez et al., 2014
The statistical effects are essentially the same for Blacks and Whites, but infant mortality among Black infants is about two times higher than it is among Whites, so the absolute effect is larger.  The percentages may be small -- very small, in fact -- but their consistency does lend them credibility.
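The detrending step behind the figure is simple enough to sketch.  The following is my reconstruction of the logic with simulated data, not the authors' code or data: fit a log-linear trend to annual IMR, then compare the residuals by the sitting president's party.

```python
# Sketch of a detrend-and-compare analysis in the spirit of Rodriguez
# et al. Everything here is simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1965, 2011)

# Toy party indicator (1 = Republican); the real analysis uses actual terms.
republican = rng.integers(0, 2, years.size).astype(bool)

# Simulated IMR: exponential decline from ~24.7 to ~6.1, plus a built-in
# 3% Republican-year bump, plus noise.
log_trend = np.log(24.7) + (np.log(6.1) - np.log(24.7)) * (years - 1965) / 45
imr = np.exp(log_trend + 0.03 * republican + rng.normal(0, 0.02, years.size))

# Detrend: regress log(IMR) on year and keep the residuals.
slope, intercept = np.polyfit(years, np.log(imr), 1)
residuals = np.log(imr) - (slope * years + intercept)

# A ~3% mortality gap appears as a ~0.03 gap in mean log residuals.
gap = residuals[republican].mean() - residuals[~republican].mean()
print(f"Mean log-IMR residual gap (R minus D): {gap:+.3f}")
```

In the real data, that residual gap is the roughly 3% difference the authors report.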

Several things stand out about these results.  First, as the authors point out, the implementation of policies that might have had a direct effect on infant mortality -- Johnson's Great Society and Medicaid in the 1960's, or the expansion of Medicaid eligibility between 1979 and 1992 -- doesn't correlate with these periodic dips in IMRs.  That would have been the easy explanation.  Its absence means that the correlation with political party may have little or nothing to do with policy differences.

Or, Rodriguez et al. suggest, the correlation could reflect real, cyclical changes in socioeconomic conditions for mothers and infants, depending on national policy.  Or, differential availability of abortion, since high risk fetuses may be more likely to be aborted than fetuses at lower risk.  Or, it might reflect differing attitudes toward health disparities, with Democrats more likely than Republicans to use government to address them -- but what actual governmental policy is implemented, or eliminated, and thus responsible for the fluctuations is anybody's guess.

But there's something curious about these findings.  Neonatal mortality (NMR), death before 28 days of life, is generally attributed to conditions of pregnancy or congenital abnormalities, while post-neonatal mortality (PMR), death between 29 days and 1 year, is considered a reflection of socioeconomic conditions (though it also includes sudden infant death syndrome, which isn't correlated with socioeconomic status).  If that's so, neonatal mortality shouldn't fluctuate with policy differences or income inequality or whatever political or economic factors might be responsible for the trend reported here, and it should look quite different from post-neonatal mortality in this study.  But it doesn't.  One would also expect a more marked difference between Black neonatal and post-neonatal mortality, since health disparities are most reflected in Black infant deaths.

Equally problematic, presidential terms are short relative to the lag between implementing a new policy and its effects.  The study did allow for a one-year lag, but most health policies don't have immediate impact, so the incumbent's party may be irrelevant to what happens during his term; any effect should at least partly accrue to the successor's (sometimes the same, sometimes a different) party.  Do people's expectations, based on the current president's outlook, change their behavior in subtle ways?  That sounds plausible, and would have nothing to do with the policy change itself, but so many people are uninvolved in, uninterested in, or skeptical of the political system that this may not be much of an explanation.  And the pattern goes back before CNN and FOX imitation news organizations had much intentionally motivating influence on what people thought or were aware of.

Still, social and political epidemiology are interesting approaches to understanding the underlying causes of ill health and mortality.  These fields look at risk factors several steps removed from those generally considered causes of disease, so that AIDS or malaria, e.g., might be attributed to poverty rather than to HIV infection or the bite of a parasite-carrying mosquito, and legitimately so.  That is, the idea is that poverty increases one's risk of exposure to disease, and that if you eliminate poverty you eliminate risk.  The difficulty, of course, is that enacting public health policy that calls for eliminating poverty is a lot harder than distributing bed nets or clean needles.

And the problem of distinguishing cause from correlation is huge with such aggregate data.  It can be pretty much pure guesswork to pull causal factors from the social or political hat, as this paper suggests.  In fact, if something like -- to make something up -- differences in the completeness of vital statistics registration in Republican and Democratic years were responsible for this cyclical dip, it would look just the same as if the cause were changing policy.  The point is that we just don't know.  In addition, it's hard to avoid interpreting results from one's particular political point of view; maybe there's something interesting in this paper, maybe there's not, but it's very hard to know.

Wednesday, February 13, 2013

Polio deaths


In previous posts I’ve discussed the ways that diseases tend to persist in war-torn regions when they don’t elsewhere.  For example, vivax malaria persists along the border separating North and South Korea, and Myanmar continues to have malaria even after neighboring nations such as Thailand and China have been relatively successful at controlling it.  I would suggest that this relation between disease and war has a lot to do with the conditions in which people in war zones must live.  When people flee an area where war has broken out and cram into refugee camps, crowd- and hygiene-related disease is predictable.  Expect problems with intestinal and diarrheal diseases such as cholera.  Furthermore, while dodging bullets it’s probably hard to worry too much about drugs and bugs.  Public health efforts fall by the wayside.

Frankly, if people would just stop acting like jackasses we would probably have a lot less trouble in our world.  But that’s probably too much to ask for.

Today I’d like to talk specifically about poliomyelitis (polio).  Polio is a viral disease that spreads extremely easily via fecal-oral transmission.  Many who become infected with the virus never exhibit symptoms; however, children under 5 are disproportionately affected.  Symptoms can vary widely, the worst being paralysis and even death when the muscles necessary for breathing cease to function.  There is no cure for the disease.

Polio was historically a major problem even in Western nations such as the U.S.  By the late 1800s it was recognized as a severe and perhaps growing threat, and in the 1950s a successful vaccine was created.  Since then a LOT of progress has been made toward the control and even eradication of polio.  Today it really isn’t much of a problem for most of the world.  Or is it?

1963 poster from the U.S., courtesy of CDC and Mary Hilpertshauser
According to the World Health Organization, the only remaining nations with endemic polio are Nigeria, Afghanistan, and Pakistan.  These are places that frequently make the major news for reasons other than disease and probably aren’t on your radar when planning a family vacation (I apologize in advance if I’ve offended people in the tourism industry in any of these places).  But remember that most infected people (around 90%) are asymptomatic and that we have a highly mobile global population.  So you may not be worried about your kids getting polio while hanging out on a beach in Nigeria, but that doesn’t mean it can’t wind up in your neighborhood (though most U.S. schools require vaccination before attending).

Anyway, the reason that I’m writing about polio today is that it keeps popping up in the news, most recently with regard to Nigeria.  Nigeria has been having some trouble lately.  For example, the radical religious group Boko Haram has been terrorizing a good chunk of the country now for several years.  The words Boko Haram can be roughly translated from Hausa to English as “Western education is forbidden”.  These guys REALLY don’t like “modern” Western science, which can be a real problem for people who like to save lives by practicing that modern Western science.  If you haven’t heard of them, keep your ears open because I think you’re going to be seeing their name a lot in the future.  

To get back to the polio story, several religious leaders in Nigeria have warned their followers against becoming vaccinated against polio.  Some have claimed that it will make you infertile while others claim that it will actually infect you with the virus (though I suppose if you’re a Boko Haram follower then viruses don’t exist in the first place).

Obviously this is a public health problem, but things have recently taken a drastic turn for the worse.  In the last week, health care workers who administer the vaccine appear to have been targeted by gunmen; at least 9 were killed last Friday.  While no group has claimed responsibility for these attacks, they resemble previous attacks attributed to Boko Haram.  Perhaps even more disturbing, these killings come just a couple of months after similar killings in Pakistan, which also targeted polio vaccinators.  In Pakistan, the Taliban has suggested that health care workers are working with, or actually are, CIA operatives.  (And while this may seem crazy, remember that the CIA did pay a Pakistani doctor to help in the capture of Osama bin Laden.)

Clearly this is a pretty terrible situation.  Several organizations have hoped to actually eradicate polio, but clearly this won’t happen if the virus is allowed to persist in human populations.  (The potential for nonhuman hosts might also be an issue here, but getting it out of human populations is a noble cause anyway).  

Furthermore, what does this mean for future public health efforts in this region?  I suppose we can continue to bring outsiders in to vaccinate, but in my opinion it is always better to have local people maintaining public health efforts.  And when news of this type of violence spreads through the regions where it is occurring, I can’t help but think that it will in some ways shape the future of public health there.  The children who are growing up now, who are just beginning to form ideas about what they will do when they are older, face a world where being a medical provider can be a very dangerous thing.  Not everyone who goes into medical practice or public health does so out of a passionate desire to practice medicine.  Some do it because it’s a relatively OK job, and the added benefit of helping others gives it a sugar coating.  I worry that, for this type of person, a career giving vaccinations to local people will no longer be on the radar as a potential future occupation.  This would be a tragedy for public health, and therefore a tragedy for global health.




Monday, January 21, 2013

Disease driven poverty


In a few of my previous blog posts I’ve discussed the relationship between poverty and infectious disease.  Many of the most prevalent and severe infectious diseases in the world disproportionately affect the world’s poor.  Part of the reason is that the necessary resources aren’t available for tackling such diseases.  A lot of money is currently spent (wasted?) on designing biomedical ‘cures’ for diseases that persist in some places (usually economically poor places) while having already been eradicated in other places (usually economically rich).  It is my position that diseases such as malaria and tuberculosis remain major threats to some populations simply because of the way that financial resources are allocated in our extremely heterogeneous world.

But there is another angle to this story.

Not only does poverty lead to poor health, but poor health can also lead to poverty.  Quite frequently, that is, the arrows point both ways and the reality is a system in which there are “positive” feedback loops.  There is a growing literature on this type of system which is frequently referred to as a “poverty trap.”  Much of this literature has been in economics, where mathematical models have indicated that populations with infectious diseases are less able to ‘develop’ economically.

With economic development at the population level, e.g. growth in average income, we tend to see an increase in overall life expectancy at birth.  Most likely this indicates a relationship between improved health and increased wealth.  However, most of the models that actually examine this relationship are either ecological (looking at the entire population and frequently assuming homogeneity within it) or individual-level.  A few models have also looked at the community or household level.

One major problem with models of all types is that results can change when we change the scale of analysis.  The effects of poverty on disease, for example, might look different at the community level than at the province/state level, or when an entire nation is treated as a single population.  This is the problem known as the ecological fallacy (closely related to the modifiable areal unit problem), in which causal relationships at the population level don’t explain what is happening at, say, the individual level.
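A toy numerical example (with invented numbers) shows how this can happen: a relationship that holds across village averages can vanish, or even reverse, at the individual level.

```python
# Toy illustration (invented numbers) of the ecological fallacy:
# across villages, higher average income goes with less disease,
# yet within each village the individual-level pattern is reversed.
import numpy as np

rng = np.random.default_rng(1)
villages = []
for base_income, base_risk in [(1.0, 0.6), (2.0, 0.4), (3.0, 0.2)]:
    income = base_income + rng.normal(0, 0.1, 200)
    # Within a village, suppose higher earners travel more and get
    # bitten more, so individual risk *rises* with income.
    risk = base_risk + 0.3 * (income - base_income)
    villages.append((income, risk))

# Group-level correlation (village means): strongly negative.
means = np.array([(v[0].mean(), v[1].mean()) for v in villages])
print("across-village r:", np.corrcoef(means[:, 0], means[:, 1])[0, 1])

# Individual-level correlation within one village: positive.
inc, rsk = villages[0]
print("within-village r:", np.corrcoef(inc, rsk)[0, 1])
```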

Regardless of these problems and issues, there does appear to be a feedback loop between poverty and disease.

And this is an interesting thing from an anthropological view.  First off, poverty can mean different things to different people.  For example, to some, poverty means “not modern.”  Some indigenous groups actually choose to live in traditional houses rather than more modern ones.  In my opinion, “traditional” (or not modern) does not equal poverty, but it does get mistaken for poverty.  Poverty can also be relative: something that becomes apparent when you don’t have as much stuff as the people with whom you come into contact.  Clearly this may lead to psychological and sociological issues, but it might also explain gradients in outcomes (relative health?).  Finally, there is a type of poverty in which people are simply unable to put food on the table.  While there can be some argument about the effects of modernization and relative poverty, I would suggest that this final type of poverty is unambiguous and its negative effects are less debatable.

In poverty trap models we are frequently interested in identifying threshold levels below or above which different equilibria are reached.  (There are quite a few relatively new papers that are excellent references; see Bonds, Keenan, Rohani, & Sachs, 2010; Plucinski, Ngonghala, Getz, & Bonds, 2013; Wood, in press.)  Perhaps it is easiest to consider this at the unit of the household.

An already poor or marginally poor household in which the major breadwinner is afflicted by severe disease faces multiple problems.  Aside from the risk of infection for other household members, if that person is afflicted by malaria or dengue fever, he or she may not be able to work for several weeks.  A household on the margin of poverty may then slip just far enough behind in money and/or food to fall into true poverty.  Households that are already poor may fall even further.

And an important aspect of this situation is that not only is the person who is actually infected met with further troubles; the entire household is afflicted.  Furthermore, these effects tend to be heterogeneous even within households.  That is, poor households may see things such as greater infant mortality, and this effect may be exacerbated when there is a shortage of food or resources in the household.

In poverty trap models, there are usually equilibrium points in poverty levels that, once reached, are quite difficult to escape.  From Bonds et al. (2010):

What may be most important in these debates is therefore not whether the effect of health on poverty is more significant than that of poverty on health, but whether the combined effect is powerful enough to generate self-perpetuating patterns of development or the persistence of poverty.  

Children who grow up in households with frequent food shortages may not have the same physical or cognitive abilities as others.  Their immune systems, already taxed by years of exposure to pathogens, may not be able to fight off diseases as well as their healthy counterparts.  Therefore, when they begin their own households, they are already behind in the nutrition, health, and economic game.  And once again, when adults in the new household fall ill and cannot put food on the table, the children will be disproportionately affected.  

This cyclical pattern, where disease leads to poverty and poverty can lead to disproportionate disease, provides a perfect storm in which there aren’t enough resources to keep from getting sick, where once sick you are likely to fall further into poverty, and once you fall further into poverty you are even further away from pulling yourself and your family out.  This leads to the maintenance of poverty and sickness across the generations.
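A minimal simulation makes the threshold idea concrete.  This sketch is loosely inspired by, but far simpler than, the models in Bonds et al. (2010); all functional forms and parameters are invented for illustration.

```python
import math

# Minimal sketch of a poverty-trap feedback loop: sickness cuts earnings,
# and low income raises sickness. Loosely inspired by (but far simpler
# than) Bonds et al. (2010); all parameters are invented.

def sick_fraction(income):
    # Disease burden falls steeply once income clears a threshold (~1.0).
    return 1.0 / (1.0 + math.exp(4.0 * (income - 1.0)))

def step(income):
    # Healthy households grow income (logistically, saturating near 5);
    # sickness eats into earnings.
    growth = 0.1 * (1.0 - income / 5.0)
    return income * (1.0 + growth - 0.4 * sick_fraction(income))

for start in (0.8, 1.6):  # just below vs. just above the trap threshold
    income = start
    for _ in range(300):
        income = step(income)
    print(f"start={start}: long-run income = {income:.2f}")
```

Two households facing identical rules end up at different equilibria purely because of where they started: below the threshold, sickness eats earnings faster than the household can replace them, and income collapses; above it, income grows until disease burden is negligible.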

And this story could get even more complicated when we consider some evolutionary implications.  For example, populations that have historically been afflicted with malaria also tend to have high frequencies of blood disorders that seem to protect against malaria.  Almost all of these disorders are harmful in some cases (for example, in homozygotes).  The evolutionary history of disease can therefore lead to a situation where some individuals are plagued with sickness from the very beginning of life.  Paradoxically, under heavy malaria burden, some people with these disorders will apparently be healthier than their unaffected counterparts.  I don’t know whether the side effects of these disorders are enough to create poverty traps on their own.

Finally, in an age when many scientists appear to be looking “for the gene for (fill in your favorite thing to study)”, poverty traps and households are an interesting thing to ponder.  Poverty and the apparent predisposition of household members toward succumbing to disease can look like a genetic effect.  If it runs in families, and certainly both poverty and sickness do, then it can look a whole lot like there is a genetic reason for it.  I think that poverty trap models are therefore a nice illustration of how we could arrive at the same phenotype (poverty and sickness) from purely socio-economic and ecological factors.  

REFERENCES:

Bonds, M. H., Keenan, D. C., Rohani, P., & Sachs, J. D. (2010). Poverty trap formed by the ecology of infectious diseases. Proceedings of the Royal Society B: Biological Sciences, 277(1685), 1185–92. doi:10.1098/rspb.2009.1778

Plucinski, M. M., Ngonghala, C. N., Getz, W. M., & Bonds, M. H. (2013). Clusters of poverty and disease emerge from feedbacks on an epidemiological network. Journal of the Royal Society Interface, 10(80). doi:10.1098/rsif.2012.0656

Wood, J. (in press). The Biodemography of Subsistence Farming: Population, Food and Family. Cambridge University Press.

Monday, December 10, 2012

Adding the human context to disease ecology


Sometimes there exist subregions where, for a variety of reasons, diseases just tend to lurk and persist regardless of what is occurring in the surrounding areas.  An excellent example can be seen in Southeast Asia, where malarious pockets are surrounded by malaria-free regions.  These places tend to be hilly, forested regions and international borders.  For example, both Thailand and China have been relatively successful at eradicating malaria from much of their territory while continuing to have a malaria problem along their borders with Myanmar (also known as Burma).

So what is it about these places that makes malaria eradication so difficult, at least on the Chinese or Thai side of the border?  Well, the simple answer is that it’s complicated.

This past summer I made a trip to the border between China and Myanmar to visit one of the field sites we are using in our malaria research.  My trip to Nabang, China began in Kunming (the capital of Yunnan Province), where I caught a plane to Tengchong and then a five-hour ride through steep mountains to the relatively small border town of Nabang.

Directly across the China-Myanmar border from Nabang is a town called Laiza which at one point was a tourist attraction for wealthy Chinese, offering a legal gambling outlet.  Today the fancy new gambling halls are still up and running in Laiza and there are several relatively nice hotels in Nabang, both mostly empty and waiting for the tourists to come.  Lining the streets of much of Nabang are brand new, fancy looking street lights, none of which have ever been turned on.  Nabang has the feel of a Wild West gold mining town after the gold is all gone.

So what happened to this place?  Basically, war happened.

Downtown Nabang, with Laiza in the distance.
Laiza is in Kachin State, named for the indigenous group (the Kachin) that has historically lived in this region of northern Myanmar.  The Kachin are known for their fighting skill, were our allies in this region during WWII, and have historically been at odds with the ruling national government.  In 2011, after a 17-year truce, civil war broke out between the Kachin and the government military.  Unless you’re familiar with this region, though, you’ve probably never heard of this war.  This isn’t the shock-and-awe war we all saw when the U.S. went to war with Iraq, or even the heavy shelling currently occurring in Syria.  Villages get burned in the middle of the night; women and children are kidnapped, raped and forced into labor; and military camps are occasionally ambushed.  It is a low-grade, slow-and-steady war that claws at the psyche of the people living in this area.

The KIA (Kachin Independence Army) is currently located in the Laiza Hotel, right in the middle of what was once a tourist retreat.  

What does this mean for the human ecology of the place?  
For one, it means that many people are clustering near the border in makeshift camps for ‘internally displaced persons’ (what you’re called when you’re a refugee in your own nation).  When I was there, the people living in the camps were working together in preparation for many more people to arrive.

Villagers preparing for new people to arrive.
It also means that the population has a very unusual composition, made up mostly of women, children, and the elderly.  Working-aged men were mostly absent, except for the occasional young adults who would zip by on their motorbikes, wearing camouflage and carrying AK-47s.  Instead of helping with household chores and working in the nearby agricultural fields, the men are moving covertly through the mountainous, forested landscape, engaged in warfare with the Burmese military.

A KIA soldier riding through town.
And what does this mean for the disease ecology of the place?
By disrupting the everyday lives of populations, war primes conditions for disease.  Close quarters mean that easily transmissible diseases will almost certainly move through the population rather than be confined to individual households.  Diarrheal diseases that are common in children may become a problem for everyone.  The same is true for airborne diseases such as influenza and tuberculosis.  For already stressed and sometimes malnourished people, this is an added threat.

Furthermore, vector-borne diseases are an increased threat.  Newly cleared fields easily form puddles when it rains, making excellent breeding grounds for mosquitoes.  Dengue fever could easily thrive in these camps.  And for a variety of reasons, malaria is already a growing problem.

Some preliminary research in this area indicates that working-aged males disproportionately acquire falciparum malaria infections.  Given that most of these men are living in conditions of war -- moving through the jungle at night, sleeping outdoors, and almost constantly exposed to a range of mosquito vectors -- perhaps this is no surprise.

And what happens when they are too sick to fight?  One could imagine that they then come home, with a thriving population of parasites swimming in their blood.  A real potential danger, and one that my research is particularly aimed at, is that these individuals could then pass the disease on to their families and neighbors.

Also, given that artemisinins have been available in Myanmar and China for decades, and that there is clearly no regulation of their use in Myanmar (especially in this part of the nation), conditions are also primed for drug-resistant parasites.  We are already seeing this on the Thai-Myanmar border; it isn’t a stretch to expect to find it in this region next.

And if these pretty terrible conditions weren’t bad enough, another disease has predominated in this area for some time: it is a hotspot for HIV/AIDS, because there are thriving sex and opium trades.  (Note: the sex and opium trades bring in problems of their own, even outside of infectious disease.)  Much of the world has long been privy to knowledge about where HIV/AIDS comes from, how it is spread, and how to keep from becoming infected.  However, Myanmar was largely closed to the world until the last several years, meaning that such educational campaigns are unlikely to have reached many outside the wealthy, urban, or elite of this nation.

What does all of this mean for public health efforts?
Clearly this isn’t an easy place to work.  It isn’t even an easy place to get to.  Our collaborators at Kunming Medical University have sent several teams of graduate students to the field sites here to collect demographic and epidemiological data.  Many of them don’t stay for long.  It is a depressing environment, there are frequent earthquakes, the heat is almost unbearable, and AC units don’t work when the electricity is out.  Oh… and it’s a war zone.  All of this makes data collection difficult, and given the fluctuating population size and composition, it makes epidemiological modeling difficult too.  This means it is really hard to fully understand the disease situation.

The hospital in Nabang, China has been badly damaged by earthquakes.  Here it is being repaired and expanded.  This hospital sees a lot of malaria cases as well as wounded soldiers from the fighting.
Not all malaria-endemic places in Southeast Asia are the same, but there certainly are some commonalities.  Another malaria-endemic border zone, along the Thai-Myanmar border, was until very recently also a site of ethnic tensions and occasional war.  The Thai-Cambodian border zone was a stronghold for the Khmer Rouge up until the 1990s.  It was also a site of heavy, informal mining efforts, which led to living conditions that looked very similar to the refugee camps I’ve seen in northern Myanmar on the Chinese border.  Perhaps a common theme across these regions is the disruption of ‘normal’ human ecology.  These are places where people haven’t had the chance to settle in, to develop their homes and villages, and to fix problems associated with sanitation and hygiene.

Perhaps it is too much to think that some of these regions will ever be malaria free.  But I can’t help thinking that conditions would be much more controllable, perhaps even reaching a low, maximally acceptable background level of malaria, if only the socio-economic and political conditions weren’t as they are.  And while it’s easy for me to say that if we could just stop warfare and fix poverty we’d have a lot more success at controlling disease, actually doing these things is nowhere near easy.  However, these challenges can’t even begin to be approached until there is widespread realization that these are in fact the underlying, upstream conditions that lead to bad human health outcomes.

Main road through one of the study villages near Laiza, Myanmar.  

Friday, October 19, 2012

Social Malaria

My name is Daniel Parker and I am a PhD candidate at Penn State University in the Anthropology and Demography Departments.  I consider myself to be a population scientist and my research concerns a range of population scales, from the microscopic level to human metapopulations (populations of populations).  Humans are my favorite study organism; however I am also very interested in the microparasites and invertebrate vectors that plague humans.  My dissertation research looks at human migration and malaria in Southeast Asia.  Anne and Ken invited me to write a guest post on this subject, and this is it.
--------------------------------



Are there social determinants to malaria infection?

If you’re a social scientist you might be quick to say yes, but if you understand the biology of the disease the question may not make much sense to you.

A female anopheline mosquito feeds on someone carrying the sexual stage of the parasite.  The blood meal gives her the nutrition necessary for laying her eggs.  Assuming the parasite successfully undergoes another transformation in the mosquito’s gut, and that the mosquito feeds on another person, she may transfer the infection.  Mosquitoes probably don’t care about the socio-economic status of the people on whom they feed (though they do seem to prefer people with stinky feet, and pregnant women).  It is probably safe to say that, all other things being equal, mosquitoes really don’t care whom they bite.  But are all other things equal?  Not even close…
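One way to see why “all other things” matter so much is the classic Ross-Macdonald expression for malaria’s basic reproduction number, R0.  Here is a small sketch; the parameter values are invented for illustration, not measured from any real setting.

```python
import math

# Classic Ross-Macdonald R0 for malaria. Parameter values are invented
# for illustration only.

def r0(m, a, b, c, mu, n, r):
    """m: mosquitoes per person, a: bites per mosquito per day,
    b, c: human<->mosquito transmission probabilities per bite,
    mu: mosquito death rate, n: parasite incubation days in the mosquito,
    r: human recovery rate."""
    return (m * a**2 * b * c * math.exp(-mu * n)) / (r * mu)

base = dict(m=10, a=0.3, b=0.5, c=0.5, mu=0.12, n=10, r=0.01)
print("R0:", round(r0(**base), 1))

# Screened houses roughly halve the biting rate a; because a enters
# squared (each transmission cycle needs two bites), R0 drops fourfold.
screened = {**base, "a": 0.15}
print("R0 with halved biting rate:", round(r0(**screened), 1))
```

Because the biting rate enters squared, anything that cuts human-mosquito contact -- screens, closed windows, solid housing -- pays off disproportionately.  Keep that in mind for the story below.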

Let’s consider our not-too-distant history with malaria in the U.S., since it was a plague of non-trivial proportions across a large swath of our nation.  During the 1860s a prominent scientist (one of the first to publicly suggest that malaria may come from mosquitoes) argued for having a giant screen placed around Washington D.C. (which was a swampy, malaria-infested city up until the mid 1900s).[1]  Several of our presidents seem to have suffered from the disease.  George Washington suffered throughout much of his life with bouts of fever that were likely malaria.  Presidents Monroe, Jackson, Lincoln, Grant, and Garfield also may have suffered from malaria.  On a personal note, both of my grandparents contracted malaria growing up in modern-day Oklahoma (at that time it was still Indian Territory).  My grandmother still drinks tonic water, which contains the antimalarial quinine, when she feels a headache or chills coming on.  The following maps (I apologize for the poor resolution) come from a CDC webpage about the history of malaria in the U.S.

 
CDC Malaria History

A question, then, is: How were we so successful at eradicating malaria here?  Furthermore, why didn’t we do that everywhere else?!!!

A favorite story for many anti-environmentalists is that it was all, or mostly, because we used DDT.  And beginning in the 1940s we did use the hell out of DDT.  Apparently it was common practice for parents in the Southern U.S. to encourage their children to run behind DDT fog trucks as they drove down streets.  (See this blog post for some related stories.)  But some real problems with DDT are that it doesn’t just target mosquitoes: it probably also kills the predators that would feed on mosquitoes and other pests, and it can cause all sorts of trouble (via bioaccumulation and/or biomagnification) as it works its way up trophic levels.  A few people noticed this could be a problem (see Silent Spring by Rachel Carson), and DDT use was banned in the U.S. in 1972.  (Soon after, there were global efforts at banning its use for agricultural purposes.)

But DDT wasn’t the only thing that changed in the U.S. around the Second World War.  The U.S. was just coming out of the Great Depression, and there were some interesting demographic things going on too.  For example, lots of working-aged males were away at war, returned en masse, and then some major baby-making ensued.  The economy was rebounding and suburbia was born, meaning that many of those baby-makers could afford houses (increasingly with air conditioning units) that wouldn’t have been possible in previous years.  There were major public works projects aimed at building and improving drainage and sanitation systems.

During this same period chloroquine, a major antimalarial drug with some important improvements on quinine, went into widespread use (mostly in the 1940s), but by the 1950s there were drug-resistant parasite strains in Southeast Asia and South America.  This isn’t a surprising occurrence: antimalarials exert a heavy selective force on the parasites.  Furthermore, those parasites undergo both clonal and sexual reproduction, meaning they can potentially generate a lot of novel variants and strains.  This has been the curse of antimalarials ever since: soon after they are rolled out, the parasites develop resistance, and resistant strains quickly spread globally.
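How fast strong drug pressure can drive resistance is easy to see with a toy haploid selection model.  The fitness values below are invented, and real parasite population genetics (with its clonal and sexual phases) is messier, but the qualitative point holds.

```python
# Toy haploid selection sketch (invented fitnesses) showing how fast a
# drug-resistance allele can sweep once antimalarials impose strong
# selection on the parasite population.
p = 1e-6                   # initial resistant fraction (assumption)
w_res, w_sens = 1.0, 0.7   # relative fitness under drug pressure (invented)

gen = 0
while p < 0.99:
    # Standard haploid selection update: resistant share of next generation.
    p = p * w_res / (p * w_res + (1 - p) * w_sens)
    gen += 1
print(f"Resistance goes from one in a million to 99% in {gen} generations")
```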

Eradication of malaria in the U.S. occurred during a time when we were using heavy amounts of DDT, when we had access to relatively cheap antimalarials, and when we were undergoing major socio-economic, structural, and demographic changes.  However, DDT was becoming an issue of its own and wasn’t working as well as it once did.  The antimalarials weren’t working as well as they once did either.  Despite this, and despite the fact that mosquito vectors for malaria still exist in the U.S., we still don’t have a real malaria problem.  And while it is almost impossible to tease out all of the contributors to our current malaria-free status, I argue that the social and economic factors that changed during this period are the main reason why malaria is no longer a problem for us here in the U.S.  If that weren’t the case, we’d be back to using insecticides and antimalarials to try to eradicate it once again.

I’m certainly not the first to notice such things.  A study of dengue fever (a mosquito-borne viral disease) in a southern Texas/northern Mexico town split by the international border (los dos Laredos) found that people without air conditioning units had more dengue infections than people with them.[2]  Poor people, living on the Mexican side of the border, tended to leave their largely unscreened windows open, since they didn’t have AC units to combat the sometimes brutal heat in that part of the world.  This is a clear example of how socio-economic factors can influence mosquito-borne disease transmission, but it plays out in other ways in other environments and parts of the world.

In Southeast Asia, where I do malaria research, many if not most of the people afflicted with malaria are poor ethnic minorities and migrants who have been marginalized by governments and rival ethnic groups.[3]  Constant, low-grade warfare in Myanmar (Burma) over the last half century has left many of that nation’s residents in a state of public health crisis.  And, since pathogens don’t normally respect international borders, malaria remains a problem for neighboring countries such as Thailand (which is mostly malaria free if you exclude its border regions).  The story is the same along China’s border with Myanmar in Yunnan Province.  Mosquitoes don’t target people because they’re poor, disenfranchised ethnic minorities.  But a lot of those ethnic minorities do happen to live in conditions that allow malaria to persist, and the mosquitoes that pick up malaria go on to feed on other potential human hosts, regardless of their economic status.  This means that your neighbor’s poverty can actually be bad for you too.

Arguably, most (not all!) public health advances can be largely attributed to socio-economic change (google: McKeown hypothesis).  Increasing the standard of living for entire populations tends to increase the health of populations too.  In Asia, nations such as Taiwan, Japan, most of South Korea (excluding its border zone with North Korea), and Singapore are malaria free.  Obviously, it isn’t always an easy task to increase the standard of living for a population, but the benefits go far beyond putting some extra cash in peoples’ pockets and letting them have nice homes.  The benefits include decreases in diseases of many types, not just malaria, and that is good for everyone.

Consider, now, the amount of money that is dumped into attempts at creating new antimalarials or that ever-elusive malaria vaccine.  Consider the amount of money that has been dumped into genome sequencing and countless other really expensive scientific endeavors.  And then consider whether or not they actually hold much promise for eliminating or controlling malaria in places that are still plagued by this disease.  Sure, sequencing can provide insight into the evolutionary dynamics associated with the emergence and spread of drug resistance (and that is really exciting).  Some people believe that genomics will lead to personalized medicine, but even if that is true, I am skeptical that it will ever trickle down to the people who most need medical attention.  New antimalarials and new combinations of antimalarials may work for a while.  But it seems pretty obvious to me that what actually works over the long term, regardless of parasite evolution and genetics, is what we did right here in the U.S.  So, at the risk of jeopardizing my own future in malaria research, I’ve got to ask:

From a public health standpoint, is it possible that it’s cheaper to attack socio-economic problems in malarious places rather than to have thousands and thousands of labs spending millions and millions of dollars for cures that seem to always be short lived?  

Wouldn't we all get more bang for our buck if we took an approach that doesn't only address one specific parasite?       

1. Charles, S. T. Albert F. A. King (1841-1914), an armchair scientist. Journal of the History of Medicine and Allied Sciences 24, 22–36 (1969).
2. Reiter, P. et al. Texas lifestyle limits transmission of dengue virus. Emerging Infectious Diseases 9, 86 (2003).
3. WHO. Strengthening malaria control for ethnic minorities in the Greater Mekong Subregion (2011).