Tuesday, March 11, 2014

Watchdog journalism vs science spin

Recently we got into a bit of trouble by criticizing the one and only Alan Alda's sincere effort to help scientists learn how to communicate understandably with the general public.  On its surface we didn't disagree with the idea that Alda might have something to teach scientists.  Instead, our point was that scientists already spend too much time and energy 'explaining' their work to the press and the public--and rather than clear public education on technical subjects, too often that means hype and lobbying so the public purse will open wide to them again (e.g., 'look at my major finding!...more research is urgently needed').

Indeed, this is the first reaction of universities and their PR offices when work by their faculty gets published, and it is a near reflex among scientists themselves these days.  And, it works.  We have not invented the issue as something to write superficially critical blog posts about: in the extreme, the likes of the Washington Post until recently simply published university press releases as sent to them, though when this became known they acknowledged the problem and said they would stop the practice.  What happened to journalists as watchdogs?

Watchdog at open door (with labrys). Roman mosaic from the Casa di Paquius Proculus (I 7, 1) in Pompeii. Wikimedia

Yes, of course better science public education would be welcome, since it is true that the public pays for most of our research games and it's good for people to understand the world we all live in. But the problem we raised is not the desirability of an informed public; it's that if scientists are trained by an actor to be even slicker in what they say than they already are, the lobbying aspect will only intensify. We hasten to add that there are excellent science reporters out there, who write excellent stories and don't just basically echo what is published or what an interviewed author says.

Our quibble is not with them.  But our post struck some readers as being ultra stodgy (Twitter even taught us the French word for 'grouch'), as we were taken to be critiquing science journalists and advocating that scientists stay as boring and remote as possible, rather than being trained to be better at explaining their work to the public.  Fine.  We took our lumps.

Then, on Friday we discussed a WHO report on studies purportedly showing the health effects of consuming sugar.  We tried to look closely at that paper and to show what's under the hood of the public story as reported in the science news media.  It was not that the story is an emperor with no clothes, it's that it's all clothes and no emperor inside--but it wasn't reported that way.

We could ask why it should be people like us who give a story such as that one the scrutiny it deserves. The answer is that too much science journalism these days is not critical, in the proper 'evaluative' sense of the term.  We have journalists who repeat stories announced by scientists without examining the stories and calling out scientists on their hype.  We see the same thing in politics, economics, war reporting and so on, as well; too often, journalism means being a mouthpiece for the powers-that-be.  It isn't Alan Alda and acting training that we need.  What we need are more reporters who do their job.

So the fact that it falls to those of us who actually try to understand the science stories being told every day, or the science projects being proposed, to do this dissection explains why we didn't like the idea of making scientists even slicker in front of the camera.

The problem could be fixed in a more proper way.  Let the scientists do their job, and let the reporters and interested personalities like Alan Alda present the results in a smooth way--that's what they're good at, after all.  But we believe that better science reporting could be done if more reporters remembered that the main role of journalism in a free society is to pick apart and ponder the stories being spun to them, and to report what the story really is (or isn't), safeguarding the use of the public purse, not just carrying the baton as if they were running the anchor leg in a relay race.  And for junk stories based on awful design and over-interpretation, perhaps the most important journalistic responsibility is not to report them in the first place.

Monday, March 10, 2014

Chewing the antibiotics-make-us-fat story

A piece in the New York Times yesterday, "The Fat Drug" by Pagan Kennedy, asks whether antibiotics are making us fat.  As far back as 1948 people knew that antibiotics could make animals grow faster, bigger and fatter. Even after more than a half-century, it still isn't wholly clear why the 'sub-therapeutic' doses given to animals cause them to gain weight, though there is some thought that perhaps they kill off the animals' gut flora, thus allowing all calories consumed to go directly into growth of the host. But Kennedy (and others, including an August 2013 paper in Nature) suggests that the same thing could be happening to us.

Some ethically questionable studies were done way back in the 1950s to test this, Kennedy reports.  Schoolchildren in Guatemala were fed antibiotics every day for a year, and a doctor in Florida carried out the same experiment on a group of mentally disabled children; they did indeed grow larger.

This isn't a surprise really.  But the piece also raises a few questions that it doesn't answer, because few of us are consuming antibiotics daily.  First, could there possibly be a direct link between consumption of antibiotic-laced meat and obesity?  About this Kennedy says,
Of course, while farm animals often eat a significant dose of antibiotics in food, the situation is different for human beings. By the time most meat reaches our table, it contains little or no antibiotics. So we receive our greatest exposure in the pills we take, rather than the food we eat.
Antibiotics in meat
But I'm not sure what the evidence is for this.  Animals are eating 'sub-therapeutic doses' -- in fact, so are we, given that residual antibiotics are often found in meat.  I've tried to find evidence as to what kind of effect the amount we consume in meat might have, but have found nothing, though it's possible that cooking meat before we eat it denatures the antibiotics so they don't do anything.  So, how little we consume, or how much it might take to produce the same effects in us as in food animals, isn't clear, at least to me.  (The effects of antibiotic usage in food animals on antibiotic resistance in people are another matter, and fairly clear, but not part of this particular issue.)

So here's a very crude test of the effect of eating meat laced with antibiotics.  Consider the following:  Americans now eat more meat per capita than anyone but Luxembourgers.

Meat consumption, 2007; UN Food and Agriculture Organization, in The Economist

We use more antibiotics by kilogram of meat produced than any other country.

Antibiotics by kg of meat produced; Source


And the prevalence of obesity in the US is higher than anywhere in the world.


Obesity rates, 2009; The Economist
So, the US has the highest rates of obesity, and the highest rates of antibiotic usage in meat, and nearly the highest meat consumption rates.  Does this mean, then, that the antibiotics we're consuming in our meat are causing us to become obese?  It sure looks like it.

Until you look at obesity rates in the countries just below the US in the amount of antibiotics used in animals -- Greece, Netherlands, France, Spain -- which have obesity rates all over the map.  And while the sets of countries across these charts don't overlap perfectly, the countries with the highest obesity rates that do appear on the antibiotic usage chart are not among the heaviest users.  So this crude attempt to answer the question of whether antibiotics in the meat we consume are making us fat seems to point to no, though certainly not definitively.
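For what it's worth, this kind of eyeball comparison across charts can be made slightly more formal as a rank correlation across countries.  Here is a minimal sketch, assuming Python with scipy, and using made-up placeholder numbers rather than the actual values from the charts above:

```python
# A sketch of the 'crude test' above: rank-correlate country-level antibiotic
# use in meat production with obesity prevalence.  All numbers here are
# PLACEHOLDERS for illustration -- not the actual figures from the charts.
from scipy.stats import spearmanr

antibiotic_use_mg_per_kg_meat = {   # hypothetical values
    "US": 300, "Spain": 240, "Netherlands": 230, "France": 220, "Greece": 210,
}
obesity_prevalence_pct = {          # hypothetical values
    "US": 34, "Spain": 16, "Netherlands": 12, "France": 11, "Greece": 18,
}

countries = sorted(antibiotic_use_mg_per_kg_meat)
x = [antibiotic_use_mg_per_kg_meat[c] for c in countries]
y = [obesity_prevalence_pct[c] for c in countries]

rho, p = spearmanr(x, y)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# Even a strong country-level (ecological) correlation couldn't show that
# antibiotic residues in meat cause obesity in individuals.
```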

What about prescribed antibiotics?
But, I really like the question, not only because it might have explained the obesity epidemic, but because it doesn't immediately assume a genetic cause.  And we can still steer away from genetics and ask the question as Kennedy did -- is it the antibiotics we're giving our children to treat things like ear infections that are responsible for obesity?

According to Kennedy, mice fed high calorie diets and antibiotics in one study gained a lot more weight than mice fed the same food minus the antibiotic, the controls.  We aren't mice, and we aren't pigs, but it does start to sound like there might be a connection.

But if so, what would the dosage have to be?  There must be a significant difference between being fed even sub-therapeutic doses of antibiotics daily and having a therapeutic dose once or twice a year (I haven't been able to find an actual figure for average antibiotic use in the US, so this is a guess, but I'd bet it's not too wild).  And, we don't know whether there's a particularly at-risk age of antibiotic use -- does it have to be in childhood, or would adult users be at risk as well?

So, dosage and timing would seem to be important questions to answer, as well as context (as our friend Charlie Sing would remind us).  Would there be particular lifestyles or diets that increase risk of antibiotic-induced obesity?

Finally, the August Nature paper on the 'microbiome' and obesity reports that obese people have different gut flora than thin people.
Low richness of gut microbiota has been reported in patients with inflammatory bowel disorder (IBD), elderly patients with inflammation and in obese individuals, but the differences of richness within these groups or among non-obese individuals were not previously detected. As the composition of the gut microbiota seems to be rather stable over long periods of adulthood, its richness may well be a characteristic feature of an individual. In mice, the richness seems to be affected by repeated antibiotic treatments, and host genetics could also have a role. 
The suggestion seems to be that antibiotics could be responsible for the difference.  But, it's very premature to conclude from this paper that antibiotics cause obesity, because, for one thing, it's not clear whether the richness of the gut flora preceded obesity, or is a consequence.  And, certainly the drug/obesity connection isn't going to sidestep the question of a genetic influence for long.  Geneticists know very well how to keep their share of the pie.  They've been doing it for years:  "Yes, of course, the rapid change in disease X rates is due to environment, but that's because some people have genes that make them hyper-responders.  So we need to do genome mapping to find these few sensitive genotypes!"  Unfortunately, mapping for most of the common diseases, like obesity or diabetes or asthma or autism or you-name-it -- or even just human stature -- has been justified on the 'a few sensitive hyper-responding-genes-are-responsible' argument...and all have shown no evidence for that.  Instead, it's clear that environmental changes really are responsible for many major risk changes, not unlucky genotypes.

Anyway, the antibiotics/obesity connection in humans is an intriguing idea, but there's still a lot we don't know.

Friday, March 7, 2014

The WHO and recommendations on sugar consumption: an ill-posed problem

The search for magic health bullets, or the Single Evil, goes on unabated, despite a steady record of essential failure.  Is that fair?  Well, we've got decades of very extensive, expensive, and expansively technical studies of some questions of major public health relevance -- and little to show for it.

As a new example, after a two-year effort to connect the dots between sugar and disease, the World Health Organization, whose large professional research staff should know how to do that, believes it has done so, and now recommends we all reduce our sugar intake to 10% of our diet, or better yet, 5%.  That's 12 or fewer teaspoons of sugar a day.
There is increasing concern that consumption of free sugars, particularly in the form of sugar-sweetened beverages, may result in both reduced intake of foods containing more nutritionally adequate calories and an increase in total caloric intake, leading to an unhealthy diet, weight gain and increased risk of noncommunicable diseases (NCDs).
And,
Also of great concern is the role free sugars play in the development of dental diseases, particularly dental caries. Dental diseases are the most prevalent NCDs globally and though great improvements in prevention and treatment have occurred in the last decades, dental diseases continue to cause pain, anxiety, functional limitation and social handicap through tooth loss, for large numbers of people worldwide.
So, sugar is bad for our teeth and bad for our health.



But how good is the evidence?

To answer that question, the WHO commissioned a meta-analysis, that is, a single combined analysis of all pertinent previous studies of the effects of sugar on obesity, which was published in 2013 in the British Medical Journal by Te Morenga et al.  The authors note that they included studies that controlled for lifestyle factors and medical interventions.  They looked at two groups of studies: cohort studies and trials.  Of the trials, they chose two groups.
One group included studies in which participants in the intervention arm were advised to decrease or increase sugars, or foods and drinks containing sugars. Although such advice was generally accompanied by the recommendation to increase or decrease other forms of carbohydrate, there was no strict attempt at weight control. These trials are referred to as ad libitum studies. The other group of trials attempted to achieve isoenergetic replacement of sugars with other forms of carbohydrate. Interventions designed to achieve weight loss were excluded because the ultimate aim of the review was to facilitate the development of population based recommendations rather than nutritional recommendations for the management of obesity.
That is, people in these studies were either asked to add or subtract sugary foods and drinks from their diet, but change nothing else, or they were asked to subtract sugar and substitute it with a different carbohydrate.  Presumably the latter was to control for the effect of simply adding or subtracting calories of any sort, though this isn't clear in the paper.

A literature search turned up 7895 potentially relevant studies, but of these only 19 met the criteria for ad libitum studies, and 11 others met the criteria for isoenergetic replacement studies.  A parallel search identified 9445 potentially relevant cohort studies, and of these, 38 were deemed appropriate for inclusion.

From these 30 trials and 38 prospective cohort studies, what do they conclude?  Well, of the 30 trials, five studies measured the effect of reducing dietary sugars.  To quote the authors of the WHO commissioned BMJ paper, "Reduction in dietary sugars intake was associated with significantly reduced weight (-0.80 kg (95% confidence interval -1.21 to -0.39); P<0.001) at the end of the intervention period by comparison with no reduction or an increase in sugars intake."

So, five studies report statistically 'significant' weight loss with reduction in sugar intake.  But what are we talking about here?  Well, an average decrease of less than 2 pounds, or at most 2 1/2 pounds, with variation around that.  This may be 'statistically significant', but all I can say is that if I lost 2 pounds I wouldn't think it important enough to tell my mother about, never mind publish it.  And in only one of the studies were participants asked to substitute low sugar foods for the high sugar foods they were eliminating.  So a single study tried to test the effect of eliminating sugar while holding caloric intake steady; the others did not.
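For readers thinking in pounds, the conversion behind those figures (about 2.2 lbs per kg):

$$0.80 \text{ kg} \times 2.2 \approx 1.8 \text{ lbs}, \qquad 1.21 \text{ kg} \times 2.2 \approx 2.7 \text{ lbs}$$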

And what about the study duration differences?  Comparing 10 weeks to 8 months, the durations of some of the studies, is an apples to oranges kind of comparison -- what about the effects of seasonal differences in diet and exercise, their correlations, and so forth?  And, three of the five studies reported results only for those who completed the study, which is a possible source of bias.  Who was determined enough to finish, who was not, and why?  Indeed, the authors write that excluding two of these studies weakened the results somewhat, meaning that there was indeed a difference between completers and non-completers, at least in those two studies.

The authors identified 12 studies that asked participants to substitute sugars with other macronutrients.
Interventions ranged from two weeks to six months, and sugars were in the form of either sucrose or fructose used to sweeten foods or liquids. We saw no evidence of difference in weight change as a result of differences in sugars intakes when energy intakes were equivalent (0.04 kg (95% confidence interval −0.04 to 0.13)).
While Te Morenga et al. emphasize the results that showed weight gain to be associated with sugar intake, a number of these studies reported no change in weight, reporting "Sucrose intake not significantly associated with weight gain at follow-up", or "No relation was seen between sweet foods consumption and WC [weight change]".  See Table 5 in the paper.

Data from studies on children were 'equivocal', and the authors discuss possible reasons for this, including poor compliance with dietary advice.

So, after all of this effort, on a subject that has been widely trumpeted as well-known, is sugar bad for us?  Maybe, but we don't know it from this study.  All the studies included in the meta-analysis are based on methods of assessing food intake that are themselves questionable.  Dietary recall, the method used in the trials, and food frequency questionnaires used in the cohort studies, or indeed just about any other method of dietary assessment, are notoriously inaccurate measures of what we eat.  And, the amount of weight gain reported to be due to sugar consumption is perhaps statistically significant, but with respect to the dangers associated with obesity, it's trivial.

In addition, do we actually know whether study participants cut down on total caloric intake when they reduced sugar intake?  If they didn't substitute sugar with something else, it's impossible to know whether any subsequent weight loss is due to the reduced sugar in their diet, or simply fewer calories.  The fact that the subjects in the isoenergetic studies lost no weight when they replaced sugar with other carbohydrates would suggest that this is the case.

Te Morenga et al. say that possible confounders -- lifestyle or dietary factors -- that could influence an association between sugar consumption and weight gain were controlled for in the studies they include in their analysis.  But epidemiological studies can never anticipate or control for all factors that might influence a causal variable, because causal pathways are complex and all components of those pathways cannot be known completely. The problem of confounding, that is, causation by an unmeasured factor whose exposure is correlated with a measured one (like sugar intake?), is notoriously tough to identify.

All of this is predicated on the assumption that weight gain is unhealthy.  Or leads to ill health.  The WHO paper didn't address the strength of the evidence for this. Nor did they show any evidence to suggest that sugar contributes to ill health, or even weight gain, any more than any other food.  When speculating on how sugar could be associated with weight gain, the authors write, "The most obvious mechanism by which increasing sugars might promote weight gain is by increasing energy consumption to an extent that exceeds energy output and distorts energy balance."  That is, people who gain weight consume more of anything than they burn, as the isoenergetic studies suggest.  By this logic, eliminating milk or eggs or meat and not replacing them would cause weight loss, too.

Te Morenga et al. do write that mechanisms by which sugar itself might be a cause of weight gain have been suggested, but that is beyond the scope of their paper.

Tooth decay
Sugar consumption is much less ambiguously associated with tooth decay.  Or at least I thought so, and it seemed something 'everybody knows'...until I read the review commissioned by the WHO, a paper in the Journal of Dental Research, the foundation for the WHO recommendation that sugar consumption should be less than 5% of our diet.
From 5,990 papers identified, 55 studies were eligible – 3 intervention, 8 cohort, 20 population, and 24 cross-sectional. Data variability limited meta-analysis. Of the studies, 42 out of 50 of those in children and 5 out of 5 in adults reported at least one positive association between sugars and caries. There is evidence of moderate quality showing that caries is lower when free-sugars intake is < 10% E. With the < 5% E cut-off, a significant relationship was observed, but the evidence was judged to be of very low quality. The findings are relevant to minimizing caries risk throughout the life course.
The evidence for an association is judged to be 'moderate' or 'of very low quality.'  This largely reflects the fact that most studies didn't ask the question the WHO was interested in -- is there a threshold amount of sugar that is highly associated with cavities?  Still, the studies do show an association, even if confounders like socioeconomic status might be problems.

Conclusions
Based on these studies, the WHO now recommends that we cut sugar down to 10%, or even better, 5% of our diet.  That's 6 to 12 teaspoons of sugar a day.  According to the Guardian, average intake among adults in the UK is 11.6%.  Surely reducing it to 10% isn't going to eliminate diabetes or heart disease, even if we could measure our intake accurately enough, and had the patience to tinker with our every mouthful.
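For what it's worth, here's where the teaspoon figures come from, assuming a reference diet of 2,000 kcal a day, about 4 kcal per gram of sugar, and roughly 4 grams of sugar per teaspoon:

$$0.10 \times 2000 \text{ kcal} = 200 \text{ kcal} \;\div\; 4 \text{ kcal/g} = 50 \text{ g} \approx 12 \text{ teaspoons a day}$$

and half that, about 25 g or 6 teaspoons, for the 5% target.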

Further, the WHO has asked a question that can't be answered:  Is there a threshold over which sugar consumption is unhealthy?  We wrote not long ago about the well-posed problem problem, and this is a good example.  There isn't a single answer to the question.  Indeed, the question can't even be clearly understood, never mind answered.  Why should there be a threshold?  What kind of sugar consumption?  Does the vehicle for sugar consumption (cake vs soda, e.g.) make a difference? What does unhealthy mean anyway?  The question is really not a question at all from a scientific point of view.

Is this the best we can do?  Is it more than today's example of hand-waving dressed in the language of science?  After all, we've been studying these subjects for decades.  And if it is the best we can do, why bother and why spend the resources when there are more soluble problems that deserve attention? 

Thursday, March 6, 2014

Fixing on the nitrogen fixation problem

Haber Bosch fixes nitrogen.  The process is considered by many to be one of the most important technological advances of the 20th century.  Indeed, it is estimated that about a third of the people born since 1909 have been sustained by fertilizer produced by the Haber Bosch process (Erisman et al., Nature Geoscience, 2008).  It must also be said that the process has been responsible for the death of millions, because it also fixes nitrogen for use in explosives, but it is its use in agriculture that we are focusing on today.

Nitrogen is an essential component of all living cells.  It is crucial to metabolic processes, to growth and development, and in plants it is a component of chlorophyll, so it's essential for photosynthesis as well as seed and fruit production.  But nitrogen can be hard to come by if you're a plant.  It's a major constituent of our atmosphere -- 78% -- but plants can't use it in its atmospheric form because as such it is chemically inert, an unreactive molecule that must be primed for further chemical reactions.  The bond between the two nitrogen atoms in atmospheric N2 is among the strongest known, in fact, so the nitrogen must be 'fixed' by some natural or man-made process, the bond broken, and the nitrogen otherwise made 'reactive', or available to plants.*


Fritz Haber, 1918

Plants can use nitrogen from either nitrate (NO3-) or ammonium (NH4+) ions.  They absorb it through their roots, but neither form is abundant in soil.  Fritz Haber developed a process to 'fix' nitrogen about a century ago: to split the nitrogen bonds and prime the molecules for further chemical reactions, linking them to hydrogen to make ammonium or to oxygen to make nitrates, and thus make nitrogen bioavailable.  The process is now known as the Haber Bosch process, because Carl Bosch later scaled it up for commercial use, and agriculture has been dependent on it ever since.

There are up sides and down sides to this.  Haber Bosch converts atmospheric N2 into ammonia at high temperature and pressure.  Wikipedia tells us that "This conversion is typically conducted at 15–25 MPa (2,200–3,600 psi) or 150–250 bar and between 300–550 °C (572–1,022 °F)…"  As such, it is not an energy efficient procedure.  Indeed, just this one chemical reaction uses 1-2% of the world's energy supply.  And according to a recent BBC radio program about the Haber Bosch process, hosted by UCL chemistry professor Andrea Sella, only 20% of the nitrogen fixed in this way goes into final products; the bulk gets lost in the environment.
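For reference, the overall chemistry is simple; it's the conditions needed to make it go that are costly.  The synthesis is exothermic, so high pressure pushes the equilibrium toward ammonia, while the iron catalyst and high temperature keep the reaction fast enough to be practical:

$$N_2 + 3H_2 \;\rightleftharpoons\; 2NH_3, \qquad \Delta H \approx -92 \text{ kJ per mole of } N_2$$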


The nitrogen cascade, from an EPA report on reactive nitrogen

The Haber Bosch process now produces about 100 million tons of ammonia a year, according to the same BBC program, which is then used to make 454 million tons of nitrogen fertilizer, usually in the form of anhydrous ammonia, ammonium nitrate, or urea.  The Green Revolution of the 1960s brought fertilizer-dependent agriculture to much of the world, and along with it, a more productive way to feed the world, but there have been consequences. As Aneja et al. put it in a Nature Geoscience article in 2008 ("Farm Pollution", paywalled):
Nitrogen emissions in various forms (nitrogen oxides (NOx), nitrous oxide (N2O), ammonia (NH3), and organic nitrogen (Norg)) are one of the two main classes of pollutants that are emitted by modern agriculture. Although produced naturally in soils through microbial denitrification and nitrification processes, nitrous oxide —a greenhouse gas that is much more effective than carbon dioxide in trapping heat in the atmosphere — arises from animal production in large quantities, depending on the nitrogen input and management of manure. In order to increase yields, agricultural operations often directly add reactive nitrogen to soils, either through the application of fertilizer or livestock manure to fields, or by growing nitrogen-fixing crops. These measures increase nitrous oxide emissions via microbial reactions, especially enhanced nitrification. Indirect additions of reactive nitrogen exacerbate the problem. For example, nitrogen from fertilizer or manure volatilizes as ammonia and oxides of nitrogen are redeposited in downwind regions as ammonia, particulate ammonium, nitric acid and nitrate.
Haber Bosch has led directly to water and air pollution, reduced biodiversity, acidification of the soil, ocean dead zones, negative effects on human health, and global warming.  So, while biological fixation of nitrogen also has consequences, people are beginning to think about replacements for the Haber Bosch process primarily because of its environmental effects.  There are two angles from which to approach this -- one is to engineer plants to fix nitrogen themselves, and the other is to make the mechanics of fixing nitrogen more efficient.  

In theory, it's possible to produce ammonia at low pressures and temperatures, unlike the Haber Bosch process.  The trick is to push electrons into the nitrogen molecule, N2, to weaken the bonds, priming it to accept additional protons, positively charged hydrogen, and eventually produce ammonia.  This is being tried in the lab, but there are a lot of ways it can and does go wrong, so that hydrogen escapes rather than making ammonia, and the process is decades away from being commercially viable.
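Schematically -- and this is an idealized sketch, not the messy lab reality -- the electron-pushing route amounts to the overall reduction

$$N_2 + 6H^+ + 6e^- \;\rightarrow\; 2NH_3$$

and the 'leak' mentioned above is the competing reaction $2H^+ + 2e^- \rightarrow H_2$, which wastes the protons and electrons as hydrogen gas instead of ammonia.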

And even if it does become viable, it still means farmers will be spreading tons of synthetic fertilizer on their fields, still polluting soil, water and air.  So another approach is to engineer plants to fix their own nitrogen, as legumes -- peas and beans -- do naturally with the aid of symbiotic bacteria that produce an enzyme called nitrogenase.

The interaction between legumes and these nitrogen-fixing bacteria happens in nodules that the plant forms on its roots.  These are essentially nitrogen organs, and they fill with bacteria that can break the nitrogen bonds of atmospheric N2 so that it becomes available to the plant.  It's not a one-way deal, though, as the plants are a source of carbon for the bacteria.

Most major cash crops don't have the ability to interact with nitrogen-fixing bacteria, though, which is why, given the widespread practice of monocropping, they need to be intensively fertilized.  But, the genetic architecture by which legumes interact with bacteria is understood and the pathway is already present in grasses, just doing something else.  So plant breeders are trying to coax the genes to recognize signals from bacteria so that these plants can create their own interactions with nitrogen fixing bacteria.

The process in legumes, which uses rhizobial bacteria, happens in the absence of oxygen, however, and an anaerobic setting can't always be arranged.  But there are bacteria that can fix nitrogen in the presence of oxygen.  Cyanobacteria do it, for example.  Gunnera tinctoria, or giant rhubarb, has a symbiotic relationship with nitrogen-fixing cyanobacteria, which are also photosynthetic, producing oxygen.  The cyanobacteria actually live inside the cells of the plant, meaning that it's possible a plant wouldn't need the root nodules that legumes use in their nitrogen fixing process.


Gunnera tinctoria: Wikipedia

Sugar cane, too, fixes nitrogen, either with or without oxygen.  It doesn't exploit the same rhizobia that legumes do, nor does it use cyanobacteria.  Instead, it interacts with Gluconacetobacter diazotrophicus, an organism that lives in intercellular spaces in the stem of the plant.  Other plant/N-fixing bacterial relationships are still being found, and plant breeders are experimenting with coating seeds with bacteria as a way to introduce the symbiotic relationship between plants and bacteria.

Would this be efficient?  That is, can it replace a significant amount of the inefficient fertilizer produced by the Haber Bosch process?  The same BBC radio program we mention above suggests that it should be possible to replace at least 20% of it, and that plants would still give good crop yields even though there's an energy cost to fixing nitrogen.  Twenty percent is not insignificant, given the total amount of fertilizer used in the world today, and given that demand for it is rising in China and India, as they try to  feed their ever-growing populations.  But, it would still leave us heavily dependent on Haber Bosch.

The topic of nitrogen fixation was raised recently in an exchange of emails among a group of agronomists, farmers, ag economists, geneticists and interested onlookers including me.  The question of whether engineering corn to fix nitrogen would be the path to lower reliance on synthetic fertilizer was summarily dismissed as silly.  Kendall Lamkey, chair of Agronomy at Iowa State, did the numbers.  He pointed out that soybean is a legume.  It fixes nitrogen, and yet:

In the corn-soybean system in Iowa, soybean (the legume) by far has the largest total N needs, particularly at high yield levels.  Total N needs are dependent primarily on seed protein content and yield.  ...a 65 bushel/acre soybean crop has a seed N need of 240 lbs/acre. Soybean has a N harvest index (the proportion of the total above ground N that is in the grain) of about 0.7, so the total N requirement of a 65 bushel/acre soybean crop is about 350 lbs/acre.  Soybeans depend primarily on soil N until about the start of pod-fill when they switchover to depending primarily on N-fixation. We talk about soybean fixing about 50% of its total N needs on the average, but the reality is that the range is large and dependent on many factors, but with our levels of soil nitrate and yield, 50% fixation captures a lot of the acres. That means a 65 bushel soybean crop needs about 175 lb N/acre from the soil supply. 

Kendall went on to say that he'd bet a recent record soybean yield in Iowa (160 bu/acre) required on the order of 600 lbs of nitrogen per acre.  However,

A 200 bushel/acre corn crop tops out at about 178 lbs/acre at physiological maturity.  The N-harvest index is around 0.64 as well, so 114 lbs/acre of the 178 is in the grain.

So, a soybean harvest at these yield levels removes much more N from the field than a corn harvest, and both require about the same amount of soil N, which comes from fertilizer and manure inputs and soil organic matter decomposition.
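Laying out the arithmetic in Kendall's numbers makes the comparison plain:

$$\text{Soybean: } \frac{240 \text{ lb N in seed}}{0.7 \text{ harvest index}} \approx 343 \approx 350 \text{ lb N total}; \qquad 350 \times 0.5 = 175 \text{ lb N drawn from soil}$$

$$\text{Corn: } 178 \text{ lb N total (all from soil)}; \qquad 178 \times 0.64 \approx 114 \text{ lb N removed in grain}$$

So the soybean harvest exports roughly twice the nitrogen (240 vs 114 lbs/acre) while drawing about the same amount from the soil (175 vs 178 lbs/acre).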

If you measure change in soil N content over time in the corn-soybean rotation, you will find there is a net loss of N from the system, which translates into a net loss of soil organic matter.  If you do the same thing with a corn-corn rotation, you will find a net increase of N in the system.  This difference is driven primarily by the low crop residue inputs in soybean compared to corn. 

Unless we change the cropping system, N fixation in corn is not a cure.  Neither is reduced fertilizer N inputs.  It would help, but not as much as you might think, unless it caused farmers to change their cropping system.  Changing the cropping system is what caused the problem in the first place. 

Before 1913, when fertilizer made by Haber Bosch became commercially available, plants got their nitrogen naturally.  Farmers rotated crops, planting nitrogen-fixing legumes in fields where they also grew cereals, and they applied manure from animals that had eaten nitrogen-rich feed, and/or the composted nitrogen-rich remains of other crops. Where available, farmers used guano as fertilizer, shipped far and wide from Pacific Islands, or nitrates from South America, but by the turn of the 20th century, these sources of reactive nitrogen were not enough to feed the growing world population.

Cropping practices have changed dramatically in the last century, largely because farmers can rely on synthetic fertilizer to enhance their crop yields.  A sharp decrease in the diversity of crops planted in the American Midwest has occurred over the past 50 or 60 years, and now corn and soybeans are the predominant crops there, with heavy reliance on synthetic fertilizer.  Haber Bosch has changed the way agriculture is done, with all its consequences.

And Haber Bosch has thus indirectly enabled dramatic population growth, making possible an increase in the number of people on earth from close to 2 billion in 1900 to more than 7 billion today.  Erisman et al. write that "...the number of humans supported per hectare of arable land has increased from 1.9 to 4.3 persons between 1908 and 2008", and they estimate that probably half of us wouldn't be alive today without Haber Bosch.


Of the total world population (solid line), an estimate is made of the number of people that could be sustained without reactive nitrogen from the Haber–Bosch process (long dashed line), also expressed as a percentage of the global population (short dashed line). The recorded increase in average fertilizer use per hectare of agricultural land (blue symbols) and the increase in per capita meat production (green symbols) is also shown. (Erisman et al., Nature Geoscience, 2008)
Would growing food that fixed nitrogen itself enable further population growth?  Not if what it would change is the way that nitrogen is made available to plants, not the amount of food that can be grown.  If plants can be engineered to fix nitrogen, this could have beneficial environmental effects, although biologically fixed nitrogen is also a pollution source.  But, if we’re still spreading as much reactive nitrogen on fields as we do now, regardless of how it’s made, this can’t attenuate the environmental effects.

So, perhaps a law of nature that we must face is that every solution raises its own problems.


----------------------------
*Reactive nitrogen (Nr) includes inorganic chemically reduced forms of N (NHx) [e.g., ammonia (NH3) and ammonium ion (NH4+)], inorganic chemically oxidized forms of N [e.g., nitrogen oxides (NOx), nitric acid (HNO3), nitrous oxide (N2O), N2O5, HONO, peroxy acetyl compounds such as peroxyacetyl nitrate (PAN), and nitrate ion (NO3-)], as well as organic compounds (e.g., urea, amines, amino acids, and proteins).  From a 2011 EPA report, “Reactive Nitrogen in the United States: An Analysis of Inputs, Flows, Consequences, and Management Options”.

This post was inspired and enhanced by a recent discussion among agronomists, ag economists and geneticists, and comments, and improved with assistance from Matt Liebman.  Any remaining errors are my own. 

Wednesday, March 5, 2014

Ring around the tree trunk

It's March 5th, and here in central Pennsylvania the 2+ feet of snow that fell a few weeks ago has now melted down to about 6 inches of crystallized white stuff.  It was beautiful for a while -- now it's mostly a dirty slippery crusty coating keeping spring at bay.

But it's also curious.  Now that it has been melting in earnest, there are rings around a lot of trees, rings of no snow.

Here's what's happening in our backyard, from wide rings, to just-getting-started rings, to no ring at all.

Trees aren't warm-blooded creatures, so why is the snow melting around the trunks?  Where is the heat coming from?

These trees are all within fifteen feet of each other, so it's not likely that there's something about the soil, or proximity to some heat source that explains the difference.  

Time to ask the internets.

It turns out that there are plants that do produce a lot of heat, though it's apparently not for the purpose of melting snow.  Voodoo lilies and skunk cabbage are the two examples that come up again and again when you google exothermic plants, plants that give off heat.  Both of these plants smell rotten -- to people, at least.  They flower early in the spring, even through the snow, and the heat they produce apparently helps to dissipate the odor and attract (non-human) pollinators.  How this happens is well-known, but not relevant to our question today.  

But I did stumble across a beautiful description of this by a skunk cabbage-loving botanist (see the link for lovely drawings).  
A couple of times I've been lucky enough to see spathes growing up through a thin layer of ice, the ice melted around the spathe in a circular form. This is an indication of skunk cabbage's remarkable capacity to produce heat when flowering. If you catch the right time, you can put your finger into the cavity formed by the spathe and when you touch the flower head, your finger tip warms up noticeably. Biologist Roger Knutson found that skunk cabbage flowers produce warmth over a period of 12-14 days, remaining on average 20° C (36° F) above the outside air temperature, whether during the day or night. During this time they regulate their warmth, as a warm-blooded animal might!
And, plants respire.  That is, they convert sugars into energy to fuel metabolic processes, growth during the growing season, and just staying alive in winter, and some of that energy will be released as heat.  The roots respire during winter, too.  Even if it's not much heat being released, it may be enough to melt snow when the ambient temperature is high enough.  

Plus, here in Pennsylvania the ground doesn't freeze very deep, and tree roots go deep, make a solid connection to the trunk above ground, and sap may be running this time of year, too.  So it is reasonable that as they emerge from the ground, trunks are just a touch warmer than freezing.  One would, however, need an explanation for those trees in the same area, and of the same species, that don't have a melt-ring at their base.  With no leaves to create shade, are they nonetheless located to get less sunlight than their ringed neighbors?

Probably the best, or at least a more plausible, answer is that trees have low albedo, or reflecting power, which means that they absorb more energy from the sun than does the surrounding snow-covered ground, and thus are slightly warmer.  We're guessing here, but perhaps younger trees, with thinner bark, release more of that energy than older, thick-skinned trees, which could explain why the snow seems to melt more readily around smaller, younger trees.  The idea gains considerable credence from the observation that there are melt rings around other posts and poles, not just trees.  The daylight may not only be absorbed by them, but also reflected downward onto the snow at their base.
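In back-of-the-envelope terms (a guess dressed in an equation, to be clear): the solar power a surface absorbs per unit area is

$$P_{absorbed} = (1 - \alpha)\, I$$

where $I$ is the incident sunlight and $\alpha$ is the albedo.  Fresh snow reflects most of what hits it ($\alpha$ roughly 0.8-0.9), while dark bark reflects little ($\alpha$ perhaps 0.1-0.3), so a trunk can absorb several times more of the sunlight falling on it than the surrounding snow does, with some of that heat going into melting the snow at its base.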

Whatever the true answer, clearly spring is on its way.  

Tuesday, March 4, 2014

Lucretius, and stories about the Nature of Things

I never took courses in Classics, so my knowledge of the major figures and their works is only sometimes from my own informal reading, and mainly from secondary summaries or hearsay.

But, after having read of and about it for many years, I decided I should actually read Lucretius' (99-55 BC) long poem entitled De Rerum Natura (On the Nature of Things).  So many times, I've heard that Lucretius anticipated modern science in sometimes rather eerily prescient ways, and I felt I should see for myself.  I can't read Latin, so I have read an English prose translation, hoping at least to see the content, even if unable to appreciate the artistry.


The deep truths?  Source: crusader.bac.edu



Lucretius
Lucretius was a Roman, nominally writing to persuade a friend of his, but he was basically expounding the views of the Greek Epicurean school of philosophy, led by Epicurus and Democritus and others who lived a couple of hundred years earlier. Among other things, the Epicureans argued persuasively that there were no gods and that human existence is matter-to-matter, dust-to-dust: enjoy life while you have it because that's all there is.  You will decay and your material will return to the cosmos from whence it came.  Nothingness cannot generate material, and material is eternal.  Your mind (or spirit, or whatever term you use) is part of your body and disappears with your body, into the cosmos, to be re-used in the future, as it had come to form you when you were conceived.

There exists matter, they said, made of atoms ('a-tom', or cannot cut), which are the smallest fundamental particles, and 'void' (space).  Atoms move around randomly through space and collide when they meet, at which time they can stick and form larger constructs, which today we call  'molecules'.  Aggregates of these compounds make up the stuff of the world.  It can separate and rejoin, but matter is neither created out of nothing nor destroyed, nor is space.  To allow for our apparent free will, the Epicureans said that atoms could sometimes 'swerve' from their normally determined paths.

The atomic theory of the Epicureans sounds quite like modern molecular and atomic physics and cosmology (though it is true that modern physics does seem to allow strange things like the creation of matter and energy from cosmic vacuum, and perhaps multiple universes, and so on).   Thus, the ideas Lucretius described can seem presciently scientific, two thousand years before their time.  I have read such characterizations frequently.   But there are some interesting points here, that have to do with how ideas are anchored in reality, and with selective reporting.

For one thing, if you read the rest of Lucretius, you'll find stories of the origins of things in an earth-centered universe, including anthropological tales explaining the origin of humans and their cultural evolution--how we started out crude and beast-like, then discovered weapons, clothing, governments, language and song, agriculture, and the domestication of animals.  He also used his theory to explain the nature of lightning, earthquakes, volcanoes, weather, geology, gravity, why the Nile floods, and the nature of magnetism.  He explained the working of our senses like vision, touch and taste, in atomic terms--accounting, for example, for the emanations from the atoms on the surface of our bodies that enable us to see 'ourselves' in mirrors.  He raised developmental arguments to show that chimeric beasts, like Centaurs, cannot be real.  He delves into racial variation and why different populations are subject to different diseases.  And he goes into the clinical nature and epidemiology of plagues.

A main aim of Lucretius was to purge people of superstition.  He fervently wanted to dismantle anything other than a pure materialism, even in explaining the origin of moral aspects of society.    In this sense, too, he is favorably cited for his 'prescient' materialistic atomic theory of everything.

In the major sections of De Rerum, however, the apparent prescience becomes less and less, and any idea that he foreshadowed modern science dissolves.  Basically, the Epicureans were applying their notion of common-sense reasoning based on very general observations.  They strung out one categorical assertion after another of what 'must' be the case.  In today's parlance, they were providing hand-waving 'explanations' ('accounts' would be a better term) that seemed consistent but did not require any sort of rigorous means of establishing truth.

Along comes the Enlightenment
Aristotle, Plato, and others of the Greek philosophers reinforced the idea that reasoning itself was enough to generate understanding of the world.  We are basically built, they said, to see and understand truth.  Such a view of knowledge lasted until about 400 years ago, the period called the Enlightenment (in Europe), the time of Francis Bacon, Descartes, Galileo, Newton, and many others.  Those authors asserted that, to the contrary, to understand Nature one had to make systematic observations, and develop proper, formal, systematic reasoning to fit hypotheses to those observations, to develop general theory or laws of the world.  Out of this was born the 'scientific method' and the idea that truth was to be understood by empiricism and actual testing of ideas, not just story-telling--and no mysticism.

Reading Lucretius makes one realize, first, that even if a story like the Epicureans' atomic theory has aspects we'd regard today as truth, it was to them basically a sort of guessing.  Secondly, just because a story is plausible does not give it a necessary connection to truth, no matter how consistent the story may seem.  We now do have actual 'scientific' theories to account for--or, indeed, explain--phenomena such as earthquakes, weather, volcanoes, the nature of metals and water, the diversity of life, a great deal of biology, and even culture history.  If you think of how we know these things, even if there are major gaps in that knowledge, you can see how very powerful and correct (or at least much more accurate) a systematic approach to knowledge can be, when the subject is amenable to such study.

It is a great credit to centuries of insightful, diligent scientists, our forebears, whose legacy has brought us to this point.  It is a wonderful gift from them to our own time.

Advances in technology and methods may be making some Enlightenment concepts obsolete, and we continually find new ways of knowing that go ever farther beyond our own personal biological senses.  For those of us in science, reading the likes of Lucretius is an occasion to compare then and now, to see why just being intelligent and able to construct consistent explanations is not enough, and that for many areas we do now have ways to gain knowledge that has a firmer footing in reality--not just plausibility.


But....
That's all to the good, but if you do a more measured reading of Lucretius, you can see that in many ways we haven't come all that far.  We do a lot of cherry-picking of things in Lucretius that sound similar to today's ideas and thus seem particularly insightful.  But it is not clear that they were more than a mix of subjective insight and, mainly, good guesses--after all, there were competing theories of the nature of Nature even at the time.  And other areas of Epicurean thought, well, are just not mentioned by those remarking on their apparent modernity.  Selective citation gives an impression of deep insight.  Most of De Rerum Natura was simply story-telling.

In many areas of science, perhaps even some aspects of fundamental physics and cosmology, but particularly in the social and even aspects of evolutionary sciences, we still make careers based on plausibility story-telling.  Our use of mathematics or statistical methods--random surveys, questionnaires, arguing by analogy, and so on--and massive data collection, give the same sort of patina of professional wisdom that one can see in the rhetoric of Lucretius.

We tell our stories with confidence, assertions of what 'must' be so, or what is 'obvious'.  Often, those interested in behavior and psychology are committed to purging religious mysticism by showing that behavior may seem immaterial but that this is an illusion, and purely material evolutionary and genetic explanations are offered.  No 'free will'!  The world is only a physical reality.  The role of natural selection and competition in explaining even morality as a material phenomenon is part of this, because Darwin provided a global (may one say 'Epicurean'?) material framework for it.  Evolutionary stories are routinely reported to the public in that way as well.  Even if some caveats or doubts are included here and there, they are often buried by the headlines--and the same can be found in Lucretius, over two thousand years ago.

Explanations of physical and behavioral variation and its evolutionary causes, along with many 'adaptive' stories making forcefully asserted plausibility arguments about what evolved 'for' what, still abound. They are not just told on television--we can't really blame Disney and Discover for appealing to their audiences, because they are businesses; but the same stories are in the science journals and science reportage as well.  We see tales every day reporting miraculous discoveries about genetic causation, for example.  It is sobering to see that, in areas where we don't have a really effective methodology or theoretical basis, we are in roughly similar sandals as our ancient predecessors.

Cherries; Wikipedia
Intoxicated by the many genuinely dramatic discoveries of modern, systematic science, we do our own cherry-picking, and tend to suspend judgment where findings are less secure, dressing our explanations in sophisticated technological hand-waving.

When we don't have actual good explanations, we make up good-sounding stories, just as our forebears did, and they're often widely accepted today--just as they were then.

Monday, March 3, 2014

A "Question This Answer" award program?

It's a cliché that old people who remember how things once were are unhappy about how things now are.  Of course, this can always be attributed to senility or the yearning of the old crew for their particular "good old days"; often those days were old but not so good.  But sometimes, and in some ways, they really were better, and only those who lived through them can try to make younger people aware of what they've lost, and could--with enough drive and will--recapture.

Manoj Samanta, a regular MT reader and correspondent, thoughtfully sent this link to a discussion by Nobel winner Sydney Brenner about his particular good old days in genetics.  If you read it, you'll see that his points are not new.  You've seen us mention them many times here.

An institutionalized, bureaucratized, well-funded System may have its good points, but it also becomes ossified, self-protective and self-perpetuating.  It loses the freshness, or frisson, of its original mission that motivated energized, creative thinking.  This kind of phenomenon is important in all areas of human life, but our concern is science.

Scientists are now employees rather than professionals, to a great extent.  They work for a layered set of institutions (their universities, their funders, the media on which they depend, etc.).  Students are their serfs.  Graduate students and even post-doctoral fellows are typically assigned a piece of the Boss's grant project and a schedule on which to deliver 'results'.

"Answer this question"
The bottom line is basically that a grant was proposed to ask a question and the funds were given to answer it.  But because the "Answer this Question" system has evolved over the past decades to hide in well-planned and conservative safety behind incremental change, reports, meetings, forms, new large-scale technology, and a rush to publish in quantity rather than in a more measured fashion, it can stifle really new ideas that are problematic and not at all guaranteed to succeed.

Even in my own time, as a graduate student it was my responsibility to conceive of my own research problem.  My adviser was just that--an adviser.  My project wasn't his project or his work--he did plenty of his own (because, in part, he didn't have to spend all his time writing grants).  It was up to me to frame the question I wanted to address, show him that it was worth addressing, and then figure out how to do it.  In my case, and this is important these days, no funds were needed (it was a theoretical, computer-based project).  If it flopped, well, no PhD unless I thought of a different project.

It didn't flop, fortunately for me, and I went on to a post-doc in a mighty medical school for a mighty intimidating figure in human genetics.  But he basically also gave me and the other post-docs in his group the same sort of freedom:  if it was related to the general area of work (human genetic variation and its origins), he would support us in what we chose to do.

Those were heady days.

We think it would be wonderful if someone, with some resources and the foresight to do it, would implement a kind of retro program that, at least for some lucky students or post-docs, would resuscitate the environment for creative thinking; Brenner, in the interview, is doing something along these lines. He's lucky, because he has the stature, and perhaps the resources or leverage on resources, to make it happen.

Here is my idea:

"Question this answer!"
The most incisive way to do new science is not just to push the proverbial 'cutting edge' of a field another few millimeters, the way thousands of hapless soldiers have had to sacrifice their futures for a few meters of cratered territory.  It is not to further the mentor's career, nor to learn how to operate the same way for the next 40 or more years.  No.  It is different.

Instead, we would like to see funds available for a student who would take accepted wisdom, and question it:  What if the standard wisdom isn't true?

Funding for a "Question this answer!" research program for students could be exciting for everyone, student and mentor alike, and for the general community.  Such funds and support will have to come from someone not too timorous to venture forth, not from institutions so stodgy they ought to be on life support.  That's because most things they fund won't go very far...

But those that do will change the world.