Monday, December 16, 2013

Innovation-stimulation: will it work? Definitely worth a try!

Francis Collins has for some reason decided that NIH should try, really try this time, to stimulate research 'innovation' by moving at least a bit away from costly, wasteful, excessive-but-incremental big-project grants and dedicating a goodly chunk of NIH external research funding to individual investigators rather than large groups.  Here is the Nature story about it.

Alpine ibex climbing Cingino Dam in Italy (source: every other web site)
The NIH has been experimenting with funding high-risk, high-reward science with four separate pilot programs, including the Pioneer awards.  According to the Nature piece,
The NIH currently spends less than 5% of its US$30-billion budget on grants for individual researchers, including the annual Pioneer awards, which give seven people an average of $500,000 a year for five years. In contrast, the NIH’s most popular grant, the R01, typically awards researchers $250,000 per year for 3‒5 years, and requires a large amount of preliminary data to support grant applications.
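To get a rough sense of the scale being discussed, here is a back-of-the-envelope comparison, a sketch using only the figures quoted above (the 4-year R01 term is our own midpoint assumption):

# A back-of-the-envelope comparison, using only the figures quoted above.
nih_budget = 30e9                        # ~$30 billion total NIH budget per year
individual_share = 0.05 * nih_budget     # "less than 5%" goes to individual-researcher grants

pioneer_cohort = 7 * 500_000 * 5         # 7 awardees x $500k/year x 5 years
typical_r01 = 250_000 * 4                # $250k/year for 3-5 years (4-year midpoint assumed)

print(f"Individual-researcher grants: under ${individual_share/1e9:.1f} billion/year")
print(f"One full Pioneer cohort commitment: ${pioneer_cohort/1e6:.1f} million")
print(f"One typical R01 commitment: ${typical_r01/1e6:.1f} million")

A whole five-year Pioneer cohort, in other words, commits roughly $17.5 million, a rounding error against the $30-billion budget, which is part of why expanding it looks like such an easy move.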
Expanding the program is clearly a move in a good direction.  Big projects have their place, but have become as much a reflexive strategy for self-perpetuation as they are truly justified by their results history (which, by and large, isn't all that good or has reached diminishing returns).

Of course, individual independent investigators are just people, trend-following herd animals like most of us are.  Once the new program is in place, every investigator will flock to the trough.  Most will propose routine, safe projects even if they assert that they're 'innovative'.

Those proposals that really are innovative will involve risk in two main senses.  First, they will mainly involve procedures or strategies that are truly new, unclear, or untried for the field and/or for the investigator.  Second, if the work is really innovative, most of it won't get completed on time, won't yield much in the way of publications, and -- worse -- won't find anything really new.

But is that outcome really 'worse'?  We think just the opposite!  If not much is invested in a project, not much is lost if it was truly creative but failed.  By contrast much is currently invested in huge projects that are so safe that they hardly generate commensurate returns.  Indeed, the reason for the failure of a really exploratory study may provide more useful knowledge than most 'positive' studies' findings.  And most potentially innovative ideas are, and turn out to deserve to be, busts.  That is why we call the ones that succeed innovative: they can change how we think.

This NIH policy change won't change the crush of competition to keep the grants flowing, and will make it hard to see what is really innovative, in the inevitable panicky rush to get one's salary covered and keep the lab operating.  It takes experience, perhaps, but not undue cynicism, to predict that this new policy will be gamed and strategized.  The overpopulated ranks of investigators still need funding (or jobs!), and will flood to the new trough, finding all sorts of reasons why their work is innovative.  Do you think it could be otherwise, or that such discussions are not taking place already at brown-bag lunches in departments across the country?

Place limits
Unless we limit how much funding any one investigator can have, decline to give these new grants to people who already have a grant, or impose some such restrictions, what we will largely see is just a game of musical chairs.  New labels, same stuff.  After all, who will be reviewing and administering these applications?  It will be the same people who have brought you big-scale non-innovation all these years.  Unless today's heavy hitters are able to reverse the politics back to the old way (making sure their big projects don't get curtailed!), NIH will make it a new System, with all the bureaucratic politics and cumbersomeness that that involves.  If their careers have been spent on the current treadmill, how many will even be able to think in truly innovative ways?  We are, after all, middle-class people who need to earn a living as things now stand.  What else can you expect?

Still, the change should be better than what we currently have!  The amount of funds wasted in the new way will be less than the amount being thrown away in the current rush to Big Science, the seeking of huge projects too big to kill and thus able to provide career safety for the lucky investigators and their labs and fancy equipment.  As long as, under the new approach, the funds for individual researchers are enough to let them do good work but not enough to let them get comfortably entrenched, or for their administrators to depend on the overhead, then it's got a chance to make a real difference.

Of course, this will work even better if those who are training graduate students and post-docs inculcate innovative thinking.  If the grants are big enough that faculty can simply hire students and technical staff to do the work for them, the grants will work less well.  What we need are grants to individuals that are small enough that the recipients will actually have to roll up their sleeves and do some of their own work.

We would suggest a fillip that should be tried:  Give grants to graduate students to do their own truly independent project, not just to be serfs on their mentors' project. Independent, free-standing funding for dissertations.  Labs should be their professors' places of training, not just their playgrounds.

Finally, we know that too-few and too-small will not really work well (compare western science to most of Eastern European, Indian, or South American science in the '80s, for example).  But unrelenting vigilance will be required to prevent coalescence once again into fewer, bigger projects.

If it can be done and really done properly, this could be a salubrious change, in directions we have to approve of, since we've been criticizing the current big-science mode for years. But we have to be patient, because innovation is very hard to come by.

Wednesday, August 7, 2013

The natural and naturally defensive inertia of business-as-usual: is there a better way?

One common response we and others get when critiquing what's going on in the expensive arena of current life science is that we should stop criticizing until we can provide a better idea of how the science should be done--and given the publicity typically associated with business-as-usual (BAU), that requires us to promise a better miracle.

But we intend to be talking about science here, and that makes such responses rather off the mark.  One may differ with our view, but the you-have-to-say-what's-better retort is an unfair response for several basic reasons.

First, science is about understanding nature, and if current BAU isn't very good at that or living up to its own publicly promoted promise, or has entered into a time of diminishing returns, it is perfectly appropriate and fair to criticize it.  If a critique of BAU is on the mark, it is not a scientifically appropriate defense to criticize the critic for not having a solution at hand. Instead, the critique is a kind of call to action. The challenge is daunting, but could stimulate people to think about why and what to do about it, and perhaps some lucky brilliance will yield a better approach.

An Illumina HiSeq 2000 sequencing machine

Second, this is largely about politics, because it has to do with resources.  The you-say-what's-better response is in fact not about science but is instead a ploy to defend BAU, and if most objective observers agree that the problems a critique raises are cogent, it is a desperate ploy.  The response is very understandable.  We all have to make a living, and we scientists aren't any less self-protecting than any other part of society.  We're trapped in that mode, to some extent--indeed, we see little evidence that even the protection of the tenure system makes people behave differently, and of course if your salary must come from grants.....

We also have our vested psychological interests, our sense of worth and so on, to protect as well.  Even in science, where exploration of the unknown is our self-professed activity, change is a threat to the comfort of the known territory of BAU.  In our commercial culture that, like it or not, now even includes universities-as-businesses, lots of money is to be made, careers to be lived, and people to be employed pursuing BAU--even if it never really delivered on its promises.  Indeed, that's kept religions in big business for millennia!

But these realities in themselves might be used to leverage a better approach.  What if we as a community agreed collectively to exert the needed political pressure to modify the funding system and put it on a more science-centered basis: for example,
(1) phase out grant-based university faculty salaries;
(2) give smaller but longer-term grants, hence spreading resources to more investigators;
(3) ask for accountability for serious, relevant activity, but not 'results' of any pre-specified sort;
(4) cap the amount of resources any one lab could have;
(5) centralize more high-cost technical resources, so no investigator had a reason to hoard results, monopolize or drive up acceptable technology, or keep high funding just to maintain a costly resource;
(6) penalize excessive promises and/or raise the bar of expectations (and qualifications for future funding) to be commensurate with the hype in proposals;
(7) remove the need to couch research in fear-mongering rhetoric such as promises to eliminate disease, for example by shifting basic genomics from NIH, or NASA's 'astrobiology' resources, to NSF, where they would get a more scientifically critical eye;
(8) shift resources to agricultural and ecological research to force NIH to focus on the most important, actual health problems, and thus put funds in much more socially important areas;
(9) try (somehow) to reform the media to do their job and be less gullible, willing partners in exaggeration (e.g., by holding investigators to what they say to the media, in terms of actual delivered results);
(10) find some way to fund more truly exploratory, and riskier, projects that may have a higher chance of yielding really important new findings.

These kinds of positive recommendations have been made before, and are probably just dreamland, since the many vested interests would vigorously impede them.  Certainly they're unlikely without strong grass-roots pressure from investigators, which any entrenched system makes very difficult and unlikely.  And of course these suggestions are strategies for stimulating original new thinking; they don't themselves include any specific new scientific ideas.  Nonetheless, it is perfectly appropriate to point out the wasteful inertial basis of current BAU.

And we have not just critiqued BAU; we've often pointed out its tremendous success.  It has to a great extent revealed the nature of Nature, in ways that were suspected theoretically but could not previously be proved.  But that success didn't reveal Nature to be what had been hoped or hyped for: she's not simple and doesn't yield quick and easy miracles.  Too bad!

The problem, at least as we see it, is entrenched dependence on Next Generation sequencing machines (and then the Next-Next Gen ones), that is, a technology-driven race for the ever-bigger and the consequent need for endless large grant support, a belief system that impedes moving to more effective and focused approaches (many of which would still require high technology).  Today there is often not the luxury of the time to think about what those better approaches might be: too many investigators feel--and acknowledge privately--that they have to spend too much time just justifying why BAU (but of course on a much larger scale!) will rescue us (and keep the funding flowing).

It's fair to ask people to stop whinging when we and others point out that, as everyone paying attention actually knows, what we've all been up to has developed an inertia that is no longer optimal.  We all should start lobbying for change in the system, so that we can address the real challenge, which is to look full-face at the world we are interested in, and try to assess where we should be going to understand the laws of nature.

Monday, March 11, 2013

Science vs "Science"

It's got a lot to do with how you get your information, whether you trust science or not. And it's got a lot to do with whether you're exposed to real science or "science."

Like this "science"...



This product has been "proven by science," so we're fools not to buy it! I find that if I'm not watching a Nova or Nature or an episode of anything with Morgan Freeman or Stephen Hawking, most everything else that talks of science on the television is trying to sell me something. Most everything else is "science."

It seems like every beauty product advertisement is using "science" to convince me that I'm butt ugly and that to fix it (or prevent it from worsening) I should give them my perfectly good dollars. It's "science" after all.

I'm kind of stunned that it's legal for for-profits to cry "science" when it's their own study, when they merely asked opinions as evidence for effectiveness, or when they didn't do any studies at all. Science isn't allowed to be so biased. Science is supposed to want to improve your life first and foremost, not con you out of your money.

I'm not just thinking about this today because I've been hibernating this February, plopped in front of the tube, absorbing horrifying beauty ads through my aging, sagging wrinkled face. (I really should take care of it better by smearing money all over it.) I'm thinking about all this right now because of my friend Alice Roberts's nice piece "Childbirth: why I take the scientific approach to having a baby" posted on the Guardian Saturday.

Trends to move childbirth out of the hospital setting have put pressure on mothers and fathers to make decisions about what to do when it's time for theirs. You'd assume that if there's a movement to move things home, it's because some smart, science-minded, compassionate folks have figured out that it's healthier. If you can't stand the draconian and bloated government/insurance mogul-run healthcare system, a movement might feed your existing suspicions or opinions that there could be better ways to have a baby than by blindly following orders that these profit-motivated fascists at hospitals bark at us.

But why assume that home childbirth folks are any less biased, less vested, less driven by self-interests? I don't know but it just seems so common for people to give rebels the benefit of the doubt more often than tradition, than institutions. (Something about "honest signaling" might have just popped into your mind if you've been trained in evolutionary theory.) What Alice found is that information on, that is, data or evidence for, what's healthiest--home or hospital or otherwise (birthing centers, for example)--is kind of difficult to come by!

For starters, she writes, 

"This is partly because the overall risks of maternal and neonatal death are now very small (about five per 100,000 women die in childbirth and four per 1,000 babies), so large numbers of mums are needed to assess relative risks. Maternity provision differs between countries, so looking at risks in other countries, even in Europe and the US, may not be terribly helpful."

Within that small risk there is a lot of jockeying for your support. So the second reason, she says, that makes it hard to find information is, 

"the politics of birth. It can be quite hard for mums-to-be to access impartial evidence and advice when it seems there are plenty of people wanting to influence your decision in one way or the other. Evangelical advocates of home birth often talk about the importance of women's choice and empowerment, as well as instilling distrust in obstetricians. For me, being empowered to make a decision requires access to good evidence and the freedom to make up my own mind. And whilst "maternal satisfaction" is often put forward as an important factor to be taken into consideration, I want to know what the relative risks are. And if there's not yet enough evidence to assess that – I want to know that too."

You'd think we all do. You'd think we all want to know the answer to "where and how will the risks be lowest for having my baby?" But we don't all hold  the belief that it's our right to know the answer to that, the way Alice knows it is, the way Alice demonstrates that it is. And it's not just an issue about the dissenters and the movements spinning information and evidence so we'll see things their way--a very real problem that Alice walks us through in the article. It's the doctors too.

Since the article's been posted in various places I've seen commenters complain how they asked their doctors for papers and numbers to help them make their birth plans and the doctors wouldn't go there. I've never had to make a birth plan but I've had similar experiences with doctors like when, for example, I asked for non-hormonal birth control options because I saw no reason to continue ingesting the stuff when the risks for long-term use aren't known and I was now married and ready to stop taking the pill. My doctor laughed at my question, laughed when I asked for a diaphragm or anything like it, and tried to convince me without any scientific evidence that the pill was fine to take your whole life.

Do I think medical decisions should lie completely in patients' hands? Of course not. We can't all be doctors. But they've got to be better ambassadors of science. They've got to be the best. They've got to be science.

It can't be up to us to figure it out for ourselves, not just because we shouldn't have to but because some of us are terrible at it when we try. This includes bright young people at my university, one for example who had a whole textbook on reproductive biology to answer this homework essay question: Write the life story of an egg. Because she cited it, I know that instead of using her high quality resource, she went straight to livestrong.com for all of her information.

Because of movements like the anti-vaccinators and all the people without celiac disease who won't eat gluten, it's easy to worry that unscientific trends with birth will dial back mortality rates to medieval ones. Heck, it's tempting to worry that when videos like this get around to some people who love all things PALEO, they will make it so.

No wonder so many of us can't trust climate scientists and evolutionary scientists. When it comes to our health, "science" has an agenda that's not always first and foremost what's best for us. When it comes to our beauty, "science" smells like money. If this is all we know of "science" then I'm less surprised by the pushback against biology, ecology, climate, space exploration, etc... that to us scientists seems downright ridiculous.

If we're going to get non-scientists on board with real science, we need to take the word back.

Thursday, February 28, 2013

The Brain Drain.....on our budget!

Well, we've seen case after case of Big Science projects that yielded more hype than heft.  GWAS and other 'omics monsters are examples.  So is the $1 B (as in Billion) study set up to follow up children that hasn't followed a single one up yet.  There is ENCODE, quickly being exposed (e.g., here) as the fund-gobbler with limited or even questionable results that it is.  And the Thousand Genomes project.  And then there are the relentless, repetitive studies of diet and other go-hardly-anywhere or say-the-obvious-again projects that get the headlines.

And now, perhaps not totally coincidentally timed just when the President has to make a decision about cuts, comes the urgent, world-revolutionizing brain mapping project (BAM!).  Even though no details have yet been announced, already it is being blasted by people who know about the project and have the brains to see through its flimsy rationales (e.g., here). And, it's interesting that so many geneticists, who of course benefitted themselves from their own Big Science decade+, are coming out against it.  Not surprisingly, as it will steer funding to something else.

These projects do have some scientific questions, and identify areas of our limited knowledge.  But these are nearly after-thoughts--first, let's look at everything that technology can find without having to think seriously about why we're doing it (i.e., without having to state some useful hypotheses).  And second, they seem to be transparent strategies for being bad rather than good citizens:  NIH, and its PT Barnum leader Francis Collins, lays this Big One at Obama's feet just when it comes time to think about budget cuts.  And the EU has, even in times of austerity, ponied up a half-billion Euros for its own version of the Brain Drain (The Human Brain Project) -- is this a case of the US not wanting to be left behind?  If the President says we can't do this all the way, then will he feel pressured to temper other cuts?  This or something like it must be what's in the minds of the lobbyists for Big Science.

There is waste aplenty, crises in science research and publishing, and the like.  But the very well organized university-science-industry welfare system knows how to propose projects easy to brag about and hard to turn down, to make sure the cuts happen on somebody else's lawn.  Proposing huge new 'omics' projects (do everything all at once, without real ideas) in the face of a budget crisis is basically to sneer at the public good, a scientific arrogance that has all the earmarks of a cynical disregard for society at large and a shallowly selfish form of guild-protection.  Or is this too cynical a view on our part?

No proposed project is entirely worthless, even 'mapping' the human brain.  But the mind-set or stratagem of co-opting research funding by going for Big Science is destructive to science itself, yielding safe, incremental, essentially thought-light (hypotheses need not apply) progress, restricted to a set of investigators who have to toe the line as components of the bigger project.  These are becoming more and more top-down, NIH-administrated mega-groups, rather than independently initiated projects (known as R01 applications), and the same is likely happening in other funding agencies.

Investigators bemoan the reduction in R01 funds, and the flood of applications, but investigators desperate for funds when the chance per application is low churn out applications, and most of them are safe, incremental projects following fads that seem fundable.  Investigators submit many applications a year, and who can blame them?  Unless there is some real squeeze that forces the system to fund what is really innovative or addresses real problems, which is not the tenor of our Big Science times, making more money available for R01s would be good, but won't solve the problems.

There is now a long track record to show that what we say is not so wrong-headed.  Yes, even after you filter out the hurricane of hyperbole, most projects find things and, yes, there are improvements in knowledge or even occasionally in medical care.  Some of them are quite important.  But that's not the same as being worth it, or yielding a greater payback than more focused studies on more clearly soluble problems would have.

The rat-race this is imposing on the academic research system, and the hungry dependence of universities on external grants, are destructive of jobs, job security, morale, and of scientific progress and innovation itself.

This does raise a countervailing problem, however.  We already have an excess of people with advanced degrees who can't get jobs, or the kind of jobs they've trained for.  This is separate from the debate about whether there is a shortage of adequately trained technical science and engineering graduates and whether K-12 and research-obsessed universities are dropping the training ball.  We recruit too many graduate students, largely to do our research for us, or help us teach, so we can keep getting those grants that often don't produce that much, and then the grad students find that there aren't the needed real jobs out there afterwards.  The abuse of the system is worse in professional schools than in real universities with students, because professional schools (medical, public health, etc.) pay little of their faculty's salary and can't live on the tuition of their relatively small student body.  This is not their fault so much as the fault of the system we've allowed to be built.

Thus, cutting research funding to eliminate minimally useful or wasteful projects--reducing the Brain Drain--will force a cut-back in our convenient but excess scientific labor pool, as Karl Marx might have referred to it.  Faculty and staff will lose jobs, as will those who make and distribute the materials labs use, advertise it, publish research journals, and the like.  So, budget adjustment rather than just cuts is what we really need.

We all want things that we do to continue.  We build interest groups, settle into comfortable existence, and fight threats to the status quo.  All of this is only natural.  But why should scientists or bureaucrats have an easier job-finding time than people in 'lower' walks of life?  The proper and humane attitude is for the granting agencies to be public-spirited and volunteer cuts--real cuts--in the research budget, but cuts that are phased and tied to reforms that will continue to provide more secure, if more modest, funding to more (especially younger) investigators, to take the chance that a more diverse, more focused (rather than grandiosely omics-scale), less frenzied science ecosystem will produce greater, better fruit than the current one has been doing.

And our nation would then not need to suffer the impending Brain Drain.

Thursday, February 7, 2013

To cut, or how to cut, that is the question

We have criticized the current science funding and approach many times and in many ways here on MT.  Essentially, there is waste, relentless pressure to churn out safe incremental results, and pressure to rely on Big Science for a variety of reasons, some of which have as much to do with careerism as the science itself. The science establishment has been overpopulated in a Malthusian way, even knowingly, and we have noted how and why this understandably leads to various forms of shading of the evidence, including outright fraud.

These are facts that only the most Pollyannish people in science, or perhaps Francis Collins politicking in defense of NIH's budget, would deny.

We have said, to the contrary, that grant budgets should be cut substantially.  The objective would be to force investigators to work on more cogent problems, more likely to return useful practical or theoretical results to the society that funds the work, and to do that at more reasonable cost.  But we would encourage longer-term funding, and caps on how much any one investigator can have, to spread the wealth.  We know that this, like any such distribution policy, would generate some waste and inefficiency, but it could hardly be more than it is currently, and might increase the chance of real innovative discovery.  Faculty for whom research is part of their job should be given modest research budgets without having to pass 'peer' review, but be accountable instead by periodic demonstration of capable thoughtful work.  Projects, especially big ones, ought to have clear and definitive time limits.  Universities should be weaned off their addiction to grant overhead, and career-building needs to be returned to evaluations based more on originality, depth, and impact than on lobbied, gamed production mills.

But how can it be good to cut funding when it's already so tight?
It has been objected that the probability of funding is already very low--some institutes at the National Institutes of Health are said to be funding only 8% of grant applications--so that cutting could hardly have salubrious effects!  How can we reconcile a belief that the system of science is too bloated with the fact that reduced budgets would make it even harder to be funded?

This is a fair question, and the answer isn't simple, but let's try to explain our view, at least.  First, funding is tight perhaps, but the 8% figure doesn't represent the whole story.

A large amount of research support by NIH at least, and probably NSF as well, goes to internally driven programs or projects, such as funding for DNA sequencing centers, or to semi-competitive contract bids, or to RFPs.  RFPs are NIH's requests for proposals to address some particular area that it has been convinced needs attention; RFPs are drafted with external consultants, and in our experience the funding mainly goes more or less predictably to insiders already established in the field, partly because the RFPs are in effect designed or aimed that way.  That leaves most of the proposals that come opportunistically out of the blue truly far off the fundability mark.

In addition, the low per-proposal rate, whether it's 8% or in fact higher, just leads investigators to submit reams of proposals every year, so that while most proposals may not be funded, most investigators do get some funding.  And if you look at what's funded, you wonder how that could happen if funds were really so tight that only really good science would make the mark.
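The arithmetic behind that is simple; a quick sketch, treating each submission as an independent shot at the 8% figure (an oversimplification, of course, but it makes the point):

# Chance of getting at least one award when each application has an 8% chance:
# 1 - (1 - p)^n for n submissions per year.
p = 0.08
for n in (1, 5, 10, 15):
    print(n, "applications ->", round(1 - (1 - p) ** n, 2), "chance of some funding")
# prints roughly: 1 -> 0.08, 5 -> 0.34, 10 -> 0.57, 15 -> 0.71

So a per-proposal success rate that looks brutal translates, for anyone willing to grind out a dozen applications a year, into better-than-even odds of keeping a lab afloat.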

This same overheated system means investigators spend less time doing any actual work because they're writing so many grant applications, and that leads investigators to routinely overstate what they have done, and to be very safe in proposing what they want to do in the future.  It generates large sets of administrators to handle the processing, etc.  And, of course, it leads to the shading of truth in various ways.  You can't expect otherwise.

Further, there is a very conscious and intentional drive to propose bigger and longer studies on various grounds, some legitimate but many trumped up as rationales, so investigators can manage big groups for long time periods.  Bigger means safer and more status and influence on campus.  You can't fault people for thinking grandly, or seeking more security.  A concentration of funds in Big Science is not good for science overall if it leads to quickly diminishing returns but too-big-to-terminate projects, and this we think is quite common, indeed almost the rule.  The move to 'omics' scale work is very deliberately done and in part for these fiscal and careerist rather than scientific reasons.

Of course, the same, predictably, also drives universities to want more overhead income--universities get a hefty percentage of the budget of just about every grant their faculty members receive, money over and above the grant budget, that goes straight from the funding agency to the university--so there are all sorts of pressures on the system itself to go Big.  There is no reason universities shouldn't keep wanting to expand: in the way we view the world in the business-modeled US and EU, size, growth, and competition are everything.  Investigators who aren't funded can, well, survive however they can survive.  Or not.  Pressures are naturally for 'faculty' to teach less if at all, and do less actual work so they can spend their time and effort on grant-writing.  So naturally we tend to hype every little factoid to the media and publish a relentless stream of (usually never-cited) papers.  Anyone who denies the pervasiveness of this is being disingenuous.

Considering all these factors, however tight funding is, it's in part because the system is still bloated, without constraints to make people do more focused, accountable work.  Or to become more efficient.  Or more honest, if it comes to that.

If budgets were cut to the point that NIH, and perhaps also NSF and others, really had to evaluate what is most necessary, focused, and likely to yield returns, and to stop things that aren't, and to curb university overhead-greed and administrative overload, and to restrain NIH's and NSF's own big publicity hype machines, and so on, we could perhaps--perhaps--make things more scientifically efficient.

If we slowed down and scaled back, and made funding more predictable and longer-term, but per capita smaller, and changed to a way of thinking that led to fewer but better-trained graduate students, fewer post-docs, smaller faculty and research staff, smaller and less bloated operations overall, then science might be advanced and perhaps even at a lower cost.  And funding, though more modest perhaps, would be easier to get.

But without real tightening, it is in nobody's interests--certainly not those who accept the Darwinian worldview that life is all about relentless competition and winner-take-all rewards--to change the way they do business.  As it is now, competing more frenetically is the strategy that is perceived to have the best chances of success.

But what about the jobs at stake?
Any cuts will involve threats to jobs and hence draw resistance of whatever sort universities and investigators and other lobbyists can muster.  But there are honorable ways to cut.  Phased budgetary cut-backs could give universities time to adjust.  They could downsize by not replacing personnel who leave or retire, for example.  Phased change that is clearly signaled with enough time to adapt is a proper and feasible way to do things.

A return to more measured expectations and modest but focused work, done in a humanely phased way, could rectify some of the issues and improve the yield of knowledge and 'translatable' results to the public. 

Thursday, January 24, 2013

A 'paradigm shift' in science....or a maneuver?

Thomas Kuhn's 1962 book The Structure of  Scientific Revolutions suggested that most of the time we practice 'normal' science, in which we take our current working theory--he called it a 'paradigm'--and try to learn as much as we can.  We spend our time at the frontiers of knowledge, and at some point we have to work harder and harder to make facts fit the theory.  Something is missing, we don't know what, but we insist on forcing the facts to fit.

Then, for reasons hard to account for but in a way that happens regularly enough that it's a pattern Kuhn could outline (even if rare), someone has a major insight, and shows how a totally unexpected new way to view things can account for the facts that had heretofore been so problematic.  Everyone excitedly jumps onto the new bandwagon, and a 'paradigm shift' has occurred. Even then, some old facts may not be as well accounted for, or the new paradigm may just explain issues of contemporary concern, leaving older questions behind.   But the herd follows rapidly, and an era of new 'normal science' begins.

The most famous paradigm shifts involve people like Newton and Galileo in classical physics, Darwin in biology, Einstein and relativity, and the discovery of continental drift.  Because historians and philosophers of science have in a sense glamorized the rare genius who leads such changes, the term 'paradigm shift' has become almost pedestrian:  we all naturally want to be living--and participating--in an important time in history, and far, far too many people declare paradigm shifts far too frequently (often humbly referring to their own work).  It's become a kind of label to justify whatever one is doing, a lobbying tactic, or a bit of wishful thinking.

Is 'omics' a paradigm shift?
The idea that empiricism (observation), rather than just thinking, was the secret to understanding the world grew out of the Enlightenment period in Europe, starting about 400 years ago.  But pure empiricism--just gathering data--was rejected in the sense that the idea was for the facts to lead to theoretical generalizations, the discovery of the 'laws' of Nature, which is what science is all about.  This led to the 'scientific method': forming hypotheses based on current theory, setting up studies specifically to test the hypothesis, and adjusting the theory according to the results.

The 17th-19th centuries were largely spent in gathering data from around the world, a first, rather extensive kind of exploration.  But by the 20th century such 'Victorian beetle collection' was sneered at, and the view was that to do real science you must be constrained by orderly hypothesis-driven research.  Data alone would not reveal the theory.

With advances in molecular and computing technology, and the complexity of life being documented, things changed.  In the 'omic' era, which began with genomics, the ethos has changed.  Now we are again enamored of massive data collection unburdened by the necessity to specify what we think is going on in any but the most generic terms.  The first omics effort, sequencing the human genome, led to copy-cat omics of all sorts (microbiomics, nutrigenomics, proteomics, .....) in which expensive and extensive technology is thrown at a problem in the hope that fundamental patterns will be revealed.

We now openly aver, if not brag, that we are not doing 'hypothesis-driven' research, as if there is now something wrong with having focused ideas!  Indeed, we now often treat 'targeted' research as a kind of after-omics specialty activity.  Whether this is good or not, I recently heard a speaker refer to the  omics approach as a 'paradigm shift'.  Is that justified?

We must acknowledge that, before we could even dream about genomic-scale DNA sequencing and the like, our understanding of genetic function and of the complex genome had perplexed us in many ways.  If we had no 'candidate' genes in mind--no specific genetic hypothesis--for some purpose, such as to understand a complex disease, but were convinced for some reason that genetic variation must be involved, what was the best way to find the gene(s)?  The answer was to go back to 'Victorian beetle collection'.  Just grab everything you can and hope the pieces fall into place.  It was, given the new technology, a feeling of hope that this might help (even though we had many reasons to believe that we would find what we indeed did find, as some of us were writing even then).

The era of Big Science
Omics approaches are not just naked confessions of ignorance.  If that were the case, one might say that we should not fund such largely purposeless research.  No, more is involved.  Since the Manhattan Project and a few others, it did not escape scientists' attention that big, long, too-large-to-be-canceled projects could sequester huge amounts of funding.  We shouldn't have to belabor this point here: the way universities and investigators, their salaries and careers, became dependent on, if not addicted to, external grants, the politics of getting started down a costly path enabling one to argue that to stop now would throw away the money so-far invested (e.g., current Higgs Boson/Large Hadron Collider arguments?).  Professors are not dummies, and they know how to strategize to secure funds!

It is fair to ask two questions here:
First, could something more beneficial have been done, perhaps for less cost, in some other way?  Omics-scale research of course does lead to discoveries, at least some of which might not happen or might take a long time to occur.  After the money's been spent and the hundred-author papers published in prestige journals, one can always look back, identify what's been found, and argue that that justifies the cost. 

Second, is this approach likely to generate importantly transformative understanding of Nature?  This is a debatable point, but many have said, and we generally agree, that the System created by Big Science is almost guaranteed to generate incremental rather than conceptually innovative results.  (E.g., economist Tim Harford talked about this last week on BBC Radio 4, comparing the risk-taker science of the Howard Hughes Medical Institute with the safe and incremental science of the NIH.)  Propose what's big in scale (to impress reviewers or reporters), but safe--you know you'll get some results!  If you compare 250,000 diabetics to 500,000 non-diabetic controls and search for genetic differences, across a genome of 3.1 billion nucleotides, you are bound to get some result (even if it is that no gene stands out as a major causal factor, that is a 'result').  It is safe.
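Much of that bound-to-get-some-result guarantee is just multiple testing.  A small sketch; the one-million-SNP figure is our illustrative assumption about how many variants such a study might actually test, not a number from any particular project:

# With ~1 million variants tested, even a genome with no true diabetes signal
# yields tens of thousands of nominally 'significant' hits at p < 0.05 --
# which is why GWAS use a Bonferroni-style genome-wide threshold instead.
n_tests = 1_000_000
alpha = 0.05
print("Expected false positives at p < 0.05:", int(n_tests * alpha))   # 50000
print("Genome-wide significance threshold:", alpha / n_tests)          # 5e-08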

This is not providing a daring return on society's largesse, but it is the way things largely work these days.  We post about this regularly, of course.  The idea of permanent, factory-like, incremental, over-claimed, budget-inflated activity as the way to do science has become what too many feel is necessary in order to protect careers.  Rarely do they admit this openly, of course, as it would be self-defeating.  But it is very well-known, and almost universally acknowledged off the record, that this strategy of convenience seriously under-performs, but is the way to do business.

Hypothesis-free?
This sort of Big Science is often said to be 'hypothesis free'.  That is a big turn away from classical Enlightenment science in which you had to state your theory and then test it.  Indeed, this change itself has been called a 'paradigm shift'.

In fact, even the omics approach is not really theory- or hypothesis-free.  It assumes, though often not stated in this way, that genes do cause the trait, and that the omics data will find them.  It is hypothesis-free only in the sense that we don't have to say in advance which gene(s) we think are involved.  Pleading ignorance has become accepted as a kind of insight.

For better or worse, this is certainly a change in how we do business, and it is also a change in our 'gestalt' or worldview about science.  But it does not constitute a new paradigm about the nature of Nature!  Nothing theoretical changes just because we now have factories that can systematically churn out reams of data.  Indeed, the theories of life that we had decades ago, even a century ago, have not fundamentally changed, even though they remain incomplete and imperfect and we have enormous amounts of new understanding of genes and what they do.

The shift to 'omics' has generated masses of data we didn't have before.  What good that will do remains to be seen, as does whether it is the right way to build a science Establishment that generates good for society.  However that turns out, Big Science is certainly a strategy shift, but it has so far generated no sort of paradigm shift.

Tuesday, October 30, 2012

Science funding: what it's really (or at least largely) about

An Op-Ed in Monday's hurricane-nervous NY Times is a plea for more federal science research funding.  It's by a former science adviser and of course it's an advocacy piece.  It attempts to show the benefits to society of university-based, federally sponsored research--the usual claim that this leads to new medicines, cleaner energy, and more science jobs.

Of course, these things are true in principle and in some instances actually true.  We'd create more science jobs if we actually did our duty with respect to raising the expectations and standards for our educational programs, especially for undergraduates (but that's not what professors' jobs are all about any more).  The extent to which we really generate usable research that leads to products is something we don't know much about, because industry loves foisting its responsibility (to do research for its own products) off on the public, while still being secretive and competitive so that most of the key research is done in house.  Pharmas are trying to cooperate in their support for basic facts which will then be turned over to their own private (and secret) value-added research.

By buying research from universities, companies impose various levels of privatization of the results (and commercial incentives for faculty), which undermines the proper role of public institutions, and in some ways actually privatizes public research.

Federal research is, in a naturally expectable way, bureaucratized to give it inertia, so that program officers' portfolios are stable or enlarged, and prominent investigators' labs (and their universities' general funds) have continuity.  That may sound good, but it means safe, incremental work without any serious level of accountability for producing what one promised (here, we refer not just to miracle results, which can't be promised, but to things like adequate statistical power to detect what one proposes one has the power to detect, or the advances in things like disease therapy that are promised in grant applications).

One can always crab about this waste of funding generated by the way we know how to work the system in our favor.  We ourselves have been regularly funded for decades, so this post is not a matter of sour grapes on our part.  But there is, from an anthropological point of view, a broader truth--one that shows in a way the difference between what are called a culture's emics and its etics.  The emics are what we say we're all about, and the etics are what an observer can see we are really up to.  Here, part of the usually unspoken truth about huge government investments is that citizens are given promises by the priests (the recipients) of some specific good in return for investment.  But the momentum and inertia are largely for a different reason.  As the Times author says:
Moreover, the $3.8 billion taxpayers invested in the Human Genome Project between 1988 and 2003 helped create and drive $796 billion in economic activity by industries that now depend on the advances achieved in genetics, according to the Battelle Memorial Institute, a nonprofit group that supports research for the industry. 

So science investments not only created jobs in new industries of the time, like the Internet and nanotechnology, but also the rising tax revenues that made budget surpluses possible.
This is both a post-hoc rationale (one can always look backwards and identify successes and thus try to justify the expense, and its continuation, but one is not usually compelled to argue what else, or what better, could have been done by government--or by taxpayers keeping their money--had the policies been different).

At the same time, it's a very legitimate argument.  If science investment doesn't lead to a single real advance in health or energy efficiency, but does lead to jobs for lots of people--not just the scientists, but the people who make, transport, market, advertise, and design their gear, their reagents, even their desks and computers--then those funds are circulating in society and in that sense doing good.

It's a poor kind of justification for the investment relative to its purported purpose.  But life is complex.  Sequencing machines or enzymes or petri-dishes are made by people.  The challenge to identify real societal needs (or to decentralize) and achieve success without just building self-interested groups and bureaucracy is a major one.  Often it leads to disasters, like wars or poor agricultural management, and so on.  But it also is part of the engine of a society, whatever that society's emic delusions about what it's up to may be.

Monday, May 28, 2012

Big Science, Stifling Innovation, Mavericks and what to do (if anything) about it

We want to reply to some of the discussion and dissension related to our recent post about the conservative nature of science and the extent to which it stifles real creativity.  Here are some thoughts:

There are various issues afoot here.

In our post, we echoed Josh Nicholson's view that science doesn't encourage innovation, and that the big boy network is still alive and well in science.  We think both are generally true, but this doesn't mean that all innovators are on to something big, nor that the big boy network doesn't ever encourage innovation.  Some mavericks really are off-target (we could all name our 'favorite' examples) and funding should not be wasted on them.  The association of Josh Nicholson with Peter Duesberg apparently has played a role in some of the responses to his BioEssays paper.  That specific case somewhat clouds the broader issues, but it was the broader issues we wanted to discuss, not any particular ax Nicholson might have to grind.

One must acknowledge that the major players in science are, by and large, legitimate and do contribute to furthering knowledge, even if they do build empires along the way (or that's how they build them).  There is nonetheless entrenchment by which these investigators have projects or labs that are very difficult to dislodge.  Partly that's because they have a good track record.  But only partly.  They also typically reach diminishing returns, and the politics are such that the resources cannot easily be moved in favor of newer, fresher ideas.

It's also true that even incremental science is a positive contribution, and indeed most science is almost by necessity incremental: you can't stimulate real innovation without a lot of incremental frustration or data to work from.  Scientific revolutions can occur only when there's something to revolt against!

If we suppose that, contrary to fact, science were hyper-democratized such as by anonymizing grant proposers' identities (see this article in last week's Science), the system would be gamed by everyone.  Ways would be found to keep the Big guys well funded.  Hierarchies would quickly be re-established if the new system stifled them.  The same people would, by and large, end up on top.  Partly--but only partly--that's because they are good people.  Partly, as in our democracy itself, they have contacts, means, leverage, and the like.

And it's very likely that if the system were hyper-democratized, a huge amount of funding would be distributed among those who would have trouble being funded otherwise.  Since most of us are average, or even mediocre, most of the time, this would be a large expenditure if it were really applied to a major fraction of total resources, with contributions likely watered down even further than is the case now.  But that kind of broad democratization is inconceivably unlikely.  More likely we'd have a tokenism pot, with the rest going to the current system.

Historically, it seems likely that most really creative mavericks, the ones whom our post was in a sense defending, often or perhaps typically don't play in the stodgy university system anyway.  They drop out and work elsewhere, such as in the start-up business world.  To the extent that's true, a redistribution system would mainly fund the hum-drum.  Of course, maybe the budgets should just be cut, encouraging more of science to be done privately.  Of course, as we say often on MT, there are some fields (we won't name them again here!) whose real scientific contributions are very much less than other fields, because, for instance, they can't really predict anything accurately, one of the core attributes of science.

One can argue about where public policy should invest--how much safe but incremental vs risky and likely to fail but with occasional Bingos!

It is clear from the history of science that the Big guys largely control the agenda and perhaps sometimes for the good, but often for the perpetuation of their views (and resources).  This is natural for them to do, but we know very well that our 'Expert' system for policy is in general not a very good one, and we keep paying for go-nowhere research.

Perhaps the anthropological reality is that no feasible change can make much difference.  Utopian dreams are rarely realized.  Maybe serendipitous creativity just has to happen when it happens.  Maybe funding policy can't make it more likely.  Such revolutionary insights are unusual (and become romanticized) because they're so rare and difficult.

This kind of conservative hierarchy and tribal behavior is really just a part of human culture more broadly.  Still, we feel that the system has to be pushed to correct its waste and conservatism so it doesn't become even more entrenched.  Clearly new investigators are going to be in a pinch--in part because the current system almost forces us to create the proverbial 'excess labor pool', because the system makes us need grad students and post-docs to do our work for us (so we can use our time to write grants), whether or not there will be jobs for them.

Again, there is no easy way to discriminate between cranks, mavericks who are just plain wrong, those of us who romanticize our own deep innovative creativity or play the Genius role, and mediocre talent that really has no legitimate claim to limited resources.  The real geniuses are few and far between.

A partial fix might be for academic jobs to come with research resources as long as research was part of the conditions for tenure or employment.  Much would be wasted on wheel-spinning or trivia, and careerism, of course.  But it could at least potentiate the Bell Labs phenomenon, increasing the chance of discovery.

We cannot expect the well-established scientists generally to agree with these ideas unless they are very senior (as we are) and no longer worried about funding....or are just willing to try to tweak the system to make it better.  When it's just sour grapes, perhaps it is less persuasive.  But sometimes sour grapes are justified, and we should listen!

Friday, May 25, 2012

You scientist, we want you to get ahead....but not too FAR ahead!

A paper in the June issue of BioEssays is titled "Collegiality and careerism trump critical questions and bold new ideas" and, no, we didn't write it.  The subtitle is "A student’s perspective and solution"; the author is Joshua Nicholson, a grad student at Virginia Tech.  It's a mark of the depth of the problem that it is recognized and addressed by a student, in a very savvy and understanding way.  But of course it's students who will most feel its impact as they begin their careers, when money is tight and the old boy network is alive and well.  The situation is so critical today that even a student can sense it without the embittering experience of years of trying to build a post-training career.
As students we are taught principles and ideals in classrooms, yet as we advance in age, experience, and career, we learn that such lessons may be more rhetoric than reality.
Nicholson is not the first to notice that the current system of funding and rewards encourages more of the same, not innovation.  Scientists, he notes, are discouraged from having radical, or even new, ideas in everything from grant applications to even just the expression of ideas.  Indeed, numerous examples exist of brilliant scientists who have said they couldn't have done their work within the system: Darwin and Einstein among them, and, whatever you think of Gaia, its conceptor but also innovative inventor James Lovelock has said the same (he did so recently on BBC Radio 4's The Life Scientific).  Other creative people, in the arts, have felt the same way about universities (e.g., Wordsworth the poet, Goya the painter).

The US National Institutes of Health and National Science Foundation both pay lip service to innovation, yes, but still within the same system of application and decision-making.  Nicholson says that the NIH and NSF in fact admit that these efforts are not encouraging innovation (as those of us who have been on such panels and never seen an original project actually funded can attest--usually the reviewers pat the proposer on the head patronizingly and say make it safe and resubmit).  He blames this, correctly, on the review structure: peer review.  Yes, experts in a field are required to evaluate new ideas, but it is they who are often most unwilling to accept them.

To be fair, this is usually not explicit, and reviewers may not even be aware of their inertial resistance to novelty.  But Nicholson explains that:
(i) they helped establish the prevailing views and thus believe them to be most correct, (ii) they have made a career doing this and thus have the most to lose, and (iii) because of #1 and #2 they may display hubris [2–4, 9, 10]. If, historically, most new ideas in science have been considered heretical by experts [11], does it make sense to rely upon experts to judge and fund new ideas?
He concludes that a student looking to build a career therefore must choose between getting funding by following the crowd and doing more of the same, or being innovative but without any money... that is, driving a taxi.

He goes on to say that the system not only encourages safe science, but cronyism as well.  We would add that this includes hierarchies, which foster obedience by many to the will of the few.  Because the researcher's affiliation, collaborators, co-authors, publication record and so on are a part of the whole grant package, it's impossible for reviewers to not use this information in their judgments and review a grant impartially. As Nicholson puts it, the whole emphasis is on a scientist "being liked" by the scientific community.  Negative findings are rarely published, which in effect means that scientists can't disagree with each other in print, and peer review ensures that scientists stay within the fold.  Nicholson believes this has all created a culture of mediocrity in science.  We can say from experience that submitting grants anonymously is unlikely to work because, like 'anonymous' manuscripts sent out for review, one can almost always guess the authors.

There is of course a problem.  Most off-center science is going to go nowhere.  Real innovation is found in only a small fraction of the ideas that claim it (sincerely or as puffery).  Accepted wisdom has been hard-won, and that's a legitimate reason to resist.  So not everyone whose ideas are unorthodox is brilliant or right.  How to tell in advance is the real problem: there is no good way, and that provides a ready-made excuse for generic resistance.

Nicholson's solution to restructuring "the current scientific funding system, to emphasize new and radical work"?  He proposes that the grant review system change to include non-scientists who don't understand the field, as well as scientists who do. "Indeed," he says, "the participation of uninformed individuals in a group has recently been shown to foster democratic consensus and limit special interests." And, "crowd funding" has been successful in a lot of non-scientific arenas, he notes, and could conceivably be used to fund grants as well.

It will come as no surprise to regular MT readers that we endorse Joshua Nicholson's indictment of the current system.  Peer review seems necessary, even admirable and democratic, and it was established largely and explicitly to break up and prevent Old Boy networking and to make public research funding more 'public'.  Indeed, money no longer goes quite so exclusively to the elite universities.  But politicians promise things to attract a crowd of funders, who want the rewards.  And peer review, like any system, can be gamed; a pessimist (or realist?) is likely to argue that after we've relied on it for a while, it produces just the kind of stale, non-democratic, old boy network that Nicholson describes--similar hierarchies, often with many of the same hierarchs resurfacing.

It's unlikely that the grant system will undergo radical transformation any time soon, because too many people would have to be dislodged, though perhaps things will ease when the old goats retire and get out of the way.  But there are rumblings in the world of scientific publishing, and demands for change, and this makes us hopeful that these growing challenges to the system can have widespread effects in favor of innovation and a more egalitarian sharing of the wealth (in the form of academic positions, grant money, publications, and so on).  The demands are coming from scientists boycotting Elsevier because it profits handsomely from scientists' free labor; from scientists and others petitioning for open and free access to papers reporting the results of studies paid for by the taxpayer; and from physicists circumventing the old-boy peer review process by publishing online, or by first passing their manuscripts through open-ended peer review online.  And, yes, there are open access journals (e.g., PLoS), though generally at high cost to authors.

The system probably can't change too radically so long as science costs money and research money doesn't come along with salary as part of an academic job, as it probably should since research is required for the job!  Instead, the opposite is true: universities hunger for you to come do your science there largely because they expect you to bring in money (they live on the overhead)!  And humans are tribal animals so the fact that who you know is such an intrinsic part of the scientific establishment is not a surprise--but that aspect of the system can and should be changed.  The reasons that science has grown into the lumbering, conservative, money-driven, careerist megalith that it is can be debated, as can the degree to which it is delivering the goods, even if imperfectly.  But it is possible that we're beginning to see glimmers of hope for change.  The best science is at least sometimes unconventional, and there must be rewards for that as well.

Friday, April 27, 2012

Metaphysics in science, Part I: "Call us when you actually find something."

What constitutes a 'finding' in science? 
It is supposed to be a discovery of something about Nature, rather than, say, just an 'idea' about Nature.  It's supposed to be real and not a matter of metaphysics.  Metaphysics had a long history in philosophy, when philosophy was the lead-in to what we call science today.  In today's sneering world of science, science is fact, and metaphysics is made-up Blarney rather than stuff that's real.

But what is a 'finding' today?  Administrative interests are constantly on the prowl for results that their company, institute, portfolio, or clients can use to help lobby for more funding, and the news media aren't far behind.  But everybody's busy, so what counts as a finding in this sense is something with a melodramatic picture that can be stated faster than the word 'science', and that will grab the interest of someone whose attention span doesn't go beyond a Tweet.  Some populations are known to anthropologists by names like 'the basket weavers'--we'll be known as 'the boasters'.

This attitude is everywhere, and it's detrimental to good science.  Cakes take a certain time to bake and not all food is fast food.  Science has to bake to come out as good as it should be, and can be.  Quick answers yelled from rooftops (of Nature's offices) and rushed out of the oven for display purposes are notorious for false starts and hyped findings that are not confirmed later, for reasons noted a few years ago by John Ioannidis.

By Hooke or by crook: science in the 21st century
When Robert Hooke first turned the microscope's eye onto nature, we got the first glimpse of things that had previously been impossible to see.  He documented many of them in his 1665 Micrographia, like details of insect bodies and the uneven surface of polished pins.  Hooke turned what the microscope revealed into visible-scale drawings, making flea hairs visible... and fleas were important!  We were naive then, and learned a tremendous amount from the new lens on the world.  Blowing things up was legitimate.



Today we live in a similar era, when every tiny finding, visible only through a massively-humongously-parallel-generation sequencer, is blown up--that is, blown out of proportion, in our puffery-laden, lobbying, PR-driven world.  Unlike Micrographia, however, not all of today's fleas are actual 'discoveries' in the same sense.  Yet the PR machine wants to report 'findings', and anything that can be claimed to be one (with a nice figure) is going to be trumpeted.

If it's more than 140 characters, it's not real!
We're pressured to consider only simplistic sound-bite-sized results as true 'findings', or be embarrassed if we haven't got a slew of papers reporting our sound-bite-sized things in 'high impact factor' journals, or hyped by the New York Times or the BBC.  Apparently 'just' understanding Nature, most of whose traits are subtle and not melodramatic, isn't real science.  Hype is what the public is sold, it's what's sold on television and on front pages, and it's basically all that congressional staffers and their like are told about.

Now, this might be OK if the recipients of the hype--such as the policy makers who have to cough up the funds--weren't so inundated with claims that Everything They See is Phenomenal that they know full well how to ignore most of it.  Or, worse, if they actually don't realize that what they're seeing are snow jobs, we're in deep trouble.  So our version of micrographia has become the blowing up of mainly trivial things we hadn't seen before, and making them sound as if they were previously hidden giants.

Two negatives do not make a positive, but one does!
Consistent with all of this is the notorious under-reporting of negative results.  It's worse than unethical in the drug trial realm, because it leads to obvious bias, unjustified profiteering, and actual harm (sometimes lethal) to patients.  Sometimes it's intentional, but even when it's just that investigators think negative results are not worth reporting, or the 'premier' journals don't think they're worth bothering about (i.e., won't sell copy) and won't publish them, it's harmful because it systematically biases and misleads science.

Take GWAS.  People do publish the results, but many of those who aren't boasting of their purportedly revolutionizing success (because of the 'positive' findings) are bemoaning the failure of GWAS.  "Well, see," they say, "you never find anything!  GWAS are a failure!"

Nobody thinks more than we do that GWAS and their 'next generation' successors are oversold, badly and often for bad reasons.  Nonetheless, and we've said this before, GWAS have been a fine success!  The 'negative' findings (no real blockbuster genes, but instead many tiny genetic contributions to risk) are not a negative but a positive: they positively tell us how complex nature is.  They are findings!

Findings about the real nature of Nature may be dramatic, as Hooke found in his day.  They do occasionally turn up in our own day, and that makes science fun and interesting.  But most of Nature is complex and not amenable to quick-fix answers.  If the object of science is to plumb the truths of Nature, then complexity itself, difficult as it is to work out or explain (say, in evolutionary terms), should be thrilling enough: the grain, without our being blinded in a blizzard of chaff.

A subtle, nuanced, careful approach to science is often overlooked these days because of the obsession with the current idea of what constitutes a 'finding'.  Whether this state of affairs is just part of the game in a complex middle-class culture, or has tragic implications for the kind of work that could be done but isn't, is anybody's guess.  Fixing the system would take major reform, and may not be in the cards given the nature of our society.

Wednesday, April 25, 2012

The ills, wills, and won'ts of science

Science is a rather large segment of our society, and a thoroughly human endeavor.  It's not apart from the rest of our social, economic, and political world, even as it attempts to understand that world.  With thousands of universities needing students, faculty, and resources, and all of us seeking prominence and recognition, the rather 'bourgeois' aspects of this social phenomenon are not unexpected.  How could it be otherwise than that we establish hierarchies, advocacy groups, and tribal factions competing both for ideas and for funds or, more nobly, to show the others how (as we believe) the world in our area of expertise really is?

There will inevitably be pyramids of privilege and uneven wealth distribution, and in our society competition will drive this based on the widespread belief that competition (while harsh) is good for something, if not for the human soul.  In such an environment we can expect some outright cheating (pretty rare, fortunately), lots of sources of biased reporting and disingenuous 'null hypothesis' testing, dissembling, hyperbole and self-promotion.  Bureaucrats want to keep their portfolios of research projects large and richly funded.  University administrators want the overhead.  Journals want the material, and since they are businesses, the splashier the better (e.g., see this analysis of what's wrong with science publishing and how to fix it).  Ranking systems like 'impact factors' drive such bean-counting environments, naturally--how could it be otherwise?

Then there are the companies that make the instrumentation and other kinds of laboratory gear, including computers and software, that research depends on.  These companies, only naturally, will do what it takes to persuade us that their latest models are vital to success.  And what about the media?  They demand splash for their survival.  That's only natural, too.  And politicians?  They thrive on promises of health miracles, world dominance, and scientific thrills, and on various kinds of demagoguery by which fears are raised and the wisdom of funding research to relieve them is promised.

So the hand-wringing and finger-pointing about these problems are all only natural.  Should we stop doing it, then?  We think the answer is absolutely not!  First, there will always be faults in any human endeavor, and in our type of society, for an endeavor this large, faults will be built (over time, by us!) into the system.  Some will corner markets better than others.  Most work will be chaff, even if there will always be amazingly insightful, skilled work that positively contributes to knowledge.  For every Beethoven or Wordsworth, Leonardo or Darwin, there will be a hive of drones who leave little mark on history.

Major changes rarely arise from brilliant new discoveries (the Darwins of the world); most occur incrementally, the tanker of science gradually changing course as fads come and go, glittering labs slowly fading as a new fad (whether good or bad) takes over.

Genome sequencing and GWAS as an example
We got many MT 'hits' last week by daring to point out that a substantial number of people are saying, and writing, that the payoff of GWAS and whole genome sequencing in large numbers of humans is not great, that people may be tiring of it, and that the long-term cost commitment it requires prevents other areas (and investigators) from being funded.  There are large interests doing such work and committed to it, for reasons that include their already vested interests as well as various scientific rationales.  They have large amounts of money, in many countries, and scientists know very well that a big project acquires political investment that people will then be unwilling or unable to close down.  There are many examples; big biobanks, just aborning, will be another.  They'll claim down the road that they are too big to fail, er, to have their funding cut.

Of course, the focus on, or obsession with, genes and 'omics' (large-scale, exhaustive, generally hypothesis-free and exploratory enumerations) may not fade.  Predictions that it is playing itself out through overkill or hyper-hype may be wrong--we'll see.  Indeed, such commitments cannot fade very rapidly, even if they were to deliver nothing at all (which isn't the case), once the hold on huge long-term funding is established.  But whether it pays off or not, there are many who feel it has co-opted too much else relative to its payoff.

This is but one example of the issues being raised about how science, The Enterprise, is being conducted these days, not by wealthy back-yard tinkerers but by a large middle class housed in large institutions.  Many worry about the faults, but of course they (and, sometimes, we) are on the cranky fringe that always exists.  Cranks can have their own agendas, including jealousy, of course.  But without at least some nudging from those who see the faults--and the faults in modern science are deep and wide--course corrections might be even harder to make, and lack of correction much costlier to the society that pays for it.

Tuesday, April 17, 2012

A bit of a storm

Our post yesterday asking whether support for whole genome sequencing was fading seems to have triggered a bit of a storm.  We know this because our hit count for the day was astronomical (well, for us).  We noticed that a bunch of tweets were sending readers our way, so, naturally enough we thought we'd check out what people were saying about the post on Twitter.  And it was interesting.

Most people, though not all, who made an editorial comment disagreed with us.  And, OK, it's hard to go into detail in 140 characters, and the comments were pretty uninspired, shall we say (along the lines of "Is whole genome sequencing fading? The answer is No!"), but even so they were, to us, an indication that we'd hit a nerve.  As far as we can tell, the argument is that because sequencing is still being done, it should continue to be.

This looks to us like some serious circling of the wagons: people with a vested interest in the status quo protecting their interests.  OK, fair enough, and understandable.  But this does the science a disservice.  There are serious issues here--tweeting about how sequencing has to happen because it's happening just doesn't do them justice.

As Ken posted yesterday, writing about why whole genome sequencing hasn't met the promises made about it:
There are too many variants to sort through, the individual signal is too weak, and too many parts of the genome contribute to many if not most traits, for genomes to be all that important--whether for predicting future disease, normal phenotypes like behaviors, or fitness in the face of natural selection.
As he also wrote, there are some traits for which one or a few genes are important, and working those out is where the genetics money should be spent.  Doing whole genome sequencing because we'll surely learn something even if we don't yet know what, or because personalized medicine is just over the horizon, or just because we can--these are not good reasons to keep spending the kind of money on this that we're spending.  We know enough now to know that genomic contributions to most traits are multiple, varied and complex.

This is not an admission of defeat.  It is an acknowledgement that we've learned a lot of genetics in the last century, reinforced clearly by the new sequencing technology; and what we've learned is that most traits are multifactorial, due to gene-by-gene and/or gene-by-environment interactions, that there are most often many pathways to the same phenotype, and so on.  We should give up the conceit that we're going to be able to predict and prevent diseases across the board based on genomes, and get on with solving problems.  Those that are genetic need genetic approaches.  But there are other issues, and other ways, to learn about evolution, disease, and the basic nature of life.

Wednesday, April 4, 2012

The Big Scientist Theory of Science

Samson, alpha male gorilla in Givskud Zoo
The Great Man, or Big Man, Theory of History posits that the course of history is determined by the ways in which powerful men use their power.  Thus, history can be explained simply by understanding the men who made it.  This idea was first bruited about by the Scottish historian Thomas Carlyle in the 1840s ("The history of the world is but the biography of great men").  The opposite view, that society makes great men (or makes men great), was proposed by Herbert Spencer in the mid-1800s.  And one of the most famous novels in history, War and Peace, was Tolstoy's attempt to show that the Big Man theory was wrong.  Neither view has won out; both still have their proponents.

Here we propose the Big Man Theory of Science.  Or, let's call it the Big Scientist Theory, so as not to exclude women.  Perhaps in a Spencerian world science would advance from discovery to discovery, going where the natural world takes it.  The peer review system, oversight by governing bodies, and so on are, in theory, supposed to ensure that this is more or less the way science moves forward.  The tide of progress would sweep all along in its path.

But in a Carlylean world, science advances as scientists with big names and lots of money or political influence wish it to.  A scientist made a discovery, got famous, got awards, and that brought more awards.  Now the Big Scientist writes visionary papers paving the way forward, makes decisions about where grant money goes, and, more insidiously perhaps but with just as much impact, makes decisions about what lesser scientists can say in their publications, and doesn't cite publications by lesser scientists questioning his or her work.

Big Scientists determine the nature of the grant applications that will be written and funded for some time to come, they determine where the technological and academic investments are going to go, and so on.  They have labs full of DNA sequencers, but the human genome has already been sequenced?  Then the next big scientific endeavor must require their use.  The Big Scientist even tells people s/he knows that what s/he is proposing isn't good science, but that it has to be done because the grant money must keep coming.  And science, and large numbers of scientists, follow, because everyone has to follow the money.

When it comes to the media, they first call the Big Scientist.  S/he then gets to expound on this or that, which again serves to set the agenda for the future, and importantly (or mainly) for future research investment.  With few if any exceptions, the Big Scientist advocates an agenda that just so happens to involve things that fit his/her interests and lab needs.
 
These are subtle and not-so-subtle ways that specific interests are perpetuated, and this engenders conflicts of interest that turn out to be comfortably in keeping with how science works today.  It is not a utopian vision of how things should be done, but it is the reality.  Probably in the very long run, though, it doesn't matter--important progress will eventually be made, and scientific ideas that have outlived their sell-by date will be replaced by better ones.  But in the short and medium (and sometimes long) run, fads and influence control the momentum, as Big Science run by Big Scientists turns out to be good primarily for Big Scientists.

These considerations are about science, but they are also a part of science.  The objectivity of neutral observers trying their best to falsify their own ideas is about as much a fairy tale as you'll ever hear.

Wednesday, December 14, 2011

A Pig's Nose-on?

The news we've all been paying... er, waiting (but not praying)... for is now out, or at least the installment that says we may have found something, that it may be definitive, but that we need billions more to be sure, because it may not be.  Your life will be different now that there might, or could, possibly perhaps be a Higgs boson, that elusive thing that puts the lead in your pencil, so to speak--gives mass to fundamental particles--without which you would wither away to a mere nothing (as Higgsy hopefully won't).

Well, as the figure shows, Higglety Pigglety, it really is a mere nothing (well, a near-nothing if it exists, or nothing if it doesn't).  The arrow points to the teensy, weensy boson flying around amidst a cloud of dust particles or muons or neutrinos, or something like that.  Now don't be cynical and think this is just a screenshot from some video game.  Yes, it resembles the Droid logon screen, too.  But believe us, it's the real thing (if the real thing exists).



What we're seeing today is by now a standard marketing strategy: a carefully timed media announcement event.  A claim that we've now got it, the Big Finding, but that it's only the beginning.  That doesn't make it a false announcement, but there is still the possibility that this is a pig's nose on some junk scraps rather than a Higgs boson in some scraps of detector signal.  Whether this is beauty or beast, only time will tell.

It's easy (and justified?) to have some fun with this long-played-out Lamborghini of a science story (given its cost), but it again reflects the current view that science must now address Very Big questions on a huge long-term scale.  The rationale is that small studies can't get at the complex or minute effects that we have to find in a sea of data.  Small-scale experimentation may be in order once the 'signal' is found, to understand it in detail, but the prevailing idea is that by and large the easily findable big effects are already known.  In genetics, for example, the added rationale is that small effects on individuals, if they're common, can add up to large numbers of affected people on a population scale (100 million people each at a risk of one in 100,000 would mean 1,000 individuals affected), or that some very strong effects are so rare they could never generate statistically convincing evidence on their own, yet might be devastating to the few people who inherit them.
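To make that arithmetic concrete, here is a minimal sketch in Python (purely illustrative; the population sizes and risks are hypothetical, not drawn from any particular study) of how tiny per-person risks scale into population-level counts:

    # Illustrative only: expected number of affected individuals when a
    # small per-person risk is spread over a large population.
    def expected_affected(population, per_person_risk):
        return population * per_person_risk

    # The example from the text: 100 million people, each at 1-in-100,000 risk.
    print(expected_affected(100_000_000, 1 / 100_000))   # -> 1000.0

    # Hypothetical smaller risks still yield non-trivial counts at scale.
    for risk in (1e-4, 1e-5, 1e-6):
        print(risk, expected_affected(300_000_000, risk))

The point of the toy calculation is simply that an effect negligible for any one person can still matter at the scale of whole populations, which is the argument big projects lean on.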

This is true, in theory, in physics as well. The Higgs boson affects every bit of matter, including you, your eyes reading the screen, and the screen itself.  So this is a trivially small physical effect individually but a totally profound one in the overall scheme of the cosmos.  It is claimed that (if it actually exists) it will tie together many loose threads in theoretical physics.  Lots of jobs and work to do for physicists, and maybe even stimulation for biologists to ask themselves whether there is something fundamental missing from our thinking.

Whether or when it will have any direct effect on anyone other than students in physics classes, only the media hype-engine knows (whether it's real or not, it may show up on their exams!).  It is edifying and elegant, as is the experiment to hunt ol' Higgsy down, if true.  For the middle class it may seem worth its cost, and it may be better than most television (and will generate countless television specials as well).  But whether it, like big genomic science, is worth the cost to those billions barely scraping a living together, with less than a boson's worth to eat, is another story.

Thursday, January 14, 2010

Accidents do happen, but....

Touching on what seems to have turned into our theme of the week, John Hawks links to a story in the Telegraph yesterday reporting that a third of academics would leave Britain if threatened cuts to 'curiosity-driven' grants go through. This comes on top of deep cuts in funding for higher education in Britain across the board. According to the story, future research will be funded based on its perceived social and economic benefits; close to 20,000 people have signed a petition protesting this change.
...critics claim the move risks wiping out accidental discoveries as university departments struggle to support professors working on the kind of ground-breaking experimentation that led to the discovery of DNA, X-rays and penicillin.
But hold on.  'Curiosity-driven' research is different from accidental discoveries.

Ken, Malia Fullerton and I wrote a paper not long ago saying that epidemiology isn't working, and, basically, suggesting that people recognize this and come up with some better ideas. We had in mind specifically epidemiology's turn to genetics to explain chronic diseases, including diseases like type II diabetes and asthma, for which, even if people do carry some genetic susceptibility, the more important risk factors are clearly environmental, as shown by the fact that incidence of these diseases has risen sharply in recent decades.

We called the paper "Dissecting complex disease: the quest for the Philosopher's Stone?" (Not the Philosopher's Stone of Harry Potter fame, our reference was to the alchemist's dream of a substance that could turn base metals into gold.) The paper was published as one of the point/counterpoint papers in the International Journal of Epidemiology.

This was an interesting exercise. The paper wasn't reviewed in the usual sense, with us able to correct and revise before publication. The paper was published just as we submitted it, followed by commentaries by prominent epidemiologists. We knew people could find holes in our argument, and we waited for months for the comments, imagining how devastating they were going to be, and how we'd respond. But, when we finally got the commentaries, we were amazed. We could have done a much better job of blasting our paper than any of the comments we got. This was somewhat reassuring in that no one said we were wrong, but disappointing because we had very much wanted to start a dialog on the issues.

How is this relevant to the 'curiosity-driven research' story?  Well, one of the major defenses of the status quo in the commentaries about our paper--of spending hundreds of millions of taxpayer dollars on research that everyone knows isn't working--was that we can't cut the funding to epidemiology because everyone knows that good stuff is often found by accident.  This strikes us as a very strange justification for maintaining a hugely expensive system in which researchers spend inordinate amounts of time and energy writing grants proposing research everyone recognizes isn't going to lead to much, never mind improve public health, and in which equally inordinate amounts of time, energy and money are tied up on the part of reviewers, who are also expected not to say that the emperor has no clothes (or the Philosopher has no Stone)--all in the hope that somebody will one day stumble across something unexpected that really will be progress.

This is not the same as 'curiosity-driven research'.  'Why is the sky blue?' is an honest question, and whether or not taxpayers should fund the research needed to answer it can be debated on its merits.  If the UK has decided to no longer fund basic science, but only research that will lead to patents, or whatever 'social merits' are, that's very different from the idea that we should maintain a system that isn't working on the off chance that something good will come of it.  That decision can be debated, but at least it's an honest debate.