We have recently commented on the flap in statistics circles about the misleading use of significance test results (p-values) rather than a more complete and forthright presentation of the nature of the results and their importance (three posts, starting here). There has been a lot of criticism of what boils down to misrepresentative headlines publicizing what are in essence very minor results. The American Statistical Association recently published a statement about this, urging clearer presentation of results. But one may ask questions about both the statement and the practice in general. Our recent set of posts discussed the science. But what about the science politics in all of this?
The ASA is a trade organization whose job it is, in essence, to advance the cause and use of statistical approaches in science. The statistics industry is not a trivial one. There are many companies who make and market statistical analytic software. Then there are the statisticians themselves and their departments and jobs. So one has to ask: is the ASA statement and the other hand-wringing sincere and profound, or, and to what extent, is this a vested interest protecting itself? Is it a matter of finding a safe harbor in a storm?
Statistical analysis can be very appropriate and sophisticated in science, but it is also easily mis- or over-applied. Without it, it's fair to say that many academic and applied fields, the sociopolitical sciences and much of biomedical science among them, would be in deep trouble. Without statistical methods to compare and contrast sampled groups, these areas rest on rather weak theory. Statistical 'significance' can be used to mask what is really low-level informativeness, or low importance, under a patina of very high quantitative sophistication. Causation is the object of science, but statistical methods too often do little more than describe some particular sample.
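To make the masking point concrete, here is a minimal simulated illustration (not from the post; the sample size and effect size are hypothetical numbers chosen for the demonstration): with a large enough sample, even a practically negligible difference between two groups sails past the conventional p &lt; 0.05 threshold.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
n = 200_000

# Two simulated populations whose true means differ by only 0.03
# standard deviations: a difference of no practical importance
# in most real-world settings.
a = [random.gauss(0.00, 1.0) for _ in range(n)]
b = [random.gauss(0.03, 1.0) for _ in range(n)]

# Two-sample z-test; the normal approximation is safe at this sample size.
se = (stdev(a) ** 2 / n + stdev(b) ** 2 / n) ** 0.5
z = (mean(b) - mean(a)) / se
p = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"observed difference: {mean(b) - mean(a):.4f} (in sd units)")
print(f"p-value: {p:.2e}")
```

The p-value comes out vanishingly small while the effect itself remains tiny: a 'highly significant' result that tells us almost nothing of practical importance, which is exactly the gap between significance and informativeness at issue here.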
When a problem arises, as here, there are several possible reactions. One is to stop and realize that it's time for deeper thinking: that current theory, methods, or approaches are not adequately addressing the questions that are being asked. Another reaction is to do public hand-wringing and say that what this shows is that our samples have been too small, or our presentations not clear enough, and we'll now reform.
But if the effects being found are, as is the case in this controversy, typically very weak and hence not very important to society, then the enterprise and the promised reform seem rather hollow. The reform statements have had almost no component that suggests that re-thinking is what's in order. In that sense, what's going on is a stalling tactic, a circling of wagons, or perhaps worse, a manufactured excuse to demand even larger budgets and longer-term studies, that is to demand more--much more--of the same.
The treadmill problem
If that is what happens, it will keep scientists and software outfits and so on, on the same treadmill they've been on, the one that has led to the problem. It will also be contrary to good science. Good science should be forced by its 'negative' results to re-think its questions. This is, in general, how major discoveries and theoretical transformations have occurred. But with the corporatization of academic professions, both in the commercial and in the trade-union sense, we have an inertial factor that may actually impede real progress. Of course, those dependent on the business will vigorously resist or resent such a suggestion. That's normal and can be expected, but it won't help unless a spirited attack on the problems at hand goes beyond more-of-the-same.
Is it going to stimulate real new thinking, or mainly just strategized thinking for grants and so on?
So is the public worrying about this a holding action or a strategy? Will we see real reform, rather than just symbolic, pro forma reform? Based on the way things work these days, real reform seems unlikely.
There is a real bind here. Everyone depends on the treadmill and keeping it in operation. The labs need their funding and publication treadmills, because staff need jobs and professors need tenure and nice salaries. But if the great majority of findings in this arena are weak at best, then what journals will want to publish them? They have to publish something and keep their treadmill going. What news media will want to trumpet them, to feed their treadmill? How will professors keep their jobs, or research-gear outfits sell their wares?
There is fault here, but it's widespread, a kind of silent conspiracy, and not everyone is even aware of it. It's been built up gradually over the past few decades, like the frog in slowly heating water who doesn't realize he's about to be boiled alive. We wear the chains we've forged in our careers. It's not just a matter of cost, or of understandable careerism. It's a threat to the integrity of the enterprise itself.
One person, applying for a gene mapping study to find genomic factors even weaker than the few already well established, was asked whether, rather than finding still more genes, the known genes might now be investigated instead. The reply: "But, mapping is what I do!" Many a conversation I've heard has been quiet boasting about applying for funding for work that's already been done, so that one can try something else (something not being proposed for reviewers to judge).
If this sort of 'soft' dishonesty is part of the game (if, indeed, you think it's only 'soft'), and yet science depends centrally on honesty, why do we think we can trust what's in the journals? How many seriously negating details are not reported, or buried in huge 'supplemental' files, or made invisible by intricate data manipulation? Gaming the system undermines the very core of science: its integrity. Laughing about gaming the system adds insult to injury. Yet gaming the system is being taught to graduate students early in their careers (it's called 'grantsmanship').
We have personally encountered this sort of attitude, expressed only in private of course, again and again in the last couple of decades, during which big studies and genetic studies have become the standard operating mode in universities, especially in biomedical science (it's rife in other areas, like space research, too, of course).
There's no bitter personal axe being ground here. I've retired, had plenty of funding through the laboratory years, and our work was published and recognized. The problem is one of science, not a personal one. The challenge of understanding genetics, development, causation and so forth is manifestly not an easy one, or these issues would not have arisen.
It's only human, perhaps. The last couple of generations of scientists systematically built up an inflated research community, and the industries that serve it, much of which depends on research grant funding, largely at the public trough, with jobs and labs at stake. The members of the profession know this, but are perhaps too deeply immersed to do anything major to change it, unless some sort of crisis forces that upon us. People well placed in the system don't like these thoughts being expressed, but all except the proverbial 1%-ers, cruising along just fine in elite schools with political clout and resources, know there's a problem, and know they dare not say too much about it.
The statistical issues are not the cause. The problem is a combination of the complexity of biological organisms as they have evolved, and the simplicity of human desires to understand (and not to get disease). We are pressured not just to understand, but to translate that into dramatically better public and individual health. Sometimes it works very well, but we naturally press the boundaries, as science should. But in our current system we can't afford to be patient. So, we're on a treadmill, but it's largely a treadmill of our own making.