But the Management prize caught our attention as well.
MANAGEMENT PRIZE: Alessandro Pluchino, Andrea Rapisarda, and Cesare Garofalo of the University of Catania, Italy, for demonstrating mathematically that organizations would become more efficient if they promoted people at random.
REFERENCE: "The Peter Principle Revisited: A Computational Study," Alessandro Pluchino, Andrea Rapisarda, and Cesare Garofalo, Physica A, vol. 389, no. 3, February 2010, pp. 467-72.

Those of us over a certain age remember the splash that The Peter Principle made when it first came out in 1969. Author Laurence J. Peter explained, in a way that made complete sense, why we were destined to be forever surrounded by maddening incompetence: as the prize-winning paper puts it, "Every new member in a hierarchical organization climbs the hierarchy until he/she reaches his/her level of maximum incompetence" -- after which, of course, they are promoted no further. This is perplexing.
As Pluchino et al. explain:
Despite its apparent unreasonableness, such a principle would realistically act in any organization where the mechanism of promotion rewards the best members and where the competence at their new level in the hierarchical structure does not depend on the competence they had at the previous level, usually because the tasks of the levels are very different to each other. [In the paper] we show, by means of agent based simulations, that if the latter two features actually hold in a given model of an organization with a hierarchical structure, then not only is the Peter principle unavoidable, but also it yields in turn a significant reduction of the global efficiency of the organization.

So, in the worst of all possible worlds (which sounded eerily familiar to many in 1969), most positions in most organizations are filled by the person least able to carry out the required responsibilities.
Did this Ah-ha! realization change the world? Of course not -- we need not point out that the economic fiascos of the past two years are perfect evidence of this. No doubt Pluchino et al. had this in mind as they explored this issue further:
Within a game theory-like approach, we explore different promotion strategies and we find, counterintuitively, that in order to avoid such an effect the best ways for improving the efficiency of a given organization are either to promote each time an agent at random or to promote randomly the best and the worst members in terms of competence.

So, the Peter Principle happens because of the widespread, perhaps even universal, assumption that if an employee excels at a job at one level, s/he will excel at the job on the next rung of the organizational hierarchy, even if it actually requires very different skills. This is just common sense, right?
But the problem is, as Pluchino et al. point out, that "common sense in many areas of our everyday life, often deceives us." To demonstrate exactly this, they ran simulations of what happens to organizational efficiency when the most competent, the least competent, or a random selection of employees are promoted. They found that random selection -- or, equivalently, promoting a mix of the most and the least competent -- yielded the highest global efficiency.
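To make the mechanism concrete, here is a minimal sketch of such a simulation -- not the authors' actual model (theirs cascades vacancies down the whole pyramid), and the level sizes, weights, and competence distribution below are simplified stand-ins chosen for illustration:

```python
import random

random.seed(42)

# Hypothetical pyramid: positions per level, top to bottom, and the
# "responsibility" weight each level contributes to global efficiency.
LEVELS = [1, 5, 11, 21, 41]
WEIGHTS = [1.0, 0.9, 0.8, 0.6, 0.4]

def draw_competence():
    """Competence on a 1-10 scale: a Gaussian clamped to the scale."""
    return min(10.0, max(1.0, random.gauss(7.0, 2.0)))

def efficiency(org):
    """Responsibility-weighted mean competence, as a percentage of the maximum."""
    total = sum(w * sum(level) for w, level in zip(WEIGHTS, org))
    best_possible = sum(w * 10.0 * len(level) for w, level in zip(WEIGHTS, org))
    return 100.0 * total / best_possible

def simulate(strategy, steps=2000):
    """Run one organization under a promotion strategy; return final efficiency."""
    org = [[draw_competence() for _ in range(n)] for n in LEVELS]
    for _ in range(steps):
        lvl = random.randrange(len(org) - 1)     # a vacancy opens above the bottom
        slot = random.randrange(len(org[lvl]))
        below = org[lvl + 1]                     # ...and is filled from the level below
        if strategy == "best":                   # promote the most competent member
            i = max(range(len(below)), key=lambda j: below[j])
        else:                                    # promote a member at random
            i = random.randrange(len(below))
        # Peter hypothesis: competence at the new level is drawn afresh,
        # independent of competence at the old one.
        org[lvl][slot] = draw_competence()
        below[i] = draw_competence()             # the vacated slot goes to a fresh hire
    return efficiency(org)
```

Note the asymmetry: under the Peter hypothesis both strategies hand the promoted agent a freshly drawn competence, so the cost of promoting the best is simply that the lower levels keep losing their strongest members. Averaged over many runs, random promotion comes out ahead, echoing the paper's result.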
Fine, so they've solved the global efficiency problem, and the world will surely take notice. But let's take this back around to science -- sidestepping any possible effects of The Peter Principle in academia, which isn't the reason we're interested in this paper. Instead, to us this paper relates to the general problem of determining cause and effect, something we write about here a lot.
Science, another human endeavor, is of course just as loaded with incorrect assumptions about cause and effect as business. It isn't just that scientific knowledge is always limited; it's that we cling to beliefs we have reason to think are incorrect, because acknowledging as much would be inexpedient. This has to do with vested interests, career momentum, and so on.
We also cling to deeper beliefs: for example, that things simply must have a cause, and that if so, only technology and the like keeps us from identifying it. We just do not like to accept randomness any more than we absolutely have to. Mendelian transmission is an example where, with some exceptions, we do accept limited predictability. But we fight it: many if not most evolutionary biologists, and hangers-on of all sorts who invoke 'evolution' or Darwin's name to advance some favorite point of view, simply do not want biological traits to be affected by chance. They want them to be predictable from genes. But we know this is true only to a limited extent.
Grant reviews and funding decisions, exam scores and grades, promotion and tenure reviews, and many other aspects of academic and scientific life clearly contain a large random component. Yet we toil away to make them seem carefully and critically evaluated.
Another thing all of this shares is a reliance on 'experts' -- even though we know that experts are often as wrong as right when it comes to many of the most important decisions, be they whether to go to war, how to regulate economies, or science funding policies.
Perhaps it's part of the human condition to deny things we know are true, to assume the world is more causally knowable than we know it is, and to hope the bombs we release in the process don't fall on us.
In a serious sense, while we know this is a problem, it is not at all clear what to do about it. Society, science, universities, companies, and so on have to act and make decisions. How could this be done better than we do it now? It sounds funny to suggest doing it at random -- funny enough to be worthy of an Ig Nobel prize -- but what serious, applicable lessons can be gleaned from that?