Thursday, November 30, 2017

Statistics controversy: missing the p-oint.

There is a valuable discussion in Nature about the problems that have arisen from the (mis)use of statistics for decision-making.  To simplify, the issue is that a rather subjectively chosen cutoff, or p, value leads us to dichotomize our inferences, when the underlying phenomena may or may not be dichotomous.  For example, to explain things simplistically, if a study's results pass such a cutoff test, it means that the chance the observed result would arise if nothing were going on (as opposed to the hypothesized effect) is so small--less than a fraction p of the time--that we accept the data as showing that our suggested something is going on.  In other words, rare results (using our cutoff criterion for what 'rare' means) are taken to support our idea of what's afoot.  The chosen cutoff level is arbitrary and used by convention, and its use doesn't reflect the various aspects of uncertainty or alternative interpretations that may abound in the actual data.
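To make the dichotomization concrete, here is a minimal sketch in Python, with entirely made-up numbers: it estimates how often a result at least as large as the observed one would arise if nothing were going on, and then shows how a conventional cutoff turns that continuous quantity into a binary verdict.

```python
import random

random.seed(1)

# Hypothetical study: 50 measurements with a small true effect of 0.2
data = [random.gauss(0.2, 1.0) for _ in range(50)]
observed_mean = sum(data) / len(data)

# Null hypothesis: nothing is going on (true mean 0).  Estimate how
# often a sample mean at least this extreme would arise by chance alone.
null_means = []
for _ in range(10_000):
    sample = [random.gauss(0.0, 1.0) for _ in range(50)]
    null_means.append(abs(sum(sample) / len(sample)))

p_value = sum(m >= abs(observed_mean) for m in null_means) / len(null_means)

# The conventional move: the same continuous evidence becomes a
# yes/no verdict once an arbitrary cutoff is imposed.
verdict = "significant" if p_value < 0.05 else "not significant"
print(p_value, verdict)
```

A p of 0.049 and a p of 0.051 carry essentially identical evidence, yet the cutoff line sends them to opposite verdicts, which is the point at issue.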

The Nature commentaries address these issues in various ways, and suggestions are made.  These are helpful and thoughtful in themselves but they miss what I think is a very important, indeed often the critical point, when it comes to their application in many areas of biology and social science.

Instrumentation errors
In these (as in other) sciences, various measurements and technologies are used to collect data.  These are mechanical, so to speak, and are always imperfect.  Sometimes it may be reasonable to assume that the errors are unrelated to what is being measured (for example, that their distribution is unrelated to the value of a given instance) and don't affect what is being measured (as quantum measurements can do).  In that case, correcting for them in some reasonably systematic way, such as by assuming normally distributed errors, clearly helps adjust findings for inadvertent but causally unconnected errors.

Such corrections seem to apply quite validly to social and biological, including evolutionary and genetic, sciences.  We'll never have perfect instrumentation or measurement, and often don't know the nature of our imperfections.  Assuming errors uncorrelated with what is being sought seems reasonable even if approximate to some unknown degree. It's worked so well in the past that this sort of probabilistic treatment of results seems wholly appropriate.

But instrumentation errors are not the only possible errors in some sciences.

Conceptual errors: you can't 'correct' for them in inappropriate studies
Statistics is, properly, a branch of mathematics.  That means it is an axiomatic system, an if-then way to make deductions or inductions.  When and if the 'if' conditions are met, the 'then' consequences must follow.  Statistics rests on probabilism rather than determinism, in the sense that it relates to, and is developed around, the idea that some phenomena only occur with a given probability, say p, and that such a value somehow exists in Nature.

It may have to do with the practicalities of sampling by us, or by some natural screening phenomenon (as in, say, mutation, Mendelian transmission, natural selection). But it basically always rests on some version or other of an assumption that the sampling is parametric, that is, that our 'p' value somehow exists 'out there' in Nature.  If we are, say, sampling 10% of a population (and the latter is actually well-defined!) then each draw has the same properties.  For example, if it is a 'random' sample, then no property of a potential samplee affects whether or not it is actually sampled.
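The parametric-sampling assumption described above can be sketched in a few lines of Python, using an invented population: a well-defined population with a fixed trait frequency, and a 'random' 10% sample in which no property of an individual affects whether it is drawn.  Under those 'if' conditions, the estimates really do cluster around the true parameter.

```python
import random

random.seed(2)

# A well-defined population with a true, fixed 'parametric' trait
# frequency: 300 carriers among 1000 individuals, so p = 0.30.
population = [1] * 300 + [0] * 700

# A 'random' 10% sample: random.sample gives every individual the
# same chance of being drawn, regardless of its trait value.
estimates = []
for _ in range(1000):
    sample = random.sample(population, 100)
    estimates.append(sum(sample) / 100)

# When the 'if' conditions hold, sample estimates cluster tightly
# around the parameter that really is 'out there'.
mean_est = sum(estimates) / len(estimates)
print(mean_est)
```

The machinery works precisely because the parameter exists by construction here; the question the post raises is whether Nature obliges in the same way.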

But note there is a big 'if' here: Sampling or whatever process is treated as probabilistic needs to have a parameter value!  It is that which is used to compute significance measures and so on, from which we draw conclusions based on the results of our sample.

Is the universe parametric?  Is life?
In physics, for example, the universe is assumed to be parametric.  It is, universally, assumed to have some properties, like the gravitational constant, Planck's constant, the speed of light, and so on.  We can estimate the parameters here on earth (as, for example, Newton himself suggested), but assume they're the same elsewhere.  If observation challenges that, we assume the cosmos is regular enough that there are at least some regularities, even if we've not figured them all out yet.

A key feature of a parametric universe is replicability.  When things are replicable because they are parametric--that is, have fixed universal properties--then statistical estimates and their standard deviations etc. make sense, and should reflect the human-introduced (e.g., measurement) sources of variation, not Nature's.  Statistics is a field largely developed for this sort of context, or others in which sampling was reasonably assumed to represent the major source of error.

In my view it is more than incidental, but profound, that 'science' as we know it was an enterprise developed to study the 'laws' of Nature.  Maybe this was the product of the theological beliefs that had preceded the Enlightenment or, as I think at least Newton said, 'science' was trying to understand God's laws.

In this spirit, in his Principia Mathematica (his most famous book), Newton stated the idea that if you understand how Nature works in some local example, what you learned would apply to the entire cosmos.  This is how science, usually implicitly, works today.  Chemistry here is assumed to be the same as chemistry on any distant galaxy, even those we cannot see.  Consistency is the foundation upon which our idea of the cosmos, and in that sense classical science, has been built.

Darwin was, in this sense, very clearly a Newtonian.  Natural selection was a 'force' he likened to gravity, and his idea of 'chance' was not the formal one we use today.  But what he did observe, though implicitly, was that evolution was about competing differences.  In this sense, evolution is inherently not parametric.

Not only does evolution rest heavily on probability--chance aspects of reproductive success, which Darwin only minimally acknowledged--but it rests on each individual's own reproductive success being unique.  Without variation, and that means variation in the traits that affect success, not just 'neutral' ones, there would be no evolution.

In this sense, the application of statistics and statistical inference in life sciences is legitimate relative to measurement and sampling issues, but is not relevant in terms of the underlying assumptions of its inferences.  Each study subject is not identical except for randomly distributed 'noise', whether in our measurement or in its fate.

Life has properties we can measure and assign average values to, like the average reproductive success of AA, Aa, and aa genotypes at a given gene. But that is a retrospective average, and it is contrary to what we know about evolution to assume that, say, all AA's have the same fitness parameter and their reproductive variation is only due to chance sampling from that parameter.
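The distinction between a retrospective average and a shared fitness parameter can be sketched in Python, with invented numbers: each hypothetical AA individual gets its own unique expected reproductive success, yet the data still yield a tidy-looking 'fitness of AA'.

```python
import random

random.seed(3)

# Hypothetical illustration: each AA individual has its own unique
# expected reproductive success, reflecting its particular genetic
# background and environmental experience -- there is no single
# fitness parameter that all AA's share.
unique_fitness = [random.uniform(0.5, 1.5) for _ in range(1000)]

# Realized offspring counts add further chance variation on top.
offspring = [random.gauss(f, 0.3) for f in unique_fitness]

# The retrospective 'fitness of AA' is just an average over all that
# heterogeneity -- a summary of what happened, not a parameter that
# any individual actually obeyed.
retrospective_avg = sum(offspring) / len(offspring)
print(retrospective_avg)
```

The average comes out near 1.0 either way, so the retrospective number cannot distinguish a population of identical individuals plus noise from one in which every individual's prospects were different.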

Thinking of life in parametric terms is a convenience, but is an approximation of unknown and often unknowable inaccuracy.  Evolution occurs over countless millennia, in which the non-parametric aspects can be dominating.  We can estimate, say, recombination or mutation or fitness values from retrospective data, but they are not parameters that we can rigorously apply to the future and they typically are averages among sampled individuals.

Genetic effects are unique to each background and environmental experience, and we should honor that uniqueness as such!  The statistical crisis that many are trying valiantly to explain away, so they can return to business as usual (even if not reporting p values), is a crisis of convenience, because it makes us think that a bit of different reportage (confidence limits rather than p values, for example) will cure all ills.  That is a band-aid, a convenient port in a storm, but an illusory fix.  It does not recognize the important, or even central, degree to which life is not a parametric phenomenon.

Wednesday, November 29, 2017

There is no obstetrical dilemma

Josie Glausiusz wrote a very nice piece published at Undark today called,

Of Evolution, Culture, and the Obstetrical Dilemma: Anthropologists are revisiting long-held beliefs about human evolution and the difficulty of human childbirth

In it, I'm thrilled to get something I furiously worry about across to a wide audience with this part toward the end of the piece:

“I worry that this idea [of the obstetrical dilemma] is justifying status-quo high rates of C-sections and other childbirth interventions,” Dunsworth says. “People say, ‘it’s just evolution — there’s nothing we can do, and here’s how technology helps, and that’s fabulous.’ But I know we’re overdoing it. Everybody knows that.”
It's a complicated issue, the obstetrical dilemma (OD), so it's no surprise that there are missing pieces in this particular discussion. The most important, biggest flaw in OD thinking is its assumption that we're born early, an assumption featured at the start of the piece with Karp and Washburn. But it's not true. We are not born early, and that point didn't make it into the piece. When you stop believing we're born early, the whole thing starts to crumble.

And here's where I am now with some of this...

First of all, we need to change the story so that it's not, no matter how slightly, bolstering unnecessary childbirth interventions. Though my OB/GYN seemed unfamiliar with the obstetrical dilemma hypothesis when I explained it to her as she gave me a pap smear, I think the thinking is pervasive in medical schools. This hunch is getting support on Twitter as we speak. (For some context, I am the first that I know of, several decades after the 'obstetrical dilemma' was born, to tack on "hypothesis" to the name of the idea.)

And, second of all, here's where I get "crazy" (see the piece for crazy) all over again... Okay. In 2012, in one of a series of blog posts about our then-recent paper questioning the obstetrical dilemma hypothesis, I wrote this:

Women aren't called broads for nothing. We have, on average, larger dimensions of the pelvis that comprise the birth canal (linked into broader hips) than men do and this is not just relatively but absolutely and this is not just in the U.S., this is species-wide (1). 
There is no better explanation for this than it's due to selection for successful childbirth.

I think I was wrong. I think I know a better explanation for why women have bigger "obstetric" dimensions in the pelvis than men and I THINK IT'S BECAUSE WE HAVE FEMALE-SPECIFIC ORGANS THAT GROW INSIDE AND OCCUPY THAT SPACE AND MEN DO NOT.

Stay tuned for more about vaginal, clitoral, and uterine growth and space-taking... yessssss.

Tuesday, November 28, 2017

Sherlock Holmes, the Galtonian!

In the late 1800s, in England, Darwinism and its intellectual cousin, genetic determinism, were the hot topics.  And Darwin's literal cousin, Francis Galton, was riding high, too.  He was read by the intelligentsia and his ideas both reflected, and seeped into, daily thinking about life.

The Sherlock Holmes story "The Adventure of the Reigate Squires" was published in 1893, and in it we see a reflection of those times in the view of the role of inheritance that was then common (and still runs rampant for some today).  On a murder case, our sleuth was examining the paper shown here, a written note that was vital to solving the crime:

In his perceptive diagnosis of the writing on the note, Sherlock noticed that alternate words were written in different hands, that is, by two different people. The way the t's and e's were written gave that away.  In the story, this implicated two brothers, because the note was written to tie them together to their crime by each brother writing part of the note.

So what?  To Holmes, there was a profound reason he could connect the brothers, not just two different conspirators writing one note, to the crime.  As he said:
"There is a further point, however, which is subtler and of greater interest. There is something in common between these hands. They belong to men who are blood-relatives. It may be most obvious to you in the Greek e's, but to me there are many small points which indicate the same thing. I have no doubt at all that a family mannerism can be traced in these two specimens of writing. I am only, of course, giving you the leading results now of my examination of the paper. There were twenty-three other deductions which would be of more interest to experts than to you. They all tend to deepen the impression upon my mind that the Cunninghams, father and son, had written this letter.

In 1893, Mendel had not yet been rediscovered, so there was no genetics, and Darwin's nebulous 'gemmules' were basically quantitative determinants of traits.  But using such concepts at least implicitly, Francis Galton had been writing much about inheritance and family resemblance at that time, including behavioral traits such as intelligence, and one can presume that Conan Doyle, a physician by training, would have known about that work.  At least, years later and in regard to fingerprints in a later Holmes story, the two were in at least some correspondence.  Galton coined the word eugenics in 1883, ten years before the above Holmes story, an idea he advanced, in the spirit of viewing human traits as inherent and thus amenable to improvement: preferential breeding to proliferate the positive, and the opposite to remove the negative, traits from the human population.

Art imitates life....

Tuesday, November 21, 2017

The Knowledge Factory Crisis: A different, anthropological way to view universities

Nothing we humans do lives up to its own mythology. We are fallible, social, competitive, acquisitive, our understanding is incomplete, and we have competing interests to address, in our lives and as a society.  I posted yesterday about universities as 'knowledge factories', reacting to a BBC radio program that discussed what is happening in universities when research findings seem unrepeatable.

That program, and my discussion of what is going on at universities, took the generally expressed view of what universities are supposed to be, and examined how that is working.  The discussion concerned technical aspects that related to the nature of scientific information universities address or develop.  That is, in this context, their 'purpose' for being.  How well do they live up to what they are 'supposed' to be?

Many of my points in the post were about the nature of faculty jobs these days, and the way in which pressures lead to the over-claiming of findings, and so on.  I made some suggestions that, in principle, could help science live up to its ideal.

Here in this post, however, I want to challenge what I have said about this.  Instead, I want to take a somewhat distanced viewpoint, looking at universities from the outside, in a standard kind of viewpoint that anthropologists take, rather than simply accepting universities' own assessments of what they are about.

Doing poorly by their ideal standard
My post noted ways in which universities have become not just a 'knowledge factory', but more crass business factories, as making money blatantly and increasingly over-rides their legitimate--or at least, stated--role as idea and talent engines for society.  Here's a story from a few years ago about that, which is still cogent.  The fiscal pursuit discussed in this post is part of the phenomenon.  As universities are run more and more as businesses, which happens even in state universities, they become more exclusive, belying their original objective, which (as in the land-grant public universities) was to make higher education available to everyone.  In addition to being money-makers themselves, universities have become a boon for student-loan bankers, too.

But this is a criticism of university-based science, and expressed as it relates to how universities are structured.  That structure, even in science, leads to problems of science.  One might think that something so fundamentally wrong would be easy to see and to correct.  But perhaps not, because universities are not isolated from society--they are of society, and therein lies some deep truth.

Excelling hugely as viewed anthropologically
If you stop examining how universities compare to their ideals, or to what most people would tell you universities were for, and instead look at them as parts of society, a rather different picture emerges.

Universities are a huge economic engine of society.  They garner their very large incomes from various sources: visitors to their football and basketball stadiums, students whose borrowed money pays tuition, and agencies private and public that pour in money for research.  Whether or not they are living up to some ideal function or nature, they are a major and rather independent part of our economy.

Their employees, from their wildly overpaid presidents down to the building custodians, span every segment of society.  The money universities garner pays their salaries, and buys all sorts of things on the open commercial economy, thereby keeping many other people gainfully employed.  Their activities (such as the major breakthrough discoveries they announce almost daily) generate material and hence income for the media industries, print and electronic, which in turn helps feed those industries and their relevant commercial influences (such as customers, television sales, and more).

Human society is a collective way for us human organisms to extract our living from Nature.  We compete as individuals in doing this, and that leads to hierarchies.  Overall, over time, societies have evolved such that these structures extract ever more resources and energy.  Via various cultural ideologies we are able to keep things going smoothly enough, at least internally, so as not to disrupt this extractive activity.

Religion, ownership hierarchies, imperialism, militaries, and other groups have self-justifications that make people feel they belong.  This contributes to building pyramids--whether they be literal, or figurative such as religions, universities, armies, political entities, social classes, or companies.  Often the justification is religious--nobility by divine right, conquest as manifest destiny, and so on.  That not one of these resulting societal structures lives up to its own ideology has long been noted.  Why should we expect universities to be any different?  These are the cultural ways people organize themselves to extract resources for themselves.

Universities are parasites on society, very hierarchical with obscenely overpaid nobles at the top?  They show no limits on the trephining they do on those who depend on them, such as graduating students with life-burdening debt?  They churn through those who come to them for whom they claim to 'provide' the good things in life?  Of course!  Like it or not, by promising membership and a better life, they are just like religions or political classes or corporations!

Institutions may be so caught up in their belief systems that they don't adapt to the times or competitors, or they may change their actions (if not always their self-description).  If they don't adapt they eventually crumble and are replaced by new entities with new justifications to gain popular appeal or acceptance.  However, fear not, because relative to their actual (as opposed to symbolic) role in societies, universities are doing very well: at present, they very clearly show their adaptability.

In this anthropological sense, universities are doing exceedingly well, far better than ever before, churning resources and money over far faster than ever before.  Grumps (like us) may point out their failure to live up to their own purported principles--but how is that different from any other engine of society?

In that anthropological sense, whether educating people 'properly' or not, whether or not their claimed discoveries stand up to scrutiny, universities are doing very, very, very well.  And that, not the purported reason that an institution exists, is the measure of how and why societal institutions persist or expand.  Hypocrisy and self-justification, or even self-mythology, are always part of social organization.  A long-standing anthropological distinction captures this: emics versus etics--what people say they do, versus what they actually do.

Yes, there will have to be some shrinkage with demographic changes, and fewer students attending college, but that doesn't change the fact that, by material measures, universities are incredibly successful parts of society.

What about the intended material aspect of the knowledge factory--knowledge?
But there is another important side to all of this, which takes us back to science itself, which I think is actually important, even if it is naive or pointless to crab at the hypocrisies of science that are explicable in deep societal terms.

This has to do with knowledge itself, and with science on its own terms and goals.  It relates to what could, at least in principle, advance the science itself (assuming such changes could happen without first threatening science's and scientists' and universities' assets).  That will be the subject of our next post.

Monday, November 20, 2017

The 'knowledge factory'

This post reflects much that is in the science news, in particular our current culture's romance with data (or, to be more market-savvy about it, Big Data).  I was led to write this after listening to a BBC Radio program, The Inquiry, an ongoing series of discussions of current topics.  This particular episode is titled Is The Knowledge Factory Broken?

Replicability: a problem and a symptom
The answer is pretty clearly yes.  One of the clearest bits of evidence is the now widespread recognition that too many scientific results, even those published in 'major' journals, are not replicable.  When even the same lab tries to reproduce previous results, they often fail.  The biggest recent noise on this has been in the social, psychological, and biomedical sciences, but The Inquiry suggests that chemistry and physics also have this problem.  If this is true, the bottom line is that we really do have a general problem!

But what is the nature of the problem?  If the world out there actually exists and is the result of physical properties of Nature, then properly done studies that aim to describe that world should mostly be replicable.  I say 'mostly' because measurement and other wholly innocent errors may lead to some false conclusions.  Surprise findings that are the luck of the draw, just innocent flukes, draw headlines and are selectively accepted by the top journals.  Properly applied, statistical methods are designed to account for these sorts of things.  Even then, in what is very well known as the 'winner's curse', there will always be flukes that survive the test, are touted by the major journals, but pass into history unrepeated (and often unrepentant).
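The winner's curse is easy to demonstrate with a toy simulation, using invented numbers: run many studies of a purely null effect, keep only the ones that clear a conventional significance cutoff, and notice that the 'winners' are both flukes and inflated.

```python
import random

random.seed(4)

# 1000 hypothetical studies of a true null effect (mean 0, sd 1),
# each with 20 subjects.  Only 'significant' ones get published.
significant_effects = []
for _ in range(1000):
    sample = [random.gauss(0.0, 1.0) for _ in range(20)]
    mean = sum(sample) / 20
    # crude two-sided z-test, using the known null sd of 1.0
    z = mean / (1.0 / 20 ** 0.5)
    if abs(z) > 1.96:
        significant_effects.append(abs(mean))

# Roughly 5% of pure-noise studies survive the test; those flukes
# necessarily report exaggerated effect sizes -- the winner's curse.
fluke_rate = len(significant_effects) / 1000
print(fluke_rate)
```

Every published 'effect' here is noise by construction, yet each one clears the bar, which is exactly why such findings pass into history unrepeated.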

This, however, is just the tip of the bad-luck iceberg.  Non-reproducibility is so much more widespread that what we face is more a symptom of underlying issues in the nature of the scientific enterprise itself today than an easily fixable problem.  The best fix is to own up to the underlying problem, and address it.

Is it rats, or scientists who are in the treadmill?
Scientists today are in a rat-race, self-developed and self-driven, out of an insatiable hunger for resources, ever-newer technology, faculty salaries, hungry universities....and this system can arguably be said to inhibit better ideas.  One can liken the problem to the famous candy-factory skit on the old TV show I Love Lucy.  That is how it feels to many of those in academic science today.

This Inquiry episode about the broken knowledge factory tells it like it is....almost.  Despite concluding that science is "sending careers down research dead-ends, wasting talent and massive resources, misleading all of us", in my view, this is not critical enough.  The program suggests what I think are plain-vanilla, clearly manipulable 'solutions'.  They suggest that researchers should post their actual data and computer program code in public view so their claims can be scrutinized, that researchers should have better statistical training, and that we should stop publishing just flashy findings.  In my view, this doesn't stress the root-and-branch reform of the research system that is really necessary.

Indeed, some of this is being done already.  But the deeper practical reality is that scientific reports are typically very densely detailed, and investigators can make weaknesses hard to spot (this can happen inadvertently, or sometimes intentionally, as authors try to make their findings dramatically worthy of a major journal--and here I'm not referring to the relatively rare actual fraud).

A deeper reality is that everyone is far too busy on what amounts to a research treadmill. The tsunami of papers and their online supporting documentation is far too overwhelming, and other investigators, including readers, reviewers and even co-authors are far too busy with their own research to give adequate scrutiny to work they review. The reality is that open-publishing of raw data and computer code etc. will not generally be very useful, given the extent of the problem.

Science, like any system, will always be imperfect because it's run by us fallible humans.  But things can be reformed, at least, by clearing the money and job-security incentives out of the system--really digging out what the problem is.  How we can support research better, to get better research, when it certainly requires resources, is not so simple, but is what should be addressed, and seriously.

We've made some of these points before, but with apology, they really do bear stressing and repeating.  Appropriate measures should include:

     (1) Stop paying faculty salaries from grants (have the universities that employ them pay them);

     (2) Stop using manipulable score- or impact-factor counting of papers or other counting-based items to evaluate faculty performance, and try instead to evaluate work in terms of better measures of quality rather than quantity;

     (3) Stop considering grants secured when evaluating faculty members;

     (4) Place limits on money, numbers of projects, students or post-docs, and even a seniority cap, for any individual investigator;

     (5) Reduce university overhead costs, including the bevy of administrators, to reduce the incentive for securing grants by any means;

     (6) Hold researchers seriously accountable, in some way, for their published work in terms of its reproducibility or claims made for its 'transformative' nature.

     (7) Grants should be smaller in amount, but more numerous (helping more investigators) and for longer terms, so one doesn't have to start scrambling for the next grant just after having received the current one.

     (8) Every faculty position whose responsibilities include research should come with at least adequate baseline working funds, not limited to start-up funds.

     (9)  Faculty should be rewarded for doing good research that does not require external funding but does address an important problem.

     (10)  Reduce the number of graduate students, at least until the overpopulation ebbs as people retire, or, at least, remove such number-counts from faculty performance evaluation.

Well, these are perhaps snarky and repetitive bleats.  But real reform, beyond symbolic band-aids, is never easy, because so many people's lives depend on the system, one we've been building over more than a half-century into what it is today (some authors saw this coming decades ago and wrote warnings). It can't be changed overnight, but it can be changed, and it can be done humanely.

The Inquiry program reflects things now more often being openly acknowledged. Collectively, we can work to form a more cooperative, substantial world of science.  I think we all know what the problems are.  The public deserves better.  We deserve better!

P.S.:  In a next post, I'll consider a more 'anthropological' way of viewing what is happening to our purported 'knowledge factory'.

Even deeper, in regard to the science itself, and underlying many of these issues are aspects of the modes of thought and the tools of inference in science.  These have to do with fundamental epistemological issues, and the very basic assumptions of scientific reasoning.  They involve ideas about whether the universe is actually universal, or is parametric, or its phenomena replicable.  We've discussed aspects of these many times, but will add some relevant thoughts in the near future.

Friday, November 10, 2017

33 Syllabi for Intro to BioAnth/ Intro to Human Origins and Evolution

Two years ago, many of you generously sent me your syllabi for your introductory biological anthropology courses when I put out a call here at The Mermaid's Tale. Thank you! Four teaching assistants who are also anthropology majors worked with me on a little study of these syllabi. My collaborators are Alexa Bracken, Katherine Burke, Nadine Kafeety, and Molly Jane Tartaglia and I am grateful for their work on this.

Here are our results...

  • n = 33 syllabi, from 2015 or before, gathered mostly from your helpful submissions and also collected from AAA and departmental websites, though not extensively. Institutions in 3 different nations and at least 17 U.S. states are represented.
  • 29/33 require a textbook (as opposed to other readings/resources) 
  • 14/33 have separate labs/recitations
  • 18/33 teach natural selection before the genetic basis for variation [this 2017 study supports doing the opposite] 
  • 2/33 mention genetic drift and/or neutral evolution
  • 2/33 mention epigenetics
  • 3/33 mention evo-devo and/or development
  • 3/33 mention controversy/controversies
  • 0/33 mention creationism and/or creation
  • 4/33 mention 'racism' 
  • 1/33 mention 'sexism'

I've typed and deleted a lot of words here and can't seem to avoid sentences that read like I'm telling a bunch of my brilliant friends and colleagues that we're doing it wrong. I don't believe we are.

I understand that syllabi aren't perfect or even great representations of what we do in our courses.

But maybe we could be better at highlighting some of the more complicated and significant terrain we cover in class, in the syllabus. Syllabi are posted publicly; they're seen by countless faculty reviewers and administrators. I think that we biol/evol/physical anthropologists could do better at getting the word out that our courses are not simply the human equivalent of "Intro to walrus origins and evolution."

Anthropology is what makes human evolution different from walrus evolution. And now that we're freed, mostly, from having to teach that evolution is true, why don't we really go for it and teach that it's also okay that evolution is true? Why not face the cultural controversies, recognize the sordid (and worse) history of our discipline and evolutionary science, and that history's massive influence on our culture and society to this day? We are! I know. But let's put it on the syllabus to make it official.

Human evolution is fundamentally different from the rest of evolutionary biology and I believe it's dangerous to pretend it isn't, or to unintentionally give the impression that it isn't. I hope you agree.

Thursday, November 9, 2017

What we can learn from the birds and why there are birds

Evolution is a fact of life, but there are many different interpretations of how it works. There is the persistent classically Darwinian view, in which natural selection explains everything as a deterministic 'force'--clearly the kind of imagery Darwin himself had.  This is nowadays focused around genes as the metaphor for the competing deterministic causal factors that are responsible.  We know that even clearly adaptive traits we see today evolved through earlier stages of adaptation that may have had nothing to do with current functions.

We know now that this is a deeply important fact about the origins of the major functional traits of organisms, but also that life is complex and chance plays a major role in its dynamics.  In one sense this means selection cannot literally be force-like: it must have some 'probabilistic' aspects, even if there isn't a fixed probability, or a probability process like coin-flipping, at work.  That aspect, due to competing selection and so on, is more like a series of one-off effects.  At the same time, the fast fox doesn't always catch the fleeing rabbit, so that even if selection is favoring 'fast' genes, there is an element of what would appear afterwards to have been probabilism in the change of fast-gene frequencies.
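The point that selection is chancy rather than force-like can be made concrete with a toy simulation.  This is my own illustration, not anything from the literature discussed here: a minimal Wright-Fisher-style model in which a 'fast' allele enjoys a selective advantage yet still sometimes disappears, because each generation is a random sample of the one before.

```python
import random

def wright_fisher(n_individuals=100, p0=0.1, s=0.05, rng=None):
    """Simulate one population until the 'fast' allele fixes or is lost.

    Selection favors the allele by a factor (1 + s), yet the outcome is
    still a matter of chance: each generation's allele count is a random
    (binomial) draw around the selection-shifted expectation.
    """
    rng = rng or random.Random()
    p = p0
    while 0.0 < p < 1.0:
        # Expected frequency after selection acts.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # Random sampling of 2N gene copies: the element of chance (drift).
        count = sum(rng.random() < p_sel for _ in range(2 * n_individuals))
        p = count / (2 * n_individuals)
    return p == 1.0  # True if the favored 'fast' allele fixed

rng = random.Random(42)
outcomes = [wright_fisher(rng=rng) for _ in range(200)]
print(sum(outcomes), "of 200 runs fixed the favored 'fast' allele")
```

Run repeatedly, the favored allele fixes in many populations but is lost in many others: selection biases the odds without determining the result, which is the sense in which 'fast' genes can fail even while being favored.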

Every organism is subjected to functional challenges on all of its traits, all of the time.  So even if natural selection acted as a force (which it cannot really precisely be), which adaptive functions among this array of competing constraints win out will be affected by chance, because from trait A's viewpoint the relative impact of selection on the other traits is always changing.

We also know that complex functions are under complex genetic control, involving gene duplication and multiple, more or less equivalent pathways to similar outcomes.  So any given gene's effect on a trait will be affected by the other, redundant genes the organism carries.

There is still a widespread, almost ritualistic view of evolution, informally at least, in terms of the genes 'for' some trait whose favorable variation was driven, essentially in a deterministic, force-like way, to replace other genetic alternatives in their species.  This can easily be seen even among biologists, who should know better, and especially in the biomedical community, in which at least some practitioners have actually been taught the premises of evolution at a serious level--beyond, for example, what is often purveyed in medical schools.

A typical habit is to take today's functions and traits, and the past's traits (only rarely the past's genes as well), and to extrapolate from then to now, using reasoning--typically informal reasoning--to connect the dots with steady lines, the way we treat objects falling to earth or planets orbiting stars.

However, much of this is because our view of evolutionary change is highly subject to time-compression that both reflects and is caused by these assumptions.  The 'million' aspect of a million years is skipped over as if it were just a few days.  Yet we are wholly aware of the immense timescales that apply to most evolutionary changes in complex functions, like, say, our brainpower or our upright posture.  One way to try, at least, to unhitch ourselves from these illusory lapses into physics-like determinism is to look at things over a much vaster time scale, for which we actually have evidence.

An instructive case
It is probably impossible for us really to grasp the meaning of evolution's timescale.  That's the enormous value of mathematical modeling, if it is used properly.  Our 'ancient modern human' ancestors appear in the fossil record around 100,000 years ago, or arguably much less.  Our species has occupied the world since then, but much of that spread occurred well within the last 20,000 or so years (only around 12,000 in the Americas).

But we have some really good evidence of things on spans of time a thousand times as long--that is, on the order of 100,000,000 (a hundred million) years.  This example has to do with the evolution of flight.  A very fine discussion of feathered dinosaurs can be heard on the BBC Radio 4 program "In Our Time", which can be downloaded as a podcast or listened to online; here is the link:

How did dinosaurs or their precursors develop the complexly rearranged bodies, and the feathered exteriors, that were required for flight and the evolution of birds?  What adaptations occurred, and when, and can we know why?  Major recent fossil finds, largely in China, have opened these questions to much closer examination than was possible when the first bird fossil, Archaeopteryx, was found in Europe in 1861, right after Darwin's Origin of Species (1859).

This BBC discussion, even though expressed, implicitly, from a strongly selectionist viewpoint, shows the subtleties of the issues when the span is 100 million years and the number of specimens large.  If you listen carefully, you can hear the many nuances, small changes, rudimentary beginnings and so on that were involved--and the speculation, the attempts to guess at the reasons for the small steps that eventually led to feathered flight but that, in themselves, were mainly unrelated to flight.  It is a sobering lesson in evolutionary interpretation, and even this discussion necessarily lapses into speculation.