A major issue is that statistical evidence shows many important and costly drugs to be effective in only a small fraction of the patients who take them. This is shown in a figure from Schork's commentary: for each of ten important drugs, the blue icons represent patients who respond successfully to the drug, and the red icons the relative number who do not.
Schork calls this 'imprecision medicine' and asks how we might improve our precision. The argument is that large-scale sampling is too vague or generic to provide focused results, so he advocates samples of size N=1! This seems rather weird at first, since you can hardly find interpretable associations in a single observation: did the drug actually work, or would the person's health have improved without it? But the idea is at least somewhat more sensible than it sounds: measure every possible little thing on one's chosen guinea pig and observe the outcome of treatment.
"N-of-1" sounds great and, like Big Data, is sure to be exploited by countless investigators to glamorize their research, make their grant applications sound deeply insightful and innovative, and draw attention to their profound scientific insights. There are real issues here, even if N-of-1 is in large part yet another PR-spinning way to promote one's research. As Schork points out, major epidemiological research, like drug trials, uses huge samples with only very incomplete data on each subject. His plea is for far more intensive individual measurements on the subjects. This will yield more data on those who did or didn't respond. But wait: what does it mean to say 'those'?
In fact, it means that we have to pool these sorts of data to get what will amount to population samples. Schork writes that "if done properly, claims about a person's response to an intervention could be just as well supported by a statistical analysis" as standard population-based studies. However, it boils down to replication-based methods in the end, and that means basically standard statistical assumptions. You can check the cited reference yourself if you don't agree with our assessment.
That is, even while advocating N-of-1 approaches, the conclusion is that patterns will arise only when a collection of such person-trials is examined jointly. In a sense, this boils down to collecting more intensive information on individuals rather than only generic aggregate measures. It makes sense in that way, but it does not get around the problem of population sampling, nor the statistical gerrymandering typically needed to find signals strong or reliable enough to be important and generalizable.
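The point can be made concrete with a toy simulation (all numbers here are hypothetical, chosen purely for illustration; none come from Schork's commentary). Suppose each N-of-1 trial is a crossover series of drug-versus-placebo periods on a single subject. A single trial can only estimate that one subject's effect; the responder fraction, the kind of generalizable claim the figure above depicts, only becomes estimable once many such trials are pooled, which is exactly the population sampling the slogan seems to promise an escape from.

```python
# Hypothetical simulation: pooling N-of-1 crossover trials.
# All parameters (effect sizes, 25% responder fraction, noise level)
# are illustrative assumptions, not data from any real study.
import random
import statistics

random.seed(1)

def n_of_1_trial(is_responder, periods=6, noise=1.0):
    """Per-period (drug minus placebo) outcome differences for one subject."""
    true_effect = 2.0 if is_responder else 0.0
    return [true_effect + random.gauss(0, noise) for _ in range(periods)]

# A population in which only 25% of subjects respond (cf. the
# red/blue icons in the figure; the fraction is an assumption).
subjects = [random.random() < 0.25 for _ in range(200)]
trials = [n_of_1_trial(r) for r in subjects]

# One N-of-1 trial estimates only that subject's own effect...
single_estimate = statistics.mean(trials[0])

# ...while the responder fraction is visible only across the
# pooled collection of trials, i.e. a population sample.
per_subject_effects = [statistics.mean(t) for t in trials]
estimated_responders = sum(e > 1.0 for e in per_subject_effects) / len(trials)

print(f"subject 0 mean effect: {single_estimate:.2f}")
print(f"estimated responder fraction: {estimated_responders:.2f}")
```

The single-subject estimate answers "did the drug seem to work for this person?", but any claim about how many people respond is read off the pooled collection, with all the usual statistical assumptions that entails.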
Better and more focused data may be an entirely laudable goal, if quality control and the like can somehow be ensured. But beyond this, N-of-1 seems more like a shell game or an illusion in important ways. It's a sloganized way to get around the real truth of causal complexity, which the scientific community (including us, of course) simply has not found adequate ways of understanding--or, if we have, then we've been dishonorably ignoring what we know while making false promises to the public, who support our work and seem to believe what scientists say.
We often don't have such knowledge, and the relevant question is whether there is a conceptually better way forward, rather than a kind of 'trick' to work around the problem. There will always be successes, some lucky and some owed to appropriately focused data. The plea for more detailed knowledge of, and treatment adjustments for, individual patients goes back to Hippocrates and should not be promoted as a new idea. Medicine is still largely an art and still involves intuition (ask any thoughtful physician if you doubt that).
However, retrospective claims usually stress the successes, even when these are one-off rather than general, while neglecting the approach's lack of overall effectiveness--as an excuse to avoid facing up fully to the problem of causal complexity. What we need is not more slogans, but better ideas and questions, more realistic expectations, and genuinely new thinking. The best way to generate the latter is to stop kidding ourselves and to stop encouraging investigators, especially young investigators, to dive into the very crowded reductionist pool.