We’ve talked a lot about new approaches to funding science before (see e.g. TiB 100
and TiB 147
). Michael Nielsen and Kanjun Qiu have a new post on this topic, “The trouble in comparing different approaches to science funding
”. It’s probably the best thing I’ve read on the topic in the three years I’ve been writing about it; if you’re remotely interested in the problem, it’s a must-read. The piece focuses on a crucial question that’s often ignored: how much do we care about increasing the probability of generating outlier results
versus improving the average quality of research
? The two are quite different and the answer has big implications for what sort of funding models we try and how to measure their success.
We’ve discussed outliers and the power law a lot before, both in the context of VC (Jerry Neumann’s classic piece
) and of science (see TiB 127
). Michael and Kanjun take this thinking a step further and point out some non-obvious implications of seeking more outlier outcomes in science. First, “today’s outliers may be an extremely misleading guide to tomorrow’s
”, which makes it very challenging to use track record-based criteria to decide who to fund. Second, “the intervention used may shape who applies, and what they apply with, in crucial ways
”, which in turn shapes the distribution of outcomes, but again makes predictions very hard.
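To make the median-versus-outliers distinction concrete, here’s a toy simulation (my own sketch, not from the piece), assuming research outcomes follow a heavy-tailed log-normal distribution. Two hypothetical funding strategies produce portfolios with essentially the same median outcome, but the higher-variance strategy utterly dominates in the tail — which is exactly why a funder’s objective changes which strategy looks “better”:

```python
import random

random.seed(0)

def project_value(sigma):
    # Toy model: a funded project's payoff is log-normally distributed.
    # Larger sigma = a higher-variance, more outlier-seeking strategy.
    return random.lognormvariate(0, sigma)

def portfolio(sigma, n=10_000):
    # Simulate a portfolio of n funded projects under one strategy.
    return sorted(project_value(sigma) for _ in range(n))

safe = portfolio(sigma=0.5)   # "improve the average" strategy
risky = portfolio(sigma=2.0)  # "seek outliers" strategy

def median(xs):
    return xs[len(xs) // 2]

# Both strategies have a median payoff near 1...
print("medians:", median(safe), median(risky))
# ...but the best outcome in the high-variance portfolio
# is orders of magnitude larger.
print("best outcomes:", safe[-1], risky[-1])
```

If you judge these portfolios by their median (or mean-squared error against it), the two strategies look nearly interchangeable; if you judge them by their single best project, they are worlds apart.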
One conclusion from the piece is that we need genuine pluralism in science funders’ objectives and approaches. As the authors argue, “more stringent peer review” or the UK’s REF
may be exactly the right way to improve the median outcome, but it’s almost certainly the wrong approach if your goal is to increase variance or discover more outliers. If these are your goals - as they might be for something like the UK’s ARIA (on which there is recent good news
) - you might need to try radical mechanisms like “tenure insurance” or “Long-Shot Prizes”. There’s much more in the piece - do read the whole thing.