Thoughts in Between
TiB 218: Science's stagnation; China's private cities; critiquing critiques of effective altruism; and more...
Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society and how we might do it better.
It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.
Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.
Science really is slowing down
The idea that scientific progress is slowing down and we ought to do something about it is one of our favourite topics here. Occasionally, though, someone asks whether the basic premise is true, which is a fair question. I usually point them to this famous paper by Nicholas Bloom et al; this piece by Patrick Collison and Michael Nielsen; or this discussion by Tyler Cowen and Ben Southwood (see also TiB 92). Matt Clancy (see previous coverage), though, has done us all a great service and this week published what might be the definitive examination of the question. It's one to bookmark.
Clancy's conclusion is perhaps best summarised in one of his concluding paragraphs:
Diverse groups - the Nobel nominators, contemporary surveyed scientists, academics, and inventors - all seem to have an increasing preference for the work of the past, relative to the present
Clancy shows that Nobel Prizes are awarded for increasingly old work; that the works that make up a field's canon (here, its top 0.1% most cited papers) are getting older and less dynamic; that the number of distinct topics covered in the scientific literature has stagnated; that the proportion of citations to new(ish) work has fallen markedly; and that the share of citations to recent work in patents has fallen dramatically over the last 30 years. Each of these is illustrated with some excellent charts.
Why is this happening? Clancy says the simplest explanation is the "burden of knowledge": if making a new discovery requires ever more baseline knowledge, new discoveries necessarily require growing effort to make (we see the same in measures of innovation and technological progress - see this post by Clancy for more). This might be a good explanation, but it remains bad news! Less scientific progress means less economic growth, which means... a lot of bad things. That's why measures that might counteract (or partially counteract) this effect - from new kinds of scientific institutions to the use of AI in science (see TiB 72 and 114) - feel so important.
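To see why the burden of knowledge mechanically slows measured progress, here's a toy sketch with entirely made-up numbers: if the researcher-years needed per discovery grow by a few percent a year, output falls even when the research workforce holds steady.

```python
# Toy illustration of the "burden of knowledge" (all numbers invented):
# if each discovery needs more researcher-years than the last, the rate
# of discovery falls even with a constant supply of researchers.

researchers = 1000          # researcher-years available each year
base_effort = 50.0          # researcher-years per discovery at year 0
burden_growth = 1.04        # hypothetical 4%/year growth in required effort

for year in range(0, 50, 10):
    effort = base_effort * burden_growth ** year
    print(year, round(researchers / effort, 1))   # discoveries per year
# -> 0 20.0, 10 13.5, 20 9.1, 30 6.2, 40 4.2
```

The corollary is that merely holding the discovery rate constant requires exponentially growing research inputs - which is roughly what Bloom et al document.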
SimCity with Chinese characteristics?
We've talked a few times about innovations in governance (see, e.g., TiB 118 on charter cities). There's a great ChinaTalk interview (transcript) this week with Qian Lu, a Chinese economist, on the topic of Jiaolong, a fully privatised city in China (Alex Tabarrok also wrote about Jiaolong back in March). The "founding" firm, Jiaolong Co, agreed to invest hundreds of millions of dollars in infrastructure in return for planning rights and a share of tax revenue. The headline measures of success are impressive: Jiaolong has a population of 100,000 people and an "annual value of production" of over $7 billion; a near neighbour, Xihanggang, which is planned and operated by the local government, seems to be performing much less well (though I found the numbers in the article a little difficult to follow on this...)
Lu proposes two reasons for Jiaolong's success. First, the contract between the local government and Jiaolong Co creates strong incentives for investment in infrastructure. In fact, the contract is structured much like that between a Limited Partner and a VC manager: Jiaolong Co gets 20% of tax revenues once certain economic goals are met (here, tax revenue per square metre and a minimum level of investment), rising to 25% if performance exceeds expectations. In return, the government agrees to remove obstacles in the private company's path (and continues to provide "coercive" services such as policing).
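To make the LP/VC analogy concrete, here's a minimal sketch of a carry-style revenue split. The interview confirms only the 20%/25% shares and the kinds of hurdles used; the specific thresholds and figures below are hypothetical.

```python
# Illustrative sketch of a carry-style tax revenue split, loosely modelled
# on the Jiaolong contract as described. The source confirms only the
# 20%/25% shares; every threshold below is a hypothetical placeholder.

def company_share(tax_revenue: float,
                  tax_per_sqm: float,
                  investment: float,
                  hurdle_tax_per_sqm: float = 100.0,   # hypothetical hurdle
                  min_investment: float = 3e8,         # hypothetical minimum
                  stretch_tax_per_sqm: float = 150.0   # hypothetical stretch goal
                  ) -> float:
    """Return the developer's cut of tax revenue under a carry-like contract."""
    if tax_per_sqm < hurdle_tax_per_sqm or investment < min_investment:
        return 0.0                     # hurdles not met: no share
    rate = 0.25 if tax_per_sqm >= stretch_tax_per_sqm else 0.20
    return rate * tax_revenue

# e.g. meeting the base hurdle on $50m of tax revenue yields a $10m share:
print(company_share(tax_revenue=5e7, tax_per_sqm=120.0, investment=4e8))
```

The structural point is the same as carried interest in venture capital: the developer is paid only out of upside it helps create, which aligns its incentives with the long-run growth of the city's tax base.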
Second, Lu suggests that private entrepreneurs are better able to aggregate local knowledge for more efficient planning. It does seem somewhat ironic that an economist in notionally Communist China is citing Hayek's (excellent) "The Use of Knowledge in Society" to defend the superiority of market processes... but there we are! The argument is that because Jiaolong Co had complete control of planning rights, it was able to auction them off for maximum efficiency within a very small land area. Whether Jiaolong succeeds or not, it is a striking case study at a time when various Western polities are consumed by bitter local disputes over zoning and NIMBYism (see TiB 198). One to watch.
How demanding should moral principles be?
I try to rotate themes and sources here, but Michael Nielsen (see last week's TiB) has just published a great set of notes critiquing Effective Altruism ("EA"; see TiB 215 and 207) that are really worth reading. Michael brings his trademark intellect and humaneness to the subject, and the result is one of the better critiques of EA as both a movement and a philosophy. I'd also recommend the brief replies from Ben Todd, Patrick McKenzie and, especially, Alexander Berger (particularly this tweet, which captures what I agree is the core EA insight, at least for me). I wanted to add two thoughts.
The first concerns the critique that EA is a "misery trap". Michael argues that because EA is (at least potentially) a totalising philosophy - "do as much good as possible" - it can't provide a framework for a satisfying life: you could always do more good by making yourself more miserable (Michael gives examples of EAs who denied themselves ice cream or even having children because other uses of those resources would do more good). I recognise the force of the argument, but it strikes me that if there are correct moral principles, we should expect them to be demanding and difficult to live by (otherwise the world would already be a better place!). Certainly that's my impression of the world's major religions and (some) ideologies. I think at most margins it's a good sign if one's moral framework makes one uncomfortable.
Second, there is Michael's compelling concern, which applies to all of utilitarianism, not just EA:
"'[G]ood" isn't fungible, and so any quantification is an oversimplification. Indeed, not just an oversimplification: it is sometimes downright wrong and badly misleading
I’m sympathetic to that. A lot of energy has been expended by a lot of brilliant minds in wrestling with utilitarianism: so many problems, but so hard to give up a commitment to at least "thin utilitarianism". Reflecting on this did make me wonder, though, if EA needs its "Rawlsian moment". Rawls (see TiB 190, 216) wrote A Theory of Justice as a way to address utilitarianism's lack of respect for the "separateness of persons". The result is an elaborate system of rights, principles and mechanisms that constrain utilitarianism's totalising tendency. Perhaps EA - or some flavour of EA - needs a version of its own.
RELATED: Some effective altruists are offering cash prizes (up to $100K) for the best critiques of EA
Quick links
- AI gems. DALL-E's secret language? You can radically improve the performance of a large language model just by telling it to "think step by step" (see the sketch after this list)! Lovely thread on the aesthetics of machine learning.
- Buzzy research. We may have been underestimating the intelligence of bees (relevant to AI too).
- Down where it's wetter. Striking visualisation of the world's submarine cables.
- What's the worst colour? This, apparently.
- Can you be on all the "good guy" teams? Thought-provoking thread by Vitalik Buterin.
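On the "think step by step" trick above: the linked result is about prompting, and the technique really is as simple as appending that phrase to your prompt. A minimal sketch, where `complete` is a hypothetical stand-in for whichever language model API you use:

```python
# Zero-shot "chain of thought" prompting: appending "Let's think step by
# step" nudges the model to produce intermediate reasoning before its
# answer. `complete` is a hypothetical stand-in for a real LLM call.

def complete(prompt: str) -> str:
    raise NotImplementedError("swap in your LLM provider's completion API")

question = ("A juggler has 16 balls. Half are golf balls, "
            "and half of the golf balls are blue. How many balls are blue?")

plain_prompt = f"Q: {question}\nA:"
cot_prompt = f"Q: {question}\nA: Let's think step by step."

# The linked work reports large accuracy gains on reasoning benchmarks
# from the second prompt relative to the first.
```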
- Here we go again...
Thanks for reading Thoughts in Between. If you like it, please share it with a friend and/or social media following who might like it too.
Do feel free to reply if you have comments, questions or suggestions.
Until next week,
Matt Clifford
PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.