Thoughts in Between
TiB 122: Newton, Keynes and the power of science; how great powers fall; machine learning and the humanities and more...
Welcome new readers! Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.
Magicians and how the world changes
Via Michael Nielsen, I came across this amazing speech by John Maynard Keynes on the life of Isaac Newton. Do read the whole thing (the passages on how Newton thought about intuition and proof are worth reading in conjunction with Nabeel Qureshi’s new essay on “How To Understand Things”). In the speech, Keynes argues that Newton is not the first of the moderns, as some have said, but “the last of the magicians”, because he was the last great figure to view the world through pre-Newtonian eyes.
This is an important idea because it suggests something deep about what science is and what it does. Nielsen riffs on this here, in what may be the best Twitter thread of the year so far. The core idea is that it’s tempting to think of scientific research as being akin to editing or adding to a vast list of facts, but it’s much more than this because new facts aren’t confined within the existing system of knowledge. Rather, they occasionally (as in the case of Newton’s discoveries) overturn our understanding of what the system is.
As Nielsen puts it, “What we think we are made of keeps changing. What we think humanity is keeps changing. What we regard as an explanation keeps changing”. The ramifications are enormous: new facts enable new technologies (more on this here) and new institutions. Science “dramatically expands the scope of technologies that are at least possible”. And some of these - Nielsen points to the atom bomb, the pill, the computer - in turn expand the scope of how we might live...
How great powers collapse
Charles King has an excellent piece in Foreign Affairs on how great powers collapse. He explores this through the lens of this prophetic essay from 1970 by Soviet dissident Andrei Amalrik, in which he predicted the fall of the USSR. We talked recently about the conundrum of living through great historical change: if you worry about it out loud, you’ll seem hysterical most of the time; if you never do, you risk being swept away by events (it’s the world-historical version of Taleb’s turkey problem). Amalrik provides a framework for identifying when the end really might be nigh.
Governments naturally want continuity, but rapid social, economic and technological change creates enormous pressure for reform. The challenge for government is to find ways to accommodate this within the existing institutional framework. The key point, Amalrik argues, is that we tend to obsess over ideological divisions in a country, but what really matters is divergent interests, in particular between those who want to hold back change and those who want it to accelerate.
The tipping point comes - as it did in the USSR towards the end of the 80s - when a critical mass of the political elite decide that their interests are best served by undermining existing institutions rather than working within them. Last week, we discussed Peter Turchin’s idea of “elite overproduction”, and why it might lead to civil fracture. Amalrik points to the mechanism by which it happens. In King’s framing:
Do [elites] cling to the system that gives them power or recast themselves as visionaries who understand that the ship is sinking?
It’s a question that, worryingly, seems as relevant in the West today as in the USSR 30 years ago.
How NLP will change the humanities
We’ve talked a lot recently about advances in natural language processing (NLP) and its implications. One interesting avenue is new approaches to the humanities (see previous coverage). For example, do you think of fiction or non-fiction as generally more predictable? Ted Underwood asks this question in a fascinating post and explores using new(ish) machine learning techniques to try to answer it.
There are some intriguing results. He points to this paper that shows that GPT-2 - an NLP model we’ve discussed before - finds it easier to predict the next sentence of imagined stories than of remembered ones, presumably because we stick more closely to narrative conventions when making stories up. Underwood also shows that you can train a model to predict which of two randomly selected passages from a novel comes first. Again, this is likely because of genre conventions: certain ideas - like references to guilt, apparently - are more likely to come at the resolution of a story.
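The passage-ordering task Underwood describes is simple enough to sketch in a few lines. This is not his actual pipeline (his models and corpus are far larger); it's a minimal toy version, assuming scikit-learn, with an invented four-passage "corpus", just to show the shape of the idea: learn which words signal openings versus endings, then compare two passages.

```python
# Toy sketch of the passage-ordering idea (NOT Underwood's actual method):
# train a bag-of-words classifier to tell "early" passages from "late" ones,
# then use its probabilities to guess which of two passages comes first.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: story openings (0) vs story endings (1).
passages = [
    "once upon a time in a quiet village a stranger arrived at the inn",
    "the morning began like any other as she set out along the road",
    "he confessed his guilt at last and the family forgave him",
    "and so with the mystery resolved they lived happily ever after",
]
labels = [0, 0, 1, 1]

vec = CountVectorizer()
X = vec.fit_transform(passages)
clf = LogisticRegression().fit(X, labels)

def comes_first(a, b):
    """Guess which of two passages appears earlier in the story."""
    # Probability of being "late" for each passage; the less-late one wins.
    p_late_a, p_late_b = clf.predict_proba(vec.transform([a, b]))[:, 1]
    return a if p_late_a < p_late_b else b

print(comes_first(passages[0], passages[2]))
```

On a real corpus the features would pick up exactly the kind of genre cues Underwood mentions - words like "guilt" clustering near resolutions - which is what makes the ordering learnable at all.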
Perhaps none of this seems groundbreaking, but it’s one example of the idea discussed above that advances in science change our understanding of basic categories. What is a story? What is a genre? What is fiction? Reading Underwood’s post, you get the sense that we’re at the very beginning of NLP’s influence on the humanities, but that it has the potential to revolutionise these disciplines. Machines that can read and write - whether fiction or code (see this striking demo) - will change the world.
Quick links
- What are the most influential essays? Superb Q&A thread (and the suggestions collated here)
- Plenty of room at the top? The US's racial wealth gap is driven by the gap between rich white and rich black people. (I often think of this extraordinary piece)
- Horror story. People who like scary movies suffered less stress during lockdown, apparently.
- The end of the elephant chart? The income growth of the global 1% has slowed.
- Theodicy and Facebook. Why does Zuck allow misinformation? The same reason theologians tell us God allows evil?
Your feedback
Thanks for reading. If you like Thoughts in Between, please help grow the readership: forward it to a friend or - even better - share it on Twitter or LinkedIn.
Lots of newsletters get stuck in Gmail's Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary.
I'm always happy to hear from readers: feel free to hit reply or message me on Twitter.
Until next week,
Matt Clifford