Thoughts in Between
TiB 89: the costs of financialisation; reframing AI safety; civilisational collapse...
Forwarded this email? Subscribe here. If you like it, please forward it to a friend.
Is "financialisation" destroying society?
The idea that the economy is becoming “financialised” (i.e. finance becomes a bigger share of GDP) is a popular trope on the left (see, e.g., this book). But the same arguments are increasingly made on the right. American Affairs - a conservative journal that once backed Trump, but no longer - published a big critical piece on The Financialisation of the American Elite. It’s worth a read, wherever you sit on the political spectrum.
The core argument is that, alongside its economic impact (and there’s evidence it’s negative), financialisation has caused a macro talent allocation problem: the most ambitious individuals have been sucked away from the real economy towards a bloated financial sector. HBS graduates are five times more likely to work in finance today than in 1960. According to this post, salaries in finance today are ~70% higher than in other sectors, versus roughly equal in 1980.
Given my day job, I’m sympathetic to the idea that this is bad, but another reason to care is the destructive incentives it may create. Ben Hunt at Epsilon Theory has an entertaining and compelling piece on Texas Instruments - an unlikely but apparently egregious example of the costs of financialisation. In Hunt’s words, “the Adam Neumann [of WeWork] story is repeated in a non-obvious way every day in every S&P 500 company”. Given that financialisation is partly a policy choice (and see here for a fascinating left-wing take), it’s probably something we should be talking about more.
Should we think of AI as a tool or an agent?
With a number of apparent AI breakthroughs in recent weeks, it’s a good time to revisit AI safety (see previous coverage) - efforts to ensure that AI progress benefits humans and avoids catastrophic consequences. Nick Bostrom’s 2014 book Superintelligence - which warned of superintelligent artificial agents with wildly different values from us - has played a large role in framing the debate so far.
This week’s ChinAI podcast sees Jeff Ding interview Eric Drexler, one of the fathers of nanotechnology, on his paper from earlier this year, Reframing Superintelligence. Drexler argues that the Bostrom-ian framing of superintelligent agents doesn’t fit what we know about AI progress so far. Drexler says a more likely outcome is “Comprehensive AI Services” (CAIS) - i.e. a collection of superintelligent tools, rather than agents. Drexler is optimistic that this will give us more options for solving the “control problem”, as the tools won’t have goals and values of their own.
Scott Alexander at Slate Star Codex is broadly positive on Drexler (and makes good meta-points about why he didn't think of this before). Rohin Shah has a great synopsis, but thinks CAIS will eventually be overtaken by AI agents. Gwern’s critique - that “tool AIs want to be agent AIs” - is perhaps the best, despite predating Drexler by two years. Nevertheless, it’s valuable to have a framing of AI risk that’s harder to parody than a paperclip maximiser - which hopefully means more people will (rightly) take it seriously.
Institutions and (ancient) civilisational collapse
I recently read Robert Harris’s new novel The Second Sleep, which is set ~1500 years after an apocalyptic event that led to the loss of almost all science (echoes of A Canticle for Leibowitz). This piqued my interest in real-world catastrophes and I came across the Late Bronze Age collapse: the sudden and violent collapse of a large number of previously thriving states in the Eastern Mediterranean around 1200 BC (great podcast here).
There's little consensus on its causes - partly because the destruction of many leading cities was so complete that little evidence survives. Without wanting to draw parallels to today, popular explanations include climate change and technological disruption (the rapid decline in the utility of bronze and its complex supply chains as iron emerged). Less obviously analogous are the mysterious Sea Peoples, who came out of nowhere to wreak destruction and then disappeared...
The role of institutions in exacerbating the collapse is interesting. In this excellent discussion, Simon Stoddart argues that “social levelling” norms reduced societal resilience (clip here). For example, the wealthiest individuals in some of the affected societies were socially compelled to have their wealth buried with them on death, or to host expensive feasts that drained their resources. I’m not sure that’s a case of premature de-financialisation(!) but it’s certainly an interesting example of unpredictable consequences.
Quick Links
- When did the modern world begin? Great Q&A thread with some interesting answers.
- Economic possibilities for our...selves? Thought provoking account of what happened when Microsoft Japan piloted a four day week.
- Nobody got bombed, I mean, fired, for buying... An intriguing digital marketing move from IBM.
- Product/market fit? Apparently video game economies are dominated by real-world money laundering.
- Life does not always find a way. The most hostile environments on earth are not fun, even for microbes.
Your feedback
Thanks for reading to the bottom. If you enjoy Thoughts in Between, it'd be great if you'd forward it to a friend who might like it too.
Always feel free to hit reply, say hello or follow me on Twitter.
Until next week,
Matt