Thoughts in Between

by Matt Clifford

TiB 168: Pandemics and bad science; when AIs make us unethical; catastrophic risk; and more...

Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society and how we might do it better.

It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.

Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.

The Science Game™ is killing us

I tend to think that the controversy over the “lab leak hypothesis” - i.e. the possibility that COVID-19 did not occur naturally but escaped from a Chinese virology lab - is the silliest culture war set piece yet: whatever you think the truth is, why should either side of the Western left/right divide have skin in the game here? (Though see Tyler Cowen for the counter-argument.) Nevertheless, the controversy does touch on one favourite TiB theme - bad incentives in science.

Erik Hoel, the neuroscientist and novelist who was on the Thoughts in Between podcast a few weeks ago, has a superb essay on this topic. Erik argues that dangerous “gain of function” research is driven less by its intrinsic value and more by what he calls the “Science Game™” (a theme that runs throughout Erik’s novel, The Revelations):

Varying some variable with infinite degrees of freedom and then throwing statistics at it until you get that reportable p-value and write up a narrative short story around it

As Erik says, in most fields this is fairly harmless - albeit a waste of talent - but in virology “playing the Science Game™ turns out to have negative externalities like potential mass death at a global scale”. And, of course, it could be even worse. If you want to feel really bad, read this post by AI safety researcher Eliezer Yudkowsky: if we can’t solve the coordination problem of not creating deadly pandemics, how will we avoid building other technologies that might destroy us?
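To make the “Science Game™” mechanic concrete, here is a minimal sketch (my own toy illustration, not from Erik’s essay) of how varying an arbitrary variable enough times produces a “reportable” p-value from pure noise. The sample size, the number of hypotheses tried and the 0.05 threshold are all assumptions chosen for illustration:

```python
# Toy simulation: test enough arbitrary hypotheses against pure noise
# and some will clear p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 30      # assumed sample size
n_hypotheses = 40    # "infinite degrees of freedom", approximated by 40 tries

outcome = rng.normal(size=n_subjects)  # noise: no real effect anywhere

significant = []
for i in range(n_hypotheses):
    predictor = rng.normal(size=n_subjects)  # yet another arbitrary variable
    r, p = stats.pearsonr(predictor, outcome)
    if p < 0.05:
        significant.append((i, r, p))

print(f"'Discoveries' from pure noise: {len(significant)}")
for i, r, p in significant:
    print(f"  hypothesis {i}: r = {r:.2f}, p = {p:.3f}  <- write up a story?")
```

With 40 tries at a 5% threshold you expect roughly two spurious “findings”, each of which could be wrapped in a plausible narrative - which is the point.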

How to avoid the next pandemic (and other risks)

This question of how we avoid destroying ourselves is, obviously, a rather important one, but one that gets relatively little attention in mainstream public discourse. I was therefore excited to read Future Proof, an excellent report published by the Centre for Long Term Resilience last week, which addresses exactly this topic. I highly recommend it - and, in fact, I liked it so much that I recorded a special episode of the Thoughts in Between podcast with three of its authors - Sophie Dannreuther, Angus Mercer and Gregory Lewis. Do have a listen.

Future Proof looks at how the UK (and indeed any) Government can mitigate some of the “extreme risks” we face. It focuses on biosecurity and AI safety, on the basis that they’re relatively neglected in policy compared to other major risks, such as nuclear proliferation or climate change. In the conversation, we focus in particular on what COVID-19 has taught us about how governments can prepare for and prevent pandemics, whether natural or man-made.

Despite the subject, Angus, Sophie and Gregory are all refreshingly optimistic about our ability to navigate the risks they write about. We cover all of this and more in the episode.

When AI makes us behave badly

We’ve talked before about some of the implications of more powerful AI for human ethics (such as the “moral deskilling” hypothesis in TiB 119). There’s a good, short paper in Nature this week that reviews the literature on this theme and lays out a framework for thinking about how humans behave and evaluate morality when relying on autonomous systems.

The core idea is that there are four distinct ways we use AI - adviser, role model, delegate, and partner - and each triggers different types of human beliefs about the morality of our actions. According to the review, AIs acting as delegates or partners let people “reap unethical benefits while feeling good about themselves”. One example is that we’re more likely to think harm to pedestrians - other than children, interestingly - is acceptable when an AI is driving us than when we’re driving ourselves (see this paper for more).

Humans aren’t the only victims of these behaviour changes. This new paper suggests that we’re also much more likely to take advantage of benevolent AI agents than benevolent humans - and much less likely to feel guilt about it (see also this write-up in the NYT). This might have some worrying implications for the evolution of cooperation between humans and AI actors, as Iyad Rahwan discusses in this thread. Empathy for bots, it turns out, might be an important thing to teach ourselves.

Quick links

  1. What price fully automated luxury communism? Fascinating post by Rob Wiblin on how much it would cost to fully hedge the risk of global labour income going to zero.
  2. Secret history. Brilliant long list of ideas for historical dramas, with not a Henry VIII in sight.
  3. Germ of an idea. The most impressive and granular analysis I've seen yet on how COVID-19 spreads. Nice diagrams.
  4. The road from serfdom. Interesting piece and charts on 800 years of economic inequality (another one for WTF happened in 1971?).
  5. Order, order. Amazing charts of how ordered/chaotic the streets of 100 global cities are (Chicago is most ordered; Singapore's ranking is surprising to me).

How you can help

Thanks for reading all the way to the end. If you'd take a minute to forward this email to the friend who you think would like it the most, I'd be really grateful.

I'm always happy to hear from readers - just hit reply if you have questions, comments or recommendations.

Until next week,

Matt Clifford

PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.