Thoughts in Between

by Matt Clifford

TiB 217: Creating new scientific fields; AI arms races; the geopolitics of chokepoints; and more...

Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society and how we might do it better.

It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.

Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.

How to create new fields in science

We've talked a lot here about the need for more boldness and risk taking in science. Michael Nielsen has a wonderful new post on what he calls "vision papers", which could plausibly play a role in achieving this. Vision papers, loosely, are papers in which a scientist lays out a potentially exciting avenue of research that could change the shape of a field or create a new one. Michael suggests that vision papers are underrated relative to their impact, primarily because they involve a kind of thinking that is generally disincentivised - or even disparaged - in professional scientific life.

If nothing else, the piece provides a reading list of vision papers that could keep you busy for months. Try Alan Turing's paper on computable numbers; Alan Kay's enormously influential "A Personal Computer for Children of All Ages" (1972!); or John Wheeler's 1989 report suggesting that information could be the basis of reality ("it from bit"). Don't miss footnote 2, which lists many more. As Michael notes, these pieces are very different from typical scientific papers:

They're often storytelling or narrative creation, with few technical results, and sometimes appear superficially closer to literature than what people ordinarily consider science

And yet, Michael argues, such papers can provide a rallying cry for other scientists to explore new territory and, sometimes, create whole new fields of scientific exploration. Given this (potential) impact, why are vision papers so rare? Partly because writing one requires a very unusual skillset: both great, even brave, imagination and profound scientific understanding (this is not science fiction). Michael concludes by asking what we could do to stimulate more vision papers and muses on the value of a "Vision Prize" that openly solicits them. Given the money and energy flowing to "metascience" right now, it certainly seems worth a try.

The geopolitics of... aviation grade steel

The US's use of the technology supply chain to obtain geopolitical advantage, particularly over China, has been a favourite TiB topic (see TiB 182 and this Adam Tooze essay for a good summary). Perhaps the most visible example has been the legal and diplomatic pressure on ASML to prevent Chinese semiconductor manufacturers obtaining access to the Dutch tech giant's extreme ultraviolet lithography (EUV) machines (see TiB 174 for more). But this is just one of a long list of areas where China feels vulnerable to a reliance on foreign, often US-controlled, imports.

Ben Murphy of Georgetown's Center for Security and Emerging Technology has a really excellent new paper on the "chokepoints" (Xi Jinping's term) in China's technology ecosystem. Murphy takes as his source a remarkable series of 35 articles in China's Science and Technology Daily (S&TD), a state-run publication, each of which lays out one such chokepoint. ASML and EUV machines are listed, but so are multiple technologies that get much less attention in the West, from high-end radio frequency components to "main bearings for tunnel boring machines" (!) via aviation grade steel for aircraft landing gear.

When we talk about economic "decoupling" between China and the West, we tend to focus on ideas like the "splinternet", but the less glamorous world of high-end manufacturing is just as important (the stubborn persistence of the physical again...). It's interesting to see the reasons S&TD cites for China's lag in these key areas. Partly it's about talent (we've talked before about how useless TSMC's plants would be without the people to run them; see TiB 161) and partly it's about Chinese firms' preference to import technology rather than use lower-quality domestic supply (see our podcast with Meia Nouwens for more on the myth of China's joined-up public-private cooperation). This does point to a potential weakness in the US's strategy: permanent bans on technology exports force China to create domestic capabilities. Chokepoints hurt, but they don't last forever.

To arms race or not to arms race...

Probably my favourite non-fiction book is Richard Rhodes' superb The Making of the Atomic Bomb (more here), which tells the story of the Manhattan Project and the US's race to be the first to create nuclear weapons. Except, of course, it wasn't really a race; Nazi Germany abandoned its own nuclear programme as early as 1942. How should this make us think about the net impact of the Manhattan Project? And what are the lessons for "arms race" dynamics in new technologies today? Haydn Belfield has a fantastic post exploring these questions, with particular application to artificial general intelligence (which we've been talking about a lot recently - see TiB 213 and 215).

Belfield argues that the Manhattan Project and the later US efforts to close the ICBM "missile gap" in the late 1950s and early 60s were both premised on a mistaken belief that the US was in an existential arms race - and both had the effect of accelerating dangerous technologies and making the world less safe. He worries that we may be about to make a similar (and similarly dangerous) mistake with respect to artificial intelligence (see also TiB 128):

[A]t some point in the next few decades, well-meaning and smart people who work on AGI research and development, alignment and governance will become convinced they are in an existential race with an unsafe and misuse-prone opponent [i.e. the Chinese state] ... [and] therefore advocate for and participate in a ‘sprint’ to AGI

There are indeed lots of analogues (the comparison of the RAND Corporation c. 1960s and today's AI labs is particularly striking). How can we avoid this, if we accept Belfield's argument? He suggests three courses of action. First, that we should invest in understanding whether we are in fact in a race! Second, we should be careful about secrecy (the story of Joseph Rotblat is instructive). Third, scientists should be wary of giving up their power: "an AGI sprint... will not succeed without the participation of [experts]". It's an excellent piece with a lot to ponder.

Quick links

  1. Will there be war? Interesting thread on the state of play in Taiwan.
  2. Party line. Elite public opinion is (suspiciously?) coherent.
  3. Substitutes, complements. More evidence that violent video games don't cause aggression (more here).
  4. Like Magic. Wonderful thread on Magic the Gathering - from AI-designed cards to a Turing complete game.
  5. How to win an election. Changing swing voters' minds matters about three times as much as changing who votes (e.g. improving turnout).

How you can help

Thanks for reading Thoughts in Between. If you like it, please share it with a friend and/or your social media following who might like it too.

Do feel free to reply if you have comments, questions or suggestions.

Until next week,

Matt Clifford

PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.