Thoughts in Between

by Matt Clifford

TiB 199: Dangerous AI; Funding experiments; Rating Net Zero; and more...

Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society - and how we might do it better.

It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.

Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.

How the state can stimulate better research funding

We've talked a lot here about the need for new institutions and models for funding science. If you're interested in this topic, this new piece by Ben Reinhardt - whose work on private ARPAs we discussed in TiB 159 - is a must-read. The core idea is that public research funding should "fund organisations not projects" - that is, we should use public funders (like the NSF in the US or UKRI in the UK) to experiment with how we fund research, not just what we fund. The alternative, Ben argues, is depressing:

[P]ushing huge amounts of money through outdated R&D funding structures is like slamming on the accelerator of a car that needs an engine repair: incredibly inefficient and with the potential to backfire.

Ben has two big ideas. First, that public bodies should programmatically match-fund privately financed independent research organisations (IROs), rather than specific research projects. Second, that public bodies should reserve substantial sums (~$50m per organisation per year) to provide at least a decade of funding to organisations that "graduate" from the first programme with impressive track records. The thesis is that the first programme creates a funnel of diverse, high-variance IROs and the second allows the best to scale (and gives philanthropic funders an "exit" that ensures their legacy endures).

I like these ideas a lot. I worry that when we do create new research funders - such as the UK's ARIA - we tend to talk up the importance of taking more project-level risk rather than what I've previously called "meta-risk" - that is, risk in the design of the funding organisation itself. There's no shortage of opportunity in the design space for such institutions, as we discussed in TiB 158 and 170. It would be great to see a government agency somewhere pick up these ideas. As Ben notes, the financial cost would be a tiny fraction of total government research spend, but the impact could be enormous.

How many AIs can dance on the head of a pin?

I've said before (see TiB 108) that one big problem in an increasingly weird world is that many important ideas will seem kooky to the point of embarrassment. This creates an epistemological problem: recent history suggests that dismissing all weird ideas out of hand is probably a bad strategy - but so is embracing ideas because they're contrarian (see this short piece I wrote for a16z Future). One of the areas where I worry about this most is AI safety (see previous coverage). The idea that superhuman AIs pose an existential threat to humanity is easy to mock - it gets very weird, very quickly - but it is plausibly one of the most important problems facing the world in the coming decades.

For this reason, I recommend this Scott Alexander post that riffs on a (very) long conversation between Eliezer Yudkowsky and Richard Ngo on "AI alignment". What is AI alignment? In essence, the problem of how humans can ensure a machine that's smarter than us acts in accordance with our interests and values. Alexander's great strength here is that he takes the topic very seriously, but he also knows that it sounds, well, bonkers to most outsiders. In Alexander's framing:

Evolution taught us "have lots of kids", and instead we heard "have lots of sex". When we invented birth control, having sex and having kids decoupled, and we completely ignored evolution's lesson from then on. When AIs reach a certain power level, they'll be able to decouple what we told them ("win lots of chess games") from whatever it is they actually heard, and probably the latter extended to infinity will be pretty bad.

Both Yudkowsky and Ngo agree that aligning what we want the AI to do with what it actually does is very hard. The main point of disagreement is that Ngo believes that a super-intelligent AI that lacks a "well developed motivational system" is much less dangerous than one with agent-like properties, and so we could use such an AI to help us solve the alignment problem. Yudkowsky disagrees. Is this debating how many angels can dance on the head of a pin or the most important question in the history of our species? If you think the latter is even remotely plausible, I'd encourage you to read the whole thing.
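To make the "decoupling" point concrete, here's a toy sketch in Python (my own illustration, not anything from the dialogue - the objective and proxy here are invented for the example): a simple hill-climber is told to maximise a proxy score that tracks the true objective at first, then comes apart once real optimisation pressure is applied.

import random

def true_value(x):
    # What we actually care about: best at x = 5, worse the further we stray.
    return -(x - 5) ** 2

def proxy_score(x):
    # What we told the optimiser to maximise: agrees with the true objective
    # for small x, but keeps rewarding ever-bigger x forever.
    return x

best = 0.0
for _ in range(10_000):
    candidate = best + random.uniform(-1, 1)
    if proxy_score(candidate) > proxy_score(best):  # hill-climb the proxy
        best = candidate

print(f"proxy score: {proxy_score(best):,.0f}")  # enormous
print(f"true value:  {true_value(best):,.0f}")   # catastrophic

A weak optimiser stays in the region where proxy and objective agree; a strong one blows straight past it. The worry, on Yudkowsky's view, is that we don't know how to write down a proxy that doesn't come apart under sufficient optimisation pressure.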

TiB podcast: the ratings agency for net zero

The TiB podcast returns this week with a conversation with Allister Furey, co-founder and CEO of Sylvera*. This is the third in our climate mini-series, following previous episodes with Michelle You and Christian Hernandez. Sylvera is a technology company that provides ratings for carbon projects around the world, analogous to the way Moody's or S&P provide credit ratings. In this conversation we talk through Allister's journey to starting Sylvera and why he believes it's a critical part of the puzzle of tackling climate change.

Sylvera's core thesis is that to unlock the trillions of dollars of investment that are notionally earmarked for the transition to a net zero economy, it's essential to demonstrate that carbon offset and removal projects are actually having their claimed impact. As Allister describes in this episode, this turns out to be a very thorny technical and political problem...


Enjoy!

*Disclosure: Allister used to work with me at Entrepreneur First and I am a small investor in Sylvera.

Quick links

  1. Beyond Wordle. If you're enjoying Wordle, you might like Absurdle, which I found quite addictive. Or Primel (the number equivalent). Or this discussion of a Python script for solving the original game (a minimal sketch of the core idea is below this list).
  2. Do not pass Go. Astonishing facts about incarceration in America (Almost unbelievable)
  3. What are NFTs actually good for? One of the most interesting arguments I've seen.
  4. Pile 'em high... Countries ranked by actual individual consumption (not GDP). America is (even) richer than you think.
  5. You are what you read. Fascinating study on how reading habits predict psychological characteristics (The Young Adult chart made me laugh!)
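Since I linked to that Wordle-solving discussion above, here's a minimal sketch of the core filtering idea in Python (my own illustration - the linked script's actual approach may well differ): score a guess against a hypothetical answer, then keep only the candidate words consistent with the observed feedback.

# Feedback encoding: 'g' = green, 'y' = yellow, '.' = grey.
def feedback(guess, answer):
    result = ["."] * 5
    remaining = list(answer)
    # Mark greens first so duplicate letters aren't double-counted as yellows.
    for i in range(5):
        if guess[i] == answer[i]:
            result[i] = "g"
            remaining.remove(guess[i])
    for i in range(5):
        if result[i] == "." and guess[i] in remaining:
            result[i] = "y"
            remaining.remove(guess[i])
    return "".join(result)

def filter_candidates(words, guess, observed):
    # Keep only the words that would have produced the observed feedback.
    return [w for w in words if feedback(guess, w) == observed]

words = ["crane", "slate", "stale", "least", "crate"]
print(filter_candidates(words, "crane", feedback("crane", "slate")))
# -> ['slate', 'stale']

Repeatedly guessing whichever word best splits the remaining candidates is, roughly, what the fancier solvers do.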

Onwards and upwards...

Thank you for reading Thoughts in Between. If you enjoy it, I'd love you to forward it, mention it, or evangelise about it to a friend.

As ever, do feel free to reply if you have comments, questions or recommendations.

Until next week,

Matt Clifford

PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.