Thoughts in Between

by Matt Clifford

TiB 159: "Private ARPA"; against entrepreneurs; improving access to AI; and more...

Welcome new readers! Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.

How to build a private ARPA

We've talked a lot - most recently last week in TiB 158 - about the design of new research organisations, which is a live and important topic given the UK's forthcoming Advanced Research and Invention Agency (ARIA). This week Ben Reinhardt (who used to work with me at Entrepreneur First) published a new and excellent guide to creating a "private ARPA". It's very long - practically book length - but it's well structured and easy to navigate, so I highly recommend browsing.

It's comprehensive in scope and covers everything from the very abstract (e.g. the relationship between profit and impact in the context of invention) to the concrete (e.g. potential legal structures for a private ARPA). What I like most about Ben's approach is that it explicitly takes into account the need for a platform-like, "self-catalysing" approach: that is, its goal is not to specify a set of research programmes (though it does do that), but to think about the characteristics of an organisation that could systematically and repeatably generate such programmes.

As Ben acknowledges, the most challenging part of building a private ARPA is finding a viable business model. Ben is skeptical that it would have a positive private financial return (and he notes that the celebrated corporate innovation labs of the 20th century were generally attached to cash-rich monopolies with good reasons to fund them - as, e.g., DeepMind is today). For this reason, I suspect governments are still the most likely funders of ARPA-like organisations now and in future. Ben's guide should be high up their reading list.

Maybe there *are* too many entrepreneurs

The latest Thoughts in Between podcast episode is a conversation with Samo Burja. Samo is the founder of Bismarck Analysis, a political risk consulting firm, and the author of Great Founder Theory, a manuscript on how great institutions are built and why there are so few of them. It's one of the most interesting things I've read this year and is dense with ideas I've not seen anywhere else.

One of Samo's most provocative ideas is that it's a sign not of cultural vitality but of institutional decay that so many ambitious people feel the need to become founders. We discuss this around 28 minutes in. The core thesis is that "disruption" is not a good in itself, but an extreme measure necessitated by the fact that so many of our institutions are "dead players" that have failed to remain vibrant and pass on the secrets of their success over generations. Ideally, says Samo, we would learn how to solve this "succession problem" rather than requiring constant reinvention.

I've argued several times in TiB that we should not worry that there might be "too many entrepreneurs". Samo's argument is an interesting challenge to my thesis. I still tend to think that the growing attraction of entrepreneurship is largely technologically determined, but it's worth pondering whether this effect systematically deprives important institutions of the talent they need to thrive (we discussed this briefly in TiB 156 in the context of Noah Smith's excellent interview with Patrick Collison). We cover lots more in the interview; do have a listen.

Why AI needs National Research Clouds

In TiB 152 I argued that policymakers should worry a lot about national computational power, as this becomes an increasingly important ingredient in developing and deploying machine learning capabilities. This week AI policy expert Jack Clark gave an excellent talk at Stanford (slides here and see also this Twitter thread) on this topic, in which he recommends that governments create a "National Research Cloud" (NRC) to level the playing field between increasingly dominant private actors and academia.

The challenge is clear: as we've discussed before (see, e.g., TiB 128), machine learning models are becoming ever more computationally intensive and expensive. This piece notes that even a small Google AI project has a training budget over $1.5m - well outside academic budgets. Jack argues this matters a lot. OpenAI (his former employer) originally deferred releasing its language model on the basis that it could be used to mass-produce misinformation (see TiB 52 for more). If academics and others can't replicate or scrutinise cutting-edge work in the field, there's a risk of a democratic deficit.

The goal of an NRC would be to reduce this "compute asymmetry" by making large-scale computational resources easy and cheap for scholars to access. As Jack notes, this should be affordable for any rich country because an NRC can "piggyback" on the extraordinary amount of capital investment by Amazon, Microsoft et al in cloud computing. As I've said before, the costs involved in AI today are small by nation-state standards - and an NRC looks like a good starting point for any country that wants to balance world-leading capabilities and democratic accountability.

Quick links

  1. "Topology is a hell of a drug". Your brain is not very good at thinking about knots. Great short video.
  2. Adventures in relative risk? During the pandemic, road travel is way down, road deaths are way... up.
  3. You've clearly not played board games with *me*. Companies whose CEOs engage in riskier leisure activities get worse terms on loans(!)
  4. The virtualisation of the world. Stunning chart on physical vs digital investment over the last five years.
  5. The most important event of the 20th century? The incredible impact of the Green Revolution, in a tweet.

How you can help

You made it to the end - thank you! If you'd take just an extra 30 seconds to forward this to a friend who might like it, I'd be very grateful.

And feel free to hit reply if you have questions, comments, suggestions or feedback.

Until next week,

Matt Clifford

PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.