Thoughts in Between

by Matt Clifford

TiB 208: Brain-scale AI; "mediocre superstars"; Tech vs Energy; and more...

Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society and how we might do it better.

It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.

Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.

"Stubborn persistence of the physical": energy edition

Long-time readers will not be surprised to hear that one of the most important lessons from the war in Ukraine is the "stubborn persistence of the physical", a favourite TiB theme. We are learning a lot about where nickel, wheat and fighter jets, among other things, are made... and it turns out that none of these are easily virtualisable! Janan Ganesh has an interesting column on this topic this week (Thanks Freddie for the link), in which he argues that Ukraine demonstrates the hollowness of tech's ascendancy and the more enduring power of energy as the world's most important industry.

There's a lot to this argument. I highly recommend Helen Thompson's new book, Disorder: Hard Times in the 21st Century, which tells the story of the waves of crises that have hit the West over the last two decades through the lens of our insatiable appetite for energy. If you're more of a podcast person, Thompson is the guest star in this excellent two-part exploration of the topic on The Rest is History. And, for longer-term context, it's hard to beat Daniel Yergin's superb histories, The Prize and The Quest.

But an important caveat: "software is eating the world" likely applies to energy too in the long run. Silicon Valley is certainly turning its attention - and capital - to energy to an unprecedented degree. This piece from December suggests that VC investment in nuclear fusion has reached an inflection point after a decade of subdued activity, thanks to rapid tech improvements. Of course, that capital isn't funding software (though see TiB 203 for the impact of machine learning on cost-effective fusion), but I expect the startup methodology and mindset will play a big role in the future of energy. Ganesh may be right that the "frothiest social app" looks pretty frivolous in the face of geopolitics, but I wouldn't be surprised if Silicon Valley has the last laugh.

China and "brain-scale" AI

Jack Clark (whose Import AI newsletter is consistently excellent) reports on an interesting AI study out of China. Researchers trained a "brain-scale" model with over a trillion parameters (and demonstrated that the approach should scale to close to 200 trillion parameters). These are enormous numbers. OpenAI's GPT-3, which we've discussed many times before, has 175 billion parameters. It's not as simple as "more parameters = more powerful model", but as we discussed in TiB 128 and 152, there's good evidence that we've not yet reached the point where the benefits of scaling existing machine learning techniques are tapped out.
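To get a feel for why scale keeps paying off, it helps to look at the empirical "scaling laws" literature, where test loss falls roughly as a power law in parameter count. The sketch below uses illustrative constants in the spirit of those papers; it is not taken from the Chinese study (or any specific model), just a toy to show why 1 trillion parameters is interesting even when 175 billion already works well:

```python
# Toy power-law scaling curve: loss falls smoothly as parameters grow.
# The constants n_c and alpha are illustrative placeholders, not
# measured values from the paper discussed above.
def loss(n_params, n_c=8.8e13, alpha=0.076):
    """Approximate test loss as a power law in parameter count."""
    return (n_c / n_params) ** alpha

for n in [175e9, 1e12, 200e12]:  # GPT-3 scale, 1T, 200T
    print(f"{n:.0e} params -> loss ~ {loss(n):.3f}")
```

The point is not the exact numbers but the shape: each order of magnitude of parameters buys a further, smaller-but-real improvement, with no cliff yet in sight.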

This has important implications for the future of AI. I recommend Gwern's write up of what's known as the "scaling hypothesis" for more on this. Two other points worth noting. First, we've discussed before (see TiB 159) the growing strategic value for a country of owning the full "AI stack". This paper is a good example: there’s a hardware as well as a software innovation here. The model was trained on a new Chinese supercomputer; much of the novelty comes from the way they set up the machine to deal with such enormous scale.

Second, as Jack notes, this paper is evidence of the sort of public/private collaboration which is rare in AI research in the West. Some of the authors come from Chinese tech giant Alibaba, but others from Tsinghua University and the Beijing Academy of Artificial Intelligence. Why does this matter? In Clark's words:

[I]nitiatives like this are a rarity in the West, which is dangerous, because it means Western countries are handing over the talent base for large-scale AI development to a small set of private actors who aren't incentivized to care much about national security, relative to profits

If governments in the West are serious about the strategic power of AI (and they should be!), it's crucial that they think through and plan for this dynamic. As some commentators have pointed out recently (excellent thread), it's easier and cheaper for governments to worry about AI ethics than to invest in actual AI.

Why founders are (not) like football managers

What do becoming a football (soccer) manager and becoming a startup founder have in common? A possible answer is that both markets structurally overvalue experience relative to potential, which makes it hard for talented new entrants to establish themselves. I stumbled upon this fascinating paper which uses an unusual dataset to try to explain why mediocre football managers get hired by club after club, despite there being a huge pool of potential candidates, many of whom might be more talented.

The explanation is twofold. First, hiring a manager is hard to reverse quickly. A bad manager can do irreparable damage to a club, especially a club with low cash reserves and limited resilience to failure. Second, clubs don't benefit from betting early on an exceptional unknown talent, because great performance is highly visible and it's impossible to prevent richer clubs hiring excellent managers away. It's therefore rational for a club to prefer "experienced mediocrities": it limits the downside and you can't capture the upside anyway. But it's societally suboptimal; it means lots of great talent doesn't get discovered.

It's interesting to think about this framework in the context of venture capital, because neither constraint applies: the downside from making a bad investment is limited and you do benefit (a lot, in fact) from being right early, even (especially) once exceptional performance is obvious to everyone. Nevertheless, I think it's likely that we still societally underinvest in giving young people unbounded opportunities (startups, but also unfettered careers in science) because we underestimate the option value of discovering exceptional ability. The reward function is asymmetric: it's not that bad to find out you're not a great founder, but it's extraordinarily valuable to find out that you are (I wrote more about this here way back in 2014). This is one reason I love my day job: there are few things more exciting than helping people discover their own exceptional talent.
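The asymmetry argument can be made concrete with a toy expected-value calculation. All the numbers below are invented purely for illustration; the point is only that a small probability of a very large upside can dominate a modest, bounded downside:

```python
# Toy sketch of the asymmetric reward function for "unbounded
# opportunity" bets (e.g. trying to found a startup). Every number
# here is made up for illustration.
def expected_value(p_exceptional, upside, downside):
    """EV of taking the bet: small chance of huge upside,
    large chance of a modest, bounded downside."""
    return p_exceptional * upside + (1 - p_exceptional) * downside

# Even a 2% chance of exceptional success outweighs the likely cost.
ev = expected_value(p_exceptional=0.02, upside=10_000_000, downside=-50_000)
print(ev)
```

With these (invented) figures the expected value is strongly positive even though failure is 49 times more likely than success, which is the sense in which society underprices the option value of finding out who is exceptional.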

Quick links

  1. Pick a side. Fascinating scatterplot - which countries are most culturally similar to the US? And to China? (What's with Yemen?)
  2. Golden nappies? Childcare in the UK is mindblowingly expensive (how does Luxembourg do it?)
  3. Kardashev scale hedge funds. Not a quick link really, but a fascinating read from Numerai founder Richard Craib on the future of decentralised hedge funds.
  4. Increment by one. The 2022 edition of Stanford's always interesting AI Index is out. Some great datapoints in here - e.g. if you're worried about inflation, you're obviously not buying enough robotic arms, which are 46% cheaper than five years ago...
  5. Notes on a (fictional) pandemic. Superb Goodreads notes by Emily St John Mandel on her excellent novel Station Eleven (via Eugene Wei - as he says, more authors should do this!)

Thank you and good morning

Thanks for reading. I used to A/B test this paragraph to see if certain messages generate more referrals. Spoiler: they don't! Nevertheless, I'd love you to share it with someone who might like it.

As always, feel free to hit reply if you have comments, questions or suggestions.

Until next week,

Matt Clifford

PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.