Thoughts in Between
TiB 151: Lessons from the Vaccine Wars; making AI safe; variance and politics; and more...
Welcome new readers! Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.
Lessons from the "vaccine wars"
During a year of political failure on the pandemic, the UK Government got one thing triumphantly right: vaccine procurement. If you click on nothing else in today’s edition, read this interview with Kate Bingham, the venture capitalist put in charge of Britain’s vaccine taskforce. It’s a fascinating story, and touches on some of the issues that will dominate the collision between politics and technology in the coming decade.
The first is the importance of speed. In the domain of strategically important technologies, the ability to deploy quickly - not usually the dimension for which bureaucracies are optimised - will be decisive (see Bingham's points about organisation design and decision-making). Bruno Maçães has an excellent discussion of this point in this piece on “vaccine politics”. This adds important nuance to my summary of our podcast with Jade Leung from a couple of weeks ago: states may “win in the end” when it comes to regulating technology, but “the end” may be too late.
The second is the enormous impact (and difficulty) of being a good “venture customer” - the customer of "first resort" for the products of innovation (See TiB 100 for previous discussion). It’s striking how much emphasis Bingham places on this:
“So our offer to Novavax was, we'll help you with manufacturing, we'll help you with the clinical trials... our goal basically was really to be a supportive customer as much as we could.”
This is going to be a key capability for any government that wants to be at the technology frontier. It’s surprising how little attention it gets.
TiB podcast: Marc Warner on making AI "safe"
The TiB podcast episode with Marc Warner on AI safety is now live. Marc is co-founder and CEO of Faculty AI and one of the deepest thinkers on artificial intelligence I know. We discuss how to make AI “safe” - which in Marc’s framing means fair, robust, privacy-compliant and explainable. As Marc points out, most new technologies come with safety concerns, but over time we address these while simultaneously improving performance. In other words, it’s a mistake to worry about a safety/performance trade-off: since the Model T, cars have become faster and safer. AI can be the same.
One of the most important models I’ve learned from Marc is distinguishing between “AI problems” and “political problems”. It’s tempting to look at episodes like the UK’s 2020 “algorithm-graded” exams fiasco and see an “AI problem” - but as Marc points out, the issue here is ambiguity about what society means by “fair”, not the algorithm used to determine it:
“A useful thought experiment [is] to substitute 'done by AI' to 'done by bureaucrats’, 'done by software’, 'done by computers’ or 'done by the Wizard of Oz’... If something doesn't feel legitimate in any of those circumstances, it’s really got nothing to do with [AI]”
This is an argument dating back to Weber: we can use technology, from bureaucracy to AI, to optimise our instrumental rationality, but only the political process can provide the values for which we’re trying to optimise. Marc has some important ideas about how we might incorporate this into our machine learning models.
We explore much more in the conversation, including:
- The talent needed to implement AI
- The use of data science in the fight against COVID
- Why “explainability” of machine learning models is so important...
... and more. I hope you enjoy the conversation.
More on variance: how the world got so weird, redux
Many thanks to the many of you who provided thoughtful and generous feedback on last week’s unusual essay edition. I can’t do justice to all of it, but I wanted to share some of the most interesting thoughts and questions I received here. I hope I’ll have time to turn some of these into a more extended piece in the near future. Three broad questions came up multiple times in one form or another:
- How is China affected by variance-amplifying institutions?
- What will be the political reactions to variance-amplifying institutions?
- What are the most effective variance-dampening institutions in the internet age?
The China question seems particularly important. If, as I argue, “a free internet entails a world with more variance and more tail risk”, China’s response is effectively to ask, “but what if the internet is not free?” (This week’s example…) Clearly there are some stability benefits: China has found brutal and effective ways to curtail the “weirdness epidemic” that has overtaken the West. But it’s not clear whether this is a good trade-off in equilibrium. Can you capture the upside of higher variance if you can’t tolerate its unpredictability?
The political question in the West is rather different: which actors will lean into variance (no moral equivalence implied) and which will try to suppress it? I suspect that in retrospect we’ll view the Biden administration as the last gasp of “normality” rather than its resurgence - and, in the same vein, I wonder if (also in retrospect) variance vs “normality” will come to be seen as the critical dividing line of the 2020s.
I also suspect that the question of which institutions amplify/dampen variance will turn out to be counterintuitive. The Founders certainly designed the US Senate to be variance-dampening, but arguably by entrenching minority rule, it allows fringe - i.e., high variance - perspectives to thrive. And what about UBI (see TiB 15), which seems closer than ever before in the US? On the face of it, minimum income dampens the impact of volatility. But equally, one argument for UBI is that it unleashes creativity and entrepreneurship - i.e. variance. It’s going to be an interesting decade.
Quick links
BONUS: You can now sign up for my InterIntellect salon on “Power in the Middle Ages” next Tuesday. It should be fun (and will cover lots of TiB topics through a historical lens).
- Surveillance meets stupidity? Fascinating animated graphic showing the (cell phone) connections between Trump's "Stop the Steal" event and the storming of the Capitol.
- ... but verify! Which country's people are most trusted?
- Impassioned. Excellent new study on emotion and reason in political language (Twitter summary form, with charts)
- Fantasy Intellectual Teams. Interesting thought experiment from Arnold Kling. And a follow up. (Please let me know yours if you have ideas - bonus points for original suggestions)
- Experiments in sovereignty, Nevada edition. I would bet we have not heard the last of this.
What do you think?
Thanks for reading Thoughts in Between. If you liked it, I'd love you to share it on social media or forward this email to someone who might enjoy it too. Reviews of the podcast are also very welcome.
Feel free to email me or find me on Twitter if you have any feedback or questions.
Until next week,
Matt Clifford
PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.