Thoughts in Between

by Matt Clifford

Matt's Thoughts In Between - Issue #60


Gut instinct and venture capital

Should venture capitalists use their "gut" when evaluating founders? Jerry Neumann says no and points to this fascinating dissertation by Geoff Smart, which suggests that VCs who invest more time and use an explicit structure when assessing founders do better.

Smart introduces a memorable typology that categorises VCs based on how they evaluate founders. "Airline Captain"-type VCs are highly systematic, use multiple methods and believe that accurate human capital assessment is possible. "Art Critic" types share this belief but collect less data and rely on "gut". Their assessments, Smart suggests, are less accurate than those of their "Airline Captain" counterparts (there are five other "types" too).

I'd usually think that was the end of the argument, as Neumann is perhaps the most intellectually rigorous VC out there (his post on power laws in venture is a masterpiece). But then academic Laura Huang jumped into the thread and shared two academic papers - here and here - that suggest that angel investors who use their "gut" are more likely to identify "home run" investments. Can both be right? Perhaps both Airline Captains and (experienced) Art Critics are like well-trained neural networks and differ more in their explicability than in the accuracy of their predictions?

What can John Rawls tell us about Donald Trump?

There's been endless debate about whether the rise of Donald Trump represents the failure of liberalism (see previous discussion here). This week I came across this interesting essay by Samuel Scheffler on how we might analyse the Trump phenomenon using the ideas of the 20th century's most important philosopher of liberalism, John Rawls.

Rawls' seminal work, A Theory of Justice, is a critique of pure utilitarianism. Rawls argued that utilitarianism failed to take seriously the separateness of people and ignored the idea of reciprocity that political communities require. He argued for a more egalitarian conception of justice, core to which is an idea he called the Difference Principle. The idea is that social inequalities are justifiable only to the extent that they benefit the worst off (e.g. you might tolerate entrepreneurs becoming very wealthy if this raises living standards for the poorest on net).

What's this got to do with Donald Trump? Scheffler argues that in the 30 years before Trump, the economy embodied the opposite of the Difference Principle: inequality benefited the best off most, while median incomes stagnated. Many policies increased GDP in aggregate - trade, automation, etc - but created winners and losers (see, e.g., the Elephant Chart). Without reciprocity, there's no incentive to uphold the system. I'm not wholly convinced on the politics - Trump voters were not economic losers - but the overall argument, and Rawls more broadly, is worth pondering.

What can AI teach us about God?

I was at a conference last week at which one of the ideas explored was the relationship between technology and the future of religion. Below are some of the ideas I had before and after the discussion.

One of the most fascinating themes in the last few years of AI research is what we've learned about the "space of possible minds". AlphaGo may be a long way from general intelligence, but there was something extraordinary in the way it allowed Go experts to experience a superior, "beautiful", inexplicable non-human intelligence for the first time.

The decline of religion among intellectuals has been accompanied by the rise of humanism - the idea that there is something unique or even sacred about humans (Yuval Harari explores this in Sapiens). I would speculate that as AI progress starts to belie the uniqueness of human minds, belief in God may recover intellectual respectability. AI makes it easy to imagine simulating complex agents that might eventually wonder about their creator and even be able to derive some of the rules that governed their evolution - and has made some wonder if we might in fact be such agents. This "simulation hypothesis" has caused some engineers to rediscover God.

I'm also intrigued by the links between explicability in AI and the question of textual literalism in religion. If even simple machine learning models are hard for humans to understand - a sort of translation problem between kinds of minds - how much harder would it be for a God to communicate a text that a human could understand correctly? It strikes me that even if you are fairly certain that a particular religious text is the word of God, you should worry a lot about your ability to comprehend it.

Quick Links

  1. I, for one, etc... 10 years of robotics progress in side-by-side video.
  2. "The very rich are not like you and me". Chart on how family offices allocate their assets.
  3. I, too, am a contrarian. The brilliant Alex Danco on Silicon Valley's contrarianism obsession.
  4. A seat at the table. Wonderful thread on why meeting room layout matters if you're raising money.
  5. You win or you die. Great graphic on who Game of Thrones fans want to "win".

Your feedback

As always, thanks for reading. If you enjoy Thoughts in Between, please do forward it to a friend who might like it too. And feel free to hit Reply if you have any comments or feedback.

Until next week,