Thoughts in Between

by Matt Clifford

TiB 167: AI safety; China's missing founders; how to build AI companies; and more...

Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society and how we might do it better.

It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.

Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.

Anthropic, explainability and AI safety

Anthropic, a new AI research company, launched this week with a stellar set of founders and $124m in funding. There are good write-ups in the FT and, particularly, in Vox's Future Perfect newsletter. Most of the founding team previously worked at OpenAI, an AI lab we've discussed many times in TiB. It had almost become conventional wisdom that it was futile for startups to compete directly with DeepMind and OpenAI on artificial general intelligence research, so it's fascinating to see a new player emerge with comparably big ambitions.

It's particularly interesting that Anthropic is explicitly an AI safety company - that is, it is interested in how to ensure that the actions and consequences of powerful AI models are aligned with human intentions and values. Given that, it's not surprising that several of Anthropic's named investors, such as Dustin Moskovitz and Jaan Tallinn, are associated with Effective Altruism, a movement that has put a lot of emphasis on AI safety as a potential source of existential risk.

According to Kelsey Piper in Vox, Anthropic's starting point will be "building tools that [external] researchers can use to understand their programs". This approach to AI safety seems promising in a world where scaling models far beyond human legibility has played such a big part in recent AI breakthroughs like GPT-3. As the Anthropic team played a key role in this and other landmark models at OpenAI, they have a deep understanding of the challenge. Their success will be an important test for the whole approach of "safety through explainability". It's one to watch closely.

The world misses out on great founders, China edition

A couple of weeks ago in TiB 165, we discussed this paper, which suggests that there is no shortage of people who have what it takes to become great entrepreneurs: it's just that most of these people are pulled away from the startup sector by conventional high wage jobs. This week a new paper presents evidence from China that points to the same conclusion, thanks to a fascinating dataset.

The paper links university admissions data for 1.8m individuals, including various measures of talent, with a dataset of new company creation. This allows the authors to examine both (a) the relationship between talent and the propensity to start a company and (b) the relationship between talent and the success of the companies started. They find that more talented people are less likely to start a company, but more likely to succeed when they do.

This is consistent with the hypothesis that talent is somewhat fungible across entrepreneurial and conventional career paths, rather than entrepreneurial ability being a highly distinctive set of attributes. It's also evidence for the idea that the most important lever for improving the quality of startup ecosystems is to increase the relative attractiveness of starting a company. As we discussed before, this happens organically during recessions, but - my obvious bias granted - it suggests that systematic efforts to increase the prestige of entrepreneurship are highly valuable too.

TiB podcast: Ash Fontana on the AI First Company

This week's TiB podcast episode is a conversation with Ash Fontana, a venture capitalist at Zetta Venture Partners and the author of a new and excellent book, The AI First Company. I've known Ash for a long time; he was the first external investor in Tractable, one of the most successful AI companies we've funded at Entrepreneur First. I highly recommend his book: it's the best thing I've read on how and why machine learning-driven companies are different and how entrepreneurs can build successful companies in the space.

One of the things I like about the book is that it's both theoretically rigorous and deeply practical. For example, it walks the reader through what sorts of experiments to run to figure out if an apparent "data learning effect" (Ash's term for the automatic compounding of informational advantage that's critical for AI companies) is real or illusory. That said, perhaps my favourite part of our conversation is towards the end, when we step back and talk about what advances in AI have taught us about human and animal intelligence and the "space of possible minds" (see TiB 70 and this paper from 1984 (!) for more on this).

We also discuss much more besides - do listen to the full episode.
Quick links

  1. Don't lean on me? Fascinating global survey data on "Do you have people you can count on?" and its relationship to income.
  2. Flashcards as security hole. Terrifying Bellingcat report on the nuclear secrets exposed through online flashcard apps (!).
  3. Bees get jet lag. Wonderful short video that does what it says on the tin.
  4. Left aligned. Striking graphic on the way education as a predictor of voting behaviour has shifted since 1970.
  5. Not quite Atlantis. Beautiful photo thread on towns semi-submerged by flooding for hydroelectric dams.

The bit at the end

Thanks for reading Thoughts in Between.

TiB is free, but it’s great to grow the readership, so please do forward this to a friend who might like it or share the link on Twitter, etc.

Questions, comments and recommendations always welcome - just hit reply.

Until next week,

Matt Clifford

PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.