Thoughts in Between

by Matt Clifford

TiB 215: Tyler Cowen on talent; more AI progress; EA as an idea machine; and more...

Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society and how we might do it better.

It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.

Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.

Tyler Cowen and Daniel Gross on talent

My guests on the podcast this week are Tyler Cowen and Daniel Gross, co-authors of a new and must-read book, Talent. Tyler is best known for his daily blogging at Marginal Revolution and as the founder of Emergent Ventures, a grant-making programme focused on ambitious young people. Daniel is a Silicon Valley-based entrepreneur and investor, perhaps best known as the founder of Cue, a search engine, and Pioneer, a global talent-focused startup tournament and investment programme. Both are among the most interesting thinkers I know.

All three of us are talent obsessives, so I've been excited about this conversation. I think you'll get a lot out of it if you're interested in the art and science of finding and evaluating great people. We talk a lot about why the market for talent is so inefficient, despite the strong incentives that exist to find undervalued people, and discuss the tactics that Tyler and Daniel think are most likely to discover overlooked individuals. (These are topics we've discussed many times here - see, e.g., TiB 100, 125, 163, 173, 181 and 187.)

We also discuss:

I do highly recommend the book. I’d like to think I knew a lot about this topic already, having spent the last decade finding and funding founders who went on to build companies worth ~$10bn, so I was surprised and delighted by how much I learned. Enjoy!

Effective Altruism and Idea Machines

During the conversation Tyler argues that Effective Altruism (EA) is a deeply underrated talent community (More here). We've talked a lot about EA here and I agree with Tyler that a disproportionate number of the most talented people I meet in my day job are engaged in or adjacent to the EA community. I expect this only to grow. Nadia Asparouhova (whose work on institutional innovation in science and philanthropy we covered in TiB 205 and 207) has an excellent new essay on EA as an exemplar of what she calls an Idea Machine - "a network of operators, thinkers, and funders, centered around an ideology, that’s designed to turn ideas into outcomes".

Nadia asks why there aren't more Idea Machines and, indeed, "why aren't there more effective altruisms?". I agree with lots in the essay, but I think Nadia underrates the extent to which EA is inherently pluralistic. There is no single EA "HQ" that determines which causes must be prioritised. There are, of course, influential individuals and organisations within EA, but it's striking how open-minded those actors tend to be. See these thoughtful tweets in response to Nadia's essay from the co-CEO of Open Philanthropy, one of the most important EA organisations. Or take this recent post by Will MacAskill, one of the co-founders of the Centre for Effective Altruism, which wrestles with how EA is changing as it becomes increasingly well funded.

For several years, much EA funding came from Dustin Moskovitz, one of the co-founders of Facebook, and his wife Cari Tuna via their backing of Open Philanthropy. More recently, Sam Bankman-Fried (SBF), the founder of crypto exchange FTX, has committed billions to EA causes via his Future Fund (See this NYT profile this week). As Nadia acknowledges, SBF's entry has arguably created a new version of EA - an Idea Machine in its own right. And I expect this to keep happening! As Will notes in his post, if EA's growth continues to be disproportionately among the young, tech-savvy and entrepreneurial, there will be more SBFs. I suspect we'll look back and say that this was a golden age for the creation of Idea Machines - but that a startling proportion were flavours of EA.

RELATED: Interesting set of thoughts (and replies) from Michael Nielsen; Michael is not an EA but is currently reading a lot of EA texts.

Yet more AI progress

AI lab DeepMind published a new paper and accompanying blog post this week on a "generalist agent" it calls Gato. By generalist they mean that the same model can perform a range of tasks as diverse as playing video games, generating text and controlling a robotic arm. This is potentially a big deal. Most impressive AI results in recent years, including GPT-3 and DALL-E (which we discussed in TiB 210), have been generated using models optimised for a single task. Building an AI that can deal well with many disparate domains is clearly a prerequisite for artificial general intelligence (AGI). Gato was pre-trained on each of the tasks it can perform, so it's not an AGI in the sense that it can just pick up a new skill cold, but it does seem a big step in that direction.
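To make the idea concrete, here's a minimal sketch - my own illustration, not DeepMind's code - of what "one model, many tasks" means in practice: every modality is serialised into a single shared token vocabulary, so the same parameters process a sentence, a game frame or a robot's joint angles. All the names below (GeneralistAgent, tokenise_text and so on) are hypothetical, and the "model" is a deliberately trivial stand-in for Gato's actual Transformer:

```python
# Illustrative sketch of the "generalist agent" idea: serialise every
# task into one shared discrete vocabulary and push it through a single
# set of weights. The model here is a toy frequency table, not a
# Transformer, so the sketch runs with no ML libraries at all.

from typing import List

Token = int
VOCAB_SIZE = 1024  # assumed size of the shared vocabulary


def tokenise_text(text: str) -> List[Token]:
    # Map characters into the shared vocabulary (a stand-in for a
    # real subword tokeniser).
    return [ord(c) % VOCAB_SIZE for c in text]


def tokenise_continuous(values: List[float]) -> List[Token]:
    # Discretise continuous observations (e.g. robot joint angles in
    # [-1, 1]) into the same vocabulary, roughly as Gato does via binning.
    return [int((v + 1.0) / 2.0 * (VOCAB_SIZE - 1)) for v in values]


class GeneralistAgent:
    """One model, one token stream, many tasks."""

    def __init__(self) -> None:
        # A single set of "weights" shared across all tasks.
        self.counts = [1] * VOCAB_SIZE

    def observe(self, tokens: List[Token]) -> None:
        for t in tokens:
            self.counts[t] += 1

    def act(self) -> Token:
        # Predict the next token; in Gato this is an autoregressive
        # Transformer decoding step, mapped back into text or actions.
        return max(range(VOCAB_SIZE), key=lambda t: self.counts[t])


agent = GeneralistAgent()
agent.observe(tokenise_text("move the arm left"))     # language task
agent.observe(tokenise_continuous([0.1, -0.4, 0.9]))  # robotics task
print(agent.act())  # the same parameters emit the next token for any task
```

The design choice doing the work is the shared vocabulary: once everything is a token sequence, adding a new task is just adding new training data, which is part of why scale enthusiasts find Gato so suggestive.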

Two points are worth emphasising here. First, this is (yet) another example of what's been called the "blessings of scale" in machine learning: that is, we have not yet found the limitations of throwing bigger models and more compute at problems (see TiB 152, 128 and 208 for more on this; see also Richard Sutton on "The Bitter Lesson"). This is perhaps the most bullish case for Gato as an important milestone towards AGI; as Gwern puts it here:

I continue to be shocked in my gut that ... something like Gato can be as straightforward as 'just train a 1.2b-param Transformer on half a thousand different tasks, homes, nbd' and it works exactly like you'd think

Second, I continue to find it odd how little mainstream coverage there is of AI progress and AI safety issues in a period when things seem to be moving very fast (another example). Of course, there is a smart and credible crowd of people who are sceptical that models like DALL-E and Gato tell us anything about AGI. See this Gary Marcus piece for a good example, or this tweet from a DeepMind researcher. But it seems to me you'd have to be almost certain these results mean nothing for them to warrant so little attention (and even very smart people's AI predictions often turn out to be wrong). Four weeks ago I wrote that this Metaculus prediction for the advent of AGI had moved forward eight years in two weeks, to 2035. Today it stands at 2028.

(To join all the dots, this is why AI safety is such a big EA cause area - the replies are good too)

Quick links

  1. Crash dummies? Interesting data on what's happening to private tech valuations. And public ones(!). And the best explanation I've read.
  2. Pale blue dot. Stunning time lapse video of the earth rising over the moon.
  3. Men will literally... Pretty amazing results on the long-term value of cognitive behavioural therapy in reducing violence.
  4. Music translation. People are really good at guessing the meaning of music from cultures and in languages they don't understand (includes a quiz...)
  5. Brexit unintended consequences? Brexit seems to have made the British public much more pro-immigration (actually Dom Cummings did intend this, but I doubt Nigel Farage did)

Thank you, etc

Thanks for reading Thoughts in Between. Please do share it; it makes my day.

As always, feel free to reply if you have comments, questions or suggestions.

Until next week,

Matt Clifford

PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.