Thoughts in Between
TiB 222: AI chameleons; T-rex and Amazon; Scouts vs Bat Signals; and more...
Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society and how we might do it better.
It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.
Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.
Parrots, chameleons and AI advances
We've talked a lot here about large, pre-trained machine learning models (sometimes called "foundation models"; see TiB 179) like GPT-3 or DALL-E. While I see such models as one of the most important milestones so far in the history of AI, they're not without controversy. Perhaps the most influential critique is this paper by Emily Bender et al., which memorably calls them "stochastic parrots". That is, despite their impressive outputs, they're not doing anything that we could call "thinking" or "understanding"; they're just probabilistically mimicking words and phrases from the samples in their training set.
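If you want the "stochastic parrot" idea in code, here's a deliberately tiny sketch: a language model assigns probabilities to possible next tokens given a context and samples one. The vocabulary and probabilities below are invented for illustration; real models learn distributions over tens of thousands of tokens from their training data.

```python
# Toy "stochastic parrot": sample the next token from a learned distribution.
# The distributions here are made up; a real model learns them from training data.
import random

next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "spoke": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
}

def sample_next(context, temperature=1.0):
    """Sample a next token given a two-word context."""
    probs = next_token_probs[context]
    # Lower temperature sharpens the distribution, making output more repetitive.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

print(sample_next(("the", "cat")))  # e.g. "sat"
```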
Raphael Milliere has a very good new essay that sketches out a middle path - taking the "stochastic parrots" critique seriously, while also recognising the real power and importance of foundation models (see also this Twitter thread for a summary). Milliere suggests that these models are less stochastic parrots than stochastic chameleons: "Parrots repeat canned phrases; chameleons seamlessly blend in new environments... [this] is what makes them so impressive—and potentially harmful". He (rightly, I think) argues that we should spend more time understanding the actual capacities of AI models and less time trying to map these capabilities onto poorly defined labels like "understanding" and "intelligence".
Nevertheless, I worry that "stochastic chameleons" undersells the power of these models. Milliere hints at the reason ("the way large language models can solve a math problem involves a seemingly non-trivial capacity to manipulate the parameters of the input"), but even in the last few days new evidence suggests something important is going on here. A large language model from Google solves a third of MIT undergrad maths problems (good thread!) with 50 percent accuracy. This is much better than most forecasters predicted a year ago (also a good thread!). It's early days (I hope), but language does seem to be AI's superpower and many apparently disparate capabilities seem to be "solved" via language. We're going to need more "language-aligned datasets" (thanks Ian for the link), but don't underestimate the rate of progress here.
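"Manipulating the parameters of the input" mostly means prompting: wrap a new problem in a few worked examples and let the model continue the text. A rough sketch of the pattern, purely illustrative: the example questions are mine, and `generate` is a stand-in for a call to whatever model you have access to, not a real API.

```python
# A sketch of few-shot prompting (illustrative only). `generate` is a placeholder
# for a call to whichever large language model you have access to, not a real API.
FEW_SHOT_PROMPT = """\
Q: What is the derivative of x**3 + 2*x with respect to x?
A: Differentiating term by term gives 3*x**2 + 2. The answer is 3*x**2 + 2.

Q: Evaluate the integral of 2*x from 0 to 3.
A: The antiderivative is x**2; evaluating from 0 to 3 gives 9. The answer is 9.

Q: {question}
A:"""

def solve(question: str, generate) -> str:
    """Embed a new question in worked examples and return the model's continuation."""
    return generate(FEW_SHOT_PROMPT.format(question=question))
```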
Scouts, bat signals and philanthropy
Back in TiB 204 I said that FTX Foundation's Future Fund (FF) was the most exciting philanthropic initiative I'd seen for a long time. They've just published their first major update and it's worth a read, to see both what they're funding and how they're giving money away. Institutional innovation is (I think) rare in philanthropy (we talked about this in TiB 207 and on the TiB podcast with Nadia Asparouhova) and it's great to see people approach it rigorously and seriously.
Perhaps the most interesting experiment FF has been running is with "regranting". FF recruits regrantors with aligned values and great networks; allocates them a budget; and then with very high probability awards the grants they recommend. This seems to be working well, especially compared to FF's more traditional "open call" for applications:
"[Regranting] is the model we’re currently most excited about. We think regrantors are making pretty reasonable grants [and] many of the grants are opportunities we wouldn’t otherwise have known about"
One way to think about regranting is that it's a little like the venture capital "scout" model applied to philanthropy, whereas the open call is more like what Marc Andreessen calls the "bat signal" model in this conversation (i.e. build a very distinctive brand that acts as a magnet to the right kind of people).
I would have said that FF did have such a brand, so it's interesting that regranting is working better, for now. In their book Talent (and also on the TiB podcast), Tyler Cowen and Daniel Gross suggest that scouting works best when the talents you're trying to recruit may not be aware that they excel on the dimension that you care about - or even that an opportunity of this sort exists at all. There's some evidence of this in the FF writeup (see the story of Braden Lynch in the report), though perhaps this will change as FF becomes more established. In any case, FF seems an unambiguously good thing for the world and I'm excited to see what comes next.
Barbell distributions in startups, dinosaurs and more
I've been thinking a lot about scale recently and happened to stumble upon this excellent post by my friend Rohit (see this episode of the TiB podcast), which I somehow missed when he published it in November last year. The title is "Meditations on barbells" and the core theme is the "hollowing out of the middle". That is, a wide variety of ecosystems seem to reach an equilibrium where they have a small number of very large "players" and a large number of small "players", but few medium-sized ones. Here I am using "ecosystem" very broadly. Rohit's examples include retailers, public stocks and, err, dinosaurs! Do read the whole thing.
Why does this happen? To paraphrase Rohit in a way he might not agree with: as ecosystems mature, the dimensions on which players compete become better defined and increasingly become dimensions on which scale is an advantage. It's clear how this benefits big players. Think of Amazon competing on cost and delivery speed (or Tyrannosaurus rex on... well, read the post...). Good luck going head to head against them on these! But why does this leave space for smaller players? I think the answer is something like Paul Graham's exhortation to startups: do things that don't scale. In other words, if you can't compete on scale, do the opposite: find things that you can do because you're small and win on those.
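This isn't Rohit's model, just a toy simulation I've made up to show how that mechanism can produce a barbell: if survival depends on your best dimension, and scale helps on one dimension while smallness helps on the other, the middle gets culled first.

```python
# Toy barbell simulation (all numbers invented). Big players win on scale
# dimensions, small players win on "things that don't scale"; mid-sized
# players are mediocre at both and get culled first.
import random

def fitness(size: float) -> float:
    scale_edge = size          # e.g. lower unit costs, faster delivery
    niche_edge = 1.0 - size    # e.g. service, specificity, speed of iteration
    return max(scale_edge, niche_edge)  # you survive on your best dimension

players = [random.random() for _ in range(10_000)]  # sizes spread across [0, 1]

# Each "generation", the least fit ~10% of players exit the ecosystem.
for _ in range(20):
    cutoff = sorted(fitness(s) for s in players)[len(players) // 10]
    players = [s for s in players if fitness(s) >= cutoff]

small = sum(s < 0.25 for s in players)
large = sum(s > 0.75 for s in players)
print(f"{len(players)} survivors: {small} small, {large} large, "
      f"{len(players) - small - large} mid-sized")
```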
Of course, for startups the point is that this is an interim strategy. You could argue, I think, that this is a loose reformulation of the Innovator's Dilemma. Small players have to identify new dimensions on which to compete - and they hope that over time ecosystem changes mean that these dimensions become the dominant ones. It's interesting to apply the "hollowing out / compete on new dimensions" framework to non-commercial, non-ecological ecosystems. I think it fits the market for public intellectuals quite well (this is left as an exercise for the reader) and explains why Twitter is so culturally important. I suspect it applies to science funding bodies too, which is why we're in a golden age for new (small) ones (see TiB 203). Once you start to think about barbells, you see them everywhere.
Quick links
- My my my, it's a beautiful universe. Stunning visualisation of the universe at log scale.
- You libgoblin! Amusing / fascinating chart of the relative popularity of "compound curse words" on Reddit (obvious content warnings apply)
- "You can't teach height"... as a friend of mine likes to say - but it does pay dividends. Pretty amazing datapoint.
- How to build a chess engine. Does what it says on the tin; very interesting read.
- Rare TiB impact! Inspired by TiB 213, Sam Dumitriu has an excellent piece on how to improve the UK's High Potential Individual visa.
Here we go again...
If you share TiB with a friend this week we will almost certainly reach a special reader number milestone... so please do. I'll be very grateful.
And, as always, feel free to reply if you have comments, feedback or suggestions.
Until next week,
Matt Clifford
PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.