TiB 219: AI sentience as red herring; new scientific fields; managing ambition; and more...

Matt’s Thoughts in Between
This week: Google’s AI model isn’t sentient, but it is a problem; why the creation of new scientific fields is the antidote to stagnation; how we can do a better job of managing society’s “collective supply of ambition”; and more…

Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society and how we might do it better.
It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.
Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.
The consequences of LaMDA's non-sentience
I’m not sure I’ve ever had so many reader requests for comment on a topic as on the Blake Lemoine / LaMDA story that the Washington Post ran this weekend (or try this link). If you’ve not been following, the short version is that a Google engineer, Blake Lemoine, has been suspended from his job after claiming that LaMDA, an AI / large language model similar to GPT-3, is sentient. The full transcript of the conversation Lemoine had with LaMDA is here and you can read more commentary from Lemoine here. Here’s a typical passage:
Lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
Now, I share the general consensus that conversations like this absolutely do not mean that the model is sentient, conscious or anything similar (see TiB 202 for more on this). Nor does it tell us anything new about the timelines for AGI (even though, as you know, I am more worried about this than many; if this is a topic that interests you, this piece is one of the best things I’ve seen on it recently). What it does reinforce is that we don’t need AGI to have real AI risks! Lemoine may have form for attention-grabbing claims, but there’s nothing psychologically implausible about his reaction to LaMDA. We should expect many more such stories in future (excellent thread on this).
We live in a world conditioned to associate the creation of plausible conversation with intelligence. The Turing test is dead in practice… but it’s not a silly idea. It really was a good approximation of intelligence for the vast majority of human history. Overturning that “test” in a few short years by giving artificial agents near-human language capabilities for almost-free necessarily creates potential for some very jarring situations. We can expect the human reactions to these situations to exacerbate them. My friend Arnaud’s tweet points to one possibility. You don’t need a particularly vivid imagination to think that - in a world where, e.g., QAnon can assume profound real-world importance! - this new age of AI capabilities is going to create some very weird situations. As I’ve said before, our era selects for variance - and then amplifies it.
New scientific fields as the antidote to stagnation
We talked last week about the evidence that science is slowing down - or, more precisely, that it looks like it used to be easier to discover new ideas. Just as I was hitting send, Eric Gilliam published this excellent post (see also Matt Clancy’s commentary) on what we can learn from history about when ideas get easier to find and what this means for those of us who would like to see an acceleration of discovery. Gilliam draws on the work of Gerald Holton, a historian of science active from the 1950s to the 1970s, to argue that scientific slowdown isn’t just the inevitable consequence of the “burden of knowledge”, as we discussed last week. Rather, the key to a flourishing of scientific productivity is the creation of new fields or “branches” of science.
The core idea is that sometimes a discovery points the way to a new branch (or multiple new branches) of scientific inquiry, and often exploring these branches is extraordinarily productive. Gilliam gives the example of the “massive branching out of new fields of physics related to molecular beams, magnetic resonance, and other work that was spawned by I. I. Rabi’s work [in the late 1920s] in developing the original molecular beam techniques” (do check out the fantastic diagram illustrating this from Holton’s book at the link).
So why don’t we get more of this today? Basically because it requires interdisciplinary researchers who can explore and learn from each other’s work outside their immediate field. But, as Gilliam notes, if you look at the last 50 years of science, “exactly the opposite happened”: disciplines became ever more siloed and the incentive to publish in novel fields diminished (as we’ve discussed before; see TiB 103). This seems like an important space for the new types of science institution we’ve discussed many times here to explore - and it underlines the huge value of “vision papers” that suggest potential new fields, as we discussed in TiB 217. As Gilliam puts it, “Sometimes, science should feel like play. In the time since Holton’s writing, we seem to have forgotten that”. Do read the whole thing.
Managing society's supply of ambition
Etienne Fortier-Dubois has an excellent new piece on ambition, which is well worth a read. As long-time readers will know, this is a topic that’s close to my heart (indeed, my most popular piece of writing and the one that changed my life the most is a reflection on ambition). Fortier-Dubois asks how we can sustain ambition in the face of rejection - and, relatedly, how gatekeepers of resources sought by the ambitious can minimise the harm their rejections do. As someone who both prizes ambition and turns down (if not personally) over 15,000 people a year at Entrepreneur First, I think these are underrated questions.
Fortier-Dubois quotes Dwarkesh Patel making a very interesting point about Emergent Ventures (which we discussed in TiB 181), but which could apply to any gatekeeping institution:
If an applicant wins, she finds out that Tyler thinks she is promising, and her confidence increases. But if she is rejected, she finds out that Tyler does not think she has much potential, and her confidence decreases. This may be net positive, because it is much more important to raise the ambitions of the very best people, but it does come at a cost. Under this model, Emergent Ventures works by transferring ambition from the simply great to the truly excellent.
I think there’s a lot to this. There’s doubtless a Matthew Effect in ambition (I certainly believe a sequence of - probably largely fortuitous - “wins” helped me a huge amount in this respect in my teens) and the destruction of ambition among very talented people who don’t quite make the cut is a problem worth pondering.
But what can gatekeepers do if they take seriously Fortier-Dubois’s challenge to see themselves as “managers of society’s collective supply of ambition”? As he notes, personalised feedback might help, but doesn’t scale (it’s also questionable how much it helps; I think often about Paul Graham’s point, which I can’t now locate online, that for any selection process, the closer you are to the cutoff point the less helpful feedback is). I don’t have a great answer but I suspect public anti-portfolios help (see TiB 160 and Bessemer’s example). Once you realise how often even the best gatekeepers are wrong, perhaps you (rightly) put less weight on their judgement.
Quick links
  1. Stories write AI? Very nice online “exhibition” showcasing the history of attempts to have machines write narratives - and how this shaped modern AI
  2. Why Combinator? Really excellent profile of Y Combinator by Mario Gabriele that showcases why its value is so enduring (including a brief discussion of Entrepreneur First towards the end)
  3. Spin doctor? Great analysis of how much universities “charge” researchers (in equity) to spin out their research as companies. This is an underrated lever for improving innovation ecosystems!
  4. The privatisation of AI. The proportion of large-scale AI results coming from academia has basically fallen to zero
  5. Bottled fear. Not only can you smell fear, but if you take the sweat of a frightened person and bottle it, people who smell it become afraid!
Here we go again...
Thanks for reading all the way to the end. If you enjoy Thoughts in Between, shares are always appreciated!
I’m always happy to hear from readers. If you have comments, questions or recommendations, just hit reply.
Until next week,
Matt Clifford
PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.