I’m not sure I’ve ever had so many reader requests for comment on a topic as on the Blake Lemoine / LaMDA story that the Washington Post ran this weekend (or try this link). If you’ve not been following, the short version is that a Google engineer, Blake Lemoine, has been suspended from his job after claiming that LaMDA, an AI / large language model similar to GPT-3, is sentient. The full transcript of the conversation Lemoine had with LaMDA is here, and you can read more commentary from Lemoine here. Here’s a typical passage:
Lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
Now, I share the general consensus that conversations like this absolutely do not mean that the model is sentient, conscious or anything similar (see TiB 202
for more on this). Nor does it tell us anything new about the timelines for AGI (even though, as you know, I am more worried about this than many; if this is a topic that interests you, this piece is one of the best things I’ve seen on it recently). What it does
reinforce is that we don’t need AGI to have real AI risks! Lemoine may have form for attention-grabbing claims, but there’s nothing psychologically implausible about his reaction to LaMDA. We should expect many more such stories in future (excellent thread on this).
We live in a world conditioned to associate the creation of plausible conversation with intelligence. The Turing test is dead in practice… but it’s not a silly idea. It really was a good approximation of intelligence for the vast majority of human history. Overturning that “test” in a few short years by giving artificial agents near-human language capabilities for almost-free necessarily creates potential for some very jarring situations. We can expect the human reactions to these situations to exacerbate them. My friend Arnaud’s tweet
points to one possibility. You don’t need a particularly vivid imagination to think that - in a world where, e.g., QAnon can assume profound real-world importance! - this new age of AI capabilities is going to create some very weird situations. As I’ve said before
, our era selects for variance - and then amplifies it.