I talked
a few weeks ago about
OpenAI’s new natural language generation model,
GPT2 (this was the model dubbed “too dangerous to release”). My colleague
Ben this week shared
this interesting short essay on GPT2 and its implications by Sarah Constantin. Constantin notes that GPT2’s output is just good enough to be mistaken for human writing if you’re not concentrating - but it breaks down pretty quickly under scrutiny.
This matters for two reasons. First, it points to a useful framework for thinking about the current state of machine learning: our models are now good at what would be “effortless” pattern recognition for humans, but not yet great at anything “effortful” (Constantin has
another excellent piece on this distinction).
Second, very often today we are
not, in fact, concentrating. This is arguably what makes GPT2 dangerous: it can produce content that might not stand up to scrutiny, but so cheaply and scalably that it accelerates the spread of misinformation (I’m reminded of Benedict Evans’ analogy of machine learning as being
like having a million interns). Constantin is more optimistic than I am that living in such a world will improve our powers of concentration. But what if the opposite is true? What if,
as Venkatesh Rao suggests, our powers are degrading just as automation makes them more important?