OpenAI CEO Sam Altman has a new piece on how to harness the power of AI for everyone. In short, he advocates something like “AOC meets Henry George”: a wealth tax charged in equity, plus a land value tax, to fund a universal basic income (UBI). I don’t think there is anything intellectually radical here - as we discussed back in TiB 15 and TiB 16, these ideas have been around for a long time - but it’s fascinating as a showcase of how mainstream these previously fringe ideas have become among tech elites.
What are the dangers of this approach becoming the default way we deal with AI as a public policy issue? I think the best critique comes from this 2019 essay by Vi Hart*. I confess I didn’t expect to be persuaded by it, as I tend to be reflexively anti-anti-tech and anti-anti-capitalism. But it’s neither of those: it’s technically sophisticated, deeply reasoned and radical in its implications. It’s very long (probably needs an hour), but I think for at least some of you it will be the best thing you read this month, as it was for me.
The core argument is that “AI + UBI” as a policy prescription is seductive but profoundly ideological. It cedes too much power to the makers of AI and places too little value on the ongoing importance of human labour. The history of labour, Hart argues, shows that valuable work can appear low-value in a market economy under conditions of monopsony - that is, when there is effectively a single buyer for it - and that is precisely the situation for the work humans do to make AI possible. Hart points to alternatives to UBI, such as new institutions like the “Mediators of Individual Data” proposed by Glen Weyl and Jaron Lanier (which in turn remind me of some of Albert Wenger’s ideas in World After Capital). There are no easy answers here, but I am persuaded of the importance of keeping open a plurality of solutions to the challenges (and benefits) that AI will doubtless bring.
*Incidentally, Hart used to be a researcher at Y Combinator during the period when Altman was its President and when, I believe, YC Research was looking into UBI.