Thoughts in Between
TiB 192: AI autocracy; Why you love fake news; Carbon removal tech; and more...
Welcome new readers! Thoughts in Between is a newsletter (mainly) about how technology is changing politics, culture and society and how we might do it better.
It goes out every Tuesday to thousands of entrepreneurs, investors, policy makers, politicians and others. It’s free.
Forwarded this email? Subscribe here. Enjoy this email? Forward it to a friend.
Does AI entrench authoritarian regimes?
We’ve talked before about whether AI is a pro-authoritarian technology (see TiB 32). An interesting new paper examines the question by looking at the relationship between government AI procurement and autocracy in China. The authors find that these are mutually reinforcing: local governments that experience more unrest procure more facial recognition technology; and this procurement is in turn associated with lower levels of subsequent protest. Moreover, firms that benefit from these purchases seem to become more innovative, in both their public- and private-sector work.
This is interesting for a number of reasons. First, it suggests that facial recognition is already a significant and effective tool for authoritarian governments, even though (presumably) the technology is immature relative to what we might expect five or ten years from now. Second, the authors argue that their findings undermine a common argument that autocracies can’t innovate because such regimes fear - and so stifle - private sector-led technological progress. This paper suggests a mechanism by which autocracies have both incentive and capability to stimulate innovation.
I find this compelling, but in the case of AI specifically I think it’s worth tempering with a read of Henry Farrell’s superb essay “Seeing Like a Finite State Machine” (which we discussed in TiB 93). In short, Farrell argues that machine learning models tend to amplify the deficiencies and biases of the datasets on which they’re trained - and in autocracies self-censorship and lack of democratic error correction mean that those datasets can become wildly detached from reality (though perhaps it’s too optimistic to think democratic societies don’t have the same problem?). Perhaps China and others can overcome this challenge, but the relationship between technology and power seems far from predictable.
Fake news exists because you prefer it
Why is there so much fake news? We’ve talked about this in the past - including, in TiB 81, the idea that some people just want to watch the world burn. A new paper by Michael Thaler, though, suggests an interesting hypothesis: if consumers of news want their news sources to be truthful, but also believe that information that contradicts their political beliefs is less true, producers of news are incentivised to disseminate fake news precisely because they want to be seen as honest! I will leave examples as an exercise for the reader, but I’m sure you can think of some, wherever you stand politically.
Thaler designs two experiments to evidence this. In the first, he shows that people are significantly more likely to communicate messages they know to be false when (a) they know that the receiver’s political party has an ideological preference for the false narrative and (b) they are being incentivised based on the receiver’s evaluation of their truthfulness. In the second, he shows that people who are incentivised to be perceived as truthful will pay to find out about their receiver’s political party when sending messages on political topics, but not when sharing news on neutral topics. In other words, disseminating fake news is (sadly) rational if audiences prefer media companies they perceive as truthful(!).
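The incentive logic here can be reduced to a toy payoff comparison. This is my own illustrative sketch with made-up credibility numbers, not Thaler's actual experimental design: the only assumption is that partisan receivers rate congenial claims as more credible than uncongenial ones, and that senders are paid on perceived (not actual) truthfulness.

```python
# Toy model of the fake-news incentive: a hypothetical sketch, not Thaler's setup.

def perceived_truth(message_favors_receiver_party: bool) -> float:
    """Receiver's subjective probability that a message is true.
    Assumption (illustrative numbers): congenial claims seem more credible."""
    return 0.8 if message_favors_receiver_party else 0.4

def sender_payoff(message_favors_receiver_party: bool) -> float:
    """Sender is rewarded for being *perceived* as truthful,
    so payoff depends only on perceived, not actual, truthfulness."""
    return perceived_truth(message_favors_receiver_party)

# Suppose the true story happens to contradict the receiver's party:
true_but_uncongenial = sender_payoff(message_favors_receiver_party=False)
false_but_congenial = sender_payoff(message_favors_receiver_party=True)

# The congenial falsehood earns a higher truthfulness rating than the
# uncongenial truth - so a payoff-maximising sender reports the falsehood.
assert false_but_congenial > true_but_uncongenial
```

The same structure explains the second experiment: knowing the receiver's party only changes the sender's optimal message on political topics, so it is only there that the information is worth paying for.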
This is obviously bad news. And it strongly suggests that it’s too simplistic to blame social - or even traditional - media companies for fake news. In a world where many consumers actually prefer fake news, “bad” actors who are willing to supply it will outcompete “good” actors who are not, which is not a great equilibrium. As Thaler says, one avenue to explore is how to change the incentives of the suppliers of news, but this seems tricky. Another is to find ways to reduce the role of motivated reasoning in how consumers evaluate the quality of news. Also tricky! Regulation clearly has a role to play, but this paper suggests there are no easy answers.
Why carbon removal is Supercritical
This week's TiB podcast episode is a conversation with Michelle You. Michelle is co-founder and CEO of Supercritical, a startup that helps companies achieve Net Zero through carbon removal. Michelle is a serial entrepreneur (she was previously one of the founders of Songkick) and in this conversation we dive into why she decided to focus on carbon removal for her next company.
In TiB 185 we looked at this Nature piece that reviewed Microsoft's efforts to achieve true Net Zero so far and some of the challenges they've encountered - including the low quality of many claimed carbon offsets. Observing this problem was one of the jumping-off points for Michelle in starting Supercritical, so we talk about why we should, and how we can, stimulate the entry of new carbon removal technologies into the market.
Among other things we talk about:
- Michelle's entrepreneurial journey
- How the Supercritical model works
- Which carbon removal approaches seem most promising today
- Whether working on Supercritical has made Michelle more optimistic or pessimistic on climate change
- How rich is "rich"? Interesting (UK) survey data on what income levels people perceive as wealthy.
- What gives life meaning? Answers from various countries. (The tweeter picks out the UK, but surely the Spain / Korea / Taiwan anomalies are most interesting). See also this data on who says work gives meaning.
- "Does the mafia hire good accountants?" Surely a contender for academic paper title of the year? (Also, the answer appears to be "Yes")
- Look who's back. Scotland's forest cover has roughly trebled in the last 100 years. There's a similar pattern across Europe. More here.
- If you tolerate this... The more intelligent you are, the more hostile you are to political outgroups (on average)
Until next week...
Thanks for reading Thoughts in Between. As usual, if you like this, I'd love you to forward to a friend or share on Twitter.
And if you have comments, questions or recommendations, drop me an email. I'm always happy to hear from you.
Until next week,
PS: Lots of newsletters get stuck in Gmail’s Promotions tab. If you find it in there, please help train the algorithm by dragging it to Primary. It makes a big difference.